\section{Problem definition} \label{Introd} Ground-state calculations of inhomogeneous many-electron systems generally involve solving the Poisson equation for the averaged Coulomb potential $u(\mathbf{r})$ at a given spatial electron density $n(\mathbf{r})$, together with the Schr\"{o}dinger equation for the single-particle orbitals $\Psi_{E}(\mathbf{r})$ in an effective potential $u_{\mathrm{eff}}$ that accounts, within some approximation, for the difference between $u(\mathbf{r})$ and the microscopic local field. In density functional theory the corresponding set of Kohn-Sham equations for the spin-unpolarized electron gas has the form (in atomic units $|e|=m=\hbar=1$)
\begin{equation}
-\frac{1}{2}\nabla^{2}\Psi_{E}(\mathbf{r})+u_{\mathrm{eff}}(\mathbf{r})\Psi_{E}(\mathbf{r})=E\Psi_{E}(\mathbf{r})\label{Schroed}
\end{equation}
\begin{equation}
u_{\mathrm{eff}}(\mathbf{r})=u(\mathbf{r})+u_{\mathrm{xc}}(\mathbf{r})\label{Eff-pot}
\end{equation}
\begin{equation}
\nabla^{2}u(\mathbf{r})=4\pi(N_{\mathrm{+}}(\mathbf{r})-n(\mathbf{r}))\label{Poisson-gen}
\end{equation}
\begin{equation}
n(\mathbf{r})=2\sum_{E\leqslant E_{F}}|\Psi_{E}|^{2}(\mathbf{r})\label{n(r)-gen}
\end{equation}
Here $E$ is the energy eigenvalue of the single-particle Hamiltonian, $E_{F}$ is the Fermi energy of the electrons, $N_{\mathrm{+}}$ is the density of the positive background, $u$ is the Coulomb potential energy of the electron, and $u_{\mathrm{xc}}$ is the exchange-correlation potential energy, taken in the local-density approximation $u_{\mathrm{xc}}(\mathbf{r})\equiv u_{\mathrm{xc}}\left[ n(\mathbf{r})\right] $. Because of the nonlinearity and complexity of this set of equations, one relies on an iterative solution procedure consisting of successive improvements of $u(\mathbf{r})$ and $n(\mathbf{r})$ until self-consistency is attained.
Although this approach has been in use for a long time and has apparently been applied in many articles, two serious problems accompany its application, both rooted in the very foundations of the iterative method for infinitely extended many-electron systems. The first is related to the Poisson equation and lies in the possible incompatibility of the boundary conditions for $u(\mathbf{r})$ with the distribution $n(\mathbf{r})$ updated by means of the Schr\"{o}dinger-equation solutions and substituted into the right-hand side of the Poisson equation (\ref{Poisson-gen}). Strictly speaking, this means the iteration process cannot be continued. Several empirical techniques have been suggested to manage this difficulty (see, e.g., \cite{Liebsch97}), but their shortcomings are either a lack of iteration convergence with a retreat to some kind of variational solution instead (see \cite{L-K70PRB4555}, Appendix B), the appearance of solution instability \cite{Ferr-Sm85PRB3427}, or a change in the positive charge distribution \cite{Liebsch97} that violates the conditions under which the self-consistent field equations are derived as variational equations. In the case of extended but finite systems, the effect may result in a non-physical growth of the electric field far away from the inhomogeneity region \cite{Frens90RMP745}. In all cases, whether the obtained solution agrees with the true one remains an open question. The approach suggested here to deal with this problem is described in Section \ref{Poiss-iter sec} for systems with the Fermi level given at infinity, and in Section \ref{finite number sec} for systems with a given number of electrons. The second problem results from the existence of a continuous spectrum of Hamiltonian eigenvalues for unbounded systems. One needs a definition of the Hilbert space, with the continuous-spectrum eigenfunctions as its elements, that is convenient in numerical applications.
The Hilbert space of singular self-adjoint operators is usually constructed mathematically via a limiting transition to a Hamiltonian with continuous spectrum through an unlimited increase of the system size (see, e.g., \cite{Lev-S70}). However, this route is impractical for inhomogeneous systems. It is suggested here to introduce the limiting transition into the definition of the scalar product specifying the Hilbert space. The adopted form of the scalar product ensures the self-adjointness of the Hamiltonian operator and, accordingly, the orthogonality of all eigenfunctions corresponding to different eigenvalues. In addition, it allows one to normalize them effectively to a delta function and to prove the orthogonality of the ``right'' and ``left'' current-carrying eigenfunctions belonging to twofold-degenerate eigenvalues. This is particularly essential for the problem of tunneling through a self-consistent barrier, considered as an example in Section \ref{Schred-Hilb Sec}. This approach is also applied to the Bloch wave functions of periodic solids in Section \ref{finite number sec}.
\section{Poisson equation and iteration algorithm} \label{Poiss-iter sec} Let us illustrate the neutrality problem with the one-dimensional Poisson equation on the semi-axis $z\in\lbrack0,\infty)$:
\begin{equation}
u^{\prime\prime}(z)=4\pi\rho(z),\,\,\rho=N_{\mathrm{+}}(z)-n(z)\label{Poiss-1D}
\end{equation}
Simple integration results in
\begin{equation}
u^{\prime}(z)=u^{\prime}(0)+4\pi\int_{0}^{z}dz_{1}\rho(z_{1}),\label{1D u' solution}
\end{equation}
\begin{equation}
u(z)=u(0)+zu^{\prime}(z)-4\pi\int_{0}^{z}dz_{1}z_{1}\rho(z_{1}).\label{1D u solution}
\end{equation}
The finiteness condition $u(\infty)<\infty$ requires $\lim_{z\rightarrow\infty}zu^{\prime}(z)=0$ and
\begin{equation}
4\pi\int_{0}^{\infty}dz_{1}\rho(z_{1})=-u^{\prime}(0),\label{charge cond}
\end{equation}
\begin{equation}
4\pi\int_{0}^{\infty}dz_{1}z_{1}\rho(z_{1})=u(0)-u(\infty)\label{dipole cond}
\end{equation}
Assuming an energy scale such that $u(\infty)=0$, it is easy to see the direct relationship between the total charge and the boundary condition for the electric field $u^{\prime}(0)$ in Eq.(\ref{1D u' solution}), or between the total dipole moment and the boundary condition for the potential $u(0)$ in Eq.(\ref{1D u solution}). However, the electron density $n^{(i)}(\mathbf{r})$ in Eq.(\ref{n(r)-gen}), obtained from the solution of the Schr\"{o}dinger Eq.(\ref{Schroed}) at the $i$-th iteration step, generally does not satisfy the boundary conditions imposed on the Poisson equation. In this case Eq.(\ref{Poiss-1D}) cannot be solved and the iteration procedure has to be stopped.
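The compatibility conditions (\ref{charge cond}) and (\ref{dipole cond}) can be verified numerically. In the following minimal Python sketch (the exponential charge profile and all parameters are illustrative assumptions, not the self-consistent density of the text; SciPy is assumed available), fixing $u^{\prime}(0)$ and $u(0)$ by these two relations is exactly what makes the potential and the field vanish at large $z$:

```python
import numpy as np
from scipy.integrate import quad, solve_ivp

# Model charge density rho(z) = N_+(z) - n(z); this exponential profile is an
# illustrative assumption, not a self-consistent density.
def rho(z):
    return (1.0 - 0.3 * z) * np.exp(-z)

Q = quad(rho, 0, np.inf)[0]                    # total charge
D = quad(lambda z: z * rho(z), 0, np.inf)[0]   # total dipole moment

# Boundary data fixed by Eqs. (charge cond) and (dipole cond) with u(inf) = 0:
up0 = -4.0 * np.pi * Q                         # u'(0)
u0 = 4.0 * np.pi * D                           # u(0)

# Integrate u'' = 4*pi*rho outward and check that u, u' -> 0 at large z.
sol = solve_ivp(lambda z, y: [y[1], 4.0 * np.pi * rho(z)],
                (0.0, 40.0), [u0, up0], rtol=1e-10, atol=1e-12)
u_inf, up_inf = sol.y[0][-1], sol.y[1][-1]
print(u_inf, up_inf)   # both vanish only for this choice of u(0), u'(0)
```

Any other choice of the boundary data makes $u(z)$ grow linearly at infinity, which is precisely the incompatibility described above.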
In the presented approach this difficulty is removed by partitioning the full density $n(\mathbf{r})$ into two terms
\begin{equation}
n(z)=n_{\mathrm{ind}}\left[ u(z)\right] +n_{\mathrm{Q}}(z), \label{N=Ni+Nq}
\end{equation}
where $n_{\mathrm{ind}}$ is defined as a function of the Coulomb potential $u$ via its relation to the effective potential $u_{\mathrm{eff}}=u(z)+u_{\mathrm{xc}}\left[ n(z)\right] $ by the known quasi-classical expression
\begin{equation}
n_{\mathrm{ind}}\left( u,n\right) =\frac{2^{3/2}}{3\pi^{2}}\left[ E_{F}-u_{\mathrm{eff}}(z,n(z))\right] ^{3/2} \label{Ni-quasicl}
\end{equation}
and
\begin{equation}
n_{\mathrm{Q}}(z)=n(z)-n_{\mathrm{ind}}(z) \label{Nq-definit}
\end{equation}
is called the quantum correction. Using the definition (\ref{N=Ni+Nq}), the Poisson equation (\ref{Poiss-1D}) can be rewritten in the form
\begin{equation}
u^{\prime\prime}+4\pi n_{\mathrm{ind}}[u,n_{\mathrm{Q}}]=4\pi(N_{\mathrm{+}}-n_{\mathrm{Q}}(z)). \label{Poiss+Ni}
\end{equation}
If the pair of functions $n(z)$ and $u(z)$ is the true self-consistent solution of the problem (\ref{Schroed}-\ref{n(r)-gen}), then Eq.(\ref{Poiss+Ni}) is simply a rearranged Eq.(\ref{Poiss-1D}). However, Eq.(\ref{Poiss+Ni}) is much better suited to the iterative procedure: owing to the screening effect, the induced electron density $n_{\mathrm{ind}}^{(i)}$, which depends on the unknown Coulomb potential, guarantees the existence of a self-consistent solution $u^{(i)}(\mathbf{r})$ at every iteration step for any possible spatial dependence of the right-hand side of Eq.(\ref{Poiss+Ni}). It is useful to note that the expression (\ref{Ni-quasicl}) is a good approximation to the solution of the Schr\"{o}dinger equation for the smooth part of $u_{\mathrm{eff}}$ and produces the correct screening of the long-range part of the Coulomb potential, since $n_{\mathrm{ind}}$ is found simultaneously with $u$ in the course of the self-consistent solution of the Poisson equation.
The remaining short-range variations of the density $n(z)$ are described exactly by $n_{\mathrm{Q}}(z)$, which is found in the usual iterative cycle after the solution of the Schr\"{o}dinger equation. This is the root cause of the algorithm's efficiency. It will be convenient to refer to Eq.(\ref{Poiss+Ni}) as the self-consistent Poisson equation. The full iteration algorithm in the case of a one-dimensional inhomogeneity of the charge distribution can now be described as follows:
\begin{equation}
i=0,1,...;\,\,\,n_{\mathrm{Q}}^{(0)}=0,
\end{equation}
\begin{equation}
u^{\prime\prime(i)}+4\pi n_{\mathrm{ind}}\left[ u^{(i)},n_{\mathrm{Q}}^{(i)}\right] =4\pi(N_{\mathrm{+}}(z)-n_{\mathrm{Q}}^{(i)}(z)),\label{Poiss-it}
\end{equation}
\begin{equation}
n_{\mathrm{s}}^{(i)}(z)=n_{\mathrm{ind}}\left[ u^{(i)},n_{\mathrm{Q}}^{(i)}\right] +n_{\mathrm{Q}}^{(i)}(z);\,\,u_{\mathrm{eff}}^{(i)}(z)=u^{(i)}(z)+u_{\mathrm{xc}}\left( n_{\mathrm{s}}^{(i)}(z)\right) ,\label{Ns-it}
\end{equation}
\begin{equation}
\frac{1}{2}\psi_{k}^{\prime\prime(i)}(z)+\left( \frac{1}{2}k^{2}+u_{\mathrm{eff}}(\infty)-u_{\mathrm{eff}}^{(i)}(z)\right) \psi_{k}^{(i)}(z)=0,\label{Schred-it}
\end{equation}
\begin{equation}
E=\frac{1}{2}(k^{2}+\mathbf{k}_{||}^{2})+u_{\mathrm{eff}}(\infty);\,\,\Psi_{E}^{(i)}(\mathbf{r})=\frac{1}{2\pi}\exp(i\mathbf{k}_{||}\mathbf{r}_{||})\psi_{k}^{(i)}(z);\,\,n^{(i)}(z)=2\sum_{E\leqslant E_{F}}|\Psi_{E}^{(i)}|^{2}(\mathbf{r}),\label{N-it}
\end{equation}
\begin{equation}
n_{\mathrm{Q}}^{(i+1)}(z)=n^{(i)}(z)-n_{\mathrm{ind}}\left[ u^{(i)}(z),n^{(i)}(z)\right] .\label{Nq-next}
\end{equation}
Here $n_{\mathrm{s}}^{(i)}(z)$ is the electron density self-consistent with the Coulomb potential $u^{(i)}(z)$ at the given quantum-correction density $n_{\mathrm{Q}}^{(i)}(z)$. The self-consistent Poisson equation (\ref{Poiss-it}) is solved as a boundary-value problem, while the Schr\"{o}dinger equation (\ref{Schred-it}) is solved as a Cauchy problem.
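The crucial property of the self-consistent Poisson step (\ref{Poiss-it}), namely that the $u$-dependent induced density (\ref{Ni-quasicl}) screens the potential and keeps the boundary-value problem solvable, can be sketched numerically for the first iteration ($n_{\mathrm{Q}}^{(0)}=0$) of a jellium half-space with $u_{\mathrm{xc}}$ neglected. All parameters below are illustrative assumptions; SciPy is assumed available:

```python
import numpy as np
from scipy.optimize import newton_krylov

# First iteration of Eq. (Poiss-it) for a jellium half-space, n_Q = 0 and
# u_xc neglected; background density and grid are illustrative choices.
N_plus = 0.01                          # positive background density (a.u.)
kF = (3.0 * np.pi**2 * N_plus)**(1.0 / 3.0)
EF = kF**2 / 2.0                       # Fermi energy of the neutral bulk
L, M = 60.0, 1200
h = L / (M - 1)

def n_ind(u):
    # quasi-classical induced density, Eq. (Ni-quasicl) with u_eff -> u
    return (2.0**1.5 / (3.0 * np.pi**2)) * np.clip(EF - u, 0.0, None)**1.5

def residual(u_in):
    # Dirichlet data: repulsive edge value u(0) = EF, neutral bulk u(L) = 0
    u = np.concatenate(([EF], u_in, [0.0]))
    upp = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2
    return upp + 4.0 * np.pi * n_ind(u[1:-1]) - 4.0 * np.pi * N_plus

u = newton_krylov(residual, np.zeros(M - 2), f_tol=1e-8)
print(u[0], u[-1])   # screened: u decays from ~EF at the edge to ~0 in the bulk
```

Whatever right-hand side the previous iteration has produced, the $u$-dependent term keeps this boundary-value problem solvable; this is the point of Eq.(\ref{Poiss-it}).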
A few words are in order concerning the peculiarity of the quasi-classical expression (\ref{Ni-quasicl}) for $n_{\mathrm{ind}}$ within the density functional approach. Because $u_{\mathrm{xc}}$ itself depends on the electron density, Eq.(\ref{Ni-quasicl}) determines $n_{\mathrm{ind}}$ as an implicit function of the Coulomb potential $u$ when $n_{\mathrm{Q}}$ is known. This function has physical meaning only under the condition $\partial n_{\mathrm{ind}}/\partial u<0$, which is the stability condition for solutions of Eq.(\ref{Poiss-it}) (see the discussion in \cite{ASH2001}). The validity of the described method for semi-infinite electron systems was examined by calculations of surface properties of simple metals and of quantum corrections to the capacitance of barrier structures \cite{Sh-P02}. The expected convergence of the iteration procedure was obtained, and the true self-consistency of the solution was verified by the Budd-Vannimenus criterion \cite{Budd-Van73}. \section{Schr\"{o}dinger equation and Hilbert space} \label{Schred-Hilb Sec} Difficulties in calculating wave functions of the continuous spectrum are also present in problems such as the surface properties of metals, where one deals with a semi-infinite inhomogeneous electron gas. Attempts to replace the unbounded system by a system of finite size give rise to serious complications in both analytical and numerical computations (see, e.g., \cite{Paasch-Hiet83}). However, it is more instructive to analyze the problem here using the example of tunneling in a many-electron system. In this case the questions related to the eigenfunctions of the continuous spectrum of the Schr\"{o}dinger equation (\ref{Schroed}) can be considered in fair detail and to good purpose. In such a system the self-consistent Coulomb and exchange-correlation potentials inevitably affect the shape of the tunnel barrier.
The resulting contribution to the tunnel current-voltage characteristics can vary from insignificant, as in metal-insulator-metal junctions, to decisive, as in Schottky-barrier metal-semiconductor junctions (see \cite{ASH2001} and references therein). The essential dependence of the transparency of the self-consistent barrier on the energy of the tunneling electrons, and the reconstruction of the barrier under an applied bias voltage, make the tunnel Hamiltonian approach fundamentally inapplicable to such systems. It is therefore necessary to formulate a regular scheme of tunnel-current calculations that would also form a basis for the numerical realization of the self-consistent solution. \subsection{Scalar product and orthonormal basis in single-particle continuous spectrum} \label{basis} Let the two parts of the system (left - \textrm{L}, right - \textrm{R}) occupy the half-spaces $z<0$ (metal) and $z>0$ (semiconductor with a degenerate electron gas and the Schottky barrier), respectively. The effective potential energy $V(z)$ of the electrons is considered independent of the coordinates in the $(x,y)$ interface plane. Then the single-particle Hamilton operator is
\begin{equation}
\hat{H}=\hat{T}+V(z), \label{Hamilton}
\end{equation}
where $\hat{T}$ is the kinetic energy operator. Let us assume the following conditions for the asymptotes of $V(z)$:
\begin{equation}
V(z)\rightarrow V^{L},\,z\rightarrow-\infty;\,\,\,V(z)\rightarrow 0,\,z\rightarrow\infty. \label{V(z) def}
\end{equation}
In the case of metal-semiconductor junctions one can take $V(z)\equiv V^{L}<0$ for $z\leq0$. Owing to the constraints (\ref{V(z) def}), the eigenfunctions of the continuous spectrum of $\hat{H}$ oscillate at the left infinity or at both infinities, depending on the relation between $V^{L}$ and the energy eigenvalue $E$. Only eigenstates of the second type contribute to the current.
Let $k\geq0$ be the wavevector of the wave-function oscillations at $z\rightarrow\infty$ and $q\geq0$ the corresponding wavevector at $z\rightarrow-\infty$. The eigenfunctions obey the equation
\begin{equation}
\hat{H}\Psi_{E}(x,y,z)=E\Psi_{E}(x,y,z) \label{H-E}
\end{equation}
and, in view of the translational invariance along the interface, they can be taken in the form
\begin{equation}
\Psi_{E}(x,y,z)=C_{k}\psi_{k}(z)\exp(i\mathbf{k}_{||}\mathbf{r}_{||})/2\pi, \label{Psi-gen}
\end{equation}
where $E^{R}=(k^{2}+\mathbf{k}_{||}^{2})/2$ is the energy spectrum of electrons in the bulk of the semiconductor. The wave functions of the continuous spectrum should be normalized to the $\delta$-function of the quantum numbers. Eq.(\ref{Psi-gen}) already provides the normalization to $\delta(\mathbf{k}_{||}-\mathbf{k}_{||}^{\prime})$ in the lateral plane. This result (the $2\pi$ in the denominator) is usually obtained by means of the Born-Karman periodic boundary condition in a normalization box. Evidently, the system under consideration has no periodicity in the $z$ direction. To determine the constant $C_{k}$ and to avoid the cumbersome calculations of eigenfunctions for a finite-size system mentioned in Section \ref{Introd}, let us define the scalar product of the eigenfunctions by
\begin{equation}
\left\langle \psi_{k}|\psi_{k_{1}}\right\rangle =\lim_{\epsilon\rightarrow 0}\int_{-\infty}^{\infty}dz\exp(-\epsilon|z|)\psi_{k}^{\ast}(z)\psi_{k_{1}}(z)\label{scalar-prod}
\end{equation}
with the natural definition of the eigenfunction norm,
\begin{equation}
\left\| \psi_{k}\right\| ^{2}=\lim_{k_{1}\rightarrow k}\left\langle \psi_{k}|\psi_{k_{1}}\right\rangle .\label{norm-def}
\end{equation}
It is easy to check that the operator $\hat{T}$ in Eq.(\ref{Hamilton}), and therefore the Hamiltonian $\hat{H}$, is self-adjoint with respect to the scalar product (\ref{scalar-prod}). Hence the orthogonality of eigenfunctions for different eigenvalues is guaranteed by the self-adjointness of $\hat{H}$.
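The limit in Eq.(\ref{scalar-prod}) can be made concrete with free plane waves $\psi_{k}(z)=\mathsf{e}^{\mathrm{i}kz}/\sqrt{2\pi}$ (an illustrative check, not the barrier eigenfunctions). The $z$-integral evaluates analytically to a Lorentzian of width $\epsilon$, $\left\langle \psi_{k}|\psi_{k_{1}}\right\rangle _{\epsilon}=(\epsilon/\pi)/(\epsilon^{2}+(k-k_{1})^{2})$, which is a nascent $\delta(k-k_{1})$:

```python
import numpy as np

# Regularized overlap of free plane waves psi_k(z) = exp(ikz)/sqrt(2*pi):
# the z-integral in Eq. (scalar-prod) gives a Lorentzian of width eps.
def overlap(k, k1, eps):
    return (eps / np.pi) / (eps**2 + (k - k1)**2)

k = 1.0
for eps in (1e-1, 1e-2, 1e-3):
    k1 = np.linspace(k - 5.0, k + 5.0, 2_000_001)
    dk = k1[1] - k1[0]
    weight = overlap(k, k1, eps).sum() * dk   # -> 1: the delta-function weight
    print(f"eps={eps:g}  integral={weight:.5f}  peak={overlap(k, k, eps):.1f}")
```

The unit weight together with the diverging peak height $1/(\pi\epsilon)$ is exactly the $\delta$-normalization used below.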
However, the quantum number $k$ of the current-carrying eigenstates is twofold degenerate: any complex solution $\psi_{k}$ and its conjugate $\psi_{k}^{\ast}$ form a linearly independent pair of solutions. In order for a degenerate pair of eigenfunctions not to violate the necessary orthonormality of the basis in the Hilbert space generated by the Hamiltonian $\hat{H}$, they must be orthogonalized and normalized. Let us take as $\psi_{k}$ the wave function describing tunneling from the right half-space to the left one, with asymptotes of the form
\begin{equation}
\psi_{k}^{\mathrm{R}}=C_{k}^{\mathrm{R}}\left( \mathsf{e}^{-\mathrm{i}kz}+r_{k}^{\mathrm{R}}\mathsf{e}^{\mathrm{i}kz}\right) ,\,z\rightarrow\infty;\,\,\,\psi_{k}^{\mathrm{R}}=C_{k}^{\mathrm{R}}t_{k}^{\mathrm{R}}\,\mathsf{e}^{-\mathrm{i}qz},\,\,z\rightarrow-\infty.\label{psi-R asympt}
\end{equation}
The usual continuity condition for the probability flux density,
\begin{equation}
j_{k}(z)=\psi_{k}^{\ast}(z)\left( \mathsf{\hat{v}}+\mathsf{\hat{v}}^{+}\right) \psi_{k}(z)/2=\mathrm{const}(z),\label{continuity-def}
\end{equation}
gives the relation between the transmission and reflection amplitudes
\begin{equation}
\frac{\partial E^{\mathrm{L}}(q,\mathbf{k}_{||})}{\partial q}\left| t_{k}^{\mathrm{R}}\right| ^{2}=\frac{\partial E^{\mathrm{R}}(k,\mathbf{k}_{||})}{\partial k}\left( 1-\left| r_{k}^{\mathrm{R}}\right| ^{2}\right) .\label{t2-r2 relation}
\end{equation}
Here $E^{\mathrm{L}}(q,\mathbf{k}_{||})$ and $E^{\mathrm{R}}(k,\mathbf{k}_{||})$ are the left and right energy spectra, respectively, and the velocity operator is defined by $\mathsf{\hat{v}}=\mathrm{i}\left[ \hat{H},\hat{z}\right] $. The conservation of the total energy $E$ and of the transverse momentum $\mathbf{k}_{||}$,
\begin{equation}
E^{\mathrm{L}}(q,\mathbf{k}_{||})=E^{\mathrm{R}}(k,\mathbf{k}_{||})=E,\label{energy conservation}
\end{equation}
determines $q(k)$ as a function of $k$ and vice versa.
Taking Eq.(\ref{energy conservation}) into account, Eq.(\ref{t2-r2 relation}) can be written in the more compact form
\begin{equation}
\frac{1}{\partial q/\partial k}\left| t_{k}^{\mathrm{R}}\right| ^{2}=1-\left| r_{k}^{\mathrm{R}}\right| ^{2}.\label{t2-r2-R}
\end{equation}
The contribution to the normalizing integral (\ref{norm-def}) is formed by the infinite regions of the $z$ axis where the asymptotic expressions (\ref{psi-R asympt}) are valid. Using the definition (\ref{scalar-prod}) and the relation (\ref{t2-r2-R}), one finds the value $\left| C_{k}^{\mathrm{R}}\right| ^{2}=1/2\pi$ that provides
\begin{equation}
\lim_{k\rightarrow k_{1}}\left\langle \psi_{k}^{\mathrm{R}}|\psi_{k_{1}}^{\mathrm{R}}\right\rangle =\delta\left( k-k_{1}\right) .\label{norm-psiR}
\end{equation}
The second normalized solution $\psi_{q}^{\mathrm{L}}$, linearly independent of $\psi_{k}^{\mathrm{R}}$, can be obtained by a similar procedure with the following results:
\begin{equation}
\psi_{q}^{\mathrm{L}}=C_{q}^{\mathrm{L}}\left( \mathsf{e}^{\mathrm{i}qz}+r_{q}^{\mathrm{L}}\mathsf{e}^{-\mathrm{i}qz}\right) ,\,z\rightarrow-\infty;\,\,\,\psi_{q}^{\mathrm{L}}=C_{q}^{\mathrm{L}}t_{q}^{\mathrm{L}}\mathsf{e}^{\mathrm{i}kz},\,\,z\rightarrow\infty, \label{psi-L asympt}
\end{equation}
\begin{equation}
\frac{1}{\partial k/\partial q}\left| t_{q}^{\mathrm{L}}\right| ^{2}=1-\left| r_{q}^{\mathrm{L}}\right| ^{2} \label{t2-r2-L}
\end{equation}
\begin{equation}
\left| C_{q}^{\mathrm{L}}\right| ^{2}=1/2\pi,\,\,\lim_{q\rightarrow q_{1}}\left\langle \psi_{q}^{\mathrm{L}}|\psi_{q_{1}}^{\mathrm{L}}\right\rangle =\delta\left( q-q_{1}\right) ,\,\,q=q(k).
\label{norm-psiLq}
\end{equation}
The other useful normalization
\begin{equation}
\lim_{k_{1}\rightarrow k}\left\langle \psi_{q(k)}^{\mathrm{L}}|\psi_{q(k_{1})}^{\mathrm{L}}\right\rangle =\delta\left( k-k_{1}\right) \label{norm-psiLk}
\end{equation}
is obtained under the condition
\begin{equation}
\left| C_{k}^{\mathrm{L}}\right| ^{2}=\frac{\partial q/\partial k}{2\pi}. \label{C4L-k norm}
\end{equation}
To elucidate the question of the mutual orthogonality of $\psi_{k}^{\mathrm{R}}$ and $\psi_{q(k)}^{\mathrm{L}}$, one needs to know the interrelation between the pairs $\left( t_{k}^{\mathrm{R}},r_{k}^{\mathrm{R}}\right) $ and $\left( t_{q(k)}^{\mathrm{L}},r_{q(k)}^{\mathrm{L}}\right) $. The required relations can be found in a general form, independent of the particular barrier, by representing $\psi_{q(k)}^{\mathrm{L}}$ as a linear combination of $\psi_{k}^{\mathrm{R}}$ and $\psi_{k}^{\mathrm{R}\ast}$, which gives
\begin{equation}
t_{q(k)}^{\mathrm{L}}=\frac{1}{\partial q/\partial k}t_{k}^{\mathrm{R}},\,\,\,r_{q(k)}^{\mathrm{L}}=-r_{k}^{\mathrm{R\ast}}\left( t_{k}^{\mathrm{R}}/t_{k}^{\mathrm{R\ast}}\right) .\label{(t,r)L-(t,r)R}
\end{equation}
Using Eq.(\ref{(t,r)L-(t,r)R}) one can show that the $\delta$-function-like terms in the scalar product $\left\langle \psi_{q(k)}^{\mathrm{L}}|\psi_{k_{1}}^{\mathrm{R}}\right\rangle $ cancel each other, and hence
\begin{equation}
\lim_{k_{1}\rightarrow k}\left\langle \psi_{q(k)}^{\mathrm{L}}|\psi_{k_{1}}^{\mathrm{R}}\right\rangle =0.\label{L-R ortho}
\end{equation}
The proof of Eq.(\ref{L-R ortho}) completes the construction of the orthonormal basis in the Hilbert space of the single-particle Hamiltonian $\hat{H}$.
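Both the flux relation (\ref{t2-r2-R}) and the amplitude relations (\ref{(t,r)L-(t,r)R}) can be checked against the simplest exactly solvable case, an abrupt potential step $V(z)=V^{L}<0$ for $z<0$ and $V(z)=0$ for $z>0$ (a sharp interface with no barrier; the numerical values are illustrative):

```python
import numpy as np

# Check of Eqs. (t2-r2-R) and ((t,r)L-(t,r)R) for an abrupt step
# V(z) = V_L < 0 (z < 0), V(z) = 0 (z > 0). The amplitudes follow from
# matching psi and psi' at z = 0; V_L and k are illustrative numbers.
V_L, k = -0.3, 0.5                    # k: wavevector at z -> +infinity
q = np.sqrt(k**2 - 2.0 * V_L)         # wavevector at z -> -infinity (same E)
dq_dk = k / q                         # from E = q^2/2 + V_L = k^2/2

t_R = 2.0 * k / (k + q)               # psi_k^R: wave e^{-ikz} incident from the right
r_R = (k - q) / (k + q)
assert np.isclose(abs(t_R)**2 / dq_dk, 1.0 - abs(r_R)**2)   # Eq. (t2-r2-R)

t_L = 2.0 * q / (k + q)               # psi_q^L: wave e^{iqz} incident from the left
r_L = (q - k) / (k + q)
assert np.isclose(t_L, t_R / dq_dk)                         # Eq. ((t,r)L-(t,r)R)
assert np.isclose(r_L, -np.conj(r_R) * t_R / np.conj(t_R))
print("flux and L-R amplitude relations hold")
```

For this real-amplitude case the phase factor $t_{k}^{\mathrm{R}}/t_{k}^{\mathrm{R}\ast}$ reduces to unity, but the check is written in the general complex form of Eq.(\ref{(t,r)L-(t,r)R}).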
\subsection{Electron density and current density} For an equilibrium system the single-particle density matrix is $\hat{\rho}=\hat{\rho}(\hat{H})$, and in the chosen basis it has only the diagonal non-zero elements $\rho^{\mathrm{LL}}\equiv F^{\mathrm{L}}$ and $\rho^{\mathrm{RR}}\equiv F^{\mathrm{R}}$, owing to the orthogonality relation (\ref{L-R ortho}). Therefore,
\begin{equation}
n(z)=\mathrm{Sp}\hat{\rho}\hat{n}(z)=\nonumber
\end{equation}
\begin{equation}
=2\int_{0}^{\infty}dq\int_{-\infty}^{\infty}d\mathbf{k}_{||}F^{\mathrm{L}}\left[ E^{\mathrm{L}}(q,\mathbf{k}_{||})\right] \left| \psi_{q}^{\mathrm{L}}(z)\right| ^{2}+2\int_{0}^{\infty}dk\int_{-\infty}^{\infty}d\mathbf{k}_{||}F^{\mathrm{R}}\left[ E^{\mathrm{R}}(k,\mathbf{k}_{||})\right] \left| \psi_{k}^{\mathrm{R}}(z)\right| ^{2}.\label{n(z)-exp}
\end{equation}
Since the Schottky barrier is situated entirely within the semiconductor, it is convenient to replace the $q$-integration in Eq.(\ref{n(z)-exp}) by a $k$-integration, because $k$ is the quantum number of the \textrm{R}-eigenstates. The result is
\begin{equation}
n(z)=2\int_{0}^{\infty}dk\int_{-\infty}^{\infty}d\mathbf{k}_{||}\left\{ F^{\mathrm{L}}\left[ E^{\mathrm{R}}(k,\mathbf{k}_{||})\right] \left| \psi_{k}^{\mathrm{L}}(z)\right| ^{2}+F^{\mathrm{R}}\left[ E^{\mathrm{R}}(k,\mathbf{k}_{||})\right] \left| \psi_{k}^{\mathrm{R}}(z)\right| ^{2}\right\} .\label{n(z)-fin}
\end{equation}
Here the subindex $q(k)$ in $\psi^{\mathrm{L}}$ has been replaced by $k$ as a reminder that the normalization (\ref{norm-psiLk})-(\ref{C4L-k norm}) must be used. The diagonal elements $F^{\mathrm{L}}$ and $F^{\mathrm{R}}$ of the density matrix determine the occupations of the \textrm{L}- and \textrm{R}-eigenstates, respectively, and are described by Fermi distributions. The choice of the \textrm{L}- and \textrm{R}-states as the basis makes it possible to account mathematically for the presence of two independent reservoirs of particles (thermostats) at the left and right infinities.
The Fermi level $E_{F}^{\mathrm{L,R}}$ of each reservoir is determined by the proper neutrality condition far away from the interface, and the two levels differ when a bias voltage is applied to the junction. It is important to stress that both the $\psi_{k}^{\mathrm{L}}(z)$ and the $\psi_{k}^{\mathrm{R}}(z)$ states extend to both infinities and contribute to $n(z)$ at any point. Thus the applied bias changes the position of each Fermi level. Substituting the asymptotic expressions (\ref{psi-R asympt}) and (\ref{psi-L asympt}) for $\psi_{k}^{\mathrm{R}}$ and $\psi_{k}^{\mathrm{L}}$ into Eq.(\ref{n(z)-fin}) and using the neutrality condition $n(\infty)=N_{\mathrm{+}}^{\mathrm{R}}$, we obtain the equation determining the dependence of the Fermi level of the electrons in the semiconductor on the bias $U=-V$ (compare with Eq.(3.101) in \cite{Ferry-Good97})
\[
n(z\rightarrow\infty)\equiv N_{+}^{\mathrm{R}}=\frac{4}{(2\pi)^{3}}\int_{0}^{\infty}dk\int_{-\infty}^{\infty}d\mathbf{k}_{||}F^{\mathrm{R}}\left[ E^{\mathrm{R}}(k,\mathbf{k}_{||})\right] +
\]
\begin{equation}
+\frac{2}{(2\pi)^{3}}\int_{k_{U}}^{\infty}dk\int_{-\infty}^{\infty}d\mathbf{k}_{||}\left( 1-\left| r_{k}^{\mathrm{R}}\right| ^{2}\right) \left\{ F^{\mathrm{L}}\left[ E^{\mathrm{R}}(k,\mathbf{k}_{||})\right] -F^{\mathrm{R}}[E^{\mathrm{R}}(k,\mathbf{k}_{||})]\right\} .\label{Efermi}
\end{equation}
Here $V$ is the structure voltage drop, $k_{U}=\left[ 2\max(E^{L}(0,0)-U,0)\right] ^{1/2}$, and $E_{F}^{\mathrm{L}}=E_{F}^{\mathrm{R}}-U$. At zero temperature $E_{F}^{\mathrm{R}}=E^{\mathrm{R}}(k_{F})$, and this equation can be transformed into an equation for the Fermi wave vector of the right electrons,
\begin{equation}
\frac{k_{F}^{3}}{3\pi^{2}}=N_{\mathrm{+}}+2\mathrm{sgn}(U)\int\limits_{E(k,\mathbf{k}_{||})\in\lbrack E_{F}-U,E_{F}]}\frac{dkd\mathbf{k}_{||}}{(2\pi)^{3}}\left( 1-\left| r_{k}\right| ^{2}\right) .\label{kF}
\end{equation}
The index \textrm{R} is suppressed here for brevity.
It is easy to see from Eq.(\ref{kF}) that at $U>0$ we have $k_{F}(U)>k_{F}(0)$, and vice versa, as it should be. For low barrier transparency (reflection coefficient of order 1), Eq.(\ref{kF}) can be solved by a simple iterative method\footnote{In the case of metal-semiconductor junctions, the relative correction to the solution of Eq.(\ref{kF}), arising because the neutrality constraint shifts the metal $E_{F}^{\mathrm{L}}$ from the value $E_{F}^{\mathrm{R}}-U$, is of order $(U/E_{F}^{\mathrm{L}})(1-|r_{F}^{\mathrm{R}}|^{2})^{2}$ and can be neglected in calculations of the $I-V$ characteristics of real structures with typical values $1-|r_{F}^{\mathrm{R}}|^{2}<10^{-4}$, $U<1\,\mathrm{eV}$, $E_{F}^{\mathrm{L}}\lesssim 10\,\mathrm{eV}$. Direct numerical calculations in Ref.\cite{Mera-eaPRB05} have shown that a violation of the equality $\Delta E_{F}=U$ becomes essential when the barrier height is $\lesssim E_{F}$ and the barrier width is $\lesssim 2\pi /k_{F}$.}. After the Fermi level is found, one can calculate the tunnel current by averaging the current density (\ref{continuity-def}) with the density matrix and using the asymptotic representation of the wave functions at $z\rightarrow\infty$. The result is
\begin{equation}
I(U)=-2e\int_{0}^{\infty}\frac{dk}{2\pi}\int\frac{d\mathbf{k}_{||}}{(2\pi)^{2}}\frac{\partial E}{\partial k}\left[ F(E)-F(E+U)\right] \left( 1-\left| r_{k}\right| ^{2}\right) . \label{current}
\end{equation}
Here $F(E)$ is the Fermi distribution at $T\neq0$, $e=-1$ is the electron charge, and $E(k,\mathbf{k}_{||})$ is the energy dispersion relation of the semiconductor electrons.
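A minimal numerical sketch of Eq.(\ref{current}) for the parabolic dispersion $E=(k^{2}+\mathbf{k}_{||}^{2})/2$, with the $\mathbf{k}_{||}$ integration done analytically at finite temperature. The transparency $1-|r_{k}|^{2}$ is replaced here by an assumed WKB-like model, since the real one comes from the self-consistent barrier; $E_{F}$, $T$ and $U$ are illustrative values in atomic units, and SciPy is assumed available:

```python
import numpy as np
from scipy.integrate import quad

# Sketch of the tunnel current, Eq. (current), for E = (k^2 + k_par^2)/2.
# D(k) = 1 - |r_k|^2 is an assumed model transparency, for illustration only.
EF, T, U = 0.2, 0.01, 0.05           # Fermi level, temperature, bias (a.u.)

def D(k):
    # model WKB-like barrier transparency -- an assumption, not Eq. (kF) output
    return np.exp(-6.0 / max(k, 1e-6))

def kpar_integral(Ez):
    # the k_par integral of F(E) - F(E+U), done analytically at temperature T:
    # int d^2 k_par/(2pi)^2 [...] = (T/2pi)[ln(1+e^{(EF-Ez)/T}) - ln(1+e^{(EF-U-Ez)/T})]
    return (T / (2.0 * np.pi)) * (np.log1p(np.exp((EF - Ez) / T))
                                  - np.log1p(np.exp((EF - U - Ez) / T)))

def integrand(k):
    return k * D(k) * kpar_integral(k**2 / 2.0)   # dE/dk = k

I = (2.0 / (2.0 * np.pi)) * quad(integrand, 0.0, 3.0)[0]   # -2e = +2 for e = -1
print(I)   # positive current for positive bias U
```

With $e=-1$ the prefactor $-2e$ is $+2$, so a positive bias yields a positive current, consistent with the sign convention of Eq.(\ref{current}).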
\section{Many-electron structures with a finite number of electrons} \label{finite number sec} \subsection{Many-electron atoms and molecules} The neutrality problem of the iteration procedure in self-consistent field theory has been considered above for extended many-electron systems with an undetermined number of electrons, and a solution is suggested in Section \ref{Poiss-iter sec}. For finite many-electron systems such as atoms or molecules the number of electrons is known exactly, and the insolubility of the iteration equations seems to be absent: the required charge state can be prepared provided there is a sufficient number of bound-state levels. However, the question then turns into the impossibility of finding a solution of the Poisson equation with a given angular symmetry if the charge density on its right-hand side does not have the desired symmetry. Fertig and Kohn \cite{Fert-Kohn00} considered this problem and discussed its source and consequences, but the solution they suggested consists in additional artificial constraints on the sought variational solution. It seems, however, that separation of the induced charge in the self-consistent Poisson equation, as is done in Eq.(\ref{Poiss+Ni}), might also solve the problem in the case of finite Fermi systems. The induced charge can be taken in a form like that of the Thomas-Fermi-Dirac theory \cite{Bethe64} with the correlation potential included. \subsection{Periodic electronic structures and supercells} The spatial periodicity of crystalline solids makes it possible to replace numerical simulations of the infinitely extended system by computations for a finite fragment (cell) supplemented by periodic boundary conditions. To map the preceding analysis of the interrelation between the charge distribution and the boundary conditions for the Coulomb potential onto this case, let us rewrite Eqs.
(\ref{1D u' solution}) and (\ref{1D u solution}) for the finite interval $(a,b)$:
\begin{equation}
u^{\prime}(b)-u^{\prime}(a)=4\pi\int_{a}^{b}dz_{1}\rho(z_{1}),\label{1D-u'(a,b)}
\end{equation}
\begin{equation}
u(b)-u(a)=(b-a)u^{\prime}(b)-4\pi\int_{a}^{b}dz_{1}z_{1}\rho(z_{1}).\label{1D u (a,b)}
\end{equation}
Periodicity demands that the conditions on the total charge, $Q\equiv\int_{a}^{b}dz_{1}\rho(z_{1})=0$, and on the total dipole moment,
\[
D\equiv\int_{a}^{b}dz_{1}z_{1}\rho(z_{1})=(b-a)u^{\prime}(b)/4\pi,
\]
be fulfilled. For a centrosymmetric structure one must have $u^{\prime}(a)=u^{\prime}(b)=0$ and therefore $D=0$ (see, e.g., \cite{L-L82Electrodyn} \S \S 6, 13). At each $i$-th iteration step it is easy to enforce the neutrality condition $Q^{(i)}=0$ for a finite structure with a prescribed ionic charge by filling the necessary number of electronic states after solving the Schr\"{o}dinger equation. However, a charge distribution with $D^{(i)}\neq0$ is a rather likely outcome that cannot be prevented. In this case the periodic boundary condition for the solution of the Poisson equation can be fulfilled only with $u^{\prime}\neq0$ at the boundaries. At the same time, the behavior of the potential is then essentially distorted in comparison with that for a ``true'' charge distribution with $D=0$, which produces an increasingly distorted charge distribution at the next iteration cycle. In the case of a Fourier-series decomposition of the electron density and the potential, the discontinuity of the potential at the boundaries entails a growth of the solution $u$, and therefore of its derivative $u^{\prime}$, near the boundaries owing to the Gibbs effect \cite{Zigm59}. In the case of a two- or three-dimensional cell, Eqs.
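The dipole constraint can be seen directly in a periodic Fourier (FFT) solution of the one-dimensional Poisson equation: for a neutral model density with $D\neq0$ (chosen purely for illustration), the periodic solution necessarily carries the boundary field $u^{\prime}(a)=u^{\prime}(b)=4\pi D/(b-a)$:

```python
import numpy as np

# Periodic (Fourier) solution of u'' = 4*pi*rho on a cell (0, L) for a neutral
# model density with a nonzero dipole moment D; rho is an illustrative choice.
# The periodic solution must carry the boundary field u'(0) = u'(L) = 4*pi*D/L.
L, M = 10.0, 4096
z = np.arange(M) * (L / M)
rho = np.sin(2.0 * np.pi * z / L) + 0.3 * np.sin(4.0 * np.pi * z / L)  # Q = 0

D = np.sum(z * rho) * (L / M)                    # cell dipole moment, D != 0
g = 2.0 * np.pi * np.fft.fftfreq(M, d=L / M)     # cell wavevectors
rho_g = np.fft.fft(rho)
u_g = np.zeros_like(rho_g)
u_g[1:] = -4.0 * np.pi * rho_g[1:] / g[1:]**2    # -g^2 u_g = 4*pi*rho_g
up = np.real(np.fft.ifft(1j * g * u_g))          # spectral derivative u'(z)

print(up[0], 4.0 * np.pi * D / L)                # equal: boundary field fixed by D
```

The nonzero boundary field appears here automatically, in agreement with the relation following Eq.(\ref{1D u (a,b)}); a centrosymmetric density ($D=0$) would give $u^{\prime}=0$ at the boundaries.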
(\ref{1D-u'(a,b)}) and (\ref{1D u (a,b)}) should be replaced by
\begin{equation}
\oint\limits_{S_{\mathrm{cell}}}dS\mathbf{\nu}_{s}\mathbf{\cdot}\nabla u=4\pi\int\limits_{\Omega_{\mathrm{cell}}}d\mathbf{r}\rho(\mathbf{r})\equiv4\pi Q_{\mathrm{cell}}\,\,\,\textrm{(the Gauss theorem)},\label{Gauss theo}
\end{equation}
\begin{equation}
\oint\limits_{S_{\mathrm{cell}}}dS\left[ \mathbf{\nu}_{s}\mathbf{\cdot}\nabla u-u(\mathbf{\nu}_{s}\mathbf{\cdot}\nabla)\right] \mathbf{r}=4\pi\int\limits_{\Omega_{\mathrm{cell}}}d\mathbf{rr}\rho(\mathbf{r})\equiv4\pi\mathbf{D}_{\mathrm{cell}},\label{D-theo}
\end{equation}
with the same conclusions as in the one-dimensional case above. Here $\Omega_{\mathrm{cell}}$ and $S_{\mathrm{cell}}$ are the volume and the bounding surface of the cell, respectively, and $\mathbf{\nu}_{s}$ is the external unit normal to the cell surface. The formulae (\ref{Gauss theo}) and (\ref{D-theo}) are derived from the Poisson equation with the use of the Green theorem
\begin{equation}
\Delta u(\mathbf{r})=4\pi\rho(\mathbf{r}),\,\,\,\,\int_{\Omega}d\mathbf{r}\Phi\Delta\Psi=\int_{\Omega}d\mathbf{r}\Psi\Delta\Phi+\oint_{S_{\Omega}}d\mathbf{S}\left( \Phi\nabla\Psi-\Psi\nabla\Phi\right) \label{Green theo}
\end{equation}
which requires proper differentiability of the involved functions and corresponding smoothness of the bounding surface $S_{\Omega}$ \cite{Vlad71-EqMatPhys}. The incompatibility of the boundary conditions with the right-hand side of the Poisson equation makes the Dirichlet problem ill-posed, with a strong perturbation of the solution in response to an insignificant perturbation of the initial data \cite{Courant62}. This mechanism may be responsible for the observed deterioration of current self-consistent iterative algorithms, manifested in the long-wavelength charge instability known as ``charge sloshing'' \cite{KerkerPRB81}-\cite{Kress-FurthPRB96}.
It is necessary to note that at the initial iteration step the solution of the self-consistent Poisson equation (\ref{Poiss+Ni})\ with the induced electron distribution $n_{\mathrm{ind}}$\ defined by Eq.(\ref{Ni-quasicl}) should give a good starting guess for the potential and the valence electron distribution in the cell since they are self-consistent and meet the boundary conditions. However, the expression (\ref{Ni-quasicl}) is evidently inappropriate in the case of crystals with filled energy bands, and one needs to use the next-order quasi-classical approximation for the induced electron density expressed as a function of derivatives of the potential (see, e.g. \cite{Kirzh63}). Let us consider now the application of the results of Sec. \ref{Schred-Hilb Sec} to the continuous spectrum of an infinitely extended periodic system. According to the Bloch theorem, the eigenfunctions of the single-particle Hamiltonian can be taken in the form% \begin{equation} \psi_{\mathsf{j}\mathbf{k}}(\mathbf{r})=C\exp(\mathsf{i}\mathbf{kr}% )\phi_{\mathsf{j}\mathbf{k}}(\mathbf{r}), \label{Bloch w-f}% \end{equation} where $\phi_{\mathsf{j}\mathbf{k}}(\mathbf{r})$ is the cell-periodic part, $\mathsf{j}$ is the energy band index, $\mathbf{k}$\ is the wave vector, and $C$ is the normalizing constant.
The definition of the scalar product as in Eq.(\ref{scalar-prod}) by the expression% \begin{equation} \left\langle \psi_{\mathsf{j}\mathbf{k}}|\psi_{\mathsf{j}_{1}\mathbf{k}_{1}% }\right\rangle =\lim_{\epsilon\rightarrow0}\int d\mathbf{r}\exp(-\epsilon |\mathbf{r}|)\psi_{\mathsf{j}\mathbf{k}}^{\ast}(\mathbf{r})\psi_{\mathsf{j}% _{1}\mathbf{k}_{1}}(\mathbf{r}) \label{norm-def Bloch}% \end{equation} and of the eigenfunction norm as in Eq.(\ref{norm-def}) provides the self-adjointness of the single-particle Hamiltonian and results in the orthonormal basis of the Hilbert space with $|C|^{2}=(2\pi)^{-3}$% \begin{equation} \left\langle \psi_{\mathsf{j}\mathbf{k}}|\psi_{\mathsf{j}_{1}\mathbf{k}_{1}% }\right\rangle =\delta\left( \mathbf{k}-\mathbf{k}_{1}\right) \delta _{\mathsf{jj}_{1}} \label{ortho-norm Bloch}% \end{equation} under the natural conditions $\mathbf{k},\mathbf{k}_{1}\in$ 1st Brillouin zone and \begin{equation} \int\limits_{\Omega_{\mathrm{cell}}}d\mathbf{r}\left| \phi_{\mathsf{j}% \mathbf{k}}(\mathbf{r})\right| ^{2}=\Omega_{\mathrm{cell}}. \label{norm Fi-Bloch}% \end{equation} The relationships just obtained allow one to determine the electron density inside the cell by \begin{equation} n(\mathbf{r})=2\sum_{\mathsf{j}}\int\limits_{\mho_{\mathrm{BZ}}}% \frac{d\mathbf{k}}{(2\pi)^{3}}\left| \phi_{\mathsf{j}\mathbf{k}}% (\mathbf{r})\right| ^{2} \Theta(E_{F}-E_{\mathsf{j}}(\mathbf{k})), \label{n(r)-cell}% \end{equation} where the factor $2$ takes into account the spin degeneracy and the $k$-integration is taken over the unit cell of the reciprocal lattice, i.e., the first Brillouin zone. Because the volume of the Brillouin zone is $\mho_{\mathrm{BZ}}=(2\pi)^{3}/\Omega_{\mathrm{cell}}$, we have% \begin{equation} \int\limits_{\mho_{\mathrm{BZ}}}\frac{d\mathbf{k}}{(2\pi)^{3}}=\frac{1}% {\Omega_{\mathrm{cell}}}.
\label{dk-Brill-volume}% \end{equation} Thus the band filling and the position of the Fermi level can be determined from the neutrality condition% \begin{equation} \int\limits_{\Omega_{\mathrm{cell}}}d\mathbf{r}n(\mathbf{r})=N_{\mathrm{+}}, \label{dr-n(r)-cell}% \end{equation} where $N_{\mathrm{+}}$ is the ion charge of the unit cell. The expressions (\ref{norm Fi-Bloch})-(\ref{dr-n(r)-cell}) allow one to abandon the unnecessary Born--von K\'{a}rm\'{a}n boundary condition for the eigenfunctions $\psi_{\mathsf{j}\mathbf{k}}(\mathbf{r})$, which restricts the admissible points in $k$-space to a discrete set. As a result, one can calculate the energy bands $\varepsilon_{\mathsf{j}}(\mathbf{k})$ of a perfect crystal or perform integrations over the Brillouin zone using any set of $k$-points dictated by the selected algorithm \cite{Monk-PackPRB76}, and avoid the disadvantages of extending the cell beyond the minimal unit cell in order to increase the sampling density of $k$-points \cite{Niem-PRB05}. Similarly, the artificial periodicity of the supercell \cite{Payne-TeterRMP92}% \ can be eliminated from calculations of a solid surface or an imperfect crystal with a point defect and replaced by asymptotic boundary conditions at infinity for the Coulomb potential and wave functions. This removes the known difficulties introduced in calculations of extended systems by making use of the slab \cite{App-HamRMP76} or supercell \cite{Mak-PayPRB95}% -\cite{Niem-PRB05-Supcell}\ geometries. In this case the induced charge resulting from the free carriers or from the nonuniform polarization of the valence electrons should be introduced into the self-consistent Poisson equation using the effective mass approximation or the macroscopic electric susceptibility, respectively.
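The determination of the Fermi level from the neutrality condition (dr-n(r)-cell) can be sketched with a toy one-band model (all numbers below are hypothetical; a 1D tight-binding dispersion stands in for the actual band structure). With a uniform sampling of the Brillouin zone, the electron count per cell is $2\,N_k^{-1}\sum_{\mathbf{k}}\Theta(E_F-E(\mathbf{k}))$, and $E_F$ follows by bisection:

```python
import numpy as np

# factor 2 = spin degeneracy; the k-average realizes (1/Omega_cell) * Int dk/(2*pi)^d
def electrons_per_cell(E_F, Ek):
    return 2.0 * np.mean(Ek <= E_F)

def fermi_level(Ek, N_plus, lo=-10.0, hi=10.0, iters=60):
    """Bisection for E_F from the neutrality condition: electrons/cell = N_plus."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if electrons_per_cell(mid, Ek) < N_plus:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# any sampling of the BZ may be used; here a uniform (Monkhorst-Pack-like) grid
k = np.linspace(-np.pi, np.pi, 2001, endpoint=False)
Ek = -2.0 * np.cos(k)                 # toy band energies epsilon(k)

E_F = fermi_level(Ek, N_plus=1.0)     # one electron per cell -> half filling
print("E_F =", round(E_F, 3))         # symmetry of the band places E_F near 0
```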
\section{Concluding remarks} The origin of all the difficulties with self-consistency listed above is the long-range character of the Coulomb interaction, which is responsible for the interdependence between the charge distribution and the boundary conditions on the potential. Thus large-scale self-consistent distributions can only result from the direct solution of the self-consistent Poisson equation itself, because the boundary conditions take into account the reaction of distant charges that are exterior to the considered system. The separation of the induced charge as a function of the potential and the corresponding modification of the Poisson equation is the only way to obtain the self-consistent distributions of potential and charge with due account of the boundary conditions. To elucidate the point in more detail, let us consider the electrostatic part of the Kohn-Sham energy functional% \begin{equation} E_{\mathrm{es}}=\frac{1}{2}\int d\mathbf{r}d\mathbf{r}^{\prime}\frac {\rho(\mathbf{r})\rho(\mathbf{r}^{\prime})}{\left| \mathbf{r-r}^{\prime }\right| } \label{E_es}% \end{equation} and the corresponding contribution to the effective potential in the Schr\"{o}dinger equation \cite{K-V83}% \begin{equation} u_{\mathrm{es}}=\int d\mathbf{r}^{\prime}\frac{\rho(\mathbf{r}^{\prime}% )}{\left| \mathbf{r-r}^{\prime}\right| }. \label{u_es}% \end{equation} At first glance, there is no need for the Poisson equation since Eqs. (\ref{E_es}) and (\ref{u_es}) already give the explicit relationships between the necessary quantities and the charge density. However, the function $1/\left| \mathbf{r-r}^{\prime}\right| $ in the integrand of Eq. (\ref{u_es}) is the Green function of the Laplace equation with the zero boundary condition at infinity. The expression (\ref{u_es}) is the true solution of the Poisson equation if there are no charges outside the integration region.
Evidently, this is not the case for infinitely extended systems or when periodic boundary conditions are specified. Let us assume that there are two subsystems with non-intersecting charge distributions \begin{equation} \rho(\mathbf{r})=\rho_{1}(\mathbf{r})+\rho_{2}(\mathbf{r});\,\rho _{1}(\mathbf{r})\neq0,\,\mathbf{r\in\Upsilon}_{1};\,\rho_{2}(\mathbf{r}% )\neq0,\,\mathbf{r\in\Upsilon}_{2};\,\mathbf{\Upsilon}_{1}\cap\mathbf{\Upsilon }_{2}=\emptyset\label{ro-2}% \end{equation} and, respectively,% \begin{equation} \varphi(\mathbf{r})=\varphi_{1}(\mathbf{r})+\varphi_{2}(\mathbf{r}% );\,\Delta\varphi_{1,2}=-4\pi\rho_{1,2}, \label{fi-2}% \end{equation} where $\varphi$ is the Coulomb potential. Substituting the expression (\ref{ro-2}) for $\rho$ in Eq. (\ref{E_es}) and using the Poisson equations (\ref{fi-2}) together with the Green theorem (\ref{Green theo}), we obtain% \begin{equation} E_{\mathrm{es}}=\int\limits_{\mathbf{\Upsilon}_{1}}d\mathbf{r}\rho_{1}% \varphi+\frac{1}{8\pi}\int\limits_{\mathbf{\Upsilon}_{1}}d\mathbf{r}% \varphi\Delta\varphi+\int\limits_{\mathbf{\Upsilon}_{2}}d\mathbf{r}\rho _{2}\varphi+\frac{1}{8\pi}\int\limits_{\mathbf{\Upsilon}_{2}}d\mathbf{r}% \varphi\Delta\varphi\equiv E_{\mathrm{es}1}+E_{\mathrm{es}2}. \label{E_es1+2}% \end{equation} Now it is possible to consider the total energy $E_{\mathrm{tot}1}$ of the subsystem 1 as the functional of $\rho_{1}(\mathbf{r})$, $\Psi_{E1}% (\mathbf{r})$, and $\varphi(\mathbf{r})$. Then the necessary conditions of the functional minimum are \cite{K-V83}% \begin{equation} \delta E_{\mathrm{tot}1}/\delta\Psi_{E1}^{\ast}(\mathbf{r})=0,\,\delta E_{\mathrm{es}1}/\delta\varphi(\mathbf{r})=0\label{variations}% \end{equation} which must be supplemented by the expression for the effective potential $u_{\mathrm{eff}1}\equiv\delta E_{\mathrm{es}1}/\delta\rho_{1}+u_{\mathrm{xc}% 1}$. The first equality gives rise to the Schr\"{o}dinger equation and the second one is just the Poisson equation.
The variation $\delta\varphi$ and the sought potential $\varphi$\ have to satisfy such boundary conditions that ensure% \begin{equation} \oint d\mathbf{S}\left( \varphi\nabla\delta\varphi-\delta\varphi\nabla \varphi\right) =0\label{dS=0-condition}% \end{equation} at the surface confining the region $\mathbf{\Upsilon}_{1}$ only. It implies that the boundary conditions for the Poisson equation must be properly fixed during the iterative process. If a realization of the equality (\ref{dS=0-condition}) at the given boundary conditions turns out to be impossible, then the separate investigation of the two subsystems is incorrect. This derivation shows that the solution of the Poisson equation cannot be replaced by direct variations of Eqs. (\ref{E_es})-(\ref{u_es}) in the course of searching for the $E_{\mathrm{tot}1}$\ minimum for infinitely extended electronic systems with periodic boundary conditions or with boundary conditions assigned asymptotically at infinity. It is desirable to add some comment on the widely used ''mixing'' method of fighting the charge instability. The very essence of the method lies in the use of some linear combination of the results of previous iterative steps to make up the input for the next step. Such an approach is known as ''iteration with memory'' in iterative calculus, and it is intended to accelerate the convergence rate of the iterative process \cite{Traub82-Iterative}. But this procedure cannot transform a divergent iteration scheme into a convergent one. There are many reasons, including those mentioned in the present work, to believe that the charge instability observed in simulations of large electronic structures is rather a sign of divergence than of slow convergence of the simple iteration cycle of Poisson$\rightarrow$Schr\"{o}dinger$\rightarrow$Poisson steps.
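The essence of the scheme can be illustrated on a toy fixed-point problem (purely hypothetical; the map $F$ and the mixing parameter $\alpha$ stand in for the actual Poisson$\rightarrow$Schr\"{o}dinger$\rightarrow$Poisson cycle and its damping). Simple linear mixing feeds $(1-\alpha)x^{(i)}+\alpha F(x^{(i)})$ into the next step, and for an already convergent cycle it indeed accelerates the convergence:

```python
# A minimal sketch of simple (linear) mixing, i.e. 'iteration with memory':
# the next input is a linear combination of the previous input and output.
def F(x):
    return 1.0 - 0.9 * x          # toy map; fixed point x* = 1/1.9, slope -0.9

def iterate(alpha, x0=0.0, tol=1e-10, max_steps=10_000):
    x = x0
    for step in range(1, max_steps + 1):
        x_new = (1.0 - alpha) * x + alpha * F(x)   # mixing of input and output
        if abs(x_new - x) < tol:
            return x_new, step
        x = x_new
    return x, max_steps

x_plain, n_plain = iterate(alpha=1.0)   # plain iteration: slow, oscillating
x_mixed, n_mixed = iterate(alpha=0.5)   # mixed: same fixed point, far fewer steps
print(n_plain, n_mixed, round(x_plain, 8), round(x_mixed, 8))
```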
The continuous appearance of ever more cumbersome and sophisticated mixing methods during the last three decades is the best evidence for this point of view (see some historic commentary in \cite{Liebsch97}, \cite{KerkerPRB81}% -\cite{Kress-FurthPRB96}, \cite{Goed99scaling}). As a rule, the demonstration of the benefit of a new method is accompanied by illustrations of the lack of convergence of the old one as the structure size becomes larger. It is of interest that the mixing scheme based on a handmade model screening is relatively successful \cite{Goed99scaling}, although it was constructed on the grounds of formal mathematical reasoning rather than on the underlying physics discussed here. It should also be noted that mixing schemes may produce a spurious convergence (see \cite{Kress-FurthPRB96}, p. 11176). Thus it is necessary to check whether the norms of the functional derivatives $\left\| \delta E_{\mathrm{es}% 1}/\delta\varphi(\mathbf{r})\right\| $ and $\left\| \delta E_{\mathrm{tot}% 1}/\delta\Psi_{E1}^{\ast}(\mathbf{r})\right\| $, which are the residuals of the Poisson and Schr\"{o}dinger equations, are minimal along with\ the total energy $E_{\mathrm{tot}1}$. Of course, $E_{\mathrm{tot}1}$ is the total energy of a sufficiently large but finite system that can be approximated by the assignment of boundary conditions as in the infinitely extended one. \ack I am indebted to Prof. A. Liebsch, who has drawn my attention to the neutrality problem in the iterative solution of the self-consistent field equations, and to Dr. H. Mera for useful discussion of computational details pertinent to Ref.\cite{Mera-eaPRB05}. The partial financial support by the Russian Foundation for Basic Research and die Deutsche Forschungsgemeinschaft is acknowledged. \section*{References}
\section{Introduction} Let $Z$ be a finite set. Denote by $\mathcal{P}(Z)$ the set of probability measures (pm's) with support contained in~$Z$. Let $\mathcal{E}\subseteq\mathcal{P}(Z)$ be an exponential family supported on~$Z$. For $P, Q\in\mathcal{P}(Z)$ denote by $D(P\|Q)$ the \emph{information divergence} (also known as \emph{Kullback-Leibler divergence}), and let $D(P\|\mathcal{E}) \triangleq \inf_{Q\in\mathcal{E}}D(P\|Q)$. In 2002, Nihat Ay formulated the following optimization problem~\cite{Ay02:Pragmatic_structuring}: \begin{problem} \label{prob:main-problem-KL} Maximize $D(P\|\mathcal{E})$ over all probability distributions $P$ on~$Z$. \end{problem} The original motivation came from theoretical studies of the infomax principle. Insight into this problem can also be used to bound approximation errors of machine learning models or other statistical models~\cite{MontufarRauhAy11:Expressive_Power_and_Approximation_Errors_of_RBMs,MontufarRauhAy13:Maximal_KL_from_network_models}. Since 2002, progress has been made in different directions. The problem was attacked for particular classes of exponential families, with a particular focus on hierarchical models~\cite{MatusAy03:On_Maximization_of_the_Information_Divergence,Matus04:Maximization_from_binary_iid_seqs,AyKnauf06:Maximizing_Multiinformation,Matus09:Divergence_from_factorizable_distributions}. A full characterization of the first order optimality conditions was given in~\cite{Matus07:Optimality_conditions}. In 2010, the first author found a surprising connection to another optimization problem~\cite{Rauh11:Finding_Maximizers}: Let $A$ be the \emph{design matrix} (or \emph{sufficient statistics matrix}) of~$\mathcal{E}$, where the columns of $A$ are indexed by~$Z$. Any $u\in\ker A$ can be written uniquely as a difference $u = u^{+}-u^{-}$ of non-negative vectors $u^{ +},u^{-}$ of disjoint support. 
For $u\in\ker A\setminus\{0\}$ with $\sum_{x\in Z}u^{+}(x) = \sum_{x\in Z}u^{-}(x) = 1$ let \begin{equation*} \overline D(u) = H(u^{-}) - H(u^{+}) = \sum_{x\in Z}u(x)\log|u(x)|, \end{equation*} where $H$ denotes the (Shannon) entropy. The second optimization problem is: \begin{problem} \label{prob:bar-problem-KL} Maximize $\overline D(u)$ over the set of all $u\in\ker A$ that satisfy $\sum_{x\in Z}u^{+}(x) = \sum_{x\in Z}u^{-}(x) = 1$. \end{problem} The optimization problem~\ref{prob:bar-problem-KL} is easier than the optimization problem~\ref{prob:main-problem-KL}, since the function to be optimized in~\ref{prob:main-problem-KL} is itself defined by an optimization problem. Both authors showed in~\cite{MatusRauh11:Maximization-ISIT2011} that the map $u\mapsto u^{+}$ induces a one-to-one correspondence between the points that satisfy the respective critical equations of~\ref{prob:bar-problem-KL} and~\ref{prob:main-problem-KL}, and that this correspondence restricts to bijections of the sets of local optimizers and global optimizers, respectively. The authors found this connection quite surprising. To better understand this result, the second author suggested trying to generalize the result to the setting of Bregman divergences and Bregman families. The present paper summarizes the results of this investigation. The first step is the definition of a function $\overline B$ that serves as an analogue of $\overline D$ in the general case. Once this definition is in place, the equivalence of the global maximizers is rather straightforward (Theorem~\ref{thm:equivalence}). What makes the general Bregman case more difficult is that $\overline B$ is only defined implicitly as a solution of an optimization problem. Hence, the criticality conditions of $\overline B$ are currently unknown.
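The objective of Problem 2 is easy to evaluate directly. The following sketch (illustrative only; the design matrix $A$ below is hypothetical, and natural logarithms are used) decomposes a kernel vector $u$ into $u^{+}$ and $u^{-}$ and checks the identity $\overline D(u) = H(u^{-}) - H(u^{+}) = \sum_{x}u(x)\log|u(x)|$:

```python
import numpy as np

def dbar(u, tol=1e-12):
    """Dbar(u) = sum_x u(x) * log|u(x)|, skipping the zero entries."""
    u = np.asarray(u, dtype=float)
    nz = np.abs(u) > tol
    return float(np.sum(u[nz] * np.log(np.abs(u[nz]))))

def entropy(p, tol=1e-12):
    """Shannon entropy (natural logarithm), skipping the zero entries."""
    p = np.asarray(p, dtype=float)
    nz = p > tol
    return float(-np.sum(p[nz] * np.log(p[nz])))

A = np.array([[1.0, 1.0, 1.0]])          # hypothetical design matrix
u = np.array([1.0, -0.3, -0.7])          # u in ker A, sum(u+) = sum(u-) = 1
assert np.allclose(A @ u, 0.0)

u_plus, u_minus = np.maximum(u, 0.0), np.maximum(-u, 0.0)  # disjoint supports
print(dbar(u), entropy(u_minus) - entropy(u_plus))          # the two agree
```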
If the optimization problem underlying $\overline B$ always has a unique solution (Conjecture~\ref{con:uniqueness-codim-one}), then the bijection of the local maximizers also generalizes (Theorem~\ref{thm:local-maxi}). Section~\ref{sec:Bregman-setting} recalls definitions and basic properties of Bregman divergences and introduces Bregman families. Section~\ref{sec:maxim-bregm-diverg} discusses the problem of maximizing the Bregman divergence from a Bregman family. Section~\ref{sec:Bbar} introduces the function $\overline B$ that corresponds to the function~$\overline D$. Section~\ref{sec:equivalence} contains the main results that relate the problems of maximizing the Bregman divergence and $\overline B$, respectively. Section~\ref{sec:classical} compares the results to the results of~\cite{MatusRauh11:Maximization-ISIT2011} that concern the classical case of exponential families and the information divergence. \section{Preliminaries: Bregman divergences and Bregman families} \label{sec:Bregman-setting} This section summarizes the relevant results about Bregman divergences and Bregman families. The end of the section contains in Example~\ref{ex:classical-case} the special case of information divergence and exponential families. For more details and generalizations to the case where $Z$ is not finite see~\cite{MatusCsiszar12:Bregman_Pythagorean_identities}. It is well known that one can associate to each exponential family a Bregman divergence by expressing the information divergence within the exponential family in terms of the exponential family's natural parameters. However, this construction is not used in this paper. Instead, starting from a particular Bregman divergence, a family of distributions is defined, called a \emph{Bregman family}. These Bregman families generalize exponential families. Consider a finite set $Z$.
For each $z\in Z$ let $\beta_{z}:(0,+\infty)\to\mathbb{R}$ be a convex differentiable function with $\lim_{x\to 0+}\beta_z'(x) = -\infty$ and $\lim_{x\to+\infty}\beta_z'(x) = +\infty$, where $\beta'_{z}(x)$ denotes the derivative of $\beta_{z}(x)$ with respect to~$x$. Then the convex conjugate (see~\cite{Rockafellar70:Convex_Analysis}) \begin{equation*} \beta_z^{*}(t) = \sup_{x}\big\{tx - \beta_{z}(x)\big\} \end{equation*} is differentiable and ranges between $-\lim_{x\to 0+}\beta_z(x)$ and $+\infty$. The derivative $e_{z}(x) \triangleq \beta_z^{*\prime}(x)$ is continuous and strictly increases from 0 to $+\infty$. Therefore, the inverse function $l_{z}(y) \triangleq e_{z}^{-1}(y)$ exists for $0<y<+\infty$, is continuous and strictly increases from $-\infty$ to~$+\infty$. The inverse function satisfies $l_{z}(y)=\beta'_{z}(y)$. The following lemma is a standard result in convex analysis (see~\cite{Rockafellar70:Convex_Analysis} or Lemma~2.2 in~\cite{MatusCsiszar12:Bregman_Pythagorean_identities}): \begin{lemma} \label{lem:2.2} $\beta_{z}(e_{z}(r)) = r e_{z}(r) - \beta^{*}_{z}(r)$ for all $r < \beta'_{z}(+\infty)$. \end{lemma} Consider a function~$f:Z\to\mathbb{R}^{d}$. For $\theta\in\mathbb{R}^{d}$ define a pm $P_{\theta}:z\mapsto e_{z}(\<\theta,f(z)\> - \Lambda(\theta))$, where $\Lambda(\theta)$ is the unique solution of $\sum_{z\in Z}e_{z}(\<\theta,f(z)\> - r)=1$ in~$r$. The subset \begin{equation*} \mathcal{E} = \mathcal{E}_{f} \triangleq \{ P_{\theta} : \theta\in\mathbb{R}^{d} \} \end{equation*} of $\mathcal{P}(Z)$ will be called a \emph{Bregman family} in the following.\footnote{The second author had originally given the name \emph{generalized exponential family} to~$\mathcal{E}$, which is also used by other authors.
However, since that name is not very specific and since there are many different ways in which exponential families can be generalized, this paper now uses the name \emph{Bregman family}.} The matrix $A$ with columns $f(z)$ for $z\in Z$ (after fixing an ordering of~$Z$) is called the \emph{design matrix} of~$\mathcal{E}$. The set $\cs(\mathcal{E})\triangleq \conv\{f(z) : z\in Z\}$ is called the \emph{convex support} of~$\mathcal{E}$. The convex support is a (convex) polytope. A set $S\subseteq Z$ is called \emph{facial} for $\mathcal{E}$ if and only if $\conv\{f(z) : z\in S\}$ is a face of $\cs(\mathcal{E})$. The \emph{Bregman divergence} of $u,v: Z\to [0,+\infty)$ is \begin{equation*} B(u,v) = \sum_{z\in Z}\big[ \beta_{z}(u(z)) - \beta_{z}(v(z)) - \beta'_{z}(v(z))[u(z)-v(z)] \big]. \end{equation*} The Bregman divergence of $P\in\mathcal{P}(Z)$ from a Bregman family $\mathcal{E}$ is \begin{equation*} B(P,\mathcal{E}) \triangleq \inf_{Q\in\mathcal{E}} B(P,Q). \end{equation*} When the minimizer in the definition of $B(P,\mathcal{E})$ does not exist, one can find a minimizer in the closure $\overline\mathcal{E}$ of~$\mathcal{E}$, where the closure can be taken with respect to the canonical topology on the finite dimensional convex polytope $\mathcal{P}(Z)$. Just as in the classical case of an exponential family, one can prove the following statements: \begin{proposition} \label{prop:projection-E} Let $\mathcal{E}$ be a Bregman family. \begin{enumerate} \item For any $P\in\mathcal{P}(Z)$ there exists a unique pm $\Pi_{\mathcal{E},P}\in\overline\mathcal{E}$ with \begin{equation*} B(P,\Pi_{\mathcal{E},P}) = B(P,\mathcal{E}). \end{equation*} \item Let $P\in\mathcal{P}(Z)$ and $Q\in\overline\mathcal{E}$. If $\mathbb{E}_{P}[f] = \mathbb{E}_{Q}[f]$, then $Q = \Pi_{\mathcal{E},P}$. \item Let $P\in\mathcal{P}(Z)$. 
The unique global minimum of $H(Q)\triangleq \sum_{z\in Z}\beta_{z}(Q(z))$ for pm's $Q\in\mathcal{P}(Z)$ with $\mathbb{E}_{P}[f] = \mathbb{E}_{Q}[f]$ is given by $Q = \Pi_{\mathcal{E},P}$. \item The support $\supp(\Pi_{\mathcal{E},P})$ is the smallest facial set containing $\supp(P)$. \end{enumerate} The pm $\Pi_{\mathcal{E},P}$ is called the \emph{generalized reverse Bregman projection} ($rB$-projection) of $P$ to~$\mathcal{E}$. Here, ``generalized'' may be dropped whenever $\Pi_{\mathcal{E},P}\in\mathcal{E}$. If the Bregman family $\mathcal{E}$ is clear from the context, $\Pi_{\mathcal{E},P}$ is abbreviated by~$\Pi_{P}$. \end{proposition} \begin{proposition} \label{prop:closure-E} Let $\mathcal{E}$ be a Bregman family. \begin{enumerate} \item The map $\mu:P\in\mathcal{P}(Z)\mapsto\mathbb{E}_{P}[f]$ surjects onto $\cs(\mathcal{E})$. It restricts to a homeomorphism $\overline\mathcal{E}\cong\cs(\mathcal{E})$. \item $\overline\mathcal{E} = \bigcup_{F} \mathcal{E}_{F}$, where $F$ runs over all sets $F\subseteq Z$ that are facial with respect to~$\mathcal{E}$ and where $\mathcal{E}_{F}$ is the Bregman family defined on~$F$ using $f|_{F}$. \end{enumerate} \end{proposition} For exponential families, the statements in Propositions~\ref{prop:projection-E} and~\ref{prop:closure-E} are well-known and go back at least to~\cite{Barndorff78:Information_and_Exponential_Families}. The statements continue to hold for exponential families when $Z$ is replaced by a more general measure space, as studied in~\cite{CsiszarMatus05:Closures_of_exp_fam,CsiszarMatus08:GMLE_for_Exp_Fam}. The extended arXiv version of~\cite{WangRauhMassam19:Approximating_faces_w_arxiv} contains a direct proof of the discrete case, which relies on algebraic insights from~\cite{GeigerMeekSturmfels06:Toric_Algebra_Graphical_Models}.
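As a quick numerical sanity check (illustrative only; the distributions below are random and hypothetical), the Bregman divergence $B(u,v)=\sum_{z}[\beta_{z}(u)-\beta_{z}(v)-\beta'_{z}(v)(u-v)]$ for the classical choice $\beta_{z}(x)=x\log x - x$ (reference measure $\nu\equiv 1$) reduces to the information divergence:

```python
import numpy as np

def bregman(u, v, beta, dbeta):
    """B(u, v) = sum_z [beta(u) - beta(v) - beta'(v) * (u - v)]."""
    return float(np.sum(beta(u) - beta(v) - dbeta(v) * (u - v)))

beta = lambda x: x * np.log(x) - x     # classical case, nu = 1
dbeta = lambda x: np.log(x)            # beta'(x) = log(x)

rng = np.random.default_rng(0)
P = rng.random(5); P /= P.sum()        # random strictly positive pm's
Q = rng.random(5); Q /= Q.sum()

b_val = bregman(P, Q, beta, dbeta)
kl = float(np.sum(P * np.log(P / Q)))  # information (KL) divergence
print(round(b_val, 10), round(kl, 10)) # the two values coincide
```

The extra terms $-P+Q$ cancel because both arguments are normalized, which is exactly why the reduction holds only on the probability simplex.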
\smallskip For a distribution of the form~$Q:z\mapsto e_{z}(r_{z})$, with $r_{z}\in\mathbb{R}$, by Lemma~\ref{lem:2.2}, \begin{multline*} B(P,Q) = \sum_{z\in Z}\big[ \beta_{z}(P(z)) - \beta_{z}(e_{z}(r_{z})) - r_{z}[P(z)-Q(z)] \big] \\ = \sum_{z\in Z}\big[ \beta_{z}(P(z)) - r_{z} e_{z}(r_{z}) + \beta^{*}_{z}(r_{z}) - r_{z}[P(z)-Q(z)] \big] \\ = \sum_{z\in Z}\big[ \beta_{z}(P(z)) + \beta^{*}_{z}(r_{z}) - r_{z} P(z) \big]. \end{multline*} When $Q\in\mathcal{E}$, then $r_{z}$ is of the form $\<\theta,f(z)\> - \Lambda(\theta)$. Thus, \begin{multline} \label{eq:B-Phi} B(P,\mathcal{E}) = \sum_{z}\beta_{z}(P(z)) - \sup_{\theta}\Big[ \<\theta, \sum_{z}f(z) P(z)\> - \Lambda(\theta) - \sum_{z}\beta_{z}^{*}(\<\theta,f(z)\> - \Lambda(\theta)) \Big] \\ = \sum_{z}\beta_{z}(P(z)) - \sup_{\theta}\Big[ \<\theta, \mu(P)\> - \Upsilon(\theta) \Big], \end{multline} where $\Upsilon(\theta) = \Lambda(\theta) + \sum_{z}\beta_{z}^{*}(\<\theta,f(z)\> - \Lambda(\theta))$. \begin{theorem} $\Upsilon$ is convex. Its partial derivatives are \begin{equation*} \frac{\partial}{\partial\theta_{i}} \Upsilon(\theta) = \mathbb{E}_{\theta}[f_{i}] = \mu(P_{\theta})_{i}, \end{equation*} where $\mathbb{E}_{\theta}$ denotes the expected value taken with respect to~$P_{\theta}$. The map $\nabla\Upsilon:\mathbb{R}^{d}\to\cs(\mathcal{E})$ is surjective. The Hessian of $\Upsilon$ is positive definite.
\end{theorem} \begin{Proof} \begin{align*} \frac{\partial}{\partial\theta_{i}} \Upsilon(\theta) &= \partial_{i} \Lambda(\theta) + \sum_{z} \beta^{*\prime}_{z}(\<\theta,f(z)\> - \Lambda(\theta)) [f_{i}(z) - \partial_{i} \Lambda(\theta)] \\ &= \sum_{z} \beta^{*\prime}_{z}(\<\theta,f(z)\> - \Lambda(\theta)) f_{i}(z), \\ \frac{\partial^{2}}{\partial\theta_{i}\partial\theta_{j}} \Upsilon(\theta) &= \sum_{z} \beta^{*\prime\prime}_{z}(\<\theta,f(z)\> - \Lambda(\theta)) f_{i}(z) [f_{j}(z) - \partial_{j}\Lambda(\theta)] \\ & = \sum_{z} \beta^{*\prime\prime}_{z}(\<\theta,f(z)\> - \Lambda(\theta)) [f_{i}(z) - \partial_{i}\Lambda(\theta)][f_{j}(z) - \partial_{j}\Lambda(\theta)] \succeq 0, \end{align*} where the last equality follows from differentiating the defining equation $\sum_{z}\beta^{*\prime}_{z}(\<\theta,f(z)\> - \Lambda(\theta))=1$ of~$\Lambda(\theta)$: \begin{equation*} 0 = \frac{\partial}{\partial\theta_{j}} \sum_{z}\beta^{*\prime}_{z}(\<\theta,f(z)\> - \Lambda(\theta)) = \sum_{z}\beta^{*\prime\prime}_{z}(\<\theta,f(z)\> - \Lambda(\theta)) [f_{j}(z) - \frac{\partial}{\partial\theta_{j}} \Lambda(\theta)]. \end{equation*} This shows convexity. It is clear that $\mathbb{E}_{\theta}[f]=\mu(P_{\theta})$ belongs to $\conv\{f(z) : z\in Z\}$. Surjectivity follows from Proposition~\ref{prop:closure-E}. \end{Proof} It follows from the properties of convex conjugation: \begin{corollary} \label{cor:Nabla-Upsilon} The maps $\theta\mapsto\nabla\Upsilon(\theta)$ and $\mu\mapsto\nabla\Upsilon^{*}(\mu)$ are mutual inverses in the relative interiors of their respective domains. If $\Pi_{P}\in\relint(\mathcal{E})$, then $\Pi_{P}=P_{\theta}$ for $\theta=\nabla\Upsilon^{*}(\mu(P))$. \end{corollary} Let $H(P) = \sum_{z\in Z}\beta_{z}(P(z))$ as in Proposition~\ref{prop:projection-E}. Then~\eqref{eq:B-Phi} rewrites to \begin{equation*} B(P,\mathcal{E}) = H(P) - \Upsilon^{*}(\mu(P)), \end{equation*} where $\Upsilon^{*}$ denotes the convex conjugate of~$\Upsilon$.
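The construction of $P_{\theta}$ is computable: $\Lambda(\theta)$ is the unique root $r$ of $\sum_{z}e_{z}(\<\theta,f(z)\>-r)=1$, and since the left-hand side is strictly decreasing in $r$, bisection suffices. The sketch below (the function $f$ and parameter $\theta$ are hypothetical) uses the classical case $e_{z}(x)=\exp(x)$, i.e. $\nu\equiv 1$, where $\Lambda$ is the familiar log-partition function, which gives an independent check:

```python
import numpy as np

def Lambda(theta, f, e, lo=-50.0, hi=50.0, iters=200):
    """Root r of sum_z e(<theta, f(z)> - r) = 1, by bisection
    (the left-hand side is strictly decreasing in r)."""
    g = lambda r: sum(e(np.dot(theta, fz) - r) for fz in f) - 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

# hypothetical sufficient statistics of two binary variables, and a test theta
f = [np.array([0.0, 0.0]), np.array([1.0, 0.0]),
     np.array([0.0, 1.0]), np.array([1.0, 1.0])]
theta = np.array([0.3, -1.2])

lam = Lambda(theta, f, e=np.exp)
lam_exact = np.log(sum(np.exp(np.dot(theta, fz)) for fz in f))  # log-sum-exp
P_theta = np.array([np.exp(np.dot(theta, fz) - lam) for fz in f])
print(round(lam, 10), round(lam_exact, 10), round(P_theta.sum(), 10))
```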
From this equality follows the next result, which can also be seen as a kind of Pythagorean identity: \begin{corollary} \label{cor:triangleH} $B(P,\mathcal{E}) = H(P) - H(\Pi_{P})$ for all~$P\in\mathcal{P}(Z)$. \end{corollary} \begin{Proof} $B(P,\mathcal{E}) = B(P,\mathcal{E}) - B(\Pi_{P},\mathcal{E}) = H(P) - H(\Pi_{P})$, since $\mu(P) = \mu(\Pi_{P})$. \end{Proof} \begin{example} \label{ex:classical-case} Let $\beta_{z}(x)= x\ln(x/\nu(z)) - x$ for all~$z\in Z$. Then $\beta^{*}_{z}(x) = \nu(z)\exp(x)$ and $l_{z}(x) = \beta'_{z}(x) = \ln(x/\nu(z))$, and so $e_{z}(x) = \nu(z)\exp(x)$. In this case, $\mathcal{E}$ is an exponential family with reference measure~$\nu$, and $B$ equals the information divergence. Since $\beta^{*}_{z}(x)=e_{z}(x)$, it follows that $\sum_{z}\beta_{z}^{*}(\<\theta,f(z)\> - \Lambda(\theta)) = 1$. Therefore, $\Upsilon(\theta) = 1 + \Lambda(\theta)$. In the classical case, $\Lambda$ is called the \emph{partition function}, and convexity of $\Lambda$ is well-known and widely used. In the general case, $\Lambda$ itself need not be convex. \end{example} \section{Maximizing the Bregman divergence from a Bregman family} \label{sec:maxim-bregm-diverg} Let $\mathcal{E}$ be a Bregman family. The following problem generalizes Problem~\ref{prob:main-problem-KL}: \begin{problem} \label{prob:B} Maximize $B(P,\mathcal{E})$ over $P\in\mathcal{P}(Z)$. \end{problem} \begin{theorem} \label{thm:projection-property} If $P\in\mathcal{P}(Z)$ is a local maximizer of $B(\cdot,\mathcal{E})$, then the map $z\mapsto l_{z}(P(z)) - l_{z}(\Pi_{P}(z))$ is constant for $z\in\supp(P)$. \end{theorem} \begin{Proof} If $\mu(P) = \sum_{z}f(z)P(z)$ does not lie in the relative interior of $\cs(\mathcal{E})$, by Proposition~\ref{prop:closure-E}, one may replace $\mathcal{E}$ by $\mathcal{E}_{F}$ for some suitable~$F\subsetneq Z$. Thus, without loss of generality, assume that $\mu(P)$ lies in the relative interior of $\cs(\mathcal{E})$.
Let $w\in\mathbb{R}^{Z}$ with $\sum_{z}w(z) = 0$ and $\supp(w)\subseteq\supp(P)$. For $\epsilon>0$ small, \begin{multline*} B(P+\epsilon w,\mathcal{E}) \approx H(P) + \epsilon \sum_{z}\beta_{z}'(P(z)) w(z) \\ - \Upsilon^{*}\big(\mu(P)\big) - \epsilon \Big\< \nabla\Upsilon^{*}\big(\mu(P)\big), \sum_{z}f(z) w(z) \Big\> \end{multline*} to first order in~$\epsilon$. Let $\theta=\nabla\Upsilon^{*}\big(\mu(P)\big)$. Then $\Pi_P = P_\theta$ by Corollary~\ref{cor:Nabla-Upsilon}, and \begin{equation*} \< \theta, \sum_{z}f(z)w(z) \> = \sum_{z} [\<\theta, f(z)\> - \Lambda(\theta) ] w(z) = \sum_{z} \beta'_{z}(\Pi_{P}(z))w(z), \end{equation*} since $\beta'$ and $\beta^{*\prime}$ are mutual inverses of each other. In total, \begin{equation*} B(P+\epsilon w,\mathcal{E}) \approx H(P) - \Upsilon^{*}\big(\mu(P)\big) + \epsilon \sum_{z}\big[\beta_{z}'(P(z)) - \beta'_{z}(\Pi_{P}(z))\big] w(z), \end{equation*} whence $\sum_{z}\big[\beta_{z}'(P(z)) - \beta'_{z}(\Pi_{P}(z))\big] w(z) = 0$ if $P$ is a critical point. This equality holds for all~$w\in\mathbb{R}^{Z}$ with $\sum_{z}w(z) = 0$ and $\supp(w)\subseteq\supp(P)$. Therefore, $\beta_{z}'(P(z)) - \beta'_{z}(\Pi_{P}(z))$ is constant for $z\in\supp(P)$. \end{Proof} \begin{corollary} Let $P\in\mathcal{P}(Z)$ be a local maximizer of $B(\cdot,\mathcal{E})$, and let $u=P-\Pi_{P}$. Then $\supp(u^{+})=\supp(P)$. If $\beta_{x}=\beta_{y}$ for $x,y\in\supp(P)$, then $u^{+}(x)\ge u^{+}(y)$ if and only if $P(x)\ge P(y)$. \end{corollary} \begin{Proof} By Theorem~\ref{thm:projection-property}, there exists a constant $c$ such that $l_{z}(P(z)) - l_{z}(\Pi_{P}(z))=c$ for $z\in \supp(P)$. The number $c$ equals the unique solution of the equation \begin{equation*} \sum_{z\in\supp(P)}e_{z}(l_{z}(\Pi_{P}(z)) + c) = \sum_{z\in\supp(P)}P(z) = 1. \end{equation*} Since all functions $e_{z}$ are increasing, $c>0$. Thus, if $z\in\supp(P)$, then $l_{z}(P(z)) > l_{z}(\Pi_{P}(z))$, and so $P(z)>\Pi_{P}(z)$. This implies $\supp(P)\subseteq\supp(u^{+})$.
On the other hand, if $z\notin\supp(P)$, then $\Pi_{P}(z)\ge P(z)$, and so $u(z)\le 0$, which implies $\supp(P)\supseteq\supp(u^{+})$. \end{Proof} As in the classical case, one shows~\cite{MatusAy03:On_Maximization_of_the_Information_Divergence}: \begin{proposition} Any $P\in\mathcal{P}(Z)$ that globally maximizes $B(\cdot,\mathcal{E})$ satisfies $|\supp(P)|\le\dim(\mathcal{E}) + 1$. \end{proposition} \section{The function \texorpdfstring{$\overline B$}{Bbar} and the alternative optimization problem} \label{sec:Bbar} For each real vector-valued function $f:Z\to\mathbb{R}^{d}$ let \begin{equation*} \mathcal{N} = \mathcal{N}(f) = \Big\{u\in\mathbb{R}^{Z} : \sum_{z\in Z}f(z)u(z)=0\text{ and }\sum_{z\in Z}u(z)=0\Big\}. \end{equation*} If $A$ is a design matrix, then $\mathcal{N}=\big\{u\in\ker A:\sum_{z\in Z}u(z)=0\big\}$. Let $u:Z\to\mathbb{R}$ be a real function satisfying $\sum_{x\in Z}u(x)=0$. To each such $u$ associate a function $f_{u}$ such that $\mathcal{N}(f_{u}) = \mathbb{R} u$, and let $\mathcal{F}_{u}=\mathcal{E}_{f_{u}}$. Then $\mathcal{F}_{u}$ has codimension one. By Proposition~\ref{prop:projection-E}, the difference $P-\Pi_{\mathcal{F}_{u},P}$ lies in~$\mathbb{R} u$. \begin{lemma} \label{lem:P-u} Let $P\in\mathcal{P}(Z)$, and let $u=P-\Pi_{P}$. Then $\Pi_{\mathcal{F}_{u},P}=\Pi_{P}$. \end{lemma} \begin{Proof} From $\mathcal{E}\subseteq\mathcal{F}_{u}$ follows $\Pi_{P}\in\mathcal{F}_{u}$. Together with $P-\Pi_{P}=u$, the statement follows from Proposition~\ref{prop:projection-E}. \end{Proof} \bigskip $\mathcal{P}(Z)$ can be partitioned into $\mathcal{P}_{u}^{+}\cup\overline\mathcal{F}_{u}\cup\mathcal{P}_{u}^{-}$, where \begin{equation*} \mathcal{P}_{u}^{+}=\big\{ P\in\mathcal{P}(Z) : \<P-\Pi_{\mathcal{F}_{u},P},u\> > 0 \big\}, \quad \mathcal{P}_{u}^{-}=\big\{ P\in\mathcal{P}(Z) : \<P-\Pi_{\mathcal{F}_{u},P},u\> < 0 \big\}. 
\end{equation*} The definition and Lemma~\ref{lem:P-u} imply: \begin{lemma} $P \in \mathcal{P}_{P-\Pi_{P}}^{+}$ for any $P\in\mathcal{P}(Z)\setminus\overline\mathcal{E}$. \end{lemma} In the classical case, the maximizer of the information divergence from an arbitrary exponential family $\mathcal{E}$ need not be unique~\cite{MatusAy03:On_Maximization_of_the_Information_Divergence}. However, when $\mathcal{E}=\mathcal{F}_{u}$ has codimension one, there are precisely two local maximizers $u^{+}$ and $u^{-}$, one on each side of~$\mathcal{E}$~\cite[Section~VI]{Rauh11:Thesis}. This motivates the following conjecture: \begin{conjecture} \label{con:uniqueness-codim-one} The map $P\in\mathcal{P}_{u}^{+}\mapsto B(P,\mathcal{F}_{u})$ has a unique local (and global) maximizer. \end{conjecture} The proof of the conjecture in the classical case relies on applying properties of the logarithm to the criticality conditions in Theorem~\ref{thm:projection-property}. This argument does not carry over to the general case of the conjecture. For any function $u:Z\to\mathbb{R}$ that satisfies $\sum_{z\in Z}u(z) = 0$ let \begin{equation*} \overline B(u) \triangleq \max \big\{ B(P,\mathcal{F}_{u}) : P\in\overline\mathcal{P}_{u}^{+} \big\}, \end{equation*} where $\overline\mathcal{P}_{u}^{+}=\mathcal{P}_{u}^{+}\cup\overline\mathcal{F}_{u}$ denotes the closure of~$\mathcal{P}_{u}^{+}$. The map $\overline B$ is well-defined and continuous, since $\overline\mathcal{P}_{u}^{+}$ is compact. If $u\neq 0$, then the maximum is attained in $\mathcal{P}_{u}^{+}$, and $\overline B(u) > 0$. The function $\overline B$ satisfies $\overline B(\lambda u) = \overline B(u)$ for all~$\lambda>0$. \begin{problem} \label{prob:Bbar} Maximize the function $u\in\mathcal{N}\setminus\{0\}\mapsto\overline B(u)$. 
\end{problem} The intuition behind the definition of $\overline B$ and Problem~\ref{prob:Bbar} is the following: instead of directly searching for a maximizer $P$ of~$B(\cdot,\mathcal{E})$, one may try to determine the vector $u=P-\Pi_{P}$, which can be seen as a direction within the probability simplex. Thus, the task is to find a direction in which it is possible to achieve large values of~$B(\cdot,\mathcal{E})$. When analyzing the direction~$u$, Lemma~\ref{lem:P-u} says that one may just as well replace $\mathcal{E}$ by~$\mathcal{F}_{u}$. \section{Equivalence of the maximizers} \label{sec:equivalence} The following theorem specifies the relation between problems~\ref{prob:B} and~\ref{prob:Bbar}. It corresponds to \cite[Theorem~3]{Rauh11:Finding_Maximizers}. \begin{theorem} \label{thm:equivalence} \begin{enumerate} \item $\max_{P\in\mathcal{P}(Z)} B(P,\mathcal{E}) = \max_{u\in\mathcal{N}\setminus\{0\}}\overline B(u)$. \item If $P$ is a global maximizer of problem~\ref{prob:B}, then $P-\Pi_{P}$ is a global maximizer of problem~\ref{prob:Bbar}, and $B(P,\mathcal{E})=\overline B(P-\Pi_{P})$. \item If $u$ is a global maximizer of problem~\ref{prob:Bbar} and if $\overline B(u) = B(P,\mathcal{F}_{u})$, then $P$ is a global maximizer of problem~\ref{prob:B}, and $\overline B(u) = B(P,\mathcal{E})$. \end{enumerate} \end{theorem} The proof of Theorem~\ref{thm:equivalence} is based on the following auxiliary theorem, which corresponds to \cite[Theorem~2]{MatusRauh11:Maximization-ISIT2011}. \begin{theorem} \label{thm:Bbar_B_ineq} $\overline B(P - \Pi_{P}) \ge B(P,\mathcal{E})$ for any $P\in\mathcal{P}(Z)\setminus\overline\mathcal{E}$. If $u\in\mathcal{N}\setminus\{0\}$ and $P\in\mathcal{P}(Z)$ satisfy $\overline B(u) = B(P,\mathcal{F}_{u})$, then $B(P,\mathcal{E}) \ge \overline B(u)$, with equality if and only if $P - \Pi_{P}=\lambda u$ for some~$\lambda>0$. 
\end{theorem} \begin{Proof} The first statement follows from Lemma~\ref{lem:P-u}, as $\overline B(P - \Pi_{P}) \ge B(P,\mathcal{F}_{P-\Pi_{P}}) = B(P,\mathcal{E})$. For the second statement observe that from $\mathcal{E}\subseteq\mathcal{F}_{u}$ it follows that $B(P,\mathcal{E})\ge B(P,\mathcal{F}_{u}) = \overline B(u)$. \end{Proof} Theorem~\ref{thm:equivalence} follows directly from Theorem~\ref{thm:Bbar_B_ineq}. In~\cite[Theorem~1]{MatusRauh11:Maximization-ISIT2011} it was shown that, in the classical case, the points that satisfy the respective critical equations (i.e.\ the equality conditions among the first order conditions) of the two problems~\ref{prob:B} and~\ref{prob:Bbar}, as well as the local maximizers of the two problems, are in one-to-one correspondence. Discussing the criticality conditions is difficult, as no explicit formula for $\overline B$ is known, and if Conjecture~\ref{con:uniqueness-codim-one} is wrong, it is unlikely that $\overline B$ is differentiable. If the conjecture is true, one can at least prove that the local maximizers of the two problems are related, as Theorem~\ref{thm:local-maxi} below will show. Assume that Conjecture~\ref{con:uniqueness-codim-one} is true, and let $\Phi(u)\triangleq\arg\max_{Q\in\mathcal{P}_{u}^{+}}B(Q,\mathcal{F}_{u})$ for $u\in\mathcal{N}\setminus\{0\}$. By assumption, $\Phi$ is well-defined and continuous. The map $\Psi:\mathcal{P}(Z)\to\mathcal{N},P\mapsto P-\Pi_{P}$ is also continuous. With these two maps, Theorem~\ref{thm:Bbar_B_ineq} can be reformulated as follows: \begin{corollary} \label{cor:Bbar_B_ineq} If Conjecture~\ref{con:uniqueness-codim-one} is true, then: \begin{enumerate} \item $\overline B(\Psi(P)) \ge B(P,\mathcal{E})$ for all~$P\in\mathcal{P}(Z)\setminus\overline\mathcal{E}$, with equality if and only if $P = \Phi(\Psi(P))$. \item $B(\Phi(u),\mathcal{E}) \ge \overline B(u)$ for all~$u\in\mathcal{N}\setminus\{0\}$, with equality if and only if $\Psi(\Phi(u)) = \lambda u$ for some $\lambda > 0$. 
\end{enumerate} \end{corollary} \begin{lemma} \label{lem:loc_max_E_F} \begin{enumerate} \item If $u\in\mathcal{N}\setminus\{0\}$ is a local maximizer of~$\overline B$, then $\Pi_{\Phi(u)}= \Pi_{\mathcal{F}_{u},\Phi(u)}$. Thus, if Conjecture~\ref{con:uniqueness-codim-one} is true, then $\Psi(\Phi(u)) = \lambda u$ for some $\lambda > 0$. \item If $P\in\mathcal{P}(Z)$ is a local maximizer of~$B(\cdot,\mathcal{E})$, then $P$ is a local maximizer of $B(\cdot,\mathcal{F}_{\Psi(P)})$. Thus, if Conjecture~\ref{con:uniqueness-codim-one} is true, then $\Phi(\Psi(P)) = P$. \end{enumerate} \end{lemma} \begin{Proof} For the first statement, let $P=\Phi(u)$. Suppose that $\Pi_{P}\neq\Pi_{\mathcal{F}_{u},P}$, and let $Q$ be a probability measure in the convex hull of $\Pi_{P}$ and $\Pi_{\mathcal{F}_{u},P}$. Since $H$ is strictly convex and by Proposition~\ref{prop:projection-E}, $H(\Pi_{P}) < H(Q) < H(\Pi_{\mathcal{F}_{u},P})$. Corollary~\ref{cor:Bbar_B_ineq} implies $\overline B(P - Q) \ge B(P,\mathcal{F}_{P-Q}) \ge H(P) - H(Q) > H(P) - H(\Pi_{\mathcal{F}_{u},P}) = \overline B(u)$. This contradicts the assumption that $u$ is a local maximizer. Hence, $\Pi_{\mathcal{F}_{u},P}=\Pi_{P}$, and $\Psi(\Phi(u))=P-\Pi_{P}$ is a positive multiple of~$u$. For any $Q\in\mathcal{P}(Z)$, if $B(Q,\mathcal{E})\le B(P,\mathcal{E})$, then $B(Q,\mathcal{F}_{P-\Pi_{P}})\le B(Q,\mathcal{E}) \le B(P,\mathcal{E}) = B(P,\mathcal{F}_{P-\Pi_{P}})$, where the last equality uses Lemma~\ref{lem:P-u}. This proves the second statement. \end{Proof} \begin{theorem} \label{thm:local-maxi} Assume that Conjecture~\ref{con:uniqueness-codim-one} is true. If $P$ is a local maximizer of $B(\cdot,\mathcal{E})$, then $\Psi(P)$ is a local maximizer of~$\overline B$. If $u$ is a local maximizer of $\overline B$, then $\Phi(u)$ is a local maximizer of~$B(\cdot,\mathcal{E})$. \end{theorem} \begin{Proof} Let $P$ be a local maximizer of $B(\cdot,\mathcal{E})$. Let $U$ be a neighbourhood of $P$ in $\mathcal{P}(Z)$ such that $B(Q,\mathcal{E})\le B(P,\mathcal{E})$ for all $Q\in U$. 
Then $U':=\Phi^{-1}(U)$ is a neighbourhood of~$\Psi(P)$ by Lemma~\ref{lem:loc_max_E_F}. For all $v\in U'$, Corollary~\ref{cor:Bbar_B_ineq} implies \begin{equation*} \overline B(\Psi(P)) \ge B(P,\mathcal{E}) \ge B(\Phi(v),\mathcal{E}) \ge \overline B(v). \end{equation*} This proves the first statement. Let $u\in\mathcal{N}\setminus\{0\}$ be a local maximizer of~$\overline B$. Let $U'$ be a neighbourhood of $u$ in~$\mathcal{N}\setminus\{0\}$ with $\overline B(u)\ge\overline B(v)$ for all~$v\in U'$. Then $U:=\Psi^{-1}(U')$ is a neighbourhood of~$\Phi(u)$ by Lemma~\ref{lem:loc_max_E_F}. If $Q\in U$, then Corollary~\ref{cor:Bbar_B_ineq} implies \begin{equation*} B(\Phi(u),\mathcal{E}) \ge \overline B(u) \ge \overline B(\Psi(Q)) \ge B(Q,\mathcal{E}). \end{equation*} This proves the second statement. \end{Proof} \section{Comparison to the classical case} \label{sec:classical} In the classical case, where $\beta_{z}(t)=t \ln(t/\nu(z))$, $B$ becomes the information (or Kullback-Leibler) divergence and $\mathcal{E}$ is an exponential family with reference measure~$\nu$. In this case the function $\overline B$, which, in the general case, is defined by means of an optimization problem, has an explicit analytic expression: \begin{equation*} \overline B(u) = \ln\bigg(1 + \exp\Big(\sum_{z\in Z}\frac{u(z)}{\|u\|_{1}}\ln|u(z)|\Big)\bigg) = \ln\big(1 + \exp(\overline D(u))\big). \end{equation*} Thus, while an optimization problem has to be solved to evaluate the function $B(\cdot,\mathcal{E})$ at some $P\in\mathcal{P}(Z)$, the function $\overline B$ can be evaluated more easily. In the general case this is not true anymore. However, the computational complexity of the optimization problem~\ref{prob:Bbar} is still different from the complexity of the problem~\ref{prob:B}. 
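As an illustration, the closed-form expression above can be evaluated directly. The following sketch (function names are ours) implements it with the convention $0\cdot\ln 0=0$:

```python
import math

def D_bar(u):
    """The exponent sum_z u(z)/||u||_1 * ln|u(z)|, with 0*ln 0 = 0.
    Assumes u is a nonzero vector with entries summing to zero."""
    norm1 = sum(abs(x) for x in u)
    return sum((x / norm1) * math.log(abs(x)) for x in u if x != 0.0)

def B_bar(u):
    """Closed-form overline-B in the classical (Kullback-Leibler) case."""
    return math.log(1.0 + math.exp(D_bar(u)))

# For u = (1/2, -1/2) the exponent cancels to 0, so B_bar(u) = ln 2.
print(B_bar([0.5, -0.5]))
```

The scale invariance $\overline B(\lambda u)=\overline B(u)$ is visible here as well: since $\sum_z u(z)=0$, the $\ln\lambda$ contribution cancels in the exponent.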
To evaluate $\overline B(u)$ at a single point $u\in\mathcal{N}\setminus\{0\}$, a problem of the same kind as problem~\ref{prob:B}, but much smaller, has to be solved: the solution is a pm in $\mathcal{P}(\supp(u^{+}))$. Moreover, as $\mathcal{F}_{u}$ has codimension one, $rB$-projections to $\mathcal{F}_{u}$ can be computed by solving a one-dimensional optimization problem (namely, $\Pi_{\mathcal{F}_{u},P}$ minimizes $H(Q)$ for $Q\in P + \mathbb{R} u$). Overall, whether it is easier to attack problem~\ref{prob:Bbar} or~\ref{prob:B} may depend on the specific choice of the functions $\beta_{z}$ and~$f$. For the classical case, \cite{Rauh11:Thesis} and~\cite{Rauh11:Finding_Maximizers} present many ideas on how to attack problem~\ref{prob:bar-problem-KL}, many of which may generalize to problem~\ref{prob:Bbar}, depending on the choice of the functions~$\beta_{z}$. Most importantly, the idea behind the definition of the function $\overline B$ sheds light on the relation between the problems~\ref{prob:main-problem-KL} and~\ref{prob:bar-problem-KL}, which is rather opaque if one only looks at the definitions of the functions~$D$ and~$\overline D$. \section*{Acknowledgement} \small This work was partially supported by the Grant Agency of the Czech Republic under Grant P202/10/0618 and the Research Academy Leipzig. \section*{Author contributions} The first investigations were done by the second author in 2010, who also provided the correct notion of a Bregman family. In 2012, both authors worked together to find a good definition for $\overline B$ and to prove the equivalence of the global maximizers (Theorem~\ref{thm:equivalence}). The project was delayed by the first author trying to find a proof of Conjecture~\ref{con:uniqueness-codim-one}. The first author added further results and completed the manuscript. \bibliographystyle{IEEEtranSpers}
\section{Introduction} In colloids with short-range attractions, the system ``gels'' at high enough attraction strength. This colloidal gel is a solid, with a low colloid density, stabilized by the bonds between particles. Usually, gelation occurs above crystallization and the liquid-gas transition: upon increasing the attraction strength, the system first enters the fluid-crystal coexistence region, at higher strengths it separates into a liquid and a gas, and finally it gels \cite{poon95,manley05,sedgwick05}. In some cases, gels are found outside the liquid-gas separation boundary \cite{shah03}, although there the boundary cannot be identified, so a clear independence between gelation and phase separation cannot be claimed. Thus, it has been argued that gels are indeed states with arrested phase separation \cite{manley05,foffi05}. The mechanism for arrest can be either the dense phase crossing a glass transition (either attraction or repulsion driven) \cite{foffi05,sastry00}, or a glass transition driven by the particle bonding that prevents the phase separation. In this work we have simulated a system that mimics the mixture of a colloid with a non-adsorbing polymer (which induces attraction between the colloids), in three states beyond the liquid-gas transition. The two states with stronger attractions indeed show arrested phase separation, and the system forms a percolating network of particles with voids and tunnels. The dynamics of the system is studied in the three states, and we find that the states with arrested phase separation show many properties typical of glass aging. The system with the lowest attraction strength separates into a dense and a dilute phase, but the liquid phase is non-ergodic, i.e. it has reached the glass region at higher density. 
The dynamics of the system with a long-range repulsive barrier, which suppresses the liquid-gas separation, has been studied previously \cite{puertas05,puertas06}, which allows us to conclude here that gels are indeed caused by a quench to the attractive glass region. Quenches to lower attraction strength result in phase separation -- the liquid may undergo a glass transition and become non-ergodic, but the system does not end up with the typical gel properties. \section{Simulation details} Newtonian dynamics simulations were run for a system composed of 1000 quasi-hard particles with an attractive interaction mimicking the effective interaction between colloids in colloid-polymer mixtures due to polymer depletion. The system is polydisperse with radii distributed according to a flat distribution of width 10\% of the average radius, $a$. The core-core repulsion is given by $V(r) = k_BT (r/\sigma)^{-36}$, where $k_BT$ is the thermal energy and $\sigma=a_1+a_2$. The attractive interaction is given by the simple depletion model developed by Asakura and Oosawa \cite{likos01}, corrected to consider a polydisperse system \cite{mendez00}; the attraction strength is given by the polymer volume fraction, $\phi_p$, and the range by the size of the polymers, $2\xi$. Further details of the total interaction potential can be found in previous works \cite{puertas05}. The colloidal density is reported as volume fraction, $\phi_c$, and the attraction strength in units of $\phi_p$; dimensionless units are used by setting the mean radius $a=1$, the thermal velocity $v=\sqrt{4/3}$ and the mass $m=1$. For the attraction range and volume fraction used in this work, $2\xi=0.2a$ and $\phi_c=0.40$, respectively, the system undergoes liquid-gas separation at $\phi_p\approx 0.3$ (the crystallization transition occurs in monodisperse systems at lower $\phi_p$). 
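For illustration, the pair interaction can be sketched in a few lines of code. The helper below is ours; it combines the quasi-hard core above with the standard monodisperse Asakura-Oosawa form for two equal-sized colloids, not the polydispersity-corrected potential actually used in the simulations:

```python
import math

def pair_potential(r, a=1.0, xi=0.1, phi_p=0.5):
    """Pair interaction, in units of k_B T, of two equal colloids of
    radius a: quasi-hard core (r/sigma)^-36 plus the monodisperse
    Asakura-Oosawa depletion attraction of range 2*xi.  Illustrative
    sketch only; the paper uses a polydispersity-corrected form."""
    sigma = 2.0 * a                       # contact distance of the cores
    core = (r / sigma) ** -36             # quasi-hard repulsion
    R = a + xi                            # radius of the exclusion sphere
    if r < 2.0 * R:
        # overlap (lens) volume of the two exclusion spheres
        v_ov = (4.0 * math.pi / 3.0) * R**3 * (
            1.0 - 3.0 * r / (4.0 * R) + r**3 / (16.0 * R**3))
        # ideal-polymer osmotic pressure n_p * k_B T, written via the
        # polymer volume fraction phi_p = (4 pi / 3) xi^3 n_p
        attraction = -phi_p * v_ov / ((4.0 * math.pi / 3.0) * xi**3)
    else:
        attraction = 0.0
    return core + attraction
```

With the parameters above ($2\xi=0.2a$), the attraction vanishes at separations beyond $2(a+\xi)$, and its depth at contact grows linearly with $\phi_p$.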
The system is equilibrated without attraction and instantaneously quenched to three different states: $\phi_p=0.35$, $\phi_p=0.50$ and $\phi_p=0.80$. Previous works where the liquid-gas transition is inhibited by means of a long-range repulsive barrier showed that there is an attractive glass transition at $\phi_p\approx 0.43$ \cite{puertas05}. Thus, the first state is below the glass point, whereas the other two are above. The structure and dynamics are studied below as a function of the time elapsed since the quench, termed {\sl waiting time}. \begin{figure} \begin{center} \includegraphics[width=0.6\textwidth]{psi4.eps} \end{center} \caption{\label{psi4} Phase-separation order parameter, $\psi_4$, for different states along the isochore $\phi_c=0.40$. The system is divided into $4^3$ boxes, and $\psi_4=\sum_i (\rho_i-\rho)^2$, where $\rho_i$ is the density in box $i$, and the summation runs over all the boxes. The black circles mark the states studied in detail, $\phi_p=0.35$, $\phi_p=0.50$ and $\phi_p=0.80$. The vertical dashed line shows the state where the glass transition was found without liquid-gas separation.} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.6\textwidth]{sq-nobarrier.eps} \end{center} \caption{\label{sq} Structure factor for the three states as labeled and at the waiting times shown.} \end{figure} \section{Results} Fig. \ref{psi4} shows the evolution of the density inhomogeneity in the system as a function of the attraction strength, $\phi_p$, along $\phi_c=0.40$. Homogeneous fluids are obtained at low $\phi_p$, and liquid-gas separation at $\phi_p \approx 0.30$. However, instead of observing increasing inhomogeneity of the system with $\phi_p$, due to denser liquids and more dilute vapors, we note that the phase separation is impeded by an additional mechanism. We show here that this mechanism is the glass transition already studied in the same system without phase separation (vertical dashed line in the figure). 
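The order parameter $\psi_4$ defined in the caption of Fig. \ref{psi4} is straightforward to compute from a single configuration. A sketch with our own naming, assuming coordinates in a cubic box $[0,L)^3$:

```python
def psi4(positions, box_length, m=4):
    """Density-inhomogeneity order parameter: divide the box into m^3
    equal cells and return sum_i (rho_i - rho)^2 over the cells,
    where rho_i is the number density in cell i and rho the mean."""
    counts = [0] * (m ** 3)
    for (x, y, z) in positions:
        ix = int(x / box_length * m) % m
        iy = int(y / box_length * m) % m
        iz = int(z / box_length * m) % m
        counts[(ix * m + iy) * m + iz] += 1
    cell_volume = (box_length / m) ** 3
    mean_density = len(positions) / box_length ** 3
    return sum((c / cell_volume - mean_density) ** 2 for c in counts)
```

A perfectly homogeneous configuration gives $\psi_4=0$, while a fully phase-separated one approaches the maximum possible value, so growth of $\psi_4$ after the quench tracks the progress of the separation.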
The inhibition of liquid-vapor separation is also noticeable in the evolution of the structure factor, shown in Fig. \ref{sq}. Whereas the state $\phi_p=0.35$ shows the typical behaviour of spinodal decomposition, with a peak at low wavevectors that grows and moves to lower $q$, the states at higher attraction strength show a more homogeneous structure. Only a peak at low $q$ is observed, which grows much more slowly than at $\phi_p=0.35$, and whose height decreases with increasing $\phi_p$. A similar peak at low $q$ is obtained in the system without phase transition, due to local compaction of the system \cite{puertas05}. The arrested phase separation scenario is fully in agreement with Fig. \ref{psi4} and with previous findings in simulations and experiments; the resulting gels are locally heterogeneous and show arrested dynamics \cite{foffi05,manley05,sedgwick05}. \begin{figure} \begin{flushright} \includegraphics[width=0.85\textwidth]{fsqt-nobarrier.eps} \end{flushright} \caption{\label{fsqt} Incoherent density correlation functions for the three states and at different waiting times, as labeled. The right-hand panels show the incoherent non-ergodicity parameter, $f_q$, as a function of $q$ (the dashed line is the Gaussian approximation with a localization length equal to the attraction range) and the structural relaxation time, $\tau$, as a function of the waiting time ($\tau$ is defined by $\Phi_q^s(\tau)=f_q/2$, for $qa=9.9$).} \end{figure} The dynamics of the system is studied in Fig. \ref{fsqt} by means of the incoherent part of the density correlation function, $\Phi_q^s(t')\:=\: \frac{1}{N} \sum_j \exp \left\{ i\,{\bf q} \cdot \left({\bf r}_j(t)-{\bf r}_j(t_w)\right)\right\}$, where the summation runs over the $N$ particles in the system and $t'=t-t_w$. The density correlation functions in the left panels show the typical features of a glass transition, i.e. a two-step decay, where the time scale of the second, structural, relaxation increases with waiting time. 
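The incoherent correlator defined above can be estimated directly from stored configurations. A minimal sketch with our own naming (in practice one would additionally average over several wavevectors of equal modulus, e.g. $qa=9.9$):

```python
import cmath

def self_isf(r_t, r_tw, q):
    """Incoherent (self) part of the density correlation function,
    Phi_q^s = (1/N) sum_j exp{ i q . (r_j(t) - r_j(t_w)) },
    returned as its real part.  r_t, r_tw: lists of 3d positions of
    the same N particles at times t and t_w; q: a 3d wavevector."""
    n = len(r_t)
    acc = 0.0 + 0.0j
    for rj, rj0 in zip(r_t, r_tw):
        phase = sum(qc * (a - b) for qc, a, b in zip(q, rj, rj0))
        acc += cmath.exp(1j * phase)
    return (acc / n).real
```

By construction $\Phi_q^s=1$ at $t'=0$, and it decays as particles move distances of order $2\pi/q$; the intermediate plateau $f_q$ measures how strongly the particles remain localized.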
The height of the intermediate plateau, the non-ergodicity parameter, is presented in the upper right panel as a function of $q$, and the evolution of the relaxation time with waiting time is given in the lower panel. The state with $\phi_p=0.35$ shows the fastest relaxations and the lowest non-ergodicity parameters (yielding a localization length longer than the attraction range), but more importantly, the time scale for structural relaxation saturates. On the other hand, the dynamics of the states at higher $\phi_p$ does not saturate: $\tau$ follows a power law with waiting time, with exponents larger than one, and the localization length is smaller than the attraction range. Similar behaviour was observed in aging of the attractive glass obtained in this system without phase separation \cite{puertas06}. \begin{figure} \begin{flushright} \includegraphics[width=0.85\textwidth]{bonds-nobarrier.eps} \end{flushright} \caption{\label{bonds} Left panels: Bond correlation functions for the three states and waiting times labeled. Right panels: Correlation between the squared displacement between $t'=0$ and $t'=1000$ and the average number of neighbours in this time interval for $t_w=16384$. } \end{figure} The dynamical arrest is caused by the bonds between particles due to the attraction, as indicated by the non-ergodicity parameters. It is thus interesting to study the bond correlation function, i.e. the fraction of bonds that have existed uninterruptedly since $t_w$ until $t=t_w+t'$, Fig. \ref{bonds} (two particles are bonded when their separation is smaller than the attraction range). In agreement with the dynamics studied in Fig. \ref{fsqt}, the bonds are stronger and live longer at high $\phi_p$. Note that even at the highest $\phi_p$ studied, $\phi_p=0.80$, the bonds are still reversible, and $\Phi_B(t')$ does not show any plateau. 
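The bond correlation function can be computed from the bond lists sampled after $t_w$. A minimal sketch with our own naming ("existed uninterruptedly" is approximated as "present at every stored sample"; a bond that breaks is counted as lost even if it later reforms):

```python
def bond_correlation(bond_sets):
    """bond_sets: list of sets of frozenset pairs {j, k}, sampled at
    successive times starting at t_w (first entry = initial bonds,
    assumed non-empty).  Returns Phi_B at each sample: the fraction
    of the initial bonds that have survived uninterruptedly."""
    initial = bond_sets[0]
    surviving = set(initial)
    phi = []
    for bonds in bond_sets:
        surviving &= bonds          # a broken bond never comes back
        phi.append(len(surviving) / len(initial))
    return phi
```

Because $\Phi_B$ is monotonically non-increasing by construction, a plateau would signal permanent bonds; its absence, as noted above, shows that bonding remains reversible even at $\phi_p=0.80$.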
The correlation between the displacement of a given particle and its mean number of bonds (neighbours), shown in the right-hand panels, indicates that particles with a small number of neighbours (on average) move much farther than average, but this population of mobile particles is only apparent in the state $\phi_p=0.35$. These particles, thus, comprise the vapor phase which is in coexistence with the liquid phase, with much lower mobility. The overall relaxation of the system observed in the density correlation function for $\phi_p=0.35$ is therefore caused by the vapor phase, due to the exchange of particles between the two phases, even though the liquid phase itself may lie inside the non-ergodic region (most probably in the attractive glass, according to the non-ergodicity parameter). A population of {\sl fast} particles was also observed in the system without phase separation and aided the structural relaxation below the glass transition. The states $\phi_p=0.50$ and $\phi_p=0.80$, on the other hand, are quenched above the glass transition (found in the system without phase separation \cite{puertas05}) and the liquid-gas transition is inhibited. Here, no vapor phase is present, and the system cannot relax structurally, also in agreement with previous findings in the system without phase separation (there, the population of fast particles vanished in the glass states \cite{puertas06b}). Thus, the dynamics of the system is controlled by the glass transition, with only small effects due to the phase separation. Our results show, therefore, that systems undergoing phase separation, where the liquid phase enters the non-ergodic region, may still relax and appear ergodic due to the vapor phase. In order to observe truly arrested dynamics and phase separation, the system must be quenched beyond the glass transition, to prevent the formation of the gas phase. 
We cannot say, however, whether the crossover from the ``apparent ergodic'' regime to the ``arrested'' one is abrupt or continuous, although our results for static properties point to the latter (Fig. \ref{psi4}). In addition, the glass transition is accompanied by local compaction of the system, both in the system with and without phase transition, which is also found in other studies of the attractive glass \cite{manley05,zaccarelli06}. Our results, however, cannot elucidate whether this compaction is a real necessity for the glass transition or only favors it. \ack Financial support is acknowledged from the M.E.C. -- projects MAT2003-03051-CO3-01 and HA2004-0022 (A.M.P.). We thank F. Sciortino and E. Zaccarelli for useful discussions. \section*{References}
\section{Introduction} Recent years have witnessed the proliferation of deep learning models used in various domains of the industry, including image processing \cite{krizhevsky2012imagenet,he2016identity}, video understanding \cite{karpathy2014large,simonyan2014two}, language understanding \cite{bahdanau2014neural,devlin2018bert}, speech recognition \cite{graves2013speech,chorowski2015attention}, commodity search and recommendation \cite{wang2018billion,ying2018graph}, autonomous driving \cite{chen2015deepdriving}, and various others \cite{silver2016mastering,zoph2016neural}. Large IT companies are investing substantially to build large AI clouds/clusters, equipped with expensive hardware such as GPUs, to run various deep learning workloads to support their AI-driven services. This paper presents a characterization of the workloads from the Platform of Artificial Intelligence (PAI) in Alibaba. PAI is an ML (machine learning)-as-a-service platform that simplifies machine learning adoption and brings large-scale AI to meet the needs of Alibaba's internal businesses. It has also been shipped to Aliyun as a cloud product to serve public users. Thousands of training jobs are submitted to PAI on a daily basis, with different business objectives, and diversified computing, communication and I/O requirements and constraints. This paper focuses on one critical aspect of these practical workloads: characterizing their various resource requirements and identifying performance bottlenecks given the software frameworks and hardware configurations. The observations are intended to guide exploration of the workload optimization space and to inform software and hardware configuration/provisioning, in order to improve workload execution performance. 
Existing AI workload characterization work mostly focuses on quantitative, precise performance modeling of AI workloads \cite{shi2016benchmarking,gu2017deepprof} or on building benchmark platforms to measure model performance \cite{adolf2016fathom,li2018tartan,gao2018data2} (see Sec.~\ref{sec_related} for detailed discussions). We take a different angle, collectively characterizing the behavior of thousands of training jobs in a production cluster, as well as projecting potential performance gains with different software architectures and hardware configurations based on a simple analytical model. Contributions of this work are summarized as follows: \textbf{First}, we present a lightweight framework to characterize the production workloads at the cluster level. We comprehensively include not only the basic aspects of computation and weight communication in training jobs, as considered in previous studies \cite{hazelwood2018applied,adolf2016fathom}, but also the input data I/O aspects. Our analysis shows that the data I/O time is non-negligible, especially for single-node training workloads; for distributed workloads, input data I/O can potentially become the performance bottleneck after gradient communication has been optimized. \textbf{Second}, our statistical analysis of the cluster workloads reveals that the multi-GPU interconnect, rather than the computation power, is more likely the bottleneck under the currently widely adopted training architectures and system configurations. Previous work largely focuses on analyzing computation resources and memory access of AI workloads \cite{park2018deep,adolf2016fathom}. Shi \emph{et al.}~\cite{shi2018performance} study the communication factor, and reach a similar conclusion that current DL frameworks, including TensorFlow, CNTK and MXNet, do not scale well via Ethernet interconnect; their analysis mainly focuses on performance comparison among different DL frameworks. 
Instead, we investigate the impact of data traffic on workload performance by collectively examining a large number of training jobs, and explore potential optimization approaches for communication reduction. \textbf{Third}, we establish simple analytical performance models based on the key workload features, aiming at exposing fundamental performance bottlenecks. Our analytical modeling is different from previous characterization methods \cite{park2018deep,zhu2018benchmarking,adolf2016fathom}, most of which adopt actual runtime profiling measurements for bottleneck analysis. Based on the analytical models, we estimate potential performance gains if the workloads were run on different software architectures and hardware configurations. The focus is to investigate which system architecture (\emph{PS/worker} or \emph{AllReduce}) should be adopted, how much benefit the high-speed multi-GPU interconnect, NVLink, may bring, and how performance bottlenecks may shift with different architecture and hardware configurations. \textbf{Finally}, we conduct detailed analysis of representative deep learning workloads using both analytical models and testbed experiments, in the domains of commodity embedding, search and recommendation, \emph{etc}. The relevant models are becoming more and more important in companies related to e-commerce, social networking and search engines, and, in PAI, consume a large fraction of resources. Results of the case studies show that the differences between the performance estimated using our analytical method and actual measurements are less than 10\% on average. Based on the basic workload features, we explore different optimization techniques for different types of workloads, including mixed-precision training with TensorCore \cite{volta}, operation fusion via XLA \cite{XLA}, and changes to the system architecture. We summarize useful observations and implications on improving practical deep learning training workloads. 
\section{Background and Methodology} \label{sec_methodology} We first present our workload characterization framework. While the characterization framework is established based on TensorFlow \cite{abadi2016tensorflow}, the methodology applies to other frameworks \cite{jia2014caffe,paszke2017automatic,chen2015mxnet,yu2014introduction} as well. \subsection{Architecture Components Modeling} \label{sec_method_background} \vspace{-0.2cm} \subsubsection{System Infrastructure \& Configuration} \label{sec_system_infra} \begin{figure}[!htb] \vspace{-0.3cm} \centering \begin{minipage}[b]{0.8\linewidth} \centerline{\includegraphics[width=\linewidth]{server_noNVLink.png}} \centerline{\scriptsize (a) Server without NVLink} \end{minipage} \vspace{0.2cm} \begin{minipage}[b]{0.8\linewidth} \centerline{\includegraphics[width=\linewidth]{server_NVLink.png}} \centerline{\scriptsize (b) Server with NVLink} \end{minipage} \vspace{-0.2cm} \caption{System Infrastructure.} \label{PAI_infrastructure} \vspace{-0.2cm} \end{figure} Figure \ref{PAI_infrastructure} shows the basic server configurations in the AI cluster. There are typically two types of multi-GPU servers, equipped with or without NVLink \cite{NVLink}. The NVLink technology provides high-speed interconnect across multiple GPUs with a `hybrid mesh grid' topology, as shown in Fig. \ref{PAI_infrastructure}(b), to resolve the bandwidth bottleneck of the PCIe interconnect among the GPUs. Due to cost considerations, servers in some sub-clusters of PAI are equipped with NVLink, while others are not yet. The basic server configuration where we collected the workload traces is shown in Table \ref{table_baseline_config}. The servers are interconnected via bi-directional 25Gbps Ethernet. We will further discuss the impact of the system configurations by varying the configuration settings in Sec.~\ref{subsec_opt_space}. 
\begin{table}[!htbp] \vspace{-0.2cm} \caption{System Settings.} \label{table_baseline_config} \vspace{-0.2cm} \centering \begin{tabular}{ c| c |c } \hline \multirow{2}{*}{GPU}& FLOPs & 11 TFLOPs \\ \cline{2-3} & Memory & 1 TB / second\\ \hline \multirow{3}{*}{Bandwidth} &Ethernet & 25 Gb / second\\ \cline{2-3} & PCI & 10 GB / second\\ \cline{2-3} & NVLink & 50 GB / second\\ \hline \end{tabular} \vspace{-0.2cm} \end{table} \subsubsection{System Architecture} \label{sec_parallel} More than 85\% of the computation resources in our cluster are used by distributed training workloads. DL training workloads can be parallelized via data parallelism, model parallelism or hybrid parallelism \cite{mayer2019scalable}. While model parallelism and hybrid parallelism enable training neural networks which a single processor cannot support, they usually require significant human effort for efficient model partitioning. Data parallelism is more model-agnostic, and has been the most widely used paradigm for parallelizing neural network training \cite{shallue2018measuring}. We focus on data-parallel training jobs in this work. \begin{figure}[!htb] \vspace{-0.3cm} \centering \begin{minipage}[b]{0.7\linewidth} \centerline{\includegraphics[width=\linewidth]{PSWorker.png}} \centerline{\scriptsize (a) \emph{PS/Worker}} \end{minipage} \begin{minipage}[b]{0.7\linewidth} \centerline{\includegraphics[width=\linewidth]{AllReduce.png}} \centerline{\scriptsize (b) \emph{AllReduce}} \end{minipage} \vspace{-0.2cm} \caption{System Architecture.} \label{fig_PAI_parallel} \vspace{-0.2cm} \end{figure} There are two types of system architectures, centralized and decentralized, for synchronizing weights/gradients among distributed training replicas. 
In a (parameter) centralized architecture, represented by the parameter server (PS) architecture \cite{li2014scaling}, one or multiple parameter servers manage the gradient aggregation, and each worker holds a training replica, pulling variables from the PSs at the beginning of each training step and pushing gradients back to them at the end of each step. In a (parameter) decentralized architecture, the global parameters are either replicated or partitioned across all training nodes; each training node exchanges the gradients via an \emph{AllReduce} operation at the end of each training iteration. This architecture can benefit from the NVIDIA Collective Communications Library (NCCL) \cite{nccl2018} for high-speed multi-node/multi-GPU communication. In this paper, we implement a new decentralized parallel training strategy called PEARL to handle large embedding weights; PEARL is discussed in detail in Sec. \ref{sec_pearl_arch}. Currently, representative deep learning frameworks such as TensorFlow, PyTorch and MXNet mainly support the decentralized architecture in the replica mode: all model parameters are replicated to each device and data parallelism is used with the AllReduce algorithm. In our cluster, roughly 29\% of jobs run with the PS architecture and less than 1\% with AllReduce, as we adopted AllReduce only after parts of our cluster were equipped with NVLink. \subsubsection{DL Training Workloads} \label{sec_workload} \begin{figure}[!htb] \vspace{-0.3cm} \centering \includegraphics[width=0.9\linewidth]{DL_computation.png} \vspace{-0.2cm} \caption{Data Flow of A Typical DL Training Step.} \label{DL_computation} \vspace{-0.3cm} \end{figure} A DL training job always runs in an iterative fashion. Fig. \ref{DL_computation} shows the basic workflow in a typical training step. We study the impact of the placement of input data, model computation and weight update on the runtime behavior of a training job.
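Both synchronization schemes compute the same averaged gradient; they differ only in where the aggregation happens and which interconnect carries the traffic. A minimal numpy sketch of one synchronous aggregation step under each scheme (the function names are ours, for illustration only, not part of any framework):

```python
import numpy as np

def ps_aggregate(worker_grads):
    """Centralized (PS-style) step: workers push their gradients to a
    parameter server, which averages them and serves the result back."""
    return np.mean(worker_grads, axis=0)

def allreduce_aggregate(worker_grads):
    """Decentralized step: every node takes part in a collective
    reduction and ends up holding the same averaged gradient."""
    total = np.zeros_like(worker_grads[0])
    for g in worker_grads:            # reduce phase: sum across replicas
        total += g
    return total / len(worker_grads)  # every replica applies this update
```

Whichever scheme is used, each training replica applies an identical update per step; the performance difference comes entirely from the communication path (Ethernet/PCIe for PS, NVLink for local AllReduce).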
Weight movement refers to data transfer related to trainable parameters: variable reading in the forward stage and gradient aggregation in the backward stage. Input data movement involves storage I/O, \emph{i.e.}, feeding training samples. For GPU workloads, the main computation is placed on GPUs while the data resides in the CPU memory; therefore, input data I/O involves traffic on the CPU-GPU interconnect, \emph{i.e.}, PCIe. Previous workload characterization work \cite{adolf2016fathom,li2018tartan} mainly focuses on measuring the relationship between model computation and weight movement traffic, while ignoring the data part. However, we found that data I/O is a non-negligible factor in the runtime performance, especially for single-node training workloads. We denote the non-distributed training workloads as \emph{1w1g} (single-worker-single-GPU), and classify our distributed training workloads into four types: \begin{itemize} \item \emph{1wng}: centralized training placed locally within a single server. Typically the parameters are placed on the CPU while the computation model is replicated across multiple GPUs. \item \emph{PS/Worker}: PS training framework with each worker/PS node being placed on a separate server. \item \emph{AllReduce-Local}: \emph{AllReduce} workloads in the local mode, running on individual servers equipped with NVLink to exploit the high-speed multi-GPU interconnect. \item \emph{AllReduce-Cluster}: \emph{AllReduce} workloads running across multiple servers.
\end{itemize} \begin{table}[!htbp] \vspace{-0.3cm} \caption{Summary of five types of workloads in our cluster.} \label{table_Summary} \scriptsize \centering \begin{tabular}{ c |c| c |c } \hline & System & System & Weight \\ & Architecture & Configuration & Movement \\ \hline \hline 1w1g &- & Local & - \\ \hline 1wng & Centralized & Local & PCIe \\ \hline PS/Worker & Centralized & Cluster & Ethernet \& PCIe \\ \hline AllReduce-Local & Decentralized & Local & NVLink \\ \hline AllReduce-Cluster & Decentralized & Cluster & Ethernet \& NVLink \\ \hline \end{tabular} \vspace{-0.2cm} \end{table} Table \ref{table_Summary} summarizes the basic features of each type of workload. The features common to all types of workloads are not listed; for example, for all types, model computation is placed on GPUs and the input data I/O goes via PCIe from CPU to GPUs. \subsection{Workload Characterization Framework} \label{sec_workflow} \begin{figure*}[!htb] \vspace{-0.4cm} \centering \centerline{\includegraphics[width=\linewidth]{analysis_overview.png}} \vspace{-0.4cm} \caption{Workload Characterization Framework.} \label{analysis_overview} \vspace{-0.4cm} \end{figure*} To analyze workload performance on our cluster, we established a workload characterization framework, as shown in Fig. \ref{analysis_overview}. \subsubsection{Runtime Profiling} TensorFlow provides a basic profiling tool, \emph{tf.RunMetadata()} \cite{goldsborough2016tour}, which can trace the runtime information including device placement, operation attributes, kernel launch \& execution time, and tensor attributes (data type, shape, allocation time and liveness, \emph{etc}). We further collect the \emph{job meta information}, which mainly includes the resource allocation information of the entire job.
For example, for a distributed training job in the \emph{PS/Worker} architecture, \emph{run\_metadata} captures the behavior of a single computation node (using one GPU device), and the job meta information provides supplementary information such as how many workers the job uses. Data collected through \emph{run\_metadata} and the \emph{job meta information} constitute the raw data for our workload analysis. \subsubsection{Workload Feature Extraction} We extract workload features from the fine-grained information collected, which characterize the execution requirements of each job in computation, I/O and weight/gradient transfer. Our workload feature schema is shown in Fig. \ref{analysis_overview}. \subsubsection{Performance Breakdown} For a given training job, we are interested in the composition of its execution time: input data I/O time ($T_d$), computation time ($T_c$) and weight/gradient communication time ($T_w$). In practice, sophisticated optimizations are possible to overlap computation and data transfer \cite{zhang2017poseidon,hashemi2018tictac}. Our goal is not to precisely model the total execution time, but to characterize the relative time consumption among computation, input I/O and weight/gradient communication. Therefore, potential overlap is not considered in our analysis and the summation of all parts is used as the prediction of the total execution time for one training iteration/step: $T_{total}=T_d+T_c+T_w$. \noindent \textbf{Input data I/O time.} $T_d$ measures the transport efficiency to load the input data, computed as $T_d=\frac{S_d}{B_d}$, where $S_d$ is the input data size and $B_d$ is the bandwidth for input data transfer. \noindent \textbf{Weight movement time.} $T_w$ can be estimated using $T_w=\frac{S_w}{B_w}$, where $S_w$ denotes the weight size to be transferred across different model replicas within a training step, and $B_w$ is the bandwidth of the communication medium.
\noindent \textbf{Computation time.} The operations in DL workloads are divided into compute-bound and memory-bound ones. FLOP count, denoted as $\# FLOPs$, is adopted to measure the computation requirements of compute-bound operations (e.g., convolution and MatMul). The memory-bound operations, known as element-wise operations, spend more time on memory access, and thus the amount of memory access is used as their resource requirement. Let $S_{mem\_access}$ represent the total data size of memory access. The computation time can be computed as the sum of the two parts: \begin{equation} T_c=\frac{\#FLOPs}{peak\_FLOPs}+\frac{S_{mem\_access}}{B_{mem\_access}}, \end{equation} where $peak\_FLOPs$ and $B_{mem\_access}$ denote the computation capacity and memory access bandwidth of the GPU, respectively. In practice, $peak\_FLOPs$ and $B_{mem\_access}$/$B_d$/$B_w$ are usually not fully utilized by a workload. Therefore, we use 70\% of the actual capacities in the denominators when computing $T_c$/$T_d$/$T_w$ in our analysis. How to measure the utilization more precisely will be part of our future work. The time percentage of each component is further computed by dividing the time of that component by the total time, e.g., the percentage of the input data I/O time is $\frac{T_d}{T_{total}}$. \section{Performance Characterization: Collective Behaviors} \label{sec_clusterbehavior} In this section, we conduct statistical analysis of tens of thousands of jobs running on PAI within the period of Dec.~1st, 2018 to Jan.~20th, 2019. The workloads are run on our internal TensorFlow framework, which is compatible with community TensorFlow 1.8. Due to the small number of \emph{AllReduce} jobs within this period, we focus on the analysis of \emph{1w1g}, \emph{1wng} and \emph{PS/Worker} workloads from our cluster, and will further explore how much potential improvement can be achieved if using the \emph{AllReduce}-based decentralized architecture.
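The analytical model of Sec.~\ref{sec_workflow}, which we use throughout this section, can be condensed into a few lines (an illustrative helper of our own; all sizes and bandwidths must be in consistent units, and the 70\% efficiency assumption is applied to every denominator):

```python
def step_breakdown(S_d, B_d, S_w, B_w, n_flops, peak_flops,
                   S_mem, B_mem, eff=0.7):
    """Per-step time composition: input data I/O, weight/gradient
    transfer and computation, with `eff` (70%) of each hardware
    capacity assumed usable."""
    T_d = S_d / (B_d * eff)                 # input data I/O time
    T_w = S_w / (B_w * eff)                 # weight/gradient transfer time
    T_c = (n_flops / (peak_flops * eff)     # compute-bound operations
           + S_mem / (B_mem * eff))         # memory-bound operations
    T_total = T_d + T_w + T_c
    return {"data": T_d / T_total,
            "weight": T_w / T_total,
            "compute": T_c / T_total,
            "T_total": T_total}
```

The returned fractions sum to one by construction, matching how the per-component percentages ($\frac{T_d}{T_{total}}$, etc.) are reported in the figures below.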
\subsection{Overview of the Workloads} \label{subsec_overview} \begin{figure}[!htb] \vspace{-0.3cm} \centering \begin{minipage}[b]{0.4\linewidth} \centerline{\includegraphics[width=\linewidth]{job_statistics_jobs.png}} \centerline{\footnotesize (a) job-level} \end{minipage} \hspace{0.3cm} \begin{minipage}[b]{0.4\linewidth} \centerline{\includegraphics[width=\linewidth]{job_statistics_cNode.png}} \centerline{\footnotesize (b) cNode-level} \end{minipage} \caption{Constitution of Workloads.} \label{statistics_dist_strategy} \vspace{-0.3cm} \end{figure} The composition of different types of workloads is shown in Fig. \ref{statistics_dist_strategy}. Besides job numbers, we also count the numbers of computation nodes. A computation node, or cNode, is a GPU device holding a single computation model replica. At the job level, \emph{1w1g} dominates the job types; after taking the cNode numbers in jobs into consideration, \emph{PS/Worker} jobs consume the largest portion of resources, up to 81\%. \begin{figure}[!htb] \centering \begin{minipage}[b]{0.48\linewidth} \centerline{\includegraphics[width=\linewidth]{cNode_cdf_typed.eps}} \centerline{\footnotesize (a) computation scale} \end{minipage} \begin{minipage}[b]{0.48\linewidth} \centerline{\includegraphics[width=\linewidth]{weight_scale_typed.eps}} \centerline{\footnotesize (b) weight size} \end{minipage} \vspace{-0.2cm} \caption{Workload Scale Distribution.} \label{fig_workload_scale} \vspace{-0.5cm} \end{figure} We further show the cumulative distribution function (CDF) of the cNode number for each type of workload in Fig. \ref{fig_workload_scale}(a). For \emph{1w1g} workloads, the number of cNodes is always 1; \emph{1wng} workloads are typically placed within a single physical server, so the number of cNodes is no more than 8; about half of the \emph{PS/Worker} workloads are placed on more than 8 cNodes, while a small fraction run on more than 128 cNodes.
This can help explain why only 29\% of workloads use the \emph{PS/Worker} architecture, yet the percentage of cNodes they consume is up to 81\%. The amount of computation resources consumed by a job can reflect the problem scale and may also indicate the commercial value of the workload. In our cluster, commodity embedding, search and recommendation workloads have large training datasets and may exploit hundreds to thousands of workers to achieve high throughput. Notably, such extra large-scale workloads always have significant commercial impact on the company's business; however, they are often not included in DL workload benchmarks \cite{adolf2016fathom,zhu2018benchmarking}. We find that they are non-negligible: only 0.7\% of all workloads have more than 128 cNodes; however, they consume more than 16\% of the computation resources on our cluster. In Sec. \ref{sec_casestudy}, we will explore the characteristics of such large-scale workloads in detail using two example jobs. The model size of a job is a key factor in deciding which system architecture is best for the job. For example, for small to medium scale models that can fit into the GPU memory entirely, the \emph{AllReduce-Local} configuration can be adopted, with better performance while using fewer system resources. When the weight size is large (ranging from tens to hundreds of GB), the \emph{PS/Worker} architecture should be adopted to partition the variables among multiple \emph{PS} nodes (note that only the weight-replica mode is supported in the \emph{AllReduce} implementations of representative DL frameworks). Fig. \ref{fig_workload_scale}(b) illustrates the weight size distribution. We can observe that, within \emph{PS/Worker} workloads, some jobs have large weight sizes, more than 10 GB or even 100 GB; however, many of them have quite small model sizes. So why do they choose to adopt the \emph{PS/Worker} architecture?
Can they be further optimized using a better model placement and system architecture? We will answer these questions in Sec. \ref{subsec_opt_space}. \begin{figure}[!htb] \vspace{-0.3cm} \centering \centerline{\includegraphics[width=\linewidth]{component_percent_jobs.png}} \vspace{-0.3cm} \caption{Average percentage of different parts of workload execution time. \emph{Left column: job-level, Right column: cNode-level.}} \label{percent_type_jobs} \vspace{-0.3cm} \end{figure} \begin{figure*}[!htb] \vspace{-0.3cm} \begin{minipage}[b]{0.24\linewidth} \centerline{\includegraphics[width=\linewidth]{all_hardware_cdf_cNode_job.eps}} \centerline{\footnotesize (a) all} \end{minipage} \begin{minipage}[b]{0.24\linewidth} \centerline{\includegraphics[width=\linewidth]{1w1g_cdf_cNode_job.eps}} \centerline{\footnotesize (b) \emph{1w1g}} \end{minipage} \begin{minipage}[b]{0.24\linewidth} \centerline{\includegraphics[width=\linewidth]{1wng_cdf_cNode_job.eps}} \centerline{\footnotesize (c) \emph{1wng}} \end{minipage} \begin{minipage}[b]{0.24\linewidth} \centerline{\includegraphics[width=\linewidth]{ps_worker_GPU_cdf_cNode_job.eps}} \centerline{\footnotesize (d) \emph{PS/Worker}} \end{minipage} \vspace{-0.2cm} \caption{CDF of each component of the execution time among different workloads. \emph{Top: CDF at job-level, bottom: cNode-level.}} \label{fig_CDF_type_jobs} \vspace{-0.3cm} \end{figure*} \subsection{Performance Breakdown} \label{subsec_runtime_breakdown} Figure \ref{percent_type_jobs} shows the execution time breakdown for various workloads, including time for input data I/O, weight/gradient transfer and computation. The cNode-level percentages are computed as a weighted sum of the job-level percentages, with the weight being the cNode number of each job over the overall cNode number. Please note that \emph{1w1g} jobs do not need weight/gradient communication. Figure \ref{fig_CDF_type_jobs} shows the detailed CDF of each component of the execution time among the jobs.
As the mapping from execution time components to hardware differs across types of workloads (e.g., weight movement is carried out via different hardware, as shown in Table \ref{table_Summary}), we summarize the overall time breakdown according to the time spent on different hardware components and show the results in Fig. \ref{fig_CDF_type_jobs}(a). \textbf{Input Data I/O.} Figure \ref{fig_CDF_type_jobs} shows that for \emph{1wng} and \emph{PS/Worker} workloads, input data movement time can be nearly ignored, approximately 3\% on average, partially because the weight/gradient transfer time is so large. One thing to note is that when such workloads are mapped to another system architecture or use a different hardware configuration, the bottleneck may shift, exposing the data I/O part, as will be illustrated in Sec. \ref{subsec_opt_space}. For \emph{1w1g} workloads, the data I/O part is about 10\% on average. Notably, about 5\% of the workloads spend more than 50\% of their time on input data movement, in which case the data I/O load on PCIe becomes the bottleneck. \textbf{Weight/Gradient Transfer.} On average, weight/gradient communication contributes approximately 22\% to the total execution time. When evaluating the percentage at the cNode level, the proportion is more than 60\%, indicating that workloads with larger cNode numbers are more likely to suffer from the communication bottleneck. This can also be seen from the CDF of the time breakdown of \emph{PS/Worker} workloads in Fig. \ref{fig_CDF_type_jobs}(d). The \emph{PS/Worker} workloads always involve large numbers of cNodes with large proportions of time spent on weight/gradient transfer. Specifically, more than 40\% of \emph{PS/Worker} jobs spend more than 80\% of their time in communication via Ethernet and/or PCIe.
Given the high communication overhead, a potential improvement to expedite model training is to upgrade the network facility or to vary the system configuration by porting the \emph{PS/Worker} workloads to \emph{AllReduce-Local} to leverage the high communication efficiency introduced by NVLink. \textbf{Computation.} Computation can be further decomposed into memory-bound and compute-bound computation. We can see that memory-bound computation time is larger than compute-bound operation time in all types of workloads. This indicates that the workloads in our cluster involve more memory access. In this case, XLA may provide powerful optimization for element-wise operations (the major contributors to memory access). XLA is a domain-specific compiler for linear algebra that optimizes TensorFlow computation; it can fuse pipelined operations to reduce the memory overhead. Additionally, for compute-bound operations, mixed-precision computation can be introduced to exploit the computation power provided by TensorCore \cite{volta}, which provides up to 8X higher peak FLOPS on Tesla V100, as compared to using standard FP32 operations on V100. \subsection{Exploring the Optimization Space} \label{subsec_opt_space} Previously we showed a holistic execution profile for all workloads. But how would this execution profile change under different system settings? For instance, what can we gain by upgrading the network bandwidth from 25Gbps to 100Gbps? Is there any further end-to-end performance speed-up by boosting the GPU peak computing power to 64 or 256 TFLOPS? Will the performance bottleneck shift to data movement by increasing GPU memory bandwidth to 4TB per second? In addition, what if we use \emph{AllReduce-Local} or \emph{AllReduce-Cluster} to run the PS jobs? Next, we analytically evaluate the potential performance impact of switching the PS workloads to AllReduce and of changing system configurations for different types of workloads.
Specifically, we estimate what the performance will be like when GPUs are upgraded to more powerful ones, and interconnections are varied among PCIe (for CPU-GPU/GPU-GPU communication), Ethernet (for cross-server communication), and NVLink (for high-speed inter-GPU communication within a single machine), by changing the values of $B_d$/$B_w$/$B_{mem\_access}$/$peak\_FLOPs$ in the analytical models in Sec. \ref{sec_workflow}, respectively. Tallent \emph{et al.} \cite{tallent2017evaluating} compared workload performance for GPU interconnect with NVLink and Cirrascale GX8 PCIe, and their results show that DGX-1 with NVLink has superior performance except on ResNet-type workloads. We would like to investigate how much the high-speed NVLink interconnect can outperform PCIe/Ethernet on our workloads. \subsubsection{Performance Impact of AllReduce} Figure \ref{percent_type_jobs} shows that communication consumes an important portion of the execution time in \emph{PS/Worker} workloads, which may partially be due to the limited bandwidth of Ethernet/PCIe. We estimate the performance when PS workloads training small to medium scale models (that can fit into the GPU memory entirely) are ported to the \emph{AllReduce} architectures, to exploit the high-speed NVLink. In addition to single-node performance, we further evaluate the overall throughput of a training job, which can be computed as \begin{equation} throughput=\frac{\#cNode}{T_{total}}\times batch\_size \label{eq_throughput} \end{equation} Here $\frac{\#cNode}{T_{total}}$ is the number of steps the job can train in unit time with all its computation nodes. Considering that $batch\_size$ remains the same on each computation node, the throughput is related to 1) the single-node performance $T_{total}$ and 2) the number of cNodes $\#cNode$.
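As a worked illustration of Eq.~(\ref{eq_throughput}), the example below uses hypothetical per-step times and batch size; it shows that cutting a job from 32 to 8 cNodes only preserves throughput if the per-step time shrinks proportionally:

```python
def job_throughput(n_cnodes, T_total, batch_size):
    # Eq. (throughput): steps per unit time across all cNodes,
    # multiplied by the (fixed) per-cNode batch size
    return n_cnodes / T_total * batch_size

# Hypothetical PS/Worker job: 32 cNodes, 0.5s per step, batch 256.
ps = job_throughput(n_cnodes=32, T_total=0.5, batch_size=256)
# Ported variant capped at 8 cNodes: per-step time must drop to
# 0.125s (4x faster) just to break even on overall throughput.
ar = job_throughput(n_cnodes=8, T_total=0.125, batch_size=256)
```

This break-even trade-off is exactly why some ported workloads gain overall throughput while others do not, as discussed next.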
We map the \emph{PS/Worker} workloads to the \emph{AllReduce-Local} architecture as follows, since an \emph{AllReduce-Local} job can have at most 8 $\#cNodes$: for a \emph{PS/Worker} job with $\#cNodes> 8$, the number of cNodes is reduced to 8; for those with $\#cNodes\le8$, the cNode numbers will remain unchanged. To map the \emph{PS/Worker} workloads to the \emph{AllReduce-Cluster} architecture, we retain the original number of cNodes in the jobs. In addition to the speedup of all workloads, we select workloads whose throughput cannot be improved by \emph{AllReduce-Local} and show the performance acceleration with \emph{AllReduce-Cluster}. \begin{figure}[!htb] \vspace{-0.3cm} \begin{minipage}[b]{0.48\linewidth} \centerline{\includegraphics[width=\linewidth]{hist_speedup_ps_worker_GPU2GPU-NVLink.eps}} \centerline{\footnotesize (a) \emph{AllReduce-Local}} \end{minipage} \begin{minipage}[b]{0.48\linewidth} \centerline{\includegraphics[width=\linewidth]{hist_speedup_ps_worker_GPU2GPU-NVLink-cluster.eps}} \centerline{\footnotesize (b) \emph{AllReduce-Cluster}} \end{minipage} \vspace{-0.2cm} \caption{Improvement by mapping the workloads to \emph{AllReduce}.} \label{opt_GPU_NVLink} \vspace{-0.3cm} \end{figure} \begin{figure}[!htb] \vspace{-0.3cm} \begin{minipage}[b]{0.61\linewidth} \centerline{\includegraphics[width=\linewidth]{gpu_nvlink_cdf_job.eps}} \centerline{\footnotesize (a) CDF} \end{minipage} \begin{minipage}[b]{0.31\linewidth} \centerline{\includegraphics[width=\linewidth]{allreduce_average.png}} \centerline{\footnotesize (b) Average Breakdown} \end{minipage} \vspace{-0.2cm} \caption{Performance breakdown of \emph{PS/Worker} workloads after being mapped to \emph{AllReduce-Local}.} \label{fig_breakdown_allreduce} \vspace{-0.4cm} \end{figure} Fig. \ref{opt_GPU_NVLink} shows that by shifting the communication medium from PCIe/Ethernet to the high speed NVLink interconnect with \emph{AllReduce-Local}, most of the workloads can be accelerated at different levels. 
Considering the potential reduction of $\#cNode$ in the projection, about 60\% of the workloads still achieve speedup in the overall throughput. This indicates that the \emph{AllReduce-Local} architecture equipped with NVLink can potentially boost performance for most of the \emph{PS/Worker} workloads, while at the same time saving system resources significantly (as the number of cNodes after projection will be no more than 8, which could be much larger before the projection, as shown in Fig. \ref{fig_workload_scale}(a)). We also note that about 22.6\% of the \emph{PS/Worker} workloads cannot benefit from switching to the \emph{AllReduce-Local} architecture. With the switch, all workloads experience acceleration of the weight/gradient transfer, as well as a slow-down of input data I/O due to the competition for PCIe bandwidth (as input data are transferred from CPU to multiple GPUs within a server simultaneously); whether a workload is sped up or slowed down depends on which part dominates. To demonstrate the bottleneck shift effect, we further illustrate the execution time breakdown of the \emph{AllReduce-Local} workloads in Fig. \ref{fig_breakdown_allreduce}. Compared to the CDF shown in Fig. \ref{fig_CDF_type_jobs}(d), we can observe that the weight/gradient communication part is vastly reduced, while the other parts, including computation and the data I/O fraction, become more important. In particular, based on Fig. \ref{fig_breakdown_allreduce}(b), we can see that the portion of data I/O via PCIe increases the most, indicating the shift of bottlenecks with different architectures. When workloads are shifted from \emph{PS/Worker} to \emph{AllReduce-Cluster}, the main speedup is due to the change of the weight/gradient movement medium from Ethernet\&PCIe to Ethernet\&NVLink. However, in both sets of configurations, Ethernet is the main bottleneck for data transfer, and thus the speedup is quite limited, at most 1.2X based on Table \ref{table_baseline_config}.
On average, 67.9\% of the workloads can be sped up. Furthermore, among the workloads that cannot be improved by \emph{AllReduce-Local}, about 37.8\% can be sped up with \emph{AllReduce-Cluster}. \begin{table}[!htbp] \vspace{-0.3cm} \caption{Hardware Configuration Variations} \label{table_provision_config} \centering \begin{tabular}{ c| c } \hline & Candidates \\ \hline Ethernet / Gbps & \{10, 25, 100\} \\ \hline PCIe / GB/s & \{10, 50\} \\ \hline GPU peak FLOPs / TFLOPs & \{8, 16, 32, 64\} \\ \hline GPU memory bandwidth / TB/s & \{1, 2, 4\} \\ \hline \end{tabular} \vspace{-0.5cm} \end{table} \subsubsection{Performance Impact of Hardware Evolution} \label{sec_resource_provision} \begin{figure*}[!htb] \vspace{-0.4cm} \begin{minipage}[b]{0.24\linewidth} \centerline{\includegraphics[width=\linewidth]{resource_provision_1w1g.eps}} \vspace{-0.2cm} \centerline{\footnotesize (a) \emph{1w1g}} \end{minipage} \begin{minipage}[b]{0.24\linewidth} \centerline{\includegraphics[width=\linewidth]{resource_provision_1wng.eps}} \vspace{-0.2cm} \centerline{\footnotesize (b) \emph{1wng}} \end{minipage} \begin{minipage}[b]{0.24\linewidth} \centerline{\includegraphics[width=\linewidth]{resource_provision_ps_worker.eps}} \vspace{-0.2cm} \centerline{\footnotesize (c) \emph{PS/Worker}} \end{minipage} \begin{minipage}[b]{0.24\linewidth} \centerline{\includegraphics[width=\linewidth]{resource_provision_gpu_nvlink.eps}} \vspace{-0.2cm} \centerline{\footnotesize (d) \emph{AllReduce-Local}} \end{minipage} \vspace{-0.2cm} \caption{Speedup with different hardware configurations.} \label{fig_resource_provision} \vspace{-0.4cm} \end{figure*} We next investigate how the workloads perform with different hardware configurations, as shown in Table \ref{table_provision_config}. We show normalized resource values in Fig. \ref{fig_resource_provision}, relative to the basic settings in Table \ref{table_baseline_config}, to facilitate result comparison.
For example, the Ethernet bandwidth is normalized using 25Gbps as the basic unit, and the PCIe bandwidth is normalized by 10GB/s. We evaluate both the original workloads with the \emph{1w1g}, \emph{1wng} and \emph{PS/Worker} architectures, and also the \emph{AllReduce-Local} workloads mapped from the \emph{PS/Worker} workloads. In Fig. \ref{fig_resource_provision}, the speedup is computed using the performance achieved with the new configuration of the respective resource. Different workloads exhibit different behaviors: \emph{1w1g} workloads are most sensitive to GPU memory bandwidth, \emph{1wng} ones vary most with the PCIe bandwidth, and the \emph{PS/Worker} type relies most on the Ethernet bandwidth. These observations are consistent with the performance breakdown results in Fig. \ref{percent_type_jobs} and Fig. \ref{fig_CDF_type_jobs}. For example, \emph{PS/Worker} workloads spend the most time on weight/gradient transfer via Ethernet, and they achieve the highest speedup from improvement of the Ethernet bandwidth. Comparison between Fig. \ref{fig_resource_provision}(c) and (d) shows the bottleneck shift effect: performance of \emph{PS/Worker} workloads varies most when varying the Ethernet bandwidth, and they are also accelerated quite a bit when the GPU memory bandwidth improves; when the workloads are projected to \emph{AllReduce-Local}, GPU memory bandwidth has the largest impact on performance. \subsection{Summary of Key Observations} \label{subsec_key_observe} We make several interesting observations based on the above: $\triangleright$ On PAI, distributed training jobs dominate resource consumption, with \emph{PS/Worker} jobs consuming 81\% of the overall computation resources. $\triangleright$ 90\% of jobs train small-scale models, \emph{i.e.}, with model size less than 10GB, while there also exist large-scale models (100-300GB) that are trained in a large-scale distributed mode and consume large amounts of resources.
$\triangleright$ On average, weight/gradient communication takes almost 62\% of the total execution time over all workloads. For \emph{PS/Worker} jobs, more than 40\% of the workloads spend more than 80\% of their time in weight/gradient communication. As to the computation portion, which is the focus of previous studies \cite{adolf2016fathom,qi2016paleo}, on average it only contributes 35\% of the total training time, with the compute-bound part contributing 13\% and the memory-bound part 22\%. $\triangleright$ The throughput of 60\% of \emph{PS/Worker} workloads can be improved when they are ported to the \emph{AllReduce-Local} architecture, which can leverage the high-speed NVLink for GPU interconnect. $\triangleright$ Workloads show different levels of sensitivity to hardware evolution, and the performance bottleneck may shift with the change of system architecture. \emph{PS/Worker} workloads are most sensitive to Ethernet bandwidth; after being projected to \emph{AllReduce-Local}, they benefit the most from the improvement of GPU memory access bandwidth. \section{Performance Characterization: Case Studies} \label{sec_casestudy} In this section, we zoom into the training of several production DL models in detail, to further detect their performance bottlenecks and evaluate several optimization techniques. We run the selected training workloads in an experimental testbed of 64 servers. Each server is equipped with one 96-core Intel Xeon Platinum 8163 CPU, eight Tesla V100 GPUs, 128GB RAM, 10GB/s PCIe and 50GB/s NVLink. The servers are connected through 25Gbps bi-directional Ethernet. We extensively investigate the data preprocessing time and the framework overhead (mostly due to CPU runtime scheduling and GPU kernel launch time), which are not considered in Sec. \ref{sec_clusterbehavior} as they are not fundamental resource demands of workloads and can be optimized to be negligible using different technologies.
With our testbed experiments, we will show the impact of the framework overhead and discuss techniques to minimize it. \subsection{Selected Workloads} Table \ref{model_config} summarizes the six models used for our case studies, selected from different application domains and with different scales of parameter size. \textbf{ResNet50}. Residual networks have been proven to be powerful and are widely applied in multiple domains \cite{he2016identity,dai2016r}. \textbf{NMT}. In our production system, the NMT model \cite{vaswani2017attention} has been applied to translation for e-commerce business and other scenarios. \textbf{Speech}. Neural acoustic models \cite{kim2017dynamic} have been useful in speech recognition and widely adopted in commercial acoustic applications. The model we evaluate is composed of a CNN followed by a Long Short-Term Memory (LSTM) architecture with layer normalization. \textbf{BERT}. BERT \cite{devlin2018bert} is one of the most commonly used models for language understanding, and has been applied to a few business domains in our company. \textbf{Multi-Interests}. Recommender systems based on multi-interest models \cite{cov2016youtube,weston2013interests} are widely used on our service platform to capture users' various interests. \textbf{GCN}. GCN (Graph Convolutional Network) \cite{wang2018billion,ying2018graph} is based on a well-known graph embedding framework. The item embeddings are employed to compute pairwise similarities between all items to facilitate recommendation.
\begin{table}[!htbp] \scriptsize \vspace{-0.3cm} \caption{Model Scale} \label{model_config} \vspace{-0.2cm} \centering \begin{tabular}{ c|c|c|c|c} \hline &Domain &Dense &Embedding & System \\ & &weights &weights & Architecture \\ \hline ResNet50 & CV &204MB &0MB & \emph{AllReduce-Local} \\ \hline NMT &Translation &706MB &819MB & \emph{AllReduce-Local} \\ \hline BERT &QA &1GB &284MB & \emph{AllReduce-Local} \\ \hline Speech &Speech recognition &416MB &0MB & \emph{1w1g} \\ \hline Multi-Interests &Recommender &1.19MB &239.45GB & \emph{PS/Worker} \\ \hline GCN &Recommender &207MB &54GB & \emph{PEARL}\footnotemark \\ \hline \end{tabular} \vspace{-0.3cm} \end{table} Table \ref{model_config} summarizes the parameter sizes of the models, including dense weights and embedding weights \cite{wang2018billion}. Note that the parameter sizes include both the trainable variables and the optimization-related variables, such as momentums \cite{ruder2016overview}. For models with small weight sizes (such as ResNet50, NMT and BERT), all parameters can concurrently reside in the GPU memory; hence the \emph{AllReduce-Local} architecture is adopted for their training, to leverage GPU-direct technology (NVLink). For models with large-scale weights (such as \emph{Multi-Interests}), only the \emph{PS/Worker} architecture is suitable, as the weight size supported by the current \emph{AllReduce} frameworks is limited by a single GPU's memory size. In our testbed, we train each model using the system architecture indicated in Table \ref{model_config}. The Speech model evaluated is trained on a small dataset only, and thus does not require distributed training and is trained using \emph{1w1g}. For GCN with a large model size, we will show that the limited Ethernet bandwidth becomes the bottleneck when the \emph{PS/Worker} architecture is used, and we will design a new system architecture (PEARL) for its training. Table \ref{model_character} shows the basic workload features.
\begin{table}[!htbp] \scriptsize \vspace{-0.3cm} \caption{Basic Workload Features} \label{model_character} \centering \begin{tabular}{ p{48pt}| p{17pt} | p{26pt}|p{30pt}|p{37pt} | p{25pt}} \hline & Batch Size& FLOP count &Memory access &Memory Copy(PCIe) &Network Traffic \\ \hline Multi-Interests & 2048 &105.8G &100.4GB &261MB &122MB \\ \hline ResNet50 & 64 &1.56T &31.9GB &38MB &357MB \\ \hline NMT & 6144 &2.5T &101.6GB &22KB &1.33GB \\ \hline BERT & 12 &2.1T &107.3GB &46KB &1.5GB \\ \hline Speech & 32 &7.9T &20.4GB &804MB &728MB \\ \hline GCN & 512 & 330.7G &25.79GB &1.2MB &3GB \\ \hline \end{tabular} \vspace{-0.4cm} \end{table} \subsection{Model Validation} We first compare the execution time breakdown estimated using the analytical models in Sec. \ref{sec_workflow} and the actual measurement results. For example, ResNet50 involves 1.56T FLOPs, while the peak computing throughput provided by the Tesla V100 in our testbed is 15 TFLOPS; thus, the compute-bound computation time is predicted as $\frac{1.56}{15\times 70\%}=0.149\,\mathrm{s}$, where 70\% is the baseline assumption for hardware utilization efficiency. The actual measured time for this part is 0.126s. A similar estimation method is applied to the other parts, including data I/O, weight/gradient traffic time, \emph{etc}. The estimated and actually measured times, as well as their composition, are compared for model validation. \begin{figure}[!htb] \vspace{-0.3cm} \centering \centerline{\includegraphics[width=0.9\linewidth]{Time_Breakdown_Comparison.png}} \vspace{-0.3cm} \caption{Time Breakdown Comparison. Left: actual measurement, right: estimation.} \label{fig_breakdown_comparison} \end{figure} \begin{figure*}[!htb] \vspace{-0.3cm} \centering \centerline{\includegraphics[width=\linewidth]{cases_opt.png}} \vspace{-0.5cm} \caption{Performance Breakdown with Different Optimization Techniques.} \label{fig_case_opt} \vspace{-0.5cm} \end{figure*} In Fig. 
\ref{fig_breakdown_comparison}, the percentage in the parentheses indicates the time difference, computed as $\frac{T_{predict}-T_{actual}}{T_{actual}}$, where $T_{predict}$ is the total time we estimated and $T_{actual}$ is the actual measured time. The difference is less than 10\% in most cases, and the estimated time breakdown can quite accurately reflect the relative portions of computation and data transfer in the entire execution time. For the Speech model, the difference is more than 66.7\%. The estimation inaccuracy arises because the actual GPU memory-access bandwidth utilization is only 3\%, much smaller than the 70\% assumed in the estimation. We leave further improvement of memory-access efficiency as a future direction, while adopting possible optimizations such as XLA to reduce the memory-access volume by operation fusion, to accelerate training of the Speech model. \vspace{-0.2cm} \subsection{PEARL Architecture} \label{sec_pearl_arch} Used in the domain of e-commerce, search and recommendation models have very large and sparse commodity-embedding parameters. When the model size (ranging from tens to hundreds of GB) is too large to fit into the GPU memory entirely, the \emph{PS/Worker} architecture should be adopted to partition and store the variables in the CPU memory among multiple \emph{PS} nodes. However, synchronizing a large variable among the PS and GPUs of the workers requires significant Ethernet and PCIe bandwidth, and also consumes many CPU clocks. Parameters of such models can be classified into dense and sparse weights, depending on how their elements are accessed. Treating the whole model as dense is inefficient, since naïvely communicating all elements of a large sparse variable, even though only a small subset is accessed, results in relatively low scalability. 
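The prediction recipe above can be sketched as a small helper. The 15 TFLOPS peak and the 70\% efficiency factor are the values used in the text; the function name is ours:

```python
def compute_bound_time(flop_count, peak_flops, efficiency=0.70):
    """Predicted time for a compute-bound part: total FLOPs divided by
    the derated peak throughput of the accelerator."""
    return flop_count / (peak_flops * efficiency)

# ResNet50 on a Tesla V100: 1.56 TFLOPs of work against a 15 TFLOPS peak.
t_pred = compute_bound_time(1.56e12, 15e12)  # ~0.149 s (measured: 0.126 s)
```

The same pattern applies to the other components by substituting the relevant volume (bytes moved) and the corresponding peak bandwidth.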
We propose and implement PEARL (Partitioned Embedding And RepLicated), a new distribution strategy that optimizes the efficiency of data transfer by taking the sparsity of variables into account. As shown in Fig.~\ref{fig_PEARL}, PEARL applies a hybrid approach that partitions the large sparse variables and distributes them in the GPU memory of workers, and adopts the AllReduce architecture to process dense variables. All workers synchronize variables via collective communication operations such as AllReduce and AllGatherv. AllReduce aggregates gradients from all GPUs for the dense weights, while AllGatherv gathers the embedding weights and corresponding gradients from all GPUs for the partitioned weights. The AllGatherv operation is implemented on top of NCCL \cite{nccl2018} primitives such as Broadcast and Reduce, which are optimized to leverage the high-speed inter-GPU NVLink. Experiments show that PEARL built atop TensorFlow achieves good scalability in terms of training throughput with the increase of computation resources, on both dense and sparse models. \begin{figure}[!htb] \vspace{-0.1cm} \centering \centerline{\includegraphics[width=0.9\linewidth]{PEARL.png}} \vspace{-0.3cm} \caption{Architecture of PEARL.} \label{fig_PEARL} \vspace{-0.1cm} \end{figure} \subsection{Effectiveness of Optimization Techniques} As shown in Fig. \ref{fig_breakdown_comparison}, the behavior of ResNet50, NMT and BERT is quite similar: 1) the actual time measurements and the model-based estimation are close, indicating that the hardware usage efficiency is quite high, around the basic assumption of 70\%. 2) the computation part contributes most of the total running time, which shows that the communication time is reduced quite well by using NVLink. We next investigate how to further improve the computation efficiency. Fig. 
\ref{fig_case_opt}(a) compares the results obtained using the default setting, with mixed-precision (MP) matrix multiplication in FP16 \cite{micikevicius2017mixed} enabled (which is available with TensorCore in the Volta architecture, potentially achieving up to 8X speedup compared to the default multiply-and-addition in FP32), and with XLA enforced. We observe a 1.44X end-to-end speedup and 2.8X for MatMul when mixed-precision optimization is in use. With the powerful tool XLA (operation fusion and code generation), element-wise operation time can be reduced, as operation fusion exploits the GPU's high-speed cache and reduces the framework scheduling overhead. We observe a 2X speedup with both MP and XLA in place (1.76X with only XLA). Fig. \ref{fig_case_opt}(b) shows that when using XLA to train the Speech model, a 3.43X speedup can be achieved for element-wise operations and 1.83X for the end-to-end performance. Figure \ref{fig_case_opt}(c) presents the time breakdown of Multi-Interests model training under three different training configurations (batch size and the number of attention layers). For the same model, the performance bottleneck varies significantly across configurations. A larger batch size is more GPU-friendly, with element-wise operations being the bottleneck, whose computation time can be reduced by operation fusion at runtime. With the third configuration, communication becomes the bottleneck. A Multi-Interests model has a large weight size of more than 200GB; the weights cannot be entirely stored in the GPU memory. Therefore, we cannot apply the \emph{AllReduce} architecture to leverage the high-speed NVLink (since current AllReduce frameworks only support the weight-replica mode). Similarly, GCN has large-scale embedding weights, and the \emph{PS/Worker} framework should be used. However, large-volume communication via Ethernet and PCIe may become the bottleneck. 
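PEARL, introduced above, combines two collective patterns: a sum-AllReduce over gradients of the replicated dense weights, and an AllGatherv of the variable-length sets of embedding rows each GPU touched. A toy sketch of the two patterns (pure-Python stand-ins, not the NCCL implementations; function names are ours):

```python
def allreduce(per_worker_grads):
    """Sum-AllReduce: every worker ends up with the elementwise sum
    of all workers' dense-gradient vectors."""
    total = [sum(vals) for vals in zip(*per_worker_grads)]
    return [list(total) for _ in per_worker_grads]

def allgatherv(per_worker_chunks):
    """AllGatherv: every worker ends up with the concatenation of all
    workers' variable-length chunks (e.g. touched embedding rows)."""
    gathered = [item for chunk in per_worker_chunks for item in chunk]
    return [list(gathered) for _ in per_worker_chunks]

# Two workers: dense gradients are reduced; sparse (row_id, grad) pairs
# for the rows each worker accessed are gathered by everyone.
dense = allreduce([[1.0, 2.0], [3.0, 4.0]])              # both see [4.0, 6.0]
sparse = allgatherv([[(0, 0.1)], [(7, 0.2), (9, 0.3)]])  # both see all 3 rows
```

Only the rows actually accessed in a step participate in the gather, which is what makes the partitioned treatment of sparse variables cheaper than communicating the full table.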
In these cases, PEARL, which can use NVLink to transfer the weights/gradients of large-scale models, is applied. With PEARL, the large-scale weights, such as embeddings, are partitioned among multiple GPUs, while the variable/gradient aggregation is performed using a PS/Worker-like protocol, using \emph{AllGather} and \emph{ReduceScatter} operations \cite{nccl2018}; all other small-scale weights are replicated and \emph{AllReduce} is used for gradient exchange. Fig. \ref{fig_case_opt}(d) presents the time breakdown of the GCN model when trained using PEARL. We see that with the high-speed interconnect, the communication part via NVLink consumes 25\% of the total time. Using our analytical approach, we can also estimate the time breakdown when using \emph{PS/Worker} with Ethernet \& PCIe for training, which is shown in the second bar in Fig. \ref{fig_case_opt}(d). The communication part with Ethernet \& PCIe contributes almost 95\% of the total time, much more than what we achieve with PEARL. \section{Discussions} In our proposed workload characterization framework, there are several assumptions that may affect the results. In this section, we discuss the effects when the assumptions shift. \subsection{Hardware efficiency assumption} As described in Sec. \ref{sec_workflow}, the hardware efficiencies of the computation (GPU) and communication (PCIe/Ethernet/NVLink) parts are both assumed to be 70\%. To find out whether the assumption is reasonable, we conduct cross-validation in two ways. First, we measure the hardware efficiency in each case analyzed in Sec. \ref{sec_casestudy}. Next, as it is complicated to establish a system that precisely measures the hardware utilization efficiency for each workload, we instead analyze how the results shift if the assumption does not hold. Table \ref{table_model_efficiency} shows the actual measured hardware efficiency for each workload. 70\% is about the average level. 
In detail, the efficiency of GPU computation/memory access is a bit higher than 70\%, while that of data traffic (PCIe/Ethernet/NVLink) is lower. \begin{table}[!htbp] \caption{Resource Efficiency for Each Workload} \label{table_model_efficiency} \centering \begin{tabular}{ p{50pt}|p{30pt}|p{32pt}|p{28pt}|p{35pt}} \hline &GPU TOPS &GDDR &PCIe &Network (Ethernet/NVLink) \\ \hline Multi-Interests &32.71\% &95\% &86.47\% &69.21\% \\ \hline ResNet50 &82.55\% &78.9\% &35.1\% &49.4\% \\ \hline NMT &82.8\% &79.1\% &0.1\% &35.2\%\\ \hline BERT &81.6\% &95\% &0.42\% &47.1\% \\ \hline Speech &60.86\% &3.1\% &77.73\% &40.5\%\\ \hline GCN &88.2\% &69.9\% &86.2\% &27.35\% \\ \hline \end{tabular} \end{table} For the collective behavior, we explore how the conclusion will change if the assumption is violated. Taking \emph{PS/Worker} workloads as an example, we evaluate how the weight traffic's portion in the end-to-end training time varies when the hardware efficiency in computation/communication changes. As shown in Fig. \ref{fig_hardware_eff}, as expected, when the actual hardware efficiency in communication (PCIe/Ethernet) is lower than 70\%, the \emph{PS/Worker} workloads spend more time on weight traffic, and vice versa. Notably, even when the hardware efficiency in computation is only 25\% (much lower than the 70\% assumption), the \emph{PS/Worker} workloads still spend more time on weight traffic on average. To give a precise estimation of the fundamental bottleneck in the cluster using our proposed framework, it is still important to establish a better methodology to measure the utilization efficiency of each hardware component, which will be an important direction in our future work. 
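This sensitivity can be reproduced directly from the additive time model: lowering the assumed communication efficiency inflates the traffic term and hence its share of the step time. A sketch (function name and the example numbers are ours, for illustration only):

```python
def weight_traffic_fraction(t_data, t_compute, traffic_bytes,
                            bandwidth, comm_eff):
    """Portion of end-to-end step time spent on weight/gradient traffic,
    under the additive (non-overlapped) time model."""
    t_w = traffic_bytes / (bandwidth * comm_eff)
    return t_w / (t_data + t_compute + t_w)

# 1 GB of weight traffic over 25 Gb/s (~3.125e9 B/s) Ethernet:
frac_70 = weight_traffic_fraction(0.01, 0.15, 1e9, 3.125e9, 0.70)
frac_35 = weight_traffic_fraction(0.01, 0.15, 1e9, 3.125e9, 0.35)
# Halving the communication efficiency exposes more of the step as traffic.
```

The computation-efficiency axis works the same way through `t_compute`, which is why even a pessimistic 25\% compute efficiency leaves traffic dominant for these illustrative parameters.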
\begin{figure}[!htb] \vspace{-0.3cm} \centering \centerline{\includegraphics[width=0.8\linewidth]{shift_hardware_efficiency_component.eps}} \caption{Shift Effect in Weight Traffic Percentage When Hardware Efficiency Changes.} \label{fig_hardware_eff} \end{figure} \subsection{Computation/communication overlap assumption} There are various ways to overlap computation and data transfer \cite{zhang2017poseidon,hashemi2018tictac} in DL workloads. Although the purpose of this work is to expose the fundamental performance bottlenecks, which will not change due to the overlap issue, several speedup results may change if the non-overlap assumption is violated. Since how to achieve computation and communication overlap is still an open question in deep learning design, it is not easy to quantify the actual overlap potential for each workload. Instead, we use an ideal overlap case to give an estimation for comparison. In this case, the total time changes from $T_{total}=T_d + T_c + T_w$ (used in our framework in Sec. \ref{sec_workflow}) to $T_{total} = \max\{T_d, T_c, T_w\}$. \begin{figure}[!htb] \vspace{-0.3cm} \centering \begin{minipage}[b]{0.48\linewidth} \centerline{\includegraphics[width=\linewidth]{shift_overlap_component.eps}} \end{minipage} \begin{minipage}[b]{0.48\linewidth} \centerline{\includegraphics[width=\linewidth]{shift_overlap_speedup.eps}} \end{minipage} \caption{Shift Effect Under Different Overlap States. Left: weight traffic percent, right: speedup when mapping to \emph{AllReduce-Local}.} \label{fig_overlap} \end{figure} Fig. \ref{fig_overlap} shows the comparison results for \emph{PS/Worker} workloads under different overlap states: no overlap at all \emph{vs.} ideal overlap. It can be observed that when computation and communication overlap ideally, the weight traffic part is heavily exposed as the performance bottleneck, as it consumes the longest time among $\{T_d, T_c, T_w\}$. 
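The two extremes differ only in how the three components combine, as a short sketch makes explicit (the numbers are illustrative, not measurements):

```python
def step_time(t_d, t_c, t_w, ideal_overlap=False):
    """End-to-end iteration time: serialized data I/O, computation and
    weight traffic, vs. ideal computation/communication overlap."""
    return max(t_d, t_c, t_w) if ideal_overlap else t_d + t_c + t_w

serial = step_time(0.02, 0.15, 0.40)                          # 0.57 s
overlapped = step_time(0.02, 0.15, 0.40, ideal_overlap=True)  # 0.40 s
# With ideal overlap, the longest term (weight traffic here) stands alone,
# so a traffic-bound workload stays traffic-bound under either assumption.
```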
As to the speedup analysis when mapping \emph{PS/Worker} workloads to the \emph{AllReduce-Local} architecture, we observe that the ratio of sped-up workloads remains similar to the non-overlap results (22.6\% \emph{vs.} 20.2\%). Notably, 23.4\% of workloads achieve a 21X speedup; these are the workloads bound by the weight traffic part either before or after the architecture projection. For such workloads, the speedup ratio can be computed as: \begin{equation} \frac{\frac{S_w}{25Gb\times 70\%} + \frac{S_w}{10GB\times 70\%}}{\frac{S_w}{10GB\times 70\%}}=21 \end{equation} where $S_w$ denotes the weight traffic volume. The comparison further illustrates that the assumption of computation/communication overlap may affect the detailed analytical results, such as the speedup ratio or running time constitution; however, it does not change the conclusion as to what is the fundamental bottleneck for the workloads in our cluster. Finally, it is worth noting that the purpose of our analysis framework is not to precisely predict the practical performance of workloads, but to expose the fundamental bottlenecks in hardware components or system architecture for the collective behavior of workloads in our cluster. \section{System Implications} \label{sec_implication} Based on the previous results, we now summarize important implications on how to optimize training frameworks (\emph{e.g.} TensorFlow) and how to properly provision system resources. \subsection{Implications on Framework Optimization} \subsubsection{System Architecture} In the PAI cluster, we identified plenty of DL models that are not suitable to be trained using either \emph{PS/Worker} or \emph{AllReduce}, \emph{e.g.}, models with one large sparse embedding and many relatively small dense weights (such as GCN in Section \ref{sec_casestudy}). The weight sizes within such workloads are too large to be resident in GPU memory. 
On the other hand, such workloads always incur heavy weight/gradient traffic, for which the Ethernet connections with limited bandwidth will be the bottleneck. For such workloads, we proposed PEARL, a new strategy inspired by our characterization of collective behavior in the cluster and catering to their resource requirements. Our simple analytical model can predict the time breakdown of jobs on different architectures, facilitating system architecture selection. Though our model does not take potential framework overhead into consideration, experiments show that its estimation is quite close to real measurements for representative workloads. A more comprehensive prediction method is one of our future directions to explore. \subsubsection{Compilation: Operation Fusion and Code Generation} Statistical results in Sec. \ref{sec_clusterbehavior} show that, within the computation part, the time spent on memory-bound operations is no less than that of computation-bound ones. TensorFlow XLA is a solid compilation framework for operation fusion and code generation to reduce memory accesses. We have shown that XLA is powerful enough to handle practical training workloads. As shown in Sec. \ref{sec_casestudy}, different workloads have drastically different computation profiles. For ResNet50, NMT and BERT, memory-access time takes at most 40\% of execution time. In large-scale recommendation models (Multi-Interests, GCN), it takes up to 60\%. For all these workloads, compilation using XLA is helpful in reducing CPU launch overhead and improving GPU computation efficiency. XLA is known to have several limitations: for example, it cannot deal well with workloads with dynamic shapes; the operation fusion algorithm is rule-based and does not generalize well; and the code generation mechanism still needs improvement to generate highly optimized kernels \cite{FusionStitching}. 
The community is calling for a powerful, robust compilation infrastructure that is able to handle rapidly changing training workloads in the future. \subsubsection{Framework Overhead} Frameworks like TensorFlow use a flexible and sufficiently general CPU runtime to do computation scheduling. If the main part of the computation graph consists of very fine-grained operations, CPU scheduling may incur non-negligible overheads, especially in busy CPU/GPU clusters with a mixture of workloads deployed. Most of our workloads have regular computation structures, and carry out repetitive iterations during the training process. Through compilation (discussed above), it is possible to allow a larger portion of the computation graph to be scheduled to the GPU altogether. \subsection{Implications on Hardware Configurations} \subsubsection{Interconnect Bandwidth} There are two types of interconnects for distributed training in our cluster: NVLink and Ethernet, with a notable gap in communication bandwidth. We have shown the performance gain of high-speed interconnects for weight/gradient communication in numerous medium-scale ($<$50GB) workloads. For large models (\emph{e.g.} the \emph{Multi-Interests} model in Sec. \ref{sec_casestudy}), weight/gradient communication over the Ethernet can take more than 50\% of the execution time per iteration. High-bandwidth interconnects will definitely help such communication-bound workloads, as shown in Fig. \ref{fig_resource_provision}. \subsubsection{PCIe Bandwidth} In our system settings, PCIe is mainly dedicated to data transfer between CPU and GPU. In distributed training, PCIe traffic normally consists of two portions: sample data input, and weight/gradient communication. As shown in Sec. \ref{sec_clusterbehavior}, in most workloads, the sample input volume is negligible, and weight/gradient transfer is usually bound by the network rather than PCIe. However, this does not mean that PCIe bandwidth is less important for performance. 
As shown in Fig. \ref{fig_breakdown_allreduce}, the bottleneck may be shifted to PCIe after the network bandwidth usage is optimized. Additionally, high-speed PCIe interconnects can enable exciting new optimization opportunities for some mission-critical applications. The basic idea is to push as much work as possible from CPU to GPU, in order to allow more operations in the computation graph to be processed on the GPU as a whole and minimize CPU intervention. \subsubsection{GPU Computing Power and Memory Bandwidth} Computing power and memory bandwidth of GPUs are essential for DL workloads. Important as they are, we have shown in Sec. \ref{sec_clusterbehavior} that weight/gradient communication renders the biggest performance bottleneck in our cluster. More careful model distribution and system architecture selection are necessary to mitigate communication overhead in order to fully exploit the computation power. \section{Related Work} \label{sec_related} There have recently been several studies conducting cluster-level machine learning workload characterization, aiming to improve resource utilization and workload performance in the ML cluster \cite{park2018deep,jeon2018multi,cortez2017resource}. Park \emph{et al.} \cite{park2018deep} analyze the inference workloads in a Facebook data center, pointing out limitations of the current ML infrastructure \cite{hazelwood2018applied} and providing suggestions for future general-purpose/accelerated inference hardware. Jeon \emph{et al.} \cite{jeon2018multi} present a detailed workload characterization of a two-month trace from a multi-tenant GPU cluster, focusing on resource utilization and scheduling. Other works aim to establish performance benchmarks \cite{zhu2018benchmarking,adolf2016fathom,gao2018data,gao2018data2}. Fathom \cite{adolf2016fathom} establishes a set of reference implementations for eight archetypal DL jobs. 
Guignard \emph{et al.} \cite{guignard2018performance} adopt the eight types of workloads from Fathom to evaluate the performance of the IBM ``Minsky'' platform. A micro-benchmark is designed in \cite{chien2018characterizing} to measure reads in TensorFlow, and a burst buffer is implemented to improve the I/O performance. Gao \emph{et al.} \cite{gao2018data,gao2018data2} establish a proxy benchmark for AI workloads by identifying eight data motifs. Several studies have focused on predicting the performance of a job using a mathematical model \cite{qi2016paleo,gu2017deepprof,venkataraman2016ernest,bakhoda2009analyzing}. PALEO \cite{qi2016paleo} establishes a performance model by extracting the basic computational requirements and mapping them to a specific point within the design space of software, hardware and communication strategies. DeepProf \cite{gu2017deepprof} is a tool that can automatically process GPU traces and generate performance reports for deep learning applications, which can perform diagnosis to identify the runtime bottleneck. The above two works both aim to break down the execution time of a workload, with the former analyzing from the theoretical perspective and the latter using runtime traces. Ernest \cite{venkataraman2016ernest} builds a performance model from the workload observation on small datasets and predicts the performance on larger datasets in bigger clusters. Justus \emph{et al.} \cite{justus2018predicting} predict the execution time of one part of the entire DL network; the execution time of the sub-graph constitutes a basic unit for predicting the end-to-end performance. 
Different from existing works that aim at precisely predicting the practical performance of a given workload, our work focuses on characterizing currently deployed jobs on our large cluster and extracting their fundamental resource requirements, in order to expose potential hardware/software optimization directions at the cluster scale. From our observations, we extract fundamental execution bottlenecks and identify latent, useful directions for training framework optimization and system configuration improvement. \section{Conclusion} \label{sec_conclusion} This paper presents a characterization framework to enable performance analysis over diversified production workloads running on Alibaba-PAI. The framework features a lightweight technique to collect runtime profiling metrics of workloads. Based on the collected job statistics, we build a workload model to extract key features and project them onto different system configurations in order to analytically predict the performance behavior. We characterize the collective behavior of a large volume of workloads, and also zoom into representative workloads to investigate the impact of different system architectures and hardware configurations. We discuss potential technical directions for improving the training performance of the workloads. As future work, we seek to characterize inference workloads in our cluster using a similar methodology. \bibliographystyle{unsrt}
\section{Introduction} With the wide success of pre-trained large language models, a range of techniques has arisen to adapt these general-purpose models to downstream tasks. ELMo \cite{peters-etal-2018-deep} proposed freezing the pre-trained model and learning a task-specific weighting of its per-layer representations. However, since GPT \cite{gpt} and BERT \cite{devlin-etal-2019-bert}, the dominant adaptation technique has been \textbf{model tuning} (or ``fine-tuning''), where all model parameters are tuned during adaptation, as proposed by \citet{howard-ruder-2018-universal}. \begin{figure}[h!] \centering \includegraphics[width=0.9\columnwidth]{figures/figure1.pdf}\hspace{1.5ex} \caption{Standard \textbf{model tuning} of T5 achieves strong performance, but requires storing separate copies of the model for each end task. Our \textbf{prompt tuning} of T5 matches the quality of model tuning as size increases, while enabling the reuse of a single frozen model for all tasks. Our approach significantly outperforms few-shot \textbf{prompt design} using \mbox{GPT-3}. We show mean and standard deviation across $3$ runs for tuning methods.} \label{fig:model-size} \end{figure} More recently, \citet{brown_2020_gpt3} showed that \textbf{prompt design} (or ``priming'') is surprisingly effective at modulating a frozen \mbox{GPT-3} model's behavior through text prompts. Prompts are typically composed of a task description and/or several canonical examples. This return to ``freezing'' pre-trained models is appealing, especially as model size continues to increase. Rather than requiring a separate copy of the model for each downstream task, a single generalist model can simultaneously serve many different tasks. \begin{figure}[h!] \centering \includegraphics[width=\columnwidth]{figures/figure2.pdf} \caption{\textbf{Model tuning} requires making a task-specific copy of the entire pre-trained model for each downstream task and inference must be performed in separate batches. 
\textbf{Prompt tuning} only requires storing a small task-specific prompt for each task, and enables mixed-task inference using the original pre-trained model. With a T5 ``XXL'' model, each copy of the tuned model requires $11$ billion parameters. By contrast, our tuned prompts would only require $20{,}480$ parameters per task---a reduction of \emph{over five orders of magnitude}---assuming a prompt length of $5$ tokens.} \label{fig:diagram} \end{figure} Unfortunately, prompt-based adaptation has several key drawbacks. Task description is error-prone and requires human involvement, and the effectiveness of a prompt is limited by how much conditioning text can fit into the model's input. As a result, downstream task quality still lags far behind that of tuned models. For instance, \mbox{GPT-3} 175B few-shot performance on SuperGLUE is $17.5$ points below fine-tuned T5-XXL \cite{raffel_2020_t5} ($71.8$ vs.~$89.3$) despite using $16$ times more parameters. Several efforts to automate prompt design have been recently proposed. \citet{shin-etal-2020-autoprompt} propose a search algorithm over the discrete space of words, guided by the downstream application training data. While this technique outperforms manual prompt design, there is still a gap relative to model tuning. \citet{li_2021_prefix_tuning} propose ``prefix tuning'' and show strong results on generative tasks. This method freezes the model parameters and backpropagates the error during tuning to prefix activations prepended to each layer in the encoder stack, including the input layer. \citet{hambardzumyan_2021_warp} simplify this recipe by restricting the trainable parameters to the input and output sub-networks of a masked language model, and show reasonable results on classification tasks. In this paper, we propose \textbf{prompt tuning} as a further simplification for adapting language models. 
We freeze the entire pre-trained model and only allow an additional $k$ tunable tokens per downstream task to be prepended to the input text. This ``soft prompt'' is trained end-to-end and can condense the signal from a full labeled dataset, allowing our method to outperform few-shot prompts and close the quality gap with model tuning (Figure~\ref{fig:model-size}). At the same time, since a single pre-trained model is recycled for all downstream tasks, we retain the efficient serving benefits of frozen models (Figure~\ref{fig:diagram}). While we developed our method concurrently with \citet{li_2021_prefix_tuning} and \citet{hambardzumyan_2021_warp}, we are the first to show that prompt tuning alone (with no intermediate-layer prefixes or task-specific output layers) is sufficient to be competitive with model tuning. Through detailed experiments in sections~\ref{sec:tuning}--\ref{sec:results}, we demonstrate that language model capacity is a key ingredient for these approaches to succeed. As Figure~\ref{fig:model-size} shows, \emph{prompt tuning becomes more competitive with scale.} We compare with similar approaches in Section~\ref{sec:previous_work}. Explicitly separating task-specific parameters from the ``generalist'' parameters needed for general language-understanding has a range of additional benefits. We show in Section~\ref{sec:shift} that by capturing the task definition in the prompt while keeping the generalist parameters fixed, we are able to achieve better resilience to domain shifts. In Section~\ref{sec:ensemble}, we show that ``prompt ensembling'', learning multiple prompts for the same task, can boost quality and is more efficient than classic model ensembling. Finally, in Section~\ref{sec:interpretability}, we investigate the interpretability of our learned soft prompts. 
In sum, our key contributions are: \begin{enumerate} [topsep=3pt,itemsep=-1ex,partopsep=1ex,parsep=1ex] \item Proposing prompt tuning and showing its competitiveness with model tuning in the regime of large language models. \item Ablating many design choices, and showing quality and robustness improve with scale. \item Showing prompt tuning outperforms model tuning on domain shift problems. \item Proposing ``prompt ensembling'' and showing its effectiveness. \end{enumerate} \section{Prompt Tuning} \label{sec:tuning} Following the ``text-to-text'' approach of T5 \cite{raffel_2020_t5}, we cast all tasks as text generation. Instead of modeling classification as the probability of an output class given some input, $\Pr(y|X)$, where $X$ is a series of tokens and $y$ is a single class label, we now model it as conditional generation, where $Y$ is a sequence of tokens that represent a class label. T5 models classification as $\Pr_{\theta}(Y | X)$, parameterized by the weights, $\theta$, of the transformers \cite{vaswani2017attention} that make up its encoder and decoder. Prompting is the approach of adding extra information for the model to condition on during its generation of $Y$. Normally, prompting is done by prepending a series of tokens, $P$, to the input $X$, such that the model maximizes the likelihood of the correct $Y$, $\Pr_{\theta}(Y|[P;X])$, while keeping the model parameters, $\theta$, fixed. In \mbox{GPT-3}, the representations of the prompt tokens, $P = \{p_1, p_2, \dots, p_n\}$, are part of the model's embedding table, parameterized by the frozen $\theta$. Finding an optimal prompt thus requires the selection of prompt tokens, through either manual search or non-differentiable search methods \cite{jiang-etal-2020-know,shin-etal-2020-autoprompt}. Prompt tuning removes the restriction that the prompt $P$ be parameterized by $\theta$; instead the prompt has its own dedicated parameters, $\theta_P$, that can be updated. 
While prompt \emph{design} involves selecting prompt tokens from a fixed vocabulary of frozen embeddings, prompt \emph{tuning} can be thought of as using a fixed prompt of special tokens, where only the embeddings of these prompt tokens can be updated. Our new conditional generation is now $\Pr_{\theta;\theta_P}(Y | [P;X])$ and can be trained by maximizing the likelihood of $Y$ via backpropagation, while only applying gradient updates to $\theta_P$. Given a series of $n$ tokens, $\{x_1, x_2, \dots, x_n\}$, the first thing T5 does is embed the tokens, forming a matrix $X_e \in \mathbb{R}^{n \times e}$ where $e$ is the dimension of the embedding space. Our soft-prompts are represented as a parameter $P_e \in \mathbb{R}^{p \times e}$, where $p$ is the length of the prompt. Our prompt is then concatenated to the embedded input forming a single matrix $[P_e;X_e] \in \mathbb{R}^{(p+n)\times e}$ which then flows through the encoder-decoder as normal. Our models are trained to maximize the probability of $Y$, but only the prompt parameters $P_e$ are updated. \subsection{Design Decisions} There are many possible ways to initialize the prompt representations. The simplest is to train from scratch, using random initialization. A more sophisticated option is to initialize each prompt token to an embedding drawn from the model's vocabulary. Conceptually, our soft-prompt modulates the frozen network's behavior in the same way as text preceding the input, so it follows that a word-like representation might serve as a good initialization spot. For classification tasks, a third option is to initialize the prompt with embeddings that enumerate the output classes, similar to the ``verbalizers'' of \citet{schick-schutze-2021-exploiting}. Since we want the model to produce these tokens in the output, initializing the prompt with the embeddings of the valid target tokens should prime the model to restrict its output to the legal output classes. 
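The concatenation in embedding space can be sketched in a few lines of pure Python (a stand-in for the actual T5 implementation; the names are ours, and the shapes follow the text: $X_e$ is $n \times e$, $P_e$ is $p \times e$):

```python
import random

def embed(token_ids, table):
    """Frozen embedding lookup: n token ids -> the n x e matrix X_e."""
    return [table[t] for t in token_ids]

def prepend_soft_prompt(p_e, x_e):
    """Form [P_e; X_e]: the trainable p x e prompt matrix concatenated
    with the embedded input, giving a (p + n) x e encoder input.
    Only p_e would receive gradient updates; everything else is frozen."""
    return p_e + x_e

e, p, vocab = 4, 5, 10
table = [[random.random() for _ in range(e)] for _ in range(vocab)]  # frozen theta
p_e = [[0.0] * e for _ in range(p)]            # theta_P, the only trainable part
x_e = embed([1, 4, 7], table)                  # n = 3 input tokens
encoder_input = prepend_soft_prompt(p_e, x_e)  # (p + n) x e = 8 x 4
```

During training, gradients flow back through the frozen transformer into `p_e` alone, which is what distinguishes prompt tuning from selecting discrete tokens out of the frozen embedding table.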
Another design consideration is the length of the prompt. The parameter cost of our method is $EP$, where $E$ is the token embedding dimension and $P$ is the prompt length. The shorter the prompt, the fewer new parameters must be tuned, so we aim to find a minimal length that still performs well. \subsection{Unlearning Span Corruption} \label{sec:span_corruption} Unlike autoregressive language models like \mbox{GPT-3}, the T5 models we experiment with use an encoder-decoder architecture and pre-train on a span corruption objective. Specifically, T5 is tasked with ``reconstructing'' masked spans in the input text, which are marked with unique sentinel tokens. The target output text consists of all the masked content, separated by sentinels, plus a final sentinel. For instance, from the text ``Thank you for inviting me to your party last week'' we might construct a pre-training example where the input is ``Thank you $\langle$X$\rangle$ me to your party $\langle$Y$\rangle$ week'' and the target output is ``$\langle$X$\rangle$ for inviting $\langle$Y$\rangle$ last $\langle$Z$\rangle$''. While \citet{raffel_2020_t5} find this architecture and pre-training objective more effective than traditional language modeling, we hypothesize that this setup is not a good fit for producing a frozen model that can be readily controlled through prompt tuning. In particular, a T5 model pre-trained exclusively on span corruption, such as T5.1.1, has never seen truly natural input text (free of sentinel tokens), nor has it ever been asked to predict truly natural targets. In fact, due to the details of T5's span corruption preprocessing, every pre-training target will begin with a sentinel. While this ``unnatural'' tendency to output sentinels is easy to overcome through fine-tuning, we suspect that it would be much harder to override through a prompt alone, as the decoder priors cannot be adjusted. Given these concerns, we experiment with T5 models in three settings. 
(1) ``Span Corruption'': We use pre-trained T5 off-the-shelf as our frozen model, and test its ability to output the expected text for downstream tasks. (2) ``Span Corruption + Sentinel'': We use the same model, but prepend all downstream targets with a sentinel, so as to more closely resemble the targets seen in pre-training. (3) ``LM Adaptation'': We continue T5's self-supervised training for a small number of additional steps, but using the ``LM'' objective discussed by \citet{raffel_2020_t5}; given a natural text prefix as input, the model must produce the natural text continuation as output. Crucially, this adaptation happens \emph{only once}, producing a single frozen model that we can reuse for prompt tuning across any number of downstream tasks. Through LM adaptation, we hope to ``quickly'' transform T5 into a model more similar to \mbox{GPT-3}, which always outputs realistic text, and is known to respond well to prompts as a ``few-shot learner''. It is not obvious how successful this late-stage transformation will be compared to pre-training from scratch, and it has not been investigated previously to our knowledge. As such, we experiment with various lengths of adaptation up to 100K steps. \begin{figure*}[ht!] 
\centering \begin{subfigure}[b]{\columnwidth} \centering \includegraphics[width=0.75\columnwidth, trim=5 10 15 5, clip]{ figures/prompt-ablation.pdf} \caption{Prompt length} \label{fig:ablate-length} \end{subfigure} \begin{subfigure}[b]{\columnwidth} \centering \includegraphics[width=0.75\columnwidth, trim=5 10 15 5, clip]{figures/init-ablation.pdf} \caption{Prompt initialization} \label{fig:ablate-init} \end{subfigure} \vspace{1ex} \begin{subfigure}[b]{\columnwidth} \centering \includegraphics[width=0.75\columnwidth, trim=5 10 15 5, clip]{figures/pretraining-ablation.pdf} \caption{Pre-training method} \label{fig:ablate-pretrain} \end{subfigure} \begin{subfigure}[b]{\columnwidth} \centering \includegraphics[width=0.75\columnwidth, trim=5 10 15 5, clip]{ figures/adaptation-ablation.pdf} \caption{LM adaptation steps} \label{fig:ablate-lm-steps} \end{subfigure} \caption{Ablations of various hyperparameters on prompt tuning performance (mean and stddev across $3$ runs). In our ``\hyperref[settings:default]{default}'' (\inlinegraphics{figures/green-x.png}) configuration, quality improves stably with model size. Across all ablations, \emph{the largest (XXL) model is the most robust to hyperparameter choice}. \subref{fig:ablate-length}~\textbf{Prompt length}: Increasing to $20$+ tokens generally confers a large boost, but XXL performs well even with single-token prompts. \subref{fig:ablate-init}~\textbf{Prompt initialization}: Random uniform initialization lags behind more ``advanced'' initializations using sampled vocabulary or class label embeddings, but the difference vanishes at XXL size. \subref{fig:ablate-pretrain}~\textbf{Pre-training objective}: LM adaptation outperforms span corruption, even when a sentinel is added to downstream task targets, but XXL works well with any method. 
\subref{fig:ablate-lm-steps}~\textbf{LM adaptation}: Longer adaptation generally gives larger gains, but XXL is robust to even short adaptation.} \label{fig:full-ablation} \end{figure*} \section{Results} \label{sec:results} Our frozen models are built on top of pre-trained T5 checkpoints of all sizes (Small, Base, Large, XL, XXL). We leverage the public T5.1.1 checkpoints, which include improvements over the original T5.\footnote{These improvements are (1) the removal of all supervised data from pre-training, (2) adjustments to hyperparameters $d_{\text{model}}$ and $d_{f\!f}$, and (3) the use of GeGLU \cite{shazeer2020glu} over ReLU \cite{Nair2010RectifiedLU} activations.} \label{settings:default}Our ``default'' configuration, plotted with a green `$\times$' (\inlinegraphics{figures/green-x.png}) throughout, uses an LM-adapted version of T5 trained for an additional 100K steps, initializes using class labels (see Section~\ref{sec:ablations}), and uses a prompt length of $100$ tokens. While this is longer than the default 10-token prefix used by \citet{li_2021_prefix_tuning}, our method still uses fewer task-specific parameters, as we only tune the input layer, as opposed to overwriting activations in all network layers. See Figure~\ref{fig:param_counts} for a detailed comparison. We will also see shortly that even much shorter prompts are viable as model size increases. We measure performance on the SuperGLUE benchmark \cite{wang2019superglue}, a collection of eight challenging English language understanding tasks.\footnote{The tasks are BoolQ \cite{clark-etal-2019-boolq}, CB \cite{de-marneff_simons_tonhauser_2019}, COPA \cite{roemmele2011choice}, MultiRC \cite{MultiRC2018}, ReCoRD \cite{zhang2018record}, RTE \cite{dagan2005pascal,bar2006second,giampiccolo2007third,bentivogli2009fifth}, WiC \cite{DBLP:journals/corr/abs-1808-09121}, and WSC \cite{levesque2012winograd}.} We report metrics on the development set associated with each dataset. 
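As a rough sanity check on the task-specific parameter budget $EP$ of this default configuration, one can tabulate it across model sizes. The model dimensions and total parameter counts below are approximate figures assumed for illustration; they are not stated in this section.

```python
# Approximate T5.1.1 model dimensions and total parameter counts;
# these specific figures are assumptions for illustration only.
sizes = {
    "Small": (512, 60e6),
    "Base":  (768, 220e6),
    "Large": (1024, 770e6),
    "XL":    (2048, 3e9),
    "XXL":   (4096, 11e9),
}
prompt_len = 100  # the default configuration's prompt length

for name, (d_model, total) in sizes.items():
    prompt_params = prompt_len * d_model  # E * P task-specific parameters
    print(f"{name}: {prompt_params:,} tuned ({prompt_params / total:.4%} of model)")

# At XXL: 100 * 4096 = 409,600 tuned parameters against ~11B frozen ones,
# i.e. on the order of 27,000x fewer task-specific parameters.
```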
Each of our prompts trains on a single SuperGLUE task; there was no multi-task setup or mixing of training data across tasks. We translate each SuperGLUE dataset into a text-to-text format following \citet{raffel_2020_t5}, except that we omit the task names prepended to inputs indicating which SuperGLUE task an example belongs to. We train our prompts for $30{,}000$ steps using T5's standard cross-entropy loss, with a constant learning rate of $0.3$ and a batch size of $32$. Checkpoints are selected via early stopping on the development set, where the stopping metric is the default metric for the dataset, or the average of metrics for datasets evaluated with multiple metrics. All experiments were run in JAX \cite{jax2018github} using the Adafactor optimizer \cite{pmlr-v80-shazeer18a} with weight decay $1e{-5}$, $\beta_2$ decay $0.8$, and parameter scaling off. The models were implemented in Flax \cite{flax2020github}. More details are available in Appendix~\ref{app:reproducibility}. \subsection{Closing the Gap} To compare our method with standard model tuning, we tune the public T5.1.1 checkpoints on SuperGLUE using the default hyperparameters specified in the T5 library (learning rate $0.001$, and Adafactor optimizer with pre-training parameter states restored). We consider two baselines. (1)~``Model Tuning'': For an apples-to-apples comparison, we tune on each task separately, as in our prompt tuning setup.\footnote{To improve this baseline, we performed a sweep over the batch size hyperparameter and selected $2^{16}$ tokens per batch.} (2)~``Model Tuning (Multi-task)'': We use T5's multi-task tuning setup to achieve a more competitive baseline.\footnote{The T5 SuperGLUE submission used a more complex setup, first mixing multi-task supervised data into pre-training, and then performing single-task fine-tuning. Since we use T5.1.1 throughout, this setup is unavailable, as the pre-training phase is fully self-supervised.
We follow \citet{raffel_2020_t5} in using $2^{20}$ tokens per batch and including DPR data in the multi-task mixture, which is known to boost WSC task performance \cite{kocijan-etal-2019-surprisingly}.} In this case, a single model is tuned on all tasks jointly, with a text prefix indicating the task name. In Figure~\ref{fig:model-size} (p.~1), we see that prompt tuning becomes more competitive with model tuning as scale increases. At the XXL size (11 billion parameters), prompt tuning matches even the stronger multi-task model tuning baseline, despite having over $20{,}000$ times fewer task-specific parameters. To compare with prompt design, we include GPT-3 few-shot performance on the SuperGLUE dev split, as reported by \citet{brown_2020_gpt3}.\footnote{We also experimented with using GPT-3's manual text prompts directly with our LM-adapted T5 checkpoints. However performance was far below GPT-3 for comparable model sizes. This may be due to differences in pre-training data and model architecture, as well as T5's shorter sequence length.} Figure~\ref{fig:model-size} shows that prompt tuning beats \mbox{GPT-3} prompt design by a large margin, with prompt-tuned \mbox{T5-Small} matching \mbox{GPT-3} XL (over $16$ times larger), and prompt-tuned \mbox{T5-Large} beating \mbox{GPT-3} 175B (over $220$ times larger). \subsection{Ablation Study} \label{sec:ablations} \paragraph{Prompt Length} We train prompts for each model size while varying the prompt length in $\{1, 5, 20, 100, 150\}$ and fixing other settings to our \hyperref[settings:default]{default configuration}. Figure~\ref{fig:ablate-length} shows that for most model sizes, increasing prompt length beyond a single token is critical to achieve good performance. Notably, the XXL model still gives strong results with a single-token prompt, suggesting that the larger the model, the less conditioning signal is needed to achieve a target behavior. 
Across all models, increasing beyond $20$ tokens only yields marginal gains.\footnote{Going past $100$ tokens appears mildly detrimental for larger models. A similar pattern of diminishing performance past a certain prefix length is observed by \citet{li_2021_prefix_tuning}.} \paragraph{Prompt Initialization} We ablate the effect of prompt initialization by training models at all sizes while fixing other hyperparameters to their \hyperref[settings:default]{default values}. For random initialization, we sample uniformly from the range [$-0.5$, $0.5$]. When initializing from sampled vocabulary, we restrict to the $5{,}000$ most ``common'' tokens in T5's SentencePiece vocabulary \cite{kudo2018sentencepiece}, which is ordered by likelihood in the pre-training corpus. For ``class label'' initialization, we take the embeddings for the string representations of each class in the downstream task and use them to initialize one of the tokens in the prompt. When a class label is multi-token, we average the token embeddings. At longer prompt lengths, we often run out of class labels before we have initialized all of the prompt tokens. In this case we fall back to our sampled vocab strategy to fill in the prompt.\footnote{T5's handling of the ReCoRD and WSC tasks requires the model to generate short, free-form text. In these cases, we initialize the prompts with words related to the task: \textit{commonsense}, \textit{reasoning}, \textit{reading}, and \textit{comprehension} for ReCoRD and \textit{commonsense}, \textit{pronoun}, and \textit{resolution} for WSC.} Figure~\ref{fig:ablate-init} shows our ablation of initialization strategy across model sizes, where we find that the class based initialization performs best. At smaller model sizes, there are large gaps between the different initializations, but once the model is scaled to XXL size, those differences disappear. 
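The three initialization strategies compared in this ablation can be sketched as follows, using a toy likelihood-ordered vocabulary and embedding table in place of T5's; the names and sizes here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
e, prompt_len = 8, 5          # toy embedding dim and prompt length
# Toy vocabulary ordered by corpus likelihood (most common first).
vocab = {"True": 0, "False": 1, "the": 2, "a": 3, "of": 4}
embedding = rng.normal(size=(len(vocab), e))

def init_random_uniform():
    # Random init: sample uniformly from [-0.5, 0.5], as in the ablation.
    return rng.uniform(-0.5, 0.5, size=(prompt_len, e))

def init_sampled_vocab(top_k=5):
    # Sampled-vocab init: draw each prompt token from the most common tokens.
    ids = rng.integers(0, min(top_k, len(vocab)), size=prompt_len)
    return embedding[ids].copy()

def init_class_labels(labels):
    # Class-label init: one prompt token per label; multi-token labels are
    # averaged, and leftover positions fall back to the sampled-vocab strategy.
    rows = [embedding[[vocab[t] for t in lab.split()]].mean(axis=0)
            for lab in labels]
    filler = init_sampled_vocab()[: prompt_len - len(rows)]
    return np.vstack([np.array(rows), filler])

P_e = init_class_labels(["True", "False"])   # two labels, three filler tokens
```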
With ``class label'' initialization, we observe that the class labels typically persist in the learned prompts, such that the nearest token embeddings (in cosine distance) match the tokens used for initialization. Beyond this, we did not find our learned prompts to be interpretable, similar to those of \citet{shin-etal-2020-autoprompt}. See Section~\ref{sec:interpretability} for details. \paragraph{Pre-training Objective} In Figures~\ref{fig:ablate-pretrain} and \ref{fig:ablate-lm-steps}, we see pre-training objective has a clear effect on prompt tuning quality. As hypothesized in Section~\ref{sec:span_corruption}, T5's default ``span corruption'' objective is not well-suited for training frozen models to be later conditioned by prompts. Intuitively, models pre-trained to read and write sentinel tokens are hard to apply directly to tasks of reading and writing text without sentinels. As seen in Figure~\ref{fig:ablate-pretrain}, even the ``workaround'' of adding a sentinel to the downstream targets has little benefit. While LM adaptation adds value across all model sizes, we note our largest XXL model is the most forgiving and gives strong results even with span corruption. Given the benefit of LM adaptation, we also explore how long of an adaptation is helpful. Figure~\ref{fig:ablate-lm-steps} shows that longer adaptation provides additional gains, up to $100$K steps. This suggests that the ``transition'' from span corruption to a language modeling objective is not a trivial change, and making an effective switch takes an investment of training resources ($10$\% of the steps of the original T5 pre-training). At the same time, as in our other ablations, we observe that the XXL model is robust to even non-ideal configurations. At this size, the gains from adaptation are quite modest. In the non-optimal ``span corruption'' setting, we observe instability across model sizes, with the Small model outperforming the larger Base, Large, and XL models. 
On inspection, we find that for many tasks, these mid-sized models never learn to output a legal class label and thus score 0\%. The two most common error modes are copying sub-spans from the input and predicting an empty string. Furthermore, this poor performance is not due to random variance in prompt tuning, as we observe low variance across $3$ runs for each size. These results indicate that using models pre-trained with the ``span corruption'' objective can be unreliable, with only $2$ out of $5$ models working well, whereas the LM adapted versions work reliably across all model sizes. We have released T5.1.1 checkpoints adapted using the LM objective for $100$K steps for all model sizes.\footnote{\url{https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md##lm-adapted-t511lm100k}} \section{Comparison to Similar Approaches} \label{sec:previous_work} \begin{figure}[t] \centering \includegraphics[width=0.9\columnwidth]{figures/conditioning-size-vs-param-count.pdf} \caption{Parameter usage of various adaptation techniques, fixing architecture to T5.1.1 and prompt/prefix length to $1$--$100$ tokens (bands show mean and stddev). \textbf{Model Tuning}: All parameters are task-specific. \textbf{Prefix Tuning}: Activations are tuned in the prefix of each layer, requiring $0.1$--$1$\% task-specific parameters for inference, but more are used for training. \textbf{WARP}: Task parameters are reduced to under $0.1$\% by only tuning input and output layers. \textbf{Prompt Tuning}: Only prompt embeddings are tuned, reaching under $0.01$\% for most model sizes. \textbf{Prompt Design}: Only a sequence of prompt IDs ($500$--$2000$ tokens) is required.} \label{fig:param_counts} \end{figure} In this section, we review recent work on learning continuous prompts, and draw comparisons with our method.
One important axis of comparison is the number of task-specific parameters each method requires, as shown in Figure~\ref{fig:param_counts}. Among methods with learnable parameters, prompt tuning is the most parameter efficient, requiring less than $0.01$\% task-specific parameters for models over a billion parameters.\footnote{To compare with prompt design, we count each token ID in the prompt as a parameter, and assume a prompt of between $500$--$2000$ tokens to match the GPT-3 setting. While this technique is by far the most parameter efficient, it comes at the cost of task quality.} \citet{li_2021_prefix_tuning} propose ``prefix tuning'': learning a sequence of prefixes that are prepended at every transformer layer. This is akin to learning transformer activations that are fixed across examples at every network layer. In contrast, prompt tuning uses a single prompt representation that is prepended to the embedded input. Beyond requiring fewer parameters, our approach allows the transformer to update the intermediate-layer task representations, as contextualized by an input example. Their work builds on GPT-2 \cite{radford2019language} and BART \cite{lewis-etal-2020-bart}, while ours focuses on T5 and examines changes in performance and robustness to design choices as model size increases. When using BART, prefix tuning includes prefixes on both the encoder and decoder network, while prompt tuning only requires prompts on the encoder. \citet{li_2021_prefix_tuning} also rely on a reparameterization of the prefix to stabilize learning, which adds a large number of parameters during training, whereas our configuration does not require this reparameterization and is robust across SuperGLUE tasks and model sizes. \citet{hambardzumyan_2021_warp} propose ``WARP'', where prompt parameters are added to the input layer. This method works with masked language models, relying on a {\tt [MASK]} token and a learnable output layer to project the mask to class logits. 
This formulation restricts the model to producing a single output, limiting it to classification. Prompt tuning does not require any changes to the input or a task-specific head. The performance of prompt tuning is also considerably closer to the strong performance of model tuning. \citet{liu2021gpt} propose ``P-tuning'' where learnable continuous prompts are interleaved throughout the embedded input, using patterns based on human design. Our approach removes this complication by simply prepending the prompt to the input. To achieve strong SuperGLUE results, P-tuning has to be used in \emph{conjunction} with model tuning, that is, models jointly update both the prompt and the main model parameters, whereas our approach keeps the original language model frozen.\footnote{As another difference, P-tuning requires the addition of ``anchor'' tokens in the input (e.g.\ a question mark following the hypothesis in the RTE task) to achieve strong performance, while prompt tuning leaves inputs untouched.} \citet{qinLearningHowAsk2021} use ``soft words'' to learn prompts to extract knowledge from pre-trained LMs. Prompts are positioned in relation to the input based on hand-designed prompt prototypes, and a learned $\Delta_i^{\ell}$ parameter is included for each layer, so parameter cost scales with model depth. \citet{logeswaranFewshotSequenceLearning2020} use a learnable prepended token to adapt transformer models to various tasks, but focus on small synthetic datasets designed to accommodate a compositional task representation, as opposed to larger real-world datasets. Their base models are small transformers trained from scratch \emph{jointly} with the task representations, whereas we keep the base model frozen and investigate scaling laws using larger transformers. 
More generally, work on task prompts is closely aligned with work on ``adapters'' \cite{rebuffi_2017_adapters, houlsby_2019_adapters}, small bottleneck layers inserted \emph{between} frozen pre-trained network layers. Adapters offer another means of reducing task-specific parameters, with \citet{houlsby_2019_adapters} achieving GLUE performance close to full model tuning when freezing BERT-Large and only adding $2$--$4$\% additional parameters. \citet{pfeiffer-etal-2020-mad} use multiple adapters in a multilingual context to explicitly separate language understanding from task specification, similar to our approach. A core difference between adapters and prompt tuning is how the approaches change model behavior. Adapters modify the actual function that acts on the input representation, parameterized by the neural network, by allowing the rewriting of activations at any given layer. Prompt tuning modifies behavior by leaving the function fixed and adding new input representations that can affect how subsequent input is processed. \section{Resilience to Domain Shift} \label{sec:shift} \begin{table}[] \centering \footnotesize \resizebox{0.9\columnwidth}{!}{ \begin{tabular}{ll|ccr} \toprule \textbf{Dataset} & \textbf{Domain} & \textbf{Model} & \textbf{Prompt} & $\Delta$ \\ \midrule SQuAD & Wiki & 94.9 $\pm$0.2 & 94.8 $\pm$0.1 & $-$0.1 \\ \midrule TextbookQA & Book & 54.3 $\pm$3.7 & \textbf{66.8} $\pm$2.9 & +12.5 \\ BioASQ & Bio & 77.9 $\pm$0.4 & \textbf{79.1} $\pm$0.3 & +1.2 \\ RACE & Exam & 59.8 $\pm$0.6 & \textbf{60.7} $\pm$0.5 & +0.9 \\ RE & Wiki & 88.4 $\pm$0.1 & \textbf{88.8} $\pm$0.2 & +0.4 \\ DuoRC & Movie & \textbf{68.9} $\pm$0.7 & 67.7 $\pm$1.1 & $-$1.2 \\ DROP & Wiki & \textbf{68.9} $\pm$1.7 & 67.1 $\pm$1.9 & $-$1.8 \\ \bottomrule \end{tabular} } \caption{ F1 mean and stddev for models trained on SQuAD and evaluated on out-of-domain datasets from the MRQA 2019 shared task. 
Prompt tuning tends to give stronger zero-shot performance than model tuning, especially on datasets with large domain shifts like TextbookQA. } \label{tab:domain-squad} \end{table} By freezing the core language model parameters, prompt tuning prevents the model from modifying its general understanding of language. Instead, prompt representations indirectly modulate the representation of the input. This reduces the model's ability to overfit to a dataset by memorizing specific lexical cues and spurious correlations. This restriction suggests that prompt tuning may improve robustness to domain shifts, where the distribution of inputs differs between training and evaluation. We investigate zero-shot domain transfer on two tasks: question answering (QA) and paraphrase detection. For question answering, we use the MRQA 2019 shared task on generalization \cite{fisch2019mrqa}. This task collects extractive QA datasets in a unified format and tests how models trained on ``in-domain'' datasets perform when evaluated on ``out-of-domain'' datasets. For our experiments, we train on SQuAD \cite{rajpurkar-etal-2016-squad} and evaluate on each of the out-of-domain datasets.\footnote{We select checkpoints based on SQuAD validation F1. The out-of-domain datasets are TextbookQA \cite{8100054}, RACE \cite{lai-etal-2017-race}, BioASQ (\url{http://bioasq.org/}), RE \cite{levy-etal-2017-zero}, DuoRC \cite{saha-etal-2018-duorc}, and DROP \cite{dua-etal-2019-drop}.} Table~\ref{tab:domain-squad} shows that prompt tuning outperforms model tuning on the majority of out-of-domain datasets, with a remarkable $12.5$ point F1 gap between the two approaches on TextbookQA. We observe larger gains from prompt tuning in cases of larger domain shifts (e.g.~to Biomedical in BioASQ or to Textbooks in TextbookQA). Of the datasets where model tuning is better, we see that DROP shares a domain (Wikipedia) with SQuAD and is thus one of the smallest domain transfers. 
As a second test of robustness to domain shift, we explore transfer between two paraphrase detection tasks from GLUE \cite{wang2019glue}. The first task is QQP \cite{WinNT}, which asks if two questions from the community Q\&A site Quora are ``duplicates''. The second task is MRPC \cite{dolan2005automatically}, which asks if two sentences drawn from news articles are paraphrases. We test transfer in both directions (QQP $\Leftrightarrow$ MRPC). As before, we train on the ``in-domain'' task, select checkpoints using in-domain validation, and evaluate zero-shot on the ``out-of-domain'' task. \begin{table}[] \footnotesize \centering \resizebox{0.85\columnwidth}{!}{ \begin{tabular}{lll|cc} \toprule \textbf{Train} & \textbf{Eval} & \textbf{Tuning} & \textbf{Accuracy} & \textbf{F1} \\ \midrule QQP & MRPC & Model & 73.1 $\pm$0.9 & 81.2 $\pm$2.1 \\ & & Prompt & \textbf{76.3} $\pm$0.1 & \textbf{84.3} $\pm$0.3 \\ \midrule MRPC & QQP & Model & 74.9 $\pm$1.3 & \textbf{70.9} $\pm$1.2 \\ & & Prompt & \textbf{75.4} $\pm$0.8 & 69.7 $\pm$0.3 \\ \bottomrule \end{tabular} } \caption{ Mean and stddev of zero-shot domain transfer between two paraphrase detection tasks. } \label{tab:domain-paraphrase} \end{table} Table~\ref{tab:domain-paraphrase} shows that training a lightweight prompt on the QQP data and evaluating on MRPC gives much better performance than tuning the entire model (+$3.2$ accuracy and +$3.1$ F1). The results are much closer in the other direction, with prompt tuning showing a small improvement in accuracy and a small drop in F1\@. These results support the view that model tuning may be over-parameterized and more prone to overfit the training task, to the detriment of similar tasks in different domains. 
\section{Prompt Ensembling} \label{sec:ensemble} Ensembles of neural models trained from different initializations on the same data are widely observed to improve task performance \cite{hansen_1990_ensembles} and are useful for estimating model uncertainty \cite{lakshminarayanan_2017_deep_ensembles}. However, as model size increases, ensembling can become impractical. Beyond the space required to store $N$ models (e.g.\ $42$ GiB for each copy of T5-XXL), there is a substantial inference cost to running $N$ distinct models, whether in parallel or in series. Prompt tuning provides a more efficient way to ensemble multiple adaptations of a pre-trained language model. By training $N$ prompts on the same task, we create $N$ separate ``models'' for a task, while still sharing the core language modeling parameters throughout. Beyond drastically reducing storage costs, the prompt ensemble makes inference more efficient. To process one example, rather than computing forward passes of $N$ different models, we can execute a single forward pass with a batch size of $N$, replicating the example across the batch and varying the prompt. These savings mirror those seen for multi-tasking in Figure~\ref{fig:diagram}. \begin{table}[t] \setlength\tabcolsep{5pt} \centering \footnotesize \resizebox{\columnwidth}{!}{ \begin{tabular}{ll|ccc} \toprule \textbf{Dataset} & \textbf{Metric} & \textbf{Average} & \textbf{Best} & \textbf{Ensemble} \\ \midrule BoolQ & acc. & 91.1 & 91.3 & \textbf{91.7} \\ CB & acc./F1 & 99.3 / 99.0 & 100.0 / 100.0 & \textbf{100.0} / \textbf{100.0} \\ COPA & acc. & 98.8 & 100.0 & \textbf{100.0} \\ MultiRC & EM/F1$_a$ & 65.7 / 88.7 & 66.3 / 89.0 & \textbf{67.1} / \textbf{89.4} \\ ReCoRD & EM/F1 & 92.7 / 93.4 & 92.9 / 93.5 & \textbf{93.2} / \textbf{93.9} \\ RTE & acc. & 92.6 & \textbf{93.5} & \textbf{93.5} \\ WiC & acc. & 76.2 & 76.6 & \textbf{77.4} \\ WSC & acc.
& 95.8 & \textbf{96.2} & \textbf{96.2} \\ \midrule \multicolumn{2}{l}{SuperGLUE (dev)} & 90.5 & 91.0 & \textbf{91.3} \\ \bottomrule \end{tabular} } \caption{ Performance of a five-prompt ensemble built from a single frozen T5-XXL model exceeds both the average and the best among the five prompts.} \label{tab:ensemble} \end{table} To demonstrate the viability of prompt ensembling, we train five prompts for each SuperGLUE task, using a single frozen T5-XXL model with our default hyperparameters. We use simple majority voting to compute predictions from the ensemble. Table~\ref{tab:ensemble} shows that across all tasks, the ensemble beats the single-prompt average and beats, or matches, the best individual prompt. \section{Interpretability} \label{sec:interpretability} An ideally interpretable prompt would consist of natural language that clearly describes the task at hand, explicitly asks the model for some result or action, and makes it easy to understand why the prompt elicited such behavior from the model. As prompt tuning works in the continuous embedding space rather than the discrete token space, interpreting prompts becomes more difficult. To test the interpretability of our learned soft prompts, we compute the nearest neighbors to each prompt token from the frozen model's vocabulary. We use cosine distance between the vocabulary embedding vector and the prompt token representation as the similarity metric. We observe that for a given learned prompt token, the top-5 nearest neighbors form tight semantic clusters. For example, we see lexically similar clusters such as \{~\textit{Technology} / \textit{technology} / \textit{Technologies} / \textit{technological} / \textit{technologies}~\}, as well as more diverse but still strongly related clusters such as \{~\textit{entirely} / \textit{completely} / \textit{totally} / \textit{altogether} / \textit{100\%}~\}. 
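A minimal sketch of this nearest-neighbor probe, with a toy vocabulary and a synthetic ``learned'' prompt token standing in for the real model:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy stand-ins for the frozen vocabulary and a learned prompt token.
vocab = ["Technology", "technology", "Technologies", "entirely", "completely"]
e = 8
embedding = rng.normal(size=(len(vocab), e))
# Synthetic "learned" prompt token, placed near the embedding of "Technology".
prompt_token = embedding[0] + 0.1 * rng.normal(size=e)

def top_k_neighbors(v, embedding, k=3):
    # Rank vocabulary items by cosine similarity to the prompt token.
    sims = embedding @ v / (np.linalg.norm(embedding, axis=1) * np.linalg.norm(v))
    return [vocab[i] for i in np.argsort(-sims)[:k]]

neighbors = top_k_neighbors(prompt_token, embedding)
print(neighbors)  # "Technology" ranks first, since the token was built near it
```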
The nature of these clusters suggests that the prompts are in fact learning ``word-like'' representations. We found that random vectors drawn from the embedding space do not show this sort of semantic clustering. When initializing the prompts using the ``class-label'' strategy, we often find that the class labels persist through training. Specifically, if a prompt token is initialized to a given label, that label is often among the learned token's nearest neighbors after tuning. When initializing with the ``Random Uniform'' or ``Sampled Vocab'' methods, the class labels can also be found in the nearest neighbors of the prompts; however they tend to appear as neighbors to multiple prompt tokens. This suggests that the model is learning to store the expected output classes in the prompts as reference, and initializing the prompt to the output classes makes this easier and more centralized. When examining longer prompts (e.g.~size $100$), we often find several prompt tokens with the same nearest neighbors. This suggests there is either excess capacity in the prompt, or that the lack of sequential structure in the prompt representation makes it difficult for the model to localize information to a specific position. While the learned prompts taken as sequences show little interpretability, we do observe a high frequency of words like \textit{science}, \textit{technology} and \textit{engineering} as the nearest neighbors for prompts trained on the BoolQ dataset, and approximately $20$\% of the questions are in the ``Nature/Science'' category. While more investigation is needed, this suggests that one role of the prompt may be to prime the model to interpret inputs in a specific domain or context (e.g.~``scientific''). \section{Conclusion} In this paper, we showed that prompt tuning is a competitive technique for adapting frozen pre-trained language models to downstream tasks.
On the popular SuperGLUE benchmark, its task performance rivals that of traditional model tuning, with the gap vanishing as model size increases. On zero-shot domain transfer, we found that prompt tuning leads to improved generalization. This plausibly indicates that freezing general-purpose language understanding parameters and restricting downstream learning to a lightweight parameter footprint can help to avoid overfitting to a specific domain. Beyond task quality metrics, we discussed the appeal of moving to frozen pre-trained models in terms of storage and serving costs. This move enables both efficient multi-task serving, as well as efficient high-performing prompt ensembling. Looking forward, we believe that factoring out task-defining parameters as distinct from general language-modeling parameters is an exciting step that opens up many avenues for new research. \section*{Acknowledgements} We thank Lucas Dixon, Waleed Ammar, Slav Petrov and Sebastian Ruder for comments on an earlier draft, and the following people for helpful discussion: Colin Raffel, Adam Roberts, and Noam Shazeer. We thank Linting Xue for help with the LM adaptation training.
\section{Introduction}\label{sec1} Throughout this paper we will work over the complex number field. In this paper we deal with varieties of Calabi--Yau type. \begin{defn} Let $X$ be a normal projective variety. Then $X$ is {\em of Calabi--Yau type} if there is an $\mathbb{R}$-divisor $C\geq0$ such that $(X,C)$ is lc and $K_{X}+C\equiv 0$. \end{defn} The main result of this paper is the non-vanishing theorem for lc pairs whose underlying variety is of Calabi--Yau type. \begin{thm}\label{thmmain} Let $X$ be a normal projective variety. Suppose that $X$ is of Calabi--Yau type. Then, for any lc pair $(X,\Delta)$, the non-vanishing conjecture holds. In other words, if $K_{X}+\Delta$ is pseudo-effective, then there exists an $\mathbb{R}$-divisor $E\geq0$ such that $K_{X}+\Delta\sim_{\mathbb{R}}E$. \end{thm} Here we recall the statement of the non-vanishing conjecture. \begin{conj}[Non-vanishing]\label{conjnon} Let $(X,\Delta)$ be a projective lc pair such that $K_{X}+\Delta$ is pseudo-effective. Then there exists an $\mathbb{R}$-divisor $E\geq0$ such that $K_{X}+\Delta \sim_{\mathbb{R}}E$. \end{conj} Conjecture \ref{conjnon} is one of the most important open problems in the minimal model theory. It is known by Birkar \cite{birkar-existII} that Conjecture \ref{conjnon} implies the minimal model conjecture. Today Conjecture \ref{conjnon} is known for lc pairs of dimension $\leq3$, but the conjecture is only partially solved in the higher-dimensional case. For example, Conjecture \ref{conjnon} for lc pairs $(X,\Delta)$ of ${\rm dim}\,X\geq4$ is known when \begin{itemize} \item $(X,\Delta)$ is klt and $\Delta$ is big (cf.~\cite{bchm}), \item $(X,\Delta)$ is klt and $X$ is rationally connected (cf.~\cite{gongyo-nonvanishing}), or \item $K_{X}\equiv0$ (cf.~\cite{gongyo}, see also \cite{ckp}, \cite{kawamata}, \cite{ambro} and \cite{nakayama-zariski-decom}). 
\end{itemize} Moreover the arguments in \cite{gongyo-nonvanishing} and \cite{dhp} show that Conjecture \ref{conjnon} holds for any lc pair $(X,\Delta)$ such that ${\rm dim}\,X=4$ and $X$ is uniruled, though it is not written explicitly in their papers. Lazi\'c and Peternell proved Conjecture \ref{conjnon} for terminal $4$-folds under the assumption that $\chi(X,\mathcal{O}_{X})\neq0$ and $K_{X}$ has a singular metric with algebraic singularities and positive curvature current (cf.~\cite[Theorem B]{lazicpeter}). We note that the case $K_{X}\equiv0$ mentioned above is a special case of Theorem \ref{thmmain}. Indeed, when $K_{X}\equiv0$ in Theorem \ref{thmmain}, the statement of the theorem is equivalent to the abundance theorem for numerically trivial lc pairs and it is proved by Gongyo \cite{gongyo} (see also \cite{ckp} and \cite{kawamata}). Therefore, in view of Conjecture \ref{conjnon}, Theorem \ref{thmmain} can be regarded as a generalization of the result of \cite{gongyo}. The contents of this paper are as follows: In Section \ref{sec2} we collect some notations, definitions and important theorems. In Section \ref{sec3} we prove Theorem \ref{thmmain}. \begin{ack} The author was partially supported by JSPS KAKENHI Grant Number JP16J05875 from JSPS. The author thanks Professor Osamu Fujino for discussions and warm encouragement. He also thanks Professors Yoshinori Gongyo and Yusuke Nakamura for comments. \end{ack} \section{Preliminaries}\label{sec2} In this section we collect notations, definitions and some important theorems. \begin{say}[Singularities of pairs] A {\em pair} $(X,\Delta)$ consists of a normal variety $X$ and a boundary $\mathbb{R}$-divisor $\Delta$, that is, an $\mathbb{R}$-divisor whose coefficients belong to $[0,1]$, on $X$ such that $K_{X}+\Delta$ is $\mathbb{R}$-Cartier. Let $(X,\Delta)$ be a pair and let $D$ be a prime divisor over $X$. Then $a(D,X,\Delta)$ denotes the discrepancy of $D$ with respect to $(X,\Delta)$. 
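For the reader's convenience, we recall the simplest example of the discrepancy. Let $X$ be a smooth surface, let $f\!:\!Y\to X$ be the blow-up at a closed point $x\in X$ and let $E$ be the exceptional curve. For a boundary $\mathbb{R}$-divisor $\Delta$ on $X$ with $m={\rm mult}_{x}\,\Delta$, we have $$K_{Y}+f^{-1}_{*}\Delta=f^{*}(K_{X}+\Delta)+(1-m)E,$$ and hence $a(E,X,\Delta)=1-m$. In particular $a(E,X,0)=1$. 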
In this paper we use the definitions of Kawamata log terminal (klt, for short) pair, log canonical (lc, for short) pair and divisorially log terminal (dlt, for short) pair written in \cite{kollar-mori} or \cite{bchm}. \end{say} Next we define some models. \begin{defn}[Log birational model]\label{deflogbir} Let $\pi\!:\!X \to Z$ be a projective morphism from a normal variety to a variety and let $(X,\Delta)$ be an lc pair. Let $\pi'\!:\!X' \to Z$ be a projective morphism from a normal variety to $Z$ and let $\phi\!:\!X \dashrightarrow X'$ be a birational map over $Z$. Let $E$ be the reduced $\phi^{-1}$-exceptional divisor on $X'$, that is, $E=\sum E_{j}$ where $E_{j}$ are $\phi^{-1}$-exceptional prime divisors on $X'$. Then $(X', \Delta'=\phi_{*}\Delta+E)$ is called a {\em log birational model} of $(X,\Delta)$ over $Z$. \end{defn} \begin{defn}[Log minimal model and Mori fiber space]\label{deflogmin} With notation as in Definition \ref{deflogbir}, a log birational model $(X', \Delta')$ of $(X,\Delta)$ over $Z$ is a {\em weak log canonical model} ({\em weak lc model}, for short) if \begin{itemize} \item $K_{X'}+\Delta'$ is nef over $Z$, and \item for any prime divisor $D$ on $X$ which is exceptional over $X'$, we have $$a(D, X, \Delta) \leq a(D, X', \Delta').$$ \end{itemize} A weak lc model $(X',\Delta')$ of $(X,\Delta)$ over $Z$ is a {\em log minimal model} if \begin{itemize} \item $(X',\Delta')$ is $\mathbb{Q}$-factorial, and \item the above inequality on discrepancies is strict. \end{itemize} A log minimal model $(X',\Delta')$ of $(X, \Delta)$ over $Z$ is called a {\em good minimal model} if $K_{X'}+\Delta'$ is semi-ample over $Z$. 
On the other hand, a log birational model $(X', \Delta')$ of $(X,\Delta)$ over $Z$ is called a {\em Mori fiber space} if $X'$ is $\mathbb{Q}$-factorial and there is a contraction $X' \to W$ with ${\rm dim}\,W<{\rm dim}\,X'$ such that \begin{itemize} \item the relative Picard number $\rho(X'/W)$ is one and $-(K_{X'}+\Delta')$ is ample over $W$, and \item for any prime divisor $D$ over $X$, we have $$a(D,X,\Delta)\leq a(D,X',\Delta')$$ and strict inequality holds if $D$ is a divisor on $X$ and exceptional over $X'$. \end{itemize} \end{defn} \begin{defn}[Log smooth model]\label{deflogsm} Let $(X,\Delta)$ be an lc pair and let $f\!:\!Y \to X$ be a log resolution of $(X,\Delta)$. Let $\Gamma$ be a boundary $\mathbb{R}$-divisor on $Y$ such that $(Y,\Gamma)$ is log smooth. Then $(Y,\Gamma)$ is a {\em log smooth model} of $(X,\Delta)$ if we can write $$K_{Y}+\Gamma=f^{*}(K_{X}+\Delta)+F$$ with an effective $f$-exceptional divisor $F$ such that every $f$-exceptional prime divisor $E$ satisfying $a(E,X,\Delta)>-1$ is a component of $F$ and of $\Gamma-\llcorner \Gamma \lrcorner$. \end{defn} Our definitions of log minimal model and Mori fiber space are slightly different from those of \cite{birkar-flip}. The difference is that we do not assume those models to be dlt. But this difference is not essential (see \cite[Remark 2.7]{has-trivial}). In our definition, any weak lc model $(X',\Delta')$ of a $\mathbb{Q}$-factorial lc pair $(X,\Delta)$ constructed with the $(K_{X}+\Delta)$-MMP is a log minimal model of $(X,\Delta)$ even though $(X',\Delta')$ may not be dlt. The following theorem, proved by Birkar \cite{birkar-flip}, is used frequently (and often implicitly) in this paper. \begin{thm}[cf.~{\cite[Theorem 4.1]{birkar-flip}}]\label{thmtermi} Let $(X,\Delta)$ be a $\mathbb{Q}$-factorial lc pair such that $(X,0)$ is klt, and let $\pi\!:\!X \to Z$ be a projective morphism of normal quasi-projective varieties. 
If there exists a log minimal model of $(X,\Delta)$ over $Z$, then any $(K_{X}+\Delta)$-MMP over $Z$ with scaling of an ample divisor terminates. \end{thm} Next we recall the definition of the pseudo-effective threshold. \begin{defn} Let $(X,\Delta)$ be a projective lc pair and let $M\geq0$ be an $\mathbb{R}$-Cartier $\mathbb{R}$-divisor such that $K_{X}+\Delta+M$ is pseudo-effective. Then the {\em pseudo-effective threshold} of $M$ with respect to $(X,\Delta)$, denoted by $\tau(X,\Delta;M)$, is $$\tau(X,\Delta;M)={\rm inf}\{t\in \mathbb{R}_{\geq 0}\mid K_{X}+\Delta+tM {\rm \; is \; pseudo\mathchar`-effective}\}.$$ \end{defn} We close this section with two important theorems proved by Hacon, M\textsuperscript{c}Kernan and Xu \cite{hmx-acc}. \begin{thm}[{cf.~\cite[Theorem 1.1]{hmx-acc}}]\label{thmacclct} Fix a positive integer $n$, a set $I \subset [0,1]$ and a set $J\subset \mathbb{R}_{>0}$, where $I$ and $J$ satisfy the DCC. Let $\mathfrak{T}_{n}(I)$ be the set of lc pairs $(X,\Delta)$, where $X$ is a variety of dimension $n$ and the coefficients of $\Delta$ belong to $I$. Then the set $$\{ {\rm lct}(X,\Delta;M) \mid (X,\Delta) \in \mathfrak{T}_{n}(I), {\rm \; the \;coefficients \; of\; }M{\rm \;belong\; to\;}J \}$$ satisfies the ACC, where ${\rm lct}(X,\Delta;M)$ is the log canonical threshold of $M$ with respect to $(X,\Delta)$. \end{thm} \begin{thm}[{cf.~\cite[Theorem D]{hmx-acc}}]\label{thmglobalacc} Fix a positive integer $n$ and a set $I \subset [0,1]$, which satisfies the DCC. Then there is a finite set $I_{0}\subset I$ with the following property: If $(X,\Delta)$ is an lc pair such that \begin{enumerate} \item[(i)] $X$ is projective of dimension $n$, \item[(ii)] the coefficients of $\Delta$ belong to $I$, and \item[(iii)] $K_{X}+\Delta$ is numerically trivial, \end{enumerate} then the coefficients of $\Delta$ belong to $I_{0}$. \end{thm} \section{Proof of Theorem \ref{thmmain}}\label{sec3} In this section we prove Theorem \ref{thmmain}. 
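\begin{rem} Before starting the proof, we note a simple example illustrating the pseudo-effective threshold and the notion of Calabi--Yau type. Let $X=\mathbb{P}^{n}$, let $\Delta=0$ and let $M=H_{1}+\cdots+H_{n+1}$ be the sum of $n+1$ general hyperplanes. Then $(X,M)$ is lc and $K_{X}+M\sim0$, so $\mathbb{P}^{n}$ is of Calabi--Yau type. Moreover, for a hyperplane $H$ we have $$K_{X}+tM\sim_{\mathbb{R}}(t-1)(n+1)H,$$ which is pseudo-effective if and only if $t\geq1$. Hence $\tau(X,0;M)=1$. \end{rem} 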
\begin{lem}\label{lemdeform} Let $(X,B)$ be a projective lc pair. Let $\pi\!:\!(X,B)\to Z$ be a contraction to a normal projective variety $Z$ such that $K_{X}+B\sim_{\mathbb{R}}\pi^{*}D$ for some $D$ on $Z$. Then we can construct the following diagram $$ \xymatrix{ (X,B)\ar@{-->}[r] \ar[d]_{\pi}&(X_{0},B_{0})\ar^{\pi_{0}}[d]\\ Z&Z_{0}\ar^{h}[l] } $$ such that \begin{itemize}\item$\pi_{0}$ and $h$ are contractions and $h$ is birational, \item$(X_{0},B_{0})$ is a log birational model of $(X,B)$ and it is a projective $\mathbb{Q}$-factorial lc pair such that $(X_{0},0)$ is klt, \item$K_{X_{0}}+B_{0}\sim_{\mathbb{R}}\pi_{0}^{*}h^{*}D$, \item$Z_{0}$ is a projective $\mathbb{Q}$-factorial variety such that $(Z_{0},0)$ is klt, and\item$B_{0}=B'_{0}+B''_{0}$ with $B'_{0}\geq0$ and $B''_{0}\geq0$ such that $B''_{0}\sim_{\mathbb{R},\,Z_{0}}0$ and any lc center of $(X_{0},B'_{0})$ dominates $Z_{0}$. \end{itemize} \end{lem} \begin{proof} The idea of the proof can be found in \cite[proof of Lemma 4.5]{has-trivial}. We prove Lemma \ref{lemdeform} with three steps. \begin{step}\label{step1} In this step we construct a diagram $$ \xymatrix{ (X,B)\ar_{\pi}[d] \ar@{-->}[r]&(\overline{X},\overline{B})\ar^{\overline{\pi}}[d]\\ Z&\overline{Z} \ar^{\overline{h}}[l] } $$ such that \begin{enumerate} \item $\overline{\pi}$ and $\overline{h}$ are contractions and $\overline{h}$ is birational, \item $(\overline{X},\overline{B})$ is a log birational model of $(X,B)$ and it is a projective $\mathbb{Q}$-factorial lc pair such that $(\overline{X},0)$ is klt, \item $\overline{B}=\overline{B}'+\overline{B}''$ with $\overline{B}'\geq0$ and $\overline{B}''\geq0$ such that $\overline{B}''\sim_{\mathbb{R},\,\overline{Z}}0$ and any lc center of $(\overline{X},\overline{B}')$ dominates $\overline{Z}$, and \item $K_{\overline{X}}+\overline{B}\sim_{\mathbb{R}}\overline{\pi}^{*}\overline{h}^{*} D$. \end{enumerate} First take a dlt blow-up $(W,\Psi)\to (X,B)$ as in \cite[Corollary 2.14]{has-trivial}. 
Then we can decompose $\Psi=\Psi'+\Psi''$ with $\Psi'\geq0$ and $\Psi''\geq 0$ such that $\Psi''$ is vertical over $Z$ and any lc center of $(W,\Psi')$ dominates $Z$. Moreover we have $K_{W}+\Psi'+\Psi''\sim_{\mathbb{R},\,Z}0$. Since $(W,\Psi')$ is $\mathbb{Q}$-factorial and dlt, by \cite[Theorem 1.1]{has-mmp}, we can run the $(K_{W}+\Psi')$-MMP over $Z$ with scaling and get a good minimal model $(W,\Psi')\dashrightarrow (\overline{X},\overline{B}')$ over $Z$. Let $\overline{B}$ and $\overline{B}''$ be the birational transform of $\Psi$ and $\Psi''$ on $\overline{X}$ respectively. Then $\overline{B}=\overline{B}'+\overline{B}''$. Let $\overline{\pi}\!:\!\overline{X}\to \overline{Z}$ be the contraction over $Z$ induced by $K_{\overline{X}}+\overline{B}'$, and let $\overline{h}\!:\!\overline{Z}\to Z$ be the induced morphism. We can easily check that $(\overline{X},\overline{B}=\overline{B}'+\overline{B}'')$, $\overline{\pi}\!:\!\overline{X}\to \overline{Z}$ and $\overline{h}\!:\!\overline{Z}\to Z$ satisfy conditions (1), (2), (3) and (4). Indeed, it is easy to see that $\overline{\pi}$ and $\overline{h}$ satisfy condition (1). We also have $K_{\overline{X}}+\overline{B}\sim_{\mathbb{R}}\overline{\pi}^{*}\overline{h}^{*}D$, which is condition (4). Moreover, since $(X,B)$ is lc and since $K_{X}+B$ and $K_{\overline{X}}+\overline{B}$ are both $\mathbb{R}$-linearly equivalent to the pullback of $D$, we see that $(\overline{X},\overline{B})$ is lc. Now it is clear that $(\overline{X},\overline{B})$ satisfies condition (2). It is also clear that $\overline{B}''\sim_{\mathbb{R},\,\overline{Z}}0$ because $K_{\overline{X}}+\overline{B}'\sim_{\mathbb{R},\,\overline{Z}}0$. Finally we check that any lc center of $(\overline{X},\overline{B}')$ dominates $\overline{Z}$. Pick any prime divisor $P$ over $\overline{X}$ such that $a(P,\overline{X},\overline{B}')=-1$. Then $a(P,W,\Psi')=-1$ and thus $P$ dominates $Z$. 
Since $\overline{h}\!:\!\overline{Z}\to Z$ is birational, we see that $P$ dominates $\overline{Z}$. Therefore any lc center of $(\overline{X},\overline{B}')$ dominates $\overline{Z}$, and we see that $(\overline{X},\overline{B}=\overline{B}'+\overline{B}'')$ satisfies condition (3). So we complete this step. \end{step} \begin{step}\label{step2} We put $\overline{D}=\overline{h}^{*}\!D$. Then $K_{\overline{X}}+\overline{B}\sim_{\mathbb{R}}\overline{\pi}^{*}\overline{D}$ by construction. In this step we construct a diagram $$ \xymatrix{ (\overline{X},\overline{B})\ar_{\overline{\pi}}[d] \ar@{-->}[r]&(X_{0},B_{0})\ar^{\pi_{0}}[d]\\ \overline{Z}&Z_{0} \ar^{h_{0}}[l] } $$ with a projective $\mathbb{Q}$-factorial variety $Z_{0}$ such that $(Z_{0},0)$ is klt and \begin{enumerate} \item[(1$'$)] $\pi_{0}$ and $h_{0}$ are contractions and $h_{0}$ is birational, \item[(2$'$)] $(X_{0},B_{0})$ is a log birational model of $(\overline{X},\overline{B})$ and it is a projective $\mathbb{Q}$-factorial lc pair such that $(X_{0},0)$ is klt, \item[(3$'$)] $B_{0}=B_{0}'+B_{0}''$ with $B_{0}'\geq0$ and $B_{0}''\geq0$ such that $B''_{0}\sim_{\mathbb{R},\,Z_{0}}0$ and any lc center of $(X_{0},B'_{0})$ dominates $Z_{0}$, and \item[(4$'$)] $K_{X_{0}}+B_{0}\sim_{\mathbb{R}} \pi_{0}^{*}h_{0}^{*}\overline{D}$. \end{enumerate} By condition (3) in Step \ref{step1}, there exists an $\mathbb{R}$-divisor $\overline{T}\geq0$ on $\overline{Z}$ such that $\overline{B}''\sim_{\mathbb{R}}\overline{\pi}^{*}\overline{T}$. By condition (3) in Step \ref{step1} and \cite[Corollary 3.2]{fg-bundle}, there exists an $\mathbb{R}$-divisor $\overline{\Theta}\geq0$ on $\overline{Z}$ such that $(\overline{Z},\overline{\Theta})$ is klt. Let $h_{0}\!:\!Z_{0}\to \overline{Z}$ be a dlt blow-up of the klt pair $(\overline{Z},\overline{\Theta})$. Then $h_{0}$ is a small birational morphism and $Z_{0}$ is $\mathbb{Q}$-factorial. Let $\overline{\varphi}\!:\!\overline{W}\to \overline{X}$ be a log resolution of $(\overline{X},\overline{B}')$ such that the induced map $\pi_{\overline{W}}\!:\!\overline{W}\dashrightarrow Z_{0}$ is a morphism. 
We pick a boundary divisor $\Psi'_{\overline{W}}$ so that $(\overline{W},\Psi'_{\overline{W}})$ is a log smooth model of $(\overline{X},\overline{B}')$. Then we have \begin{equation*} \begin{split} K_{\overline{W}}+\Psi'_{\overline{W}}=&\overline{\varphi}^{*}(K_{\overline{X}}+\overline{B}')+E_{\overline{W}}\sim_{\mathbb{R}}\overline{\varphi}^{*}\overline{\pi}^{*}(\overline{D}-\overline{T})+E_{\overline{W}}\\ =&(h_{0}\circ\pi_{\overline{W}})^{*}(\overline{D}-\overline{T})+E_{\overline{W}} \end{split} \end{equation*} for a $\overline{\varphi}$-exceptional divisor $E_{\overline{W}}\geq0$. By construction of $\Psi'_{\overline{W}}$, for any $\overline{\varphi}$-exceptional prime divisor $E_{i}$ on $\overline{W}$, $E_{i}$ is a component of $E_{\overline{W}}$ if and only if $a(E_{i},\overline{X},\overline{B}')>-1$. We run the $(K_{\overline{W}}+\Psi'_{\overline{W}})$-MMP over $Z_{0}$ with scaling. By the argument of very exceptional divisors (cf.~\cite[Theorem 3.5]{birkar-flip}), after finitely many steps, $E_{\overline{W}}$ is contracted and thus we get a model $(\overline{W},\Psi'_{\overline{W}})\dashrightarrow(X_{0},B'_{0})$ such that $K_{X_{0}}+B_{0}'\sim_{\mathbb{R},\,Z_{0}}0$. Let $\pi_{0}\!:\!X_{0}\to Z_{0}$ be the induced morphism. Now we have the following diagram. $$ \xymatrix{ (\overline{X},\overline{B}')\ar_{\overline{\pi}}[d]&(\overline{W},\Psi'_{\overline{W}})\ar_{\overline{\varphi}}[l] \ar_{\pi_{\overline{W}}}[dr]\ar@{-->}[r]&(X_{0},B'_{0})\ar^{\pi_{0}}[d]\\ \overline{Z}&&Z_{0} \ar^{h_{0}}[ll]}$$ Moreover we have $K_{X_{0}}+B'_{0}\sim_{\mathbb{R}}\pi_{0}^{*}h_{0}^{*}(\overline{D}-\overline{T})$. Let $B''_{0}$ be the birational transform of $\overline{\varphi}^{*}\overline{B}''$ on $X_{0}$, and we put $B_{0}=B'_{0}+B''_{0}$. Recall that the divisor $\overline{T}$ on $\overline{Z}$ satisfies $\overline{B}''\sim_{\mathbb{R}}\overline{\pi}^{*}\overline{T}$. 
From now on we check that $(X_{0}, B_{0}=B'_{0}+B''_{0})$, $\pi_{0}\!:\!X_{0}\to Z_{0}$ and $h_{0}\!:\!Z_{0}\to \overline{Z}$ satisfy conditions (1$'$), (2$'$), (3$'$) and (4$'$). It is clear that $\pi_{0}$ and $h_{0}$ satisfy condition (1$'$). Moreover, since $B''_{0}\sim_{\mathbb{R}}\pi_{0}^{*}h_{0}^{*}\overline{T}$, we have $$ K_{X_{0}}+B_{0}=K_{X_{0}}+B'_{0}+B''_{0}\sim_{\mathbb{R}}\pi_{0}^{*}h_{0}^{*}(\overline{D}-\overline{T})+\pi_{0}^{*}h_{0}^{*}\overline{T}=\pi_{0}^{*}h_{0}^{*}\overline{D}. $$ Therefore $K_{X_{0}}+B_{0}$ satisfies condition (4$'$). Next pick any prime divisor $P$ over $X_{0}$ such that $a(P,X_{0},B'_{0})=-1$. Then $a(P,\overline{W},\Psi'_{\overline{W}})=-1$, and hence $a(P,\overline{X},\overline{B}')=-1$ because $(\overline{W},\Psi'_{\overline{W}})$ is a log smooth model of $(\overline{X},\overline{B}')$ (cf.~\cite[Remark 2.11]{has-mmp}). So $P$ dominates $\overline{Z}$ by condition (3) in Step \ref{step1}. Since $h_{0}\!:\!Z_{0}\to \overline{Z}$ is birational, $P$ dominates $Z_{0}$ and hence we see that any lc center of $(X_{0},B'_{0})$ dominates $Z_{0}$. Now we can easily check that $(X_{0}, B_{0}=B'_{0}+B''_{0})$ satisfies condition (3$'$). Finally we check condition (2$'$). We only check that $(X_{0},B_{0})$ is a log birational model of $(\overline{X},\overline{B})$ because others are easy. Note that $(X_{0},B_{0})$ is lc since $(\overline{X},\overline{B})$ is lc and since $K_{\overline{X}}+\overline{B}$ and $K_{X_{0}}+B_{0}$ are both $\mathbb{R}$-linearly equivalent to the pullback of $\overline{D}$. Let $E_{i}$ be a $\overline{\varphi}$-exceptional prime divisor on $\overline{W}$ such that $a(E_{i},\overline{X},\overline{B})>-1$. We show that $E_{i}$ is contracted by $\overline{W}\dashrightarrow X_{0}$. Since $a(E_{i},\overline{X},\overline{B}')\geq a(E_{i},\overline{X},\overline{B})>-1$ we see that $E_{i}$ is a component of $E_{\overline{W}}$. 
Then $E_{i}$ is contracted by $\overline{W}\dashrightarrow X_{0}$ since $E_{\overline{W}}$ is contracted by $\overline{W}\dashrightarrow X_{0}$. In this way we see that $(X_{0}, B_{0})$ is a log birational model of $(\overline{X},\overline{B})$. So $(X_{0},B_{0})$ satisfies condition (2$'$) and we complete this step. \end{step} \begin{step}\label{step3} Now we have constructed the following diagram $$ \xymatrix{ (X,B)\ar_{\pi}[d] \ar@{-->}[r]&(\overline{X},\overline{B})\ar_{\overline{\pi}}[d] \ar@{-->}[r]&(X_{0},B_{0})\ar^{\pi_{0}}[d]\\ Z&\overline{Z} \ar^{\overline{h}}[l]&Z_{0} \ar^{h_{0}}[l] } $$ satisfying conditions (1), (2), (3) and (4) in Step \ref{step1} and (1$'$), (2$'$), (3$'$) and (4$'$) in Step \ref{step2}, and furthermore $Z_{0}$ is $\mathbb{Q}$-factorial and $(Z_{0},0)$ is klt. We set $h=\overline{h}\circ h_{0}:Z_{0}\to Z$. By construction $h$ is birational, and it is clear that the following $$ \xymatrix{ (X,B)\ar_{\pi}[d] \ar@{-->}[r]&(X_{0},B_{0}=B'_{0}+B''_{0})\ar^{\pi_{0}}[d]\\ Z&Z_{0} \ar^{h}[l] } $$ is the desired diagram. So we are done. \end{step} \end{proof} \begin{rem} By construction of the diagram we see that the divisor $B''_{0}$ is reduced, i.e., all coefficients of $B''_{0}$ are one (cf.~\cite[Lemma 4.5]{has-trivial}). But we do not use this fact in this paper. \end{rem} \begin{lem}\label{lemlift} Let $\pi\!:\!(X,B)\to Z$ be a contraction such that \begin{itemize} \item $(X,B)$ is a projective $\mathbb{Q}$-factorial lc pair such that $(X,0)$ is klt, \item $K_{X}+B\sim_{\mathbb{R}}\pi^{*}D$ for some $\mathbb{R}$-Cartier $\mathbb{R}$-divisor $D$ on $Z$, \item $Z$ is a projective $\mathbb{Q}$-factorial variety such that $(Z,0)$ is klt, and \item $B=B'+B''$ with $B'\geq0$ and $B''\geq0$ such that $B''\sim_{\mathbb{R},\,Z}0$ and any lc center of $(X,B')$ dominates $Z$. \end{itemize} Let $T$ be an effective $\mathbb{R}$-divisor on $Z$ such that $B''\sim_{\mathbb{R}}\pi^{*}T$. 
If $D$ is pseudo-effective but $D-eT$ is not pseudo-effective for any $e>0$, then we can construct the following diagram $$ \xymatrix{ (X,B)\ar@{-->}[r]\ar_{\pi}[d]&(\widetilde{X},\widetilde{B})\ar^{\widetilde{\pi}}[d]\\ Z\ar@{-->}[r]&\widetilde{Z}\ar[r]&Z^{\vee} } $$ such that \begin{itemize} \item $(\widetilde{X},\widetilde{B})$ is projective $\mathbb{Q}$-factorial lc, $(\widetilde{X},0)$ is klt, $\widetilde{Z}$ is projective and $\mathbb{Q}$-factorial, $(\widetilde{Z},0)$ is klt, and $Z^{\vee}$ is normal and projective, \item the maps $X\dashrightarrow \widetilde{X}$ and $Z\dashrightarrow\widetilde{Z}$ are birational contractions, \item the morphism $\widetilde{Z}\to Z^{\vee}$ is a contraction such that $\rho(\widetilde{Z}/Z^{\vee})=1$ and ${\rm dim}\,Z^{\vee}<{\rm dim}\,\widetilde{Z}$, and \item $K_{\widetilde{X}}+\widetilde{B}\sim_{\mathbb{R}}\widetilde{\pi}^{*}\widetilde{D}$ and $\widetilde{D}\sim_{\mathbb{R},\,Z^{\vee}}0$. \end{itemize} Here the divisors $\widetilde{B}$ and $\widetilde{D}$ are the birational transform of $B$ on $\widetilde{X}$ and $D$ on $\widetilde{Z}$ respectively. \end{lem} \begin{proof} We can construct the desired diagram by the same argument as in \cite[Step 1 and 2 in the proof of Proposition 5.3]{has-trivial}. We write down the details for the reader's convenience. Let $\{e_{n}\}_{n\geq1}$ be a strictly decreasing sequence of positive real numbers such that $e_{n}<1$ for any $n$ and ${\rm lim}_{n\to \infty} e_{n}=0$. By \cite[Corollary 3.2]{fg-bundle}, for any $n\geq1$, we can find a boundary $\mathbb{R}$-divisor $\Theta_{n}$ such that $(Z,\Theta_{n})$ is klt and $$K_{X}+B-e_{n}B''\sim_{\mathbb{R}}\pi^{*}(D-e_{n}T)\sim_{\mathbb{R}}\pi^{*}(K_{Z}+\Theta_{n}).$$ Since $K_{Z}+\Theta_{n}\sim_{\mathbb{R}}D-e_{n}T$ is not pseudo-effective for any $n\geq1$, we can run the $(K_{Z}+\Theta_{n})$-MMP with scaling and obtain a Mori fiber space. 
Let $Z\dashrightarrow \widetilde{Z}_{n}$ be the birational contraction given by finitely many steps of the $(K_{Z}+\Theta_{n})$-MMP, and let $\widetilde{Z}_{n}\to Z_{n}^{\vee}$ be the contraction of the Mori fiber space. Let $\widetilde{D}_{n}$ and $\widetilde{T}_{n}$ be the birational transform of $D$ and $T$ on $\widetilde{Z}_{n}$ respectively. Since $K_{Z}+\Theta_{n}\sim_{\mathbb{R}}D-e_{n}T$ and since $D$ is pseudo-effective, we see that $\widetilde{D}_{n}-e_{n}\widetilde{T}_{n}$ is anti-ample over $Z_{n}^{\vee}$ and $\widetilde{T}_{n}$ is ample over $Z_{n}^{\vee}$. Furthermore, by applying the $\mathbb{R}$-boundary divisor version of \cite[Lemma 3.6]{has-trivial}, we have the following diagram $$ \xymatrix{ (X,B-e_{n}B'')\ar@{-->}[rr] \ar_{\pi}[d]&&(\widetilde{X}_{n},\widetilde{B}_{n}-e_{n}\widetilde{B}''_{n})\ar^{\pi_{n}}[d]\\ Z\ar@{-->}[rr]&&\widetilde{Z}_{n}\ar[r]&Z_{n}^{\vee} } $$ such that the upper horizontal birational map is a composition of finitely many steps of the $(K_{X}+B-e_{n}B'')$-MMP and $$K_{\widetilde{X}_{n}}+\widetilde{B}_{n}-e_{n}\widetilde{B}''_{n}\sim_{\mathbb{R}}\pi_{n}^{*}(\widetilde{D}_{n} -e_{n}\widetilde{T}_{n})\quad {\rm and}\quad \widetilde{B}''_{n}\sim_{\mathbb{R}}\pi_{n}^{*}\widetilde{T}_{n},$$ where $\widetilde{B}_{n}$ and $\widetilde{B}''_{n}$ are the birational transform of $B$ and $B''$ on $\widetilde{X}_{n}$. Now we apply Theorem \ref{thmacclct} to $\widetilde{X}_{n}$ and apply Theorem \ref{thmglobalacc} to the general fiber of $\widetilde{X}_{n}\to Z_{n}^{\vee}$. Then we see that for some $n$ the pair $(\widetilde{X}_{n},\widetilde{B}_{n})$ is lc and $K_{\widetilde{X}_{n}}+\widetilde{B}_{n}\sim_{\mathbb{R},\,Z_{n}^{\vee}}0$ (cf.~\cite[Step 2 in the proof of Proposition 5.3]{has-trivial}). We also see that $\widetilde{D}_{n}\sim_{\mathbb{R},\,Z_{n}^{\vee}}0$ because we have $K_{\widetilde{X}_{n}}+\widetilde{B}_{n}\sim_{\mathbb{R}}\pi_{n}^{*}\widetilde{D}_{n}$. For this $n$ we put $\widetilde{Z}=\widetilde{Z}_{n}$ and $Z^{\vee}=Z_{n}^{\vee}$. 
Then it is easy to see that the following $$ \xymatrix{ (X,B)\ar@{-->}[r]\ar_{\pi}[d]&(\widetilde{X},\widetilde{B})\ar[d]\\ Z\ar@{-->}[r]&\widetilde{Z}\ar[r]&Z^{\vee} } $$ is the desired diagram. \end{proof} \begin{proof}[Proof of Theorem \ref{thmmain}] By hypothesis there is an $\mathbb{R}$-divisor $C\geq0$ on $X$ such that $(X,C)$ is lc and $K_{X}+C\equiv0$. Then we have $K_{X}+C\sim_{\mathbb{R}}0$ by the abundance theorem for numerically trivial lc pairs. If $C=0$, then $K_{X}+\Delta\sim_{\mathbb{R}}\Delta\geq0$ and there is nothing to prove, so we may assume $C\neq0$. Moreover, Theorem \ref{thmmain} for $(X,\Delta)$ is equivalent to Theorem \ref{thmmain} for $(X,t\Delta+(1-t)C)$ for any $0<t\ll1$. So we will freely replace $(X,\Delta)$ with $(X,t\Delta+(1-t)C)$. By taking a dlt blow-up of $(X,C)$ and by replacing $(X,\Delta)$ with $(X,t\Delta+(1-t)C)$ for some $0<t\ll1$ we can assume $X$ is $\mathbb{Q}$-factorial and $(X,0)$ is klt. Since $C\neq0$, $K_{X}$ is not pseudo-effective, and thus $\tau(X,0;\Delta)>0$. Replacing $(X,\Delta)$ by $(X,\tau(X,0;\Delta)\Delta)$, we can assume that $\tau(X,0;\Delta)=1$. We prove Theorem \ref{thmmain} by induction on the dimension of $X$. \begin{step3}\label{step1non} By \cite[Lemma 3.1]{gongyo-nonvanishing}, we can construct a birational contraction $\phi\!:\!X\dashrightarrow X'$ and a contraction $X'\to Z'$ such that ${\rm dim}\,Z'<{\rm dim}\,X$, $(X',\phi_{*}\Delta)$ is lc and $K_{X'}+\phi_{*}\Delta\sim_{\mathbb{R},\,Z'}0$. Then $(X',\phi_{*}C)$ is also lc since $K_{X}+C\sim_{\mathbb{R}}0$. Take a log resolution $Y \to X$ of $(X,{\rm Supp}(\Delta+C))$ so that the induced map $f\!:\!Y\dashrightarrow X'$ is a morphism, and let $(Y,\Delta_{Y})$ and $(Y,C_{Y})$ be log smooth models of $(X,\Delta)$ and $(X,C)$ respectively. Since $K_{X}+C\sim_{\mathbb{R}}0$, we see that $K_{Y}+C_{Y}-f^{*}(K_{X'}+\phi_{*}C)$ is effective and $f$-exceptional. 
So we can run the $(K_{Y}+C_{Y})$-MMP over $X'$ and get a model $f'\!:\!(Y',C_{Y'})\to X'$ such that $K_{Y'}+C_{Y'}=f'^{*}(K_{X'}+\phi_{*}C)\sim_{\mathbb{R}}0.$ By construction $(Y',C_{Y'})$ is lc and $Y\dashrightarrow Y'$ is a composition of finitely many steps of the $(K_{Y}+t\Delta_{Y}+(1-t)C_{Y})$-MMP for any $0<t\ll1$. Fix a sufficiently small $t>0$ and set $\Gamma_{Y}=t\Delta_{Y}+(1-t)C_{Y}$. Let $\Gamma_{Y'}$ be the birational transform of $\Gamma_{Y}$ on $Y'$. Then we can write $$K_{Y'}+\Gamma_{Y'}=f'^{*}\bigl(K_{X'}+t\phi_{*}\Delta+(1-t)\phi_{*}C\bigr)+F$$ with an $f'$-exceptional divisor $F$. Note that $F$ may not be effective. Run the $(K_{Y'}+\Gamma_{Y'})$-MMP over $X'$ with scaling. By \cite[Theorem 3.5]{birkar-flip} we reach a model $f''\!:\!(Y'',\Gamma_{Y''})\to X'$ such that $$K_{Y''}+\Gamma_{Y''}=f''^{*}\bigl(K_{X'}+t\phi_{*}\Delta+(1-t)\phi_{*}C\bigr)+F_{Y''}$$ with $F_{Y''}\leq0$. Now we recall that $(X',\phi_{*}\Delta)$ and $(X',\phi_{*}C)$ are both lc. Combining this with the above equation we see that $(Y'',\Gamma_{Y''}-F_{Y''})$ is also lc. By construction we also have $K_{Y''}+\Gamma_{Y''}-F_{Y''}\sim_{\mathbb{R},\,Z'}0$. Since $-F_{Y''}\geq 0$ and $(Y'',0)$ is $\mathbb{Q}$-factorial klt, by \cite[Theorem 1.1]{has-mmp}, we can run the $(K_{Y''}+\Gamma_{Y''})$-MMP over $Z'$ and obtain a good minimal model $(Y'',\Gamma_{Y''})\dashrightarrow (Y''',\Gamma_{Y'''})$ over $Z'$. Let $\pi\!:\!Y'''\to Z$ be the contraction over $Z'$ induced by $K_{Y'''}+\Gamma_{Y'''}$, and let $C_{Y'''}$ be the birational transform of $C_{Y}$ on $Y'''$. Note that ${\rm dim}\,Z={\rm dim}\,Z'$ because $Z$ is birational to $Z'$. We also have $K_{Y'''}+\Gamma_{Y'''}\sim_{\mathbb{R},\,Z}0$ and $K_{Y'''}+C_{Y'''}\sim_{\mathbb{R}}0$. Furthermore, by construction, the birational map $Y\dashrightarrow Y'''$ is a composition of finitely many steps of the $(K_{Y}+\Gamma_{Y})$-MMP. Therefore we can replace $(X,\Delta)$ and $(X,C)$ by $(Y''',\Gamma_{Y'''})$ and $(Y''',C_{Y'''})$. 
In this way, to prove Theorem \ref{thmmain}, we can assume that there exists a contraction $\pi\!:\!X\to Z$ to a normal projective variety $Z$ such that ${\rm dim}\,Z<{\rm dim}\,X$ and $K_{X}+\Delta\sim_{\mathbb{R},\,Z}0$. \end{step3} \begin{step3}\label{step2non} We apply Lemma \ref{lemdeform} to $(X,C)\to Z$ (not $(X,\Delta)\to Z$) and obtain a diagram $$ \xymatrix{ (X,C)\ar@{-->}[r] \ar[d]_{\pi}&(X_{0},C_{0})\ar^{\pi_{0}}[d]\\ Z&Z_{0}\ar^{h}[l] } $$ such that \begin{itemize} \item $\pi_{0}$ and $h$ are contractions and $h$ is birational, \item $(X_{0},C_{0})$ is a log birational model of $(X,C)$ and it is a projective $\mathbb{Q}$-factorial lc pair such that $(X_{0},0)$ is klt, \item $K_{X_{0}}+C_{0}\sim_{\mathbb{R}}0$, \item $Z_{0}$ is a projective $\mathbb{Q}$-factorial variety and $(Z_{0},0)$ is klt, and \item $C_{0}=C'_{0}+C''_{0}$ with $C'_{0}\geq0$ and $C''_{0}\geq0$ such that $C''_{0}\sim_{\mathbb{R},\,Z_{0}}0$ and any lc center of $(X_{0},C'_{0})$ dominates $Z_{0}$. \end{itemize} Let $\varphi\!:\!W\to X$ and $\varphi_{0}\!:\!W\to X_{0}$ be a common resolution. We define a divisor $\Psi$ on $W$ by equation $K_{W}+\Psi=\varphi^{*}(K_{X}+\Delta)$ and set $\Delta_{0}=\varphi_{0*}\Psi$. Note that $\Delta_{0}$ may not be effective but $t\Delta_{0}+(1-t)C_{0}$ is effective for any $0<t\ll1$ because $(X_{0},C_{0})$ is a log birational model of $(X,C)$. By construction $K_{X_{0}}+\Delta_{0}\sim_{\mathbb{R},\,Z_{0}}0$ and any lc center of $(X_{0}, t\Delta_{0}+(1-t)C_{0})$ is an lc center of $(X_{0},C_{0})$. We can easily check that we can replace $(X,\Delta)\to Z$ and $(X,C)$ by $(X_{0},t\Delta_{0}+(1-t)C_{0})\to Z_{0}$ and $(X_{0},C_{0})$. 
Therefore we can assume that \begin{enumerate} \item[(i)] $Z$ is a projective $\mathbb{Q}$-factorial variety and $(Z,0)$ is klt, \item[(ii)] $C=C'+C''$ for some $C'\geq0$ and $C''\geq0$ such that $C''\sim_{\mathbb{R},\,Z}0$ and any lc center of $(X,C')$ dominates $Z$, and \item[(iii)] any lc center of $(X, \Delta)$ is an lc center of $(X,C)$. \end{enumerate} \end{step3} \begin{step3}\label{step3non} In this step we prove Theorem \ref{thmmain} for $(X,\Delta)$ when $C''=0$. In this case we have $C=C'$. By conditions (ii) and (iii) in Step \ref{step2non}, all lc centers of $(X,\Delta)$ and those of $(X,C)$ dominate $Z$. Therefore, by \cite[Corollary 3.2]{fg-bundle}, there exists $\Theta$ (resp.~$G$) on $Z$ such that $(Z,\Theta)$ is klt (resp.~$(Z,G)$ is klt) and $K_{X}+\Delta\sim_{\mathbb{R}}\pi^{*}(K_{Z}+\Theta)$ (resp.~$K_{X}+C\sim_{\mathbb{R}}\pi^{*}(K_{Z}+G)$). Then there is $E\geq0$ such that $K_{Z}+\Theta\sim_{\mathbb{R}}E$ by induction hypothesis. Thus we see that $K_{X}+\Delta\sim_{\mathbb{R}}\pi^{*}E$ and so we are done. \end{step3} \begin{step3}\label{step4non} By Step \ref{step3non} we can assume that $C''\neq0$. Then $K_{X}+C'\sim_{\mathbb{R}}-C''$ is not pseudo-effective, and hence $K_{X}+t\Delta+(1-t)C-(1-t)C''$ is not pseudo-effective for any $0<t\ll1$. Moreover any lc center of $(X, t\Delta+(1-t)C')$ is an lc center of $(X,C')$. We fix a sufficiently small $t>0$ and we replace $(X,\Delta)$ by $(X,t\Delta+(1-t)C)$. We also see that we can replace $C''$ by $(1-t)C''$ (at the same time $C'$ is replaced by $C'+tC''$). Therefore replacing $C''$ we can assume that $\Delta-C''\geq0$, $K_{X}+\Delta-C''$ is not pseudo-effective, and any lc center of $(X, \Delta-C'')$ is an lc center of $(X,C')$. Then by condition (ii) in Step \ref{step2non} any lc center of $(X, \Delta-C'')$ dominates $Z$. Now we put $\tau=\tau(X,\Delta-C'';C'')$, where the right hand side is the pseudo-effective threshold of $C''$ with respect to $(X,\Delta-C'')$. 
By construction we have $0<\tau\leq1$. Therefore we can replace $(X,\Delta)$ by $(X,\Delta-C''+\tau C'')$. We can also replace $C''$ with $\tau C''$ and replace $C'$ with $C'+(1-\tau) C''$. Note that any lc center of $(X,C'+(1-\tau) C'')$ is an lc center of $(X,C')$ because $\tau>0$ and $(X,C)$ is lc. In this way, by replacing those divisors, we can assume that \begin{itemize} \item $\Delta-C''\geq0$ and any lc center of $(X,\Delta-C'')$ dominates $Z$, and \item $K_{X}+\Delta-eC''$ is not pseudo-effective for any $e>0$. \end{itemize} In the rest of the proof we do not use $C'$. \end{step3} \begin{step3}\label{step5non} Pick divisors $D$ and $T$ on $Z$ such that $K_{X}+\Delta\sim_{\mathbb{R}}\pi^{*}D$ and $C''\sim_{\mathbb{R}}\pi^{*}T$ respectively. By Steps \ref{step1non}, \ref{step2non} and \ref{step4non}, $(X,\Delta)\to Z$ and $C''\neq0$ satisfy \begin{itemize} \item $(X,\Delta)$ is a projective $\mathbb{Q}$-factorial lc pair such that $(X,0)$ is klt, \item $K_{X}+\Delta\sim_{\mathbb{R}}\pi^{*}D$, \item $Z$ is a projective $\mathbb{Q}$-factorial variety such that $(Z,0)$ is klt, \item $\Delta-C''\geq0$, $C''\geq0$, $C''\sim_{\mathbb{R}}\pi^{*}T$ and any lc center of $(X,\Delta-C'')$ dominates $Z$, and \item $K_{X}+\Delta-eC''$ is not pseudo-effective for any $e>0$. 
\end{itemize} Therefore we can apply Lemma \ref{lemlift} and we can obtain the following diagram $$ \xymatrix{ (X,\Delta)\ar@{-->}[r]\ar_{\pi}[d]&(\widetilde{X},\widetilde{\Delta})\ar^{\widetilde{\pi}}[d]\\ Z\ar@{-->}[r]&\widetilde{Z}\ar[r]&Z^{\vee} } $$ such that \begin{itemize} \item $(\widetilde{X},\widetilde{\Delta})$ is a projective $\mathbb{Q}$-factorial lc pair, $\widetilde{Z}$ is projective and $\mathbb{Q}$-factorial, and $Z^{\vee}$ is a normal projective variety, \item the maps $X\dashrightarrow \widetilde{X}$ and $Z\dashrightarrow\widetilde{Z}$ are birational contractions, \item the morphism $\widetilde{Z}\to Z^{\vee}$ is a contraction such that $\rho(\widetilde{Z}/Z^{\vee})=1$ and ${\rm dim}\,Z^{\vee}<{\rm dim}\,\widetilde{Z}$, and \item $K_{\widetilde{X}}+\widetilde{\Delta}\sim_{\mathbb{R},\,Z^{\vee}}0$. \end{itemize} Here $\widetilde{\Delta}$ is the birational transform of $\Delta$ on $\widetilde{X}$. We take a log resolution $Y_{1}\to X$ of $(X,{\rm Supp}\,(\Delta+C))$ such that the induced map $Y_{1}\dashrightarrow \widetilde{X}$ is a morphism. Let $(Y_{1},\Delta_{Y_{1}})$ and $(Y_{1},C_{Y_{1}})$ be log smooth models of $(X,\Delta)$ and $(X,C)$ respectively. Then we can apply the argument of Step \ref{step1non} to $Y_{1}\to \widetilde{X}\to Z^{\vee}$ since $(\widetilde{X},\widetilde{\Delta})$ is lc and $K_{\widetilde{X}}+\widetilde{\Delta}\sim_{\mathbb{R},\,Z^{\vee}}0$. Thus we can get a contraction $Y'''_{1}\to Z_{1}$ over $Z^{\vee}$ and lc pairs $(Y'''_{1},\Gamma_{Y'''_{1}})$ and $(Y'''_{1},C_{Y'''_{1}})$ such that $K_{Y'''_{1}}+\Gamma_{Y'''_{1}}\sim_{\mathbb{R},\,Z_{1}}0$ and $K_{Y'''_{1}}+C_{Y'''_{1}}\sim_{\mathbb{R}}0$. Here $C_{Y'''_{1}}$ is the birational transform of $C_{Y_{1}}$ on $Y'''_{1}$ and $\Gamma_{Y'''_{1}}$ is the birational transform of $t\Delta_{Y_{1}}+(1-t)C_{Y_{1}}$ on $Y'''_{1}$ for a sufficiently small $t>0$. 
Furthermore we can check that we may replace $(X,\Delta)\to Z$ and $(X,C)$ by $(Y'''_{1},\Gamma_{Y'''_{1}})\to Z_{1}$ and $(Y'''_{1},C_{Y'''_{1}})$. For details, see the second paragraph of Step \ref{step1non}. We replace $(X,\Delta)\to Z$ by $(Y'''_{1},\Gamma_{Y'''_{1}})\to Z_{1}$. Then the dimension of $Z$ is strictly decreased. This is crucial to the proof. \end{step3} \begin{step3} From now on we repeat the argument of Steps \ref{step2non}--\ref{step5non}. By the same argument as in Step \ref{step2non}, we can assume $(X,\Delta)\to Z$ and $(X,C)$ satisfy conditions (i), (ii) and (iii) in Step \ref{step2non}. Then there are two possibilities: \begin{itemize} \item Theorem \ref{thmmain} holds for $(X,\Delta)$ (cf.~Step \ref{step3non}), or \item we can find a contraction $Y'''_{2}\to Z_{2}$ with ${\rm dim}\,Z_{2}<{\rm dim}\,Z$ and lc pairs $(Y'''_{2},\Gamma_{Y'''_{2}})$ and $(Y'''_{2},C_{Y'''_{2}})$ such that $K_{Y'''_{2}}+\Gamma_{Y'''_{2}}\sim_{\mathbb{R},\,Z_{2}}0$, $K_{Y'''_{2}}+C_{Y'''_{2}}\sim_{\mathbb{R}}0$ and Theorem \ref{thmmain} for $(X,\Delta)$ follows from Theorem \ref{thmmain} for $(Y'''_{2},\Gamma_{Y'''_{2}})$ (cf.~Steps \ref{step4non} and \ref{step5non}). \end{itemize} If we are in the first case we stop the argument. If we are in the second case we replace $(X,\Delta)\to Z$ by $(Y'''_{2},\Gamma_{Y'''_{2}})\to Z_{2}$ and repeat the argument of Steps \ref{step2non}--\ref{step5non}. Each time we replace $(X,\Delta)\to Z$ in the argument of Step \ref{step5non}, the dimension of $Z$ is strictly decreased. Therefore this procedure eventually terminates. Thus we can prove Theorem \ref{thmmain} and so we are done. \end{step3} \end{proof}
\section*{Introduction} This supplemental material provides, firstly, a detailed account of the bend distortions of a unit vector field, their geometric degeneracies and the topological information they carry. The results obtained apply to any material or physical system described (even in part) by such a unit vector, or line, field. In the same way that the geometry of twist is used primarily in the context of chiral phases, such as cholesteric liquid crystals, helimagnets and Beltrami flows, where the natural state is one of non-zero twist, we anticipate that the geometry of bend will be used primarily for phases where the natural state is one of non-zero bend. For this reason, we illustrate our general discussion with examples taken from a concrete, minimal model of such a system, the twist-bend nematic --- in the same way that cholesterics illustrate general features of the geometry of twist, we shall use the twist-bend nematic to illustrate general aspects of the geometry of bend. As such, the specific free energy we shall use to model the twist-bend nematic, drawn from recent literature \cite{shamid2013,jakli2018}, will be of subsidiary importance. The reader interested in the general structure of bend distortions may consult \S\S\ref{sec:general},\ref{sec:Local},\ref{sec:FrenetSerret},\ref{sec:Meron} and read these independently of any consideration of the twist-bend nematic, whose modelling and free energy are discussed briefly in \S\ref{sec:FreeEnergy}. The second purpose of this supplement is to provide additional graphical renderings and mathematical detail of the various defects in twist-bend nematics discussed in the main text --- of the screw and edge dislocations \S\S\ref{sec:Heliconical},\ref{subsec:screw},\ref{subsec:edge}, of Skyrmions and Skyrmion lattices \S\ref{sec:Skyrmion}, and of three-dimensional knotted merons \S\ref{sec:Meron}. 
In addition, we discuss two smectic-like defects not detailed in the main text, twist grain boundaries \S\ref{sec:TGB} and focal conics \S\ref{sec:FocalConic}. \subsection{Geometry of Orientational Order} \label{sec:general} The geometrical description of orientational order comes from a natural decomposition of the director gradients~\cite{machon2016,alexander2018,machon2019}. The director gives a canonical splitting of directions in space at each point, into those parallel to the director and those perpendicular to it. The latter define a two-dimensional vector space at every point, called the orthogonal plane field, $\xi$. Fig.~\ref{fig:planes} illustrates this splitting and the plane field $\xi$ at a single point in space (Fig.~\ref{fig:planes}(a)), for the cholesteric ground state (Fig.~\ref{fig:planes}(b)), and for a double-twist cylinder (Fig.~\ref{fig:planes}(c)). The local symmetry of the director field gives an action of a subgroup of the rotation group isomorphic to $SO(2)$ under which the director gradients naturally split as \begin{equation} \partial_i n_j = n_i (n_k \partial_k) n_j + \frac{\nabla\cdot{\bf n}}{2} \bigl( \delta_{ij} - n_i n_j \bigr) + \frac{{\bf n}\cdot\nabla\times{\bf n}}{2} \,\epsilon_{ijk} n_k + \Delta_{ij} . \label{Seq:director_gradients} \end{equation} The first term gives the derivatives parallel to the director field, $\nabla_{\parallel}{\bf n}$, and the remainder the orthogonal gradients, $\nabla_{\perp}{\bf n}$. The orthogonal gradients can be thought of as a linear transformation on the orthogonal plane field --- the shape operator for the director field --- defined by ${\bf v} \mapsto ({\bf v}\cdot\nabla){\bf n}$ for any orthogonal vector ${\bf v}$. 
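The decomposition~\eqref{Seq:director_gradients} can be checked numerically. The short Python sketch below (the test field, sample point and step size are illustrative assumptions, not taken from the text) reconstructs $\Delta_{ij}$ by subtracting the parallel, trace and antisymmetric parts from finite-difference gradients of a unit director field, and verifies that the residual is symmetric, traceless and annihilates ${\bf n}$ in both slots; it also confirms the identity $({\bf n}\cdot\nabla){\bf n}=-{\bf n}\times(\nabla\times{\bf n})$ and the pointwise norm identity $|\nabla{\bf n}|^{2}=|{\bf b}|^{2}+\tfrac{1}{2}(\nabla\cdot{\bf n})^{2}+\tfrac{1}{2}({\bf n}\cdot\nabla\times{\bf n})^{2}+|\Delta|^{2}$ obtained by squaring the decomposition.

```python
import math

def director(x, y, z):
    # illustrative smooth unit field (an assumption, not from the text):
    # normalise (-y, x, 1), which carries both twist and bend
    vx, vy, vz = -y, x, 1.0
    r = math.sqrt(vx * vx + vy * vy + vz * vz)
    return (vx / r, vy / r, vz / r)

def grad(p, h=1e-5):
    # G[i][j] = d n_j / d x_i by central differences
    G = [[0.0] * 3 for _ in range(3)]
    for i in range(3):
        pt = list(p); pt[i] += h; npl = director(*pt)
        pt = list(p); pt[i] -= h; nmi = director(*pt)
        for j in range(3):
            G[i][j] = (npl[j] - nmi[j]) / (2 * h)
    return G

def eps(i, j, k):
    # Levi-Civita symbol
    return (i - j) * (j - k) * (k - i) / 2

p = (0.3, -0.2, 0.5)
n = director(*p)
G = grad(p)
splay = sum(G[i][i] for i in range(3))
curl = [G[1][2] - G[2][1], G[2][0] - G[0][2], G[0][1] - G[1][0]]
twist = sum(n[i] * curl[i] for i in range(3))
bend = [sum(n[i] * G[i][j] for i in range(3)) for j in range(3)]

# Delta = full gradient minus the parallel, trace and antisymmetric parts
Delta = [[G[i][j] - n[i] * bend[j]
          - 0.5 * splay * ((i == j) - n[i] * n[j])
          - 0.5 * twist * sum(eps(i, j, k) * n[k] for k in range(3))
          for j in range(3)] for i in range(3)]

sym_err = max(abs(Delta[i][j] - Delta[j][i]) for i in range(3) for j in range(3))
trace_err = abs(sum(Delta[i][i] for i in range(3)))
proj_err = max(abs(sum(Delta[i][j] * n[i] for i in range(3))) for j in range(3)) \
         + max(abs(sum(Delta[i][j] * n[j] for j in range(3))) for i in range(3))

# bend identity: b = -n x (curl n) for a unit field
cross = [n[1] * curl[2] - n[2] * curl[1],
         n[2] * curl[0] - n[0] * curl[2],
         n[0] * curl[1] - n[1] * curl[0]]
cross_err = max(abs(bend[a] + cross[a]) for a in range(3))

# norm identity obtained by squaring the decomposition
norm_id_err = abs(sum(G[i][j] ** 2 for i in range(3) for j in range(3))
                  - (sum(b * b for b in bend) + 0.5 * splay ** 2 + 0.5 * twist ** 2
                     + sum(Delta[i][j] ** 2 for i in range(3) for j in range(3))))
```

The mutual orthogonality of the four parts is what makes the residual automatically symmetric, traceless and tangent to $\xi$, so these checks probe the decomposition itself rather than the particular test field.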
The first two terms in the orthogonal gradients are isotropic and contain the splay and twist distortions, while the last term, $\Delta_{ij}$, is the anisotropic part of the orthogonal gradients; it is a traceless, symmetric, linear transformation on $\xi$ that transforms as a spin $2$ object under the action of the local $SO(2)$ symmetry group. Its eigenvectors are the directions of principal curvature of the director field. Closely related to it is the linear transformation $\Pi_{ij} = \Delta_{il} \epsilon_{ljk} n_k$, the anisotropic part of the chirality pseudotensor, whose principal eigenvector defines the pitch axis in cholesterics and helimagnets~\cite{machon2016,alexander2018}. Of course, $\Delta$ and $\Pi$ are defined for any type of orientational order and not only for cholesterics, but in cholesterics where non-zero twist is energetically preferred they gain added significance. The defects in the pitch axis --- called $\lambda$ lines and readily visible under optical microscopy --- correspond to the zeros of $\Delta$ (and equivalently $\Pi$)~\cite{machon2016,alexander2018}. In the general case, the zeros of $\Delta$ (equivalently $\Pi$) are the umbilics of the director field, where the orthogonal gradients are locally isotropic. As these are zeros of a linear transformation on a vector space they carry topological information~\cite{machon2016,alexander2018,machon2019}; specifically, they identify the topology of the director field modulo elements of order $4$~\cite{machon2016,machon2016prsa,machon2019}. \begin{figure}[tb] \centering \includegraphics[width=0.8\linewidth]{planes.png} \caption{An illustration of the plane field $\xi$ associated to a director ${\bf n}$. (a) At each point, $\xi$ is defined to be the orthogonal plane (grey) to the director (blue). 
${\bf n}$ and $\xi$ are shown for (b) the cholesteric ground state and (c) a double-twist cylinder, whose axis is indicated by the green line.} \label{fig:planes} \end{figure} The parallel gradients, $\nabla_{\parallel}{\bf n}$, define the bend distortion ${\bf b} := ({\bf n}\cdot\nabla){\bf n} = - {\bf n} \times (\nabla\times{\bf n})$; geometrically, it is the curvature of the integral curves of the director field. The bend is a vector that is everywhere orthogonal to the director field, ${\bf b}\cdot{\bf n} = 0$, and is therefore a section of the orthogonal plane field $\xi$. As a section of a rank $2$ vector bundle, $\bf b$ has zeros of codimension $2$ which form one-dimensional curves within the texture. We call these curves $\beta$ lines; they are the central object of the present work. These $\beta$ lines are the locus of inflection points of the director integral curves; where an integral curve crosses a $\beta$ line, its curvature vanishes. $\beta$ lines furnish a geometric fingerprint of the director field, reflecting its geometric structure while also conveying topological information by representing the Poincar\'e dual to the Euler class of $\xi$; we shall see numerous examples of this in the following sections. \subsection{Free Energy} \label{sec:FreeEnergy} The description of director gradients given so far has been fully general, applying to any form of orientational order and, in fact, independent of any energetic considerations. However, energetic considerations are also important as they will constrain the facets of the geometry that are most important in determining the properties of different phases. 
The decomposition of director gradients~\eqref{Seq:director_gradients} leads immediately to the Frank free energy~\cite{Machon,machon2019} \begin{equation} F = \int \biggl\{ \frac{\tilde{K}_{1}}{2} \bigl( \nabla \cdot {\bf n} \bigr)^2 + \frac{\tilde{K}_{2}}{2} \bigl( {\bf n}\cdot\nabla\times{\bf n} \bigr)^2 + \frac{\tilde{K}_{3}}{2} \bigl| ({\bf n}\cdot\nabla) {\bf n} \bigr|^2 + \tilde{K}_{4} \bigl| \Delta \bigr|^2 \biggr\} dV , \label{Seq:Frank} \end{equation} where $|\Delta|^2 = \Delta_{ij} \Delta_{ij}$ and the $\tilde{K}_i$ are elastic moduli in terms of which the usual Frank constants are~\cite{Machon,machon2019,selinger2019} \begin{align} & K_1 = \tilde{K}_1 + \tilde{K}_4 , && K_2 = \tilde{K}_2 + \tilde{K}_4 , && K_3 = \tilde{K}_3 , && K_{24} = \tilde{K}_4 . \end{align} A lucid exposition of this approach to the Frank free energy along with insightful applications to interpreting liquid crystal textures is given in~\cite{selinger2019}. Beyond the nematic phase, different types of orientational order emphasise particular aspects of the geometry by energetically favouring a non-zero value for one of the four parts of the director gradients~\eqref{Seq:director_gradients}. The most familiar case is that of cholesterics where the twist term in the free energy~\eqref{Seq:Frank} becomes $\frac{1}{2} \tilde{K}_2 ({\bf n}\cdot\nabla\times{\bf n} + q_0)^2$, with $q_0$ the chirality. The other three parts of the decomposition~\eqref{Seq:director_gradients} do not provide scalar invariants of the director field (invariant under the nematic symmetry ${\bf n} \sim -{\bf n}$). 
Energetic terms promoting a non-zero value for these geometric distortions can be given in the Brazovskii form \begin{align} & \frac{\tilde{K}_1}{2} \Bigl( \bigl| \nabla \cdot {\bf n} \bigr|^2 - s_0^2 \Bigr)^2 , && \frac{\tilde{K}_3}{2} \Bigl( \bigl| ({\bf n} \cdot \nabla) {\bf n} \bigr|^2 - b_0^2 \Bigr)^2 , && \tilde{K}_4 \Bigl( \bigl| \Delta \bigr|^2 - \Delta_0^2 \Bigr)^2 , \end{align} where $s_0$ is the preferred magnitude of the splay, $b_0$ is the preferred magnitude of the bend and $\Delta_0$ is the preferred magnitude of the anisotropic part of the orthogonal gradients, although these are far from the most general expressions. An alternative description creates non-zero values for the splay, bend or anisotropic orthogonal gradients by introducing auxiliary fields and couplings of the form \begin{align} & -\lambda \bigl( n_i \partial_j n_j \bigr) p_i , && -\lambda \bigl( n_j \partial_j n_i \bigr) p_i , && -\lambda \Delta_{ij} T_{ij} , \end{align} respectively. This is the approach originally suggested by Meyer for describing spontaneously modulated splay and bend phases and adopted by the Kent State group~\cite{shamid2013}. The relationship between the two approaches has been described in the recent review~\cite{jakli2018}. In the case of the twist-bend nematic the free energy can be taken to have the form \begin{equation} F = \int \biggl\{ \frac{\tilde{K}_1}{2} \bigl( \nabla \cdot {\bf n} \bigr)^2 + \frac{\tilde{K}_2}{2} \bigl( {\bf n} \cdot \nabla \times {\bf n} \bigr)^2 + \frac{\tilde{K}_3}{2} \bigl| ({\bf n}\cdot\nabla) {\bf n} \bigr|^2 + \tilde{K}_4 \bigl| \Delta \bigr|^2 - \lambda \bigl[ ({\bf n}\cdot\nabla) {\bf n} \bigr] \cdot {\bf p} + \frac{C}{2} \bigl| \nabla {\bf p} \bigr|^2 + \frac{U}{4} \bigl( 1 - |{\bf p}|^2 \bigr)^2 \biggr\} dV , \label{Seq:TB_energy} \end{equation} where $C$ is an elastic modulus for the auxiliary polarisation field ${\bf p}$ and $U$ sets the scale of its bulk ordering energy. 
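To make the energetics concrete, the sketch below restricts the free energy~\eqref{Seq:TB_energy} to the heliconical ansatz in the one-elastic-constant approximation with $U\to\infty$, so that $|{\bf p}|=1$ and ${\bf p}$ aligns with the bend direction; for that ansatz the energy density reduces to $f(\theta,q)=\tfrac{K}{2}q^{2}\sin^{2}\theta-\lambda q\sin\theta\cos\theta+\tfrac{C}{2}q^{2}$. The moduli are sample values (an illustrative assumption), and the numerical minimiser reproduces the closed-form cone angle and wavevector quoted in \S\ref{sec:Heliconical}.

```python
import math

K, C, lam = 1.0, 0.3, 0.5   # sample moduli (illustrative values, not from the text)

def qstar(theta):
    # optimal wavevector at fixed cone angle, from df/dq = 0
    return lam * math.sin(theta) * math.cos(theta) / (K * math.sin(theta) ** 2 + C)

def f_min(theta):
    # energy density evaluated at the optimal q for this theta
    q = qstar(theta)
    return (0.5 * K * q * q * math.sin(theta) ** 2
            - lam * q * math.sin(theta) * math.cos(theta)
            + 0.5 * C * q * q)

# one-dimensional grid search over the cone angle
N = 20000
thetas = [i * (math.pi / 2) / N for i in range(1, N)]
theta_num = min(thetas, key=f_min)
q_num = qstar(theta_num)

# closed-form minimiser quoted in the heliconical section:
# cos(2 theta_0) = 1 + 2C/K - sqrt(4C/K (1 + C/K)),  q = (2 lam / K) cot(2 theta_0)
cos2t = 1 + 2 * C / K - math.sqrt(4 * C / K * (1 + C / K))
theta_cf = 0.5 * math.acos(cos2t)
q_cf = (2 * lam / K) * cos2t / math.sin(2 * theta_cf)
```

The grid minimum agrees with the closed form to the grid resolution, confirming that the bend--polarisation coupling alone, balanced against the Frank and polarisation stiffnesses, selects a finite cone angle and pitch.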
In ordinary nematics (and cholesterics) the Frank free energy is often simplified by adopting a one-elastic-constant approximation, replacing the four Frank elastic terms with the single term $\frac{K}{2} |\nabla{\bf n}|^2$, and we adopt this reduced form for simplicity in generating numerical examples. Our focus here is on geometric and topological properties of the director field, which are largely insensitive to the exact form of the free energy and have general applicability for typical values of material constants. \section*{Basic Examples of Bend Geometry and $\beta$ Lines} In the following sections we detail the construction of the basic defects in twist-bend nematics discussed in the main text, as well as give additional graphical renderings of these textures from different perspectives and with different features emphasised. The heliconical ground state of the twist-bend nematic has one-dimensional periodic spatial modulation. On scales large compared to the heliconical pitch, its elastic deformations and hydrodynamic modes are the same as those of a smectic~\cite{kamien1996,parsouzi2016,meyer2016}, as is the case also for cholesterics~\cite{radzihovsky2011}. The polarisation is a non-hydrodynamic mode~\cite{parsouzi2016}. As such, many calculations from the literature on smectics can be applied directly to give a coarse description of the energetics of defects and textures in twist-bend nematics, when the latter are closely similar to known smectic textures. Our focus will be on describing these states from the twist-bend perspective where they may be visualised as disruptions to the family of helices which make up the director integral curves. We begin with a recapitulation of the geometry of the heliconical ground state, \S\ref{sec:Heliconical}, before discussing the two basic smectic-like defects introduced in the main text --- screw (\S\ref{subsec:screw}) and edge (\S\ref{subsec:edge}) dislocations. 
We also detail two more complex smectic-like defects not presented in the main text, grain boundary phases (\S\ref{sec:TGB}) and focal conics (\S\ref{sec:FocalConic}). We then describe examples related to isolated Skyrmions and Skyrmion lattices (\S\ref{sec:Skyrmion}), providing enlarged renderings of these textures to convey their complex structure. \subsection{Heliconical State} \label{sec:Heliconical} \begin{figure}[tb] \centering \includegraphics[width=0.99\linewidth]{SIFigureHeliconical.pdf} \caption{The heliconical texture. (a) The director ${\bf n}$ makes a constant angle $\theta_0 \in [0,\pi/2]$ with the $z$ axis and rotates with wavevector $q$ in the $xy$ plane. The bend $\bf b$ lies in the $xy$ plane, again rotating with wavevector $q$. (b) The integral curves of the director are helices; orange surfaces indicate a full turn of the helix, with pitch $2\pi/q$. (c) The director field fills space, giving a family of interlocking integral helices --- three such helices are shown in grey.} \label{fig:Heliconical} \end{figure} The heliconical state can be given the following purely geometrical description. It is characterised by having a bend distortion of constant non-zero magnitude. The bend is the curvature of the director integral curves; curves with constant curvature and torsion are helices. Taking the helical axis to be $z$, a general helical integral curve can be written \begin{equation} {\bf X}(z) = x_0 \,{\bf e}_x + y_0 \,{\bf e}_y + z \,{\bf e}_z + \frac{\tan\theta_0}{q} \Bigl[ \sin qz \,{\bf e}_x + (1 - \cos qz) \,{\bf e}_y \Bigr] , \label{Seq:helices} \end{equation} where $x_0, y_0$ are constants corresponding to the point in the $xy$-plane that the helix passes through. 
The helix has curvature $q \sin\theta_0 \cos\theta_0$ and torsion $q \cos^2\theta_0$; the unit tangent gives the director field of a heliconical state \begin{equation} {\bf n} = \cos\theta_0 \,{\bf e}_z + \sin\theta_0 \bigl[ \cos qz \,{\bf e}_x + \sin qz \,{\bf e}_y \bigr] , \label{Seq:heliconical} \end{equation} where $q$ is the helical wavevector and $\theta_0$ is the constant cone angle the director makes with the heliconical pitch axis --- here the $z$-axis. As $\theta_0 \to 0$ the director limits to the uniform orientation ${\bf e}_z$, with straight integral curves. When $\theta_0 \to \pi/2$ we recover the cholesteric ground state, and again the integral curves are straight lines, which now rotate uniformly as one moves along $z$. In Fig.~\ref{fig:Heliconical} we show the heliconical texture \eqref{Seq:heliconical} and its helical integral curves \eqref{Seq:helices} for a generic cone angle, intermediate between these two extremes. To analyse the director gradients we introduce the basis \begin{align} {\bf s}_1 & = - \sin qz \,{\bf e}_x + \cos qz \,{\bf e}_y , \\ {\bf s}_2 & = \sin\theta_0 \,{\bf e}_z - \cos\theta_0 \bigl[ \cos qz \,{\bf e}_x + \sin qz \,{\bf e}_y \bigr] , \end{align} of the orthogonal planes $\xi$; these correspond to the normal and binormal vectors in the Frenet-Serret frame of the helical integral curves~\eqref{Seq:helices}. The director gradients are \begin{equation} \nabla {\bf n} = q \sin\theta_0 \cos\theta_0 \,{\bf n}\otimes{\bf s}_1 - \frac{q\sin^2\theta_0}{2} \bigl[ {\bf s}_1 \otimes {\bf s}_2 - {\bf s}_2 \otimes {\bf s}_1 \bigr] + \frac{q\sin^2\theta_0}{2} \bigl[ {\bf s}_1 \otimes {\bf s}_2 + {\bf s}_2 \otimes {\bf s}_1 \bigr] , \end{equation} and we can read off that the bend is ${\bf b} = ({\bf n}\cdot\nabla){\bf n} = q\sin\theta_0 \cos\theta_0 \,{\bf s}_1$, the splay is $\nabla\cdot{\bf n} = 0$ and the twist is ${\bf n}\cdot\nabla\times{\bf n} = -q\sin^2\theta_0$. 
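These values can be confirmed by direct numerical differentiation. In the sketch below (the cone angle, wavevector and sample point are illustrative choices) the splay, twist and bend of the heliconical director~\eqref{Seq:heliconical} are computed by central differences and compared with the expressions just read off.

```python
import math

theta0, q = 0.4, 2.0    # sample cone angle and wavevector (illustrative values)

def director(x, y, z):
    # heliconical ansatz: n = cos(theta0) e_z + sin(theta0)(cos qz, sin qz, 0)
    return (math.sin(theta0) * math.cos(q * z),
            math.sin(theta0) * math.sin(q * z),
            math.cos(theta0))

def grad(p, h=1e-5):
    # G[i][j] = d n_j / d x_i by central differences
    G = [[0.0] * 3 for _ in range(3)]
    for i in range(3):
        a = list(p); a[i] += h; npl = director(*a)
        a = list(p); a[i] -= h; nmi = director(*a)
        for j in range(3):
            G[i][j] = (npl[j] - nmi[j]) / (2 * h)
    return G

p = (0.1, -0.7, 0.25)
n = director(*p)
G = grad(p)
splay = sum(G[i][i] for i in range(3))
curl = [G[1][2] - G[2][1], G[2][0] - G[0][2], G[0][1] - G[1][0]]
twist = sum(n[i] * curl[i] for i in range(3))
bend = [sum(n[i] * G[i][j] for i in range(3)) for j in range(3)]
bend_mag = math.sqrt(sum(b * b for b in bend))
# expected: splay = 0, twist = -q sin^2(theta0), |b| = q sin(theta0) cos(theta0)
```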
The anisotropic orthogonal gradients are \begin{align} & \Delta = \frac{q\sin^2\theta_0}{2} \bigl[ {\bf s}_1 \otimes {\bf s}_2 + {\bf s}_2 \otimes {\bf s}_1 \bigr] , && \Pi = - \frac{q\sin^2\theta_0}{2} \bigl[ {\bf s}_1 \otimes {\bf s}_1 - {\bf s}_2 \otimes {\bf s}_2 \bigr] , \end{align} and from the linear transformation $\Pi$ we can read off that the cholesteric pitch axis is ${\bf s}_2$. We note that this is not the same as the heliconical pitch axis ($z$-axis); we explain how to identify the latter in \S\ref{sec:FrenetSerret}. This description has emphasised the geometry of the heliconical state, independent of specific energetic considerations. Several free energies have been developed that have the heliconical director~\eqref{Seq:heliconical} as a ground state; some examples include~\cite{kamien1996,dozov2001,shamid2013,kats2014,pajak2018}. For the free energy~\eqref{Seq:TB_energy}, taking the limit $U\to\infty$ (which enforces $|{\bf p}|=1$), the preferred values of the heliconical cone angle $\theta_0$ and wavevector $q$ are \begin{align} & \cos 2 \theta_0 = 1+ \frac{2C}{K} - \biggl[ \frac{4C}{K} \biggl( 1+ \frac{C}{K} \biggr) \biggr]^\frac{1}{2} , && q = \frac{2 \lambda}{K} \cot 2 \theta_0 . \end{align} \subsection{Screw Dislocations} \label{subsec:screw} \begin{figure}[tbp] \centering \includegraphics[width=0.8\linewidth]{SIFigure_Screw.pdf} \caption{Screw dislocations in twist-bend nematics. Panels (a,b) show $+1,-1$ strength screw dislocations respectively. The $\beta$ line along the $z$ axis is shown in green. (a,b)(i) Helical phase field on three $z$ slices, with $\phi=0$ level set shown in orange. (a,b)(ii) Zoomed out view of $\phi = 0$ level set, showing equispaced layers away from the screw dislocation. (a,b)(iii) Director integral curves (blue) with their intersection with $\phi=0$ shown as black dots. (a,b)(iv) Top down view of integral curves, with bend vector (orange) shown on a $z=0$ slice. 
(a,b)(v) Perspective view of integral curves and their bend vector, showing periodic variation of the bend along the screw. (a,b)(vi) Degeneration of the integral curves to a straight line, which is also the $\beta$ line, as we approach the $z$ axis.} \label{fig:Screw} \end{figure} The one-dimensional periodicity of the heliconical phase leads to a general correspondence with the elasticity of smectics and so a description in terms of `smectic-like' phase fields. The heliconical phase $\phi = qz$ in \eqref{Seq:heliconical} is the same as the phase in the mass-density wave of the smectic ground state. Other smectic phase fields --- corresponding to screw dislocations, edge dislocations, TGB phases, focal conics etc. --- lend themselves to analogous twist-bend states with the same helical phase field and provide examples of smectic-like defects in twist-bend nematics. We emphasise at the outset, however, that this is merely one class of defect in twist-bend nematics; the Skyrmion-type textures we describe later are not derived in this way from a smectic counterpart. Our first example of a smectic-like defect is the screw dislocation, for which we consider the texture \begin{equation} {\bf n} = \cos\theta(\rho) \,{\bf e}_z + \sin\theta(\rho) \bigl[ \cos\phi \,{\bf e}_x + \sin\phi \,{\bf e}_y \bigr] , \label{eq:Screw} \end{equation} where $\phi = qz + s\arctan(y/x)$, with $s = \pm 1, \pm 2, \dots $ the defect strength, and $\theta(\rho)$ interpolates smoothly from $0$ at the origin to the heliconical far field angle as $\rho := \sqrt{x^2+y^2} \to \infty$. In Fig.~\ref{fig:Screw} we show these textures for $s= +1,-1$ in panels (a,b) respectively. The phase field $\phi$ contains a smectic screw dislocation along the $z$ axis such that around any positively oriented loop in the $xy$-plane encircling the axis $\phi$ winds by $2\pi s$. 
This is shown by the winding colour map in Figs.~\ref{fig:Screw}(a,b)(i), which also show the level set $\phi = 0$ as an orange surface; this surface corresponds to the layers of a smectic screw dislocation. Note the difference in the sense of rotation between panels (a) and (b). Figs.~\ref{fig:Screw}(a,b)(ii) show the same level set $\phi =0$ but zoomed out, emphasising that away from the screw dislocation we simply have equally spaced layers, $\phi \approx qz$. In Fig.~\ref{fig:Screw}(a,b)(iii) we add integral curves of the director, with their intersection with the $\phi=0$ surface indicated by black points; in the limit $\rho \rightarrow \infty$ the integral curves are exactly helices and the marked points are locations along the integral curves of the same `helical phase'. The screw dislocation corresponds to a $2\pi s$ `phase slip', as can be seen in Fig.~\ref{fig:Screw}(a,b)(iv) in which we show a top down view of the integral curves alongside the phase $\phi$ on the $xy$ plane. The bend of \eqref{eq:Screw} is \begin{equation} {\bf b} = ({\bf n} \cdot \nabla \theta)\left[ \cos\theta \left( \cos \phi \,{\bf e}_x + \sin \phi \,{\bf e}_y \right) - \sin\theta \,{\bf e}_z \right] + ({\bf n} \cdot \nabla \phi)\sin\theta\left[-\sin\phi \,{\bf e}_x + \cos \phi \,{\bf e}_y \right]. \label{eq:Bend} \end{equation} We first consider its far field behaviour. As $\rho \rightarrow \infty$, $\nabla \theta \rightarrow 0$ and \eqref{eq:Bend} becomes \begin{equation} {\bf b} = q \cos\theta_0 \sin\theta_0 \left[ -\sin\phi \,{\bf e}_x + \cos \phi \,{\bf e}_y \right], \label{eq:BendFarField} \end{equation} exactly the heliconical bend but with $qz \rightarrow \phi = qz + s\arctan(y/x)$. We conclude that the bend winds as $\phi$, and so there is a $2\pi s$ winding of the bend vector about the origin. This winding is shown in Fig.~\ref{fig:Screw}(a,b)(iv). 
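The $2\pi s$ winding of the bend around the screw can be verified numerically from the texture~\eqref{eq:Screw}. In the sketch below the core profile $\theta(\rho)=\theta_{0}\tanh\rho$ and all parameter values are illustrative assumptions; the bend is computed by central differences and the rotation of its in-plane components is accumulated around a circle enclosing the axis.

```python
import math

theta0, q = 0.4, 2.0          # sample far-field cone angle and wavevector

def director(x, y, z, s):
    # screw texture: phi = q z + s atan2(y, x), theta = theta0 tanh(rho)
    rho = math.hypot(x, y)
    th = theta0 * math.tanh(rho)       # illustrative core profile
    phi = q * z + s * math.atan2(y, x)
    return (math.sin(th) * math.cos(phi),
            math.sin(th) * math.sin(phi),
            math.cos(th))

def bend(x, y, z, s, h=1e-5):
    # b_j = n_i d_i n_j by central differences
    n = director(x, y, z, s)
    b = [0.0, 0.0, 0.0]
    for i, d in enumerate(((h, 0, 0), (0, h, 0), (0, 0, h))):
        npl = director(x + d[0], y + d[1], z + d[2], s)
        nmi = director(x - d[0], y - d[1], z - d[2], s)
        for j in range(3):
            b[j] += n[i] * (npl[j] - nmi[j]) / (2 * h)
    return b

def winding(s, R=2.0, M=400):
    # total rotation of (b_x, b_y) around a circle of radius R at z = 0
    total, prev = 0.0, None
    for k in range(M + 1):
        a = 2 * math.pi * k / M
        bx, by, _ = bend(R * math.cos(a), R * math.sin(a), 0.0, s)
        if prev is not None:
            total += math.atan2(prev[0] * by - prev[1] * bx,
                                prev[0] * bx + prev[1] * by)
        prev = (bx, by)
    return total
```

Because the start and end points of the loop coincide, the accumulated rotation is quantised, and it comes out as $+2\pi$ for $s=+1$ and $-2\pi$ for $s=-1$.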
The bend~\eqref{eq:BendFarField} also rotates along the pitch axis $z$ with pitch $2\pi/q$, giving a periodic structure to these defects along $z$, as shown in Figs.~\ref{fig:Screw}(a,b)(v). For $s=+1$, a radial profile rotates to become azimuthal and then back to radial. For $s=-1$, the axes of the $-1$ profile rotate along $z$. As $\rho$ decreases and one approaches the axis, the integral curves are no longer exactly helices; however, the $2\pi s$ winding of the bend vector is preserved. In \S\ref{sec:Local} we give a general analysis of the director structure as we approach a degenerate point and in \S\ref{sec:Meron} we describe some global, topological aspects. Here, we will continue to think of the integral curves as approximately helices but with curvature and torsion that vary with $\rho$, which is a good approximation provided the tilt angle $\theta_0$ is small. More precisely, consider the magnitudes of the two terms in \eqref{eq:Bend}, \begin{align} {\bf n} \cdot \nabla \theta & = \sin\theta \,\theta^{\prime}(\rho)\cos\bigl( qz+(s-1)\arctan(y/x) \bigr), \label{eq:term1} \\ \sin\theta({\bf n} \cdot \nabla \phi) & = \sin\theta \biggl( q\cos\theta + \frac{s\sin\theta(\rho)}{\rho} \sin\bigl( qz+(s-1)\arctan(y/x) \bigr) \biggr). \label{eq:term2} \end{align} Note that \eqref{eq:term2} shows that we require $\theta(\rho)$ to vanish at least linearly at the origin. The ratio of the two terms is then approximately $\theta^{\prime}(0) / (q + \theta^{\prime}(0))$ and taking $\theta^{\prime}(0)$ to be roughly $\theta_0$ divided by the pitch, the ratio is of order $\theta_0 / 2\pi$ and is small. We can then neglect \eqref{eq:term1}, and simplify \eqref{eq:term2} to $|{\bf b}|=q\sin\theta(\rho) \cos\theta(\rho)$, the curvature of an integral helix. As $\rho \rightarrow 0$ this curvature vanishes, and along the $z$ axis itself the helices degenerate to a straight line, which is also our $\beta$ line. 
A schematic of this degeneration is shown in Fig.~\ref{fig:Screw}(a,b)(vi) and can be compared against the numerical relaxation of a screw dislocation shown in Fig.~\ref{Sfig:core}. We identify the core region of the $\beta$ line by measuring how the cone angle $\theta$ deviates from the preferred value $\theta_0$ of the heliconical state and indicate it by blue shading. On the right, we show the size of the core region for different values of $K/\lambda$, corresponding to the helical pitch, increasing from top to bottom. The value of $K/\lambda$ doubles with each panel, illustrating a roughly linear scaling. The final panel is illustrated in more detail on the left of Fig.~\ref{Sfig:core}; compare with Fig.~\ref{fig:Screw}(a)(vi). \begin{figure}[t] \centering \includegraphics[width=1.0\linewidth]{core} \caption{Illustration of the core structure of a screw dislocation. The core can be identified with the region where the cone angle $\theta$ deviates from the preferred cone angle $\theta_0$. (a,b) We plot the value of $|\theta-\theta_0|$ on a slice orthogonal to the $\beta$ line in a numerical simulation of a screw dislocation. The region where this deviation is appreciable is shown in blue. The integral curves deform from helices to a straight line, where $\theta=0$, along the $\beta$ line. (c) The size of the core region is shown for several values of $K/\lambda$, which doubles with each panel, illustrating a roughly linear scaling.} \label{Sfig:core} \end{figure} \subsection{Edge Dislocations} \label{subsec:edge} Returning to \eqref{eq:Screw} but taking instead $\phi = qz + s\arctan(z/x)$ yields an edge dislocation in the phase field parallel to the $y$ axis --- the case $s=+1$ is shown in Fig.~\ref{fig:Edge}. As we go from negative to positive $x$ an extra $2\pi$ is inserted into $\phi$, corresponding to an additional full turn in the integral helices, as can be seen in Fig.~\ref{fig:Edge}(a). 
On a positively oriented loop encircling the edge dislocation, the bend therefore acquires a winding of $2\pi s$ as in the case of the screw dislocation. There are, however, several distinct features of the edge dislocation worth emphasising. The first is that the $\beta$ line (shown in green in Fig.~\ref{fig:Edge}) is not itself an integral curve of the director --- this is the generic situation in an arbitrary director field, the screw dislocation being an exceptional case. The second feature is the location of the $\beta$ line itself --- it is not along the $y$ axis, but slightly displaced from it, as shown in Figs.~\ref{fig:Edge}(a,b). To understand this feature we recall some details of the phase field $\phi$, shown in Fig.~\ref{fig:Edge}(c) \cite{kamien2016}. An edge dislocation is composed of two disclinations in $\nabla\phi/|\nabla \phi|$. The first is a $+1$ disclination along the $y$ axis, denoted $\bf{D}$ in Figs.~\ref{fig:Edge}(b, c), which is a singularity in $\phi$. The second is a $-1$ disclination along $(-\frac{1}{q},y,0)$, called the hyperbolic line and denoted $\bf H$ in Figs.~\ref{fig:Edge}(b, c). This second disclination is the unique location where $\nabla\phi=0$, with $\phi$ itself nonsingular. We now return to \eqref{eq:Bend}, derived for the screw dislocation but valid here too. Neglecting $({\bf n} \cdot \nabla \theta)$ as before, we see $\bf b$ vanishes when $\nabla \phi$ vanishes, and so we have a $\beta$ line along the hyperbolic line $\bf H$. One might worry about the phase singularity at the origin, but a direct expansion of \eqref{eq:Bend} shows that the bend is in fact continuous about the origin, taking value ${\bf b} = \theta^{\prime}(0) \,{\bf e}_y$ at the origin itself, and is not (as one might initially suspect) singular --- this is reflected in the smooth nature of the bend at the origin shown in Figs.~\ref{fig:Edge}(a,b). 
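Both statements, the finiteness of the bend at the origin with limiting value $\theta^{\prime}(0)\,{\bf e}_{y}$ and its near-vanishing on the hyperbolic line ${\bf H}$, can be checked numerically. In the sketch below the profile $\theta=\theta_{0}\tanh\rho$ with $\rho=\sqrt{x^{2}+z^{2}}$ and all parameter values are illustrative assumptions; on ${\bf H}$ the $({\bf n}\cdot\nabla\phi)$ term vanishes exactly and only the small $({\bf n}\cdot\nabla\theta)$ contribution survives, so $|{\bf b}|$ there is compared against its far-field magnitude.

```python
import math

theta0, q, s = 0.3, 2.0, 1   # sample parameters (illustrative values)

def director(x, y, z):
    # edge texture: phi = q z + s atan2(z, x); theta depends on rho = sqrt(x^2 + z^2)
    rho = math.hypot(x, z)
    th = theta0 * math.tanh(rho)       # so theta'(0) = theta0
    phi = q * z + s * math.atan2(z, x)
    return (math.sin(th) * math.cos(phi),
            math.sin(th) * math.sin(phi),
            math.cos(th))

def bend(x, y, z, h=1e-6):
    # b_j = n_i d_i n_j by central differences; n is smooth away from the origin
    n = director(x, y, z)
    b = [0.0, 0.0, 0.0]
    for i, d in enumerate(((h, 0, 0), (0, h, 0), (0, 0, h))):
        npl = director(x + d[0], y + d[1], z + d[2])
        nmi = director(x - d[0], y - d[1], z - d[2])
        for j in range(3):
            b[j] += n[i] * (npl[j] - nmi[j]) / (2 * h)
    return b

# bend near the origin from four directions in the xz-plane: -> theta'(0) e_y
r = 1e-2
origin_err = 0.0
for (x, z) in ((r, 0), (-r, 0), (0, r), (0, -r)):
    b = bend(x, 0.0, z)
    origin_err = max(origin_err,
                     math.sqrt(b[0] ** 2 + (b[1] - theta0) ** 2 + b[2] ** 2))

# bend on the hyperbolic line H at (-1/q, y, 0), versus the far field
bH = bend(-1.0 / q, 0.0, 0.0)
bH_mag = math.sqrt(sum(c * c for c in bH))
b_far = bend(30.0, 0.0, 0.0)
b_far_mag = math.sqrt(sum(c * c for c in b_far))
```

Note that although $\phi$ jumps by $2\pi$ across the negative $x$ axis, the director depends on $\phi$ only through $\cos\phi$ and $\sin\phi$, so the finite differences are well defined everywhere away from the origin.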
We briefly remark that the canonical local form of a family of curves which pass through an inflectional configuration (where the bend vanishes) is given in \cite{moffatt1992}, where it is shown that on passing through the inflectional configuration the curve normal (equivalently the bend $\bf b$) picks up a $2\pi$ rotation. Locally, this is what happens to our integral curves as we pass through the $\beta$ line at $\bf H$. \begin{figure}[tbp] \centering \includegraphics[width=0.9\linewidth]{SIFigure_Edge.pdf} \caption{Edge dislocations in twist-bend nematics. (a) $xz$ slice through the edge dislocation parallel to the $y$ axis, coloured by the angle the bend vector makes with the $x$ axis, with director integral curves shown in blue, bend vector in orange and the $\beta$ line shown in green. Across the dislocation the bend acquires a $2\pi$ winding. (b, c) The $\beta$ line does not coincide with the phase singularity $\bf D$ along the $y$ axis, but is along the hyperbolic line $\bf H$. We emphasise this difference by showing the angle the bend vector makes with the $x$ axis in (b), and the phase field $\phi$ in (c) --- note the discrepancy in the location of singularities.} \label{fig:Edge} \end{figure} \subsection{Twist Grain Boundary Phases} \label{sec:TGB} The examples of screw and edge dislocations extend to constructions of locally heliconical director fields whose helical phase corresponds to any smectic texture.
A general director field with these properties is given by \begin{equation} {\bf n} = \cos\theta \,{\bf N} + \sin\theta \bigl[ \cos\phi \,{\bf e}_1 + \sin\phi \,{\bf e}_2 \bigr] , \label{Seq:smectic_director} \end{equation} where $\phi$ is a smectic phase field, ${\bf N}$ is the smectic-A director field ({\sl i.e.} ${\bf N} = \nabla\phi / |\nabla\phi|$ away from singularities in $\phi$) and ${\bf e}_1$, ${\bf e}_2$ are an orthonormal basis for the planes orthogonal to ${\bf N}$ chosen to have no rotation along the integral curves of ${\bf N}$, meaning $(\nabla_{{\bf N}} {\bf e}_1) \cdot {\bf e}_2 = 0$. As in the screw and edge dislocation examples, the cone angle $\theta$ should vanish along the phase singularities. In this section and the next we outline constructions of this form for phase fields representing twist grain boundary and parabolic focal conic textures. Twist grain boundaries in smectics are formed by arrays of equally spaced screw dislocations and mediate a rotation of the smectic layer normal. This same structure can be encoded into a director field that locally corresponds to the heliconical state; the grain boundary mediates a rotation of the helical (pitch) axis and each of the screw dislocations becomes a $\beta$ line. We first review briefly the construction of grain boundaries in smectics. A single grain boundary in a smectic can be described by the phase field~\cite{matsumoto2017} \begin{equation} \phi = \textrm{Im} \ln \Bigl[ \mathrm{e}^{-y/\ell} \mathrm{e}^{i \phi_{-}} + \mathrm{e}^{y/\ell} \mathrm{e}^{i \phi_{+}} \Bigr] , \label{Seq:single_grain} \end{equation} where $\phi_{\pm} = qz \cos(\alpha/2) \pm qx \sin(\alpha/2)$ and we choose $\ell = [q\sin(\alpha/2)]^{-1}$ to make $\phi$ a harmonic function. The layer structure is the level set $\phi=0$ and is shown in Fig.~\ref{Sfig:TGB}(a). For $y\lesssim -\ell$ we have $\phi \approx \phi_{-}$ and for $y\gtrsim \ell$ we have $\phi \approx \phi_{+}$. 
In the plane $y=0$ there are screw dislocations with axes parallel to $z$ at $x=\bigl(\frac{\pi}{2} + m \pi\bigr)\ell$, $m\in\mathbb{Z}$. The gradient of the phase field is \begin{equation} \nabla\phi = q\cos(\alpha/2) \,{\bf e}_z + q \sin(\alpha/2) \,\frac{\sinh(2y/\ell) \,{\bf e}_x + \sin(2x/\ell) \,{\bf e}_y}{\cosh(2y/\ell)+\cos(2x/\ell)} , \end{equation} and its magnitude squared, \begin{equation} |\nabla\phi|^2 = q^2 \,\frac{\cosh(2y/\ell)+\cos\alpha \cos(2x/\ell)}{\cosh(2y/\ell)+\cos(2x/\ell)} , \end{equation} diverges as inverse distance squared along each of the screw dislocations. It is not difficult to extend this construction to create phase fields containing multiple grains and describing full twist-grain boundary phases. We refer the reader to~\cite{matsumoto2017} for details. \begin{figure}[tb] \centering \includegraphics[width=\linewidth]{TGB.png} \caption{(a) Smectic phase field for a single grain boundary. The surface shown is $\phi = 0$, where $\phi$ is given in~\eqref{Seq:single_grain}. (b-c) Helical integral curves (blue) of a twist-bend director containing a grain boundary, with $\beta$ lines shown in green: (b) side view; (c) top view.} \label{Sfig:TGB} \end{figure} We restrict our focus here to describing how the single grain boundary~\eqref{Seq:single_grain} can be embedded into a heliconical director field with $\beta$ lines along each of the screw dislocations, {\sl i.e.} the lines $\bigl( (\frac{\pi}{2} + m\pi)\ell , 0 , z \bigr)$, $m\in\mathbb{Z}$. We write the director field in the form~\eqref{Seq:smectic_director} and take the basis $\{{\bf N}, {\bf e}_1, {\bf e}_2\}$ to be \begin{align} & {\bf N} = \cos\sigma \,{\bf e}_z + \sin\sigma \,{\bf e}_x , && {\bf e}_1 = - \sin\sigma \,{\bf e}_z + \cos\sigma \,{\bf e}_x , && {\bf e}_2 = {\bf e}_y , \end{align} where $\sigma$ is a function interpolating between $-\alpha/2$ for $y \lesssim -\ell$ and $+\alpha/2$ for $y \gtrsim +\ell$, for instance $\sigma = \frac{\alpha}{2} \tanh(2y/\ell)$.
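The closed-form expressions for $\nabla\phi$ and $|\nabla\phi|^2$ above can be checked numerically, using $\nabla\phi = \textrm{Im}(\nabla W/W)$ for $\phi = \textrm{Im}\ln W$. A minimal sketch in Python with \texttt{numpy} (the parameter values are arbitrary, and the sample points are kept away from the dislocation cores):

```python
import numpy as np

q, alpha = 1.3, 0.7                    # sample wavenumber and grain-boundary angle
s, c = np.sin(alpha/2), np.cos(alpha/2)
ell = 1.0/(q*s)                        # ell = [q sin(alpha/2)]^(-1)

rng = np.random.default_rng(0)
x, y, z = rng.uniform(-1.0, 1.0, size=(3, 50))   # away from the cores

# W such that phi = Im ln W for the single grain boundary.
Wm = np.exp(-y/ell)*np.exp(1j*(q*z*c - q*x*s))
Wp = np.exp( y/ell)*np.exp(1j*(q*z*c + q*x*s))
W = Wm + Wp

# grad phi = Im(grad W / W), term by term.
gx = np.imag(1j*q*s*(Wp - Wm)/W)
gy = np.imag((Wp - Wm)/(ell*W))
gz = np.imag(1j*q*c*(Wp + Wm)/W)

# Closed forms quoted in the text.
den = np.cosh(2*y/ell) + np.cos(2*x/ell)
gx_cf = q*s*np.sinh(2*y/ell)/den
gy_cf = q*s*np.sin(2*x/ell)/den
gz_cf = q*c*np.ones_like(x)
mag2_cf = q**2*(np.cosh(2*y/ell) + np.cos(alpha)*np.cos(2*x/ell))/den

err_grad = max(np.abs(gx - gx_cf).max(), np.abs(gy - gy_cf).max(),
               np.abs(gz - gz_cf).max())
err_mag = np.abs(gx**2 + gy**2 + gz**2 - mag2_cf).max()
```

Both errors are at the level of machine precision, confirming the closed forms.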
With this choice ${\bf N}$ differs from $\nabla\phi/|\nabla\phi|$ by exponentially small terms away from the cores of the screw dislocations, along each of which it is ${\bf e}_z$. To make the cone angle $\theta$ vanish linearly along each screw dislocation and approach a preferred value $\theta_0$ outside of the core region we can choose $\theta = q\theta_0 / |\nabla\phi|$. A selection of helical integral curves of this director field are shown in Fig.~\ref{Sfig:TGB}(b,c). \subsection{Parabolic Focal Conics} \label{sec:FocalConic} Focal conics are amongst the most celebrated geometric features of any ordered phase. They are the hallmark of smectic order, corresponding to the fundamental singularities of a material composed of equally spaced layers. They are also seen in twist-bend nematics~\cite{kleman2018}, which serves to emphasise that it is the one-dimensional periodicity that leads to focal conics, rather than a modulation of the mass density. A director field for a twist-bend phase containing a focal conic defect can be constructed using the general form~\eqref{Seq:smectic_director}, where $\phi$ is the phase field of a focal conic and ${\bf N}$ is the layer normal, away from the conic singularities themselves. The construction and description of the Dupin cyclides and focal conic domains are classical; here, we simply quote the formulae with a convenient parameterisation~\cite{alexander2010}. A focal conic domain consists of a space-filling family of surfaces -- level sets of a phase field $\phi$ -- that are singular along a pair of confocal conics and uniformly spaced everywhere else.
In the case of a parabolic domain, the confocal parabolae may be taken to be \begin{align} & {\bf p}_1(u) = \biggl( \sigma \frac{\cos u}{1+\cos u} , \sqrt{2} \sigma \frac{\sin u}{1+\cos u} , 0 \biggr) , && {\bf p}_2(v) = \biggl( -\sigma \frac{\cos v}{1+\cos v} , 0 , \sqrt{2} \sigma \frac{\sin v}{1+\cos v} \biggr) , \end{align} where $\sigma$ is a constant parameter corresponding to the distance between the two foci/apices of the parabolae and $-\pi < u,v < \pi$. The domain itself then has the explicit parameterisation \begin{equation} \begin{split} x & = \cos u \,\frac{\sigma - \phi (1+\cos v)}{2 + \cos u + \cos v} - \cos v \,\frac{\sigma + \phi (1+\cos u)}{2 + \cos u + \cos v} , \\ y & = \sqrt{2} \,\sin u \,\frac{\sigma - \phi (1+\cos v)}{2 + \cos u + \cos v} , \\ z & = \sqrt{2} \,\sin v \,\frac{\sigma + \phi (1+\cos u)}{2 + \cos u + \cos v} , \end{split} \end{equation} where each surface of constant $\phi$ is a parabolic Dupin cyclide. Depending on the value of $\phi$ the range of $u,v$ should be restricted so as to terminate the surface on the singular parabolae. Specifically, if $\phi<-\sigma/2$ then the range of $u$ should be restricted according to $\cos u < |\sigma/\phi| - 1$; if $\phi>\sigma/2$ then the range of $v$ should be restricted by $\cos v < |\sigma/\phi| - 1$; and if $-\sigma/2 < \phi < \sigma/2$ no restriction is needed. In Fig.~\ref{Sfig:FocalConic}(a) we show the structure of a parabolic focal conic domain, with a selection of individual layers shown in Fig.~\ref{Sfig:FocalConic}(b). \begin{figure}[tb] \centering \includegraphics[width=\linewidth]{FocalConic.png} \caption{(a) Smectic phase field for a parabolic focal conic domain. We show multiple different level sets $\phi = \text{constant}$. The $\beta$ lines, singularities in the phase field $\phi$, are shown in green. (b) Individual layers in the parabolic focal conic domain are shown for increasing levels of $\phi$. 
(c-e) Helical integral curves of a twist-bend director containing a parabolic focal conic domain: (c) top view; (d) side view; (e) the local structure around each focus / apex is (compatible with) that of a (chiral) point defect. The integral curves connect one focus/ $\beta$ line to the other. We show two families of integral curves, one in red, one in blue, that converge on the same point on one of the foci.} \label{Sfig:FocalConic} \end{figure} In terms of this parameterisation the frame $\{{\bf N}, {\bf e}_1, {\bf e}_2\}$ is given by \begin{equation} \begin{split} {\bf N} & = \biggl( - \,\frac{\cos u + \cos v + 2 \cos u \cos v}{2 + \cos u + \cos v} , - \,\frac{\sqrt{2} \sin u (1+\cos v)}{2 + \cos u + \cos v} , \frac{\sqrt{2} \sin v (1+\cos u)}{2 + \cos u + \cos v} \biggr) , \\ {\bf e}_1 & = \biggl( \frac{\sqrt{2} \sin u (1+\cos v)}{2 + \cos u + \cos v} , - \frac{1 + 2 \cos u + \cos u \cos v}{2 + \cos u + \cos v} , - \,\frac{\sin u \sin v}{2 + \cos u + \cos v} \biggr) , \\ {\bf e}_2 & = \biggl( \frac{\sqrt{2} \sin v (1+\cos u)}{2 + \cos u + \cos v} , \frac{\sin u \sin v}{2 + \cos u + \cos v} , \frac{1 + 2 \cos v + \cos u \cos v}{2 + \cos u + \cos v} \biggr) . \end{split} \end{equation} Helical integral curves of the director field are then given by \begin{equation} {\bf h}_{(u,v)}(\phi) = {\bf x}_{0}(u,v) + \frac{\phi}{q} \,{\bf N} + \frac{\tan\theta}{q} \bigl[ \sin \phi \,{\bf e}_1 + \bigl( 1 - \cos \phi \bigr) {\bf e}_2 \bigr] , \end{equation} where ${\bf x}_{0}(u,v)$ is a point on the cyclide $\phi = 0$. The range of values of $\phi$ should be limited to $[\frac{-\sigma}{1+\cos u}, \frac{\sigma}{1+\cos v}]$ and the helices then extend from one conic to the other. A selection of such helical integral curves are shown in Fig.~\ref{Sfig:FocalConic}(c-e). In this structure the two focal parabolae are singularities and correspond to $\beta$ lines. 
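As a consistency check on this parameterisation (an added sketch, not part of the original text), one can verify with \texttt{sympy} that $\partial_\phi {\bf x} = {\bf N}$ and $|{\bf N}| = 1$, so that the level sets of $\phi$ are translated at unit speed along the layer normal, i.e. are equally spaced layers:

```python
import sympy as sp

u, v, phi = sp.symbols('u v phi', real=True)
sigma = sp.symbols('sigma', positive=True)
D = 2 + sp.cos(u) + sp.cos(v)

# Parameterisation of the parabolic Dupin cyclides (level sets of phi).
X = sp.Matrix([
    sp.cos(u)*(sigma - phi*(1 + sp.cos(v)))/D
      - sp.cos(v)*(sigma + phi*(1 + sp.cos(u)))/D,
    sp.sqrt(2)*sp.sin(u)*(sigma - phi*(1 + sp.cos(v)))/D,
    sp.sqrt(2)*sp.sin(v)*(sigma + phi*(1 + sp.cos(u)))/D,
])

# Layer normal N quoted in the text.
N = sp.Matrix([
    -(sp.cos(u) + sp.cos(v) + 2*sp.cos(u)*sp.cos(v))/D,
    -sp.sqrt(2)*sp.sin(u)*(1 + sp.cos(v))/D,
     sp.sqrt(2)*sp.sin(v)*(1 + sp.cos(u))/D,
])

residual = sp.simplify(X.diff(phi) - N)   # expect the zero vector
norm_err = sp.simplify(N.dot(N) - 1)      # expect 0
```

Both quantities vanish identically, confirming that increasing $\phi$ moves each point a unit distance along ${\bf N}$.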
Although there are several possibilities for how the director is resolved along these lines, one natural arrangement places point defects at each focus/apex of the two parabolae; this local structure is especially suggested by Fig.~\ref{Sfig:FocalConic}(e). \subsection{Skyrmions and Double Twist Cylinders} \label{sec:Skyrmion} \begin{figure}[tbp] \centering \includegraphics[width=0.99\linewidth]{SIFigure_Skyrmion.pdf} \caption{Twist-bend Skyrmion in a heliconical background. (a) The two $\beta$ lines comprising the Skyrmion are shown in the simulation box, with grey disc in the midplane indicating rough Skyrmion extent. The background director is heliconical (blue curves). (b) Director and bend vector on midplane through the Skyrmion. Red circles highlight the winding of the bend around the $\beta$ lines. (c,d,e) Idealised double twist cylinder forming the neighbourhood of the vertical $\beta$ line. The director integral curves link the $\beta$ line, as emphasised in panel (e). (f) Director integral curves and bend vector on an $xz$ slice through the Skyrmion texture. Red circles emphasise the winding of the bend vector about the second helical $\beta$ line.} \label{Sfig:Skyrmion} \end{figure} We now examine a class of defects in twist-bend nematics which are not constructed by analogy to a smectic phase field but rather from topologically non-trivial textures in cholesterics. Skyrmions are non-singular field configurations found in cholesterics and chiral ferromagnets~\cite{ackerman2014,ackerman2017,afghah2017,duzgun2018,foster2019,sutcliffe2017} corresponding to topologically protected particle-like solitons. They carry a topological charge $Q=\frac{1}{4\pi} \int {\bf n} \cdot \partial_x {\bf n} \times \partial_y {\bf n}\,dx\,dy$, an element of $\pi_2(S^2) \cong \mathbb{Z}$ giving the `wrapping number' of the texture.
Given the general similarities between the heliconical director field and the cholesteric ground state it is natural to consider if Skyrmion textures also exist in twist-bend nematics and how they may be characterised in terms of $\beta$ lines and the geometry of bend. In cholesterics, Skyrmions are usually created in frustrated cells with normal anchoring boundary conditions; away from the Skyrmion the director points vertically (say) so that the asymptotic behaviour is frustrated and not the cholesteric ground state. However, in twist-bend nematics the heliconical ground state may have a small cone angle (indeed arbitrarily small) allowing the usual Skyrmion structure to match naturally onto it as an asymptotic far field and this is the configuration we consider. In Fig.~\ref{Sfig:Skyrmion} we show a single Skyrmion embedded in a heliconical background, the result of numerical relaxation of \eqref{Seq:TB_energy} from a topologically correct initial director field. The Skyrmion is characterised by two $\beta$ lines as shown in Figs.~\ref{Sfig:Skyrmion}(a,b), the first vertical, the second a helix with pitch equal to the heliconical background. In the neighbourhood of the vertical $\beta$ line the director field is a double-twist cylinder, an idealised description of which is the texture ${\bf n} = \cos q\rho \,{\bf e}_z + \sin q\rho \,{\bf e}_\phi$. In Figs.~\ref{Sfig:Skyrmion}(c,d,e) we show this texture, its integral curves and its bend vector. The texture has bend ${\bf b }= -\frac{1}{\rho} \sin^2 q\rho \,{\bf e}_{\rho}$, which vanishes linearly at the origin with winding number $+1$, giving a $\beta$ line along the $z$ axis. In contrast to the screw or edge dislocations discussed in \S\S\ref{subsec:screw},\ref{subsec:edge}, here the integral curves of the director link the $\beta$ line. 
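The quoted bend of the idealised double twist cylinder follows from a direct computation of ${\bf b} = ({\bf n}\cdot\nabla){\bf n}$ in Cartesian coordinates; a short symbolic verification with \texttt{sympy} (an added sketch, not part of the original text):

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
q = sp.symbols('q', positive=True)
rho = sp.sqrt(x**2 + y**2)

# Double twist cylinder n = cos(q rho) e_z + sin(q rho) e_phi, in Cartesians.
n = sp.Matrix([
    -sp.sin(q*rho)*y/rho,
     sp.sin(q*rho)*x/rho,
     sp.cos(q*rho),
])

# Bend b = (n . grad) n.
b = sp.Matrix([
    sum(n[j]*sp.diff(n[i], c) for j, c in enumerate((x, y, z)))
    for i in range(3)
])

# Expected: b = -(sin^2(q rho)/rho) e_rho.
b_expected = -(sp.sin(q*rho)**2/rho)*sp.Matrix([x/rho, y/rho, 0])
residual = sp.simplify(b - b_expected)
```

The residual vanishes identically, recovering ${\bf b} = -\frac{1}{\rho}\sin^2 q\rho\,{\bf e}_{\rho}$.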
This observation establishes that this $\beta$ line is topologically distinct from screw or edge dislocations, in the sense that a homotopy of the director between a double twist cylinder and a screw/edge dislocation would necessarily introduce new $\beta$ lines (related observations of the failure of standard homotopy theory to deal with order parameters coupled to the director are given in \cite{beller2014} for the case of umbilic lines in cholesterics). The local structure about the second $\beta$ line is that of the edge dislocation, as can be seen in the integral curve structure shown in Fig.~\ref{Sfig:Skyrmion}(f). In \S\ref{sec:Local} we will define a global orientation for $\beta$ lines --- this orientation is shown as arrows along the $\beta$ lines in Fig.~\ref{Sfig:Skyrmion}. We briefly note that the $\beta$ lines of Fig.~\ref{Sfig:Skyrmion} are both oriented along $+z$, and both puncture the grey disc shown in Figs.~\ref{Sfig:Skyrmion}(a,b) in the same sense. The apparent difference in local winding of the bend vector between them is misleading, as the oriented plane on which one should measure winding makes a half turn, with the director, between the two $\beta$ lines. With this orientation defined, in \S\ref{sec:FrenetSerret} we will apply the Gauss-Bonnet-Chern theorem to these Skyrmion textures to show that the two $\beta$ lines of Fig.~\ref{Sfig:Skyrmion} are topologically required. \begin{figure}[tbp] \centering \includegraphics[width=0.8\linewidth]{SIFigure_Lattice.pdf} \caption{Skyrmion lattice in a twist-bend nematic. (a) Each Skyrmion in the lattice is composed of two $\beta$ lines, as in the isolated Skyrmion of Fig.~\ref{Sfig:Skyrmion}. The cylindrical symmetry of the helical $\beta$ line is broken to hexagonal by the lattice. (b) Director and bend vector on the midplane of the Skyrmion lattice --- compare with Fig.~\ref{Sfig:Skyrmion}(b). (c) Phase of the bend vector shown in panel (b).
(d) Integral curves on an $xz$ slice through the lattice --- compare with Fig.~\ref{Sfig:Skyrmion}(f).} \label{Sfig:SkyrmionLattice} \end{figure} In Fig.~\ref{Sfig:SkyrmionLattice} we show a lattice of Skyrmions, again obtained by numerical relaxation. The hexagonal symmetry of the lattice breaks the cylindrical symmetry of the helical $\beta$ lines, but otherwise the texture is essentially that of a repeated isolated Skyrmion. \section*{Local and Global Structure of $\beta$ Lines} In the following sections we develop an account of the geometry and topology of the $\beta$ lines introduced in \S\ref{sec:general}. We discuss their local structure in \S\ref{sec:Local}, defining how $\beta$ lines may be oriented and showing how the director structure about the $\beta$ line sets its index. We then move to global structure in \S\ref{sec:FrenetSerret}, showing that $\beta$ lines are Poincar{\'e} dual to the Euler class of the plane field $\xi$, via an application of the Gauss-Bonnet-Chern theorem; concretely, these lines encode topological information about the director, such as Skyrmion number. We apply this general result to two specific examples, the screw dislocation (\S\ref{subsec:screw}) and an isolated Skyrmion (\S\ref{sec:Skyrmion}). \subsection{Local Analysis of Bend Zeros} \label{sec:Local} \begin{figure}[tb] \centering \includegraphics[width=0.5\linewidth]{SIFigureOrientation.pdf} \caption{Local structure of a $\beta$ line. (a) $\beta$ lines come with two canonical frames, $\bf n$ and its orthogonal plane $\xi$, and the tangent $\bf t$ and its normal plane $\nu$.
(b) The bend vector locally lies in the plane $\xi$, making $\nabla \bf b|_\beta$ an isomorphism $\nu \rightarrow \xi$.} \label{Sfig:Orientation} \end{figure} Along the $\beta$ line there are two canonical frames: one coming from the director and its orthogonal plane $\xi$, and the other coming from the tangent vector to the $\beta$ line ${\bf t}$ and its normal plane $\nu$ as shown in Fig.~\ref{Sfig:Orientation}(a). First we note that along the $\beta$ line the image of the linear map $\nabla {\bf b} |_{\beta} : T\mathbb{R}^3 \to T\mathbb{R}^3$ defined by ${\bf v} \mapsto ({\bf v}\cdot\nabla){\bf b}|_{\beta}$ is the orthogonal plane $\xi$, Fig.~\ref{Sfig:Orientation}(b). This is because ${\bf n}\cdot{\bf b}=0$ and hence $(\nabla{\bf b})\cdot{\bf n} = - (\nabla{\bf n})\cdot{\bf b}$, so that along a $\beta$ line $(\nabla{\bf b})\cdot{\bf n} = 0$. Similarly, the tangent vector ${\bf t}$ spans the kernel of $\nabla {\bf b} |_{\beta}$. This understood, we may think of $\nabla{\bf b}|_{\beta}$ as defining an isomorphism between the normal plane $\nu$ and the orthogonal plane $\xi$. The general linear group has two disconnected components, corresponding to positive and negative determinant. The orientation of the $\beta$ line is taken such that this isomorphism belongs to the positive, or orientation-preserving, component. In Figs.~\ref{Sfig:Skyrmion},\ref{Sfig:SkyrmionLattice} we indicate this orientation for the case of Skyrmions with arrows along the $\beta$ lines --- this orientation will enter into the signed intersection count with a surface which defines Skyrmion number in \S\ref{sec:FrenetSerret}. Note that under the replacement $\bf n \rightarrow -\bf n$, the bend $\bf b$ remains invariant, but the orientation of the plane field $\xi$ reverses, and hence all $\beta$ line orientations reverse. This reversal corresponds to the well-known reversal of Skyrmion number (hedgehog charge) under $\bf n \rightarrow -\bf n$~\cite{alexander2012}.
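These statements can be made concrete on the double twist cylinder of \S\ref{sec:Skyrmion}, whose $\beta$ line is the $z$ axis: near the axis $\nabla{\bf b}|_{\beta} \approx -q^2\,\mathrm{diag}(1,1,0)$, whose image is the $xy$ plane ($=\xi$, since ${\bf n} = {\bf e}_z$ on the axis) and whose kernel is spanned by ${\bf e}_z = {\bf t}$, and the induced map $\nu \to \xi$ has positive determinant. A numerical sketch with \texttt{sympy}, evaluated just off the axis (an illustration added here, not part of the original text):

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
q = 1  # sample wavenumber (arbitrary)
rho = sp.sqrt(x**2 + y**2)

# Double twist cylinder; its beta line is the z axis.
n = sp.Matrix([-sp.sin(q*rho)*y/rho, sp.sin(q*rho)*x/rho, sp.cos(q*rho)])
b = sp.Matrix([sum(n[j]*sp.diff(n[i], c) for j, c in enumerate((x, y, z)))
               for i in range(3)])
J = b.jacobian([x, y, z])   # the linear map v -> (v . grad) b

# Evaluate just off the beta line; corrections are O(rho^2).
vals = {x: 1e-4, y: 0.0, z: 0.0}
Jnum = sp.Matrix(3, 3, lambda i, j: float(J[i, j].subs(vals)))
# Jnum is approximately -q^2 diag(1, 1, 0): the image is the xy plane (= xi)
# and the kernel is spanned by e_z (= t).
```

The third row and third column vanish, and the upper $2\times 2$ block has positive determinant, fixing the orientation of this $\beta$ line along $+{\bf e}_z$.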
At any generic point the vectors ${\bf n}$ and ${\bf t}$ have no special relationship, being neither collinear nor perpendicular. Either situation therefore represents a greater degeneracy. Points where ${\bf n}$ and ${\bf t}$ are perpendicular are the most basic type of degeneracy and have codimension one; we call them Legendrian points. Points of collinearity have codimension two. At a generic (or Legendrian) point, the planes $\nu$ and $\xi$ have one-dimensional intersection, which may be used to give a `framing', whose half-integer `self-linking' can change only by passing through points of collinearity. We now relate $\nabla{\bf b}|_\beta$ to $\nabla \bf n$, computing the normal form of a Taylor series for the bend at a generic zero. The analysis closely parallels that for other geometric degeneracies such as umbilic points of surfaces~\cite{berry1977}, C lines in electromagnetic fields~\cite{nye1983} and umbilic lines in general~\cite{machon2016}. A Taylor series for a generic point where the bend vanishes will involve terms in the director field up to second order, so as to obtain all first order terms in the bend. Introducing a local coordinate system adapted to the director and its orthogonal plane at the bend zero, and writing ${\bf n} \approx n_x \,{\bf e}_x + n_y \,{\bf e}_y + {\bf e}_z$, we find the general form of the Taylor series contributing to the linear structure of the bend zero is \begin{align} \begin{bmatrix} n_x \\ n_y \end{bmatrix} & = \biggl[ \Bigl. \nabla_\perp {\bf n} \Bigr\rvert_0 + z \Bigl. \bigl( \partial_z\nabla_\perp {\bf n} \bigr) \Bigr\rvert_0 \biggr] \begin{bmatrix} x \\ y \end{bmatrix} + \frac{1}{2} z^2 \begin{bmatrix} s_x \\ s_y \end{bmatrix} , \label{Seq:director_local} \\ \begin{bmatrix}b_x \\ b_y\end{bmatrix} &= \nabla {\bf b} \cdot \begin{bmatrix} x \\ y \\z \end{bmatrix} = \biggl[ \Big( \Bigl. \nabla_\perp {\bf n} \Bigr\rvert_0 \Big)^2 + \Bigl.
\partial_z\nabla_\perp {\bf n} \Bigr\rvert_0 \biggr] \begin{bmatrix} x \\ y \end{bmatrix} + z \begin{bmatrix} s_x \\ s_y \end{bmatrix} . \label{Seq:bendprofile} \end{align} Here $\nabla_{\perp} {\bf n} = \Bigl[ \begin{smallmatrix} \partial_x n_x & \partial_y n_x \\ \partial_x n_y & \partial_y n_y \end{smallmatrix} \Bigr]$ denotes the 2$\times$2 matrix of orthogonal gradients of the director~\cite{machon2016} (see \S\ref{sec:general}), and $\partial_z \nabla_{\perp} {\bf n}$ is its rate of change along the local director. The winding number of the bend vector in the $xy$-plane is $\pm 1$ according to the sign of $\det\bigl( (\nabla_{\perp}{\bf n}|_{0})^2+\partial_z \nabla_{\perp}{\bf n}|_{0} \bigr)$; when the derivatives $\bigl. \partial_z \nabla_{\perp}{\bf n} \bigr|_{0}$ are negligible this reduces to $(\det \nabla_{\perp}{\bf n}|_{0})^2$ and the winding is always $+1$, so that the different profiles of $\beta$ lines are controlled crucially by the parallel derivatives of the orthogonal director gradients. $[s_x,s_y]$ controls the angle between the director and the tangent to the $\beta$ line. To see this, note that, as we saw above, \eqref{Seq:bendprofile} is a linear map $[x,y,z]\mapsto [b_x,b_y]$ with a one-dimensional kernel tangent to the $\beta$ line. When $[s_x,s_y]=0$ this kernel is along the $z$ axis. \begin{figure}[tb] \centering \includegraphics[width=0.5\linewidth]{SIFigureLocalProfiles.png} \caption{Local profiles of bend zeros from Taylor series, with director and its integral curves in blue, bend vector in orange and oriented $\beta$ line in green. (a) Radial $+1$ defect (b) azimuthal +1 defect. (c) Generic bend zero, with tilt between director and $\beta$ line. 
(d) Legendrian point, with degenerate winding behaviour.} \label{Sfig:bend_profiles} \end{figure} With \eqref{Seq:bendprofile} we may construct $\beta$ lines with different local structures starting from a Taylor series for the director, with several examples shown in Fig.~\ref{Sfig:bend_profiles}. In Fig.~\ref{Sfig:bend_profiles}(a), the only nonzero part of \eqref{Seq:bendprofile} is $(\nabla_\perp {\bf n})_{ij} = \delta_{ij} - n_i n_j$. This gives a pure splay distortion of the director, with a radial $+1$ defect in the bend along the $z$ axis. In Fig.~\ref{Sfig:bend_profiles}(b) we construct a vortex-like $+1$ defect by setting $(\partial_z \nabla_\perp {\bf n})_{ij} = \epsilon_{ij}$ with all else $0$. In Fig.~\ref{Sfig:bend_profiles}(c) we add a nonzero value of $[s_x,s_y]$ to the director field of Fig.~\ref{Sfig:bend_profiles}(a), which tilts the $\beta$ line. Finally, in Fig.~\ref{Sfig:bend_profiles}(d) we construct a Legendrian point where we encounter degenerate behaviour in the winding; this is done by arranging $\det\bigl( (\nabla_{\perp}{\bf n}|_{0})^2+\partial_z \nabla_{\perp}{\bf n}|_{0} \bigr)=0$. \subsection{Frenet-Serret Frame, Connection and Curvature} \label{sec:FrenetSerret} At a generic point the director field carries a canonical Frenet-Serret framing. The Frenet-Serret frame associated to any space curve is the orthonormal frame consisting of its unit tangent, normal vector and binormal. As the bend is the curvature of the director integral curves, its direction is exactly that of the Frenet-Serret normal for each integral curve. We write ${\bf b} = \kappa \,{\bf s}_1$, with $\kappa = |{\bf b}|$, and ${\bf s}_2 = {\bf n}\times{\bf s}_1$; the frame $\{{\bf n}, {\bf s}_1, {\bf s}_2\}$ gives a Frenet-Serret framing of the director field. It is defined on the complement of the $\beta$ lines, which are singularities of the Frenet-Serret framing.
The Frenet-Serret frame provides a canonical (Frenet-Serret) connection for the orthogonal plane field $\xi$ \begin{equation} \omega = \bigl( \nabla {\bf s}_1 \bigr) \cdot {\bf s}_2 , \end{equation} defined on the complement of the $\beta$ lines. The value of $\omega$ on the director field is the torsion, $\tau = \omega({\bf n}) = \bigl( \nabla_{{\bf n}} {\bf s}_1 \bigr) \cdot {\bf s}_2$, while the vector dual to it is the heliconical pitch axis; both are singular along the $\beta$ lines. The associated curvature 2-form (the curvature of the plane field $\xi$) is \begin{equation} \Omega = d\omega = \frac{-1}{2} \epsilon_{ijk} \,n_{i} \,dn_{j} \wedge dn_{k} . \end{equation} In the heliconical ground state we have $\omega = q \cos\theta \,dz$, the torsion is $\tau = q \cos^2\theta$, the heliconical pitch axis is ${\bf e}_z$ and the curvature vanishes. When the local helical structure varies slowly as in the director ${\bf n} = \cos\theta \,{\bf e}_z + \sin\theta [ \cos\phi \,{\bf e}_x + \sin\phi \,{\bf e}_y ]$ the Frenet-Serret connection is $\omega \approx \cos\theta \,d\phi$, the torsion is $\tau \approx \cos^2\theta \,|\nabla\phi|$, the heliconical pitch axis is $\nabla\phi / |\nabla\phi| = {\bf N}$ and the curvature is $\Omega = - \sin\theta \,d\theta \wedge d\phi$. We remark that the bend is invariant under the nematic symmetry ${\bf n} \to -{\bf n}$ and as a consequence both the curvature $\kappa$ and Frenet-Serret normal ${\bf s}_1$ are unchanged under this transformation. On the other hand, the binormal ${\bf s}_2 = {\bf n} \times {\bf s}_1$ changes sign, as does the Frenet-Serret connection $\omega$ and curvature $\Omega$. This latter is the well-known change in sign of nematic hedgehog charge under ${\bf n} \to -{\bf n}$~\cite{alexander2012}. The heliconical pitch axis also reverses but the torsion $\tau$ is invariant. 
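For the heliconical ground state these formulae can be verified directly from the definitions; the following \texttt{sympy} sketch (added here for illustration, not part of the original text) computes the Frenet-Serret frame and checks that $\kappa = q\sin\theta\cos\theta$ and $\tau = q\cos^2\theta$:

```python
import sympy as sp

z = sp.symbols('z', real=True)
q, theta = sp.symbols('q theta', positive=True)  # take 0 < theta < pi/2

# Heliconical ground state with pitch axis e_z.
n = sp.Matrix([sp.sin(theta)*sp.cos(q*z), sp.sin(theta)*sp.sin(q*z), sp.cos(theta)])

# Bend b = (n . grad) n; the texture depends on z only, so (n . grad) = cos(theta) d/dz.
b = sp.cos(theta)*n.diff(z)
kappa = sp.sqrt(b.dot(b))          # curvature |b|
s1 = b/kappa                       # Frenet-Serret normal
s2 = n.cross(s1)                   # binormal

# Torsion tau = omega(n) = (grad_n s1) . s2, with grad_n = cos(theta) d/dz here.
tau = sp.simplify((sp.cos(theta)*s1.diff(z)).dot(s2))
```

Evaluating numerically for any cone angle in $(0,\pi/2)$ recovers $\tau = q\cos^2\theta$, in agreement with the connection $\omega = q\cos\theta\,dz$ evaluated on ${\bf n}$.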
Along a $\beta$ line the Frenet-Serret connection degenerates as a multiple of the angular form winding around it, which provides an orientation of the $\beta$ line; like the connection, this orientation reverses under ${\bf n} \to -{\bf n}$. The integral of the curvature over a surface $S$ detects topological properties of the director field as described by the Gauss-Bonnet-Chern theorem \begin{equation} \frac{1}{2\pi} \int_{\partial S} \omega - \frac{1}{2\pi} \int_{S} \Omega = e_{\xi}(S) = \sum_{j} \textrm{Int}\bigl( \beta_{j} , S \bigr) , \label{Seq:GBC} \end{equation} where the Euler number $e_{\xi}(S)$ of the plane field $\xi$ can equally be calculated as the total intersection number of the surface with the $\beta$ lines by Poincar\'e duality. This number depends on the homology class of the surface $S$ relative to its boundary. As an example, consider the screw dislocation textures of \S\ref{subsec:screw} and let $S$ be a disc of (large) radius $R$ in the plane $z=0$, centred on the origin. On the boundary of the disc where the director is locally the heliconical state with preferred cone angle $\theta_0$, the Frenet-Serret connection is $\omega = \cos\theta_0 \,d\phi$ with $\phi = qz + s \arctan(y/x)$ and \begin{equation} \frac{1}{2\pi} \int_{\partial S} \omega = \frac{\cos\theta_0}{2\pi} \int_{\partial S} \biggl( q \,dz + s \frac{-y \,dx + x \,dy}{x^2+y^2} \biggr) = s \cos\theta_0 . \end{equation} The curvature is $\Omega = - \sin\theta \,d\theta\wedge d\phi$ and its integral is (minus) the area swept out by the director field over $S$ \begin{equation} \frac{1}{2\pi} \int_{S} \Omega = s \bigl( \cos\theta_0 - 1 \bigr) , \end{equation} so that the Gauss-Bonnet-Chern theorem gives \begin{equation} \frac{1}{2\pi} \int_{\partial S} \omega - \frac{1}{2\pi} \int_{S} \Omega = s \cos\theta_0 - s \bigl( \cos\theta_0 - 1 \bigr) = s . 
\end{equation} The Euler number is the strength of the screw dislocation; as there is a single $\beta$ line along the $z$-axis it is also the intersection number of the $\beta$ line with $S$. A similar example can be given for the double twist director of \S\ref{sec:Skyrmion} \begin{equation} {\bf n} = \cos\theta \,{\bf e}_z + \sin\theta \bigl[ \sin \arctan(y/x) \,{\bf e}_x - \cos \arctan(y/x) \,{\bf e}_y \bigr] , \end{equation} that describes the core region of a Skyrmion. Here $\theta = \theta(\rho)$ is a function of the radial distance $\rho=\sqrt{x^2+y^2}$ from the axis of the cylinder, along which $\theta$ vanishes. The bend is \begin{equation} {\bf b} = \frac{-\sin^2\theta}{\rho} \bigl[ \cos \arctan(y/x) \,{\bf e}_x + \sin \arctan(y/x) \,{\bf e}_y \bigr] , \end{equation} and the Frenet-Serret connection and curvature are \begin{align} & \omega = \cos\theta \frac{-y \,dx + x \,dy}{x^2+y^2} , && \Omega = \frac{-\sin\theta \,\theta^{\prime}}{\rho} \,dx\wedge dy . \end{align} Integrating over a disc of radius $R$ in the plane $z=0$ (centred on the axis) we have \begin{equation} \frac{1}{2\pi} \int_{\partial S} \omega - \frac{1}{2\pi} \int_{S} \Omega = \cos\theta(R) - \bigl( \cos\theta(R) - 1 \bigr) = 1 , \end{equation} corresponding to the intersection number of the $\beta$ line along the axis of the double twist cylinder with the disc. In a Skyrmion this core region of double twist connects smoothly to an asymptotic director corresponding to a pure heliconical ground state. As the bend vector of the double twist region has winding number $+1$ in the $xy$-plane, while that of the heliconical ground state is constant, there is necessarily a $\beta$ line involved in any such interpolation. 
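This count can also be checked by direct numerical quadrature, for any cone-angle profile with $\theta(0)=0$; a brief sketch with \texttt{numpy} (the profile $\theta = \theta_0\tanh\rho$ and the parameter values are arbitrary choices):

```python
import numpy as np

theta0, R = 0.6, 8.0                      # sample cone angle and disc radius
rho = np.linspace(0.0, R, 200001)
theta = theta0*np.tanh(rho)               # any profile with theta(0) = 0
dtheta = np.gradient(theta, rho)

def trapezoid(f, s):
    # simple composite trapezoidal rule
    return float(np.sum(0.5*(f[1:] + f[:-1])*np.diff(s)))

# Boundary term: (1/2pi) int_{dS} omega = cos(theta(R)), from omega = cos(theta) dphi.
boundary = np.cos(theta[-1])

# Curvature term: (1/2pi) int_S Omega = -int_0^R sin(theta) theta' d rho.
curvature = -trapezoid(np.sin(theta)*dtheta, rho)

# Gauss-Bonnet-Chern: boundary - curvature = intersection number with the beta line.
euler = boundary - curvature
```

The result is $1$ to numerical precision, independent of the profile, matching the single intersection of the $\beta$ line with the disc.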
For any surface $S$ extending into the heliconical ground state we have $\omega|_{\partial S} = q \cos\theta \,dz$ and \begin{equation} \frac{1}{2\pi} \int_{\partial S} \omega - \frac{1}{2\pi} \int_{S} \Omega = 0 - \Bigl[ \bigl( \cos\theta(R) - 1 \bigr) + \bigl( -1 - \cos\theta(R) \bigr) \Bigr] = 2 , \end{equation} the additional contribution from the integrated curvature being the area (divided by $2\pi$) swept out on the unit sphere by the director field in the interpolation between the inner double twist region and asymptotic heliconical texture. Again, the Euler number is the total intersection number of $S$ with the $\beta$ lines, and is twice the Skyrmion charge. \begin{figure}[tb] \centering \includegraphics[width=0.7\linewidth]{SLfig.png} \caption{Illustration of the transverse self-linking number associated to a closed integral curve of the director field, here a planar circle (blue). (a) The Frenet-Serret framing with the normal (direction of the bend) in orange and the binormal in cyan. The self-linking number of the closed integral curve with this framing is zero but the frame has a singularity at the centre of the disc. (b) A trivialisation of the orthogonal plane field over the disc induces a framing of the closed integral curve with self-linking $-1$ (illustrated using the red curve displaced from $K$ along a basis vector of the trivialisation); this is the transverse self-linking number of the curve. In both panels we show a top down view and a side-on view for clarity.} \label{Sfig:SL} \end{figure} We finish this section by noting a connection to the C\u{a}lug\u{a}reanu theorem~\cite{calugareanu1961,fuller1971} that arises for closed integral curves of the director. Suppose $K$ is such a closed integral curve. 
Since the director is the tangent vector to this curve and $\omega({\bf n})=\tau$, the integral of the connection yields the twist of $K$ with its Frenet-Serret framing \begin{equation} \frac{1}{2\pi} \int_{K} \omega = \frac{1}{2\pi} \int_{K} \tau \,ds = \textrm{Tw}(K) . \end{equation} If $S$ is any Seifert surface for $K$ then the intersection number of the $\beta$ lines with $S$ is equal to the difference between the Frenet-Serret self-linking number of $K$, $\textrm{SL}(K)$, and the self-linking number of a framing that extends over $S$ without any singularities, which we call the transverse self-linking number, $\overline{\textrm{SL}}(K;S)$, a quantity of significance in contact topology~\cite{geiges2008,machon2016}. The transverse self-linking number is illustrated in Fig.~\ref{Sfig:SL} for the simplest example of a planar circle bounding a disc. With these two identifications~\eqref{Seq:GBC} becomes \begin{equation} \textrm{Tw}(K) - \frac{1}{2\pi} \int_{S} \Omega = \textrm{SL}(K) - \overline{\textrm{SL}}(K;S) . \end{equation} Finally, using the C\u{a}lug\u{a}reanu theorem~\cite{calugareanu1961,fuller1971}, $\textrm{SL}(K)=\textrm{Tw}(K)+\textrm{Wr}(K)$, where $\textrm{Wr}(K)$ is the writhe of $K$, we obtain a geometric integral formula for the transverse self-linking number \begin{equation} \overline{\textrm{SL}}(K;S) = \frac{1}{2\pi} \int_{S} \Omega + \textrm{Wr}(K), \end{equation} as a sum of the total Berry curvature of the Seifert surface and the writhe of the closed integral curve. Of course, the integrated curvature has the interpretation as the twist of $K$ with the transverse framing. \subsection{Knots, Merons, Linking and Self-Linking} \label{sec:Meron} In this final section we consider some examples of the global properties of $\beta$ lines, when they form closed loops, knots and links. 
These are relevant to the increasing number of complex, three-dimensional knotted fields~\cite{chen2013prl,ackerman2017prx,machon2016,machon2016prsa,sutcliffe2018,tai2019}, whose intricate structures realise knotted field lines, disclinations and geometric degeneracies, including umbilic and $\beta$ lines. The simplest example is obtained by wrapping the edge dislocation discussed in \S\ref{subsec:edge} around an axis to form a circular loop. This example is shown in Fig.~\ref{fig:edge_circle} and illustrates several concepts from the preceding sections. First, we observe that the profile of the bend around the circular $\beta$ line changes as we move along it, from a $-1$ winding to a $+1$ winding. Consequently, there must be a pair of Legendrian points on the $\beta$ line. The local structure of the Legendrian points is given by the saddle-node bifurcation, where the winding around a critical point changes sign, as described in~\cite{etnyre1999}. If the $\beta$ line were flat, lying entirely in a plane of constant $z$, then every point would be Legendrian. This is non-generic, so the $\beta$ line is tilted out of this plane. The Legendrian points are indicated by blue spheres in Fig.~\ref{fig:edge_circle}(c,d). In Fig.~\ref{fig:edge_circle}(d), we show the bend on a slice that intersects the $\beta$ line at two points directly between the Legendrian points, indicated by a yellow sphere, where the bend has winding $+1$ around the line, and a purple sphere, where the bend has winding $-1$. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{edge_circle.png} \caption{A circular edge dislocation embedded in a heliconical background. The phase field $\phi$ is shown on (a) a surface away from the $\beta$ line (green), and (b) a surface that intersects the $\beta$ line. (c) There are two Legendrian points on the line, indicated by blue spheres. The director (blue curves) and bend (orange) are shown on a slice intersecting the two Legendrian points. 
(d) The bend is shown on a slice through the $\beta$ line, which intersects the $\beta$ line at points halfway between the Legendrian points, indicated by coloured spheres. At the purple point, the winding of the bend around the $\beta$ line is $-1$. At the yellow point, the winding of the bend around the $\beta$ line is $+1$, as can be seen from examining the bend vector field itself.} \label{fig:edge_circle} \end{figure} As well as realising the unknot as a $\beta$ line, it is possible to embed an arbitrary knotted or linked set of $\beta$ lines into a heliconical background, via an extension of our constructions for screw and edge dislocations. Given any knot or link $K$, the director \begin{equation} {\bf n} = \cos\theta \,{\bf e}_z + \sin\theta \bigl[ \cos\phi_K \,{\bf e}_x + \sin\phi_K \,{\bf e}_y \bigr] , \label{eq:meron} \end{equation} where $\phi_{K} = qz + \frac{1}{2} \omega_{K}$, with $\omega_K$ the solid angle function for $K$~\cite{binysh2018}, embeds a helical winding of the director integral curves around a tubular neighbourhood of $K$; as before, the cone angle $\theta$ should be made to vary from its far field value to vanish along $K$. The phase winding in the helical integral curves guarantees the existence of a $\beta$ line. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{meron3d.png} \caption{A meron tube along a trefoil knot. (a) The $\beta$ line (green), shown from above. (b-i) The helical phase field on different slices through the texture. The meron tube is an edge dislocation, and the changes in the helical phase field shown on the slices as one passes through the $\beta$ line should be compared with the edge dislocation shown in Fig.~\ref{fig:Edge}.} \label{fig:meron3D} \end{figure} The director texture is that of a meron tube extruded along $K$. A meron is a fractionalisation of a Skyrmion that carries half the topological charge~\cite{duzgun2018,yu2018}. 
$\beta$ lines provide a natural geometric perspective on this fractionalisation: since each Skyrmion comprises two $\beta$ lines, a single $\beta$ line represents half a Skyrmion, {\sl i.e.} a meron. In terms of the heliconical phase field, $\phi_{K}$, these meron tubes are edge dislocations where heliconical layers terminate. Exactly these structures were recently created experimentally in cholesteric cells and shown to form highly controllable and responsive knotted solitons~\cite{tai2019}. In that experiment, links of `escape up' and `escape down' meron tubes combined to give non-zero Hopf invariant. For the twist-bend nematic phase, the small conical angle ($\theta \approx 25^{\circ}$~\cite{chen2013}) gives an energetic preference to `escape up' merons over `escape down', whereas in cholesterics ($\theta = \pi/2$) the two types of meron are degenerate. An example for the trefoil knot is shown in Fig.~\ref{fig:meron3D}. The phase field $\phi$ is shown on several slices through the $\beta$ line, which is shown as a green curve in each panel. These slices should be compared with the structure of the phase field for an edge dislocation in Fig.~\ref{fig:Edge}. Panels (b-i) show the change in the phase field on a surface as one slides that surface across the $\beta$ line. The change in the number of layers as the surface crosses the $\beta$ line is clear from an examination of the phase field. Similar images are shown in Fig.~\ref{fig:hopf3D} for the two Hopf links, with linking number $+1$, Fig.~\ref{fig:hopf3D}(a), and $-1$, Fig~\ref{fig:hopf3D}(b). \begin{figure}[t] \centering \includegraphics[width=\linewidth]{hopf3d.png} \caption{A meron tube along (a) a Hopf link with linking number $+1$, and (b) a Hopf link with linking number $-1$. 
In each panel the $\beta$ lines are shown in green, and the colours on each slice show the helical phase field.} \label{fig:hopf3D} \end{figure} Knotted meron tubes illustrate a further property of the $\beta$ lines, which capture not only the Euler class of the director and the Skyrmion charge (via the Gauss-Bonnet-Chern theorem \eqref{Seq:GBC}) but also the same information as the Hopf invariant. Classically, three-dimensional knotted solitons in $S^3$ (or $\mathbb{R}^3$ with a uniform background director) are characterised by the homotopy group $\pi_3(S^2)$. The Hopf invariant establishes an isomorphism between this group and the integers, $\pi_3(S^2) \cong \mathbb{Z}$, and is computed via the linking of preimages. Gompf and Stipsicz~\cite{Gompf} offer an alternative way of describing this invariant which connects it to the zeros of a vector field orthogonal to the director, such as the bend. The invariant is a linking number, \begin{equation} \label{eq:hopf} \Theta = \sum_{i} s_i^2 \mathrm{SL} ( \beta_i ) + \sum_{i\neq j} s_i s_j\mathrm{Lk} ( \beta_i , \beta_j ), \end{equation} familiar from helicity and abelian Chern-Simons theory~\cite{ArnoldKhesin}, where the $j$th $\beta$ line $\beta_j$ has strength $s_j$. The self-linking number, $\mathrm{SL}(\beta)$, is defined as follows: consider the total rotation $\int_{B^{\prime}} {\bf e}_2 \cdot d{\bf e}_1$ of the Frenet-Serret frame about the director along any push-off $B^{\prime}$ giving a zero-framing for the $\beta$ line. Part of this rotation is an intrinsic Berry phase $\gamma$, equal to the area on the unit sphere bounded by the curve traced out by ${\bf n}$ along the $\beta$ line. The difference $\gamma - \int_{B^{\prime}} {\bf e}_2 \cdot d{\bf e}_1 = 2\pi \, \textrm{SL}(\beta)$ defines the self-linking. 
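Once the strengths, self-linking numbers and pairwise linking numbers of the $\beta$ lines are known, $\Theta$ in~\eqref{eq:hopf} is immediate to evaluate. A minimal sketch (the helper name is illustrative), applied to the axially-symmetric Hopfion data discussed below, with two $\beta$ lines of strengths $\pm 1$, vanishing self-linking, and mutual linking $-H$:

```python
# Illustrative helper: evaluate Theta from beta-line data
# (strengths s_i, self-linking numbers SL_i, linking matrix Lk_ij).
def theta_invariant(strengths, self_link, link):
    n = len(strengths)
    total = sum(strengths[i] ** 2 * self_link[i] for i in range(n))
    total += sum(strengths[i] * strengths[j] * link[i][j]
                 for i in range(n) for j in range(n) if i != j)
    return total

# Axially-symmetric Hopfion with Hopf invariant H: two beta lines of
# strengths +1 and -1, each with zero self-linking and mutual linking -H.
H = 3
print(theta_invariant([1, -1], [0, 0], [[0, -H], [-H, 0]]))  # -> 6 (= 2H)
```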
In the (non-generic) case where the pushoff $B^\prime$ is transverse to the planes $\xi$ orthogonal to the director, the self-linking number just defined is the same as the self-linking number of $B^\prime$ computed by pushing off along the bend vector field. In general, there is no direct relationship between $\Theta$ and the Hopf invariant; however, they capture the same fundamental topology, and a uniform state with vanishing Hopf invariant will also have vanishing $\Theta$. For example, we may produce a director with Hopf invariant $H$ by taking a double-twist cylinder (Skyrmion tube) and twisting it $H$ times before joining the endpoints. The resulting solid torus can be embedded into a uniform background to give an `axially-symmetric' Hopfion~\cite{sutcliffe2018}. As we have discussed, there are two $\beta$ lines, a central line with strength $+1$ and a second $\beta$ line wrapping around it with strength $-1$, as shown in Fig.~\ref{Sfig:Skyrmion}. Both lines have vanishing self-linking number, while the linking number of the two $\beta$ lines is $-1$, so that $\Theta = 2H$. As a second example, consider the trefoil knot shown in Fig.~\ref{fig:meron3D}. There is a single $\beta$ line corresponding to the green curve, and consequently the invariant $\Theta$ is equal to the self-linking number. In the construction we have given, the framing on the $\beta$ line is the solid angle framing, so that the self-linking number vanishes, and also $\Theta=0$. \end{document}
\section{Introduction} Submodularity is a property of set functions equivalent to the notion of diminishing returns. More formally, we say that a set function $f:2^E \to \mathbb{R}$ is \emph{submodular} if for any two sets $A\subseteq B \subseteq E$ and an element $e \notin B$, the corresponding marginal gains satisfy $f(A \cup \{e\}) -f(A) \geq f(B \cup \{e\}) -f(B)$. Submodularity has found a wide range of connections and applications to different computer science areas in recent years. However, many objectives arising in practice do not satisfy the diminishing returns property, but rather a weaker version of it. This has motivated several lines of work exploring different ways to relax the submodularity property \cite{das2011submodular,feige2013welfare,chen2018capturing,horel2016maximization,feige2015unifying,ghadiri2019beyond,ghadiri2020parameterized}. One such relaxation that has received a lot of attention from the machine learning community is the notion of weak submodularity (we postpone the formal definition to Section~\ref{sec:definitions}), originally introduced by Das and Kempe~\cite{das2011submodular}. They provided applications to the feature selection and the dictionary selection problems, and showed that the standard greedy algorithm achieves a $(1-e^{-\gamma})$-approximation for the monotone maximization problem subject to a cardinality constraint. Here the parameter $\gamma \in [0,1]$ is called the submodularity ratio, and it measures how ``close'' the function is to being submodular. Weak submodularity has found applications in areas such as linear and nonlinear sparse regression~\cite{elenberg2017streaming,khanna2017scalable}, high-dimensional subset selection~\cite{elenberg2018restricted}, interpretability of black-box neural network classifiers~\cite{elenberg2017streaming}, video summarization, splice site detection, and black-box interpretation of images~\cite{chen2018weakly}. 
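For concreteness, the standard greedy algorithm mentioned above admits a very short implementation. The following Python sketch is ours (not taken from~\cite{das2011submodular}), and the coverage objective is an illustrative toy instance of a monotone submodular function.

```python
# A minimal sketch of the standard greedy algorithm for
# max{ f(S) : |S| <= k }; the coverage objective below is an
# illustrative toy instance.
def greedy(f, ground_set, k):
    S = set()
    for _ in range(k):
        # add the element with the largest marginal gain f_S(e)
        e = max(ground_set - S, key=lambda x: f(S | {x}) - f(S))
        S.add(e)
    return S

# Monotone submodular example: coverage of a small universe.
cover = {1: {'a', 'b'}, 2: {'b', 'c'}, 3: {'c'}, 4: {'d'}}
f = lambda S: len(set().union(*(cover[i] for i in S))) if S else 0
print(f(greedy(f, set(cover), 2)))  # -> 3
```

For submodular objectives ($\gamma = 1$) this procedure guarantees a $(1-1/e)$-approximation; for $\gamma$-weakly submodular ones the guarantee degrades to $1-e^{-\gamma}$.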
In subsequent work, Das and Kempe~\cite{das2018approximate} left as an open question whether some of these theoretical guarantees can be extended to non-monotone objectives. As their definition of weak submodularity is targeted at monotone functions, they raise the question of whether there is a more general definition that retains some of the positive results of their work, while also yielding an analogue to non-monotone objectives. \iffalse As their definition is targeted at monotone submodular functions, they mention it seems unlikely that a similar bound will carry over for non-monotone objectives. They conclude by raising the question of whether there is a more general definition of weak submodularity that retains some of the positive results of their work for monotone functions, while also yielding an analogue to non-monotone objectives. \fi One main goal of this work is to answer that question. We believe this is interesting for both theoretical and practical purposes, given that non-monotone submodular objectives have found a wide range of applications in computer science. Some of these include document summarization~\cite{lin2010multi,lin2011class}, MAP inference for determinantal point processes~\cite{gillenwater2012near}, personalized data summarization~\cite{mirzasoleiman2016fast}, nonparametric learning~\cite{zoubin2013scaling}, image summarization~\cite{tschiatschek2014learning}, and removing redundant elements from DNA sequencing~\cite{libbrecht2018choosing}. Hence, it seems natural to study how the approximation guarantees for non-monotone submodular maximization degrade in terms of the submodularity ratio. \iffalse All the above works, however, have focused on monotone objectives, and little seems to be known about the behavior of non-monotone weakly submodular functions. This seems to be an important question, given that non-monotone submodular objectives have found a wide range of applications in computer science. 
Some of these include document summarization~\cite{lin2010multi,lin2011class}, MAP inference for determinantal point processes~\cite{gillenwater2012near}, personalized data summarization~\cite{mirzasoleiman2016fast}, nonparametric learning~\cite{zoubin2013scaling}, image summarization~\cite{tschiatschek2014learning}, and removing redundant elements from DNA sequencing~\cite{libbrecht2018choosing}. \fi In this work we introduce a natural generalization of weak submodularity to the non-monotone setting. We then show that a fast and simple randomized greedy algorithm retains some of the good theoretical guarantees available for (non-monotone) submodular objectives. In addition, for monotone weakly submodular functions, this algorithm retains the approximation guarantee of $1-e^{-\gamma}$ given in~\cite{das2011submodular}. \iffalse Thus making the algorithm a great candidate for approaching the problem $\max \{f(S): |S| \leq k \}$ whenever $f$ is either monotone or non-monotone weakly submodular. In addition, given its simplicity and speed, we believe the algorithm can be a good choice for using in practice. \fi A second main contribution of our work is to provide a more refined analysis that takes into account that the submodularity ratio parameter may change (sometimes improving) throughout the execution of the algorithm. We provide several applications where this more refined bound leads to improved approximation guarantees, for both monotone and non-monotone maximization problems. \iffalse However, instead of assuming a \emph{global} (i.e., the same) parameter $\gamma$ throughout the algorithm, we design a more refined analysis that takes into account that the parameter $\gamma= \gamma_{A,B}$ changes (usually improving) through the execution of the algorithm, depending on the sets $A,B$ at hand. 
We discuss several explicit applications where this more refined argument leads to improvements in terms of the approximation guarantees, some times beating the current state of the art. In addition, to the best of our knowledge our results provide the first theoretical guarantees for non-monotone weakly submodular objectives. \fi \iffalse The main goal of this work is to study the problem $\max \{f(S): |S| \leq k \}$ where the function $f$ can be either monotone or non-monotone weakly submodular. However, instead of assuming a \emph{global} (i.e., the same) parameter $\gamma$ throughout the algorithm, we design a more refined analysis that takes into account that the parameter $\gamma= \gamma_{A,B}$ changes (usually improving) through the execution of the algorithm, depending on the sets $A,B$ at hand. We discuss several explicit applications where this more refined argument leads to improvements in terms of the approximation guarantees, some times beating the current state of the art. In addition, to the best of our knowledge our results provide the first theoretical guarantees for non-monotone weakly submodular objectives. \fi \iffalse \fi The rest of this section is organized as follows. In Section~\ref{sec:definitions} we extend weak submodularity to the non-monotone setting. In Section~\ref{sec:gamma_A,B} we discuss the notion of local submodularity ratio. We discuss several examples and applications in Section~\ref{sec:examples}. Our main contributions are presented in Section~\ref{sec:contributions}. Additional related work regarding weak submodularity and non-monotone submodular maximization is discussed in Section~\ref{sec:related-work}. \subsection{Weak submodularity and non-monotonicity}\label{sec:definitions} Throughout this paper we use $f_A(B)$ to denote the marginal gain of adding the set $B$ to $A$, that is $f(A \cup B) - f(A)$. 
A non-negative monotone set function $f:2^E \to \mathbb{R}_+$ is \emph{$\gamma$-weakly submodular} for some parameter $0 \leq \gamma \leq 1$, if for any pair of disjoint sets $A,B \subseteq E$, it satisfies $ \sum_{e \in B} f_A (e) \geq \gamma \cdot f_A (B). $ We note that this is the definition used in~\cite{bian2017guarantees,chen2018weakly,elenberg2017streaming}, which is slightly adapted from the original definition given in~\cite{das2011submodular, das2018approximate}. The parameter $\gamma$ is called the \emph{submodularity ratio}. When $f$ is monotone, it is clear that for any value of $\gamma \in [0,1]$ the above class contains monotone submodular functions. However, for non-monotone objectives the marginal gains can be negative, and in this case we have $\gamma f_A (B) \geq f_A (B)$ whenever $f_A (B) \leq 0$, leading to a stronger condition than diminishing returns. This motivates us to introduce the following two classes of non-monotone non-submodular functions. \begin{definition}[pseudo and weak submodularity] Given a scalar $0< \gamma\leq1$, we say that a set function $f:2^E \to \mathbb{R}_+$ is: \vspace*{0.05cm} \begin{enumerate} \item $\gamma$-pseudo submodular if $\sum_{e \in B} f_A (e) \geq \gamma f_A (B)$ for any pair of disjoint sets $A,B \subseteq E$. \vspace*{0.05cm} \item $\gamma$-weakly submodular if $\sum_{e \in B} f_A (e) \geq \min\{ \gamma f_A (B), \frac{1}{\gamma} f_A (B)\}$ for any $A,B \subseteq E$ disjoint. \end{enumerate} \end{definition} We first note that for monotone functions, the above two definitions are equivalent to the notion of $\gamma$-weak submodularity from previous works~\cite{bian2017guarantees,chen2018weakly,elenberg2017streaming}. This follows immediately from the fact that monotone functions satisfy $f_A(B) \geq 0$ for all $A,B \subseteq E$, and hence $\min\{ \gamma f_A (B), \frac{1}{\gamma} f_A (B)\} = \gamma f_A (B)$. 
For any value $\gamma \in (0,1]$ the above definition of $\gamma$-weak submodularity leads to a weaker notion of diminishing returns (i.e., it contains non-monotone submodular functions). Indeed, if $f_A (B) \geq 0$ we have $\sum_{e \in B} f_A (e) \geq \gamma f_A (B)$, while if $f_A (B) < 0$ we have $\sum_{e \in B} f_A (e) \geq \frac{1}{\gamma} f_A (B)$. On the other hand, while the class of $\gamma$-pseudo submodular functions does not properly contain non-monotone submodular functions, it does contain functions that are not necessarily submodular. We show this in Figure~\ref{fig:function-hierarchy}. \begin{figure} \centering \includegraphics[scale=0.8]{image.pdf} \caption{Hierarchy of the different function classes.}\label{fig:function-hierarchy} \end{figure} \subsection{Local submodularity ratio}\label{sec:gamma_A,B} The submodularity ratio $\gamma$ is in general a very pessimistic bound for most applications. This is due to the fact that $\gamma$ is defined as a \emph{global} bound, in the sense that it must hold for any pair of disjoint sets $A,B \subseteq E$, or at least for any pair of sets that are relevant to the execution of the algorithm (e.g., sets of cardinality at most $k$). We next discuss a natural way to ``refine'' this bound. Given a function $f:2^E \to \mathbb{R}_+$ and any pair of disjoint sets $A,B \subseteq E$, in this work we denote by $\gamma^f_{A,B}$ any non-negative scalar satisfying $\sum_{e \in B} f_A (e) \geq \gamma^f_{A,B} \cdot f_A (B)$. When it is clear from the context we usually simplify the notation to $\gamma_{A,B}$ instead of $\gamma^f_{A,B}$. One of our contributions is showing how using these local bounds can be beneficial in some settings. 
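To illustrate the gap between the global and local bounds, the following brute-force sketch computes the tight value of $\gamma_{A,B}$ for every disjoint pair on a toy instance; the supermodular choice $f(S)=|S|^2$ is ours, purely for illustration.

```python
from itertools import combinations

# Sketch: exact local ratios gamma_{A,B} = sum_{e in B} f_A(e) / f_A(B)
# for a toy monotone function with positive marginal gains, here the
# supermodular choice f(S) = |S|^2.
E = set(range(4))
f = lambda S: len(S) ** 2

def local_gamma(A, B):
    fA = f(A)
    return sum(f(A | {e}) - fA for e in B) / (f(A | B) - fA)

gammas = [local_gamma(set(A), set(B))
          for r in range(len(E)) for A in combinations(E, r)
          for s in range(1, len(E) - r + 1)
          for B in combinations(E - set(A), s)]
# The global ratio is the worst local one, attained at A = {} and
# B = E; pairs with small B have much larger local ratios.
print(min(gammas), max(gammas))  # -> 0.25 1.0
```

Here the global ratio is $1/4$, while every pair with $|B|=1$ has local ratio $1$, which is exactly the kind of slack a local analysis can exploit.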
In particular we discuss several natural classes of functions for which (i) one can compute explicit bounds for the value $\gamma_{A,B}$ (see Section~\ref{sec:examples}), and (ii) using the local bounds $\gamma_{A,B}$ (instead of $\gamma$) leads to significantly better theoretical guarantees (we discuss this in more detail in Section~\ref{sec:contributions}). We believe this is interesting for both theoretical and practical applications. \subsection{Examples and applications}\label{sec:examples} In this section we present several classes of functions for which the parameter $\gamma_{A,B}$ can be bounded explicitly, and discuss applications arising from these results. We postpone the proofs to Appendix~\ref{sec:appendix-examples-proof}. \iffalse We first note that since monotone functions satisfy $f_A(B) \geq 0$ for any $A,B \subseteq E$, it is clear that the larger the value of $\gamma_{A,B}$ the better. That is, the larger the value of $\gamma_{A,B}$ the closer it is to satisfying the diminishing returns property. However, this is not the case in general for non-monotone functions. Since the marginal gains in this case can be negative, a lower value of $\gamma_{A,B}$ may be better. Hence for monotone objectives we present our bounds for $\gamma_{A,B}$ using an inequality, while for non-monotone functions we present them with equality (since we do not know in principle whether $f_A(B)$ is positive or negative). \fi Our first example is the so-called metric diversity function (also known as remote clique). Here we are given a metric (i.e., a distance that satisfies the triangle inequality) $d:E \times E \to \mathbb{R}_+$ over a finite set $E$, where $d(u,v)$ measures the dissimilarity between two elements $u$ and $v$. One then defines a set function $f(S)= \frac{1}{2} \sum_{u\neq v \in S} d(u,v)$ that measures the diversity inside the set $S$. 
The problem $\max \{f(S): |S| \leq k\}$ of finding a diverse subset has been studied in the operations research community~\cite{hassin1997approximation, ravi1994heuristic, birnbaum2009improved}, and has found applications in other areas~\cite{agrawal2009diversifying, drosou2010search}. \begin{restatable}{example}{metrics}\label{ex:diversity} Given a metric $d:E \times E \to \mathbb{R}_+$, consider the function given by $f(S)=\sum_{ \{u,v\} \subseteq S} d(u,v)$, which is monotone and supermodular. Then, we have $ \gamma_{A,B} \geq \frac{a}{a+b-1} $ for any two disjoint sets $A,B \subseteq E$, where $a = |A|$ and $b = |B|$. \end{restatable} The works of~\cite{borodin2014weak,borodin2015proportionally} introduced the notion of proportionally submodular functions\footnote{They called them weakly submodular at first, and changed the name in subsequent work.}. A set function $f: 2^E \to \mathbb{R}_+$ is \emph{proportionally submodular} if $ |S|f(T) + |T|f(S) \geq |S \cap T|f(S \cup T) + |S \cup T|f(S \cap T) $ for every $S,T \subseteq E$. In the monotone setting, this class properly contains monotone submodular functions. In addition, this class also contains some non-submodular objectives such as the (supermodular) metric diversity function discussed in Example~\ref{ex:diversity}. Since these functions are closed under addition, the sum of a monotone submodular function and a metric diversity function is proportionally submodular. Our next result bounds the parameter $\gamma_{A,B}$ for this class, in both the monotone and non-monotone settings. \begin{example}\label{ex:proportionally-submod} A non-negative proportionally submodular function $f: 2^E \to \mathbb{R}_+$ has $ \gamma_{A,B} \geq \frac{3 a (1 + a)}{3 a^2 + 3 a b + b^2 - 1} $ for any two disjoint sets $A,B \subseteq E$, where $a=|A|$ and $b=|B|$. \end{example} The above result leads to interesting applications. 
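The bound of Example~\ref{ex:diversity} can be checked numerically. The sketch below draws a random Euclidean metric on seven points (an illustrative instance of ours) and verifies the inequality $\sum_{e \in B} f_A(e) \geq \frac{a}{a+b-1} f_A(B)$ for all disjoint pairs of small sets.

```python
import random
from itertools import combinations

random.seed(0)
# Illustrative instance: a random Euclidean metric on seven points.
pts = {i: (random.random(), random.random()) for i in range(7)}
d = lambda u, v: ((pts[u][0] - pts[v][0]) ** 2
                  + (pts[u][1] - pts[v][1]) ** 2) ** 0.5

# Metric diversity function: sum of pairwise distances within S.
def f(S):
    return sum(d(u, v) for u, v in combinations(sorted(S), 2))

E = set(pts)
ok = True
for r in (1, 2, 3):
    for A in map(set, combinations(E, r)):
        fA = f(A)
        for s in (1, 2, 3):
            for B in map(set, combinations(E - A, s)):
                ratio = sum(f(A | {e}) - fA for e in B) / (f(A | B) - fA)
                ok &= ratio >= len(A) / (len(A) + len(B) - 1) - 1e-12
print(ok)  # -> True
```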
First, it allows us to improve on the current best approximation for maximizing a monotone proportionally submodular function subject to a cardinality constraint. In addition, combining this with other results from this work, we can also get improved approximations for the product $f\cdot g$ of a monotone submodular function $f$ and a monotone proportionally submodular function $g$. We discuss this in more detail in Section~\ref{sec:applications}. \begin{restatable}{example}{product}\label{ex:f(S)g(S)} Let $f,g:2^E \to \mathbb{R}_+$ be two monotone set functions with parameters $\gamma^f_{A,B}$ and $\gamma^g_{ A,B}$ respectively. Then the product function $h(S):=f(S) \cdot g(S)$ is also non-negative and monotone, with parameter \[ \gamma_{A,B} \geq \begin{cases} \frac{f(A)}{f(A \cup B)} \gamma^g_{A,B} & \text{if } \gamma^f_{A,B} \geq \gamma^g_{A,B} \\ \frac{g(A)}{g(A \cup B)} \gamma^f_{A,B} & \text{if } \gamma^g_{A,B} \geq \gamma^f_{A,B}, \end{cases} \] for any two disjoint sets $A,B \subseteq E$. In particular, if $f$ and $g$ have global parameters $\gamma^f$ and $\gamma^g$ respectively, such that $\gamma^f \geq \gamma^g$, then the product function $h$ has parameter $\gamma_{A,B} \geq \gamma^g \cdot \max \{ \frac{f(A)}{f(A \cup B)} , \frac{g(A)}{g(A \cup B)} \}$. \end{restatable} Using that submodular functions satisfy $\gamma_{A,B}\geq 1$, we can combine the above result with Examples~\ref{ex:diversity} and~\ref{ex:proportionally-submod} to get the following. \begin{example}\label{ex:f(S)g(S)-apps} Let $f,g : 2^E \to \mathbb{R}_+$ be two monotone functions, and let $h(S):= f(S) \cdot g(S)$ be the product function with parameter $\gamma_{A,B}$. Then we have the following. \begin{enumerate}[(a)] \item If $f$ and $g$ are submodular then $\gamma_{A,B} \geq \max \{\frac{f(A)}{f(A \cup B)}, \frac{g(A)}{g(A \cup B)}\}$. 
\item If $f$ is submodular and $g$ is the metric diversity function from Example~\ref{ex:diversity}, then $\gamma_{A,B} \geq \frac{f(A)}{f(A \cup B)} \cdot \frac{a}{a+b-1}$, where $a=|A|$ and $b=|B|$. \item If $f$ is submodular and $g$ is proportionally submodular then $\gamma_{A,B} \geq \frac{f(A)}{f(A \cup B)} \cdot \frac{3 a (1 + a)}{3 a^2 + 3 a b + b^2 - 1}$, where $a=|A|$ and $b=|B|$. \end{enumerate} \end{example} \iffalse \begin{example} Let $f$ and $g$ be two non-negative monotone submodular and proportionally submodular functions respectively. Then the product function $h(S):=f(S) \cdot g(S)$ is non-negative monotone with parameter $\gamma_{A,B} \geq \frac{f(A)}{f(A \cup B)} \cdot \frac{3 a (1 + a)}{-1 + 3 a^2 + 3 a b + b^2}$, where $a=|A|$ and $b=|B|$. \end{example} \begin{example} Let $f$ be a non-negative monotone submodular function and $g$ be a non-negative monotone. Then the product function $h(S):=f(S) \cdot g(S)$ is non-negative monotone with parameter $\gamma_{A,B} \geq \frac{f(A)}{f(A \cup B)} \cdot \frac{a}{a+b-1}$, where $a=|A|$ and $b=|B|$. \end{example} \begin{example} Let $f$ and $g$ be two non-negative monotone submodular functions. Then the product function $h(S):=f(S) \cdot g(S)$ is non-negative monotone with parameter $\gamma_{A,B} \geq \max \{\frac{f(A)}{f(A \cup B)}, \frac{g(A)}{g(A \cup B)}\}$. \end{example} \fi By taking a non-monotone submodular function $f$, and either multiplying it or dividing by the cardinality function, we obtain a new function that is no longer submodular. The next example bounds the parameter $\gamma_{A,B}$ for these functions. \begin{restatable}{example}{prodcardisubmod}\label{ex:|S|f(S)} Let $f:2^E \to \mathbb{R}_+$ be a submodular function. Then for any two disjoint sets $A,B \subseteq E$ with $|A|=a$ and $|B|=b$ we have the following. \begin{enumerate}[(a)] \item The function $g(S):=|S| \cdot f(S)$ satisfies $\gamma_{A,B} \geq \frac{a+1}{a+b}$. 
\item The function $g(S):=\frac{f(S)}{|S|}$ has $\gamma_{A,B} \leq \frac{a+b}{a+1}$. \end{enumerate} \end{restatable} \iffalse \begin{restatable}{example}{prodcardisubmod}\label{ex:|S|f(S)} Given a non-negative submodular function $f$, the function $g(S):=|S| \cdot f(S)$ has $\gamma_{A,B} \geq \frac{|A|+1}{|A|+|B|}$ for any two disjoint sets $A,B \subseteq E$. \end{restatable} \begin{restatable}{example}{divcardisubmod}\label{ex:f(S)/|S|} Given a non-negative submodular function $f$, the function $g(S):=\frac{f(S)}{|S|}$ has $\gamma_{A,B}=\frac{|A|+|B|}{|A|+1}$ for any two disjoint sets $A,B \subseteq E$. \end{restatable} \fi \iffalse Our next example can be interpreted as a natural discrete analog of the regularization term widely used in machine learning. \textcolor{blue}{I think we don't need this if the condition is barely satisfied.} \textcolor{red}{It is satisfied for submodular functions, and it is almost satisfied by functions like metric diversity or $|S|f(S)$ (we seem to be off by a factor of 1/2). If I can get it working for those two, I will leave it. Otherwise I will remove it.} \begin{restatable}{example}{regularization}\label{ex:regularization} Let $w:E \to \mathbb{R}_+$ be a modular function such that $0 \leq w(e)\leq 1$ for all $e\in E$. Let $f$ be a non-negative monotone set function with parameter satisfying $\gamma^f_{A,B} \geq \frac{ 2w(A)+1 }{ 2w(A)+ w(B) }$. Then for any $\alpha >0$, the function $h(S):=f(S) - \alpha \cdot {w(S)}^2$ has parameter $\gamma_{A,B} = \frac{ 2w(A)+1 }{ 2w(A)+ w(B) }$. \end{restatable} \fi We next discuss the behavior of the parameter $\gamma_{A,B}$ under summation, and how this result allows us to generalize some of the bounds previously discussed in this section. \begin{restatable}{proposition}{sumproperty}\label{prop:sum-property} Let $f,g:2^E \to \mathbb{R}_+$ be two set functions with parameters $\gamma^f_{A,B}$ and $\gamma^g_{A,B}$ respectively. We have the following. 
\begin{enumerate}[(a)] \item If $f$ and $g$ are both monotone, then $f+g$ is also monotone with parameter $ \gamma_{A,B} \geq \min\{\gamma^f_{A,B}, \gamma^g_{A,B} \}$. In particular, if $0 \leq \gamma^g_{A,B} \leq \gamma^f_{A,B}$ holds for all pairs of disjoint sets $A$ and $B$, then $f+g$ has parameter $\gamma_{A,B} \geq \gamma^g_{A,B}$. \item If $f$ is monotone and $g$ is non-monotone, and $0 \leq \gamma^g_{A,B} \leq \gamma^f_{A,B}$ holds for all pairs of disjoint sets $A$ and $B$, then $f+g$ has parameter $\gamma_{A,B} \geq \gamma^g_{A,B}$. \end{enumerate} \end{restatable} By combining the above proposition with Examples~\ref{ex:diversity},~\ref{ex:|S|f(S)}, and~\ref{ex:proportionally-submod} we get the following. \begin{restatable}{example}{divsubmod}\label{ex:nonmonotone} Let $f$ be a non-negative monotone submodular function. Then: \begin{itemize} \item The sum $f+g$ where $g$ is a metric diversity function satisfies $\gamma_{A,B} \geq \frac{|A|}{|A|+|B|-1}$. \item The sum $f(S) + |S|\cdot g(S)$ where $g$ is non-monotone submodular satisfies $\gamma_{A,B} \geq \frac{|A|+1}{|A|+|B|}$. \item The sum $f+g$ where $g$ is non-monotone proportionally submodular satisfies $\gamma_{A,B} \geq \frac{3 a (1 + a)}{3 a^2 + 3 a b + b^2 - 1}$, where $a=|A|$ and $b=|B|$. \end{itemize} \end{restatable} We can also combine the above result with Example~\ref{ex:f(S)g(S)} to get that the product function $(f+g) \cdot h$ satisfies $\gamma_{A,B} \geq \frac{f(A)}{f(A \cup B)} \cdot \frac{|A|}{|A|+|B|-1}$, whenever $f$ and $h$ are monotone submodular and $g$ is a metric diversity function. This generalizes the bound from Example~\ref{ex:f(S)g(S)-apps} (b). We note that the sum of a monotone submodular function and a metric diversity function has been previously studied~\cite{borodin2017max}. We discuss this in more detail in Section~\ref{sec:applications}. 
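The bound $\gamma_{A,B} \geq \frac{a+1}{a+b}$ for $g(S)=|S|\cdot f(S)$ from Example~\ref{ex:|S|f(S)} can be checked in the same exhaustive fashion; the coverage function below is an illustrative choice of monotone submodular $f$.

```python
from itertools import combinations

# Illustrative check of the bound gamma_{A,B} >= (a+1)/(a+b) for
# g(S) = |S| * f(S), with f a monotone submodular coverage function.
cover = {0: {'x'}, 1: {'x', 'y'}, 2: {'y', 'z'}, 3: {'w'}}
f = lambda S: len(set().union(*(cover[i] for i in S))) if S else 0
g = lambda S: len(S) * f(S)

E = set(cover)
ok = True
for r in range(len(E)):
    for A in map(set, combinations(E, r)):
        gA = g(A)
        rest = E - A
        for s in range(1, len(rest) + 1):
            for B in map(set, combinations(rest, s)):
                lhs = sum(g(A | {e}) - gA for e in B)
                rhs = (len(A) + 1) / (len(A) + len(B)) * (g(A | B) - gA)
                ok &= lhs >= rhs - 1e-9
print(ok)  # -> True
```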
\subsection{Additional related work}\label{sec:related-work} The notion of weak submodularity was introduced by Das and Kempe~\cite{das2011submodular}, where they showed that the standard greedy algorithm achieves a $(1-e^{-\gamma})$-approximation for the monotone maximization problem subject to a cardinality constraint. They provided applications to the feature selection problem for linear regression and the dictionary selection problems. Khanna~et~al.~\cite{khanna2017scalable} showed that faster (such as distributed and stochastic) versions of the greedy algorithm also retain provable theoretical guarantees for monotone weakly submodular maximization under a cardinality constraint. They discussed applications for the sparse linear regression problem and the support selection problem. Elenberg~et~al.~\cite{elenberg2017streaming} considered the above problem in the random order streaming setting, and provided applications to nonlinear sparse regression and interpretability of black-box neural network classifiers. Connections between weak submodularity and restricted strong convexity were shown by Elenberg~et~al.~\cite{elenberg2018restricted}, and used for high-dimensional subset selection problems. The work of Chen~et~al.~\cite{chen2018weakly} goes beyond the cardinality constraint, and considers the monotone maximization problem subject to a matroid constraint. They provided an approximation ratio of ${(1+1/ \gamma)}^{-2}$ for this problem, and discussed applications to video summarization, splice site detection, and black-box interpretation of images. Gatmiry and Gomez~\cite{gatmiry2018non} showed that the standard deterministic greedy algorithm also enjoys provable guarantees for the above problem, though worse than those provided by~\cite{chen2018weakly}. They provided applications to tree-structured Gaussian graphical model estimation.
The recent work of Harshaw~et~al.~\cite{harshaw2019submodular} considers the problem $\max \{f(S)-m(S):|S| \leq k\}$, where $f$ is non-negative monotone $\gamma$-weakly submodular and $m$ is a non-negative modular function. Using the special structure of this type of objective, they circumvented the potential roadblocks of $f-m$ being negative or non-monotone, and provided a bifactor guarantee satisfying $f(S)-m(S) \geq (1-e^{-\gamma}) f(OPT) - m(OPT)$. In addition, they showed that this approximation ratio is tight in the value oracle model. Non-monotone submodular maximization subject to a cardinality constraint has been studied extensively. The first constant factor approximation for this problem was given by Lee~et~al.~\cite{lee2010maximizing}. Since then a long series of works~\cite{buchbinder2014submodular,ene2016constrained,feldman2011unified,gharan2011submodular,gupta2010constrained,vondrak2013symmetry} has improved the approximation factor to the current best $0.385$ ratio due to Buchbinder and Feldman~\cite{buchbinder2019constrained}. Some of the latter works, however, use an approach that relaxes the objective function to a continuous domain and then applies rounding methods to the fractional solution. While this approach has been extremely successful for proving strong theoretical guarantees, its running time usually becomes impractical in real-world scenarios with large amounts of data. In our work we use a randomized greedy algorithm proposed by Buchbinder et al.~\cite{buchbinder2014submodular}, where it is shown that this algorithm produces a $1/e$-approximation (in expectation). On the inapproximability side, Gharan and Vondrak~\cite{gharan2011submodular} showed that it is impossible to achieve a $0.491$-approximation for this problem in the value oracle model.
\subsection{Our contributions}\label{sec:contributions} One main contribution of this work is showing that an easy-to-implement and fast randomized greedy algorithm (i.e., Algorithm~\ref{alg:random-greedy}) has provable theoretical guarantees for the problem $\max \{f(S): |S| \leq k\}$ when the function $f: 2^E \to \mathbb{R}_+$ is non-monotone weakly submodular (as defined in Section~\ref{sec:definitions}). This is encapsulated in the following result. To the best of our knowledge, this is the first time that weakly submodular functions are considered in the non-monotone setting. \begin{theorem}\label{thm:non-monot-global} There exists an efficient randomized greedy algorithm which has an approximation ratio (in expectation) of at least $\gamma {(1-1/(\gamma k))}^{k-1}$ for the problem of maximizing a non-negative non-monotone $\gamma$-weakly submodular function subject to a cardinality constraint of size $k$. This approximation ratio is asymptotically $\gamma \cdot e^{-1/ \gamma}$ as $k \to \infty$. For non-negative non-monotone $\gamma$-pseudo submodular functions, the approximation ratio is at least $\gamma \cdot e^{-\gamma}$. \end{theorem} We remark that when $\gamma$ approaches $1$, our bounds recover the $1/e$ approximation factor given in~\cite{buchbinder2014submodular} for the analysis of the same algorithm over submodular functions (i.e., the case when $\gamma=1$). A key ingredient for analyzing non-monotone objectives is to bound the term $\mathbb{E}[f(S_i \cup \mathrm{OPT})]$ with respect to $f(\mathrm{OPT})$. For submodular functions the work of~\cite{buchbinder2014submodular} (see their Lemma 2.2 and Observation 1) bounds the above term by using the diminishing returns property, i.e., $f_A (e) \geq f_B (e)$ whenever $A \subseteq B$ and $e \notin B$. However, it is not clear how one could imitate such an argument in the case of non-submodular functions.
In particular, it is not obvious whether, from the definition of weak submodularity, one could find a parameter $\beta >0$ satisfying some approximate version $f_A (e) \geq \beta f_B (e)$ of diminishing returns. We circumvent this issue by analyzing the quantity $\mathbb{E}[f(S_i \cup \mathrm{OPT})]$ directly with respect to the execution of the algorithm (see Lemma~\ref{lem:semi1}). Another important piece of our work is to provide a more refined analysis that allows the submodularity ratio to change throughout the execution of the algorithm. This is particularly useful since many classes of functions satisfy this (see for instance Section~\ref{sec:examples}). Our most general result (Theorem~\ref{thm:non-monot-local}) assumes some local bounds for the submodularity ratio throughout the algorithm, and provides approximation guarantees based on these bounds. Its statement is somewhat less clean to express since it depends on the notation used in Algorithm~\ref{alg:random-greedy} (which we introduce later in Section~\ref{sec:randomized-greedy}), so we defer its full presentation and discussion to Section~\ref{sec:non-monotone}. We next present some of its consequences, which lead to some of our main applications. \begin{theorem}\label{thm:local-param} Assume we run the randomized greedy algorithm described in Algorithm~\ref{alg:random-greedy} on a function $f:2^E \to \mathbb{R}_+$ with parameters $\gamma_{A,B} \in [0,1]$ for any pair of disjoint sets $A,B \subseteq E$. Moreover, assume there are values $0 \leq \gamma_i \leq 1$ for $i \in \{0,1,2,\ldots,k-1\}$ so that $ \sum_{e \in \mathrm{OPT}} f_{S_i} (e) \geq \min\{ \gamma_i \cdot f_{S_i} (\mathrm{OPT}), f_{S_i} (\mathrm{OPT})\} $ holds for any possible solution $S_i$ of the algorithm after iteration $i$. Then the algorithm produces (in expectation): \vspace*{0.05cm} \begin{itemize} \item An approximation factor of at least $1- \exp (-\frac{1}{k}\sum_{i=0}^{k-1} \gamma_i)$ if $f$ is monotone.
\vspace*{0.05cm} \item An approximation factor of at least $ \frac{1}{ek} \sum_{i=0}^{k-1} \gamma_i $ if $f$ is non-monotone. \end{itemize} \end{theorem} We remark that for monotone $\gamma$-weakly submodular objectives the above result retains the $(1-e^{-\gamma})$-approximation given in~\cite{das2011submodular}. This follows by noticing that for monotone functions we always have that $\min\{ \gamma_i \cdot f_{S_i} (\mathrm{OPT}), f_{S_i} (\mathrm{OPT})\} = \gamma_i \cdot f_{S_i} (\mathrm{OPT})$ since $f_{S_i} (\mathrm{OPT}) \geq 0$ and $\gamma_i \in [0,1]$. One can then use the $\gamma$-weak submodularity of the function to set $\gamma_i = \gamma$ for all $i$. Combining the above theorem with the results from Section~\ref{sec:examples} leads to interesting applications. We now highlight some of them, and defer a more detailed discussion to Section~\ref{sec:applications}. The above theorem allows us to obtain provable guarantees for some of the non-monotone objectives discussed in Section~\ref{sec:examples}. These include, for instance, the non-monotone functions from Example~\ref{ex:nonmonotone}, which satisfy the property $\gamma_{A,B} \in [0,1]$. Theorem~\ref{thm:local-param} also leads to interesting results for monotone objectives. Applying it to Example~\ref{ex:proportionally-submod} we get a $0.197$-approximation for maximizing monotone proportionally submodular functions subject to a cardinality constraint. This improves over the current best $0.168$-approximation from~\cite{borodin2014weak,borodin2015proportionally}. Another set of applications is obtained via Example~\ref{ex:f(S)g(S)-apps}, which allows us to get several constant factor approximations for the product of set functions. For instance, for the product $f\cdot(g+h)$ where $f,g$ are monotone submodular and $h$ is a metric diversity function, our results lead to a $0.058$-approximation. 
For the product $f \cdot g$ where $f$ is monotone submodular and $g$ is monotone proportionally submodular, we get a $0.046$-approximation. We are not aware of previous work for these problems. \section{Approximation guarantees} In this section we present the main theoretical contribution of this work, which is to analyze the performance of a randomized greedy algorithm on non-monotone functions (see Section~\ref{sec:non-monotone}). We present the analysis for monotone objectives in Section~\ref{sec:monotone}. We next describe the randomized greedy algorithm that we use in this work. \subsection{Randomized greedy algorithm}\label{sec:randomized-greedy} In this section, we explain the randomized greedy algorithm introduced in the work of~\cite{buchbinder2014submodular}, where they study the problem of maximizing a non-monotone submodular function subject to a cardinality constraint. We note that this algorithm has also been used in~\cite{chen2018weakly} for the problem of maximizing a monotone weakly submodular function subject to a matroid constraint. Given a set function $f:2^E \to \mathbb{R}$ over a ground set $E$, we first add a set $D$ of $2k$ dummy elements to the ground set. That is, for any set $A \subseteq E$ and $U \subseteq D$ the function satisfies $f_A(U) = 0$. Then, for each $1 \leq i \leq k$, we take a set of $k$ elements that maximizes the sum of the marginal gains, where in case of ties we always give preference to elements from the original ground set $E$. Finally, we choose uniformly at random one of the $k$ elements, and add it to the current solution. We summarize this procedure in Algorithm~\ref{alg:random-greedy}. \RestyleAlgo{algoruled} \begin{algorithm}[htb] \footnotesize Add a set $D$ of $2k$ dummy elements to $f$.\\ Initialize: $S_0 \leftarrow \emptyset$. \\ \For{$i = 1$ to $k$} { Let $M_i \subseteq (E \cup D) \setminus S_{i-1}$ be a subset of size $k$ maximizing $\sum_{e \in M_i} f_{S_{i-1}}(e)$. 
In case of ties between dummy elements and elements from $E$, always choose the latter.\\ Let $e_i$ be a uniformly random element from $M_i$.\\ $S_i \leftarrow S_{i-1}+e_i$.\\ } \Return $S_k$. \caption{\textsf{RandomizedGreedy}$(f,k)$}\label{alg:random-greedy} \end{algorithm} The algorithm is quite efficient as it makes $O(nk)$ queries to the value oracle. This is the same number of queries that the standard deterministic greedy algorithm makes. Moreover, adding $2k$ dummy elements to the original ground set guarantees the following. \begin{observation}\label{obs:random-greedy} At any iteration $1\leq i \leq k$ of the \textsf{RandomizedGreedy } algorithm the following is satisfied: \begin{enumerate} \item $|M_i|=k$. \item $f_{S_{i-1}} (e_i) \geq 0$, and hence $f(S_i) \geq f(S_{i-1})$. \item $\sum_{e \in M_i} f_{S_{i-1}} (e) \geq \sum_{e \in \mathrm{OPT}} f_{S_{i-1}} (e)$. \end{enumerate} \end{observation} \begin{proof} The first two statements are immediate from the fact that we add $2k$ dummy elements. To see the last statement, let $\bar{M}_i$ denote a set of size $k$ containing $\mathrm{OPT} \setminus S_{i-1}$ and potentially some dummy elements (so that $|\bar{M}_i|=k$). Then, by definition of $M_i$ we have \[ \sum_{e \in M_i} f_{S_{i-1}} (e) \geq \sum_{e \in \bar{M}_i} f_{S_{i-1}} (e) = \sum_{e \in \mathrm{OPT}} f_{S_{i-1}} (e). \qedhere \] \end{proof} \subsection{Analysis for monotone functions}\label{sec:monotone} In this section, we analyze the performance of the \textsf{RandomizedGreedy} algorithm on monotone functions. We note that we keep the term depending on the initial set $S_0$ in the approximation factor. The main reason for this is that while in many settings this will just be the empty set, in some applications one needs to start from a non-empty initial set $S_0$ to have provable guarantees for the parameter $\gamma_i$. (See for instance our applications for the product of set functions discussed in Section~\ref{sec:applications}.) 
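For concreteness, Algorithm~\ref{alg:random-greedy} can be sketched in a few lines of Python. This is our own illustrative sketch (the set-function oracle \texttt{f}, the tuple encoding of dummy elements, and the optional initial set $S_0$ are implementation choices, not part of the paper):

```python
import random

def randomized_greedy(f, ground, k, S0=frozenset(), rng=random):
    # Sketch of RandomizedGreedy: f maps a frozenset of real elements to a
    # float; the 2k dummy elements contribute zero marginal gain.
    dummies = {('dummy', j) for j in range(2 * k)}
    S = set(S0)
    for _ in range(k - len(S0)):
        real = frozenset(e for e in S if e not in dummies)
        def gain(e):
            return 0.0 if e in dummies else f(real | {e}) - f(real)
        candidates = list((set(ground) | dummies) - S)
        # M_i: the k candidates with largest marginal gain, breaking ties
        # in favor of real (non-dummy) elements.
        candidates.sort(key=lambda e: (gain(e), e not in dummies), reverse=True)
        M = candidates[:k]
        S.add(rng.choice(M))  # add a uniformly random element of M_i
    return frozenset(e for e in S if e not in dummies)
```

For monotone objectives (and $|E| \geq k$) every real candidate has non-negative marginal gain, so with the tie-breaking above the returned set consists of $k$ real elements.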
Then we would just run the algorithm for $k - |S_0|$ iterations. \begin{theorem}\label{thm:monotone-local} Let $f:2^E \to \mathbb{R}_+$ be a monotone set function. Assume there are values $0 \leq \gamma_i \leq 1$ for $i \in \{0,1,2,\ldots,k-1\}$ so that \[ \sum_{e \in \mathrm{OPT}} f_{S_{i}} (e) \geq \gamma_i \cdot f_{S_{i}} (\mathrm{OPT}) \] throughout the execution of the \textsf{RandomizedGreedy } algorithm, where $S_i$ denotes the set of chosen elements after the $i$th iteration (i.e., $|S_i|=i$). Then at any iteration $1 \leq i \leq k$ the algorithm satisfies \begin{align*} \mathbb{E}[f(S_i)] & \geq \left(1- \prod_{j=0}^{i-1} {\left(1-\frac{\gamma_j}{k} \right)} \right) \cdot f(\mathrm{OPT}) + \prod_{j=0}^{i-1} {\left(1-\frac{\gamma_j}{k} \right)} \cdot \mathbb{E}[f(S_0)] \\ & \geq \left(1- \exp \Big(-\sum_{j=0}^{i-1} \frac{\gamma_j}{k} \Big) \right) \cdot f(\mathrm{OPT}) + \prod_{j=0}^{i-1} {\left(1-\frac{\gamma_j}{k} \right)} \cdot \mathbb{E}[f(S_0)]. \end{align*} \end{theorem} \begin{proof} Fix $1\leq i \leq k$ and a possible realization $S_1,S_2,\ldots,S_{i-1}$ of the algorithm up to iteration $i-1$. Then (conditioned on this event) we have \begin{align*} \mathbb{E}[f_{S_{i-1}} (e_i)] & = \frac{1}{k} \sum_{e \in M_i} f_{S_{i-1}} (e) \geq \frac{1}{k} \sum_{e \in OPT} f_{S_{i-1}} (e) \geq \frac{\gamma_{i-1}}{k} f_{S_{i-1}} (OPT) \\ & = \frac{\gamma_{i-1}}{k} [f(S_{i-1} \cup OPT) - f(S_{i-1})] \geq \frac{\gamma_{i-1}}{k} [f(OPT) - f(S_{i-1})], \end{align*} where the first inequality follows from Observation~\ref{obs:random-greedy}, the second inequality from the theorem's assumption, and the last inequality from monotonicity of $f$. We then have \[ \mathbb{E} [f(S_i)] - f(S_{i-1}) \geq \frac{\gamma_{i-1}}{k} [f(OPT) - f(S_{i-1})], \] and rearranging the terms we get \[ f(OPT) - \mathbb{E} [f(S_i)] \leq \Big(1-\frac{\gamma_{i-1}}{k} \Big) \Big[f(OPT) - f(S_{i-1}) \Big].
\] By unfixing the realization $S_1,S_2,\ldots,S_{i-1}$ and taking expectations over all such possible realizations of the algorithm we get \begin{align*} f(OPT) - \mathbb{E} [f(S_i)] &\leq \Big(1-\frac{\gamma_{i-1}}{k} \Big) \Big[f(OPT) - \mathbb{E} [f(S_{i-1})] \Big] \\ & \leq \Big(1-\frac{\gamma_{i-1}}{k} \Big)\Big(1-\frac{\gamma_{i-2}}{k} \Big) \Big[f(OPT) - \mathbb{E} [f(S_{i-2})] \Big] \\ & \leq \cdots \\ & \leq \bigg( \prod_{j=0}^{i-1} \left(1 - \frac{\gamma_{j}}{k}\right) \bigg) [f(OPT) - \mathbb{E} [f(S_{0})]]. \end{align*} Hence, \begin{align*} \mathbb{E}[f(S_i)] & \geq \left(1- \prod_{j=0}^{i-1} {\left(1-\frac{\gamma_j}{k} \right)} \right) \cdot f(\mathrm{OPT}) + \prod_{j=0}^{i-1} {\left(1-\frac{\gamma_j}{k} \right)} \cdot \mathbb{E}[f(S_0)] \\ & \geq \left(1- \exp \Big(-\sum_{j=0}^{i-1} \frac{\gamma_j}{k} \Big) \right) \cdot f(\mathrm{OPT}) + \prod_{j=0}^{i-1} {\left(1-\frac{\gamma_j}{k} \right)} \cdot \mathbb{E}[f(S_0)], \end{align*} where the last inequality uses that $1-x \leq e^{-x}$ for all $ x \geq 0$. \end{proof} The above result now proves the first part of Theorem~\ref{thm:local-param}. This follows because by monotonicity of $f$ we have $f_{S_i} (\mathrm{OPT}) \geq 0$, and hence $\min\{ \gamma_i \cdot f_{S_i} (\mathrm{OPT}), f_{S_i} (\mathrm{OPT})\} = \gamma_i \cdot f_{S_i} (\mathrm{OPT})$. \subsection{Analysis for non-monotone functions}\label{sec:non-monotone} In this section we analyze the performance of the \textsf{RandomizedGreedy} algorithm on non-monotone functions. As mentioned in Section~\ref{sec:contributions}, a key ingredient for analyzing the non-monotone case is to bound the term $\mathbb{E}[f(S_i \cup \mathrm{OPT})]$ from below with respect to $f(\mathrm{OPT})$. For monotone objectives this is trivial, since by monotonicity we always have $f(S_i \cup \mathrm{OPT}) \geq f(\mathrm{OPT})$. 
The techniques used in~\cite{buchbinder2014submodular} for analyzing \textsf{RandomizedGreedy } with respect to submodular functions make use of the diminishing returns property (see their Lemma 2.2 and Observation 1). However, it is not clear how to extend those techniques for non-monotone weakly submodular functions, since it is not obvious whether they satisfy some type of approximate diminishing returns property $f_A (e) \geq \beta f_B (e)$. Our next result circumvents this issue by analyzing the quantity $\mathbb{E}[f(S_i \cup \mathrm{OPT})]$ directly with respect to the execution of the algorithm. \begin{lemma}\label{lem:semi1} Let $f$ be a non-negative set function. Assume there are numbers $0 \leq \bar{\alpha}_i \leq \bar{\beta}_i \leq k$ such that \[ \sum_{u \in M_i} f_{S_{i-1} \cup \mathrm{OPT}} (u) \geq \min\{ \bar{\alpha}_i \cdot f_{S_{i-1} \cup \mathrm{OPT}} (M_i), \bar{\beta}_i \cdot f_{S_{i-1} \cup \mathrm{OPT}} (M_i) \} \] is satisfied for any choice of $M_i$ and $S_{i-1}$ throughout the execution of the \textsf{RandomizedGreedy } algorithm. Then at any iteration $1 \leq i \leq k$ the algorithm satisfies $ \mathbb{E}[f(S_i \cup \mathrm{OPT})] \geq \prod_{j=1}^i (1-\bar{\beta}_j / k ) \cdot f(\mathrm{OPT}). $ \end{lemma} \begin{proof} Fix $1\leq i \leq k$ and an event $S_1,S_2,\ldots,S_{i-1}$ of a possible path of the algorithm up to iteration $i-1$. Then (conditioned on this event) we have \begin{align*} & \mathbb{E}[f(S_i \cup \mathrm{OPT})] \\ & = f(S_{i-1} \cup \mathrm{OPT}) + \mathbb{E}[f_{S_{i-1} \cup \mathrm{OPT}} (u_i)] = f(S_{i-1} \cup \mathrm{OPT}) + \frac{1}{k} \sum_{u \in M_i} f_{S_{i-1} \cup \mathrm{OPT}} (u)\\ &\geq f(S_{i-1} \cup \mathrm{OPT}) + \frac{1}{k} \min\{ \bar{\alpha}_i \cdot f_{S_{i-1} \cup \mathrm{OPT}} (M_i), \bar{\beta}_i \cdot f_{S_{i-1} \cup \mathrm{OPT}} (M_i) \}. \end{align*} We now consider separately the cases where the marginal gain $f_{S_{i-1} \cup OPT} (M_i)$ is either negative or non-negative. 
If it is non-negative, using that $0 \leq \bar{\alpha}_i \leq \bar{\beta}_i$ we get \begin{equation*} \mathbb{E}[f(S_i \cup \mathrm{OPT})] \geq f(S_{i-1} \cup \mathrm{OPT}) + \frac{\bar{\alpha}_i}{k} f_{S_{i-1} \cup OPT} (M_i) \geq f(S_{i-1} \cup \mathrm{OPT}). \end{equation*} If on the other hand, the marginal gain is negative, then \begin{align*} \mathbb{E}[f(S_i \cup \mathrm{OPT})] &\geq f(S_{i-1} \cup \mathrm{OPT}) + \frac{\bar{\beta}_i}{k} f_{S_{i-1} \cup \mathrm{OPT}} (M_i)\\ &= f(S_{i-1} \cup \mathrm{OPT}) + \frac{\bar{\beta}_i}{k} [f(S_{i-1} \cup \mathrm{OPT} \cup M_i) - f(S_{i-1} \cup \mathrm{OPT})]\\ &\geq f(S_{i-1} \cup \mathrm{OPT}) - \frac{\bar{\beta}_i}{k} f(S_{i-1} \cup \mathrm{OPT}) = \Big[1- \frac{\bar{\beta}_i}{k} \Big] f(S_{i-1} \cup \mathrm{OPT}), \end{align*} where the last inequality follows from non-negativity. Thus, for each possible fixed realization $S_1,S_2,\ldots,S_{i-1}$ of the algorithm up to iteration $i-1$ we have \begin{equation*} \mathbb{E}[f(S_i \cup \mathrm{OPT})] \geq \Big[1- \frac{\bar{\beta}_i}{k} \Big] f(S_{i-1} \cup \mathrm{OPT}). \end{equation*} By unconditioning on the event $S_1,S_2,\ldots,S_{i-1}$, and taking the expectation over all such possible events we get: \begin{align*} \mathbb{E}[f(S_i \cup \mathrm{OPT})] & \geq \Big[1- \frac{\bar{\beta}_i}{k} \Big] \mathbb{E}[f(S_{i-1} \cup \mathrm{OPT})] \geq \Big[1-\frac{\bar{\beta}_i}{k} \Big] \Big[1- \frac{\bar{\beta}_{i-1}}{k} \Big] \mathbb{E}[f(S_{i-2} \cup \mathrm{OPT})] \\ &\geq \cdots \geq \prod_{j=1}^i \Big[1- \frac{\bar{\beta}_j}{k} \Big] \mathbb{E}[f(S_{0} \cup \mathrm{OPT})] = \prod_{j=1}^i \Big[1- \frac{\bar{\beta}_j}{k} \Big] f(\mathrm{OPT}). \qedhere \end{align*} \end{proof} For submodular functions the above result becomes $\mathbb{E}[f(S_i \cup \mathrm{OPT})] \geq {(1-1/k)}^i \cdot f(\mathrm{OPT})$, since we can take $\bar{\alpha}_i = \bar{\beta}_i =1$ for all $i$. 
We remark that this matches the bound provided in~\cite{buchbinder2014submodular} for submodular functions (see their Observation 1). We now prove our main result. \begin{theorem}\label{thm:non-monot-local} Let $f:2^E \to \mathbb{R}_+$ be a set function. Assume there are values $0 \leq \bar{\alpha}_i \leq \bar{\beta}_i \leq k$ and $0 \leq {\alpha}_i \leq {\beta}_i \leq k$ such that \[ \sum_{u \in M_i} f_{S_{i-1} \cup \mathrm{OPT}} (u) \geq \min\{ \bar{\alpha}_i \cdot f_{S_{i-1} \cup \mathrm{OPT}} (M_i), \bar{\beta}_i \cdot f_{S_{i-1} \cup \mathrm{OPT}} (M_i) \} \] and \[ \sum_{e \in OPT} f_{S_{i-1}} (e) \geq \min\{ \alpha_{i-1} \cdot f_{S_{i-1}} (\mathrm{OPT}), \beta_{i-1} \cdot f_{S_{i-1}} (\mathrm{OPT}) \} \] are satisfied for any choice of $M_i$ and $S_{i-1}$ throughout the execution of the \textsf{RandomizedGreedy } algorithm. Then at any iteration $1 \leq i \leq k$ the algorithm satisfies \[ \mathbb{E}[f(S_i)] \geq \left( \prod_{j=1}^{i-1} \min \Big\{1-\frac{\bar{\beta}_j}{k}, 1-\frac{\alpha_j}{k} \Big\} \right) \cdot \Big( \sum_{j=0}^{i-1} \frac{\alpha_j}{k} \Big) \cdot f(\mathrm{OPT}). \] \end{theorem} \begin{proof} Fix $1\leq i \leq k$ and an event $S_1,S_2,\ldots,S_{i-1}$ of a possible realization of the algorithm up to iteration $i-1$.
Then (conditioned on this event) we have \begin{align*} \mathbb{E}[f_{S_{i-1}} (e_i)] & = \frac{1}{k} \sum_{e \in M_i} f_{S_{i-1}} (e) \geq \frac{1}{k} \sum_{e \in OPT} f_{S_{i-1}} (e) \\ &\geq \frac{1}{k} \min\{ \alpha_{i-1} \cdot f_{S_{i-1}} (\mathrm{OPT}), \beta_{i-1} \cdot f_{S_{i-1}} (\mathrm{OPT}) \}, \end{align*} where the first inequality follows from Observation~\ref{obs:random-greedy}, and the second inequality from the theorem's assumption. We now consider separately the cases where the marginal gain $f_{S_{i-1}} (\mathrm{OPT})$ is either negative or non-negative. If it is non-negative, using that $0 \leq \alpha_{i-1} \leq \beta_{i-1}$ we get \[ \mathbb{E}[f_{S_{i-1}} (e_i)] \geq \frac{\alpha_{i-1}}{k} \cdot f_{S_{i-1}} (\mathrm{OPT}). \] If it is negative we get \[ \mathbb{E}[f_{S_{i-1}} (e_i)] \geq 0 \geq \frac{\alpha_{i-1}}{k} \cdot f_{S_{i-1}} (\mathrm{OPT}), \] where the first inequality follows from Observation~\ref{obs:random-greedy}. It then follows that for any fixed possible realization $S_1,S_2,\ldots,S_{i-1}$ of the algorithm up to iteration $i-1$ we have \begin{equation} \label{eq:greedy-bound} \mathbb{E}[f_{S_{i-1}} (e_i)] \geq \frac{\alpha_{i-1}}{k} \cdot f_{S_{i-1}} (\mathrm{OPT}) = \frac{\alpha_{i-1}}{k} [f(S_{i-1} \cup \mathrm{OPT}) - f(S_{i-1})]. \end{equation} We now unfix the realization $S_1,S_2,\ldots,S_{i-1}$ and take expectations over all such possible realizations of the algorithm.
\begin{align} \label{eq:thm} \mathbb{E}[f(S_i)] &= \mathbb{E}[f(S_{i-1}) + f_{S_{i-1}} (e_i)] = \mathbb{E}[f(S_{i-1})] + \mathbb{E}[f_{S_{i-1}} (e_i)] \nonumber \\ & \geq \mathbb{E}[f(S_{i-1})] + \frac{\alpha_{i-1}}{k} \mathbb{E}[f(S_{i-1} \cup \mathrm{OPT}) - f(S_{i-1})] \nonumber \\ & = \Big[1-\frac{\alpha_{i-1}}{k}\Big] \mathbb{E}[f(S_{i-1})] + \frac{\alpha_{i-1}}{k} \mathbb{E}[f(S_{i-1} \cup \mathrm{OPT})] \nonumber \\ & \geq \Big[1-\frac{\alpha_{i-1}}{k}\Big] \mathbb{E}[f(S_{i-1})] + \frac{\alpha_{i-1}}{k} \prod_{j=1}^{i-1} \Big[1-\frac{\bar{\beta}_j}{k}\Big] f(\mathrm{OPT}), \end{align} where the first inequality follows from Equation~\eqref{eq:greedy-bound} and the last inequality follows from Lemma~\ref{lem:semi1} (which we can use due to the theorem's assumptions). We are now ready to prove the statement of the theorem using induction on the value of $1 \leq i \leq k$. The base case $i=1$ claims that $\mathbb{E}[f(S_1)] \geq (\alpha_0 / k) \cdot f(\mathrm{OPT})$. This follows from Equation \eqref{eq:thm} by setting $i=1$ and using that $f(S_0) = f(\emptyset) \geq 0$. Now let $1<i \leq k$ be arbitrary, and assume that the claim is true for all values $1 \leq i'<i$; we show it is also true for $i$. 
Using Equation~\eqref{eq:thm} and the induction hypothesis we get \begin{align*} & \mathbb{E}[f(S_i)] \geq \Big[1-\frac{\alpha_{i-1}}{k}\Big] \mathbb{E}[f(S_{i-1})] + \frac{\alpha_{i-1}}{k} \prod_{j=1}^{i-1} \Big[1-\frac{\bar{\beta}_j}{k}\Big] f(\mathrm{OPT})\\ &\geq \Bigg[ \Big[1-\frac{\alpha_{i-1}}{k}\Big] \left( \prod_{j=1}^{i-2} \min \Big\{1-\frac{\bar{\beta}_j}{k}, 1-\frac{\alpha_j}{k} \Big\} \right) \cdot \Big( \sum_{j=0}^{i-2} \frac{\alpha_j}{k} \Big) + \frac{\alpha_{i-1}}{k} \prod_{j=1}^{i-1} \Big[1- \frac{\bar{\beta}_j}{k} \Big] \Bigg] f(\mathrm{OPT}) \\ & \geq \left( \prod_{j=1}^{i-1} \min \Big\{1-\frac{\bar{\beta}_j}{k}, 1-\frac{\alpha_j}{k} \Big\} \right) \cdot \Big( \big( \sum_{j=0}^{i-2} \frac{\alpha_j}{k} \big) + \frac{\alpha_{i-1}}{k} \Big) \cdot f(\mathrm{OPT}) \\ &= \left( \prod_{j=1}^{i-1} \min \Big\{1-\frac{\bar{\beta}_j}{k}, 1-\frac{\alpha_j}{k} \Big\} \right) \cdot \Big( \sum_{j=0}^{i-1} \frac{\alpha_j}{k} \Big) \cdot f(\mathrm{OPT}). \qedhere \end{align*} \end{proof} \subsubsection{Proof of Theorems~\ref{thm:non-monot-global} and \ref{thm:local-param}} The above result leads to several interesting consequences by choosing the values of the parameters $\alpha_i, \bar{\alpha}_i, \beta_i , \bar{\beta}_i$ appropriately. For instance, for non-monotone $\gamma$-weakly submodular functions we can take $\alpha_i = \bar{\alpha}_i = \gamma$ and $\beta_i = \bar{\beta}_i = 1/ \gamma$ for all $i$. Hence we immediately get an approximation of $\gamma {(1-1/(\gamma k))}^{k-1}$, which is asymptotically $\gamma e^{-1/ \gamma}$ as $k \to \infty$. In a similar fashion, for $\gamma$-pseudo submodular functions we can take $\alpha_i = \bar{\alpha}_i = \beta_i = \bar{\beta}_i = \gamma$ for all $i$, leading to an approximation factor of $\gamma {(1-\gamma / k)}^{k-1} \geq \gamma e^{-\gamma}$. This now proves Theorem~\ref{thm:non-monot-global}. One can now also prove the second part of Theorem~\ref{thm:local-param} as follows.
First, if the function has a parameter that satisfies $0 \leq \gamma_{A,B} \leq 1$ (such as in Example~\ref{ex:nonmonotone}), we immediately get that $\alpha_i, \bar{\alpha}_i, \beta_i , \bar{\beta}_i \leq \max_{A \cap B = \emptyset} \gamma_{A,B} \leq 1$. Hence $\prod_{j=1}^{k-1} \min \{1-\alpha_j/k, 1-\bar{\beta}_j / k \} \geq {[1-1/k]}^{k-1} \geq 1/e$. In addition, using the assumptions from Theorem~\ref{thm:local-param} one can take $\alpha_i = \gamma_i$ and $\beta_i = 1$ in Theorem~\ref{thm:non-monot-local}, leading to an approximation factor of $(1/ek) \cdot \sum_{j=0}^{k-1} \gamma_j$ as desired. Theorem~\ref{thm:non-monot-local} becomes particularly useful to prove tighter guarantees for some of the examples discussed in Section~\ref{sec:examples}, which have a parameter $\gamma_{A,B}$ that changes throughout the algorithm. We discuss this and applications for monotone objectives in the next section. \section{Applications}\label{sec:applications} We now present some applications of our results. We discuss the monotone case first. For monotone functions it is clear that the \textsf{RandomizedGreedy } algorithm always selects $k$ elements from the original ground set $E$ (i.e., it never chooses dummy elements). In particular, the current solution $S_i$ at iteration $i$ always has $i$ elements from $E$, while $OPT \setminus S_i$ is a set containing at most $k$ elements. We can use this, together with the results from Section~\ref{sec:examples}, to compute a lower bound for a parameter $\gamma_i \geq 0$ that satisfies $ \sum_{e \in \mathrm{OPT}} f_{S_i} (e) \geq \gamma_i \cdot f_{S_i} (\mathrm{OPT}). $ For instance, one can take \vspace*{-0.1cm} \[ \gamma_i = \min_{|A|=i, \ 1\leq |B| \leq k, \ A \cap B = \emptyset} \gamma_{A,B}. \] We then immediately get a provable approximation ratio of at least $1- \exp (-\frac{1}{k}\sum_{i=0}^{k-1} \gamma_i)$ via Theorem~\ref{thm:local-param} (or Theorem~\ref{thm:monotone-local}).
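As a numerical illustration of this recipe (our own hypothetical code, instantiated with the bound for monotone proportionally submodular functions from Example~\ref{ex:proportionally-submod}), the resulting factor can be evaluated directly:

```python
from math import exp

def monotone_factor(gammas):
    # Guarantee from Theorem thm:local-param for monotone objectives:
    # 1 - exp(-(1/k) * sum_i gamma_i).
    k = len(gammas)
    return 1 - exp(-sum(gammas) / k)

# Local bounds gamma_i for monotone proportionally submodular functions
# (Example ex:proportionally-submod), evaluated for a large k.
k = 5000
gammas = [3 * i * (1 + i) / (3 * i**2 + 3 * i * k + k**2 - 1) for i in range(k)]
factor = monotone_factor(gammas)  # approaches roughly 0.197 as k grows
```

The same helper can be reused with any of the local bounds derived in Section~\ref{sec:examples}.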
For monotone proportionally submodular functions, Example~\ref{ex:proportionally-submod} gives a bound of $\gamma_{A,B} \geq \frac{3 a (1 + a)}{3 a^2 + 3 a b + b^2 - 1}$ where $a=|A|$ and $b=|B|$. Hence $\gamma_i \geq \frac{3 i (1 + i)}{3 i^2 + 3 i k + k^2 - 1}$ for $i\in \{0,1,\ldots,k-1\}$. By plugging this into Theorem~\ref{thm:local-param} we get an expression that does not seem to have a closed form, but that numerically converges from above to $0.197$. This improves over the approximation factor of $0.168$ given in~\cite{borodin2015proportionally} for the same problem (they give it as a $5.95$-approximation since they express approximation factors as numbers greater than $1$). \begin{theorem} \label{thm:prop-submod} There is an efficient $0.197$-approximation for the problem of maximizing a non-negative monotone proportionally submodular function subject to a cardinality constraint. \end{theorem} Our next application is for the product of monotone set functions. First, let us consider the case $f \cdot g$ where $f$ is submodular and $g$ is either submodular, metric diversity, or proportionally submodular. Example~\ref{ex:f(S)g(S)-apps} provides explicit bounds for the parameter $\gamma_{A,B}$ of these product functions. We have $ \gamma_{A,B} \geq ( f(A) / f(A \cup B) ) \cdot \gamma^g_{A,B}$ where the latter term denotes the parameter of the function $g$. Hence, we need to lower bound the term $f(A) / f(A \cup B)$. We can do this as follows. One can show that for submodular functions, if there is a set $S_f$ satisfying $f(S_f) \geq \alpha \cdot \max_{|S| \leq k} f(S)$ then $ f(A) / f(A \cup B) \geq \alpha / (1+\alpha)$ for any set $A \supseteq S_f$ and any set $B$ of size at most $k$ (see Claim~\ref{claim:submodular-product} in the Appendix). 
We can then take $S_0 = S_f$ as the initial set and run the \textsf{RandomizedGreedy } algorithm for $k - |S_0|$ iterations (to get a set of size $k$), with a guarantee that the parameter of the product function satisfies $\gamma_{A,B} \geq \frac{\alpha}{1+\alpha} \cdot \gamma^g_{A,B}$. This leads to approximation guarantees of $1- \exp (-\frac{1}{k}\sum_{i=k/2}^{k-1} \frac{\alpha}{1+\alpha} \cdot \gamma^g_i)$, where $\gamma^g_i$ denotes the parameter $\gamma_i$ of the function $g$. For submodular functions, we can run the standard greedy algorithm on $f$ for $k/2$ iterations to find a set $S_f \subseteq E$ of size $k/2$ satisfying $f(S_f) \geq (1-e^{-1/2}) \cdot \max_{|S| \leq k} f(S)$. Combining this with the fact that submodular functions have $\gamma_i = 1$, the sum of submodular and metric diversity has $\gamma_i \geq \frac{i}{i+k-1}$, and proportionally submodular functions have $\gamma_i \geq \frac{3 i (1 + i)}{3 i^2 + 3 i k + k^2 - 1}$ for $i\in \{0,1,\ldots,k-1\}$, one can obtain the following approximation guarantees. \begin{theorem}\label{thm:app-product} Let $f,g$ and $h$ be non-negative and monotone. If $f$ is submodular, then: \begin{itemize} \item There is an approximation (in expectation) of $0.131$ for $f \cdot g$ when $g$ is submodular. \item There is an approximation (in expectation) of $0.058$ for $f \cdot (g + h)$ when $g$ is a metric diversity function and $h$ is submodular. \item There is an approximation (in expectation) of $0.046$ for $f \cdot g$ when $g$ is proportionally submodular. \end{itemize} \end{theorem} We are not aware of previous work for the product of set functions that we can compare our results to. However, when the functions are monotone, a natural baseline can be obtained by taking the set $S := S_f \cup S_g$ where $S_f$ is obtained by running the greedy algorithm for $\max_{|S| \leq k/2} f(S)$, and similarly $S_g$ is obtained by running the greedy algorithm for $\max_{|S| \leq k/2} g(S)$.
Then if $f(S_f) \geq \alpha_f \cdot \max_{|S| \leq k} f(S)$ and $g(S_g) \geq \alpha_g \cdot \max_{|S| \leq k} g(S)$, we get that $(f\cdot g) (S_f \cup S_g) \geq \alpha_f \cdot \alpha_g \cdot (f \cdot g)(\mathrm{OPT})$. In the case of the above functions we get the following guarantees for $\alpha$ after running the greedy algorithm for $k/2$ iterations: for a submodular function we get $\alpha \geq 1-e^{-1/2} $ via the standard greedy algorithm analysis, for the sum of submodular and metric diversity we get $\alpha \geq 1/8$ via the analysis from~\cite{borodin2017max}, and for proportionally submodular we get $\alpha \geq 0.05 $ via the analysis using Example~\ref{ex:proportionally-submod} and Theorem~\ref{thm:local-param} (which improves over the previous analysis given in~\cite{borodin2015proportionally}). This leads to the following baselines (though there is room for optimizing the sizes of $S_f$ and $S_g$): a $0.155$-approximation for the product of two submodular functions, a $0.049$-approximation for the product of a submodular function and the sum of submodular and metric diversity, and a $0.019$-approximation for the product of a submodular function and a proportionally submodular function. We note that our choice of cardinality $k/2$ for the initial set $S_0$ of the algorithm, and for the sets $S_f$ and $S_g$ used in the baselines, may not be optimal. For the sake of consistency and to keep the argument as clean as possible, we used the same cardinality for all of them. By using a similar argument to the one from Theorem~\ref{thm:app-product} one can also get constant factor approximations in the case where $f$ is a metric diversity function. This follows since if $S_f \subseteq E$ satisfies $f(S_f) \geq \alpha \cdot \max_{|S| \leq k} f(S)$, then $f(A) / f(A \cup B) \geq \alpha / (5 + \alpha) $ for any set $A \supseteq S_f$ and any set $B$ of size at most $k$ (see Claim~\ref{claim:diversity-product} in the Appendix).
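The constants above, both the guarantees of Theorem~\ref{thm:app-product} and the baselines, can be reproduced numerically. The sketch below assumes the guarantee takes the form $1-\exp(-\frac{1}{k}\sum_{i=k/2}^{k-1}\frac{\alpha}{1+\alpha}\gamma^g_i)$ with $\alpha=1-e^{-1/2}$, i.e., the initial-set factor enters as $\alpha/(1+\alpha)$; this is our reading of the argument, and it matches the stated constants:

```python
import math

alpha = 1 - math.exp(-0.5)   # greedy guarantee after k/2 iterations
c = alpha / (1 + alpha)      # assumed lower bound on f(A) / f(A u B)

def product_factor(gamma, k):
    """1 - exp(-(1/k) * sum_{i=k/2}^{k-1} c * gamma(i, k))."""
    s = sum(c * gamma(i, k) for i in range(k // 2, k))
    return 1 - math.exp(-s / k)

k = 100000
submod = lambda i, k: 1.0                       # submodular: gamma_i >= 1
metric = lambda i, k: i / (i + k - 1)           # submodular + metric diversity
prop = lambda i, k: 3 * i * (1 + i) / (3 * i * i + 3 * i * k + k * k - 1)

print(product_factor(submod, k))   # ~0.1317, stated as 0.131
print(product_factor(metric, k))   # ~0.0582, stated as 0.058
print(product_factor(prop, k))     # ~0.0465, stated as 0.046

# Baselines: products of two independent greedy guarantees.
print(alpha * alpha, alpha / 8, alpha * 0.05)   # ~0.155, ~0.049, ~0.0197
```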
The fact that this bound is worse than for submodular functions is expected, since $f$ is supermodular and hence $f_A(e) \leq f_B(e)$ whenever $A \subseteq B \subseteq E$ and $e \notin B$. We now discuss the non-monotone case. While for monotone functions the algorithm always chooses $k$ elements from the original ground set $E$ (i.e., it never picks dummy elements), this may not be the case for non-monotone objectives. That is, for non-monotone objectives we have $S_k \subseteq E \cup D$. Hence, we cannot just directly plug the bounds for $\gamma_{A,B}$ from Section~\ref{sec:examples}, since these depend on the number of elements from $E$ that the current solution $S_i$ has. Our next result gives a guarantee with respect to the number of elements from $E$ that the algorithm picks. \begin{proposition} \label{prop:app-nonmonotone} Let $f:2^E \to \mathbb{R}_+$ be a set function with parameters $\gamma_{A,B} \in [0,1]$ satisfying that $\gamma_{A,B} \geq \gamma_{A',B}$ whenever $|A| \geq |A'|$. Then, if the \textsf{RandomizedGreedy } algorithm picks $m$ elements from the original ground set $E$ (i.e., not dummy elements), its output $S_k \subseteq E \cup D$ satisfies \begin{equation*} \mathbb{E}[f(S_k)] \geq \frac{1}{k e} \Big[(k-m) \bar{\gamma}_1 +\sum_{i=0}^{m-1} \bar{\gamma}_{i} \Big] \cdot f(\mathrm{OPT}), \end{equation*} where $\bar{\gamma}_i = \min\{ \gamma_{A,B} :|A|=i, \ 1 \leq |B| \leq k, \ A \cap B = \emptyset \}$. \end{proposition} \begin{proof} First note that since $\gamma_{A,B} \in [0,1]$ we also have that $\bar{\gamma}_i \in [0,1]$. 
We show that after iteration $i$, any realization of the algorithm (conditioned on the event that $m$ elements from $E$ are selected) satisfies $ \sum_{e \in \mathrm{OPT}} f_{S_i} (e) \geq \min\{ \gamma_i \cdot f_{S_i} (\mathrm{OPT}), f_{S_i} (\mathrm{OPT})\} $ where $$ \gamma_i= \begin{cases} \bar{\gamma}_0 & \mbox{if } i=0, \\ \bar{\gamma}_1 & \mbox{if } 1 \leq i \leq k-m, \\ \bar{\gamma}_{i-k+m} & \mbox{if } k-m+1 \leq i \leq k-1. \end{cases} $$ Then the desired result follows from Theorem~\ref{thm:local-param}, since $ \sum_{i=0}^{k-1} \gamma_i = \bar{\gamma}_0 + (k-m) \bar{\gamma}_1 +\sum_{i=1}^{m-1} \bar{\gamma}_{i} . $ First note that when $f_{S_i} (\mathrm{OPT}) < 0$ we have \[ \sum_{e \in \mathrm{OPT}} f_{S_i} (e) \geq \gamma_{ {\scriptscriptstyle S_i \cap E, \mathrm{OPT} \setminus S_i}} \cdot f_{S_i} (\mathrm{OPT}) \geq f_{S_i} (\mathrm{OPT}) = \min\{ \gamma_i \cdot f_{S_i} (\mathrm{OPT}), f_{S_i} (\mathrm{OPT})\}, \] where the first inequality follows from the definition of the parameter $\gamma_{A,B}$, the second inequality follows since $\gamma_{ {\scriptscriptstyle S_i \cap E, \mathrm{OPT} \setminus S_i}} \in [0,1]$ and $f_{S_i} (\mathrm{OPT}) < 0$, and similarly the last equality follows since $\gamma_i \in [0,1]$ and $f_{S_i} (\mathrm{OPT}) < 0$. Hence the inequality $ \sum_{e \in \mathrm{OPT}} f_{S_i} (e) \geq \min\{ \gamma_i \cdot f_{S_i} (\mathrm{OPT}), f_{S_i} (\mathrm{OPT})\} $ is always satisfied when $f_{S_i} (\mathrm{OPT}) < 0$. In the case where $f_{S_i} (\mathrm{OPT}) \geq 0$ we get \begin{equation} \label{eq:prop-nonmonotone} \sum_{e \in \mathrm{OPT}} f_{S_i} (e) \geq \gamma_{ {\scriptscriptstyle S_i \cap E, \mathrm{OPT} \setminus S_i}} \cdot f_{S_i} (\mathrm{OPT}) \geq \bar{\gamma}_{ {\scriptscriptstyle |S_i \cap E|} } \cdot f_{S_i} (\mathrm{OPT}), \end{equation} where the last inequality follows from the definition of $\bar{\gamma}_i$ and the fact that $|OPT \setminus S_i| \leq k$. We lower bound the term $|S_i \cap E|$ as follows. 
We can always assume that in the first iteration the algorithm picks an element from the original ground set. This is because $f(\emptyset)= 0$ and $f(e) \geq 0$ for all $e \in E$ by non-negativity of $f$. Hence there is always a choice of $k$ elements from the original ground set for the candidate set $M_1$. Now, since $\gamma_{A,B} \geq \gamma_{A',B}$ whenever $|A| \geq |A'|$, we have that the values $\bar{\gamma}_i$ are non-decreasing. It then follows that the worst scenario occurs when the algorithm picks the remaining $m-1$ non-dummy elements in the last $m-1$ iterations. In this case, we get that $|S_0 \cap E|=0$, $|S_i \cap E| = 1$ for $1 \leq i \leq k-m$, and $|S_i \cap E| = i-k+m$ for $k-m+1 \leq i \leq k$. Combining this with Equation \eqref{eq:prop-nonmonotone} leads to the desired result. \end{proof} The above result can be used to obtain bounds for some of the examples discussed in Section~\ref{sec:examples} that satisfy $0 \leq \gamma_{A,B} \leq 1$ and have non-decreasing values $\gamma_{A,B}$ as a function of $|A|$, such as those from Example~\ref{ex:nonmonotone}. We discuss this next, where we state the approximation guarantees for the case where the algorithm selects at least $k/2$ elements from $E$. \begin{corollary} Let $f:2^E \to \mathbb{R}_+$ be a (non-monotone) set function, and assume the \textsf{RandomizedGreedy } algorithm picks at least $k/2$ elements from $E$. Then its output $S_k \subseteq E \cup D$ satisfies the following guarantees: \vspace{0.1cm} \begin{enumerate}[(a)] \item If $f=g+h$ where $g$ is monotone submodular and $h$ is non-monotone proportionally submodular, then $\mathbb{E}[f(S_k)] \geq \frac{0.05}{e} \cdot f(\mathrm{OPT})$. \vspace{0.1cm} \item If $f(S):=g(S) + |S| \cdot h(S)$ where $g$ is monotone submodular and $h$ is non-monotone submodular, then $\mathbb{E}[f(S_k)] \geq \frac{0.09}{e} \cdot f(\mathrm{OPT})$.
\end{enumerate} \end{corollary} \begin{proof} From Example~\ref{ex:nonmonotone} we know that the function $f$ from part (b) satisfies $\bar{\gamma}_i \geq (i+1) / (i+k)$, while the function from part (a) satisfies $\bar{\gamma}_i \geq (3i(i+1)) / (3i^2+3ik+k^2-1)$. Plugging these values into Proposition~\ref{prop:app-nonmonotone} leads to the desired bounds. \end{proof} \section{Conclusion} In this paper we introduced a natural generalization of weak submodularity for non-monotone functions. We showed that a randomized greedy algorithm has provable approximation guarantees for maximizing these functions subject to a cardinality constraint. We also provided a fine-grained analysis that allows the submodularity ratio to change throughout the algorithm. We discussed applications of our results for monotone and non-monotone functions. It is open whether the $(\gamma \cdot e^{-1/\gamma})$-approximation is asymptotically tight for the maximization problem subject to a cardinality constraint. Another natural direction for future work is to consider the non-monotone maximization problem under more general constraints, such as matroids or knapsacks.
\section{Introduction} \label{sec:introduction} Galaxy peculiar velocities contribute to a galaxy's observed redshift via the Doppler effect. This leads to characteristic anisotropies in the observed galaxy clustering pattern, known as redshift-space distortions (RSDs) \citep[][]{1972MNRAS.156P...1J,1977ApJ...212L...3S,1980lssu.book.....P,1987MNRAS.227....1K,1994MNRAS.267.1020P,1996MNRAS.282..877B}. By measuring the velocity-induced statistical effect on the galaxy power spectrum, recent galaxy surveys have determined the structure growth rate accurately, providing constraints on both dark energy properties and modifications to gravity theory \citep[e.g. SDSS BOSS/eBOSS][]{2017MNRAS.470.2617A,2021PhRvD.103h3533A}. The forthcoming generation of wide-field galaxy redshift surveys generally probes larger volumes and higher galaxy densities, thus allowing for higher signal-to-noise ratio measurements and better insights into our Universe, e.g. DESI \citep[][]{2016arXiv161100036D}, PFS \citep[][]{2014PASJ...66R...1T}, Euclid \citep[][]{2018LRR....21....2A,2020A&A...642A.191E}, SPHEREx \citep[][]{2014arXiv1412.4872D}, LSST \citep[][]{2009arXiv0912.0201L}, MegaMapper \citep[][]{2019BAAS...51g.229S,2019BAAS...51c..72F}. An accurate theoretical description for galaxy clustering in redshift space is key to the success of the future spectroscopic surveys \citep[see e.g.][]{2020PhRvD.102l3541N}. However, the modeling of the galaxy power spectrum is limited to $k\sim0.14\ h\mathrm{Mpc}^{-1}$, mainly due to the finger of God effect on small scales \citep[see e.g.][]{2021arXiv211000006I,2021arXiv211000016D}. Simulation-based methods can in principle capture the non-linear RSD effects, but recently \citet{2021arXiv211006969K} found little improvement in cosmological parameters beyond $k\sim0.2\ h\mathrm{Mpc}^{-1}$, due to the degeneracy between cosmological parameters and nuisance parameters in the analysis.
Stage-III spectroscopic surveys can already map the galaxy distribution to $k\sim0.2\ h\mathrm{Mpc}^{-1}$, and this reach will be extended substantially, to $k\sim1\ h\mathrm{Mpc}^{-1}$, by future spectroscopic surveys. Thus, it is necessary to develop new methods to exploit small-scale information in redshift space. The gravitational coupling between density perturbations leads to striking non-Gaussian features in the large-scale structure. The small-scale filamentary structures arise from gravitational tidal interactions. The gravitational nonlinearity has traditionally led to a reduction in cosmological information \citep[e.g.][]{1999MNRAS.308.1179M,2005MNRAS.360L..82R}. It has been realized that such tidal non-Gaussianity can be exploited to improve the measurement of large-scale structures \citep[][]{2012arXiv1202.5804P}. The local anisotropic distortions can be used to reconstruct the large-scale tidal shear and gravitational potential \citep[][]{2012arXiv1202.5804P,2016PhRvD..93j3504Z,2022ApJ...929....5Z}. \citet{2012arXiv1202.5804P} presented the first tidal reconstruction method, which uses two transverse shear fields in analogy with weak lensing \citep[][]{1993ApJ...404..441K}. This method has been further explored by \citet{2016PhRvD..93j3504Z} and found to be noisier for radial modes along the line of sight, i.e., to have an anisotropic reconstruction noise. This is because these modes are inferred indirectly from the variations of the two transverse shear fields along the line of sight. A new algorithm that exploits all five shear terms in three-dimensional space has been proposed recently; it reconstructs the radial modes directly from the three additional shear fields \citep{2022ApJ...929....5Z}. The new method has a lower and isotropic reconstruction noise, compared to the previous method using two shear fields.
Similar algorithms have also been investigated by other groups, following the nonlinear coupling in standard perturbation theory \citep[see][for more details]{2018JCAP...07..046F,2020PhRvD.101h3510L,2020arXiv200700226L,2021PhRvD.104l3520D}. The reconstructed field from the tidal effects provides an independent tracer of the large-scale structure. By comparing this to the redshift space galaxy field, one can measure the velocity growth factor on large scales without cosmic variance, analogous to \citet{2009JCAP...10..007M}. This enables precision measurements of local-type primordial non-Gaussianity using an effective multi-tracer approach \citep[see][for more discussions]{2021PhRvD.104l3520D}, where the improvements arise from the cosmic variance cancellation \citep[][]{2009MT}. In 21~cm cosmology, the radial modes with small wave numbers are lost due to the Galactic foreground contamination. However, these modes can be reconstructed with tidal reconstruction \citep[][]{2018PhRvD..98d3511Z,2018JCAP...07..046F,2019PhRvD.100b3517L,2019MNRAS.486.3864K}. This is essential for cross-correlations with the CMB and other probes, such as weak lensing, the kinematic Sunyaev-Zel'dovich effect, and photometric galaxies \citep[see e.g.][]{2018PhRvD..98d3511Z,2019PhRvD.100b3517L,2021arXiv211205034G}. The recovery of 21~cm radial modes opens up a new set of possibilities and has profound implications for 21~cm cosmology. This problem has also been explored by \citet{2019JCAP...11..023M,2021JCAP...10..056M} and \citet{2021MNRAS.504.4716G} using a forward modeling approach and a machine learning-based method, which could be closer to optimal, at a higher computational cost. These applications rely on a successful implementation of tidal reconstruction in redshift space, while previous studies have focused on tidal reconstruction in real space. Measurements of density fields from galaxies are exclusively made in redshift space.
In principle, the RSD effect can be included in the nonlinear coupling in standard perturbation theory \citep[e.g. following][]{2018JCAP...07..046F,2020PhRvD.101h3510L,2021PhRvD.104l3520D,2020arXiv200700226L}, which should be valid in the mildly nonlinear regime. However, galaxies are subject to nonlinear dynamics. While the leading order effect is well described by perturbation theory on large scales, the nonlinear effects on small scales, i.e. fingers of God, are difficult to model, limiting an analytical study of redshift space tidal reconstruction. In this work, we present a detailed study of tidal reconstruction in redshift space. We apply the tidal reconstruction methods to mock galaxies from $N$-body simulations. We find that the reconstruction results are anisotropic due to the RSD effect. While the radial modes are noisier due to the nonlinear velocity dispersion, the transverse modes can be reconstructed with high fidelity, well correlated with the large-scale matter density field. The large-scale bias of the reconstructed field can be described by a simple two-parameter model with an angular dependence distinct from the linear RSD effect, and the noise power spectrum is nearly isotropic and scale-independent on large scales, so both can be straightforwardly fitted in cosmological parameter inference. This makes tidal reconstruction a promising method for the multi-tracer analysis and 21~cm intensity mapping surveys. This paper is organized as follows. In Section~\ref{sec:tidalreconstruction}, we introduce the tidal reconstruction methods. In Section~\ref{sec:method}, we describe the numerical simulations and the numerical implementation of tidal reconstruction. In Section~\ref{sec:result}, we present the numerical results of reconstruction. We discuss and conclude in Section~\ref{sec:discussion}.
\section{METHODOLOGY} \label{sec:tidalreconstruction} The gravitational coupling between large- and small-scale perturbations leads to anisotropic distortions in the locally measured correlation function \citep[][]{2012arXiv1202.5804P,2010PhRvL.105p1302M,2012PhRvL.108y1301J}. Such local anisotropic tidal distortions can be used to reconstruct the large-scale matter distribution \citep[][]{2012arXiv1202.5804P,2016PhRvD..93j3504Z,2022ApJ...929....5Z}. In this section, we present the tidal reconstruction algorithm and discuss its redshift space application. We consider the gravitational interaction between a long wavelength perturbation and small-scale density fluctuations in the squeezed limit, i.e., the wavelength of the small-scale density fluctuations is much smaller than that of the large-scale density field. The leading order observable is then described by the large-scale tidal field, \begin{equation} t_{ij}=\Phi_{L,ij}, \end{equation} where $\Phi_L$ is the large-scale gravitational potential. The $3\times3$ symmetric tensor field $t_{ij}$ can be decomposed as \begin{equation} \label{eq:tij} t_{ij} = \left( \begin{array}{ccc} \epsilon_0 + \epsilon_1 - \epsilon_z & \epsilon_2 & \epsilon_x \\ \epsilon_2 & \epsilon_0 -\epsilon_1 - \epsilon_z & \epsilon_y \\ \epsilon_x & \epsilon_y & \epsilon_0 + 2\epsilon_z \end{array} \right), \end{equation} where $\epsilon_{0}=(\Phi_{L,11}+\Phi_{L,22}+\Phi_{L,33})/3$, $\epsilon_1 = (\Phi_{L, 11} - \Phi_{L, 22} )/2, \epsilon_2 = \Phi_{L, 12}, \epsilon_x = \Phi_{L, 13}, \epsilon_y = \Phi_{L, 23}$ and $\epsilon_z = (2\Phi_{L, 33} - \Phi_{L, 11} - \Phi_{L,22})/6$. The trace part of the tidal field corresponds to the local mean density, while other components describe the tidal shear terms. The gravitational shear forces lead to anisotropic distortions in the locally measured power spectrum \citep[see e.g.][for more details]{2014PhRvD..89h3507S}.
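As a quick consistency check, the decomposition in Equation~(\ref{eq:tij}) is exact: reassembling the trace and the five shear components recovers the original symmetric tensor. A minimal numerical sketch (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.standard_normal((3, 3))
P = (P + P.T) / 2  # a random symmetric "Hessian" standing in for Phi_{L,ij}

eps0 = (P[0, 0] + P[1, 1] + P[2, 2]) / 3
eps1 = (P[0, 0] - P[1, 1]) / 2
eps2 = P[0, 1]
epsx = P[0, 2]
epsy = P[1, 2]
epsz = (2 * P[2, 2] - P[0, 0] - P[1, 1]) / 6

t = np.array([
    [eps0 + eps1 - epsz, eps2, epsx],
    [eps2, eps0 - eps1 - epsz, epsy],
    [epsx, epsy, eps0 + 2 * epsz],
])
assert np.allclose(t, P)  # the decomposition reassembles Phi exactly
```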
Since the large-scale tidal field is coherent on small scales, the tidal coupling results in a systematic change of the small-scale power. When enough small-scale modes are measured, the tidal shear terms can be reconstructed with high fidelity \citep[][]{2012arXiv1202.5804P}. The large-scale tidal shear fields can be estimated with the quadratic estimators, which are outer products of the filtered density fields \citep{2008MNRAS.388.1819L, 2010PhRvD..81l3015L, 2012PhRvD..85d3016B}, \begin{eqnarray} \label{eq:shearestimation} \hat{\epsilon}_1(\bm{x}) & = & [\delta^{w_1}(\bm{x})\delta^{w_1}(\bm{x}) - \delta^{w_2}(\bm{x})\delta^{w_2}(\bm{x})]/2, \nonumber \\ \hat{\epsilon}_2(\bm{x}) & = & \delta^{w_1}(\bm{x})\delta^{w_2}(\bm{x}), \nonumber \\ \hat{\epsilon}_x(\bm{x}) & = & \delta^{w_1}(\bm{x})\delta^{w_3}(\bm{x}), \nonumber \\ \hat{\epsilon}_y(\bm{x}) & = & \delta^{w_2}(\bm{x})\delta^{w_3}(\bm{x}), \nonumber \\ \hat{\epsilon}_z(\bm{x}) & = & [2\delta^{w_3}(\bm{x})\delta^{w_3}(\bm{x}) - \delta^{w_1}(\bm{x})\delta^{w_1}(\bm{x}) \nonumber \\ &&- \delta^{w_2}(\bm{x})\delta^{w_2}(\bm{x})]/6, \end{eqnarray} where \begin{equation} \label{eq:filterestimation} \delta^{w_j}(\bm{k})=ik_j W_R(k)\delta(\bm{k}), \end{equation} is the filtered gradient density field and $W_R(k) = \exp(-k^2R^2/2)$ is the Gaussian window with smoothing scale $R$ \citep[][]{2022ApJ...929....5Z}. In principle, the filter here should be anisotropic in redshift space. The mapping from real to redshift space brings anisotropies in the observed galaxy distribution along the line of sight, including the Kaiser effect \citep[][]{1987MNRAS.227....1K} and fingers of God \citep[][]{1972MNRAS.156P...1J}. The radial modes at small scales are usually noisier due to the fingers of God damping \citep[][]{2021JCAP...05..059S} and anisotropic smoothing can account for this and improve the performance \citep[][]{2016MNRAS.457.2068C,2018MNRAS.478.1866H}.
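The estimation steps of Equations~(\ref{eq:filterestimation}) and (\ref{eq:shearestimation}) can be sketched with FFTs on a periodic grid. The following is an illustrative numpy implementation (not the code used for the paper; function and argument names are ours):

```python
import numpy as np

def shear_estimators(delta, L, R):
    """Quadratic shear estimators on a periodic n^3 grid, following
    Eqs. (filterestimation) and (shearestimation):
    delta^{w_j}(k) = i k_j W_R(k) delta(k), with W_R a Gaussian window.
    delta : real-space overdensity, shape (n, n, n); L : box size;
    R : smoothing scale (same length units as L)."""
    n = delta.shape[0]
    k1 = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    dk = np.fft.fftn(delta) * np.exp(-k2 * R**2 / 2)
    # filtered gradient fields delta^{w_j}(x), j = 1, 2, 3
    dw = [np.fft.ifftn(1j * ki * dk).real for ki in (kx, ky, kz)]
    e1 = (dw[0] ** 2 - dw[1] ** 2) / 2
    e2 = dw[0] * dw[1]
    ex = dw[0] * dw[2]
    ey = dw[1] * dw[2]
    ez = (2 * dw[2] ** 2 - dw[0] ** 2 - dw[1] ** 2) / 6
    return e1, e2, ex, ey, ez
```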
However, it is difficult to quantify the impact of RSD on reconstruction when using an anisotropic smoothing window as we are observing the combined effects of RSD and anisotropic filtering. The real to redshift space mapping also causes additional coupling of small-scale densities to the large-scale density field along the line of sight, and this can be computed using perturbation theory \citep[][]{2017PhRvD..95h3522A,2017JCAP...06..053B,2018JCAP...02..022L,2018PhRvD..97f3527A,2018JCAP...07..049C,2019PhRvD.100j3515A}. The estimators can be constructed using the standard perturbation theory in redshift space to account for the anisotropic coupling due to the RSD effects, which potentially enables an unbiased estimate of the real space large-scale matter field \citep[e.g. following methods in][]{2018JCAP...07..046F,2020PhRvD.101h3510L,2020arXiv200700226L,2021PhRvD.104l3520D}. However, this limits the number of small-scale modes that can be included in reconstruction as perturbation theory is only valid in the mildly nonlinear regime, which degrades the reconstruction significantly \citep[see][for more discussions]{2022ApJ...929....5Z}. With the estimated tidal shear fields, we construct estimators for the large-scale density field. In general, any combination of shear fields can provide an estimate of the large-scale density field \citep[][]{2022ApJ...929....5Z}. Here, we consider two tidal reconstruction algorithms. One uses two transverse shear fields, $\epsilon_1$ and $\epsilon_2$, which are less affected by errors in redshift estimation \citep{2012arXiv1202.5804P, 2016PhRvD..93j3504Z,2019MNRAS.486.3864K}. Another algorithm exploits all five shear terms and thus has a lower reconstruction noise \citep{2022ApJ...929....5Z}. The details of the two methods are outlined below.
\begin{itemize} \item {\it Transverse shear reconstruction}: In \citet{2012arXiv1202.5804P} and \citet{2016PhRvD..93j3504Z}, we use two purely transverse shear fields $\epsilon_1$ and $\epsilon_2$ in analogy with the weak-lensing mass reconstruction \citep[][]{1993ApJ...404..441K}. The large-scale density field is given by \begin{equation} \label{eq:2shear} \epsilon_0(\bm{k}) = \frac{2k^2}{3(k_1^2 + k_2^2)^2} \left[ (k_1^2 - k_2^2)\epsilon_1(\bm{k}) + 2k_1 k_2 \epsilon_2(\bm{k}) \right], \end{equation} where $\epsilon_0=\nabla^2\Phi_L/3$, which differs from the large-scale density $\delta_L$ by a constant proportionality factor. This original proposal can avoid the impact of RSD on reconstruction since the transverse tidal shears in the tangential plane are less sensitive to the RSD effect along the line of sight. The RSD effect should be a second order effect for reconstruction. In this paper, we explore the redshift space performance of this method in detail. \item {\it Full shear reconstruction}: This method was proposed by \citet{2022ApJ...929....5Z}, where we exploit all five shear terms in reconstruction. The reconstructed field is given by \begin{eqnarray} \label{eq:5shear} \epsilon_0(\bm{k}) = \frac{1}{2k^2} & \left[ (k_1^2 - k_2^2)\epsilon_1(\bm{k}) + 2k_1k_2\epsilon_2(\bm{k}) + 2k_1k_3\epsilon_x(\bm{k}) \right .\\ & \left . + 2k_2k_3\epsilon_y(\bm{k}) + (2k_3^2 - k_1^2 - k_2^2)\epsilon_z(\bm{k}) \right]. \nonumber \end{eqnarray} The full shear reconstruction uses full shear information in the three-dimensional space and thus has a lower and isotropic reconstruction noise in real space \citep[][]{2022ApJ...929....5Z}. However, the shear fields $\epsilon_x$, $\epsilon_y$ and $\epsilon_z$ directly probe the inhomogeneous matter distribution in the line of sight direction, which is affected by the mapping from real to redshift space.
Therefore, in redshift space the reconstruction is expected to be much more anisotropic than for the transverse shear method. We explore the detailed performance in redshift space with simulated mock galaxy fields below. \end{itemize} In general, the reconstructed density field can be written as \begin{equation} \label{eq:model2} \delta_r(\bm{k}) = T(\bm{k}) \delta(\bm{k})+ N(\bm{k}), \end{equation} where $\delta_r(\bm{k})$ denotes the reconstructed field from the transverse or full shear algorithm, $T(\bm{k})$ is the propagator that quantifies the bias relative to the original real space dark matter density field $\delta(\bm{k})$ and $N(\bm{k})$ is the reconstruction noise. For full shear reconstruction in real space, the reconstruction bias and noise power only depend on the magnitude of the wave vector \citep[][]{2022ApJ...929....5Z}. However, for tidal reconstruction in redshift space, we expect that both the propagator $T(\bm{k})$ and the noise $N(\bm{k})$ will depend on the magnitude of the wave vector as well as the angle between the wave vector and the radial direction. We explore the properties of the propagator and noise power using high precision simulations in the following sections. \section{Numerical setup} \label{sec:method} In this section, we describe the simulations and the numerical implementation of tidal reconstruction. \subsection{Simulations} \label{sec:datasample} To investigate the performance of tidal reconstruction in redshift space, we utilize a set of six independent $N$-body simulations, run with {\tt MP-Gadget} \citep{feng2018}, evolving $1536^3$ dark matter particles in a periodic box with side length $L=1500\ h^{-1}\mathrm{Mpc}$ to redshift $z=0.6$. The cosmological parameters are $\Omega_{m} = 0.3075$, $\Omega_bh^2=0.0223$, $\Omega_ch^2=0.1188$, $h = 0.6774$, $\sigma_8=0.8159$, and $n_s=0.9667$. These are the same simulations used in \citet{2021JCAP...05..059S} and \citet{2021JCAP...03..020S}.
The {\tt Rockstar} \citep{2013ApJ...762..109B} phase space halo finder is used to identify halos and subhalos from snapshots of dark matter particles at redshift $z=0.6$. We generate the simulated galaxy samples by imposing a soft mass cut on the virial mass to select massive halos and subhalos to represent galaxies, following the procedure of \citet{2020PhRvD.102l3541N}. There are two parameters, $\log_{10}M_{\mathrm{min}}$ and $\sigma_{\log_{10}M}$, which determine the typical minimum mass and the profile of the soft mass cutoff \citep{2020PhRvD.102l3541N}. By choosing $\log_{10}(M_{\mathrm{min}}/h^{-1}M_{\odot})=11.5$ and $12.97$, we obtain two galaxy samples with number densities $\bar{n}=3.6\times10^{-3}\ h^3\mathrm{Mpc}^{-3}$ and $4.25\times10^{-4}\ h^3\mathrm{Mpc}^{-3}$, respectively. The value of $\sigma_{\log_{10}M}$ is 0.35 for both samples. See \citet{2020PhRvD.102l3541N} and \citet{2021JCAP...05..059S} for more details. The higher mass sample approximately reproduces the observed properties of BOSS CMASS galaxies \citep[][]{2020PhRvD.102l3541N}, while the lower mass sample, with a much higher number density, mimics the galaxies that will be observed by DESI \citep{2021JCAP...05..059S}. We use these two catalogs to explore the effects of number density on tidal reconstruction. We implement the RSD by moving galaxies along the line of sight according to the center-of-mass velocities given by {\tt Rockstar}. The redshift space position $\bm{s}$ of a galaxy at true comoving position $\bm{x}$ is given by \begin{equation} \label{eq:rsd} \bm{s} = \bm{x} + \frac{\hat{\bm{z}}\cdot\bm{v}(\bm{x}) }{aH}\hat{\bm{z}}, \end{equation} where $\bm{v}$ is the peculiar velocity, $a$ is the scale factor and $H$ the Hubble parameter. The logarithmic structure growth rate at $z=0.6$ is $f=0.786$. In this work, we adopt the plane-parallel or distant-observer approximation, in which the $\hat{\bm{z}}$ direction is taken to be the line of sight.
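Under the plane-parallel approximation, Equation~(\ref{eq:rsd}) amounts to a one-line shift of the $z$ coordinate. A minimal sketch (function and argument names are ours; positions are wrapped periodically, consistent with the periodic box):

```python
import numpy as np

def to_redshift_space(x, v, a, H, L):
    """Plane-parallel RSD mapping of Eq. (rsd): shift positions along the
    z axis by v_z / (aH), wrapping periodically in a box of side L.
    x : (N, 3) comoving positions; v : (N, 3) peculiar velocities."""
    s = x.copy()
    s[:, 2] = (s[:, 2] + v[:, 2] / (a * H)) % L
    return s
```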
We use the standard Cloud-in-Cell (CIC) interpolation scheme to paint galaxies and particles to a $1536^3$ regular grid. The resulting density fields are deconvolved with the CIC window \citep{2005ApJ...620..559J}. We also use interlacing to reduce the effect of aliasing caused by the finite sampling \citep{2016MNRAS.460.3624S}. The redshift space analysis requires a separate treatment of line-of-sight and transverse components of $\bm{k}$. We compute the two-dimensional power spectrum of density fields, $P(k_\perp,k_\parallel)$, where $k_\perp$ and $k_\parallel$ are the transverse and line-of-sight components of $\bm{k}$. For better quantitative assessments of the reconstruction performance, the power spectra of density fields are also computed in discrete $k$ and $\mu$ bins, where $k$ is the magnitude of the wave vector and $\mu=k_{\parallel}/k$ is the cosine of the angle between the line-of-sight and the wave vector. We use five uniform $\mu$ bins, $\mu=0-0.2,0.2-0.4$, etc. The width of $k$ bins is $\Delta k=3k_f$ for both $P(k_\perp,k_\parallel)$ and $P(k,\mu)$, where $k_f$ is the fundamental frequency $k_f = 2\pi/L$. In the following discussions, we use $P_{AB}(\bm{k})\equiv\langle A(\bm{k})B^*(\bm{k})\rangle$ to denote the power spectrum of fields $A(\bm{k})$ and $B(\bm{k})$. Note that here we have dropped the Dirac delta function. \subsection{Reconstruction} \label{sec:reconstruction} The tidal reconstruction works as follows. We first smooth the redshift space galaxy field with the Gaussian window and compute the filtered field using Equation~(\ref{eq:filterestimation}). The optimal smoothing scales are different for the two galaxy samples. We test a few different scales; the optimal scales, which maximize the correlation, are $1.25\ h^{-1}\mathrm{Mpc}$ and $1.5\ h^{-1}\mathrm{Mpc}$ for the galaxy samples with $\bar{n}=3.6\times10^{-3}\ h^3\mathrm{Mpc}^{-3}$ and $4.25\times10^{-4}\ h^3\mathrm{Mpc}^{-3}$, respectively.
Then the shear fields are computed following Equation~(\ref{eq:shearestimation}). Finally, the reconstructed large-scale density is given by Equation~(\ref{eq:2shear}) and Equation~(\ref{eq:5shear}) for transverse and full shear reconstruction, respectively. To better analyse the reconstruction results, we have written the reconstructed field as \begin{equation} \delta_r(\bm{k}) = T(\bm{k})\delta(\bm{k}) + N(\bm{k}), \end{equation} where $T(\bm{k})=P_{\delta_r\delta}(\bm{k})/P_{\delta\delta}(\bm{k})$ is the propagator, $\delta(\bm{k})$ is the dark matter density field in real space, and $N(\bm{k})$ is the reconstruction noise. Note that the power spectrum of the dark matter density field $P_{\delta\delta}$ is isotropic in real space due to statistical isotropy. Therefore, the propagator $T(k_\perp,k_\parallel)$ or $T(k,\mu)$ fully quantifies the angular dependence of the reconstructed field due to the RSD effect. The power spectrum of the reconstruction error, or noise, which describes the stochasticity for tidal reconstruction, is given by \begin{equation} \label{eq:noisepower} P_{\mathrm{err}}(\bm{k}) \equiv \langle|\delta_r(\bm{k})-T(\bm{k})\delta(\bm{k})|^2\rangle = \left( P_{\delta_r\delta_r}(\bm{k}) - \frac{P_{\delta_r\delta}(\bm{k})^2}{P_{\delta\delta}(\bm{k})} \right), \end{equation} where we have used $T(\bm{k})=P_{\delta_r\delta}(\bm{k})/P_{\delta\delta}(\bm{k})$. It is natural to expect that the noise power spectrum would be anisotropic for tidal reconstruction in redshift space.
We also compute the cross correlation coefficient between the reconstructed field and the real space dark matter density field, \begin{equation} \label{eq:crosscorrelation} r_{cc}(\bm{k}) = \frac{P_{\delta_r\delta}(\bm{k})}{\sqrt{P_{\delta\delta}(\bm{k})P_{\delta_r\delta_r}(\bm{k})}}, \end{equation} where $P_{\delta_r\delta}(\bm{k})$ is the cross power spectrum, and $P_{\delta\delta}(\bm{k})$ and $P_{\delta_r\delta_r}(\bm{k})$ are the power spectra of the real space dark matter density $\delta$ and reconstructed field $\delta_r$. A higher correlation between the two fields indicates a better reconstruction. For a perfect reconstruction we have $r_{cc}(\bm{k})=1$ and $P_{\mathrm{err}}(\bm{k}) = 0$. From Equation~(\ref{eq:noisepower}) and Equation~(\ref{eq:crosscorrelation}), it can be derived that the noise power spectrum divided by the total power spectrum of the reconstructed field is related to the cross correlation coefficient as \begin{equation} \label{eq:1-r2} P_{\mathrm{err}}(\bm{k})/P_{\delta_r\delta_r}(\bm{k})=1-r^2_{cc}(\bm{k}). \end{equation} To optimally filter the reconstructed fields, we can compute the transfer function by minimizing the difference between the reconstructed and real space dark matter density fields, \begin{equation} \langle|t(\bm{k})\delta_r(\bm{k})-\delta(\bm{k})|^2\rangle, \end{equation} and we have \begin{equation} \label{eq:tf} t(\bm{k})=\frac{P_{\delta_r\delta}(\bm{k})}{P_{\delta_r\delta_r}(\bm{k})}. \end{equation} Note that for tidal reconstruction in redshift space, the transfer function depends on the cosine $\mu$, as the reconstruction noise is highly anisotropic. The power spectra are measured in $(k_\perp,k_\parallel)$ or $(k,\mu)$ bins for each simulation first and then averaged over the six independent realizations to suppress cosmic variance before computing $T(\bm{k})$, $r_{cc}(\bm{k})$, and $t(\bm{k})$.
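These binned statistics reduce to ratios of auto- and cross-power spectra measured in $(k,\mu)$ bins. An illustrative numpy sketch (our own minimal implementation, with uniform $\mu$ bins; not the analysis code used in the paper):

```python
import numpy as np

def anisotropic_stats(delta_r_k, delta_k, kx, ky, kz, kedges, nmu=5):
    """Propagator T, noise power P_err, correlation r_cc, and transfer
    function t in (k, mu) bins, from two Fourier-space fields defined on
    the same grid as the wave-vector components kx, ky, kz."""
    k = np.sqrt(kx**2 + ky**2 + kz**2).ravel()
    mu = np.divide(np.abs(kz).ravel(), k, out=np.zeros_like(k), where=k > 0)
    ik = np.digitize(k, kedges) - 1
    imu = np.minimum((mu * nmu).astype(int), nmu - 1)
    nk = len(kedges) - 1
    good = (ik >= 0) & (ik < nk)
    P_rr = np.zeros((nk, nmu)); P_rd = np.zeros((nk, nmu))
    P_dd = np.zeros((nk, nmu)); cnt = np.zeros((nk, nmu))
    r, d = delta_r_k.ravel(), delta_k.ravel()
    np.add.at(P_rr, (ik[good], imu[good]), np.abs(r[good])**2)
    np.add.at(P_rd, (ik[good], imu[good]), (r[good] * np.conj(d[good])).real)
    np.add.at(P_dd, (ik[good], imu[good]), np.abs(d[good])**2)
    np.add.at(cnt, (ik[good], imu[good]), 1.0)
    c = np.maximum(cnt, 1.0)
    P_rr /= c; P_rd /= c; P_dd /= c
    T = P_rd / np.maximum(P_dd, 1e-300)                # propagator, Eq. (model2)
    P_err = P_rr - P_rd**2 / np.maximum(P_dd, 1e-300)  # noise, Eq. (noisepower)
    r_cc = P_rd / np.maximum(np.sqrt(P_dd * P_rr), 1e-300)
    t = P_rd / np.maximum(P_rr, 1e-300)                # transfer function
    return T, P_err, r_cc, t
```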
\section{RESULTS} \label{sec:result} In this section, we assess the performance of the two tidal reconstruction algorithms with simulated galaxy mock catalogs from high precision simulations. We consider several metrics, including the density maps, the cross-correlation coefficient with the real space dark matter density field, the propagator, and the noise power spectrum. We then turn to exploring the angular-dependent reconstruction effects with synthetic redshift space galaxy samples. \subsection{Full shear reconstruction} Figure~\ref{fig:Slice_5T_XY} shows two-dimensional slices of the dark matter density field in one of the simulations, and the two fields reconstructed from the lower mass galaxy density fields in real and redshift space, respectively. The number density of this catalog is $\bar{n} = 3.60\times 10^{-3}\ h^3\mathrm{Mpc}^{-3}$. The dark matter density field is smoothed with an $R=4\ h^{-1}\mathrm{Mpc}$ Gaussian. The reconstructed fields are convolved with the transfer function in Equation~(\ref{eq:tf}), which minimizes the difference between the reconstructed field and the real space dark matter density field. This effectively corrects the anisotropic bias and suppresses the anisotropic noise, allowing a better visual comparison. We see that tidal reconstruction provides an accurate estimate of the large-scale matter distribution, consistent with the findings of \citet{2022ApJ...929....5Z}. The redshift space reconstruction shows similar performance to the real space result, i.e., the RSD does not impact the reconstruction in the transverse plane much. This is not surprising, as the RSD effect mainly changes the line of sight modes and does not affect transverse modes with $\mu\simeq0$ much. Figure~\ref{fig:Slice_5T_XZ} compares the full shear reconstructed fields with the real space dark matter density field in the $x-z$ plane.
In contrast to Figure~\ref{fig:Slice_5T_XY}, these density slices directly probe the performance of tidal reconstruction in the radial direction. As expected, the reconstructed field shows anisotropic noise in the $x-z$ plane. The reconstructed map is noisier than the corresponding real space map. We expect that the reconstruction degrades mainly due to the small-scale nonlinear RSD effect, i.e., the fingers of God, since tidal reconstruction is dominated by the large number of small-scale modes \citep[][]{2022ApJ...929....5Z}. \begin{figure}[ht!] \centering \includegraphics[width=\columnwidth]{Slice_XY.pdf} \caption{Two-dimensional slices of the density maps in the $x-y$ plane. From left to right, the panels show the dark matter density field, the full shear tidal reconstructed fields in real and redshift space, and the corresponding reconstructed fields for the transverse method. The maps are reconstructed from the lower mass catalog with number density $\bar{n} = 3.60\times 10^{-3}\ h^3\mathrm{Mpc}^{-3}$. The reconstructed fields are convolved with the transfer function in Equation~(\ref{eq:tf}) to suppress the anisotropic noise. The dark matter density field is smoothed by a Gaussian filter with smoothing scale $R = 4\ h^{-1}\mathrm{Mpc}$. } \label{fig:Slice_5T_XY} \end{figure} \begin{figure}[ht!] \centering \includegraphics[width=\columnwidth]{Slice_XZ.pdf} \caption{Same as Fig.~\ref{fig:Slice_5T_XY}, but for density slices in the $x-z$ plane.} \label{fig:Slice_5T_XZ} \end{figure} In Figure~\ref{fig:Cor2D_5T}, we plot the two-dimensional cross-correlation coefficients $r(k_{\perp}, k_{\parallel})$ between the full-shear reconstructed fields and the real space dark matter density field for two galaxy mock catalogs with $\bar{n}=3.6\times10^{-3}\ h^3\mathrm{Mpc}^{-3}$ and $4.25\times10^{-4}\ h^3\mathrm{Mpc}^{-3}$, respectively.
In general, tidal reconstruction works better with the higher number density sample, i.e., lower shot noise, in both real and redshift space, because with a lower shot noise the small-scale modes, which dominate the reconstruction performance, are measured with a higher signal-to-noise ratio. In real space, the correlation coefficients are isotropic for both the higher and lower mass galaxy catalogs, as expected. In redshift space, the correlation coefficient shows a clear dependence on the angle between the wave vector and the line of sight. For reconstructed modes near the radial direction, i.e., $\mu\simeq1$, the correlation coefficient drops much faster with increasing wave number than for the transverse modes with $\mu\simeq0$. Therefore, the reconstruction noise is much higher along the line-of-sight direction than in the transverse plane, consistent with the visual comparison in Figures~\ref{fig:Slice_5T_XY} and \ref{fig:Slice_5T_XZ}. \begin{figure}[ht!] \centering \includegraphics[width=0.7\columnwidth]{Cor2D_5T.pdf} \caption{The two-dimensional correlation coefficient $r(k_{\perp}, k_{\parallel})$ of the full shear tidal reconstructed density field with the real space dark matter density field for two galaxy number densities $\bar{n}=3.6\times10^{-3}\ h^3\mathrm{Mpc}^{-3}$ and $4.25\times10^{-4}\ h^3\mathrm{Mpc}^{-3}$. The upper panels show real space results, while the lower panels show results in redshift space. The light straight lines indicate five $\mu$ bins, from $0-0.2$ to $0.8-1.0$. In redshift space, the correlation coefficient is anisotropic and becomes much smaller for modes with higher $\mu$ values. } \label{fig:Cor2D_5T} \end{figure} To see the anisotropy caused by the RSD effect more clearly, we plot the cross-correlation coefficients $r(k,\mu)$ measured in $(k,\mu)$ bins in Figure~\ref{fig:R2DBin_RSD_5T}. \begin{figure}[ht!]
\centering \includegraphics[width=0.8\columnwidth]{R2DBin_RSD_5T.pdf} \caption{The cross-correlation coefficient $r(k,\mu)$ of the full-shear reconstructed fields with the dark matter density field, measured in five $\mu$ bins, $0-0.2$, $0.2-0.4$, etc., for two galaxy samples with $\bar{n}=3.6\times10^{-3}\ h^3\mathrm{Mpc}^{-3}$ and $4.25\times10^{-4}\ h^3\mathrm{Mpc}^{-3}$. While the reconstruction of modes in the highest $\mu$ bin is degraded substantially by the RSD effect, the other modes are reconstructed with high fidelity. The envelopes show the scatter estimated from the six independent simulation realizations. } \label{fig:R2DBin_RSD_5T} \end{figure} The lines from dark to light show the correlation for five $\mu$ bins from 0.1 to 0.9. The envelopes show the scatter estimated from the six independent simulation realizations. For the higher mass sample, i.e., $\bar{n}=4.25\times10^{-4}\ h^3\mathrm{Mpc}^{-3}$, the measured correlation coefficients have more fluctuations on large scales. This is not surprising, as the reconstruction noise is much higher for this sample and the simulation boxes have limited volume and thus a limited number of modes at scales $k\sim0.01\ h\mathrm{Mpc}^{-1}$. The correlation coefficient only reaches $\sim0.7$, i.e., $1-r^2=P_N/P_{\delta_r\delta_r}\simeq0.5$, on the largest scales for the first few $\mu$ bins, $\mu=0.1$ and $0.3$. Therefore, for BOSS CMASS-like number densities $\bar{n}\sim4.25\times10^{-4}\ h^3\mathrm{Mpc}^{-3}$, the large-scale dark matter distribution cannot be reconstructed with a high signal-to-noise ratio. For the lower mass sample, with a significantly higher number density typical of DESI galaxies, the correlation is higher than $\sim0.8$ at $k < 0.05\ h\mathrm{Mpc}^{-1}$, except for the radial modes near the line of sight with $\mu=0.9$.
This allows the mapping of dark matter on large scales with a high signal-to-noise ratio, which is highly beneficial for the multi-tracer analysis \citep[][]{2009JCAP...10..007M,2009MT}. Figure~\ref{fig:TN2DBin_RSD_NT_5T} \begin{figure*}[ht!] \centering \includegraphics[width=0.8\columnwidth]{TN2DBin_RSD_NT_5T.pdf} \caption{The propagator and reconstruction noise power spectrum for the full-shear reconstructed field in redshift space. The horizontal solid lines show the simple model for the propagator at large scales. } \label{fig:TN2DBin_RSD_NT_5T} \end{figure*} presents the propagator and reconstruction noise power spectrum for tidal reconstruction in redshift space for two galaxy samples, measured in five $\mu$ bins from 0.1 to 0.9. The light shaded regions denote one standard deviation from repeating this estimate for the six simulations. The propagator is defined as $T(k,\mu)=P_{\delta_r\delta}(k,\mu)/P_{\delta\delta}(k,\mu)$. Notice that the bias of the reconstructed field measured in this way is a function of $k$ and $\mu$. To disentangle the impact of the RSD effect on tidal reconstruction, we apply the real space tidal shear estimator to the redshift space mock galaxy density fields. Therefore, the angular dependence of the propagator and noise power arises from the RSD effect alone. From Figure~\ref{fig:TN2DBin_RSD_NT_5T}, for the lower mass galaxies, we notice that the propagator depends on the cosine $\mu$ with respect to the line of sight direction and its amplitude decreases as $\mu$ becomes larger. We see a similar trend for the higher mass galaxy catalog, though the measurements are noisier owing to the limited simulation volume and number of realizations. The impact of the RSD effect manifests as different multiplicative biases on the amplitude of reconstructed modes with different cosines $\mu$ with respect to the radial direction, while in real space the propagator is isotropic, without any dependence on direction \citep[e.g.][]{2022ApJ...929....5Z}.
We will study this angular dependence in detail next. The propagator approaches a constant value on large scales for all $\mu$ bins and deviates from this constant at $k\gtrsim0.1\ h\mathrm{Mpc}^{-1}$. This is clearer for the number density $\bar{n}=3.6\times10^{-3}\ h^3\mathrm{Mpc}^{-3}$, and less obvious for the lower number density $\bar{n}=4.25\times10^{-4}\ h^3\mathrm{Mpc}^{-3}$ due to the much higher reconstruction noise. A similar trend has been observed in real space tidal reconstruction by \citet{2022ApJ...929....5Z}. This is because the tidal shear estimators are derived in the squeezed limit, where the wavelength of the large-scale mode is much larger than that of the small-scale modes used for tidal reconstruction. At the scale where the squeezed limit breaks down, we expect the bias of the reconstructed field to begin running strongly with scale, as we see in Figure~\ref{fig:TN2DBin_RSD_NT_5T} \citep[see][for more discussions about this effect]{2022ApJ...929....5Z}. This is analogous to the CMB and 21~cm lensing estimators derived in the long wavelength limit \citep[see e.g.][for more discussions]{2008MNRAS.388.1819L,2010PhRvD..81l3015L,2012PhRvD..85d3016B,2019PhRvL.122r1301S}. In Figure~\ref{fig:TN2DBin_RSD_NT_5T}, we also show the reconstruction noise power spectra for the two galaxy samples. We see that the noise power flattens on large scales. This is as expected, since in the squeezed limit the reconstruction of large-scale modes is in the white homogeneous noise regime. The reconstruction noise is much higher for the lower number density sample, since the higher shot noise enhances the stochasticity of reconstruction. At wavenumbers larger than $0.05\ h\mathrm{Mpc}^{-1}$, the noise power spectrum shows a mild departure from the white noise prediction. We notice that the reconstruction noise power spectrum is more isotropic than the propagator.
In the low-$k$ limit, the amplitude of the noise power spectrum differs by only tens of percent between $\mu$ bins for both simulated galaxy samples. To use the tidal reconstructed field for cosmological inference, it is necessary to have an accurate description of the propagator and noise power spectrum. The scale-dependent propagator, or reconstruction bias, is flat on large scales, with an angular dependence on the cosine of the angle with the line of sight. The value of the propagator in the low-$k$ limit becomes smaller as the cosine $\mu$ becomes larger, opposite to the linear Kaiser effect, where the power spectrum amplitude is larger near the line of sight, i.e., in higher $\mu$ bins. Therefore, we attempt to fit the large-scale propagator using a simple parametric form, \begin{equation} \label{eq:fit} T(k, \mu) = \beta_0 - \beta_2 \mu^2, \end{equation} to capture the angular dependent effect. We then fit the propagator by minimizing the sum of squares \begin{equation} S=\sum_{i,j}\left(\hat{T}(k_i, \mu_j) - T(k_i, \mu_j)\right)^2, \end{equation} where the hat denotes the measured data points from simulations. Here we use the data points up to $k_{\mathrm{max}} = 0.1\ h\mathrm{Mpc}^{-1}$ for all $\mu$ bins. Note that the weight is uniform for all $k$ bins, while the power spectrum error usually scales as the inverse of the number of modes in each $k$ bin. This should be regarded as a particular weighting that up-weights the estimated propagator on large scales, avoiding over-fitting at small scales that would degrade the fit on large scales, i.e., in the low-$k$ limit where the propagator approaches a constant. We use the \textsc{Scipy} routine {\tt scipy.optimize.curve\_fit} to implement the least-squares fit. For the galaxy catalog with number density $\bar{n} = 3.6\times 10^{-3}\ h^3\mathrm{Mpc}^{-3}$, we obtain $\beta_0 = 0.234$ and $\beta_2 = 0.181$, with corresponding uncertainties $\sigma_{\beta_0} = 0.0012$ and $\sigma_{\beta_2} = 0.0027$.
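The fitting procedure can be sketched with {\tt scipy.optimize.curve\_fit} as follows; here synthetic propagator measurements stand in for the simulation data, and the input values are chosen only for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def model(mu, beta0, beta2):
    # Large-scale propagator model of Eq. (fit): T(k, mu) = beta0 - beta2*mu^2
    return beta0 - beta2 * mu**2

# Synthetic stand-in for the measured propagator \hat{T}(k_i, mu_j):
# 5 mu bins x 10 k bins up to k_max, flattened to 1D, with uniform
# weights across k bins as described in the text.
mu = np.repeat([0.1, 0.3, 0.5, 0.7, 0.9], 10)
rng = np.random.default_rng(0)
T_hat = model(mu, 0.234, 0.181) + 1e-3 * rng.standard_normal(mu.size)

popt, pcov = curve_fit(model, mu, T_hat)
sigma = np.sqrt(np.diag(pcov))  # one-sigma parameter uncertainties
```

Because the model depends only on $\mu$, the $k$ bins within each $\mu$ bin simply act as repeated measurements under this uniform weighting.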
For the higher mass sample with $\bar{n} = 4.25\times 10^{-4}\ h^3\mathrm{Mpc}^{-3}$, we obtain $\beta_0 = 0.269$ and $\beta_2 = 0.275$, with uncertainties $\sigma_{\beta_0} = 0.0028$ and $\sigma_{\beta_2} = 0.0063$, respectively. We have plotted the best fit model for both the high and low mass samples in Figure~\ref{fig:TN2DBin_RSD_NT_5T}. For the higher number density sample, Equation~(\ref{eq:fit}) provides a fairly good description of the propagator at large scales. While the measurement of the propagator is noisier for the lower number density sample, it can still be modeled by the simple two-parameter model within the one-sigma uncertainties on large scales. The distinct angular dependence compared with the usual linear RSD effect, $b+f\mu^2$, arises mainly because tidal reconstruction exploits small-scale structures, which are most impacted by the nonlinearities due to small-scale velocities, i.e., the fingers of God effect. One way to assess how well this linear bias model for tidal reconstruction works is to ask up to which scale $T(k,\mu)$ remains constant. A significant scale dependence is a sign that the squeezed limit does not apply and higher order corrections must be included. If we need to include higher $k$ modes in the cosmological analysis, the scale and angular dependence of $T(k,\mu)$ has to be modeled with high fidelity mock catalogs, which resemble the clustering properties of the specific galaxy samples. The reconstruction noise power spectrum is instead much more isotropic, with at most tens of percent fluctuations between different directions. Since the ratio of noise power to total power is given by $1-r^2$, where $r$ is the cross-correlation coefficient, a reconstructed mode has been measured to the cosmic-variance dominated limit when the correlation coefficient is close to unity, $r\sim1$.
For high fidelity reconstruction, i.e., the higher number density sample, the noise power is subdominant on large scales, where $r>0.8$, except for the largest $\mu$ bin. A ten percent variation in the noise power spectrum contributes only a few percent of the total power spectrum. Notice that the reconstruction noise power cannot be directly compared with the shot noise prediction $1/\bar{n}$ for galaxies before reconstruction, since the propagator of the reconstructed fields is generally $\sim0.2$, while the linear galaxy bias is of order $1$--$2$. The tidal reconstruction noise power spectrum is about $100\ h^{-3}\mathrm{Mpc}^3$ for the galaxy sample with $\bar{n} = 3.6\times 10^{-3}\ h^3\mathrm{Mpc}^{-3}$, and $10^3\ h^{-3}\mathrm{Mpc}^3$ for galaxies with $\bar{n} = 4.25\times 10^{-4}\ h^3\mathrm{Mpc}^{-3}$. We can use the noise power divided by the propagator squared, $P_N/T^2$, as a typical noise level for tidal reconstruction, i.e., about $100/0.2^2\ h^{-3}\mathrm{Mpc}^3=2.5\times10^3\ h^{-3}\mathrm{Mpc}^3$ for the low mass catalog and $10^3/0.2^2\ h^{-3}\mathrm{Mpc}^3=2.5\times10^4\ h^{-3}\mathrm{Mpc}^3$ for the high mass sample. In general, the noise level is a few times larger than the shot noise of the halo catalogs used for reconstruction. However, tidal reconstruction provides an independent tracer of the large-scale density, which can be used to cancel cosmic variance in the galaxy density. In summary, the above results show that the tidal reconstruction method is very powerful at improving cosmological constraints using the sample variance cancellation technique \citep{2009MT,2009JCAP...10..007M}. The simple parametric form of the propagator and the nearly white, isotropic reconstruction noise make the reconstructed tidal field an ideal tracer for the multi-tracer method, especially for constraining the primordial non-Gaussianity \citep{2009MT,2021PhRvD.104l3520D}.
\subsection{Transverse shear reconstruction} Having discussed the full shear reconstruction, we now continue to explore the transverse shear reconstruction in redshift space. We have presented the density slices in the $x-y$ plane for transverse shear reconstruction in Figure~\ref{fig:Slice_5T_XY}. The RSD changes the reconstruction results very little, as expected, since the RSD does not affect transverse modes much. Figure~\ref{fig:Slice_5T_XZ} shows the transverse shear reconstruction in the $x-z$ plane. We note that the performance is nearly the same in both real and redshift space, even for the radial modes. This is because the RSD affects the transverse shear only indirectly. The original proposal to avoid the RSD effect, i.e., using only the transverse shear components $\gamma_1$ and $\gamma_2$ for tidal reconstruction, does work. Figure~\ref{fig:Cor2D_2T} shows the two-dimensional correlation coefficient between the transverse shear reconstructed fields and the original real space dark matter density field, for the two galaxy number densities. \begin{figure}[ht!] \centering \includegraphics[width=0.7\columnwidth]{Cor2D_2T.pdf} \caption{Same as Figure~\ref{fig:Cor2D_5T}, but for the transverse shear reconstruction method. The reconstruction shows a similar trend in both real and redshift space.} \label{fig:Cor2D_2T} \end{figure} The correlation is much smaller in the low $k_\perp$ and high $k_\parallel$ regime, since these modes are inferred indirectly from the variation of the transverse shear components $\gamma_1$ and $\gamma_2$ along the line of sight direction \citep{2012arXiv1202.5804P,2016PhRvD..93j3504Z,2019MNRAS.486.3864K}. For both number densities, the correlation does not change much when the RSD effect is included. The reconstruction shows a similar trend in both real and redshift space. In Figure~\ref{fig:R2DBin_RSD_2T}, we plot the cross correlation coefficient measured in ($k,\mu$) bins.
The solid lines present the redshift space results while the dashed lines show the real space results. For clarity, we only plot the $1\sigma$ error for the solid lines, but the errors are similar in both cases. The correlation coefficient is almost the same in both real and redshift space, with some small discrepancies at small scales. Therefore, for tidal reconstruction with only the transverse shear components $\gamma_1$ and $\gamma_2$, the mapping from real to redshift space is a second order effect. This is consistent with the previous redshift space tidal reconstruction studied by \citet{2019MNRAS.486.3864K}. \begin{figure}[ht!] \centering \includegraphics[width=0.8\columnwidth]{R2DBin_RSD_2T.pdf} \caption{ The cross-correlation coefficient for the transverse-shear reconstruction with two galaxy catalogs in redshift space (solid lines) and real space (dashed lines). } \label{fig:R2DBin_RSD_2T} \end{figure} In Figure~\ref{fig:TN2DBin_RSD_2T}, we present the propagator and noise power spectrum of the transverse shear reconstruction for the two galaxy number densities. \begin{figure}[ht!] \centering \includegraphics[width=0.8\columnwidth]{TN2DBin_RSD_2T.pdf} \caption{The propagator and reconstruction noise power spectrum for the transverse-shear reconstruction with two galaxy catalogs in redshift space (solid lines) and real space (dashed lines). Notice that we have plotted the normalized noise power spectrum $P_N/T^2$ to have a clear comparison between the real and redshift space noises. The light grey lines in the bottom panels show the dark matter power spectrum for a better comparison with the noise power amplitude. } \label{fig:TN2DBin_RSD_2T} \end{figure} The redshift space results are represented by solid lines while the real space results are plotted as dashed lines. In real space, the propagator is isotropic and approaches a constant at large scales, even though we use only the two transverse shear components for tidal reconstruction.
The salient feature is that the propagator is still nearly isotropic and scale-independent at large scales, $k<0.1\ h\mathrm{Mpc}^{-1}$, even in the presence of RSDs. At small scales, $k>0.1\ h\mathrm{Mpc}^{-1}$, the RSD effect leads to a small angular-dependent feature, but not as apparent as for the full-shear reconstruction algorithm. Therefore, the RSD effect mostly changes the overall normalization of the reconstructed field, with little anisotropy. In the low-$k$ limit, the propagator changes from $\sim0.4$ to $\sim0.3$ for the low mass sample and from $\sim0.5$ to $\sim0.4$ for the high mass sample. To better compare the reconstruction noise, we have plotted the ratio of the noise power spectrum to the propagator squared, $P_N/T^2$, in Figure~\ref{fig:TN2DBin_RSD_2T}, since the absolute amplitude of the noise power spectrum depends on the normalization of the reconstructed field, as discussed above. This effectively corrects the normalization of the noise power spectrum on large scales, while increasing the small-scale noise power as the propagator becomes much smaller at higher wavenumbers. We omit the $\mu=0.9$ curve for $\bar{n}=4.25\times10^{-4}\ h^3\mathrm{Mpc}^{-3}$, which lies above the upper limit of the plot. We see that the dashed and solid lines are nearly the same at large scales for both number densities, i.e., the reconstruction noise is almost the same with or without RSDs, as long as we normalize the noise power using the propagator. Since we have $1-r^2=P_N/P_{\delta_r\delta_r}=(P_N/T^2)/(P_{\delta\delta}+P_N/T^2)$, i.e., equal correlation coefficients $r$ imply equal normalized noise power $P_N/T^2$, this conclusion is not surprising and follows directly from the result that the cross correlation is the same in both cases, as we have seen in Figure~\ref{fig:R2DBin_RSD_2T}. We have plotted the real space dark matter power spectrum $P_{\delta\delta}$ for a direct comparison between $P_N/T^2$ and $P_{\delta\delta}$.
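The identity relating $r$ and $P_N/T^2$ follows from the decomposition $\delta_r = T\delta + N$ with noise uncorrelated with $\delta$; a quick numerical check (the spectrum values are illustrative only):

```python
import numpy as np

# Assume delta_r = T*delta + N with N uncorrelated with delta, so that
# P_rd = T*P_dd and P_rr = T**2*P_dd + P_N.
T, p_dd, p_n = 0.3, 50.0, 2.0
p_rr = T**2 * p_dd + p_n              # total power of the reconstructed field
r2 = (T * p_dd)**2 / (p_dd * p_rr)    # r^2 = P_rd^2 / (P_dd * P_rr)

# Check: 1 - r^2 = (P_N/T^2) / (P_dd + P_N/T^2)
assert np.isclose(1.0 - r2, (p_n / T**2) / (p_dd + p_n / T**2))
```

Since $P_{\delta\delta}$ is fixed, matching $r$ between real and redshift space forces $P_N/T^2$ to match as well, which is the point made in the text.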
Since the propagator is isotropic in the low-$k$ limit, the RSD effect does not introduce additional angular dependence to the noise, except for an overall scaling of the amplitude. Thus, the reconstruction noise has a similar angular dependence as the noise in real space. While the transverse shear reconstruction has higher and anisotropic noise compared to the full shear method, it has the advantage of being less impacted by errors in the galaxy redshifts. On large scales, the RSD mostly changes the overall normalization of the reconstructed field. The propagator is still nearly isotropic in the presence of RSDs. The anisotropy of the reconstruction noise is largely due to the transverse nature of using only $\gamma_1$ and $\gamma_2$. This demonstrates that the transverse shear reconstruction can be powerful for cosmological applications which need to minimize the effect of redshift errors. The modeling of the reconstructed power spectrum could then be accomplished using the real space results with a nuisance parameter describing the amplitude of the power spectrum. \subsection{Exploration with the RSD effect} There are two regimes in which we have a good understanding of redshift-space distortions. On linear scales, a large-scale overdense region, towards which surrounding galaxies are falling, appears squashed in redshift space, which is known as the linear Kaiser effect \citep{1987MNRAS.227....1K}. In Fourier space, the galaxy clustering is enhanced in redshift space relative to real space by a factor of $(b+f\mu^2)$. However, this linear description is only valid on large scales. To evaluate the effect of linear distortions on tidal reconstruction, we apply a high-pass filter to the galaxy overdensity used for reconstruction, which removes the large-scale modes where linear theory applies.
Figure~\ref{fig:R2DBin_Cover_5T} shows the cross-correlation coefficient between the reconstructed field and the real space dark matter density for reconstruction without $k<0.2\ h\mathrm{Mpc}^{-1}$ modes. \begin{figure}[ht!] \centering \includegraphics[width=0.7\columnwidth]{R2DBin_Cover_5T.pdf} \caption{The cross-correlation coefficient for the full-shear reconstruction in redshift space (solid lines) and the results without $k < 0.2 \ h\mathrm{Mpc}^{-1}$ modes (dash-dotted lines).} \label{fig:R2DBin_Cover_5T} \end{figure} We see that excluding all $k<0.2\ h\mathrm{Mpc}^{-1}$ modes degrades the result only slightly. If we exclude only $k<0.1\ h\mathrm{Mpc}^{-1}$ modes, one can hardly discern the difference between the two curves. We have confirmed that the propagator and noise power also change only a little when all $k<0.2\ h\mathrm{Mpc}^{-1}$ modes are excluded, and show almost no difference when excluding $k<0.1\ h\mathrm{Mpc}^{-1}$ modes. This indicates that linear distortions have a negligible impact on tidal reconstruction, which makes sense since the reconstruction performance is dominated by the large number of small-scale modes. This also explicitly demonstrates that the large-scale information from tidal reconstruction is independent of the original large-scale structures directly traced by galaxies, providing more information about cosmological parameters. The modeling of the galaxy power spectrum in redshift space has advanced significantly and has been shown to be valid to $k\sim0.2-0.4\ h\mathrm{Mpc}^{-1}$, depending on the specific method \citep[see e.g.][]{2017JCAP...10..009H,2020PhRvD.102l3541N,2021JCAP...05..059S,2021JCAP...03..100C,2022MNRAS.514.3993P}, while most observable modes in galaxy surveys are still in the nonlinear regime and outside the realm of perturbative description. Therefore, the tidal information from nonlinear scales $k\sim1\ h\mathrm{Mpc}^{-1}$ is complementary to that from the large-scale power spectrum.
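The high-pass test can be sketched as follows, zeroing all Fourier modes below the cutoff on a periodic grid before reconstruction (grid size and box length below are placeholders, not the simulation values):

```python
import numpy as np

def high_pass(delta_x, boxsize, k_min=0.2):
    """Remove all Fourier modes with |k| < k_min (in h/Mpc) from a real
    density field on a periodic grid, as in the linear-distortion test."""
    n = delta_x.shape[0]
    delta_k = np.fft.rfftn(delta_x)
    kf = 2.0 * np.pi / boxsize                       # fundamental frequency
    kx = np.fft.fftfreq(n, d=1.0 / n) * kf           # full-axis wavenumbers
    kz = np.fft.rfftfreq(n, d=1.0 / n) * kf          # half-axis (real FFT)
    k = np.sqrt(kx[:, None, None]**2 + kx[None, :, None]**2
                + kz[None, None, :]**2)
    delta_k[k < k_min] = 0.0                         # zero the low-k modes
    return np.fft.irfftn(delta_k, s=delta_x.shape)
```

A plane wave with $k=2\pi/L$ below the cutoff is removed entirely, while modes above the cutoff pass through unchanged, which is what makes the comparison a clean test of the linear-regime contribution.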
A multi-tracer analysis enables a potential improvement in the measurement of the structure growth rate \citep{2009JCAP...10..007M}, similar to the $f_\mathrm{NL}$ constraints \citep{2021PhRvD.104l3520D}, which we plan to investigate in the future. As we move to smaller, nonlinear scales, the small-scale velocities elongate the galaxy clustering along the line of sight, usually known as the fingers of God (FoG) effect \citep{1972MNRAS.156P...1J}. The quadrupole moment of the clustering at small scales has the opposite sign to the linear case. In Fourier space, the observed density field is damped in the radial direction. This dominant small-scale nonlinearity in redshift space is caused by the nonlinear velocity dispersion $\sigma_v$, which has a different nature than the large-scale linear velocity. To mimic the effect of nonlinear velocity dispersion, we move the galaxies along the line of sight with a random velocity drawn from a Gaussian distribution with standard deviation $\sigma_v$, instead of the real velocity of each galaxy from the simulation. The importance of the FoG effect is determined by the typical velocity dispersion $\sigma_v$, converted to comoving length units, $\sigma_\chi=(1+z)\sigma_v/H(z)$. Figure~\ref{fig:R2DBin_PhotoZ_5T} shows the cross-correlation coefficient for tidal reconstruction with the synthetic redshift space galaxy catalogs which only include the small-scale random velocities. \begin{figure}[ht!] \centering \includegraphics[width=0.7\columnwidth]{R2DBin_PhotoZ_5T.pdf} \caption{The cross-correlation coefficient for the full-shear reconstruction in redshift space (solid lines) and results for the synthetic redshift space galaxy catalogs which only include the nonlinear velocity dispersion (dash-dotted lines).
} \label{fig:R2DBin_PhotoZ_5T} \end{figure} We have tested a few values and found that $\sigma_v = 75 \ \mathrm{km}/\mathrm{s}$, corresponding to $\sigma_\chi = 0.86 \ h^{-1}\mathrm{Mpc}$ at $z=0.6$, leads to a trend that qualitatively follows the real RSD effect. The cross-correlation coefficient also depends on the cosine with respect to the line of sight, and its value becomes smaller as $\mu$ increases. We have confirmed that a larger $\sigma_v$ causes a larger degradation of tidal reconstruction, specifically a much smaller cross-correlation coefficient in the high $\mu$ bins. In Figure~\ref{fig:TN2DBin_PhotoZ_NT_5T}, we present the propagator and noise power spectrum for tidal reconstruction with the synthetic catalogs which only include the small-scale velocities. \begin{figure}[ht!] \centering \includegraphics[width=0.8\columnwidth]{TN2DBin_PhotoZ_NT_5T.pdf} \caption{The propagator and reconstruction noise power spectrum for the full-shear tidal reconstruction with the synthetic redshift space galaxy catalogs which only include the nonlinear velocity dispersion. } \label{fig:TN2DBin_PhotoZ_NT_5T} \end{figure} We find that the propagator can also be fitted using Equation~(\ref{eq:fit}), the simple two-parameter model $T(k, \mu) = \beta_0 - \beta_2 \mu^2$. For the lower mass catalog with $\bar{n} = 3.6 \times 10^{-3}\ h^3\mathrm{Mpc}^{-3}$, we obtain $\beta_0 = 0.266$ and $\beta_2 = 0.210$, with standard deviations $\sigma_{\beta_0} = 0.0015$ and $\sigma_{\beta_2} = 0.0035$. For the higher mass galaxy sample with $\bar{n} = 4.25 \times 10^{-4}\ h^3\mathrm{Mpc}^{-3}$, we have $\beta_0 = 0.353$ and $\beta_2 = 0.328$, with fitting errors $\sigma_{\beta_0} = 0.0033$ and $\sigma_{\beta_2} = 0.0074$. From this, it is clear that the dominant anisotropy of tidal reconstruction in redshift space is induced by the small-scale nonlinear velocity dispersion.
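The $\sigma_v\to\sigma_\chi$ conversion and the construction of the synthetic catalogs can be sketched as follows (a flat $\Lambda$CDM background with $\Omega_m=0.3$ and the box size below are assumptions for illustration):

```python
import numpy as np

def sigma_chi(sigma_v, z, omega_m=0.3):
    """Comoving displacement sigma_chi = (1+z)*sigma_v/H(z) in Mpc/h,
    for sigma_v in km/s and a flat LCDM background."""
    Ez = np.sqrt(omega_m * (1.0 + z)**3 + 1.0 - omega_m)
    Hz = 100.0 * Ez                        # H(z) in h km/s/Mpc
    return (1.0 + z) * sigma_v / Hz

# sigma_v = 75 km/s at z = 0.6 corresponds to sigma_chi ~ 0.86 Mpc/h.
s = sigma_chi(75.0, 0.6)

# Synthetic catalog: displace each galaxy along the line of sight by a
# Gaussian random offset with standard deviation sigma_chi, wrapping
# periodically within the box.
rng = np.random.default_rng(1)
boxsize = 600.0                            # placeholder box size in Mpc/h
z_pos = rng.uniform(0.0, boxsize, 1000)    # stand-in radial coordinates
z_rsd = (z_pos + rng.normal(0.0, s, z_pos.size)) % boxsize
```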
The noise power spectrum is also nearly scale-independent in the low-$k$ limit, where the long wavelength limit is a good approximation, and has angular fluctuations at the tens-of-percent level between different $\mu$ bins. This demonstrates that small-scale random nonlinear velocities can explain most behaviours we have observed for redshift space tidal reconstruction. The value $\sigma_v = 75 \ \mathrm{km}/\mathrm{s}$ is compatible with the typical velocity dispersion of spectroscopic galaxy samples at the corresponding redshift \citep{2021JCAP...05..059S}. While being much smaller than the large-scale linear bulk velocity \citep{2013PhRvD..87f3526Z,2013PhRvD..88j3510Z,2018PhRvD..97d3502Z,2018JCAP...09..006J}, the nonlinear velocity dispersion degrades the reconstruction of radial modes substantially and limits the information that we can extract from high values of $\mu$. We have used the same $\sigma_v$ for both number densities, simply because with this value the reconstruction performs similarly to the reconstruction with the real halo velocities estimated from the simulation. In reality, the low mass sample should have a larger velocity dispersion due to the additional satellite galaxies included. The FoG effect produces a qualitatively correct result for the anisotropic reconstructed field, i.e., the angular dependence of the propagator and the nearly isotropic noise. However, we note that there are still discrepancies in the cross-correlation coefficients, and we cannot match the propagator and noise power spectrum between the real and synthetic catalogs by adjusting only $\sigma_v$. It is clear that higher order effects in the real to redshift space mapping need to be considered to have a full picture here. However, accounting for the full redshift-space distortions is a nontrivial task. We leave this for future work.
\section{Discussion and Conclusion} \label{sec:discussion} In this paper, we have applied tidal reconstruction to redshift space galaxy fields from simulations, while most previous works focus on the real space reconstruction. The large-scale density field can be recovered with high precision for the dense galaxy sample, with a correlation coefficient higher than 0.8 at the largest scales, $k<0.05\ h\mathrm{Mpc}^{-1}$, using the full shear method, except for the highest $\mu$ bin. For the sparse sample, the correlation coefficient only reaches $r\sim0.7$ on large scales, which limits the improvement in cosmological parameter measurements from the sample variance cancellation technique. Although for existing galaxy samples such as SDSS BOSS/eBOSS \citep[][]{2017MNRAS.470.2617A,2021PhRvD.103h3533A} the number density is insufficient for tidal reconstruction to be efficient, i.e., close to the number density $\bar{n}=4.25\times10^{-4}\ h^3\mathrm{Mpc}^{-3}$ we have studied in this paper, the ongoing and future surveys will have much higher number densities, e.g., DESI BGS and ELG \citep[][]{2022arXiv220808512H,2022arXiv220808513R}, Euclid \citep[][]{2018LRR....21....2A,2020A&A...642A.191E}, SPHEREx \citep[][]{2014arXiv1412.4872D}, MegaMapper \citep[][]{2019BAAS...51g.229S,2019BAAS...51c..72F}, etc.\ (see \citealt{2021JCAP...12..049S,2022arXiv220307506F} for a review). High density galaxy clustering and Stage-5 surveys are also being planned \citep[][]{2022arXiv220307291D,2022arXiv220903585S}. The tidal reconstruction method allows a substantial improvement in the cosmological parameter constraints, e.g., on local primordial non-Gaussianity, for a fixed survey volume and galaxy number density, at no additional cost. This makes tidal reconstruction a promising probe of cosmology.
The small-scale FoG effect, i.e., the nonlinear velocity dispersion, degrades the full shear reconstruction in redshift space, especially for high $\mu$ values. This makes sense since tidal reconstruction performance is dominated by the small-scale density modes. However, as the transverse shear terms are only indirectly affected by the real to redshift space mapping, the transverse shear method is largely insensitive to the RSD. Therefore, while noisier than the full shear method, the transverse shear reconstruction could still be useful in certain cases. Tidal reconstruction acquires a large-scale linear bias, which is constant to an excellent approximation. In redshift space, for full shear reconstruction this bias becomes angular dependent due to the anisotropic nature of the redshift space galaxy density field. However, we find that the reconstruction bias can be well described by a simple two-parameter model on large scales. The noise power spectrum is nearly isotropic and scale-independent at $k<0.05\ h\mathrm{Mpc}^{-1}$. Thus, we expect that for modes with $k<0.05\ h\mathrm{Mpc}^{-1}$, the noise power spectrum can be modeled as a constant term to a good approximation in cosmological data analysis. This makes it possible to use the reconstructed modes alongside directly observed galaxy density modes to constrain $f_\mathrm{NL}$ using an effective multi-tracer approach \citep{2021PhRvD.104l3520D}. However, in order to reach the theoretical threshold between single and multi-field inflation models, $f_\mathrm{NL}\sim1$ \citep[see e.g.][]{2014arXiv1412.4671A}, more detailed studies using very large volume simulations with primordial non-Gaussianity will be needed, since even percent level stochasticities can significantly impact the inference of $f_\mathrm{NL}$. We plan to study this in future work.
The propagator shows a characteristic anisotropy in the cosine $\mu$, $T(k,\mu)=\beta_0-\beta_2\mu^2$, which mostly arises from the nonlinear velocity dispersion as we have shown above. It might be possible to derive this characteristic scaling in $\mu$ analytically in the large-scale limit, by assuming a tidal coupling or response function in the long wavelength limit. This response function could be obtained from tidal simulations \citep[see e.g.][]{2018MNRAS.479..162S,2021MNRAS.503.1473S,2020MNRAS.496..483M,2021JCAP...04..041A,2021MNRAS.504.1694R}. We plan to investigate this topic in a future work. While noisy for high $\mu$ values, tidal reconstruction can obtain a high signal-to-noise reconstruction of smaller $\mu$ modes. This is highly complementary with other reconstruction methods such as the kinetic Sunyaev-Zel'dovich velocity reconstruction \citep[see e.g.][]{2018arXiv181013423S,2021arXiv211111526C,2022JCAP...09..028G}, where the radial modes with $\mu\sim1$ have the lowest noise. One of the major applications of tidal reconstruction is to recover the radial modes lost to foregrounds in 21~cm intensity mapping surveys. The neutral hydrogen maps have much smaller fingers of God effects than the typical spectroscopic galaxy samples at the same redshift, driven by the small number of satellite galaxies with a smaller velocity dispersion \citep{2018ApJ...866..135V,2022arXiv220712398O}. Therefore, tidal reconstruction may be even more beneficial for 21~cm surveys such as CHIME \citep{2022ApJS..261...29C}, HIRAX \citep{2022JATIS...8a1019C}, stage-II experiments \citep{2018arXiv181009572C}, PUMA \citep{2019BAAS...51g..53S,2020arXiv200205072C}, etc. However, further work is required to assess the effects of foreground contamination and instrumental systematics, which we leave to the future.
The isotropic modulation of the local power spectrum can also give an estimate of the large-scale density, by measuring the amplitude of the small-scale power spectrum in different subvolumes \citep[e.g.][]{2014JCAP...05..048C,2014PhRvD..90j3530L,2015JCAP...09..028C} or using a tidal field quadratic estimator as presented here. However, compared with the local anisotropic distortions, the isotropic modulation is more likely to be impacted by observational systematics, e.g., variations in the foreground stars, seeing, and galactic dust extinction, since all of these change the local galaxy power spectrum. A detailed exploration of observational systematics will be presented in a future paper. The reconstructed field is quadratic in the small-scale galaxy density. The cross spectrum of the reconstructed field with the original galaxy field is therefore a bispectrum of the galaxy density, while the power spectrum of the reconstructed field is a trispectrum. Thus, we are using higher order statistics, the 3-point and 4-point functions, in redshift space to improve cosmological measurements. There are other similar methods that use quadratic functions of the density field to exploit higher-order information, such as the skew power spectrum \citep[see e.g.][]{2015PhRvD..91d3530S,2020JCAP...04..011M,2020JCAP...08..007D,2021JCAP...03..020S}, but these require a perturbative description. However, perturbation theory has a limited range of validity and eventually breaks down in the nonlinear regime $k\sim1\ h\mathrm{Mpc}^{-1}$. Future large-scale structure studies will be sensitive to scales where perturbation theory breaks down; it is therefore of great importance to further develop methods that can efficiently exploit nonlinear information in redshift space beyond the linear theory \citep{2021JCAP...10..044F}.
\section*{Acknowledgement} \begin{acknowledgments} Ue-Li Pen receives support from Ontario Research Fund-Research Excellence Program (ORF-RE), Natural Sciences and Engineering Research Council of Canada (NSERC) [funding reference number RGPIN-2019-067, CRD 523638-18, 555585-20], Canadian Institute for Advanced Research (CIFAR), Canadian Foundation for Innovation (CFI), the National Science Foundation of China (Grants No. 11929301), Thoth Technology Inc, Alexander von Humboldt Foundation, and the National Science and Technology Council (NSTC) of Taiwan (111-2123-M-001-008-, and 111-2811-M-001-040-). Computations were performed on the Niagara supercomputer at the SciNet HPC Consortium and the SOSCIP Consortium's CPU computing platform. SciNet is funded by: the Canada Foundation for Innovation; the Government of Ontario; Ontario Research Fund - Research Excellence; and the University of Toronto \citep[][]{Loken_2010}. SOSCIP is funded by the Federal Economic Development Agency of Southern Ontario, the Province of Ontario, IBM Canada Ltd., Ontario Centres of Excellence, Mitacs and 15 Ontario academic member institutions. \end{acknowledgments} \vspace{5mm} \section{Introduction} \label{sec:introduction} Galaxy peculiar velocities contribute to a galaxy's observed redshift via the Doppler effect. This leads to characteristic anisotropies in the observed galaxy clustering pattern, known as redshift-space distortions (RSDs) \citep[][]{1972MNRAS.156P...1J,1977ApJ...212L...3S,1980lssu.book.....P,1987MNRAS.227....1K,1994MNRAS.267.1020P,1996MNRAS.282..877B}. By measuring the velocity-induced statistical effect on the galaxy power spectrum, recent galaxy surveys have measured the structure growth rate accurately, providing constraints on both dark energy properties and modifications to gravity theory \citep[e.g. SDSS BOSS/eBOSS][]{2017MNRAS.470.2617A,2021PhRvD.103h3533A}. 
The forthcoming generation of wide-field galaxy redshift surveys will generally probe larger volumes and higher galaxy densities, thus allowing for higher signal-to-noise measurements and better insights into our Universe, e.g. DESI \citep[][]{2016arXiv161100036D}, PFS \citep[][]{2014PASJ...66R...1T}, Euclid \citep[][]{2018LRR....21....2A,2020A&A...642A.191E}, SPHEREx \citep[][]{2014arXiv1412.4872D}, LSST \citep[][]{2009arXiv0912.0201L}, MegaMapper \citep[][]{2019BAAS...51g.229S,2019BAAS...51c..72F}. An accurate theoretical description of galaxy clustering in redshift space is key to the success of future spectroscopic surveys \citep[see e.g.][]{2020PhRvD.102l3541N}. However, the modeling of the galaxy power spectrum is limited to $k\sim0.14\ h\mathrm{Mpc}^{-1}$, mainly due to the finger of God effect on small scales \citep[see e.g.][]{2021arXiv211000006I,2021arXiv211000016D}. Simulation-based methods can in principle capture the nonlinear RSD effects, but \citet{2021arXiv211006969K} recently found little improvement in cosmological parameters beyond $k\sim0.2\ h\mathrm{Mpc}^{-1}$, due to the degeneracy between cosmological parameters and nuisance parameters in the analysis. The stage-III spectroscopic surveys can already map the galaxy distribution to $k\sim0.2\ h\mathrm{Mpc}^{-1}$, and this will be substantially extended, to $k\sim1\ h\mathrm{Mpc}^{-1}$, by future spectroscopic surveys. Thus, it is necessary to develop new methods to exploit small-scale information in redshift space. The gravitational coupling between density perturbations leads to striking non-Gaussian features in the large-scale structure. The small-scale filamentary structures arise from gravitational tidal interactions. The gravitational nonlinearity has traditionally led to a reduction in cosmological information \citep[e.g.][]{1999MNRAS.308.1179M,2005MNRAS.360L..82R}.
It has been realized that such tidal non-Gaussianity can be exploited to improve the measurement of large-scale structures \citep[][]{2012arXiv1202.5804P}. The local anisotropic distortions can be used to reconstruct the large-scale tidal shear and gravitational potential \citep[][]{2012arXiv1202.5804P,2016PhRvD..93j3504Z,2022ApJ...929....5Z}. \citet{2012arXiv1202.5804P} presented the first tidal reconstruction method, which uses two transverse shear fields in analogy with weak lensing \citep[][]{1993ApJ...404..441K}. This method has been further explored by \citet{2016PhRvD..93j3504Z} and found to be noisier for radial modes along the line of sight, i.e. an anisotropic reconstruction noise. This is because these modes are inferred indirectly from the variations of the two transverse shear fields along the line of sight. A new algorithm which exploits all five shear terms in three-dimensional space has been proposed recently, which reconstructs the radial modes directly from another three shear fields \citep{2022ApJ...929....5Z}. The new method has a lower and isotropic reconstruction noise, compared to the previous method using two shear fields. Similar algorithms have also been investigated by other groups, following the nonlinear coupling in standard perturbation theory \citep[see][for more details]{2018JCAP...07..046F,2020PhRvD.101h3510L,2020arXiv200700226L,2021PhRvD.104l3520D}. The reconstructed field from the tidal effects provides an independent tracer of the large-scale structure. By comparing this to redshift space galaxy field, one can measure the velocity growth factor on large scales without cosmic variance, analogous to \citet{2009JCAP...10..007M}. This enables precision measurements of local-type primordial non-Gaussianity using an effective multi-tracer approach \citep[see][for more discussions]{2021PhRvD.104l3520D}, where the improvements arise from the cosmic variance cancellation \citep[][]{2009MT}. 
In 21~cm cosmology, the radial modes with small wave numbers are lost due to the Galactic foreground contamination. However, these modes can be recovered with tidal reconstruction \citep[][]{2018PhRvD..98d3511Z,2018JCAP...07..046F,2019PhRvD.100b3517L,2019MNRAS.486.3864K}. This is essential for cross-correlations with the CMB and other probes such as weak lensing, the kinetic Sunyaev-Zel'dovich effect, photometric galaxies, etc \citep[see e.g.][]{2018PhRvD..98d3511Z,2019PhRvD.100b3517L,2021arXiv211205034G}. The recovery of 21~cm radial modes opens up a new set of possibilities and has profound implications for 21~cm cosmology. This problem has also been explored by \citet{2019JCAP...11..023M,2021JCAP...10..056M} and \citet{2021MNRAS.504.4716G} using a forward modeling approach and a machine learning-based method, which could be closer to optimal at a higher computational cost. These applications rely on a successful implementation of tidal reconstruction in redshift space, while previous studies have focused on tidal reconstruction in real space. Measurements of density fields from galaxies are exclusively made in redshift space. In principle, the RSD effect can be included in the nonlinear coupling in standard perturbation theory \citep[e.g. following][]{2018JCAP...07..046F,2020PhRvD.101h3510L,2021PhRvD.104l3520D,2020arXiv200700226L}, which should be valid in the mildly nonlinear regime. However, galaxies are subject to nonlinear dynamics. While the leading order effect is well described by perturbation theory on large scales, the nonlinear effects on small scales, i.e. fingers of God, are difficult to model, which hinders an analytical study of redshift space tidal reconstruction. In this work, we present a detailed study of tidal reconstruction in redshift space. We apply the tidal reconstruction methods to mock galaxies from $N$-body simulations. We find that the reconstruction results are anisotropic due to the RSD effect.
While the radial modes are noisier due to the nonlinear velocity dispersion, the transverse modes can be reconstructed with high fidelity and are well correlated with the large-scale matter density field. The large-scale bias of the reconstructed field can be described by a simple two-parameter model with an angular dependence distinct from the linear RSD effect, and the noise power spectrum is nearly isotropic and scale-independent on large scales, which can be straightforwardly fitted in cosmological parameter inference. This makes tidal reconstruction a promising method for multi-tracer analyses and 21~cm intensity mapping surveys. This paper is organized as follows. In Section~\ref{sec:tidalreconstruction}, we introduce the tidal reconstruction methods. In Section~\ref{sec:method}, we describe the numerical simulations and the numerical implementation of tidal reconstruction. In Section~\ref{sec:result}, we present the numerical results of reconstruction. We discuss and conclude in Section~\ref{sec:discussion}. \section{METHODOLOGY} \label{sec:tidalreconstruction} The gravitational coupling between large- and small-scale perturbations leads to anisotropic distortions in the locally measured correlation function \citep[][]{2012arXiv1202.5804P,2010PhRvL.105p1302M,2012PhRvL.108y1301J}. Such local anisotropic tidal distortions can be used to reconstruct the large-scale matter distribution \citep[][]{2012arXiv1202.5804P,2016PhRvD..93j3504Z,2022ApJ...929....5Z}. In this section, we present the tidal reconstruction algorithm and discuss its redshift space application. We consider the gravitational interaction between a long wavelength perturbation and small-scale density fluctuations in the squeezed limit, i.e., the wavelength of the small-scale density fluctuations is much smaller than that of the large-scale density field.
The leading order observable is then described by the large-scale tidal field, \begin{equation} t_{ij}=\Phi_{L,ij}, \end{equation} where $\Phi_L$ is the large-scale gravitational potential. The $3\times3$ symmetric tensor field $t_{ij}$ can be decomposed as \begin{equation} \label{eq:tij} t_{ij} = \left( \begin{array}{ccc} \epsilon_0 + \epsilon_1 - \epsilon_z & \epsilon_2 & \epsilon_x \\ \epsilon_2 & \epsilon_0 -\epsilon_1 - \epsilon_z & \epsilon_y \\ \epsilon_x & \epsilon_y & \epsilon_0 + 2\epsilon_z \end{array} \right), \end{equation} where $\epsilon_{0}=(\Phi_{L,11}+\Phi_{L,22}+\Phi_{L,33})/3$, $\epsilon_1 = (\Phi_{L, 11} - \Phi_{L, 22} )/2, \epsilon_2 = \Phi_{L, 12}, \epsilon_x = \Phi_{L, 13}, \epsilon_y = \Phi_{L, 23}$ and $\epsilon_z = (2\Phi_{L, 33} - \Phi_{L, 11} - \Phi_{L,22})/6$. The trace part of the tidal field corresponds to the local mean density, while the other components describe the tidal shear terms. The gravitational shear forces lead to anisotropic distortions in the locally measured power spectrum \citep[see e.g.][for more details]{2014PhRvD..89h3507S}. Since the large-scale tidal field is coherent on small scales, the tidal coupling results in a systematic change of the small-scale power. When enough small-scale modes are measured, the tidal shear terms can be reconstructed with high fidelity \citep[][]{2012arXiv1202.5804P}.
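The decomposition in Equation~(\ref{eq:tij}) can be verified numerically: computing the six components from a symmetric Hessian and reassembling the tensor must return the original matrix exactly. A short Python check (the random Hessian is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
Phi = (A + A.T) / 2  # random symmetric Hessian Phi_{L,ij}

# Trace part and the five shear components, as defined in the text
e0 = np.trace(Phi) / 3
e1 = (Phi[0, 0] - Phi[1, 1]) / 2
e2 = Phi[0, 1]
ex = Phi[0, 2]
ey = Phi[1, 2]
ez = (2 * Phi[2, 2] - Phi[0, 0] - Phi[1, 1]) / 6

# Reassemble t_ij following Equation (eq:tij)
t = np.array([[e0 + e1 - ez, e2,           ex],
              [e2,           e0 - e1 - ez, ey],
              [ex,           ey,           e0 + 2 * ez]])

assert np.allclose(t, Phi)  # the decomposition is exact
```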
The large-scale tidal shear fields can be estimated with the quadratic estimators, which are outer products of the filtered density fields \citep{2008MNRAS.388.1819L, 2010PhRvD..81l3015L, 2012PhRvD..85d3016B}, \begin{eqnarray} \label{eq:shearestimation} \hat{\epsilon}_1(\bm{x}) & = & [\delta^{w_1}(\bm{x})\delta^{w_1}(\bm{x}) - \delta^{w_2}(\bm{x})\delta^{w_2}(\bm{x})]/2, \nonumber \\ \hat{\epsilon}_2(\bm{x}) & = & \delta^{w_1}(\bm{x})\delta^{w_2}(\bm{x}), \nonumber \\ \hat{\epsilon}_x(\bm{x}) & = & \delta^{w_1}(\bm{x})\delta^{w_3}(\bm{x}), \nonumber \\ \hat{\epsilon}_y(\bm{x}) & = & \delta^{w_2}(\bm{x})\delta^{w_3}(\bm{x}), \nonumber \\ \hat{\epsilon}_z(\bm{x}) & = & [2\delta^{w_3}(\bm{x})\delta^{w_3}(\bm{x}) - \delta^{w_1}(\bm{x})\delta^{w_1}(\bm{x}) \nonumber \\ &&- \delta^{w_2}(\bm{x})\delta^{w_2}(\bm{x})]/6, \end{eqnarray} where \begin{equation} \label{eq:filterestimation} \delta^{w_j}(\bm{k})=ik_j W_R(k)\delta(\bm{k}), \end{equation} is the filtered gradient density field and $W_R(k) = \exp(-k^2R^2/2)$ is the Gaussian window with smoothing scale $R$ \citep[][]{2022ApJ...929....5Z}. In principle, the filter here should be anisotropic in redshift space. The mapping from real to redshift space brings anisotropies in the observed galaxy distribution along the line of sight, including the Kaiser effect \citep[][]{1987MNRAS.227....1K} and fingers of God \citep[][]{1972MNRAS.156P...1J}. The radial modes at small scales are usually noisier due to the fingers of God damping \citep[][]{2021JCAP...05..059S}, and anisotropic smoothing can account for this and improve the performance \citep[][]{2016MNRAS.457.2068C,2018MNRAS.478.1866H}. However, it is difficult to quantify the impact of RSD on reconstruction when using an anisotropic smoothing window, since we would then observe the combined effects of RSD and anisotropic filtering.
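The estimators in Equations~(\ref{eq:shearestimation}) and (\ref{eq:filterestimation}) amount to three FFTs for the filtered gradient fields followed by pointwise products. A minimal Python sketch on a periodic grid (the grid size, box size, and smoothing scale below are illustrative, not the values used in our pipeline):

```python
import numpy as np

def shear_estimators(delta, L, R):
    """Quadratic shear estimators from a periodic density grid delta
    (box side L, Gaussian smoothing scale R), following Eqs. in the text."""
    n = delta.shape[0]
    k1d = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    ksq = kx**2 + ky**2 + kz**2
    W = np.exp(-ksq * R**2 / 2)                    # Gaussian window W_R(k)
    dk = np.fft.fftn(delta)
    # Filtered gradient fields delta^{w_j}(x), Eq. (eq:filterestimation)
    dw = [np.fft.ifftn(1j * kj * W * dk).real for kj in (kx, ky, kz)]
    # Quadratic estimators, Eq. (eq:shearestimation)
    return {
        "e1": (dw[0] * dw[0] - dw[1] * dw[1]) / 2,
        "e2": dw[0] * dw[1],
        "ex": dw[0] * dw[2],
        "ey": dw[1] * dw[2],
        "ez": (2 * dw[2] * dw[2] - dw[0] * dw[0] - dw[1] * dw[1]) / 6,
    }

# Example on a small random grid (illustrative only)
delta = np.random.default_rng(1).standard_normal((32, 32, 32))
eps = shear_estimators(delta, L=100.0, R=1.25)
```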
The real to redshift space mapping also causes additional coupling of small-scale densities to the large-scale density field along the line of sight and this can be computed using perturbation theory \citep[][]{2017PhRvD..95h3522A,2017JCAP...06..053B,2018JCAP...02..022L,2018PhRvD..97f3527A,2018JCAP...07..049C,2019PhRvD.100j3515A}. The estimators can be constructed using the standard perturbation theory in redshift space to account for the anisotropic coupling due to the RSD effects, which potentially enables an unbiased estimate of the real space large-scale matter field \citep[e.g. following methods in][]{2018JCAP...07..046F,2020PhRvD.101h3510L,2020arXiv200700226L,2021PhRvD.104l3520D}. However, this limits the number of small-scale modes that can be included in reconstruction as perturbation theory is only valid in the mildly nonlinear regime, which degrades the reconstruction significantly \citep[see][for more discussions]{2022ApJ...929....5Z}. With the estimated tidal shear fields, we construct estimators for the large-scale density field. In general, any combination of shear fields can provide an estimate of the large-scale density field \citep[][]{2022ApJ...929....5Z}. Here, we consider two tidal reconstruction algorithms. One uses two transverse shear fields, $\epsilon_1$ and $\epsilon_2$, which are less affected by errors in redshift estimation \citep{2012arXiv1202.5804P, 2016PhRvD..93j3504Z,2019MNRAS.486.3864K}. Another algorithm exploits all five shear terms and thus has a lower reconstruction noise \citep{2022ApJ...929....5Z}. The details of the two methods are outlined below. \begin{itemize} \item {\it Transverse shear reconstruction}: In \citet{2012arXiv1202.5804P} and \citet{2016PhRvD..93j3504Z}, we use two purely transverse shear fields $\epsilon_1$ and $\epsilon_2$ in analogy with the weak-lensing mass reconstruction \citep[][]{1993ApJ...404..441K}.
The large-scale density field is given by \begin{equation} \label{eq:2shear} \epsilon_0(\bm{k}) = \frac{2k^2}{3(k_1^2 + k_2^2)^2} \left[ (k_1^2 - k_2^2)\epsilon_1(\bm{k}) + 2k_1 k_2 \epsilon_2(\bm{k}) \right], \end{equation} where $\epsilon_0=\nabla^2\Phi_L/3$, which differs from the large-scale density $\delta_L$ by a constant proportionality factor. This original proposal largely avoids the impact of RSD on reconstruction, since the transverse tidal shears in the tangential plane are less sensitive to the RSD effect along the line of sight. The RSD effect should only enter at second order for this reconstruction. In this paper, we explore the redshift space performance of this method in detail. \item {\it Full shear reconstruction}: This method was proposed by \citet{2022ApJ...929....5Z}, where we exploit all five shear terms in reconstruction. The reconstructed field is given by \begin{eqnarray} \label{eq:5shear} \epsilon_0(\bm{k}) = \frac{1}{2k^2} & \left[ (k_1^2 - k_2^2)\epsilon_1(\bm{k}) + 2k_1k_2\epsilon_2(\bm{k}) + 2k_1k_3\epsilon_x(\bm{k}) \right .\\ & \left . + 2k_2k_3\epsilon_y(\bm{k}) + (2k_3^2 - k_1^2 - k_2^2)\epsilon_z(\bm{k}) \right]. \nonumber \end{eqnarray} The full shear reconstruction uses the full shear information in the three-dimensional space and thus has a lower and isotropic reconstruction noise in real space \citep[][]{2022ApJ...929....5Z}. However, the shear fields $\epsilon_x$, $\epsilon_y$ and $\epsilon_z$ directly probe the inhomogeneous matter distribution in the line of sight direction, which is affected by the mapping from real to redshift space. Therefore, it is expected that the reconstruction will be highly anisotropic compared with transverse shear reconstruction in redshift space. We explore the detailed performance in redshift space with simulated mock galaxy fields below.
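The Fourier-space combinations in Equations~(\ref{eq:2shear}) and (\ref{eq:5shear}) can be checked mode by mode: for a single Fourier mode of $\Phi_L$, with $\epsilon_1 = (k_1^2-k_2^2)\Phi/2$, $\epsilon_2 = k_1k_2\Phi$, etc.\ (the overall sign from $\partial_i\partial_j \to -k_ik_j$ cancels in the ratio), both estimators must return $\epsilon_0 = k^2\Phi/3$. A short numerical verification:

```python
import numpy as np

rng = np.random.default_rng(2)
k1, k2, k3 = rng.standard_normal(3)  # components of a random wave vector
ksq = k1**2 + k2**2 + k3**2
Phi = 1.7  # amplitude of a single Fourier mode of Phi_L

# Shear components of this mode (using k_i k_j Phi; the sign cancels)
e1 = (k1**2 - k2**2) * Phi / 2
e2 = k1 * k2 * Phi
ex = k1 * k3 * Phi
ey = k2 * k3 * Phi
ez = (2 * k3**2 - k1**2 - k2**2) * Phi / 6

# Full shear combination, Equation (eq:5shear)
e0_full = ((k1**2 - k2**2) * e1 + 2 * k1 * k2 * e2 + 2 * k1 * k3 * ex
           + 2 * k2 * k3 * ey + (2 * k3**2 - k1**2 - k2**2) * ez) / (2 * ksq)

# Transverse combination, Equation (eq:2shear)
e0_trans = 2 * ksq / (3 * (k1**2 + k2**2)**2) * (
    (k1**2 - k2**2) * e1 + 2 * k1 * k2 * e2)

assert np.isclose(e0_full, ksq * Phi / 3)   # epsilon_0 = k^2 Phi / 3
assert np.isclose(e0_trans, ksq * Phi / 3)
```

The same algebra shows why Equation~(\ref{eq:2shear}) is undefined for purely radial modes, $k_1^2+k_2^2\to0$, while Equation~(\ref{eq:5shear}) remains regular for all $\bm{k}\neq0$.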
\end{itemize} In general, the reconstructed density field can be written as \begin{equation} \label{eq:model2} \delta_r(\bm{k}) = T(\bm{k}) \delta(\bm{k})+ N(\bm{k}), \end{equation} where $\delta_r(\bm{k})$ denotes the reconstructed field from transverse or full shear algorithm, $T(\bm{k})$ is the propagator that quantifies the bias to the original real space dark matter density field $\delta(\bm{k})$ and $N(\bm{k})$ is reconstruction noise. For full shear reconstruction in real space, the reconstruction bias and noise power only depend on the magnitude of the wave vector \citep[][]{2022ApJ...929....5Z}. However, for tidal reconstruction in redshift space, we expect that both the propagator $T(\bm{k})$ and the noise $N(\bm{k})$ will depend on the magnitude of the wave vector as well as the angle between the wave vector and the radial direction. We explore the properties of the propagator and noise power using high precision simulations in the following sections. \section{Numerical setup} \label{sec:method} In this section, we describe the simulations and the numerical implementation of tidal reconstruction. \subsection{Simulations} \label{sec:datasample} To investigate the performance of tidal reconstruction in redshift space, we utilize a set of six independent $N$-Body simulations, run with {\tt MP-Gadget} \citep{feng2018}, evolving $1536^3$ dark matter particles in a periodic box with side length $L=1500\ h^{-1}\mathrm{Mpc}$ to redshift $z=0.6$. The cosmological parameters are $\Omega_{m} = 0.3075$, $\Omega_bh^2=0.0223$, $\Omega_ch^2=0.1188$, $h = 0.6774$, $\sigma_8=0.8159$, and $n_s=0.9667$. These are the same simulations used in \citet{2021JCAP...05..059S} and \citet{2021JCAP...03..020S}. The {\tt Rockstar} \citep{2013ApJ...762..109B} phase space halo finder is used to identify halos and subhalos from snapshots of dark matter particles at redshift $z=0.6$. 
We generate the simulated galaxy samples by imposing a soft mass cut on the virial mass to select massive halos and subhalos to represent galaxies, following the procedure of \citet{2020PhRvD.102l3541N}. There are two parameters, $\log_{10}M_{\mathrm{min}}$ and $\sigma_{\log_{10}M}$, which determine the typical minimum mass and the profile of the soft mass cutoff \citep{2020PhRvD.102l3541N}. By choosing $\log_{10}(M_{\mathrm{min}}/h^{-1}M_{\odot})=11.5$ and $12.97$, we obtain two galaxy samples with number densities $\bar{n}=3.6\times10^{-3}\ h^3\mathrm{Mpc}^{-3}$ and $4.25\times10^{-4}\ h^3\mathrm{Mpc}^{-3}$, respectively. The value of $\sigma_{\log_{10}M}$ is 0.35 for both samples. See \citet{2020PhRvD.102l3541N} and \citet{2021JCAP...05..059S} for more details. The higher mass sample approximately reproduces the observed properties of BOSS CMASS galaxies \citep[][]{2020PhRvD.102l3541N}, while the lower mass sample mimics the galaxies that will be observed by DESI, with a much higher number density \citep{2021JCAP...05..059S}. We use these two catalogs to explore the effects of number density on tidal reconstruction. We implement the RSD by moving galaxies along the line of sight according to the center-of-mass velocities given by {\tt Rockstar}. The redshift space position $\bm{s}$ of a galaxy at true comoving position $\bm{x}$ is given by \begin{equation} \label{eq:rsd} \bm{s} = \bm{x} + \frac{\hat{\bm{z}}\cdot\bm{v}(\bm{x}) }{aH}\hat{\bm{z}}, \end{equation} where $\bm{v}$ is the peculiar velocity, $a$ is the scale factor and $H$ the Hubble parameter. The logarithmic structure growth rate at $z=0.6$ is $f=0.786$. In this work, we adopt the plane-parallel or distant-observer approximation, in which the $\hat{\bm{z}}$ direction is taken to be the line of sight. We use the standard Cloud-in-Cell (CIC) interpolation scheme to paint galaxies and particles to a $1536^3$ regular grid.
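The mapping of Equation~(\ref{eq:rsd}) in the plane-parallel approximation is a one-line displacement along $\hat{\bm{z}}$ with periodic wrapping. A minimal Python sketch (the positions, velocities, and box size below are illustrative):

```python
import numpy as np

def real_to_redshift(pos, vel, a, H, L):
    """Plane-parallel RSD mapping s = x + (v_z / aH) zhat, Eq. (eq:rsd),
    with periodic wrapping in a box of side L. Positions in h^-1 Mpc,
    velocities in km/s, H in km/s per h^-1 Mpc."""
    s = pos.copy()
    s[:, 2] = (s[:, 2] + vel[:, 2] / (a * H)) % L
    return s

# Illustrative values at z = 0.6 for the simulation cosmology
a = 1 / 1.6
H = 100 * np.sqrt(0.3075 * 1.6**3 + 0.6925)  # km/s per h^-1 Mpc
L = 1500.0
pos = np.array([[10.0, 20.0, 1499.0]])
vel = np.array([[0.0, 0.0, 300.0]])
s = real_to_redshift(pos, vel, a, H, L)  # wraps across the z boundary
```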
The resulting density fields are deconvolved with the CIC window \citep{2005ApJ...620..559J}. We also use interlacing to reduce the effect of aliasing caused by the finite sampling \citep{2016MNRAS.460.3624S}. The redshift space analysis requires a separate treatment of line-of-sight and transverse components of $\bm{k}$. We compute the two-dimensional power spectrum of density fields, $P(k_\perp,k_\parallel)$, where $k_\perp$ and $k_\parallel$ are the transverse and line-of-sight components of $\bm{k}$. For better quantitative assessments of the reconstruction performance, the power spectra of density fields are also computed in discrete $k$ and $\mu$ bins, where $k$ is the magnitude of the wave vector and $\mu=k_{\parallel}/k$ is the cosine of the angle between the line-of-sight and the wave vector. We use five uniform $\mu$ bins, $\mu=0-0.2,0.2-0.4$, etc. The width of $k$ bins is $\Delta k=3k_f$ for both $P(k_\perp,k_\parallel)$ and $P(k,\mu)$, where $k_f$ is the fundamental frequency $k_f = 2\pi/L$. In the following discussions, we use $P_{AB}(\bm{k})\equiv\langle A(\bm{k})B^*(\bm{k})\rangle$ to denote the power spectrum of fields $A(\bm{k})$ and $B(\bm{k})$. Note that here we have dropped the Dirac delta function. \subsection{Reconstruction} \label{sec:reconstruction} The tidal reconstruction works as follows. We first smooth the redshift space galaxy field with the Gaussian window and compute the filtered field using Equation~(\ref{eq:filterestimation}). The optimal smoothing scales are different for the two galaxy samples. We test a few different scales and the optimal scales which maximize the correlation are $1.25\ h^{-1}\mathrm{Mpc}$ and $1.5\ h^{-1}\mathrm{Mpc}$ for galaxy samples with $\bar{n}=3.6\times10^{-3}\ h^3\mathrm{Mpc}^{-3}$ and $4.25\times10^{-4}\ h^3\mathrm{Mpc}^{-3}$, respectively. Then the shear fields are computed following Equation~(\ref{eq:shearestimation}). 
Finally, the reconstructed large-scale density is given by Equation~(\ref{eq:2shear}) and Equation~(\ref{eq:5shear}) for transverse and full shear reconstruction. To better analyse the reconstruction results, we write the reconstructed field as \begin{equation} \delta_r(\bm{k}) = T(\bm{k})\delta(\bm{k}) + N(\bm{k}), \end{equation} where $T(\bm{k})=P_{\delta_r\delta}(\bm{k})/P_{\delta\delta}(\bm{k})$ is the propagator, $\delta(\bm{k})$ is the dark matter field in real space, and $N(\bm{k})$ is the reconstruction noise. Note that the power spectrum of the dark matter density field $P_{\delta\delta}$ is isotropic in real space due to statistical isotropy. Therefore, the propagator $T(k_\perp,k_\parallel)$ or $T(k,\mu)$ fully quantifies the angular dependence of the reconstructed field due to the RSD effect. The power spectrum of the reconstruction error, or noise, which describes the stochasticity for tidal reconstruction, is given by \begin{equation} \label{eq:noisepower} P_{\mathrm{err}}(\bm{k}) \equiv \langle|\delta_r(\bm{k})-T(\bm{k})\delta(\bm{k})|^2\rangle = \left( P_{\delta_r\delta_r}(\bm{k}) - \frac{P_{\delta_r\delta}(\bm{k})^2}{P_{\delta\delta}(\bm{k})} \right), \end{equation} where we have used $T(\bm{k})=P_{\delta_r\delta}(\bm{k})/P_{\delta\delta}(\bm{k})$. It is natural to expect that the noise power spectrum would be anisotropic for tidal reconstruction in redshift space. We also compute the cross correlation coefficient between the reconstructed field and the real space dark matter density field, \begin{equation} \label{eq:crosscorrelation} r_{cc}(\bm{k}) = \frac{P_{\delta_r\delta}(\bm{k})}{\sqrt{P_{\delta\delta}(\bm{k})P_{\delta_r\delta_r}(\bm{k})}}, \end{equation} where $P_{\delta_r\delta}(\bm{k})$ is the cross power spectrum, and $P_{\delta\delta}(\bm{k})$ and $P_{\delta_r\delta_r}(\bm{k})$ are the power spectra of the real space dark matter density $\delta$ and reconstructed field $\delta_r$.
Higher correlation between the two fields indicates a better reconstruction. Obviously, in a perfect reconstruction we have $r_{cc}(\bm{k})=1$ and $P_{\mathrm{err}}(\bm{k}) = 0$. From Equation~(\ref{eq:noisepower}) and Equation~(\ref{eq:crosscorrelation}), it can be derived that the noise power spectrum divided by the total power spectrum of the reconstructed field is related to the cross correlation coefficient as \begin{equation} \label{eq:1-r2} P_{\mathrm{err}}(\bm{k})/P_{\delta_r\delta_r}(\bm{k})=1-r^2_{cc}(\bm{k}). \end{equation} To optimally filter the reconstructed fields, we compute the transfer function by minimizing the difference between the reconstructed and real space dark matter density fields, \begin{equation} \langle|t(\bm{k})\delta_r(\bm{k})-\delta(\bm{k})|^2\rangle, \end{equation} and we have \begin{equation} \label{eq:tf} t(\bm{k})=\frac{P_{\delta_r\delta}(\bm{k})}{P_{\delta\delta}(\bm{k})}. \end{equation} Note that for tidal reconstruction in redshift space, the transfer function depends on the cosine $\mu$, as the reconstruction noise is highly anisotropic. The power spectra are measured in $(k_\perp,k_\parallel)$ or $(k,\mu)$ bins for each simulation first and then averaged over the six independent realizations to suppress cosmic variance before computing $T(\bm{k})$, $r_{cc}(\bm{k})$, and $t(\bm{k})$. \section{RESULTS} \label{sec:result} In this section, we assess the performance of the two tidal reconstruction algorithms with simulated galaxy mock catalogs from high precision simulations. We consider several metrics including the density maps, the cross-correlation coefficient with the real space dark matter density field, the propagator and the noise power spectrum. We then turn to exploring the angular-dependent reconstruction effects with synthetic redshift space galaxy samples.
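The relation in Equation~(\ref{eq:1-r2}) is an exact algebraic identity given the definitions of the propagator, noise power, and cross-correlation coefficient, and can be confirmed on mock mode amplitudes. A short Python check (the biased, noisy "reconstruction" below is synthetic and purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
# Synthetic Fourier mode amplitudes in one (k, mu) bin:
# a biased, noisy mock "reconstruction" of the true field
delta = rng.standard_normal(n)
delta_r = 0.3 * delta + 0.5 * rng.standard_normal(n)

# Empirical band powers (averages over modes in the bin)
P_dd = np.mean(delta * delta)
P_rr = np.mean(delta_r * delta_r)
P_rd = np.mean(delta_r * delta)

T = P_rd / P_dd                       # propagator estimate
P_err = P_rr - T**2 * P_dd            # noise power, Equation (eq:noisepower)
r_cc = P_rd / np.sqrt(P_dd * P_rr)    # Equation (eq:crosscorrelation)

assert np.isclose(P_err / P_rr, 1 - r_cc**2)  # Equation (eq:1-r2)
```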
\subsection{Full shear reconstruction} Figure~\ref{fig:Slice_5T_XY} shows two-dimensional slices of the dark matter density field in one of the simulations, and the two fields reconstructed from the lower mass galaxy density fields in real and redshift space, respectively. The number density of this catalog is $\bar{n} = 3.60\times 10^{-3}\ h^3\mathrm{Mpc}^{-3}$. The dark matter density field is smoothed with an $R=4\ h^{-1}\mathrm{Mpc}$ Gaussian. The reconstructed fields are convolved with the transfer function in Equation~(\ref{eq:tf}), minimizing the difference between the reconstructed field and the real space dark matter density field. This effectively corrects the anisotropic bias and suppresses the anisotropic noise, allowing a better visual comparison. We see that tidal reconstruction can provide an accurate estimate of the large-scale matter distribution, consistent with findings in \citet{2022ApJ...929....5Z}. The redshift space reconstruction shows similar performance to the real space result, i.e., the RSD does not impact the reconstruction in the transverse plane much. This is not surprising, as the RSD effect mainly changes the line of sight modes and has little impact on transverse modes with $\mu\simeq0$. Figure~\ref{fig:Slice_5T_XZ} compares the full shear reconstructed fields with the real space dark matter density field in the $x-z$ plane. In contrast to Figure~\ref{fig:Slice_5T_XY}, these density slices directly probe the performance of tidal reconstruction in the radial direction. As expected, the reconstructed field shows an anisotropic noise in the $x-z$ plane. The reconstructed map is noisier than the corresponding real space map. We expect that the reconstruction degrades mainly due to the small-scale nonlinear RSD effect, i.e., fingers of God, since tidal reconstruction is dominated by the large number of small-scale modes \citep[][]{2022ApJ...929....5Z}. \begin{figure}[ht!] 
\centering \includegraphics[width=\columnwidth]{Slice_XY.pdf} \caption{Two-dimensional slices of the density maps in the $x-y$ plane. From left to right, the panels show the dark matter density field, the full shear tidal reconstructed field in real and redshift spaces, and the corresponding reconstructed fields for the transverse method. The maps are reconstructed from the lower mass catalog with number density $\bar{n} = 3.60\times 10^{-3}\ h^3\mathrm{Mpc}^{-3}$. The reconstructed fields are convolved with the transfer function in Equation~(\ref{eq:tf}) to suppress the anisotropic noise. The dark matter density field is smoothed by a Gaussian filter with smoothing scale $R = 4\ h^{-1}\mathrm{Mpc}$. } \label{fig:Slice_5T_XY} \end{figure} \begin{figure}[ht!] \centering \includegraphics[width=\columnwidth]{Slice_XZ.pdf} \caption{Same as Fig.~\ref{fig:Slice_5T_XY}, but for density slices in the $x-z$ plane.} \label{fig:Slice_5T_XZ} \end{figure} In Figure~\ref{fig:Cor2D_5T}, we plot the two-dimensional cross-correlation coefficients $r(k_{\perp}, k_{\parallel})$ between the full-shear reconstructed fields and the real space dark matter density field for two galaxy mock catalogs with $\bar{n}=3.6\times10^{-3}\ h^3\mathrm{Mpc}^{-3}$ and $4.25\times10^{-4}\ h^3\mathrm{Mpc}^{-3}$, respectively. In general, the tidal reconstruction works better with the higher number density sample, i.e., lower shot noise, in both real and redshift spaces, because with a lower shot noise, the small-scale modes, which dominate the reconstruction performance, are measured with a higher signal-to-noise ratio. In real space, the correlation coefficients are isotropic for both higher and lower mass galaxy catalogs, as expected. In redshift space, we see that the correlation coefficient shows a clear angular dependence on the angle between the wave vector and the line of sight. 
For reconstructed modes near the radial direction, i.e., $\mu\simeq1$, the correlation coefficient drops much faster than for the transverse modes with $\mu\simeq0$ as the wave number increases. Therefore, the reconstruction noise is much higher along the line-of-sight direction than in the transverse plane, which is consistent with the conclusions from the visual comparison in Figures~\ref{fig:Slice_5T_XY} and \ref{fig:Slice_5T_XZ}. \begin{figure}[ht!] \centering \includegraphics[width=0.7\columnwidth]{Cor2D_5T.pdf} \caption{The two-dimensional correlation coefficient $r(k_{\perp}, k_{\parallel})$ of the full shear tidal reconstructed density field with the real space dark matter density field for two galaxy number densities $\bar{n}=3.6\times10^{-3}\ h^3\mathrm{Mpc}^{-3}$ and $4.25\times10^{-4}\ h^3\mathrm{Mpc}^{-3}$. The upper panels show real space results, while the lower panels show results in redshift space. The light straight lines indicate five $\mu$ bins, from $0-0.2$ to $0.8-1.0$. In redshift space, the correlation coefficient is anisotropic, becoming much smaller for modes with higher $\mu$ values. } \label{fig:Cor2D_5T} \end{figure} To see the anisotropy caused by the RSD effect more clearly, we plot the cross-correlation coefficients $r(k,\mu)$ measured in $(k,\mu)$ bins in Figure~\ref{fig:R2DBin_RSD_5T}. \begin{figure}[ht!] \centering \includegraphics[width=0.8\columnwidth]{R2DBin_RSD_5T.pdf} \caption{The cross-correlation coefficient $r(k,\mu)$ of the full-shear reconstructed fields with the dark matter density field, measured in five $\mu$ bins, $0-0.2$, $0.2-0.4$, etc., for two galaxy samples with $\bar{n}=3.6\times10^{-3}\ h^3\mathrm{Mpc}^{-3}$ and $4.25\times10^{-4}\ h^3\mathrm{Mpc}^{-3}$. While the reconstruction of modes in the highest $\mu$ bin is degraded substantially by the RSD effect, the other modes are reconstructed with high fidelity. The envelopes show the scatter estimated from the six independent realizations of the simulations. 
} \label{fig:R2DBin_RSD_5T} \end{figure} The lines from dark to light show the correlation for five $\mu$ bins from 0.1 to 0.9. The envelopes show the scatter estimated from the six independent realizations of the simulations. For the higher mass sample, i.e., $\bar{n}=4.25\times10^{-4}\ h^3\mathrm{Mpc}^{-3}$, the measured correlation coefficients have more fluctuations on large scales. This is not surprising, as the reconstruction noise is much higher for this sample and the simulation boxes have limited volume and thus a limited number of modes at scales $k\sim0.01\ h\mathrm{Mpc}^{-1}$. The correlation coefficient can only reach $\sim0.7$, i.e., $1-r^2=P_N/P_{\delta_r\delta_r}\simeq0.5$, on the largest scales for the first few $\mu$ bins, $\mu=0.1$ and $0.3$. Therefore, for the BOSS CMASS number densities, $\bar{n}\sim4.25\times10^{-4}\ h^3\mathrm{Mpc}^{-3}$, the large-scale dark matter distribution cannot be reconstructed with a high signal-to-noise ratio. For the lower mass sample, with a significantly higher number density representative of DESI galaxies, the correlation is higher than $\sim0.8$ at $k < 0.05\ h\mathrm{Mpc}^{-1}$, except for the radial modes near the line of sight with $\mu=0.9$. This allows the mapping of dark matter on large scales with a high signal-to-noise ratio, which is highly beneficial for the multi-tracer analysis \citep[][]{2009JCAP...10..007M,2009MT}. Figure~\ref{fig:TN2DBin_RSD_NT_5T} \begin{figure*}[ht!] \centering \includegraphics[width=0.8\columnwidth]{TN2DBin_RSD_NT_5T.pdf} \caption{The propagator and reconstruction noise power spectrum for the full-shear reconstructed field in redshift space. The horizontal solid lines show the simple model for the propagator at large scales. } \label{fig:TN2DBin_RSD_NT_5T} \end{figure*} presents the propagator and reconstruction noise power spectrum for tidal reconstruction in redshift space for two galaxy samples, measured in five $\mu$ bins from 0.1 to 0.9. 
The light shaded regions denote one standard deviation estimated from the six simulations. The propagator is defined as $T(k,\mu)=P_{\delta_r\delta}(k,\mu)/P_{\delta\delta}(k,\mu)$. Notice that the bias of the reconstructed field measured in this way is a function of $k$ and $\mu$. To disentangle the impact of the RSD effect on tidal reconstruction, we apply the real space tidal shear estimator to the redshift space mock galaxy density fields. Therefore, the angular dependence of the propagator and noise power arises from the RSD effect alone. From Figure~\ref{fig:TN2DBin_RSD_NT_5T}, for the lower mass galaxies, we notice that the propagator depends on the cosine $\mu$ with respect to the line of sight direction and its amplitude decreases as $\mu$ becomes larger. We see a similar trend from the higher mass galaxy catalog, though the measurements are noisier due to the limited simulation volume and number of realizations. The impact of the RSD effect manifests as different multiplicative biases on the amplitude of reconstructed modes with different cosine angles $\mu$ with respect to the radial direction, while in real space the propagator is isotropic, without any dependence on the direction \citep[e.g.][]{2022ApJ...929....5Z}. We will study this angular dependence in detail next. The propagator approaches a constant value on large scales for all $\mu$ bins and deviates from the constant value at $k\gtrsim0.1\ h\mathrm{Mpc}^{-1}$. This is clearer for the number density $\bar{n}=3.6\times10^{-3}\ h^3\mathrm{Mpc}^{-3}$, and less obvious for the lower number density $\bar{n}=4.25\times10^{-4}\ h^3\mathrm{Mpc}^{-3}$ due to the much higher reconstruction noise. A similar trend has been observed in real space tidal reconstruction by \citet{2022ApJ...929....5Z}. This is because the tidal shear estimators are derived in the squeezed limit, where the wavelength of the large-scale mode is much larger than that of the small-scale modes used for tidal reconstruction. 
At the scale where the squeezed limit is assumed to break down, we expect the bias of the reconstructed field to begin running strongly with scale, as we see in Figure~\ref{fig:TN2DBin_RSD_NT_5T} \citep[see][for more discussions about this effect]{2022ApJ...929....5Z}. This behaviour is analogous to that of the CMB and 21~cm lensing estimators derived in the long wavelength limit \citep[see e.g.][for more discussions]{2008MNRAS.388.1819L,2010PhRvD..81l3015L,2012PhRvD..85d3016B,2019PhRvL.122r1301S}. In Figure~\ref{fig:TN2DBin_RSD_NT_5T}, we also show the reconstruction noise power spectra for the two galaxy samples. We see that the noise power flattens on large scales. This is as expected, since in the squeezed limit the reconstruction of large-scale modes is in the white homogeneous noise regime. The reconstruction noise is much higher for the lower number density sample, since the higher shot noise enhances the stochasticity of reconstruction. At wavenumbers larger than $0.05\ h\mathrm{Mpc}^{-1}$, the noise power spectrum starts to deviate mildly from the white noise prediction. We notice that the reconstruction noise power spectrum is more isotropic than the propagator. In the low-$k$ limit, the amplitude of the noise power spectrum differs by only tens of percent between $\mu$ bins for both simulated galaxy samples. To use the tidal reconstructed field for cosmological inference, it is necessary to have an accurate description of the propagator and noise power spectrum. The scale-dependent propagator, or reconstruction bias, is flat on large scales, with an angular dependence on the cosine $\mu$ with respect to the line of sight. The value of the propagator in the low-$k$ limit becomes smaller when the cosine $\mu$ becomes larger, opposite to the linear Kaiser effect, where the power spectrum amplitude is larger near the line of sight, i.e., in larger $\mu$ bins. 
Therefore, we attempt to fit the large-scale propagator using a simple parametric form, \begin{equation} \label{eq:fit} T(k, \mu) = \beta_0 - \beta_2 \mu^2, \end{equation} to capture the angular dependent effect. We then fit the propagator by minimizing the sum of squares \begin{equation} S=\sum_{i,j}\left(\hat{T}(k_i, \mu_j) - T(k_i, \mu_j)\right)^2, \end{equation} where the hat denotes measured data points from simulations. Here we use the data points up to $k_{\mathrm{max}} = 0.1\ h\mathrm{Mpc}^{-1}$ for all $\mu$ bins. Note that the weight is uniform for all $k$ bins, while the power spectrum error usually scales as the inverse square root of the number of modes in that $k$ bin. This should be regarded as a particular weighting that up-weights the estimated propagator on large scales and avoids over-fitting at small scales, which would degrade the fit on large scales, i.e., in the low-$k$ limit where the propagator approaches a constant. We use the \textsc{Scipy} routine {\tt scipy.optimize.curve\_fit} to implement the least squares fit. For the galaxy catalog with number density $\bar{n} = 3.6\times 10^{-3}\ h^3\mathrm{Mpc}^{-3}$, we obtain $\beta_0 = 0.234$ and $\beta_2 = 0.181$, with corresponding uncertainties $\sigma_{\beta_0} = 0.0012$ and $\sigma_{\beta_2} = 0.0027$. For the higher mass sample with $\bar{n} = 4.25\times 10^{-4}\ h^3\mathrm{Mpc}^{-3}$, we obtain $\beta_0 = 0.269$ and $\beta_2 = 0.275$, with uncertainties $\sigma_{\beta_0} = 0.0028$ and $\sigma_{\beta_2} = 0.0063$, respectively. We have plotted the best fit model for both high and low mass samples in Figure~\ref{fig:TN2DBin_RSD_NT_5T}. For the higher number density sample, Equation~(\ref{eq:fit}) provides a fairly good description of the propagator at large scales. While the measurement of the propagator is noisier for the lower number density sample, it can still be modeled by the simple two-parameter model within the one-sigma uncertainties on large scales. 
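Because the model $T(k,\mu)=\beta_0-\beta_2\mu^2$ is linear in its parameters, the uniform-weight fit performed with {\tt scipy.optimize.curve\_fit} is equivalent to an ordinary least-squares solve. A minimal sketch, with an illustrative function name:

```python
import numpy as np

def fit_propagator(k, mu, T_hat, k_max=0.1):
    """Fit T(k, mu) = beta0 - beta2 * mu^2 to the measured propagator,
    with uniform weights for all data points up to k_max."""
    k, mu, T_hat = map(np.asarray, (k, mu, T_hat))
    sel = k <= k_max
    # Design matrix for the linear model beta0 * 1 + beta2 * (-mu^2)
    A = np.column_stack([np.ones(sel.sum()), -mu[sel]**2])
    (beta0, beta2), *_ = np.linalg.lstsq(A, T_hat[sel], rcond=None)
    return beta0, beta2
```

Inverse-variance weights could be incorporated by scaling the rows of `A` and `T_hat`; the uniform weighting above corresponds to the choice described in the text.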
The distinct angular dependence compared with the usual linear RSD effect, $b+f\mu^2$, arises mainly because tidal reconstruction exploits small-scale structures, which are most impacted by the nonlinearities due to small-scale velocities, i.e., the fingers of God effect. One way to assess how well this linear bias model for tidal reconstruction works is to ask up to which scales $T(k,\mu)$ is a constant. A significant scale dependence is a sign that the squeezed limit does not apply and higher order corrections must be included. If we need to include higher $k$ modes in the cosmological analysis, the scale and angular dependence of $T(k,\mu)$ has to be modeled with high fidelity mock catalogs, which resemble the clustering properties of the specific galaxy samples. The reconstruction noise power spectrum is instead much more isotropic, with at most tens of percent fluctuations between different directions. Since the ratio of noise power to total power is given by $1-r^2$, where $r$ is the cross-correlation coefficient, the reconstructed mode has been measured to the cosmic-variance dominated limit when the correlation coefficient is close to unity, $r\sim1$. For high fidelity reconstruction, i.e., the higher number density sample, the noise power is subdominant on large scales, where $r>0.8$, except for the largest $\mu$ bin. A ten percent variation in the noise power spectrum contributes only a few percent of the total power spectrum. Notice that the reconstruction noise power cannot be directly compared with the shot noise prediction $1/\bar{n}$ for galaxies before reconstruction, since the propagator of the reconstructed fields is generally $\sim0.2$, while the linear galaxy bias is of order $1$--$2$. The tidal reconstruction noise power spectrum is about $100\ h^{-3}\mathrm{Mpc}^3$ for the galaxy sample with $\bar{n} = 3.6\times 10^{-3}\ h^3\mathrm{Mpc}^{-3}$, and $10^3\ h^{-3}\mathrm{Mpc}^3$ for galaxies with $\bar{n} = 4.25\times 10^{-4}\ h^3\mathrm{Mpc}^{-3}$. 
We can use the noise power divided by the propagator squared, $P_N/T^2$, as a typical noise level for tidal reconstruction, i.e., about $100/0.2^2\ h^{-3}\mathrm{Mpc}^3=2.5\times10^3\ h^{-3}\mathrm{Mpc}^3$ for the low mass catalog and $10^3/0.2^2\ h^{-3}\mathrm{Mpc}^3=2.5\times10^4\ h^{-3}\mathrm{Mpc}^3$ for the high mass sample. In general, the noise level is a few times larger than the shot noise of the halo catalogs used for reconstruction. However, tidal reconstruction provides an independent tracer of the large-scale density, which can be used to cancel cosmic variance in the galaxy density. In summary, the above results show that the tidal reconstruction method is very powerful for improving cosmological constraints using the sample variance cancellation technique \citep{2009MT,2009JCAP...10..007M}. The simple parametric form of the propagator and the nearly white, isotropic reconstruction noise make the reconstructed tides field an ideal tracer for the multi-tracer method, especially for constraining the primordial non-Gaussianity \citep{2009MT,2021PhRvD.104l3520D}. \subsection{Transverse shear reconstruction} Having discussed the full shear reconstruction, we now continue to explore the transverse shear reconstruction in redshift space. We have presented the density slices in the $x-y$ plane for transverse shear reconstruction in Figure~\ref{fig:Slice_5T_XY}. The RSD changes the reconstruction results little, as expected, since RSD does not affect transverse modes much. Figure~\ref{fig:Slice_5T_XZ} shows the transverse shear reconstruction in the $x-z$ plane. We note that the performance is nearly the same in both real and redshift spaces, even for the radial modes. This is because the RSD only affects the transverse shear indirectly. The original proposal to avoid the RSD effect, i.e., using only the transverse shear components $\gamma_1$ and $\gamma_2$ for tidal reconstruction, does work. 
Figure~\ref{fig:Cor2D_2T} shows the two-dimensional correlation coefficient between the transverse shear reconstructed fields and the original real space dark matter density field, for two galaxy number densities. \begin{figure}[ht!] \centering \includegraphics[width=0.7\columnwidth]{Cor2D_2T.pdf} \caption{Same as Figure~\ref{fig:Cor2D_5T}, but for the transverse shear reconstruction method. The reconstruction shows a similar trend for both real and redshift spaces.} \label{fig:Cor2D_2T} \end{figure} The correlation is much smaller in the low-$k_\perp$, high-$k_\parallel$ regime, since these modes are inferred indirectly from the variation of the transverse shear components $\gamma_1$ and $\gamma_2$ along the line of sight direction \citep{2012arXiv1202.5804P,2016PhRvD..93j3504Z,2019MNRAS.486.3864K}. For both number densities, the correlation does not change much when the RSD effect is included. The reconstruction shows a similar trend for both real and redshift spaces. In Figure~\ref{fig:R2DBin_RSD_2T}, we plot the cross-correlation coefficient measured in ($k,\mu$) bins. The solid lines present the redshift space results while the dashed lines show the real space results. For clarity, we only plot the $1\sigma$ error for the solid lines, but the errors are similar in both cases. The correlation coefficient is almost the same in both real and redshift spaces, with some small discrepancies at small scales. Therefore, for tidal reconstruction with only the transverse shear components $\gamma_1$ and $\gamma_2$, the mapping from real to redshift space is a second order effect. This is consistent with the previous redshift space tidal reconstruction studied by \citet{2019MNRAS.486.3864K}. \begin{figure}[ht!] \centering \includegraphics[width=0.8\columnwidth]{R2DBin_RSD_2T.pdf} \caption{ The cross-correlation coefficient for the transverse-shear reconstruction with two galaxy catalogs in redshift space (solid lines) and real space (dashed lines). 
} \label{fig:R2DBin_RSD_2T} \end{figure} In Figure~\ref{fig:TN2DBin_RSD_2T}, we present the propagator and noise power spectrum of transverse shear reconstruction for two galaxy number densities. \begin{figure}[ht!] \centering \includegraphics[width=0.8\columnwidth]{TN2DBin_RSD_2T.pdf} \caption{The propagator and reconstruction noise power spectrum for the transverse-shear reconstruction with two galaxy catalogs in redshift space (solid lines) and real space (dashed lines). Notice that we have plotted the normalized noise power spectrum $P_N/T^2$ to have a clear comparison between real and redshift space noises. The light grey lines in the bottom panels show the dark matter power spectrum for a better comparison with the noise power amplitude. } \label{fig:TN2DBin_RSD_2T} \end{figure} The redshift space results are represented by solid lines while the real space results are plotted as dashed lines. In real space, the propagator is isotropic and approaches a constant at large scales, even though we only use the two transverse shear components for tidal reconstruction. The salient feature is that the propagator is still nearly isotropic and scale-independent at large scales, $k<0.1\ h\mathrm{Mpc}^{-1}$, even in the presence of RSDs. At small scales, $k>0.1\ h\mathrm{Mpc}^{-1}$, the RSD effect leads to a small angular-dependent feature, but not as apparent as for the full-shear reconstruction algorithm. Therefore, the RSD effect only changes the overall normalization of the reconstructed field, with only a small anisotropic effect. In the low-$k$ limit, the propagator changes from $\sim0.4$ to $\sim0.3$ for the low mass sample and from $\sim0.5$ to $\sim0.4$ for the high mass sample. 
To better compare the reconstruction noise, we have plotted the ratio of the noise power spectrum to the propagator squared, $P_N/T^2$, in Figure~\ref{fig:TN2DBin_RSD_2T}, since the absolute amplitude of the noise power spectrum depends on the normalization of the reconstructed field, as discussed above. This effectively corrects the normalization of the noise power spectrum on large scales, while increasing the small-scale noise power as the propagator becomes much smaller at higher wavenumbers. We omit the $\mu=0.9$ curve for $\bar{n}=4.25\times10^{-4}\ h^3\mathrm{Mpc}^{-3}$, which lies above the upper limit of the plot. We see that the dashed and solid lines are nearly the same at large scales for both number densities, i.e., the reconstruction noise is almost the same with or without RSDs, as long as we normalize the noise power using the propagator. Since we have $1-r^2=P_N/P_{\delta_r\delta_r}=P_N/T^2/(P_{\delta\delta}+P_N/T^2)$, i.e., equal correlation coefficients $r$ imply an equal normalized noise power spectrum $P_N/T^2$, this conclusion is not surprising and follows directly from the result that the cross-correlation is the same in both cases, as we have seen in Figure~\ref{fig:R2DBin_RSD_2T}. We have plotted the real space dark matter power spectrum $P_{\delta\delta}$ for a direct comparison between $P_N/T^2$ and $P_{\delta\delta}$. Since the propagator is isotropic in the low-$k$ limit, the RSD effect does not introduce additional angular dependence to the noise, except an overall scaling of the amplitude. Thus, the reconstruction noise has a similar angular dependence as the noise in real space. While the transverse shear reconstruction has a higher and anisotropic noise compared to the full shear method, it has the advantage of being less impacted by errors in the galaxy redshift. On large scales, the RSD mostly changes the overall normalization of the reconstructed field. The propagator is still nearly isotropic in the presence of RSDs. 
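The step from equal correlation coefficients to equal normalized noise can be checked directly. Assuming $\delta_r = T\delta + N$ with noise uncorrelated with $\delta$, a few lines of arithmetic with illustrative power spectrum values (not measured from the simulations) verify the identity $1-r^2=(P_N/T^2)/(P_{\delta\delta}+P_N/T^2)$:

```python
import numpy as np

# Illustrative power spectrum values (not measured from the simulations)
P_dd = np.array([2.0e4, 1.0e4, 5.0e3])   # real space matter power
P_N = np.array([1.0e2, 1.2e2, 1.5e2])    # reconstruction noise power
T = 0.3                                  # large-scale propagator

# Model: delta_r = T * delta + N, with N uncorrelated with delta
P_rr = T**2 * P_dd + P_N                 # total reconstructed power
P_rd = T * P_dd                          # cross power

r2 = P_rd**2 / (P_dd * P_rr)             # squared correlation coefficient
lhs = 1.0 - r2
rhs = (P_N / T**2) / (P_dd + P_N / T**2)
assert np.allclose(lhs, rhs)             # 1 - r^2 = (P_N/T^2)/(P_dd + P_N/T^2)
```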
The anisotropy of the reconstruction noise is largely due to the transverse nature of using only $\gamma_1$ and $\gamma_2$. This demonstrates that the transverse shear reconstruction can be powerful for cosmological applications which need to minimize the effect of redshift errors. The modeling of the reconstructed power spectrum could be accomplished using the real space results with a nuisance parameter describing the amplitude of the power spectrum. \subsection{Exploration with the RSD effect} There are two regimes in which we have a good understanding of redshift-space distortions. On linear scales, a large-scale overdense region, towards which surrounding galaxies are falling, appears squashed in redshift space, which is known as the linear Kaiser effect \citep{1987MNRAS.227....1K}. In Fourier space, the galaxy clustering is enhanced in redshift space relative to real space by the factor $(b+f\mu^2)$. However, this linear description is only valid on large scales. To evaluate the effect of linear distortions on tidal reconstruction, we can apply a high-pass filter to the galaxy overdensity used for reconstruction, which removes the large-scale modes where linear theory applies. Figure~\ref{fig:R2DBin_Cover_5T} shows the cross-correlation coefficient between the reconstructed field and the real space dark matter density for reconstruction without $k<0.2\ h\mathrm{Mpc}^{-1}$ modes. \begin{figure}[ht!] \centering \includegraphics[width=0.7\columnwidth]{R2DBin_Cover_5T.pdf} \caption{The cross-correlation coefficient for the full-shear reconstruction in the redshift space (solid lines) and the results without $k < 0.2 \ h\mathrm{Mpc}^{-1}$ modes (dash-dotted lines).} \label{fig:R2DBin_Cover_5T} \end{figure} We see that excluding all $k<0.2\ h\mathrm{Mpc}^{-1}$ modes only degrades the result a little. If we exclude only $k<0.1\ h\mathrm{Mpc}^{-1}$ modes, one can hardly discern the difference between the two curves. 
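The high-pass filtering used in this test can be implemented with a sharp cut in Fourier space. A minimal sketch, assuming a periodic box; the function name and the sharp $k$-space cut are illustrative assumptions:

```python
import numpy as np

def high_pass(field, boxsize, k_cut=0.2):
    """Remove all Fourier modes with |k| < k_cut (in h/Mpc) from a gridded
    overdensity field in a periodic box of side `boxsize` (Mpc/h)."""
    n = field.shape[0]
    fk = np.fft.rfftn(field)
    # Wave vectors for each FFT axis, in h/Mpc
    k = 2 * np.pi * np.fft.fftfreq(n, d=boxsize / n)
    kz = 2 * np.pi * np.fft.rfftfreq(n, d=boxsize / n)
    kmag = np.sqrt(k[:, None, None]**2 + k[None, :, None]**2
                   + kz[None, None, :]**2)
    fk[kmag < k_cut] = 0.0        # zero out the large-scale modes
    return np.fft.irfftn(fk, s=field.shape)
```

Applying `high_pass` to the galaxy overdensity before the shear estimation removes exactly the modes where linear theory applies, so any remaining anisotropy in the reconstruction must come from smaller scales.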
We have confirmed that the propagator and noise power also change only a little when all $k<0.2\ h\mathrm{Mpc}^{-1}$ modes are excluded, and show almost no difference when excluding $k<0.1\ h\mathrm{Mpc}^{-1}$ modes. This indicates that linear distortions have a negligible impact on tidal reconstruction, which makes sense since the reconstruction performance is dominated by the large number of small-scale modes. This also explicitly demonstrates that the large-scale information from tidal reconstruction is independent of the original large-scale structures directly traced by galaxies, providing more information about cosmological parameters. The modeling of the galaxy power spectrum in redshift space has advanced significantly and has been shown to be valid to $k\sim0.2-0.4\ h\mathrm{Mpc}^{-1}$, depending on the specific method \citep[see e.g.][]{2017JCAP...10..009H,2020PhRvD.102l3541N,2021JCAP...05..059S,2021JCAP...03..100C,2022MNRAS.514.3993P}, while most observable modes in galaxy surveys are still in the nonlinear regime and outside the realm of perturbative description. Therefore, the tides information from nonlinear scales $k\sim1\ h\mathrm{Mpc}^{-1}$ is complementary to that from the large-scale power spectrum. A multi-tracer analysis enables a potential improvement in the measurement of the structure growth rate \citep{2009JCAP...10..007M}, similar to the $f_\mathrm{NL}$ constraints \citep{2021PhRvD.104l3520D}, which we plan to investigate in the future. As we move to smaller, nonlinear scales, the small-scale velocities elongate the galaxy clustering along the line of sight, usually known as the fingers of God (FoG) effect \citep{1972MNRAS.156P...1J}. The quadrupole moment of the clustering at small scales has an opposite sign to the linear case. In Fourier space, the observed density field is damped in the radial direction. 
This dominant small-scale nonlinearity in redshift space is caused by the nonlinear velocity dispersion $\sigma_v$, which has a different nature than the large-scale linear velocity. To mimic the effect of nonlinear velocity dispersion, we move the galaxies along the line of sight with a random velocity drawn from a Gaussian distribution with standard deviation $\sigma_v$, instead of using the real velocity of each galaxy from the simulation. The importance of the FoG effect is determined by the typical velocity dispersion $\sigma_v$, converted to comoving length units, $\sigma_\chi=(1+z)\sigma_v/H(z)$. Figure~\ref{fig:R2DBin_PhotoZ_5T} shows the cross-correlation coefficient for tidal reconstruction with the synthetic redshift space galaxy catalogs which only include the small-scale random velocities. \begin{figure}[ht!] \centering \includegraphics[width=0.7\columnwidth]{R2DBin_PhotoZ_5T.pdf} \caption{The cross-correlation coefficient for the full-shear reconstruction in redshift space (solid lines) and results for the synthetic redshift space galaxy catalogs which only include the nonlinear velocity dispersion (dash-dotted lines). } \label{fig:R2DBin_PhotoZ_5T} \end{figure} We have tested a few values and found that with $\sigma_v = 75 \ \mathrm{km}/\mathrm{s}$, corresponding to $\sigma_\chi = 0.86 \ h^{-1}\mathrm{Mpc}$ at $z=0.6$, this small-scale velocity leads to a trend that qualitatively follows the real RSD effect. The cross-correlation coefficient also depends on the cosine with respect to the line of sight and its value also becomes smaller when we increase $\mu$. We have confirmed that a larger $\sigma_v$ causes a larger degradation to tidal reconstruction, specifically a much smaller cross-correlation coefficient for high $\mu$ bins. In Figure~\ref{fig:TN2DBin_PhotoZ_NT_5T}, we present the propagator and noise power spectrum for tidal reconstruction with the synthetic catalogs which only include the small-scale velocities. \begin{figure}[ht!] 
\centering \includegraphics[width=0.8\columnwidth]{TN2DBin_PhotoZ_NT_5T.pdf} \caption{The propagator and reconstruction noise power spectrum for the full-shear tidal reconstruction with the synthetic redshift space galaxy catalogs which only include the nonlinear velocity dispersion. } \label{fig:TN2DBin_PhotoZ_NT_5T} \end{figure} We find that the propagator can also be fitted using Equation~(\ref{eq:fit}), the simple two-parameter model $T(k, \mu) = \beta_0 - \beta_2 \mu^2$. For the lower mass catalog with $\bar{n} = 3.6 \times 10^{-3}\ h^3\mathrm{Mpc}^{-3}$, we obtain $\beta_0 = 0.266$ and $\beta_2 = 0.210$, with standard deviations $\sigma_{\beta_0} = 0.0015$ and $\sigma_{\beta_2} = 0.0035$. For the higher mass galaxy sample with $\bar{n} = 4.25 \times 10^{-4}\ h^3\mathrm{Mpc}^{-3}$, we have $\beta_0 = 0.353$ and $\beta_2 = 0.328$, with fitting errors $\sigma_{\beta_0} = 0.0033$ and $\sigma_{\beta_2} = 0.0074$. From this, it is clear that the dominant anisotropy of tidal reconstruction in redshift space is induced by the small-scale nonlinear velocity dispersion. The noise power spectrum is also nearly scale-independent in the low-$k$ limit, where the long wavelength limit is a good approximation, and shows angular fluctuations at the tens-of-percent level between different $\mu$ bins. This demonstrates that small-scale random nonlinear velocities can explain most behaviours we have observed for redshift space tidal reconstruction. The value $\sigma_v = 75 \ \mathrm{km}/\mathrm{s}$ is compatible with the typical velocity dispersion of spectroscopic galaxy samples at the corresponding redshift \citep{2021JCAP...05..059S}. While being much smaller than the large-scale linear bulk velocity \citep{2013PhRvD..87f3526Z,2013PhRvD..88j3510Z,2018PhRvD..97d3502Z,2018JCAP...09..006J}, the nonlinear velocity dispersion degrades the reconstruction of radial modes substantially and limits the information that we can extract from high values of $\mu$. 
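The synthetic catalogs described above can be generated by perturbing the radial galaxy positions with Gaussian random velocities. A minimal sketch, assuming a flat $\Lambda$CDM background with $\Omega_m=0.3$ (an illustrative choice) and an illustrative function name:

```python
import numpy as np

def add_fog(chi, sigma_v=75.0, z=0.6, omega_m=0.3, seed=0):
    """Displace radial comoving positions chi (Mpc/h) along the line of
    sight by Gaussian random velocities with dispersion sigma_v (km/s),
    converted to a comoving displacement sigma_chi = (1+z) sigma_v / H(z).
    omega_m = 0.3 flat LCDM is an illustrative assumption."""
    E = np.sqrt(omega_m * (1 + z)**3 + (1 - omega_m))  # H(z) = 100 E(z) h km/s/Mpc
    sigma_chi = (1 + z) * sigma_v / (100.0 * E)        # Mpc/h
    rng = np.random.default_rng(seed)
    return chi + rng.normal(scale=sigma_chi, size=np.shape(chi)), sigma_chi
```

With these assumptions, $\sigma_v = 75\ \mathrm{km\,s^{-1}}$ at $z=0.6$ gives $\sigma_\chi \simeq 0.86\ h^{-1}\mathrm{Mpc}$, consistent with the value quoted above.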
We have used the same $\sigma_v$ for both number densities, simply because the resulting reconstruction performs similarly to the reconstruction with the real halo velocities estimated from the simulation. In reality, the low mass sample should have a larger velocity dispersion due to the additional satellite galaxies included. The FoG effect produces a qualitatively correct result for the anisotropic reconstructed field, i.e., the angular dependence of the propagator and the nearly isotropic noise. However, we note that there are still discrepancies in the cross-correlation coefficients, and we cannot match the propagator and noise power spectrum between the real and synthetic catalogs by adjusting only $\sigma_v$. It is clear that higher order effects in the real to redshift space mapping need to be considered to have a full picture here. However, accounting for the full redshift-space distortions is a nontrivial task. We leave this for future work. \section{Discussion and Conclusion} \label{sec:discussion} In this paper, we have applied tidal reconstruction to redshift space galaxy fields from simulations, while most previous works focus on real space reconstruction. The large-scale density field can be recovered with high precision for the dense galaxy sample, with a correlation coefficient higher than 0.8 at the largest scales, $k<0.05\ h\mathrm{Mpc}^{-1}$, using the full shear method, except for the highest $\mu$ bin. For the sparse sample, the correlation coefficient can only reach $r\sim0.7$ at the largest scales, limiting a substantial improvement in cosmological parameter measurement using the sample variance cancellation technique. 
Although for the existing galaxy samples such as SDSS BOSS/eBOSS \citep[][]{2017MNRAS.470.2617A,2021PhRvD.103h3533A}, the number density is insufficient for tidal reconstruction to be efficient, i.e., close to the number density $\bar{n}=4.25\times10^{-4}\ h^3\mathrm{Mpc}^{-3}$ we have studied in this paper, the ongoing and future surveys will have a much higher number density, such as DESI BGS and ELG \citep[][]{2022arXiv220808512H,2022arXiv220808513R}, Euclid \citep[][]{2018LRR....21....2A,2020A&A...642A.191E}, SPHEREx \citep[][]{2014arXiv1412.4872D}, MegaMapper \citep[][]{2019BAAS...51g.229S,2019BAAS...51c..72F}, etc.\ (see \citealt{2021JCAP...12..049S,2022arXiv220307506F} for a review). High density galaxy clustering and Stage-5 surveys are also being planned \citep[][]{2022arXiv220307291D,2022arXiv220903585S}. The tidal reconstruction method allows a substantial improvement in cosmological parameter constraints, e.g., on local primordial non-Gaussianity, for a fixed survey volume and galaxy number density, at no additional cost. This makes tidal reconstruction a promising probe of cosmology. The small-scale FoG effect, i.e., the nonlinear velocity dispersion, leads to a degradation of the full shear reconstruction in redshift space, especially for high $\mu$ values. This makes sense, since tidal reconstruction performance is dominated by the small-scale density modes. However, as the transverse shear terms are only indirectly affected by the real to redshift space mapping, the transverse shear method is largely insensitive to the RSD. Therefore, while being noisier than the full shear method, the transverse shear reconstruction could still be useful in certain cases. Tidal reconstruction acquires a large-scale linear bias, which is constant to an excellent approximation. In redshift space, for full shear reconstruction this bias becomes angular dependent due to the anisotropic nature of the redshift space galaxy density field.
However, we find that the reconstruction bias can be well described by a simple two-parameter model on large scales. The noise power spectrum is nearly isotropic and scale-independent at $k<0.05\ h\mathrm{Mpc}^{-1}$. Thus, we expect that for modes with $k<0.05\ h\mathrm{Mpc}^{-1}$, the noise power spectrum can be modeled as a constant term to a good approximation in the cosmological data analysis. This makes it possible to use the reconstructed modes alongside directly observed galaxy density modes to constrain $f_\mathrm{NL}$ using an effective multi-tracer approach \citep{2021PhRvD.104l3520D}. However, in order to reach the theoretical threshold between single and multi-field inflation models, $f_\mathrm{NL}\sim1$ \citep[see e.g.][]{2014arXiv1412.4671A}, more detailed studies using very large volume simulations with primordial non-Gaussianity will be needed, since even percent level stochasticities can significantly impact the inference of $f_\mathrm{NL}$. We plan to study this in the future. The propagator shows a characteristic anisotropy in the cosine $\mu$, $T(k,\mu)=\beta_0-\beta_2\mu^2$, which mostly arises from the nonlinear velocity dispersion as we have shown above. It might be possible to derive this characteristic scaling in $\mu$ analytically in the large-scale limit, by assuming a tidal coupling or response function in the long wavelength limit. This response function could be obtained from tides simulations \citep[see e.g.][]{2018MNRAS.479..162S,2021MNRAS.503.1473S,2020MNRAS.496..483M,2021JCAP...04..041A,2021MNRAS.504.1694R}. We plan to investigate this topic in a future work. While being noisy for high $\mu$ values, tidal reconstruction can obtain a high signal-to-noise reconstruction of smaller $\mu$ modes.
This is highly complementary with other reconstruction methods such as the kinetic Sunyaev-Zel'dovich velocity reconstruction \citep[see e.g.][]{2018arXiv181013423S,2021arXiv211111526C,2022JCAP...09..028G}, where the radial modes with $\mu\sim1$ have the lowest noise. One of the major applications of tidal reconstruction is to recover the radial modes lost to foregrounds in 21~cm intensity mapping surveys. The neutral hydrogen maps have much smaller fingers of God effects than the typical spectroscopic galaxy samples at the same redshift, driven by a small number of satellite galaxies with a smaller velocity dispersion \citep{2018ApJ...866..135V,2022arXiv220712398O}. Therefore, tidal reconstruction may be even more beneficial for 21~cm surveys such as CHIME \citep{2022ApJS..261...29C}, HIRAX \citep{2022JATIS...8a1019C}, stage-II experiments \citep{2018arXiv181009572C}, PUMA \citep{2019BAAS...51g..53S,2020arXiv200205072C}, etc. However, further study of the effects of foreground contamination and instrumental effects is required, which we leave to future work. The isotropic modulation of the local power spectrum can also give an estimate of the large-scale density, by measuring the amplitude of the small-scale power spectrum in different subvolumes \citep[e.g.][]{2014JCAP...05..048C,2014PhRvD..90j3530L,2015JCAP...09..028C} or using a tidal field quadratic estimator as we presented here. However, compared with the local anisotropic distortions, the isotropic modulation is more likely to be impacted by observational systematics, e.g., variations in the foreground stars, seeing, and galactic dust extinction, since these lead to changes in the local galaxy power spectrum. A detailed exploration of observational systematics will be presented in a future paper. The reconstructed field is quadratic in the small-scale galaxy density.
The cross spectrum of the reconstructed field with the original galaxy field is a bispectrum of the galaxy density, while the power spectrum of the reconstructed field is a trispectrum. Therefore, we are using higher order statistics, the 3-point and 4-point functions, in redshift space to improve cosmological measurements. There are other similar methods using quadratic functions of the density field to exploit higher-order information, such as the skew power spectrum \citep[see e.g.][]{2015PhRvD..91d3530S,2020JCAP...04..011M,2020JCAP...08..007D,2021JCAP...03..020S}, but these require a perturbative description. However, perturbation theory has a limited range of validity and eventually breaks down in the nonlinear regime, $k\sim1\ h\mathrm{Mpc}^{-1}$. Future large-scale structure studies will be sensitive to this breakdown of perturbation theory; it is therefore of great importance to further develop methods that can efficiently exploit nonlinear information in redshift space beyond linear theory \citep{2021JCAP...10..044F}. \section*{Acknowledgement} \begin{acknowledgments} Ue-Li Pen receives support from the Ontario Research Fund-Research Excellence Program (ORF-RE), the Natural Sciences and Engineering Research Council of Canada (NSERC) [funding reference numbers RGPIN-2019-067, CRD 523638-18, 555585-20], the Canadian Institute for Advanced Research (CIFAR), the Canadian Foundation for Innovation (CFI), the National Science Foundation of China (Grant No. 11929301), Thoth Technology Inc, the Alexander von Humboldt Foundation, and the National Science and Technology Council (NSTC) of Taiwan (111-2123-M-001-008- and 111-2811-M-001-040-). Computations were performed on the Niagara supercomputer at the SciNet HPC Consortium and the SOSCIP Consortium's CPU computing platform. SciNet is funded by: the Canada Foundation for Innovation; the Government of Ontario; Ontario Research Fund - Research Excellence; and the University of Toronto \citep[][]{Loken_2010}.
SOSCIP is funded by the Federal Economic Development Agency of Southern Ontario, the Province of Ontario, IBM Canada Ltd., Ontario Centres of Excellence, Mitacs and 15 Ontario academic member institutions. \end{acknowledgments} \vspace{5mm}
\section{Introduction} Security is one of the most important considerations in the transmission of information from one user to another. It involves confidentiality, integrity, authentication and non-repudiation \cite{liang}. We will be concerned with confidentiality. This guarantees that the legitimate users successfully receive the information intended for them while any eavesdropper is not able to interpret this information. We will be concerned with eavesdroppers who are passive attackers, i.e., they attempt to interpret the transmitted information without injecting any new information or modifying the information transmitted. Traditional techniques to achieve confidentiality in this setup are based on cryptographic encryption (\cite{menezes}, \cite{stallings}). More recently, however, information theoretic security has also been actively studied (\cite{liang}, \cite{trappe}). It does not require the secret/public keys used in cryptographic techniques. Key management, especially for wireless channels, can be very challenging. Also, unlike cryptography based techniques, information theoretic security can provide provably secure communication. Information theoretic security can also be used in a system in addition to cryptographic techniques, to add additional layers of protection to the information transmission or to achieve key agreement and/or distribution. The information theoretic approach to secrecy systems was first investigated by Shannon \cite{shannon1949} in 1949. Wyner \cite{wyner1975} considers communicating a secret over a wiretap channel in the form of degraded broadcast channels, without using a key. Wyner's work was in turn extended by Leung and Hellman \cite{hellman1976} to the Gaussian channel.
Csisz\'{a}r and K\"{o}rner \cite{csizar1980} consider a general discrete memoryless broadcast channel, and show that the secrecy capacity is positive if the main channel to the intended user is more capable than the eavesdropper's channel, and zero if the wiretapper's channel is less noisy than the main channel. The secrecy capacity of MIMO (Multiple Input Multiple Output) channels was obtained in \cite{khisti}, \cite{babak2008} and \cite{shamai2009}. In \cite{negi2007}, an artificial noise concept is proposed for MIMO channels to enhance the secrecy rate even when the eavesdropper's channel is better than the main channel. In \cite{he2011}, this result has been extended to the case where no CSI of the eavesdropper is available. The fading channel is studied in \cite{gopala2008}, where power allocation schemes without CSI of the eavesdropper's channel at the transmitter are also obtained. In \cite{bloch2008}, a wire-tap channel with slow fading is studied, where an outage analysis with full CSI of the eavesdropper and with imperfect CSI of the eavesdropper is performed. Practical codes for the single user wire-tap channel were first reported in \cite{thangaraj}. Liang et al.~\cite{broadcast} have studied a broadcast channel with a wire-tapper. Interference channels with confidential messages are studied in \cite{interference}. The first results in information theoretic security for a Multiple Access Channel (MAC) were obtained in \cite{yingbin2008} and \cite{yener2006}. In \cite{yingbin2008}, each user treats the other as an eavesdropper, while in \cite{yener2006}, the eavesdropper is at the receiving end. In \cite{cooperative}, Yener and Tekin propose a technique called cooperative jamming, in which a user that is not transmitting can send a jamming signal so that the eavesdropper is more confused. This significantly improves the secrecy rate region.
A fading MAC was also studied by Yener and Tekin \cite{yener2007}, where they assume that the CSI of the eavesdropper's channel is perfectly known at the transmitting users. In \cite{ulukus2010}, Bassily and Ulukus have proposed Ergodic Secret Alignment to further improve the secrecy sum-rate region of the MAC with an eavesdropper. In this paper, we consider a fading MAC-WT assuming no CSI of the eavesdropper at the transmitting users. Since the eavesdropper may not transmit any signal (it is passive), the transmitters often will not know its channel. We obtain a power control scheme that maximizes the sum secrecy rate and then also employ cooperative jamming over this scheme. It will be shown that cooperative jamming can significantly increase the secrecy rate. But these policies are difficult to compute. Thus, next we consider a computationally simpler ON/OFF power control policy. We obtain its thresholds to maximize the secrecy sum-rate. Finally, we also incorporate cooperative jamming over this power control policy. With this, at high SNR, the secrecy sum-rate exceeds the sum-rate when the CSI of the eavesdropper is perfectly known at the transmitter but cooperative jamming is not used. The rest of the paper is organized as follows: In Section II, we define the channel model and state the problem. In Section III, we obtain the power control policy with and without cooperative jamming. Sections IV and V discuss the ON/OFF power control policy without and with cooperative jamming. In Section VI, we compare the different policies numerically. Finally, in Section VII, we conclude the paper and discuss future work. \section{Channel Model And Problem Statement} We consider a system with two users who want to communicate over a fading MAC to a legitimate receiver. There is also an eavesdropper who is trying to get access to the output received by the legitimate receiver.
Transmitter $k=1,2$ chooses message $W_{k}$ for transmission from a set $\mathcal{W}_{k} = \{1, 2,..., M_{k}\}$ with uniform distribution. These messages are encoded into $\{X_{k,1}, ..., X_{k,n}\}$ using $(2^{nR_{k}}, n)$ codes. The legitimate receiver gets $Y_{i}$ and the eavesdropper gets $Z_{i}$ at time $i$. The decoder at the legitimate receiver estimates the transmitted message as $\tilde{W} = (\tilde{W_{1}}, \tilde{W_{2}})$ from $\textbf{Y}^{n} \equiv \{Y_{1}, ..., Y_{n}\}$. The legitimate receiver should receive the message reliably while the eavesdropper should not be able to decode it. It is assumed that the legitimate receiver as well as the eavesdropper know the codebooks. The channel model can be mathematically represented as: \begin{equation} \label{model1} Y_{i}= \tilde{h}_{1,i}X_{1,i} + \tilde{h}_{2,i}X_{2,i} + N_{R,i} \end{equation} \begin{equation} \label{model2} Z_{i}=\tilde{g}_{1,i}X_{1,i} + \tilde{g}_{2,i}X_{2,i} + N_{E,i} \\ \end{equation} where $\tilde{h}_{k,i}, ~\tilde{g}_{k,i}$ are the complex channel gains from transmitter $k$ to the legitimate receiver and the eavesdropper respectively. Also, $\{N_{R,i}\}$ and $\{N_{E,i}\}$ are complex additive white Gaussian noise (AWGN) processes, each with circularly symmetric independent components distributed as $\mathcal{N}(0,1)$, where $\mathcal{N}(a,b)$ is the Gaussian distribution with mean $a$ and variance $b$. Also we define $\vert\tilde{h}_{k,i}\vert^{2} = h_{k,i}$ and $\vert\tilde{g}_{k,i}\vert^{2} = g_{k,i}$, for $k=1, 2$. We assume that $\{h_{k,i}, i\geq 1\}$ and $\{g_{k,i}, i\geq 1\}$ are independent, identically distributed (iid), and that each sequence is independent of the other (thus the channels experience flat, fast fading). We also assume the power constraints: \begin{equation} \label{p_cons} \frac{1}{n} \sum\limits_{i=1}^{n} X^{2}_{ki} \leq \bar{P}_{k}, ~k=1,2. \end{equation} The equivocation rate used in this paper is as defined in \cite{yener2006}.
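A minimal simulation sketch of one use of the channel (\ref{model1})--(\ref{model2}) may help fix conventions; it assumes unit-mean Rayleigh fading (so the squared gains $h_k$, $g_k$ are Exp(1)-distributed) and unit-variance circularly symmetric noise, consistent with the rate expressions used below:

```python
import numpy as np

rng = np.random.default_rng(0)

def complex_gauss(size=None):
    # Circularly symmetric complex Gaussian with unit total variance
    return (rng.normal(size=size) + 1j * rng.normal(size=size)) / np.sqrt(2)

def channel_use(x1, x2):
    """One use of the fading MAC-WT: returns (Y, Z, h, g).

    h and g hold the squared gain magnitudes |h_k|^2 and |g_k|^2, which
    are Exp(1)-distributed under unit-mean Rayleigh fading.
    """
    h_amp, g_amp = complex_gauss(2), complex_gauss(2)
    y = h_amp[0] * x1 + h_amp[1] * x2 + complex_gauss()  # legitimate receiver
    z = g_amp[0] * x1 + g_amp[1] * x2 + complex_gauss()  # eavesdropper
    return y, z, np.abs(h_amp) ** 2, np.abs(g_amp) ** 2
```

Here the normalizations (unit-mean gains, unit noise variance) are illustrative choices matching the numerical setup of Section VI, not a requirement of the model.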
We use the collective secrecy constraint to take the multi-access nature of the channel into account. Define\\ \begin{equation} \Delta _{L}^{n} = \frac{H(W_{L}\vert Z^{n})}{H(W_{L})} \end{equation} where $L \subseteq \{1, 2\}$, $Z^{n} = (Z_{1}, ... ,Z_{n})$. For each $n$ we need codebooks such that the average probability of error at the legitimate receiver goes to zero and $\Delta _{L}^{n} \rightarrow 1$ as $n \rightarrow \infty$ for each $L ~\subseteq \{1,2\}$. Also let $h=(h_{1}, h_{2}), g=(g_{1}, g_{2})$, where $h_i = \vert\tilde{h}_{i}\vert^{2}$, $g_i = \vert\tilde{g}_{i}\vert^{2}, i=1,2$ are exponentially distributed. Then from \cite{yener2006}, if the CSI $h_{k}, g_{k}$ is known at both the transmitters at time $i$, the secrecy rate region for this case is: \begin{equation} \label{r1_fullcsi} R_{1} \leq \mathsf{E}_{h,g}\left\lbrace\left[log\frac{(1+h_{1}P_{1}(h,g))(1+g_{2}P_{2}(h,g))}{1+g_{1}P_{1}(h,g)+g_{2}P_{2}(h,g)}\right]^{+}\right\rbrace, \end{equation} \begin{equation} \label{r2_fullcsi} R_{2}\leq \mathsf{E}_{h,g}\left\lbrace\left[log\frac{(1+g_{1}P_{1}(h,g))(1+h_{2}P_{2}(h,g))}{1+g_{1}P_{1}(h,g)+g_{2}P_{2}(h,g)}\right]^{+}\right\rbrace, \end{equation} \begin{equation} \label{rs_fullcsi} R_{1}+R_{2} \leq \mathsf{E}_{h,g}\left\lbrace\left[log\frac{1+h_{1}P_{1}(h,g)+h_{2}P_{2}(h,g)}{1+g_{1}P_{1}(h,g)+g_{2}P_{2}(h,g)}\right]^{+}\right\rbrace, \end{equation} where $P_{1}(h,g)$ and $P_{2}(h,g)$ are the transmit powers satisfying the constraint (\ref{p_cons}), and Gaussian signalling is used. In \cite{yener2006}, the optimal power allocation policy which maximizes the sum secrecy rate (\ref{rs_fullcsi}) has been found. In this paper, we extend this result to the case when the CSI of the legitimate receiver is known but the CSI of the eavesdropper may not be known at the transmitter; only its distribution is known.
Since we are assuming a passive eavesdropper, this will often be a more reasonable assumption, i.e., there is no transmission from the eavesdropper to the transmitters for them to estimate its channel. \section{Power Control with main CSI only} \subsection{Power control without Cooperative Jamming} In this section we consider power control which maximizes the sum secrecy rate when only the main channel (to the legitimate user) CSI is known at the transmitters. Let $P_{k}(h)$ be the power used by a policy when the main channel gain is $h=(h_1, h_2)$. Of course, the policy should satisfy the average power constraint (\ref{p_cons}). We need the following notation \begin{equation} \label{notation} \phi_{p_{1},p_{2}}^{s} = 1 + s_{1}p_{1} + s_{2}p_{2}, \end{equation} where $s$ is the channel state ($h$ or $g$) and $p_{k}$ is the power used. The following theorem can be proved as in \cite{gopala2008}. \\ \begin{thm} For a given power control policy $\{P_{k}(h)\}, ~k=1, 2$, the following secrecy sum-rate \begin{equation} \label{rs_nocsi} \mathsf{E}_{h,g}\left\lbrace\left[log\left(\frac{\phi_{P_{1},P_{2}}^{h}}{\phi_{P_{1},P_{2}}^{g}}\right)\right]^{+}\right\rbrace \end{equation} is achievable. \end{thm} The policy that maximizes (\ref{rs_nocsi}) is not available in closed form, but can be numerically computed (see Appendix). An example will be provided in Section VI. We will also consider a simpler ON/OFF power control policy which was employed in \cite{gopala2008} for a single user case. Next we consider power control with cooperative jamming. \subsection{Power Control with Cooperative Jamming} The power policy obtained in the last section depends on $h=(h_1,h_2)$. If both the main channels $h_{1}$ and $h_{2}$ are good, both the transmitters send their coded symbols. If a transmitter's channel is bad, it may not. 
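For a fixed (state-independent) power pair, itself a valid special case of the average power constraint, the sum-rate (\ref{rs_nocsi}) is an expectation over the four gains and is easily estimated by Monte Carlo. A sketch assuming unit-mean exponential gains:

```python
import numpy as np

def secrecy_sum_rate(P1, P2, n=200_000, seed=1):
    """Monte Carlo estimate of the Theorem 1 sum-rate
    E[ log((1 + h1*P1 + h2*P2) / (1 + g1*P1 + g2*P2)) ]^+
    for constant transmit powers and unit-mean exponential gains
    (Rayleigh fading amplitudes). The [.]^+ clipping is applied
    inside the expectation, per sample, as in (9).
    """
    rng = np.random.default_rng(seed)
    h1, h2, g1, g2 = rng.exponential(size=(4, n))
    r = np.log((1 + h1 * P1 + h2 * P2) / (1 + g1 * P1 + g2 * P2))
    return np.mean(np.maximum(r, 0.0))
```

Pathwise, the clipped log-ratio is nondecreasing in the powers, so the estimate grows with $P_1, P_2$ and saturates at high SNR; the optimal state-dependent policy would replace the constants by $P_k(h)$.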
In \cite{cooperative} and \cite{yener2007}, it is suggested that when a transmitter is not sending its data, it can help the other user by jamming the channel to the eavesdropper. We extend their result to our set-up. Let $\{P_{k}(h)\},~ k=1, 2$, be the power control policy when the users are transmitting and $\{Q_{k}(h)\},~ k=1, 2$, be the power control policy when the users are jamming. To satisfy (\ref{p_cons}), we need \begin{equation} \label{power_cons} \mathsf{E}_{h}[P_{k}(h) + Q_{k}(h)] \leq \bar{P_{k}},~k=1,2. \end{equation} Then we have the following theorem. \\ \begin{thm} With the above power control policies the secrecy sum-rate \setlength{\arraycolsep}{0.0em} \begin{equation} \label{rs_coop} \mathsf{E}_{h,g}\left\lbrace\left[log\left(\frac{\phi_{P_{1},P_{2}}^{h} +\phi_{Q_{1},Q_{2}}^{h} -1}{\phi_{P_{1},P_{2}}^{g} + \phi_{Q_{1},Q_{2}}^{g} -1}\right)\left(\frac{\phi_{Q_{1},Q_{2}}^{g}}{\phi_{Q_{1},Q_{2}}^{h}}\right)\right]^{+}\right\rbrace \end{equation} is achievable. \end{thm} The proof of this theorem is given in Appendix A. We obtain the power control policy that maximizes the sum-rate in Appendix B. We will see in Section VI that cooperative jamming can significantly improve the sum-rate. We also propose a simple ON/OFF power control policy with cooperative jamming. \section{Fading MAC with ON/OFF Power Control} The policy obtained in Section III can be computed only numerically and its structure is not known. The following ON/OFF policy is easier to compute and is intuitive: User $k$ transmits with a constant power $P_{k}$ if $h_{k} \geq \tau_{k}$, where $\tau_{k}$ is an appropriate threshold. Hence the following cases arise: \begin{enumerate} \item h$_{1}\geq\tau_{1}$, h$_{2}\geq\tau_{2}$ : Both transmit; \item h$_{1}\geq\tau_{1}$, h$_{2}<\tau_{2}$ : User-1 transmits; \item h$_{1}<\tau_{1}$, h$_{2}\geq\tau_{2}$ : User-2 transmits; \item h$_{1}<\tau_{1}$, h$_{2}<\tau_{2}$ : No user transmits.
\end{enumerate} From the average power constraint we get: \begin{equation} \label{p_1_on_off} \bar{P_{1}}=P_{1}Pr(h_{1}\geq\tau_{1}) \end{equation} and \begin{equation} \label{p_2_on_off} \bar{P_{2}}=P_{2}Pr(h_{2}\geq\tau_{2}). \end{equation} Let \begin{equation*} A_{1} \triangleq \{h_{1}\geq\tau_{1},~h_{2}<\tau_{2}\},~ A_{2} \triangleq \{h_{1}<\tau_{1},~h_{2}\geq\tau_{2}\} \end{equation*} and \begin{equation} \label{indicator} A_{12} \triangleq \{h_{1}\geq\tau_{1},~h_{2}\geq\tau_{2}\} \end{equation} The secrecy sum-rate of this policy is given by \begin{equation*} R_{B}=\mathsf{E}_{h,g}\left\lbrace\left[log\left(\frac{\phi_{P_{1},P_{2}}^{h}}{\phi_{P_{1},P_{2}}^{g}}\right)1_{A_{12}}\right]^{+}\right\rbrace \end{equation*} \begin{equation*} +~\mathsf{E}_{h,g}\left\lbrace\left[log\left(\frac{\phi_{P_{1},0}^{h}}{\phi_{P_{1},0}^{g}}\right)1_{A_{1}}\right]^{+}\right\rbrace \end{equation*} \begin{equation} \label{r1_onoff} +~\mathsf{E}_{h,g}\left\lbrace\left[log\left(\frac{\phi_{0,P_{2}}^{h}}{\phi_{0,P_{2}}^{g}}\right)1_{A_{2}}\right]^{+}\right\rbrace \end{equation} where $1_{A}$ is the indicator function of the set $A$. When $h_{k}$ and $g_{k}$, $k=1,2$, have exponential distributions with independent components, with the joint densities of $h$ and $g$ given respectively as \begin{equation} f_{1}(h) =\frac{1}{\gamma_{1}^{h}\gamma_{2}^{h}}e^{-\frac{h_{1}}{\gamma_{1}^{h}}}e^{-\frac{h_{2}}{\gamma_{2}^{h}}}~, ~f_{2}(g) = \frac{1}{\gamma_{1}^{g}\gamma_{2}^{g}}e^{-\frac{g_{1}}{\gamma_{1}^{g}}}e^{-\frac{g_{2}}{\gamma_{2}^{g}}} \end{equation} then \begin{equation} P_{1} = \bar{P_{1}}e^{\frac{\tau _{1}}{\bar{\gamma}_{1}^{h}}}~,~ P_{2} = \bar{P_{2}}e^{\frac{\tau _{2}}{\bar{\gamma}_{2}^{h}}} \end{equation} We numerically obtain the thresholds $\tau_{1}$ and $\tau_{2}$ which maximize the sum-rate (\ref{r1_onoff}).
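The threshold search described above can be sketched as a Monte Carlo estimate of the ON/OFF sum-rate combined with a coarse grid over $(\tau_1, \tau_2)$; unit-mean exponential gains ($\gamma_k^h = \gamma_k^g = 1$) and the ON powers $P_k = \bar{P}_k e^{\tau_k}$ implied by (\ref{p_1_on_off})--(\ref{p_2_on_off}) are assumed, and the grid resolution is an arbitrary illustrative choice:

```python
import numpy as np

def onoff_sum_rate(tau1, tau2, Pbar1=1.0, Pbar2=1.0, n=100_000, seed=2):
    """Monte Carlo estimate of the ON/OFF secrecy sum-rate for unit-mean
    exponential gains. The constant ON powers P_k = Pbar_k * exp(tau_k)
    follow from the average power constraint with gamma_k^h = 1."""
    rng = np.random.default_rng(seed)
    h1, h2, g1, g2 = rng.exponential(size=(4, n))
    # Each user transmits only when its own main-channel gain clears the threshold
    p1 = np.where(h1 >= tau1, Pbar1 * np.exp(tau1), 0.0)
    p2 = np.where(h2 >= tau2, Pbar2 * np.exp(tau2), 0.0)
    r = np.log((1 + h1 * p1 + h2 * p2) / (1 + g1 * p1 + g2 * p2))
    return np.mean(np.maximum(r, 0.0))

# Coarse grid search over the thresholds (hypothetical resolution)
taus = np.linspace(0.0, 2.0, 9)
best_rate, best_tau = max(
    ((onoff_sum_rate(t1, t2), (t1, t2)) for t1 in taus for t2 in taus))
```

Because the region where neither user transmits contributes zero to the clipped log-ratio, the single vectorized expression above reproduces the four-case sum without enumerating the cases explicitly.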
\section{Fading MAC with ON/OFF Power Control and Cooperative Jamming} Cooperative jamming in Section III-B has been found to increase the sum-rate substantially (see Fig.~\ref{fig:on_off_opt}). Therefore, we now use it with the ON/OFF policy studied in Section IV. A user, when not transmitting its data, jams the channel to the eavesdropper. Also, it transmits with different powers, taking into account the channel gain of the other user. The following cases arise:\\ \begin{enumerate} \item h$_{1}\geq\tau_{1}$, h$_{2}\geq\tau_{2}$ : Both transmit with powers $P_{1a}$, $P_{2a}$; \item h$_{1}\geq\tau_{1}$, h$_{2}<\tau_{2}$ : User-1 transmits with power $P_{1b}$, user-2 jams with power $Q_{2}$; \item h$_{1}<\tau_{1}$, h$_{2}\geq\tau_{2}$ : User-2 transmits with power $P_{2b}$, user-1 jams with power $Q_{1}$; \item h$_{1}<\tau_{1}$, h$_{2}<\tau_{2}$ : Neither transmits nor jams. \end{enumerate} The powers and the thresholds in the above scheme are chosen to satisfy the average power constraints. With this power control scheme, the secrecy sum-rate is given by: \begin{equation*} R_{B}^{CJ}= \mathsf{E}_{h,g}\left\lbrace\left[log\left(\frac{\phi_{P_{1a},P_{2a}}^{h}}{\phi_{P_{1a},P_{2a}}^{g}}\right)1_{A_{12}}\right]^{+}\right\rbrace \end{equation*} \begin{equation*} ~~~~~+~\mathsf{E}_{h,g}\left\lbrace\left[log\left(\frac{\phi_{P_{1b},Q_{2}}^{h}}{\phi_{P_{1b},Q_{2}}^{g}}\right)1_{A_{1}}\right]^{+}\right\rbrace \end{equation*} \begin{equation} \label{rs_coop_on_off} ~~~~~+~\mathsf{E}_{h,g}\left\lbrace\left[log\left(\frac{\phi_{Q_{1},P_{2b}}^{h}}{\phi_{Q_{1},P_{2b}}^{g}}\right)1_{A_{2}}\right]^{+}\right\rbrace \end{equation} When $\tilde{h}_{k}$ and $\tilde{g}_{k}$ are Rayleigh distributed,\\ \begin{eqnarray} \bar{P_{1}}&{} ~& = P_{1a}e^{-\frac{\tau _{1}}{\bar{\gamma}_{1}^{h}}}e^{-\frac{\tau _{2}}{\bar{\gamma}_{2}^{h}}}+~P_{1b}e^{-\frac{\tau _{1}}{\bar{\gamma}_{1}^{h}}}(1-e^{-\frac{\tau _{2}}{\bar{\gamma}_{2}^{h}}})\nonumber\\ &&{+}\: Q_{1}e^{-\frac{\tau
_{2}}{\bar{\gamma}_{2}^{h}}}(1-e^{-\frac{\tau _{1}}{\bar{\gamma}_{1}^{h}}}), \end{eqnarray} \begin{eqnarray} \bar{P_{2}}&{}~ & = P_{2a}e^{-\frac{\tau _{1}}{\bar{\gamma}_{1}^{h}}}e^{-\frac{\tau _{2}}{\bar{\gamma}_{2}^{h}}} + P_{2b}e^{-\frac{\tau _{2}}{\bar{\gamma}_{2}^{h}}}(1-e^{-\frac{\tau _{1}}{\bar{\gamma}_{1}^{h}}})\nonumber\\ &&{+}\:Q_{2}e^{-\frac{\tau _{1}}{\bar{\gamma}_{1}^{h}}}(1-e^{-\frac{\tau _{2}}{\bar{\gamma}_{2}^{h}}}). \end{eqnarray} \section{Numerical Results} In this section, we compare the sum rates obtained via the different power control schemes proposed in this paper. The receiver's AWGN noise has variance 1. The fading for each channel is Rayleigh distributed with parameters $\gamma_{1}^{h}=\gamma_{2}^{h}=\gamma_{1}^{g}=\gamma_{2}^{g}=1$. The sum rates are plotted in Fig.~\ref{fig:on_off_opt} for different powers $P_{1}=P_{2}$. We observe that cooperative jamming substantially improves the sum-rate (up to 75\%). Of course, for each case knowledge of the eavesdropper's CSI at the transmitter improves the sum-rate. At high SNR, cooperative jamming without eavesdropper CSI can provide a sum-rate higher than in the full CSI case without cooperative jamming. Also, ON/OFF power control with optimized thresholds is sufficient to recover most of the sum-rate achievable by the optimal policy (with no eavesdropper CSI and no jamming, ON/OFF provides a sum-rate very close to that of the optimal algorithm in that scenario). \begin{figure} \epsfig{figure=all6_final.eps,height=6cm,width=16cm} \caption{Comparison of Full CSI, Only Receiver's CSI and ON/OFF Power Control policies: With and without Cooperative Jamming} \label{fig:on_off_opt} \end{figure} \section{Conclusions and Future Work} In this paper, we provide an achievable secrecy sum-rate in a fading MAC with an eavesdropper when the eavesdropper's channel is not known to the transmitter. We obtain the power controls that optimize the secrecy sum-rate. We also obtain the power control when cooperative jamming is employed.
It is shown that cooperative jamming can substantially improve the secrecy sum-rates. We then obtain more easily computable ON/OFF power control schemes which provide secrecy sum-rates close to the optimal. It is shown that via these techniques, one can recover most of the secrecy sum-rate achievable with perfect knowledge of the CSI of the eavesdropper. For future work, one can consider these schemes when only partial CSI of the legitimate receiver's channel is available at the transmitter. \appendices \section{Proof of Theorems} \subsection{Proof of Theorem 1} We sketch the outline of the proof for the sum-rate only; the proof for the single-user rates follows along similar lines. We quantize $h_{1}, h_{2}, g_{1}$ and $g_{2}$ into uniform bins as defined in \cite{gopala2008}, i.e., $\left\lbrace h_{1}\right\rbrace _{i_{1}=1}^{q_{1}^{h}}, \left\lbrace h_{2}\right\rbrace _{i_{2}=1}^{q_{2}^{h}}, \left\lbrace g_{1}\right\rbrace _{j_{1}=1}^{q_{1}^{g}} $ and $\left\lbrace g_{2}\right\rbrace _{j_{2}=1}^{q_{2}^{g}} $ where $h_{1}\in [0,M_{1}^{h}], h_{2}\in [0,M_{2}^{h}], g_{1}\in [0,M_{1}^{g}] $ and $ g_{2}\in [0,M_{2}^{g}]$. \\ Also let \\ $\mathcal{H}(i_{1},i_{2})$=\{\textbf{h}:$h_{1,i_{1}}\leq~h_{1}\leq~h_{1,i_{1}+1},h_{2,i_{2}}\leq~h_{2}\leq h_{2,i_{2}+1}$\} \\ $\mathcal{G}(j_{1},j_{2})$=\{\textbf{g}:$g_{1,j_{1}}\leq g_{1}\leq g_{1,j_{1}+1},g_{2,j_{2}}\leq g_{2}\leq g_{2,j_{2}+1}$\} Also we define $\mathcal{S}(i_{1},i_{2},j_{1},j_{2})$ = $\mathcal{H}(i_{1},i_{2}) \times \mathcal{G}(j_{1},j_{2})$. \\ We say that the channel is in state $s_{i_{1},i_{2},j_{1},j_{2}}$ if $(\textbf{h},\textbf{g}) \in \mathcal{S}(i_{1},i_{2},j_{1},j_{2})$. We define the quantized power control policy as \\ \begin{equation} \label{new_power} P_{k}(h) = \displaystyle\inf_{\textbf{h}\in\mathcal{H}}P_{k}(\textbf{h}) \end{equation} It can be easily shown that (\ref{p_cons}) implies that (\ref{new_power}) also satisfies the power constraint.
Now for a particular quantized state, we have a Gaussian MAC-WT whose rate region is characterized in \cite{yener2006}. In particular, the sum-rate is bounded as:\\ \begin{equation} R_{1}+R_{2}\leq r_{sum} = \frac{1}{2}\left[log\left(\frac{1+h_{1,i_{1}}P_{1}(h)+h_{2,i_{2}}P_{2}(h)}{1+g_{1,j_{1}}P_{1}(h)+g_{2,j_{2}}P_{2}(h)}\right)\right]^{+} \end{equation} Now the number of times the channel is in state $s_{i_{1},i_{2},j_{1},j_{2}}$ is \\ \begin{equation} N_s = nPr\{(\textbf{h},\textbf{g}) \in \mathcal{S}(i_{1},i_{2},j_{1},j_{2})\}\equiv nPr\{s_{i_{1},i_{2},j_{1},j_{2}}\} \end{equation} Now let R$_{sum}$ = R$_{1}$ + R$_{2}$; the following rate is achievable\\ \begin{equation} \displaystyle\lim_{n\rightarrow\infty} R_{sum} = \displaystyle\lim_{n\rightarrow\infty}\sum\limits_{i_{1}=0}^{M_{1}^{h}} \sum\limits_{i_{2}=0}^{M_{2}^{h}} \sum\limits_{j_{1}=0}^{M_{1}^{g}} \sum\limits_{j_{2}=0}^{M_{2}^{g}} r_{sum}\frac{N_{s}}{n} \end{equation} \begin{equation} =\sum\limits_{i_{1}=0}^{M_{1}^{h}} \sum\limits_{i_{2}=0}^{M_{2}^{h}} \sum\limits_{j_{1}=0}^{M_{1}^{g}} \sum\limits_{j_{2}=0}^{M_{2}^{g}} r_{sum}Pr\{s_{i_{1},i_{2},j_{1},j_{2}}\} \end{equation} Due to lack of space we omit some steps; one can also easily prove that the average probability of error vanishes, i.e.,\\ \begin{equation} \displaystyle\lim_{n\rightarrow\infty}\bar{P_{e}}\leq\sum\limits_{i_{1}=0}^{M_{1}^{h}} \sum\limits_{i_{2}=0}^{M_{2}^{h}} \sum\limits_{j_{1}=0}^{M_{1}^{g}} \sum\limits_{j_{2}=0}^{M_{2}^{g}} \bar{P_{e}}(i_{1},i_{2},j_{1},j_{2})Pr\{s_{i_{1},i_{2},j_{1},j_{2}}\} = 0 \end{equation} Now one can show that the optimal power control policy given in Theorem 1 achieves the sum-rate R$_{sum}^{opt}$ given by\\ R$_{sum}^{opt}$=\\ \begin{equation} \int_{\tau_{1}}^{\infty}\int_{\tau_{2}}^{\infty}\int_{0}^{\infty}\int_{0}^{\infty} \left[log\left(\frac{1+h_{1}P_{1} + h_{2}P_{2}}{1+g_{1}P_{1} + g_{2}P_{2}}\right)\right]^{+}\phi(\textbf{H})d\textbf{H} \end{equation} For that we need to show that for a given
~$\epsilon>0$~$\exists M_{1}^{h},M_{2}^{h},M_{1}^{g},M_{2}^{g}$ such that\\ \begin{equation} \sum\limits_{i_{1}=0}^{M_{1}^{h}} \sum\limits_{i_{2}=0}^{M_{2}^{h}} \sum\limits_{j_{1}=0}^{M_{1}^{g}} \sum\limits_{j_{2}=0}^{M_{2}^{g}} r_{sum}Pr\{s_{i_{1},i_{2},j_{1},j_{2}}\} \geq R_{sum}^{opt} - \epsilon \end{equation} One can easily show that R$_{sum}^{opt}$ is finite. Then, by invoking the dominated convergence theorem along the same lines as in \cite{gopala2008}, the achievability of the sum-rate with the optimal power control policy is proved. \subsection{Proof of Theorem 2} The proof follows along the same lines as that of Theorem 1, since from \cite{yener2007} we know that when full CSI of the eavesdropper's channel is known, under the optimal power control policy a user either transmits or jams, but not both at a time. Hence we can use the same arguments as in the proof of Theorem 1 to prove the achievability of the secrecy sum-rate; we omit the details due to lack of space. \section{Optimal Power Control} \subsection{Without Cooperative Jamming} We provide the optimal power control policy without cooperative jamming for Rayleigh fading. Similarly, one can obtain the powers for other distributions. Let $f_{1}$ and $f_{2}$ denote the densities of $h$ and $g$ respectively.
For the Rayleigh fading case, averaging over all fading realizations of the eavesdropper's channel, i.e., $g_{1}$ and $g_{2}$, which give a positive secrecy sum-rate, we get from Theorem 1\\ \\ $R$ = \begin{equation} \label{optimal_ncj} \int\limits_{h_{1}}\int\limits_{h_{2}}\left[log\left(\phi_{P_{1},P_{2}}^{h}\right)-\frac{1}{\xi_{P_{1},P_{2}}}\left\lbrace P_{1}\gamma_{1}^{g}\theta_{P_{1}} - P_{2}\gamma_{2}^{g}\theta_{P_{2}}\right\rbrace \right]f_{1}(h)dh \end{equation} where\\ $\phi_{P_{1},P_{2}}^{h}$ is as defined in (\ref{notation}) and\\ \begin{equation} \xi_{a,b} = a\gamma_{1}^{g}-b\gamma_{2}^{g}, \end{equation} \begin{equation} \theta_{P_{1}}=e^{\frac{1}{P_{1}\gamma_{1}^{g}}}\left[Ei\left(\frac{1}{P_{1}\gamma_{1}^{g}}\right) - Ei\left(\frac{1}{P_{1}\gamma_{1}^{g}} + \frac{h_{1}P_{1}+h_{2}P_{2}}{P_{1}\gamma_{1}^{g}}\right)\right], \end{equation} \\ \begin{equation} \theta_{P_{2}}=e^{\frac{1}{P_{2}\gamma_{2}^{g}}}\left[Ei\left(\frac{1}{P_{2}\gamma_{2}^{g}}\right) - Ei\left(\frac{1}{P_{2}\gamma_{2}^{g}} + \frac{h_{1}P_{1}+h_{2}P_{2}}{P_{2}\gamma_{2}^{g}}\right)\right], \end{equation} and\\ \begin{equation} Ei(x)=\int_{x}^{\infty} \frac{e^{-t}}{t}dt.
\end{equation} After writing the Lagrangian and invoking the KKT (Karush--Kuhn--Tucker) conditions (which are only necessary here, as the objective function need not be concave \cite{boyd}), we get \setlength{\arraycolsep}{0.0em} \begin{eqnarray} \label{KKT_1} \frac{h_{1}}{\phi_{P_{1},P_{2}}^{h}} &&+ \frac{1}{\xi_{P_{1},P_{2}}}\left\lbrace\frac{\theta_{P_{1}}}{P_{1}}-\gamma_{1}^{g}+\frac{\alpha_{1}}{h_{2}\phi_{P_{1},P_{2}}^{h}}+\frac{P_{2}\left(\alpha_{1}+\alpha_{2}\right)}{\phi_{P_{1},P_{2}}^{h}}\right\rbrace \nonumber\\ &&+ \frac{P_{2}\gamma_{1}^{g}\gamma_{2}^{g}}{\xi_{P_{1},P_{2}}^{2}}\left(\theta_{P_{1}}-\theta_{P_{2}}\right) - \lambda_{1} = 0 \end{eqnarray} \setlength{\arraycolsep}{0pt} \setlength{\arraycolsep}{0.0em} \begin{eqnarray} \label{KKT_2} \frac{h_{2}}{\phi_{P_{1},P_{2}}^{h}} &&-\frac{1}{\xi_{P_{1},P_{2}}}\left\lbrace\frac{\theta_{P_{2}}}{P_{2}}-\gamma_{2}^{g}+\frac{\alpha_{2}}{h_{1}\phi_{P_{1},P_{2}}^{h}}+\frac{P_{1}\left(\alpha_{1}+\alpha_{2}\right)}{\phi_{P_{1},P_{2}}^{h}}\right\rbrace \nonumber\\ &&- \frac{P_{1}\gamma_{1}^{g}\gamma_{2}^{g}}{\xi_{P_{1},P_{2}}^{2}}\left(\theta_{P_{1}}-\theta_{P_{2}}\right) - \lambda_{2}=0 \end{eqnarray} \setlength{\arraycolsep}{0pt} where $\lambda_{1}$ and $\lambda_{2}$ are the Lagrange multipliers and\\ \begin{equation} \alpha_{1}=h_{2}\gamma_{1}^{g} e^{-\left(\frac{h_{1}P_{1}+h_{2}P_{2}}{P_{1}\gamma_{1}^{g}}\right)}, \end{equation} \begin{equation} \alpha_{2}=h_{1}\gamma_{2}^{g} e^{-\left(\frac{h_{1}P_{1}+h_{2}P_{2}}{P_{2}\gamma_{2}^{g}}\right)}. \end{equation} \\ We solve this set of equations numerically for the optimal power policy:\\ \begin{enumerate} \item If we find positive solutions for $P_{1}$ and $P_{2}$ from (\ref{KKT_1}) and (\ref{KKT_2}), both users transmit with their respective powers. \item If we do not find positive solutions for both and $h_1 > h_2$, we solve (\ref{KKT_1}) for $P_{1}$ with $P_{2} = 0$.
\item If we do not find positive solutions for both and $h_1 < h_2$, we solve (\ref{KKT_2}) for $P_{2}$ with $P_{1} = 0$. \end{enumerate} \subsection{With Cooperative Jamming} A user can either transmit or jam, so the expression for the secrecy sum-rate depends on which users transmit and which jam. Averaging (\ref{rs_coop}) over all fading realizations $(g_{1}, g_{2})$: if there are positive solutions $P_{1}$ and $P_{2}$ of (\ref{KKT_1}) and (\ref{KKT_2}), both users transmit, and the secrecy sum-rate is given by (\ref{optimal_ncj}). When there is no solution of (\ref{KKT_1}) and (\ref{KKT_2}) such that $P_{1}>0$ and $P_{2}>0$, and the channel of user 1 is better than that of user 2, the secrecy sum-rate is\\ \begin{equation} \label{optimal_cj_1} \int\limits_{h_{1}}\int\limits_{h_{2}}\left[\log\frac{\phi_{P_{1},Q_{2}}^{h}}{\phi_{0,Q_{2}}^{h}}-\frac{1}{\xi_{P_{1},Q_{2}}}\left\lbrace P_{1}\gamma_{1}^{g}\left(\beta_{P_{1}} - \beta_{Q_{2}}\right)\right\rbrace \right]f_{1}(h)dh. \end{equation} Similarly, when the channel of user 2 is better than that of user 1, the secrecy sum-rate is\\ \begin{equation} \label{optimal_cj_2} \int\limits_{h_{1}}\int\limits_{h_{2}}\left[\log\frac{\phi_{Q_{1},P_{2}}^{h}}{\phi_{Q_{1},0}^{h}}-\frac{1}{\xi_{Q_{1},P_{2}}}\left\lbrace P_{2}\gamma_{2}^{g}\left(\beta_{Q_{1}} - \beta_{P_{2}}\right)\right\rbrace \right]f_{1}(h)dh.
\end{equation} where \\ \begin{math} \beta_{P_{1}}=e^{\frac{1}{P_{1}\gamma_{1}^{g}}}\left[Ei\left(\frac{1}{P_{1}\gamma_{1}^{g}}\right) - Ei\left(\frac{1}{P_{1}\gamma_{1}^{g}} + \frac{h_{1}}{\gamma_{1}^{g}(1+h_{2}Q_{2})}\right)\right], \end{math} \begin{math} \beta_{P_{2}}=e^{\frac{1}{P_{2}\gamma_{2}^{g}}}\left[Ei\left(\frac{1}{P_{2}\gamma_{2}^{g}}\right) - Ei\left(\frac{1}{P_{2}\gamma_{2}^{g}} + \frac{h_{2}}{\gamma_{2}^{g}(1+h_{1}Q_{1})}\right)\right], \end{math} \begin{math} \beta_{Q_{1}}=e^{\frac{1}{Q_{1}\gamma_{1}^{g}}}\left[Ei\left(\frac{1}{Q_{1}\gamma_{1}^{g}}\right) - Ei\left(\frac{1}{Q_{1}\gamma_{1}^{g}} + \frac{h_{2}}{\gamma_{2}^{g}(1+h_{1}Q_{1})}\right)\right], \end{math} \begin{math} \beta_{Q_{2}}=e^{\frac{1}{Q_{2}\gamma_{2}^{g}}}\left[Ei\left(\frac{1}{Q_{2}\gamma_{2}^{g}}\right) - Ei\left(\frac{1}{Q_{2}\gamma_{2}^{g}} + \frac{h_{1}}{\gamma_{2}^{g}(1+h_{2}Q_{2})}\right)\right]. \end{math} \\ The problem now is to maximize the appropriate objective function in each case. These functions need not be concave, so the KKT conditions are necessary but not sufficient. We solve the equations obtained via the KKT conditions numerically to obtain the power policy.
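As a concrete illustration of this numerical step, the sketch below evaluates $\theta_{P_1},\theta_{P_2}$ through the exponential integral (`scipy.special.exp1` computes $\int_x^\infty e^{-t}/t\,dt$, i.e.\ exactly $Ei$ as defined above) and applies a root finder to (\ref{KKT_1})--(\ref{KKT_2}) at one fading realization. The form $\phi_{P_1,P_2}^{h}=1+h_1P_1+h_2P_2$, the channel constants, and the fixed multipliers are assumptions for the sketch, and `fsolve` is only a local solver, so this is not a definitive implementation of the policy.

```python
# Hedged numerical sketch of the KKT step at one fading realization.
# Assumptions (not from the paper): phi = 1 + h1*P1 + h2*P2, the channel
# constants below, and fixed Lagrange multipliers.
import numpy as np
from scipy.special import exp1     # exp1(x) = int_x^inf e^{-t}/t dt = Ei(x) above
from scipy.optimize import fsolve

g1, g2 = 0.5, 0.3                  # assumed gamma_1^g, gamma_2^g
h1, h2 = 2.0, 1.5                  # assumed main-channel gains
lam1 = lam2 = 0.1                  # assumed Lagrange multipliers

def theta(P, g, P1, P2):
    """theta_P = e^{1/(Pg)} [Ei(1/(Pg)) - Ei(1/(Pg) + (h1 P1 + h2 P2)/(Pg))]."""
    a = 1.0 / (P * g)
    return np.exp(a) * (exp1(a) - exp1(a + (h1 * P1 + h2 * P2) / (P * g)))

def kkt(p):
    """Left-hand sides of (KKT_1) and (KKT_2); a root is a candidate policy."""
    P1, P2 = p
    phi = 1.0 + h1 * P1 + h2 * P2
    xi = P1 * g1 - P2 * g2
    a1 = h2 * g1 * np.exp(-(h1 * P1 + h2 * P2) / (P1 * g1))
    a2 = h1 * g2 * np.exp(-(h1 * P1 + h2 * P2) / (P2 * g2))
    t1, t2 = theta(P1, g1, P1, P2), theta(P2, g2, P1, P2)
    e1 = (h1 / phi
          + (t1 / P1 - g1 + a1 / (h2 * phi) + P2 * (a1 + a2) / phi) / xi
          + P2 * g1 * g2 * (t1 - t2) / xi**2 - lam1)
    e2 = (h2 / phi
          - (t2 / P2 - g2 + a2 / (h1 * phi) + P1 * (a1 + a2) / phi) / xi
          - P1 * g1 * g2 * (t1 - t2) / xi**2 - lam2)
    return [e1, e2]

P1_opt, P2_opt = fsolve(kkt, x0=[1.0, 1.0])
# Three-case rule from the enumeration above: both transmit only if both
# candidate powers are positive; otherwise the stronger user transmits alone.
both_transmit = (P1_opt > 0) and (P2_opt > 0)
```

The single-user fallback cases (items 2 and 3 above) would be solved the same way after setting the other power to zero in the corresponding equation.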
\section{Introduction} Visibility between geometric objects is a cornerstone notion in discrete and computational geometry that appeared as early as the late 1960s in pioneering experiments in robotics~\cite{LP79}. Visibility is involved in major themes that helped shape the field, such as art gallery and motion planning problems~\cite{dutch,ghosh,artgallery}. However, despite decades of research on those topics, the combinatorial structures induced by visibility relations in the plane are far from understood. Among such structures, {\em visibility graphs} are arguably the most natural. In general, a visibility graph encodes the binary, symmetric visibility relation among sets of objects in the plane, where two objects are visible from each other whenever there exists a straight line of sight between them that does not meet any obstacle. More precisely, a {\em point visibility graph} associated with a set $P$ of points in the plane is a simple undirected graph $G=(P,E)$ such that two points of $P$ are adjacent if and only if the open segment between them does not contain any other point of $P$. Note that the points play both roles of vertices of the graph and obstacles. In what follows, we will use the abbreviation PVG for point visibility graph. \subsection{Our results} We consider the {\em recognition} problem for point visibility graphs: given a simple undirected graph $G=(V,E)$, does there exist a point set $P$ such that $G$ is isomorphic to the visibility graph of $P$? More concisely, the problem consists of deciding the property of being a point visibility graph of some point set. As is often the case for geometric graphs, the recognition problem appears to be intractable under usual complexity-theoretic assumptions.
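The definition of a point visibility graph translates directly into a brute-force computation, which is handy for experimenting with small configurations. The following is only an illustrative sketch (cubic in the number of points); integer coordinates keep the collinearity test exact.

```python
# Compute the PVG of a point set directly from the definition: two points are
# adjacent iff the open segment between them contains no third point of P.
from itertools import combinations

def blocks(a, b, c):
    """True iff c lies strictly inside the open segment ab (exact integers)."""
    # c must be collinear with a and b ...
    cross = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    if cross != 0:
        return False
    # ... and strictly between them (projection strictly inside (0, |ab|^2)).
    dot = (c[0] - a[0]) * (b[0] - a[0]) + (c[1] - a[1]) * (b[1] - a[1])
    return 0 < dot < (b[0] - a[0]) ** 2 + (b[1] - a[1]) ** 2

def point_visibility_graph(P):
    """Edge set of the PVG of the point set P (points as integer pairs)."""
    return {(a, b) for a, b in combinations(P, 2)
            if not any(blocks(a, b, c) for c in P if c != a and c != b)}
```

For three collinear points the outer pair is blocked by the middle one, so the resulting graph is the path on three vertices, in which the points play both the role of vertices and of obstacles.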
We actually characterize the problem as complete for the existential theory of the reals; hence recognizing point visibility graphs is as hard as deciding the existence of a solution to an arbitrary system of polynomial inequalities over the reals. Equivalently, this amounts to deciding the emptiness of a semialgebraic set. This complexity class is intimately related to fundamental results on {\em oriented matroids} and {\em pseudoline arrangements} starting with the insights of Mn\"ev on the algebraic universality properties of these structures~\cite{mnev1988universality}. The notation $\exists\mathbb{R}$ has been proposed recently by Schaefer~\cite{S09} to refer to this class, motivated by the continuously expanding collection of problems in computational geometry that are identified as complete for it. The only known inclusion relations for $\exists\mathbb{R}$ are $NP\subseteq\exists\mathbb{R}\subseteq PSPACE$. It is known from the Tarski-Seidenberg Theorem that the first-order theory of real closed fields is decidable, but polynomial space algorithms for problems in $\exists\mathbb{R}$ have been proposed only much more recently by Canny~\cite{canny1988}. Whenever a graph is known to be a point visibility graph, the description of the point set as a collection of pairs of integer coordinates constitutes a natural certificate. Since it is not known whether $\exists\mathbb{R}\subseteq NP$, we should not expect such a certificate to have polynomial size. In fact, we show that there exist point visibility graphs all realizations of which have an irrational coordinate, and point visibility graphs that require doubly exponential coordinates in any realization. \subsection{Related work and Connections} The recognition problem for point visibility graphs has been explicitly stated as an important open problem by various authors~\cite{KPW05}, and is listed as the first open problem in a recent survey from Ghosh and Goswami~\cite{ghosh2013unsolved}. 
A linear-time recognition algorithm has been proposed by Ghosh and Roy for {\em planar} point visibility graphs~\cite{GR14}. For general point visibility graphs they showed that the recognition problem lies in $\exists\mathbb{R}$. More recently, Roy~\cite{roy2014point} published an ingenious and rather involved NP-hardness proof for the recognition of arbitrary point visibility graphs. Our result clearly implies NP-hardness as well, and, in our opinion, has a more concise proof. Structural aspects of point visibility graphs have been studied by K\'ara, P\'or, and Wood~\cite{KPW05}, P\'or and Wood~\cite{PW10}, and Payne et al.~\cite{PPVW12}. Many fascinating open questions revolve around the {\em big-line-big-clique} conjecture, stating that for all $k,\ell\geq 2$, there exists an $n$ such that every finite set of at least $n$ points in the plane contains either $k$ pairwise visible points or $\ell$ collinear points. {\em Visibility graphs of polygons} are defined over the vertices of an arbitrary simple polygon in the plane, and connect pairs of vertices such that the open segment between them is completely contained in the interior of the polygon. This definition has also attracted a lot of interest in the past twenty years. Ghosh gave simple properties of visibility graphs of polygons and conjectured that they were sufficient to characterize visibility graphs~\cite{G88,G97}. These conjectures have been disproved by Streinu~\cite{S05} via the notion of {\em pseudo-visibility} graphs, or visibility graphs of {\em pseudo-polygons}~\cite{ORS97}. A similar definition is given by Abello and Kumar~\cite{AK02}. Roughly speaking, the relation between visibility and pseudo-visibility graphs is of the same nature as that between arrangements of straight lines and pseudolines.
Although, as Abello and Kumar remark, these results somehow suggest that the difficulty in the recognition task is due to a stretchability problem, the complexity of recognizing visibility graphs of polygons remains open, and it is not clear whether the techniques described in this paper can help characterize it. The influential surveys and contributions of Schaefer about $\exists\mathbb{R}$-complete problems in computational geometry form an ideal point of entry in the field~\cite{S09,S12}. Among such problems, let us mention recognition of segment intersection graphs~\cite{KM94}, recognition of unit distance graphs and realizability of linkages~\cite{K02,S12}, recognition of disk and unit disk intersection graphs~\cite{MM13}, computing the rectilinear crossing number of a graph~\cite{B91}, simultaneous geometric graph embedding~\cite{kyncl2011simple}, and recognition of $d$-dimensional Delaunay triangulations~\cite{APT14}. \subsection{Outline of the paper} In Section~\ref{sec:uniquerepresentations}, we provide two simple visibility graph constructions, the {\em fan} and the {\em generalized fan}, all geometric realizations of which are guaranteed to preserve a specified collection of subsets of collinear points. The proofs are elementary and only require a series of basic observations. In Section~\ref{sec:grid}, we give two applications of the fan construction. In the first, we show that there exists a point visibility graph that does not have any geometric realization on the integer grid. In other words, all geometric realizations of this point visibility graph are such that at least one of the points has an irrational coordinate. Another application of the fan construction follows, where we show that there are point visibility graphs each grid realization of which requires coordinates of value $2^{2^{\sqrt[3]{n}}}$, where $n$ denotes the number of vertices of the point visibility graph. The main result of the paper is given in Section~\ref{sec:reduction}.
We first recall the main notions and tools used in the results from Mn\"ev~\cite{mnev1988universality}, Shor~\cite{shor1991stretchability}, and Richter-Gebert~\cite{richtergebert1995mnev} for showing that realizability of abstract order types is complete for the existential theory of the reals. We then combine these tools with the generalized fan construction to produce families of point visibility graphs that can simulate arbitrary arithmetic computations over the reals. \subsection{Notations} For the sake of simplicity, we slightly abuse notation and do not distinguish between a vertex of a point visibility graph and its corresponding point in a geometric realization. We denote by $G[P']$ the induced subgraph of a graph $G=(P,E)$ with the vertex set $P'\subseteq P$. For a point visibility realization $R$ we denote by $R[P']$ the induced subrealization containing only the points $P'$. The PVG of this subrealization is in general not an induced subgraph of $G$. By $N(p)$ we denote the open neighbourhood of a vertex $p$. The line through two points $p$ and $q$ is denoted by $\ell(p,q)$ and the open segment between $p$ and $q$ by $\overline{pq}$. We will often call $\overline{pq}$ the \emph{sightline} between $p$ and $q$, since $p$ and $q$ see each other iff $\overline{pq}\cap P=\emptyset$. We call two sightlines $\overline{p_1q_1}$ and $\overline{p_2q_2}$ non-crossing if $\overline{p_1q_1}\cap\overline{p_2q_2}=\emptyset$. For each point $p$, all other points of $G$ lie on $\deg(p)$ many rays $R^p_1,\dots,R^p_{\deg(p)}$ originating from $p$. \section{Point visibility graphs preserving collinearities}\label{sec:uniquerepresentations} We first describe constructions of point visibility graphs, all the geometric realizations of which preserve some fixed subsets of collinear points.
\subsection{Preliminary observations} \begin{figure}[ht] \centering \includegraphics[width=.4\textwidth]{stableline_emptyhalfspace2.pdf} \caption{\label{fig:stableline_emptyhalfspace} (Lemma~\ref{lem:empty_halfspace}) Left: a point sees points on consecutive rays with small angle. Right: a vertex of $\deg(q)=1$ in $G[N(p)]$ lies on the boundary of an empty halfspace.} \end{figure} In the realization of a PVG, the point $p$ sees exactly $\deg(p)$ many vertices, hence all other points lie on $\deg(p)$ rays of origin $p$. \begin{lemma}\label{lem:empty_halfspace} Let $q\in N(p)$ be a degree-one vertex in $G[N(p)]$. Then all points lie on one side of the line $\ell (p,q)$. Furthermore, the neighbor of $q$ lies on the ray that forms the smallest angle with $\overline{qp}$. \end{lemma} \begin{proof} If the angle between two consecutive rays is smaller than $\pi$, then every vertex on one ray sees every vertex on the other ray. Hence one of the angles incident to $q$ is at least $\pi$ and the neighbour of $q$ lies on the other incident ray. \end{proof} \begin{corollary}\label{cor:pathUnique} If $G[N(p)]$ is an induced path, then the order of the path and the order of the rays coincide. \end{corollary} \begin{proof} By Lemma~\ref{lem:empty_halfspace} the two endpoints of the path lie on rays on the boundary of empty halfspaces. Thus all other pairs of consecutive rays form angles smaller than $\pi$, and hence the vertices see their two path neighbors on the neighboring rays. \end{proof} \begin{observation}\label{obs:secondPoint} Let $q$, $q\not=p$, be a point that sees all points of $N(p)$. Then $q$ is the second point (not including $p$) on one of the rays emerging from $p$. \end{observation} \begin{proof} Assume $q$ is not the second point on one of the rays. Then $q$ cannot see the first point on its ray, which is a neighbor of $p$. \end{proof} This also shows the following observation.
\begin{observation}\label{obs:nonSecondPoint} Let $q$, $q\not= p$, be a point that is not the second point on one of the rays from $p$ and sees all but one ($r$) of the neighbors of $p$. Then $q$ lies on the ray of $r$. \end{observation} \subsection{Fans and generalized fans} \begin{figure}[ht] \centering \includegraphics[scale=.7]{fan.pdf} \caption{\label{fig:fan} A fan: a vertex is placed on each intersection of two lines/segments.} \end{figure} We have enough tools by now to show the uniqueness of a PVG obtained from the following construction, which is depicted in Figure~\ref{fig:fan}. Consider a set $S$ of segments between two lines $\ell$ and $\ell'$ intersecting in a point $p$, such that each segment has one endpoint on each line. For each intersection of a pair of segments, construct a ray of origin $p$ going through this intersection point. Add two segments $s_1$ and $s_2$ between $\ell$ and $\ell'$, such that $s_1$ is the closest and $s_2$ the second closest segment to~$p$. We now put a point on each intersection of the segments and rays and construct the PVG of this set of points. We call this graph the \emph{fan} of $S$ and denote it by $\fan(S)$. Since we have the choice of the position of the segments $s_1$ and $s_2$, we can avoid any collinearity between a point on $s_1$ or $s_2$ and points on other segments, except for the obvious collinearities on one ray. Thus every point sees all points on $s_1$ except for the one on its own ray. \begin{lemma}\label{lem:uniqueFan} All realizations of a fan preserve collinearities between points that lie on one segment and between points that lie on one ray. \end{lemma} \begin{proof} We first show that the distribution of the points onto the rays of $p$ is unique. By construction the points on $s_2$ see all the points on $s_1$, which are exactly the neighbors of $p$. Thus by Observation~\ref{obs:secondPoint} the points from $s_2$ are the second points of a ray.
Since there is exactly one point for each ray on $s_2$, all the other points are not second points on a ray. By construction each of the remaining points sees all but one point of $s_1$. Observation~\ref{obs:nonSecondPoint} thus determines the unique ray each of these points lies on. The order of the rays is unique by Corollary~\ref{cor:pathUnique}. On each ray the order of the points is as constructed, since the PVG of points on one ray is an induced path. Now we have to show that the points originating from one segment are still collinear. Consider three consecutive rays $R_1,R_2,R_3$. We consider the visibility between a point $p_1$ on $R_1$ and a point $p_3$ on $R_3$, which has to be blocked by a point on $R_2$. Let $p_2$ be the original blocker from the construction. For each point on $R_2$ that lies closer to $p$ there is a sightline blocked by this point, and for each point that lies further away from $p$ there is a sightline blocked by this point. For each of those points pick one sightline that corresponds to an original segment, and add $\overline{p_1p_3}$ to this collection. This set of sightlines is non-crossing, since the segments only intersect on rays by assumption. So we have a set of non-crossing sightlines and the same number of blockers available. Since the order on each ray is fixed, and the sightlines intersect $R_2$ in a certain order, the blocker for each sightline is uniquely determined and has to be the original blocker. By transitivity of collinearity all points from the segments remain collinear. \end{proof} To show the hardness of PVG recognition in the existential theory of the reals in Section~\ref{sec:reduction} we need a unique realization property for the following generalization of a fan. \begin{figure}[ht] \centering \includegraphics[width=\textwidth]{generalizedfan2.pdf} \caption{\label{fig:genfan} Left: a bundle of a generalized fan above and below each intersection.
Right: the generalized fan with the segment $s_0$ and the point $p$.} \end{figure} Consider again two lines $\ell$ and $\ell'$ and a set of $n$ segments $S$ located between those lines. We assume for now that $\ell$ and $\ell'$ are parallel and horizontal, i.e., their intersection point $p$ lies on the line at infinity. Now we are not interested in preserving the exact arrangement of the segments $S$ in a PVG, but only in keeping the segments straight, and the order of the segments on $\ell$ and on $\ell'$ as described by $S$. For that purpose we add three parallel and equidistant segments $s_1,s_2,s_3$ to the left of all segments of $S$. Below $\ell'$ and above $\ell$ we add $5n$ equidistant rays each, that are parallel to $\ell$ and $\ell'$ and start on the point at infinity $p$. Let $\varepsilon$ be the distance between two consecutive rays in one bundle. We choose $\varepsilon$ such that $(5n)^4\varepsilon$ is smaller than the distance of any intersection of segments in $S$ to $\ell$ or $\ell'$. We call such a set of $5n$ rays a \emph{bundle}. Above the bundle close to $\ell$ and below the bundle close to $\ell'$ we add $(5n)^4$ segments starting on $s_3$ and ending in $p$. The segments are parallel to the rays of the bundles and are also equidistant with distance $\varepsilon$ to the nearby bundle. The bundles together with the $(5n)^4$ segments form what we will call the \emph{extended} bundle. The equidistance property is preserved according to the following lemma. \begin{lemma}\label{lem:grid} Consider a realization of a PVG of an $r\times q$ integer grid, $r\geq 6$, $q\geq 3$, such that the points of each of the $r$ rows lie on a horizontal line. Then -- up to a projective transformation -- the horizontal lines are equally spaced, the verticals are parallel, and also equally spaced.
\end{lemma} \begin{figure}[h] \centering \includegraphics[width=.3\textwidth]{grid_unique.pdf} \caption{\label{fig:grid_unique} Lemma~\ref{lem:grid}: The rows and columns of a grid are up to projective transformation parallel.} \end{figure} \begin{proof} We first show that there is a projective transformation such that the columns lie on parallel lines. Using a projective transformation we can assume that the first two lines $l_1$ and $l_2$ of the grid are vertical. Assume the third line $l_3$ is not parallel to the first two. Then, since $r\geq 6$, there is a $2\times 2$ cell in which the distance between $l_2$ and $l_3$ is larger than the distance between $l_1$ and $l_2$ on both the upper and lower boundary of the cell, or smaller on both. Now consider the position of the middle point of this $2\times 2$ cell. The point lies on $l_2$ and also on the intersection of the two diagonals, but in the described case these two positions do not coincide, as shown in Figure~\ref{fig:grid_unique}. This argument also shows that the lines $l_1,l_2,l_3$ are equidistant. By symmetry the lines $r_i$ are equidistant as well. \end{proof} Now we apply a projective transformation such that the intersection point $p$ of $\ell$ and $\ell'$ does not lie on the line at infinity, as shown in Figure~\ref{fig:genfan}. We add a segment $s_0$ between $\ell$ and $\ell'$ that lies between $p$ and $s_1$. Again we take all the intersection points between segments, rays or lines as points and construct the visibility graph of those points. Note that we can place $s_0$ such that each point on $s_0$ sees all points that lie neither on its ray nor on $s_0$. A visibility graph constructed in this way will be called a \emph{generalized fan}. In Lemma~\ref{lem:uniqueGeneralizedFan} we show that all realizations of a generalized fan preserve the collinearities between the points on the segments. Let us briefly consider the differences between a fan and a generalized fan.
In the fan in Figure~\ref{fig:fan} the vertical order of the intersection points is $a>b>c>d>e$. In contrast, the generalized construction, shown on the left of Figure~\ref{fig:genfan}, allows different vertical orders on those points. In Figure~\ref{fig:genfanorder} we used three bundles instead of two bundles to fix the orders. In the proof of Lemma~\ref{lem:uniqueGeneralizedFan} it will turn out that all realizations for this construction also preserve collinearities. In this case we have a further restriction on the vertical order of the intersection points: the points $a$ and $b$ must lie above the middle bundle, and the points $c,d,e$ must lie below. This restricts the possible vertical orders of intersection points to some linear extensions of the partial order shown in Figure~\ref{fig:genfan}. To indicate that $a$ and $b$ lie above $c,d$ and $e$ we introduce the notation $\{a,b\}>\{c,d,e\}$. This notation captures exactly the restriction we can add to the horizontal orders of a fan: given a realization of the segments $S$ between the lines $\ell$ and $\ell'$ it is possible to add bundles between some intersection points, partitioning the intersection points of the segments into subsets $I_1,\dots,I_k$. Now every realization of the PVG respects the vertical order $I_1>\dots>I_k$ of the intersection points. If $|I_j|=1$, one line through an intersection point as in Figure~\ref{fig:fan} can also be used. \begin{figure}[ht] \centering \includegraphics[width=0.55\textwidth]{stackgenfan.pdf} \caption{\label{fig:genfanorder} A generalized fan with several bundles.} \end{figure} \begin{lemma}\label{lem:uniqueGeneralizedFan} All realizations of a generalized fan preserve collinearities between points that lie on one segment and between points that lie on one ray. 
\end{lemma} \begin{proof} The argument showing that the distribution of the points onto the rays starting at $p$ and the order of the rays remain as constructed is identical to the proof of Lemma~\ref{lem:uniqueFan}. So we only have to show that the points from the segments stay collinear. We do this in two steps. In the first one we show that the points on segments within one extended bundle stay collinear. We will use this in a second step to show that the segments in two consecutive bundles stay aligned. We proceed with the first step. First note that the points from one segment within one bundle stay collinear in each realization by the same arguments as in Lemma~\ref{lem:uniqueFan}. The same holds for the points on a segment $s_k$, $k\in\{0,\dots,3\}$, and its intersections with the $(5n)^4$ segments. So for the first step we only have to show that the segments $s_0,\dots,s_3$ in extended bundles stay aligned. Therefore we consider the lowest ray of the bundle close to $\ell'$ and two neighboring segments. The points on the segments $s_k$ stay collinear on those three rays, because four non-crossing sightlines have to be blocked by four points. Now consider the two lowest rays of the bundle close to $\ell'$, and the $(5n)^4$ segments below. Assume that the points on one of the segments $s_0,\dots,s_3$ do not stay aligned for some $s_k$. Then the points on $s_k$ that lie on the two lowest rays $u_{k}$ (lowest) and $v_{k}$ (second lowest) and the lowest segment $w_k$ form the convex hull of all the points on $s_k$ that lie in between, see Figure~\ref{fig:extendedgridcollinear}. In this triangle there are $(5n)^4-1$ non-crossing sightlines that have to be blocked. This implies that one of the other segments $s_l$ has to support blockers. If the triple $(u_k,v_k,w_k)$ is oriented clockwise, some of the blockers have to be supported by a segment $s_k'$ to the right, or by one to the left otherwise.
In the clockwise case the three corresponding points on the convex hull of $s_k'$ have to be oriented clockwise as well. Since a symmetric argument applies in the counterclockwise case, we obtain a contradiction for the rightmost clockwise or leftmost counterclockwise oriented triple. \begin{figure}[ht] \centering \includegraphics[width=.4\textwidth]{extendedgridcollinear.pdf} \caption{\label{fig:extendedgridcollinear} A clockwise orientation of $(u_k,v_k,w_k)$ forces the triple on a right segment $s_k'$ to be oriented clockwise.} \end{figure} It remains to show that the two subsegments within consecutive bundles stay aligned. We will refer to those subsegments as the upper and the lower part of a segment. First note that the segments $s_k$, $k\in\{0,\dots,3\}$, stay aligned in consecutive extensions of a bundle, thus they cannot provide blockers for sightlines between the upper and lower parts of the other segments. Assume that the points from one original segment $s$ are not all collinear in a realization of the fan. We denote by $s'$ and $s''$ respectively the lower and upper part of $s$. If $s'$ and $s''$ are not aligned, then one of the two lower points of $s''$ does not lie on the supporting line of $s'$. We denote this point by $q$. Between $q$ and the points on $s'$ there are at least $(5n)^4-1$ non-crossing sightlines that have to be blocked. At most $n$ of those sightlines can be blocked from points on the upper bundle, namely the points from the lowest ray if $q$ lies on the second lowest ray. The other blockers lie on the other $n-1$ lower parts of the segments. By the pigeonhole principle there is a lower part $b$ of a segment that provides at least $\lceil(5n-n-1)/(n-1)\rceil=5$ blockers for sightlines between $q$ and points on $s'$. We will show that this is not possible.
By first reversing the projective transformation applied in the construction of the generalized fan, and then applying Lemma~\ref{lem:grid}, we can assume that the lines in the lower bundle are parallel and equidistant, as shown in Figure~\ref{fig:genfan_blocker}. Now we use an affine transformation such that the points of $s'$ have coordinates $(0,i)$ for $i\in\{-k,\dots,r-1-k\}$, where $k$ is chosen such that the lowest point blocked by a point on $b$ has coordinates $(0,0)$. By another linear transformation we can ensure that $q=(N,N)$ for some $N>0$. We can now use the segments starting from $s_3$ to give a lower bound on $N$: the segments above the bundle of $s'$ are also equidistant with the same distance as the lines in the bundle, since the segments extend the grid. Since $q$ lies on a parallel line above those rays we know that $N>(5n)^4$. The points on $b$ that block visibilities between points on $s'$ from $q$ also have $y$-coordinates in $\{0,\dots,r-1-k\}$, since they lie on lines in the same bundle as $s'$. Let us assume that the point $b_{ij}$ on $b$ has $y$-coordinate $j$ and blocks the visibility of $(0,i)$ from $q$. Then the $x$-coordinate of $b_{ij}$ is $x=(j-i)\frac{N}{N-i}$. We consider the sets $M:=\{(i,j)\mid b_{ij} \mbox{ is a blocker}\}$ and $M':=\{(j-i)\mid b_{ij} \mbox{ is a blocker}\}$. We will obtain a contradiction in the following two cases. \begin{figure}[ht] \centering \includegraphics[width=.9\textwidth]{block1.pdf} \caption{\label{fig:genfan_blocker} Left: A blocker on $b$. Right: The situation after the coordinate transformation.} \end{figure} {\bf Case 1:} $|M'|<3$: In this case there are three points in $M$ with the same value for $j-i$. Those points on $b$ have the coordinates of the form $(\frac{cN}{N-i},c+i)$ where $c=j-i$ is constant. This is a parameterization of a hyperbola. No three points for $i<N$ on this curve are collinear, which contradicts that they all lie on the segment $b$. 
{\bf Case 2:} $|M'|\geq 3$: In this case there are three blockers $b_0,b_1,b_2$ with pairwise different values for $j-i$. Assume without loss of generality that $b_0=(x_0,j_0)$ blocks $(0,0)$ from $q$, $b_1=(x_1,j_1)$ blocks $(0,i_1)$, and $b_2=(x_2,j_2)$ blocks $(0,i_2)$. Then the $x$-coordinate of $b_k$ is given by $x_k=(j_k-i_k)\frac{N}{N-i_k}$. The difference of the $x$-coordinates of two consecutive points on $b$ is $d_{min}:=\frac{x_k-x_0}{j_k-j_0}$. Calculating $d_{min}$ using the expression above once with $b_1$ and once with $b_2$ leads to the following equation. \begin{align*} \frac{(j_2-i_2)\frac{N}{N-i_2}-j_0}{j_2-j_0} = \frac{(j_1-i_1)\frac{N}{N-i_1}-j_0 }{j_1-j_0}\\ \Leftrightarrow ({i_1}^2 j_0-{i_1}^2j_2-i_1j_0j_2+i_1j_1j_2-{i_2}^2 j_0+{i_2}^2j_1+i_2j_0j_1-i_2j_1j_2)N\\+(-i_1 j_0+i_1 j_2+i_2 j_0-i_2 j_1)N^2 +i_1i_2j_0(j_2-j_1)=0 \end{align*} Since all coefficients in the last equation are integral, we obtain that $i_1i_2j_0(j_2-j_1)$ is a multiple of $N$. This is a contradiction to $N>(5n)^4$, since each of the factors is bounded by $5n$ and is nonzero. \end{proof} \section{Drawing point visibility graphs on grids}\label{sec:grid} We give a first simple application of the fan construction. \begin{figure}[ht] \centering \includegraphics[width=.3\textwidth]{Perles.pdf} \caption{\label{fig:perles} The Perles configuration.} \end{figure} \begin{theorem} There exists a point visibility graph every geometric realization of which has at least one point with one irrational coordinate. \end{theorem} \begin{proof} We use the so-called {\em Perles configuration} of 9 points on 9 lines illustrated in Fig.~\ref{fig:perles}. It is known that for every geometric realization of this configuration in the Euclidean plane, one of the points has an irrational number as one of its coordinates~\cite{G03}. We combine this construction with the fan construction described in the previous section.
To this end, we pick two lines $\ell$ and $\ell'$ intersecting in a point $p$, such that all lines of the configuration intersect both $\ell$ and $\ell'$ in the same wedge. Note that up to a projective transformation, the point $p$ may be considered to be on the line at infinity and $\ell$ and $\ell'$ taken as parallel. We add two non-intersecting segments $s_1$ and $s_2$ close to $p$ that do not intersect any line of the configuration. We then shoot a ray from $p$ through each of the points, and construct the visibility graph of the original points together with all the intersections of the rays with the lines and the two segments $s_1, s_2$. From Lemma~\ref{lem:uniqueFan}, all the collinearities of the original configuration are preserved, and every realization of the graph contains a copy of the Perles configuration. \end{proof} Also note that point visibility graphs that can be realized with rational coordinates do not necessarily admit a realization that can be stored in polynomial space in the number of vertices of the graph. To see this, consider a line arrangement $\mathcal{A}$, and add a point $p$ in an unbounded face of the arrangement, such that all intersections of lines are visible in an angle around $p$ that is smaller than $\pi$. Construct rays $\ell$ and $\ell'$ through the extremal intersection points and $p$. From Lemma~\ref{lem:uniqueFan}, the fan of this construction gives a PVG that fixes $\mathcal{A}$. Since there are line arrangements that require integer coordinates of values $2^{2^{\Theta(|\mathcal A|)}}$~\cite{goodman1990intrinsic} and the fan has $\Theta(|\mathcal{A}|^3)$ points, we get the following worst-case lower bound on the coordinates of points in a representation of a PVG. \begin{corollary}\label{cor:size_small} There exists a point visibility graph with $n$ vertices every realization of which requires coordinates of values $2^{2^{\Theta(\sqrt[3]{n})}}$. 
\end{corollary} \section{$\exists\mathbb{R}$-completeness reductions}\label{sec:reduction} The existential theory of the reals ($\exists\mathbb{R}$) is a complexity class defined by the following complete problem. We are given a well-formed quantifier-free formula $F(x_1,\dots,x_k)$ using the numbers $0$ and $1$, addition and multiplication operations, strict and non-strict comparison operators, Boolean operators, and the variables $x_1,\dots,x_k$, and we are asked whether there exists an assignment of real values to $x_1,\dots,x_k$, such that $F$ is satisfied. This amounts to deciding whether a system of polynomial inequalities admits a solution over the reals. The first main result connecting this complexity class to a geometric problem is the celebrated result of Mn\"ev, who showed that \emph{realizability of order types}, or -- in the dual -- stretchability of pseudoline arrangements, is complete in this complexity class~\cite{mnev1988universality}. In what follows, we use the simplified reductions due to Shor~\cite{shor1991stretchability} and Richter-Gebert~\cite{richtergebert1995mnev}. The latter is in turn well explained in a recent manuscript by Matou\v{s}ek~\cite{matousek2014segment}. We refer the curious reader to those references for further details. The \emph{orientation} of an ordered triple of points $(p,q,r)$ indicates whether the three points form a clockwise or a counterclockwise cycle, or whether the three points are collinear. Let $P=\{p_1,\dots,p_n\}$ and an orientation $O$ of each triple of points in $P$ be given. The pair $(P,O)$ is called an \emph{(abstract) order type}. We say that the order type $(P,O)$ is realizable if there are coordinates in the plane for the points of $P$, such that the orientations of the triples of points match those prescribed by $O$. 
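The orientation of a triple is simply the sign of a determinant. A minimal sketch (helper names are ours, not from the reduction) of the predicate that a realization of an order type must reproduce for every triple:

```python
def orientation(p, q, r):
    """Return +1 for a counterclockwise triple, -1 for a clockwise one,
    and 0 when p, q, r are collinear."""
    det = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (det > 0) - (det < 0)

# A realization of an abstract order type (P, O) must reproduce the
# prescribed orientation for every triple of points of P.
assert orientation((0, 0), (1, 0), (0, 1)) == 1   # counterclockwise
assert orientation((0, 0), (0, 1), (1, 0)) == -1  # clockwise
assert orientation((0, 0), (1, 1), (2, 2)) == 0   # collinear
```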
In order to reduce the order type realizability problem to solvability of a system of strict polynomial inequalities, we have to be able to simulate arithmetic operations with order types. This uses standard constructions introduced by von Staudt in his ``{\em algebra of throws}''~\cite{staudt}. \subsection{Arithmetic with order types.\label{subsec:arithmetics}} To carry out arithmetic operations using orientation predicates, we associate numbers with points on a line, and use the \emph{cross-ratio} to encode their values. The cross-ratio $(a,b;c,d)$ of four points $a,b,c,d\in\mathbb{R}^2$ is defined as $$(a,b;c,d):=\frac{|a,c|\cdot|b,d|}{|a,d|\cdot|b,c|},$$ where $|x,y|$ is the determinant of the matrix obtained by writing the two vectors as columns. The two properties that are useful for our purpose are that the cross-ratio is invariant under projective transformations, and that for four points on one line, the cross-ratio is given by $\frac{\overrightarrow{ac}\cdot\overrightarrow{bd}}{\overrightarrow{ad}\cdot\overrightarrow{bc}}$, where $\overrightarrow{xy}$ denotes the oriented distance between $x$ and $y$ on the line. We will use the cross-ratio in the following way: We fix two points on a line and call them $0$ and $1$. On the line through those points we call the point at infinity $\infty$. For a point $a$ on this line, the cross-ratio $x:=(a,1;0,\infty)$ equals the oriented distance between $0$ and $a$, measured in units of the distance between $0$ and $1$. Because the cross-ratio is a projective invariant, we can fix one line and use the point $a$ for representing the value $x$. In this way, we have established coordinates on one line. \begin{figure}[ht] \centering \includegraphics[width=.9\textwidth]{gadgets.pdf} \caption{\label{fig:gadgets} Gadgets for addition (left) and multiplication (right) on a line.} \end{figure} For computing on this line, the gadgets for addition and multiplication depicted in Figure~\ref{fig:gadgets} can be used. 
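For collinear points, the cross-ratio can be evaluated with oriented distances, and $(a,1;0,\infty)$ recovers the coordinate of $a$. The sketch below (our own helper, with a large coordinate standing in for the point at infinity) illustrates this, together with exact invariance under a fractional linear map of the line:

```python
from fractions import Fraction

def cross_ratio(a, b, c, d):
    """Cross-ratio (a,b;c,d) of four collinear points given by their
    coordinates on the line, via oriented distances."""
    return ((c - a) * (d - b)) / ((d - a) * (c - b))

# (a,1;0,oo): as d tends to infinity, the cross-ratio tends to the
# coordinate of a in the scale where 0 and 1 are the reference points.
x = Fraction(7, 3)
far = 10**12  # numerical stand-in for the point at infinity
value = cross_ratio(x, 1, 0, far)
assert abs(value - x) < Fraction(1, 10**6)
print(value)
```

The cross-ratio is exactly preserved by projective maps of the line, e.g. $t\mapsto\frac{2t+1}{t+3}$, which can be checked with exact rational arithmetic.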
Let us detail the case of multiplication. We are given the points $\infty< 0< 1< x< y$ on the line $\ell$, and wish to construct a point on $\ell$ that represents the value $x\cdot y$. Take a second line $\ell_\infty$ that intersects $\ell$ in $\infty$, and two points $a,b$ on this line. Construct the segments $\overline{by},\overline{b1}$ and $\overline{ax}$. Denote the intersection point of $\overline{ax}$ and $\overline{b1}$ by $c$. Call $d$ the intersection point of $\overline{by}$ and $\ell(0,c)$. The intersection point of $\ell$ and $\ell(d,a)$ represents the point $x\cdot y=:z$ on $\ell$, i.e., $(z,1;0,\infty)=(x,1;0,\infty)\cdot (y,1;0,\infty)$. In a projective realization of the gadget in which the line $\ell_{\infty}$ is indeed the line at infinity, the result can be obtained by applying twice the intercept theorem, in the triangles with vertices $0, d, y$ and $0, d, z$, respectively. To add the cross ratios of two points on a line, a similar construction is given in Figure~\ref{fig:gadgets}. \subsection{The reduction for order types} Using the constructions above we can already model a system of strict polynomial inequalities. However, it is not clear how we can determine the complete order type of the points without knowing the solution of the system. Circumventing this obstacle was the main achievement of Mn\"ev~\cite{mnev1988universality}. We cite one of the main theorems in a simplified version. \begin{theorem}[\cite{shor1991stretchability},\cite{richtergebert1995mnev}] Every \emph{primary semialgebraic set} $V\subseteq\mathbb{R}^d$ is \emph{stably equivalent} to a semialgebraic set $V'\subseteq\mathbb{R}^n$, with $n=\mathrm{poly}(d)$, for which all defining equations have the form $x_i+x_j=k$ or $x_i\cdot x_j=x_k$ for certain $1 \leq i \leq j < k\leq n$, where the variables $1=x_1<x_2<\dots<x_n$ are totally ordered. 
\end{theorem} A \emph{primary semialgebraic set} is a set defined by polynomial equations and strict polynomial inequalities with coefficients in $\mathbb{Z}$. Although we cannot give a complete definition of {\em stable equivalence} within the context of this paper, let us just say that two semialgebraic sets $V$ and $V'$ are stably equivalent if one can be obtained from the other by rational transformations and so-called {\em stable projections}, and that stable equivalence implies {\em homotopy equivalence}. From the computational point of view, the important property is that $V$ is the empty set if and only if $V'$ is, and that the size of the description of $V'$ in the theorem above is polynomial in the size of the description of $V$. We call the description of a semialgebraic set $V'$ given in the theorem above the \emph{Shor normal form}. We can now encode the defining relations of a semialgebraic set given in Shor normal form using abstract order types by simply putting the points $\infty,0,1,x_1,\dots,x_n$ in this order on $\ell$. To give a complete order type, the orientations of triples including the points of the gadgets and the positions of the gadgets on $\ell_\infty$ have to be specified. This can be done by exploiting the fact that the distances between the points $a$ and $b$ of each gadget and their position on $\ell_\infty$ can be chosen freely. We refer to the references mentioned above for further details. We next show how to implement these ideas to construct a graph $G_V$ associated with a primary semialgebraic set $V$, such that $G_V$ has a PVG realization if and only if $V\not=\emptyset$. \section{$\exists\mathbb{R}$-completeness of PVG recognition} The idea to show that PVG recognition is complete in $\exists\mathbb{R}$ is to encode the gadgets described in the previous section in a generalized fan. 
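As a sanity check before the encoding, the multiplication gadget of Subsection~\ref{subsec:arithmetics} can be verified computationally. The sketch below (our own line-intersection helpers) realizes the gadget in the affine chart where $\ell_\infty$ is the actual line at infinity, so the pencils of segments through $a$ and $b$ become two parallel families with fixed slopes:

```python
from fractions import Fraction

def line_through(p, slope):
    """Line y = slope*(x - p[0]) + p[1], stored as (slope, intercept)."""
    return (slope, p[1] - slope * p[0])

def intersect(l1, l2):
    """Intersection point of two non-parallel lines in slope/intercept form."""
    s1, t1 = l1
    s2, t2 = l2
    x = (t2 - t1) / (s1 - s2)
    return (x, s1 * x + t1)

ELL = (Fraction(0), Fraction(0))  # the line ell, taken as the x-axis

def multiply(x, y, slope_a=Fraction(1), slope_b=Fraction(-1)):
    """von Staudt multiplication gadget: c = ax cap b1, d = by cap 0c,
    and the line through d in direction a hits ell in x*y."""
    c = intersect(line_through((x, Fraction(0)), slope_a),
                  line_through((Fraction(1), Fraction(0)), slope_b))
    line_0c = line_through((Fraction(0), Fraction(0)), c[1] / c[0])
    d = intersect(line_through((y, Fraction(0)), slope_b), line_0c)
    return intersect(line_through(d, slope_a), ELL)[0]

print(multiply(Fraction(2), Fraction(3)))  # -> 6
```

The exact rational arithmetic makes the check independent of floating-point error; generic slopes other than $\pm 1$ give the same result.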
We therefore consider the gadgets not as a collection of points with given order types, but as a collection of segments between the lines $\ell$ and $\ell_\infty$ with given crossing information, i.e., a certain arrangement of the segments of the fan. We will consider the addition and multiplication gadgets given in Fig.~\ref{fig:gadgets}, and for a copy $g_i$ of the addition gadget, denote by $a_i,b_i,c_i,d_i$, and $e_i$ the points corresponding to $g_i$, and similarly for the multiplication gadget. To formalize the freedom we have in choosing the points $a_i$ and $b_i$ for each addition or multiplication gadget $g_i$, we make the following two observations. The points of a gadget that do not lie on $\ell$ are denoted by $P_i$. \begin{observation}[\cite{richtergebert1995mnev},\cite{matousek2014segment}] The points $a_i$ and $b_i$ can be positioned arbitrarily on $\ell_\infty$. The position of the other points of $P_i$ is fully determined by $a_i$, $b_i$ and the input values on $\ell$. \end{observation} \begin{observation}[\cite{richtergebert1995mnev},\cite{matousek2014segment}]\label{obs:gadgetsmall} All points of $P_i$ are placed close to $a_i$ if $a_i$ and $b_i$ are placed close to each other. (For each $\varepsilon>0$ there exists a $\delta>0$, such that $|a_i-b_i|<\delta$ implies $|p-q|<\varepsilon$ for all $p,q\in P_i$.) \end{observation} With those two observations in hand, we show that we can place the points of the gadgets on $\ell_\infty$ one by one, such that we have partial information on the \emph{relative height} of the crossings of the involved segments. This partial information can be combined with the generalized fan construction to force the exact encoding. Here we need a generalized fan: we cannot obtain full information on the heights of all the crossings with the segments of other gadgets, since the positions and distances of those segments are influenced by the solution of the inequality system. 
For simplicity, we can work in the projective plane. This allows us to apply a projective transformation such that the point $\infty$ is mapped onto the line at infinity, and the lines $\ell$ and $\ell_\infty$ are parallel. Furthermore we can assume $\ell$ and $\ell_\infty$ are horizontal lines. In this setting we have to specify an order on the $y$-coordinates of the intersection points of the segments/the points of the gadgets. Therefore we fix one order of the gadgets $g_1,g_2,\dots,g_l$ on $\ell_\infty$. \begin{lemma}\label{lem:PVG_fan_reduction} Let $V$ be a nonempty primary semialgebraic set given in Shor normal form and let $g_1,\dots,g_{i-1},g_i,\dots,g_l$ be the gadgets realizing the defining equations, such that $g_j$ is realizing an addition if $j<i$ and a multiplication otherwise. Then there exists a realization such that the order of the $y$-coordinates of the intersection points is given by \begin{eqnarray} a_{1}=\dots=a_{l}=b_1=\dots=b_l=f_i=\dots=f_l \label{eqn:linfty}\\ >e_l>d_l>c_l>\dots>e_i>d_i>c_i \label{eqn:gadgets1}\\ >e_{i-1}>c_{i-1}=d_{i-1}>\dots>e_{1}>c_{1}=d_{1} \label{eqn:gadgets2}\\ >I_2>\dots>I_l \label{eqn:betweengadgets}\\ >0=x_1=x_2=\dots=x_k \label{eqn:l}, \end{eqnarray} where $I_k$ denotes the intersections of the segments of the gadget $g_k$ with the segments of the gadgets $g_j$ for $j<k$. \end{lemma} \begin{proof} \begin{figure}[ht] \centering \includegraphics[width=.5\textwidth]{orderreduction.pdf} \caption{\label{fig:orderreduction} The vertical order of the points in the reduction.} \end{figure} We fix one solution for the relations defining $V$. The points on $\ell$ are fixed so as to realize this solution. We place the points $a_i$ and $b_i$ such that the other points of the gadgets realize the order of the $y$-coordinates described in the lemma. First note that the order of the points within one gadget is determined as described by the construction of the gadgets. 
The points corresponding to variables are also on $\ell$ and the points $a$, $b$ and $f$ all lie on $\ell_\infty$. Thus the equalities given in (\ref{eqn:linfty}) and (\ref{eqn:l}), as well as the relations between each triple of points belonging to one gadget in (\ref{eqn:gadgets1}) and (\ref{eqn:gadgets2}), are satisfied in all realizations. We place the points $a_i$ and $b_i$ of the gadgets inductively. Assume that we have placed the first $i-1$ gadgets such that the inequalities above are satisfied. Now there exists a real $\varepsilon>0$ such that none of the points of the gadgets lies in an $\varepsilon$-neighborhood of $\ell$ or $\ell_\infty$, see Figure~\ref{fig:orderreduction}. For this reason there exists an axis-aligned rectangle of height $\varepsilon$ with lower boundary on $\ell$, such that every segment drawn so far intersects the upper and the lower boundary of this rectangle (the lower grey box in Figure~\ref{fig:orderreduction}). We now place $a_i$ such that all segments that are constructed for the gadget $g_i$ (blue) intersect the right boundary of this rectangle. This can be achieved by placing $a_i$ beyond the intersection point of $\ell_{\infty}$ and the supporting line of the diagonal with positive slope of the rectangle (the red segment in Figure~\ref{fig:orderreduction}). This shows that (\ref{eqn:betweengadgets}) can be satisfied. To show that the inequalities in (\ref{eqn:gadgets1}) and (\ref{eqn:gadgets2}) hold, it remains to check that the points $c_i,d_i$ (and possibly $e_i$) can be placed in an $\varepsilon$-neighborhood of $\ell_\infty$. This can be done, using Observation~\ref{obs:gadgetsmall}, by placing $b_i$ close to $a_i$. \end{proof} \begin{theorem} The recognition of point visibility graphs is $\exists\mathbb{R}$-complete. \end{theorem} \begin{proof} To prove completeness, we first have to check that the problem belongs to $\exists\mathbb{R}$. 
For this, we have to encode, for each edge $pq$ of the visibility graph, that no other point lies on the segment $\overline{pq}$, and the opposite for nonedges. This can be done with polynomial inequalities and Boolean operations. For a detailed formulation of this inequality system we refer to~\cite{GR14}. For the hardness part, the idea of the proof is the following. For a semialgebraic set $V$ we compute the Shor normal form and denote the corresponding primary semialgebraic set by $V'$. For $V'$, we can construct the arrangement of pseudosegments that are attached to the lines $\ell$ and $\ell_\infty$. By inverting the projective transformation applied in Lemma~\ref{lem:PVG_fan_reduction} we can construct a generalized fan $G_V$ of the pseudosegments between $\ell$ and $\ell_\infty$, such that in any PVG realization the order of the intersection points of the segments satisfies the inequalities in Lemma~\ref{lem:PVG_fan_reduction}. The bundles and rays for the generalized fan are added, such that the possible vertical orders are fixed to the ones described in Lemma~\ref{lem:PVG_fan_reduction}, see Figure~\ref{fig:orderreduction}: We add an orange ray from $p$ through each of the points $c_i,d_i$ and $e_i$ of each gadget $g_i$, $i\in [l]$. This fixes the inequalities in lines (\ref{eqn:gadgets1})-(\ref{eqn:gadgets2}). A green bundle is added before and after each of the sets $I_j$, $j\in\{2,\dots,l\}$, such that (\ref{eqn:betweengadgets}) is satisfied. From this generalized fan we want to construct a point visibility graph $G_V$. Here we have to be careful with collinearities between points that do not lie on one segment or one ray. Therefore, we show that we can construct the edges and nonedges between points on different segments and different rays, such that they do not restrict \emph{too many} solutions of our strict inequality system. 
First notice that we can avoid collinearities between points on segments of different gadgets by perturbing the positions of the points $a_i$ and $b_i$, the exact position of the bundles, and the distance of the rays within a bundle (we have this freedom in the proof of Lemma~\ref{lem:PVG_fan_reduction}). So we can assume that the only collinearities of points on different segments appear between segments in one gadget. In the addition gadget we have no three \emph{segments} that intersect in one point. By perturbing the position of the bundles we can avoid collinearities in those gadgets. In the multiplication gadget we are in the situation that we have three segments $0,1,x$ (and $0,y,x\cdot y$) that intersect in one point. If the ratio of those three points on $\ell$ is rational, they are (after projective transformations) columns of the integer grid. If those are intersected by a bundle, we obtain points on a projective transformation of the integer grid, and thus collinearities. The point here is that we can compute during the construction which collinearities appear: the solutions of the original strict inequality system form an open set. In this set we can assume that our solution consists of sufficiently \emph{independent} numbers, e.g., they are algebraically independent over $\mathbb{Q}$, such that $0,1,x$ and $0,y,x\cdot y$ only have a rational ratio if $x$ is a coefficient of the inequality system. In this case we can calculate the collinearities. Otherwise, we can perturb the points $a_i$ and $b_i$ to avoid collinearities. Hence all collinearities between points on different segments can be computed and do not influence the solvability of the inequality system. This way we can determine all edges of $G_V$. The number of vertices of the graph $G_V$ is polynomial in the size of $V$ since calculating the Shor normal form of $V$ gives a description of $V'$ which has size polynomial in the size of $V$. 
The number of segments, bundles, rays, and the size of a bundle in the fan are all polynomial in the number of operations in the Shor normal form. All calculations in this construction can be done in polynomial time. For the $\exists\mathbb{R}$-hardness it remains to show that the graph $G_V$ is a point visibility graph if and only if $V$ (and thus $V'$) is nonempty. To show that $V$ is nonempty if $G_V$ has a PVG realization, we observe that the collinear point sets on each ray and on each segment stay collinear in every realization by Lemma~\ref{lem:uniqueGeneralizedFan}. Thus the gadgets implementing the calculations on $\ell$ are preserved. Using the cross-ratio as described in Subsection~\ref{subsec:arithmetics}, a PVG realization encodes a point in $V'$, and $V$ is nonempty if $G_V$ has a PVG realization. Conversely, we show that there exists a PVG realization if $V$ and $V'$ are nonempty. We consider a solution $x\in V'$ and place the points corresponding to the variables on a line $\ell$. With the points in this position, the gadgets implementing the calculations can be realized between $\ell$ and $\ell_\infty$, such that the intersection points of the segments satisfy the order in Lemma~\ref{lem:PVG_fan_reduction}. \end{proof} \subsection*{Acknowledgments} We thank an anonymous referee for pointing out an error in the original proof of Lemma~\ref{lem:uniqueGeneralizedFan}. The revised proof is largely based on the suggested fix.
\subsection{} \label{operad} The notion of an operad was created for the study of iterated loop spaces~\cite{may1}. Since then, operads have been used as universal objects representing a wide range of algebraic concepts. We give a brief definition and provide classic examples to highlight the issues to be discussed. \begin{defn} An {\em operad} $\{\Op{n} \; | \; n \in \mathbb {N} \}$ is a collection of objects $\Op{n}$ in a monoidal category endowed with certain extra structures: 1. $\Op{n}$ carries an action of the symmetric group $\Sg_n$. 2. There are composition maps \begin{equation} \Op{n} \otimes \Op{k_1} \otimes \cdots \otimes \Op{k_n} \rightarrow \Op{k_1 + \cdots + k_n} \label{e:operad} \end{equation} \indent \hspace{10pt} which satisfy certain well-known axioms, {\em cf}.\ \cite{may2}. \end{defn} This paper will be concerned mostly with operads in the context of topological spaces, where the objects $\Op{n}$ will be equivalence classes of geometric objects. \begin{exmp} These objects can be pictured as {\em trees} (Figure~\ref{btp}a). A tree is composed of corollas\footnote{A corolla is a collection of edges meeting at a common vertex.} with one external edge marked as a {\em root} and the remaining external edges as {\em leaves}. Given trees $s$ and $t$, basic compositions are defined as $s \circ_i t$, obtained by grafting the root of $s$ to the $i^{\rm th}$ leaf of $t$. This grafted piece of the tree is called a {\em branch}. \end{exmp} \begin{figure} [h] \centering {\includegraphics {btp.eps}} \caption{Trees, Bubbles, and Polygons} \label{btp} \end{figure} \begin{exmp} There is a dual picture in which {\em bubbles} replace corollas, {\em marked points} replace leaves, and the root is denoted as a point labeled $\infty$ (Figure~\ref{btp}b). Using the above notation, the composition $s \circ_i t$ is defined by fusing the $\infty$ of the bubble $s$ with the $i^{\rm th}$ marked point of $t$. 
The branches of the tree are now identified with {\em double points}, the places where bubbles intersect. \end{exmp} \subsection{} Taking yet another dual, we can define an operad structure on a collection of {\em polygons} (modulo an appropriate equivalence relation) as shown in Figure~\ref{btp}c. Each bubble corresponds to a polygon, where the number of marked and double points become the number of sides; the fusing of points is associated with the gluing of faces. The nicest feature of polygons is that, unlike corollas and bubbles, the iterated composition of polygons yields a polygon with marked diagonals (Figure~\ref{onepoly}). \begin{figure} [h] \centering {\includegraphics {onepoly.eps}} \caption{Polygon composition} \label{onepoly} \end{figure} Unlike the {\em rooted} trees, this {\em mosaic} operad is {\em cyclic} in the sense of Getzler and Kapranov~\cite[\S2]{cyclic}. The most basic case (Figure~\ref{polycomp}) shows how two polygons, with sides labeled $a$ and $b$ respectively, compose to form a new polygon. The details of this operad are made precise in ~\S\ref{mosaic}. \begin{figure} [h] \centering {\includegraphics {polycomp.eps}} \caption{{\em Mosaic} composition} \label{polycomp} \end{figure} \subsection{} \label{ss:lcubes} In the work of Boardman and Vogt~\cite[\S2.6]{bv}, an operad is presented using $m$ dimensional cubes $I^m \subset \R^m$. An element $\Lc{n}$ of this {\em little cubes} operad is the space of an ordered collection of $n$ cubes linearly embedded by $f_i:I^m \hookrightarrow I^m$, with disjoint interiors and axes parallel to $I^m$. The $f_i$'s are uniquely determined by the $2n$-tuple of points $(a_1, b_1, \ldots ,a_n, b_n)$ in $I^m$, corresponding to the images of the lower and upper vertices of $I^m$. 
An element $\sg \in \Sg_n$ acts on $\Lc{n}$ by permuting the labeling of each cube: $$(a_1, b_1, \ldots ,a_n, b_n) \mapsto (a_{\sg(1)}, b_{\sg(1)}, \ldots ,a_{\sg(n)}, b_{\sg(n)}).$$ The composition operation \eqref{e:operad} is defined by taking $n$ spaces $\Lc{k_i}$ (each having $k_i$ embedded cubes) and embedding them as an ordered collection into $\Lc{n}$. Figure~\ref{cubes} shows an example for the two dimensional case when $n = 4$. \begin{figure} [h] \centering {\includegraphics {cubes.eps}} \caption{{\em Little cubes} composition} \label{cubes} \end{figure} Boardman showed that the space of $n$ distinct cubes in $\R^m$ is homotopy equivalent to $\Con^n(\R^m)$, the configuration space of $n$ distinct labeled points in $\R^m$.\footnote{The equivariant version of this theorem is proved by May in~\cite[\S4]{may1}.} When $m = 2$, $\Con^n(\R^2)$ is homeomorphic to $\C^n - \Delta$, where $\Delta$ is the {\em thick} diagonal $\{(x_1, \ldots , x_n) \in \C^n \: | \: \exists \: i, j, \: i \neq j \:,\: x_i = x_j\}$. Since the action of $\Sg_n$ on $\C^n - \Delta$ is free, taking the quotient yields another space $(\C^n - \Delta) / \Sg_n$. It is well known that both of these spaces are aspherical: all their higher homotopy groups vanish~\cite{cdavis}. The following short exact sequence of fundamental groups results: $$ \pi_1 (\C^n - \Delta) \rightarrowtail \pi_1 ((\C^n - \Delta) / \Sg_n) \twoheadrightarrow \Sg_n.$$ But $\pi_1$ of $\C^n - \Delta$ is simply $\Pb_n$, the pure braid group. Similarly, $\pi_1$ of $\C^n - \Delta$ quotiented by all permutations of labelings is the braid group $\B_n$. Therefore, the short exact sequence above takes on the more familiar form: $$\Pb_n \rightarrowtail \B_n \twoheadrightarrow \Sg_n.$$ We will return to these ideas in~\S\ref{quasi}. 
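The one-dimensional composition just described can be made concrete: a little interval is an increasing affine embedding of $I$ into $I$, and operad composition substitutes the small configurations into the intervals of the big one. A minimal sketch (our own representation, not from~\cite{bv}):

```python
from fractions import Fraction

# A little interval is an increasing affine embedding of [0,1] into [0,1],
# stored as its endpoint pair (a, b) with 0 <= a < b <= 1.
def embed(interval, t):
    a, b = interval
    return a + (b - a) * t

def compose(outer, inner_lists):
    """Operad composition: substitute the configuration inner_lists[i]
    into the i-th interval of the configuration outer."""
    assert len(outer) == len(inner_lists)
    result = []
    for big, small_config in zip(outer, inner_lists):
        for (a, b) in small_config:
            result.append((embed(big, a), embed(big, b)))
    return result

half = Fraction(1, 2)
outer = [(Fraction(0), half), (half, Fraction(1))]
inner = [[(Fraction(0), half)], [(Fraction(0), half), (half, Fraction(1))]]
print(compose(outer, inner))  # three disjoint intervals inside [0,1]
```

Composing a 2-interval configuration with a 1-interval and a 2-interval configuration yields a 3-interval configuration, matching the arity count $k_1+\cdots+k_n$ in \eqref{e:operad}.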
\section {The Moduli Space} \subsection{} \label{ss:collide} The moduli space of Riemann spheres with $n$ punctures, $${\mathcal M}_0^{n}(\C) = \Con^n(\C \Pj^1)/\Pj \Gl_2(\C),$$ has been studied extensively~\cite{keel}. It has a Deligne-Mumford-Knudsen compactification ${{\overline{\mathcal M}}{^n_0(\C)}}$, a smooth variety of complex dimension $n-3$. In fact, this variety is defined over the integers; we will look at the {\em real} points of this space. These are the set of fixed points of ${{\overline{\mathcal M}}{^n_0(\C)}}$ under complex conjugation. \begin{defn} The moduli space \M{n} of configurations of $n$ smooth points on punctured stable real algebraic curves of genus zero is a compactification of the quotient $((\R \Pj^1)^n - \Delta)/\Pj \Gl_2(\R),$ where $\Delta$ is the thick diagonal. \end{defn} \begin{rem} This is an action of a non-compact group on a non-compact space. Geometric invariant theory gives a natural compactification for this quotient, defined combinatorially in terms of bubble trees or algebraically as a moduli space of real algebraic curves of genus zero with $n$ points, which are stable in the sense that they have only finitely many automorphisms. \end{rem} A point of \oM{n} can be visualized as a bubble (that is, $\R\Pj^1$) with $n$ {\em distinct} labeled points. For a particular labeling, the configuration space of such points gives us a fundamental domain of \oM{n}. There are $n!$ possible labelings. However, since there exists a copy of the dihedral group $D_n$ in $\Pj \Gl_2(\R)$, and since \oM{n} is defined as a quotient by $\Pj \Gl_2(\R)$, two labeled bubbles are identified by an action of $D_n$. Therefore, there are $\frac{1}{2}(n-1)!$ copies of the fundamental domain that make up \oM{n}. Since we remove the thick diagonal, these domains are open cells. 
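The count of fundamental domains is a direct orbit computation: $n!$ labelings identified under the dihedral group of order $2n$ gives $n!/(2n)=\frac{1}{2}(n-1)!$. A quick illustrative check:

```python
from math import factorial

def num_domains(n):
    """Number of copies of the fundamental domain: n! labelings of the
    marked points, identified under the dihedral group D_n of order 2n
    sitting inside PGL_2(R)."""
    assert factorial(n) % (2 * n) == 0
    return factorial(n) // (2 * n)

# Agrees with (n-1)!/2 for all n >= 3.
for n in range(4, 8):
    assert num_domains(n) == factorial(n - 1) // 2
print([num_domains(n) for n in range(4, 8)])  # [3, 12, 60, 360]
```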
In \M{n}, however, these marked points are allowed to `collide' in the following sense: As two adjacent points $p_1$ and $p_2$ of the bubble come closer together and try to collide, the result is a new bubble fused to the old at the point of collision (a double point), where the marked points $p_1$ and $p_2$ are now on the new bubble (Figure~\ref{bcollide}). Note that each bubble must have at least three marked or double points in order to be stable. \begin{figure}[h] \centering {\includegraphics {bcollide.eps}} \caption{Collision on bubbles} \label{bcollide} \end{figure} The mosaic operad encapsulates all the information of the bubbles, enabling one to look at the situation above from the vantage point of polygons. Having $n$ marked points on a circle now corresponds to an $n$-gon; when two adjacent sides $p_1$ and $p_2$ of the polygon try to collide, a diagonal of the polygon is formed such that $p_1$ and $p_2$ lie on one side of the diagonal (Figure~\ref{pcollide}). \begin{figure}[h] \centering {\includegraphics {pcollide.eps}} \caption{Collision on polygons} \label{pcollide} \end{figure} What is quite striking about \M{n} is that its homotopy properties are completely encapsulated in the fundamental group. \begin{thm} \textup{\cite[\S5.1]{djs}} \M{n} is aspherical. \end{thm} \noindent We will return to the structure of the fundamental group in~\S\ref{quasi}. \subsection{} \label{mosaic} We now turn to defining the mosaic operad and relating its properties with the structure of \oM{n}. Let $S^1$ be the unit circle bounding $\D$, the disk endowed with the Poincar\'{e} metric; this orients the circle. The geodesics in $\D$ correspond to open diameters of $S^1$ together with open circular arcs orthogonal to $S^1$. The group of isometries on $\D$ is $\Pj\Gl_2(\R)$~\cite[\S4]{rat}. A configuration of $n$ distinct points on $\R\Pj^1$ defines an {\em ideal} polygon in $\D$, with all vertices on the circle and geodesic sides. 
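The geodesics orthogonal to $S^1$ can be computed explicitly: for non-antipodal boundary points $a,b\in S^1$, the circular arc through them orthogonal to $S^1$ has center $c=\frac{2(a+b)}{|a+b|^2}$ and radius $r$ with $|c|^2=1+r^2$. A numerical sketch (our own formulas, stated here without proof; antipodal endpoints give a diameter instead):

```python
import cmath
import math

def geodesic_circle(alpha, beta):
    """Center and radius of the circle through e^{i*alpha} and e^{i*beta}
    that meets the unit circle orthogonally (endpoints not antipodal)."""
    a, b = cmath.exp(1j * alpha), cmath.exp(1j * beta)
    c = 2 * (a + b) / abs(a + b) ** 2
    r = abs(c - a)
    return c, r

c, r = geodesic_circle(0.0, 2.0)
# Orthogonality to the unit circle is equivalent to |c|^2 = 1 + r^2.
assert math.isclose(abs(c) ** 2, 1 + r ** 2)
print(c, r)
```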
Let $\G(n,0)$ be the space of such configurations, modulo $\Pj\Gl_2(\R)$, and let $\G(n,k)$ be the space of such ideal polygons marked with $k$ non-intersecting geodesics between non-adjacent vertices. We want to think of the elements of $\G(n,k)$ as limits of configurations in $\G(n,0)$ in which $k$ sets of points have coalesced (see discussion above). Specifying $k$ diagonals defines a decomposition of an $n$-gon into $k+1$ smaller polygons, and we can topologize $\G(n,k)$ as a union of $(k+1)$-fold products of $\G(m,0)$'s corresponding to this decomposition. For example, to the one dimensional space $\G(4,0)$ we attach zero dimensional spaces of the form $\G(3,0) \times \G(3,0)$. The combinatorics of these identifications can be quite complicated, but Stasheff's associahedra were invented to solve just such problems, as we will see in~\S\ref{ss:gl-mon} below. Henceforth, we will visualize elements of $\G(n,k)$ as $n$-gons with $k$ non-intersecting diagonals, and we write $\G(n)$ for the space of $n$-gons with any number of such diagonals. Elements of $\G(n)$ inherit a natural cyclic order on their sides, and we write $\G^L(n)$ for the space of $n$-gons with labeled sides. \begin{prop} \label{p:gln1} There exists a bijection between the points of \,\oM{n} and the elements of \,$\G^L(n,0)$. \end{prop} \begin{rem} Given an element in $\G(n,k)$, we can associate to it a dual tree. Its vertices are the barycenters of polygons, defined using the Riemann mapping theorem, and the branches are geodesics between barycenters. The leaves are geodesics that extend to points on $\R\Pj^1$ midway between two adjacent marked points on $\R\Pj^1$. It then follows that \M{n} is a space of {\em hyperbolic} planar trees. This perspective naturally gives a Riemann metric to \M{n}. \end{rem} \begin{defn} Given $G \in \G^L(m,l)$ and $G_i \in \G^L(n_i,k_i)$ (where $1 \leq i \leq m$), there are composition maps $$G \ _{a_{1}} \!\! \circ \! _{b_{1}} \ G_1 \ _{a_{2}} \!\! \circ \! 
_{b_{2}} \ \cdots \ _{a_{m}} \!\! \circ \! _{b_{m}} \ G_{m} \mapsto G_t,$$ where $G_t \in \G^L(-m + \sum n_i,\ m + l + \sum k_i)$. The object $G_t$ is obtained by gluing side $a_i$ of $G$ along side $b_i$ of $G_i$. The symmetric group $\Sg_n$ acts on $\G^L(n)$ by permuting the labeling of the sides. These operations define the {\em mosaic} operad $\{\G^L(n,k)\}$. \end{defn} \begin{rem} The one dimensional case of the little cubes operad is $\{\Li{n}\}$, the {\em little intervals} operad. An element of $\Li{n}$ is an ordered collection of $n$ embeddings of the interval $I \hookrightarrow I$, with disjoint interiors. The notion of {\em trees} and {\em bubbles}, shown in Figure~\ref{btp}, is encapsulated in this intervals operad. Furthermore, after embedding $I$ in $\R$ and identifying $\R \cup \infty$ with $\R\Pj^1$, the mosaic operad $\{\G^L(n,k)\}$ becomes a compactification of $\{\Li{n}\}$. \end{rem} \subsection{} \label{ss:gl-mon} We now define the fundamental domain of \M{n} as a concrete geometric object and present its connections with the mosaic operad. \begin{defn} Let ${\mathcal A}$ be the space of $n-3$ distinct points $\{t_1, \ldots, t_{n-3}\}$ on the interval $[0,1]$ such that $0 < t_1 < \cdots < t_{n-3} <1$. Identifying $\R \cup \infty$ with $\R\Pj^1$ carries the set $\{0, t_1, \ldots, t_{n-3}, 1, \infty\}$ of $n$ points onto $\R\Pj^1$. Therefore, there exists a natural inclusion of ${\mathcal A}$ in \M{n}. Define the {\em associahedron} $K_{n-1}$ as the closure of the space ${\mathcal A}$ in \M{n}. \end{defn} \begin{prop} \label{p:gln2} An interior point of \,$K_{n-1}$ corresponds to an element of \,$\G(n,0)$, and an interior point of a codim $k$ face corresponds to an element of \,$\G(n,k)$. \end{prop} \begin{proof} Since $\Sg_3 \subset \Pj\Gl_2(\R)$, one can fix three of the $n$ distinct points on $\R\Pj^1$ to be $0, 1,$ and $\infty$. 
Thus, the associahedron $K_{n-1}$ can be identified with the cell tiling $\M{n}$ and the proposition follows from the construction of $\G(n,k)$. \end{proof} The relation between the $n$-gon and $K_{n-1}$ is further highlighted by a work of Lee~\cite{lee}, where he constructs a polytope $Q_n$ that is dual to $K_{n-1}$, with one vertex for each diagonal and one facet for each triangulation of an $n$-gon. He then proves the symmetry group of $Q_n$ to be the dihedral group $D_n$. Restated, it becomes \begin{prop} \textup{\cite[\S5]{lee}} $D_n$ acts as a group of isometries on $K_{n-1}$. \end{prop} \begin{hnote} Stasheff classically defined the associahedron $K_{n-1}$ for use in homotopy theory~\cite[\S6]{jds} as a CW-ball with codim $k$ faces corresponding to using $k$ sets of parentheses meaningfully on ${n-1}$ letters.\footnote{From the definition above, the ${n-1}$ letters can be viewed as the points $\{0, t_1, \ldots, t_{n-3}, 1\}$.} It is easy to describe the associahedra in low dimensions: $K_2$ is a point, $K_3$ a line, and $K_4$ a pentagon. The two descriptions of the associahedron, using polygons and parentheses, are compatible: Figure~\ref{k4} illustrates $K_4$ as an example. The associahedra have continued to appear in a vast number of mathematical fields, gradually acquiring more and more structure, {\em cf}.\ \cite{zie}. \end{hnote} \begin{figure} [h] \centering {\includegraphics {k4.eps}} \caption{$K_4$} \label{k4} \end{figure} \subsection{} The polygon relation to the associahedron enables the use of the mosaic operad structure on $K_{n-1}$. \begin{prop} \label{p:asdecomp} \textup{\cite[\S2]{jds}} Each face of $K_{n-1}$ is a product of lower dimensional associahedra. \end{prop} \noindent In general, the codim $k-1$ face of the associahedron $K_{m-1}$ will decompose as $$K_{n_1-1} \times \cdots \times K_{n_k-1} \hookrightarrow K_{m-1},$$ where $\sum n_i = m + 2(k-1)$ and $n_i \geq 3$. 
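As an aside (a consistency check added here, not part of the original argument), the dimension count in this decomposition can be verified mechanically: since $\dim K_{n-1} = n-3$, any composition $\sum n_i = m + 2(k-1)$ with $n_i \geq 3$ forces the product to have dimension $(m-3)-(k-1)$, exactly as a codim $k-1$ face should. A minimal Python sketch:

```python
def dim_K(n):
    # Dimension of the associahedron K_{n-1}, indexed by an n-gon.
    return n - 3

def compositions(total, parts, minimum=3):
    # All ordered tuples (n_1, ..., n_parts), each n_i >= minimum, summing to total.
    if parts == 1:
        return [(total,)] if total >= minimum else []
    return [(first,) + rest
            for first in range(minimum, total - minimum * (parts - 1) + 1)
            for rest in compositions(total - first, parts - 1, minimum)]

# A codim k-1 face of K_{m-1} is K_{n_1-1} x ... x K_{n_k-1} with
# sum(n_i) = m + 2(k-1); its dimension should be (m-3) - (k-1).
for m in range(4, 10):
    for k in range(1, m - 1):
        for ns in compositions(m + 2 * (k - 1), k):
            assert sum(dim_K(n) for n in ns) == (m - 3) - (k - 1)
print("face dimensions consistent")
```

For instance, the codim one face $K_3 \times K_3$ of $K_5$ has dimension $0 + 0 + 2 = 2$, one less than $\dim K_5 = 3$.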
This parallels the mosaic operad structure $$G(n_1) \circ \cdots \circ G(n_k) \mapsto G(m),$$ where $G(n_i) \in \G^L(n_i,0),\ G(m) \in \G^L(m,k-1)$, and the gluing of sides is arbitrary. Therefore, the product in Proposition~\ref{p:asdecomp} is indexed by the internal vertices of the tree corresponding to the face of the associahedron. \begin{exmp} We look at the codim one faces of $K_5$. The three dimensional $K_5$ corresponds to a 6-gon, which has two distinct ways of adding a diagonal. One way, in Figure~\ref{k5codim1}a, will allow the 6-gon to decompose into a product of two 4-gons ($K_3$'s). Since $K_3$ is a line, this codim one face yields a square. The other way, in Figure~\ref{k5codim1}b, decomposes the 6-gon into a 3-gon ($K_2$) and a 5-gon ($K_4$). Taking the product of a point and a pentagon results in a pentagon. \end{exmp} \begin{figure} [h] \centering {\includegraphics {k5codim1.eps}} \caption{Codim one cells of $K_5$} \label{k5codim1} \end{figure} \begin{exmp} We look at the codim one faces of $K_6$. Similarly, Figure~\ref{k6codim1} shows the decomposition of the codim one faces of $K_6$, a pentagonal prism and $K_5$. \end{exmp} \begin{figure} [h] \centering {\includegraphics {k6codim1.eps}} \caption{Codim one cells of $K_6$} \label{k6codim1} \end{figure} \section {The Tessellation} \label{s:tess} \subsection{} \label{twisting} We extend the combinatorial structure of the associahedra to \M{n}. Propositions~\ref{p:gln1} and \ref{p:gln2} show the correspondence between the associahedra in \M{n} and $\G^L(n,k)$. We investigate how these copies of $K_{n-1}$ glue to form \M{n}. \begin{defn} Let $G \in \G^L(n,k)$ and $d$ be a diagonal of $G$. A {\em twist} along $d$, denoted by $\nabla_d (G)$, is the element of $\G^L(n,k)$ obtained by `breaking' $G$ along $d$ into two parts, `twisting' one of the pieces, and `gluing' them back (Figure~\ref{twist}). 
\end{defn} \begin{figure} [h] \centering {\includegraphics {twist.eps}} \caption{{\em Twist} along $d$} \label{twist} \end{figure} \noindent The twisting operation is well-defined since the diagonals of an element in $\G^L(n,k)$ do not intersect. Furthermore, it does not matter which piece of the polygon is twisted since the two results are identified by an action of $D_n$. It immediately follows that $\nabla_d \cdot \nabla_d = e,$ the identity element. \begin{prop} \label {p:twist} Two elements, $G_1, G_2 \in \G^L(n,k)$, representing codim $k$ faces of associahedra, are identified in \M{n} if there exist diagonals $d_1, \ldots, d_r$ of $G_1$ such that $$(\nabla_{d_1} \cdots \nabla_{d_r}) (G_1) = G_2.$$ \end{prop} \begin{proof} As two adjacent points $p_1$ and $p_2$ on $\R\Pj^1$ collide, the result is a new bubble fused to the old at a point of collision $p_3$, where $p_1$ and $p_2$ are on the new bubble. The location of the three points $p_i$ on the new bubble is {\em irrelevant} since $\Sg_3 \subset \Pj\Gl_2(\R)$. In terms of polygons, this means $\nabla_d$ does not affect the cell, where $d$ is the diagonal representing the double point $p_3$. In general, it follows that the labels of triangles can be permuted without affecting the cell. Let $G$ be an $n$-gon with diagonal $d$ partitioning $G$ into a square and an $(n-2)$-gon. Figure~\ref{twistpf} shows that since the square decomposes into triangles, the cell corresponding to $G$ is invariant under the action of $\nabla_d$. Since any partition of $G$ by a diagonal $d$ can be decomposed into triangles, it follows by induction that $\nabla_d$ does not affect the cell. \end{proof} \begin{figure} [h] \centering {\includegraphics {twistpf.eps}} \caption{$\nabla_d$ does not affect the cell} \label{twistpf} \end{figure} \begin{thm} \label{t:kxs} There exists a surjection $$K_{n-1} \times_{D_n} \Sg_n \rightarrow \M{n},$$ which is a bijection on the interior of the cells. 
In particular, $\frac{1}{2}(n-1)!$ copies of $K_{n-1}$ tessellate \M{n}. \end{thm} \begin{proof} The bijection on the interior of the cells follows immediately from the discussion in~\S\ref{ss:collide}. The map is not an injection since the boundaries of the associahedra are glued according to Proposition~\ref{p:twist}. \end{proof} \subsection{} In Figure~\ref{pieces}, a piece of \M{5} represented by labeled polygons with diagonals is shown. Note how two codim one pieces (lines) glue together and four codim two pieces (points) glue together. Understanding this gluing now becomes a combinatorial problem related to $\G^L(n,k)$. \begin{figure} [h] \centering {\includegraphics {m05pieces.eps}} \caption{A piece of \M{5}} \label{pieces} \end{figure} \begin{nota} Let \Bnd{x}{\mathfrak X} be the number of codim $x$ cells in a CW-complex $\mathfrak X$. For a fixed codim $y_2$ cell in \M{n}, and for $y_1 < y_2$, let \Cobnd{y_1}{y_2} be the number of codim $y_1$ cells in \M{n} whose boundary contains the codim $y_2$ cell. Note the number \Cobnd{y_1}{y_2} is well-defined by Theorem~\ref{t:kxs}. \end{nota} \begin{lem} \label{l:cayley} $$\Bnd{k}{K_{n-1}} = \frac{1}{k+1} \; \binom{n-3}{k} \; \binom{n-1+k}{k}.$$ \end{lem} \begin{proof} This is obtained by just counting the number of $n$-gons with $k$ non-intersecting diagonals, done by A. Cayley in 1891~\cite{cay}. \end{proof} \begin{lem} \label{l:codim} $$\Cobnd{k-t}{k} = 2^t \; \binom{k}{t}.$$ \end{lem} \begin{proof} The boundary components of a cell corresponding to an element in $\G^L(n,k)$ are obtained by adding non-intersecting diagonals. To look at the coboundary cells, diagonals need to be {\em removed}. For each diagonal removed, two cells result (coming from the {\em twist} operation); removing $t$ diagonals gives $2^t$ cells. We then look at all possible ways of removing $t$ out of $k$ diagonals. 
\end{proof} \begin{thm} \label{t:euler} \begin{equation} \chi (\M{n}) = \begin{cases} 0 & n \text{ even}\\ (-1)^{\frac{n-3}{2}}(n-2)((n-4)!!)^2 & n \text{ odd.} \end{cases} \label{e:euler} \end{equation} \end{thm} \begin{proof} It is easy to show the following: $$\Bnd{k}{\M{n}} \cdot \Cobnd{0}{k} = \Bnd{0}{\M{n}} \cdot \Bnd{k}{K_{n-1}}.$$ Using Theorem~\ref{t:kxs} and Lemmas~\ref{l:cayley} and~\ref{l:codim}, we solve for \Bnd{k}{\M{n}}; but this is simply the number of codim $k$ cells in \M{n}. Therefore, $$\chi (\M{n}) = \sum_{k=0}^{n-3} (-1)^{n-3-k} \;\; \frac{(n-1)!}{ 2^{k+1}} \:\; \frac{1}{k+1} \; \binom{n-3}{k} \; \binom{n-1+k}{k}.$$ This equation can be reduced to the desired form. \end{proof} \begin{rem} Professor F.\ Hirzebruch has kindly informed us that he has shown, using techniques of Kontsevich and Manin~\cite{km}, that the signature of ${{\overline{\mathcal M}}{^n_0(\C)}}$ is given by \eqref{e:euler}. He remarks that the equivalence of this signature with the Euler number of the space of real points is an elementary consequence of the Atiyah-Singer $G$-signature theorem. \end{rem} \section {The Hyperplanes} \subsection{} \label{braidarr} Another approach to \M{n} is from a {\em top-down} perspective using hyperplane arrangements as formulated by Kapranov~\cite[\S4.3]{kapchow} and described by Davis, Januszkiewicz, and Scott~\cite[\S0.1]{djs}. \begin{defn} Let $V^n \subset \R^{n-1}$ be the hyperplane defined by $\Sigma x_i = 0$. For $1 \leq i < j \leq n-1$, let $H^n_{ij} \subset V^n$ be the hyperplane defined by $x_i = x_j$. The {\em braid arrangement} is the collection of subspaces of $V^n$ generated by all possible intersections of the $H^n_{ij}$. \end{defn} If $\Hu^n$ denotes the collection of subspaces $\{H^n_{ij}\}$, then $\Hu^n$ cuts $V^n$ into $(n-1)!$ simplicial cones. Let $\Sg(V^n)$ be the sphere in $V^n$ and let $\Pj(V^n)$ be the projective sphere in $V^n$ (that is, $\R\Pj^{n-3}$). 
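Before pursuing the hyperplane picture further, we pause for a numerical sanity check (our aside, not part of the original argument): by the proof of Theorem~\ref{t:euler}, the number of codim $k$ cells of \M{n} is $\frac{(n-1)!}{2^{k+1}}$ times Cayley's count from Lemma~\ref{l:cayley}, and the alternating sum should reproduce the closed form. A minimal Python sketch:

```python
from fractions import Fraction
from math import comb, factorial

def num_cells(n, k):
    # Codim-k cells of the tessellated moduli space: (n-1)!/2^(k+1) times
    # Cayley's count of n-gons with k non-intersecting diagonals.
    val = (Fraction(factorial(n - 1), 2**(k + 1))
           * Fraction(comb(n - 3, k) * comb(n - 1 + k, k), k + 1))
    assert val.denominator == 1          # a cell count must be an integer
    return int(val)

def chi_closed(n):
    # Closed form: 0 for even n, else (-1)^((n-3)/2) (n-2) ((n-4)!!)^2.
    if n % 2 == 0:
        return 0
    double_fact = 1
    for i in range(n - 4, 0, -2):
        double_fact *= i
    return (-1)**((n - 3) // 2) * (n - 2) * double_fact**2

for n in range(4, 11):
    chi = sum((-1)**(n - 3 - k) * num_cells(n, k) for k in range(n - 2))
    assert chi == chi_closed(n)
print("Euler characteristic formula verified for n = 4, ..., 10")
```

For $n=5$ this gives $12 - 30 + 15 = -3$: the twelve pentagons of \M{5} with their $30$ edges and $15$ vertices.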
Let \Ba{n} be the intersection of $\Hu^n$ with $\Pj(V^n)$; the arrangement \Ba{n} cuts $\Pj(V^n)$ into $\frac{1}{2}(n-1)!$ open $n-3$ simplices. \begin{defn} Define $\ba^k$ to be a codim $k$ {\em irreducible} cell of $\Pj(V^n)$ if $\binom{k+1}{2}$ hyperplanes of $\Hu^n$ intersect there.\footnote{The use of the word {\em irreducible} comes from \cite{djs} in reference to Coxeter groups.} \end{defn} \begin{exmp} We look at the case when $n=5$. Figure~\ref{svpv} shows the `scars' on the manifolds made by $\Hu^5$. On $\Pj(V^5)$, there are four places where three hyperplanes intersect, corresponding to the four codim two irreducible points. \end{exmp} \begin{figure} [h] \centering {\includegraphics {svpv.eps}} \caption{\protect{$\Sg(V^5) \rightarrow \Pj(V^5)$}} \label{svpv} \end{figure} \begin{defn} Replace $\ba^k$ with $\sba^k$, the sphere bundle associated to the normal bundle of $\ba^k \subset \Pj(V^n)$. This process yields a manifold with boundary. Then projectify $\sba^k$ into $\pba^k$, the projective sphere bundle. This defines a manifold without boundary, called the {\em blow-up of \,$\Pj(V^n)$ along $\ba^k$}. \end{defn} \begin{rem} Replacing $\ba^k$ with $\sba^k$ for {\em any} dimension $k$ creates a {\em new} manifold with boundary. However, blowing up along $\ba^{k}$ defines a new manifold for all dimensions {\em except} codim one. That is, for codim one, projectifying $\sba^k$ into $\pba^k$ annuls the process of replacing $\ba^k$ with $\sba^k$. \end{rem} \begin{prop} \textup{\cite[\S4.3]{kapchow}} \label{p:kap} The iterated blow-up of \,$\Pj(V^n)$ along the cells $\{\ba^k\}$ in {\em increasing} order of dimension yields \M{n}. It is inessential to specify the order in which cells $\{\ba^k\}$ of the {\em same} dimension are blown up. \end{prop} Therefore, the compactification of \oM{n} is obtained by replacing the set $\{\ba^k\}$ with $\{\pba^k\}$.
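The counts in the $n=5$ example earlier in this subsection, twelve simplices and four triple-intersection points, can be reproduced by elementary bookkeeping (an illustrative sketch of ours, not from the original): encode each hyperplane $H^5_{ij}$ as the pair $(i,j)$; chambers correspond to orderings of $(x_1, \ldots, x_4)$, and triple points to triangles among the pairs.

```python
from itertools import combinations, permutations

HYPERPLANES = list(combinations(range(1, 5), 2))   # the six H_ij in V^5

def sign_vector(x):
    # Which side of each hyperplane x_i = x_j the point lies on.
    return tuple((x[i] > x[j]) - (x[i] < x[j]) for i, j in HYPERPLANES)

# One representative point per ordering of (x_1, ..., x_4).
chambers = {sign_vector({v: r for r, v in enumerate(p)})
            for p in permutations(range(1, 5))}
assert len(chambers) == 24            # (n-1)! simplicial cones in V^5
projective = {frozenset({s, tuple(-t for t in s)}) for s in chambers}
assert len(projective) == 12          # (1/2)(n-1)! simplices in P(V^5)

def rank(pairs):
    # Rank of the normals {e_i - e_j}: vertices touched minus components.
    parent = {v: v for pair in pairs for v in pair}
    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v
    for i, j in pairs:
        parent[find(i)] = find(j)
    return len(parent) - len({find(v) for v in parent})

# Three hyperplanes meet in a projective point exactly when their
# normals have rank 2, i.e. the three pairs form a triangle.
triple_points = {frozenset(v for pair in t for v in pair)
                 for t in combinations(HYPERPLANES, 3) if rank(t) == 2}
assert len(triple_points) == 4        # the four codim two irreducible points
print("P(V^5): 12 simplices, 4 triple points")
```

Each of the four triple points lies on $\binom{3}{2} = 3$ hyperplanes, matching the definition of an irreducible cell with $k=2$.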
The {\em closure} of \oM{n} in $\Pj(V^n)$ is obtained by replacing the set $\{\ba^k\}$ with \{$\sba^k$\}; this procedure truncates each $n-3$ simplex of $\Pj(V^n)$ into the associahedron $K_{n-1}$. We explore this method of truncation in~\S\ref{ss:truncate}. \begin{exmp} \label {e:m05blowup} The blow-up of $\Pj(V^5)$ yielding \M{5} is shown in Figure~\ref{pvm05}. The arrangement \Ba{5} on $\Pj(V^5) \simeq \R\Pj^2$ yields six lines forming twelve $2$-simplices; the irreducible components of codim two turn out to be the points $\{\ba^2_1, \ldots, \ba^2_4\}$ of triple intersection. Blowing up along these components, we get $S^1$ as a hexagon for $\sba^2_i$ and $\R\Pj^1$ as a triangle for $\pba^2_i$. The associahedron $K_4$ is a pentagon, and the space \M{5} becomes tessellated by twelve such cells (shaded), an ``evil twin'' of the dodecahedron. \M{5} appears as the connected sum of five real projective planes. \end{exmp} \begin{figure} [h] \centering {\includegraphics {pvm05.eps}} \caption{\protect{$\Pj(V^5) \rightarrow \M{5}$}} \label{pvm05} \end{figure} \begin{hnote} The diagram of \M{5} shown in Figure~\ref{pvm05} is first found in a different context by Brahana and Coble in $1926$~\cite[\S1]{bc} relating to possibilities of maps with twelve five-sided countries. \end{hnote} \subsection{} Another way of looking at the moduli space comes from observing the inclusion $\Sg_3 \subset \Pj \Gl_2 (\R)$. Since \M{n} is defined as $n$ distinct points on $\R \Pj^1$ quotiented by $\Pj\Gl_2 (\R)$, one can fix three of these points to be $0, 1,$ and $\infty$. From this perspective we see that \M{3} is a point. When $n=4$, the {\em cross-ratio} is a homeomorphism from \M{4} to $\R\Pj^1$, the result of identifying three of the four points with $0, 1,$ and $\infty$. In general, \M{n} becomes a manifold blown up from an $n-3$ dimensional torus, coming from the $(n-3)$-fold products of $\R \Pj^1$. 
Therefore, the moduli space {\em before} compactification can be defined as $$((\R \Pj^1)^n - \Delta^*)/\Pj \Gl_2(\R),$$ where $\Delta^* = \{(x_1, \ldots , x_n) \in (\R \Pj^1)^n \:|\: $at least 3 points collide\}. Compactification is accomplished by blowing up along $\Delta^*$. \begin{exmp} An illustration of \M{5} from this perspective appears in Figure~\ref{m05c}. From the five marked points on $\R \Pj^1$, three are fixed leaving two dimensions to vary, say $x_1$ and $x_2$. The set $\Delta$ is made up of seven lines $\{x_1, x_2 = 0, 1, \infty\}$ and $\{x_1 = x_2\}$, giving a space tessellated by six squares and six triangles. Furthermore, $\Delta^*$ becomes the set of three points $\{x_1=x_2 = 0,1,\infty\}$; blowing up along these points yields the space \M{5} tessellated by twelve pentagons. This shows \M{5} as the connected sum of a torus with three real projective planes. \end{exmp} \begin{figure} [h] \centering {\includegraphics {m05c.eps}} \caption{\M{5} from the torus} \label{m05c} \end{figure} \begin{exmp} \label{e:m06} In Figure~\ref{m06c}, a rough sketch of \M{6} is shown as the blow-up of a three torus. The set $\Delta^*$ associated to \M{6} has ten lines \{$x_i=x_j=0,1,\infty$\} and \{$x_1=x_2=x_3$\}, and three points \{$x_1=x_2=x_3=0,1,\infty$\}. The lines correspond to the hexagonal prisms, nine cutting through the faces, and the tenth (hidden) running through the torus from the bottom left to the top right corner. The three points correspond to places where four of the prisms intersect. The shaded region has three squares and six pentagons as its codim one faces. In fact, all the top dimensional cells that form \M{6} turn out to have this property; these cells are the associahedra $K_5$ (see Figure~\ref{k6codim1}b). \end{exmp} \begin{figure} [h] \centering {\includegraphics {m06c.eps}} \caption{\M{6}} \label{m06c} \end{figure} \subsection{} We now introduce a construction which clarifies the structure of \M{n}. 
\begin{defn} \textup{\cite[\S4]{kap}} A double cover of \M{n}, denoted by \dM{n}, is obtained by fixing the $n^{\rm th}$ marked point on $\R\Pj^1$ to be $\infty$ and assigning it an orientation.\footnote{Kapranov uses the notation $\tilde S^{n-3}$ to represent this double cover.} \end{defn} \begin{exmp} Figure~\ref{m04} shows the polygon labelings of \dM{4} and \M{4}, being tiled by six and three copies of $K_3$ respectively. In this figure, the label $4$ has been set to $\infty$. Note that the map $\dM{4} \rightarrow \M{4}$ is the antipodal quotient. \end{exmp} \begin{figure} [h] \centering {\includegraphics {m04.eps}} \caption{\protect{$\dM{4} \rightarrow \M{4}$}} \label{m04} \end{figure} The double cover can be constructed using blow-ups similar to the method described above; instead of blowing up the projective sphere $\Pj(V^n)$, we blow-up the sphere $\Sg(V^n)$. Except for the anomalous case of \dM{4}, the double cover is a {\em non-orientable} manifold. Note also that the covering map $\dM{n} \rightarrow \M{n}$ is the antipodal quotient, coming from the map $\Sg(V^n) \rightarrow \Pj(V^n)$. Being a double cover, \dM{n} will be tiled by $(n-1)!$ copies of $K_{n-1}$.\footnote{These copies of $K_{n-1}$ are in bijection with the vertices of the {\em permutohedron} $P_{n-1}$~\cite{kap}.} It is natural to ask how these copies glue to form \dM{n}. \begin{defn} A {\em marked twist} of an $n$-gon $G$ along its diagonal $d$, denoted by $\widetilde \nabla_d (G)$, is the polygon obtained by breaking $G$ along $d$ into two parts, reflecting the piece that does {\em not} contain the side labeled $\infty$, and gluing them back together. \end{defn} The two polygons at the right of Figure~\ref{twist} turn out to be {\em different} elements in \dM{n}, whereas they are identified in \M{n} by an action of $D_n$. The following is an immediate consequence of the above definitions and Theorem~\ref{t:kxs}. 
\begin{cor} \label{c:kxs} There exists a surjection $$K_{n-1} \times_{\Z_n} \Sg_n \rightarrow \dM{n}$$ which is a bijection on the interior of the cells. \end{cor} \begin{rem} The spaces on the left define the classical $A_{\infty}$ operad~\cite[\S2.9]{cyclic}. \end{rem} \begin{thm} The following diagram is commutative: $$\begin{CD} (K_{n-1} \times \Sg_n)/_{\widetilde \nabla} @>>> \dM{n}\\ @VVV @VVV\\ (K_{n-1} \times \Sg_n)/_{\nabla} @>>> \M{n} \end{CD}$$ where the vertical maps are antipodal identifications and the horizontal maps are a quotient by $\Z_n$. \end{thm} \begin{proof} Look at $K_{n-1} \times \Sg_n$ by associating to each $K_{n-1}$ a particular labeling of an $n$-gon. We obtain $(K_{n-1} \times \Sg_n)/_{\widetilde \nabla}$ by gluing the associahedra along codim one faces using $\widetilde \nabla$ (keeping the side labeled $\infty$ fixed). It follows that two associahedra will {\em never} glue if their corresponding $n$-gons have $\infty$ labeled on different sides of the polygon. This partitions $\Sg_n$ into $\Sg_{n-1} \cdot \Z_n$, with each element of $\Z_n$ corresponding to $\infty$ labeled on a particular side of the $n$-gon. Furthermore, Corollary~\ref{c:kxs} tells us that each set of the $(n-1)!$ copies of $K_{n-1}$ glue to form \dM{n}. Therefore, $(K_{n-1} \times \Sg_n)/_{\widetilde \nabla} \:=\: (K_{n-1} \times \Sg_{n-1})/_{\widetilde \nabla} \times \Z_n \:=\: \dM{n} \times \Z_n.$ \end{proof} \section{The Blow-Ups} \subsection{} \label{ss:observe} The spaces \M{n} and $\R\Pj^{n-3}$ differ only by blow-ups, making the study of their structures crucial. Looking at the arrangement \Ba{n} on $\Pj(V^n)$, there turn out to be $n-1$ irreducible points $\{\ba^{n-3}\}$ in {\em general position}. In other words, these points can be thought of as vertices of an $n-3$ simplex with an additional point at the center. Between every two $\ba^{n-3}$ points of \Ba{n}, there exists a $\ba^{n-4}$ line, resulting in $\binom{n-1}{n-3}$ such irreducible lines. 
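Both counts can be checked by brute force (an illustrative aside of ours, not from the original): an intersection of hyperplanes $x_i = x_j$ merges indices of $\{1, \ldots, n-1\}$ into blocks, it is irreducible precisely when a single block is merged, and its codim is one less than the block size. A Python sketch:

```python
from itertools import chain, combinations
from math import comb

def merged_blocks(pairs):
    # Union-find: indices identified by the chosen hyperplanes x_i = x_j.
    parent = {v: v for pair in pairs for v in pair}
    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v
    for i, j in pairs:
        parent[find(i)] = find(j)
    groups = {}
    for v in parent:
        groups.setdefault(find(v), set()).add(v)
    return [frozenset(g) for g in groups.values()]

def irreducible_cells(n):
    # codim -> set of irreducible intersections (one merged block each).
    hyperplanes = list(combinations(range(1, n), 2))
    cells = {}
    for sub in chain.from_iterable(combinations(hyperplanes, r)
                                   for r in range(1, len(hyperplanes) + 1)):
        groups = merged_blocks(sub)
        if len(groups) == 1:
            cells.setdefault(len(groups[0]) - 1, set()).add(groups[0])
    return cells

for n in (5, 6, 7):
    cells = irreducible_cells(n)
    assert len(cells[n - 3]) == n - 1                  # irreducible points
    assert len(cells[n - 4]) == comb(n - 1, n - 3)     # irreducible lines
print("irreducible points and lines counted for n = 5, 6, 7")
```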
In general, $k$ irreducible points of \Ba{n} span a \mbox{$k-1$} dimensional irreducible cell; restating this, we get \begin{prop} \label{p:icells} The number of irreducible components $\ba^k$ in \Ba{n} equals \begin{equation} \binom{n-1}{k+1}. \label{e:countirr} \end{equation} \end{prop} \noindent The construction of the braid arrangement shows that around a point $\ba^{n-3}$ of $\Pj(V^n)$, the structure of \Ba{n} resembles the barycentric subdivision of an $n-3$ simplex. We look at some concrete examples to demonstrate this. \begin{exmp} In the case of \M{5}, Figure~\ref{pvm05}a shows the $\ba^2$ cells in general position; there are four points, three belonging to vertices of a $2$-simplex, and one in the center of this simplex. Between every two of these points, there exists a $\ba^1$; we see six such lines. Since these lines are of codim one, they need not be blown up. Figure~\ref{pvm05}b shows the structure of a blown up point $\ba^2$ in \M{5}. Notice that $\sba^2$ is a hexagon and $\pba^2$ is a triangle. It is no coincidence that these correspond exactly to \dM{4} and \M{4} (see Figure~\ref{m04}). \end{exmp} \begin{exmp} For the three dimensional \M{6}, the $\ba^3$ cells {\em and} the $\ba^2$ cells need to be blown up, {\em in that order}. Choose a codim three cell $\ba^3$; a neighborhood around $\ba^3$ will resemble the barycentric subdivision of a $3$-simplex. Figure~\ref{braid6} shows four tetrahedra, each being made up of six tetrahedra (some shaded), pulled apart in space such that when glued together the result will constitute the aforementioned subdivision. The barycenter is the point $\ba^3$. \begin{figure} [h] \centering {\includegraphics {braid6.eps}} \caption{Barycentric subdivision of a $3$-simplex} \label{braid6} \end{figure} The left-most piece of Figure~\ref{blow6} shows one of the tetrahedra from Figure~\ref{braid6}. 
The map $f_1$ takes the barycenter $\ba^3$ to $\sba^3$ whereas the map $f_2$ takes each $\ba^2$ going through the barycenter to $\sba^2$. When looking down at the resulting `blown up' tetrahedron piece, there are six pentagons (shaded) with a hexagon hollowed out in the center. Taking $\sba^2$ to $\pba^2$ turns these hexagons into triangles. \begin{figure} [h] \centering {\includegraphics {blow6.eps}} \caption{Blow-up of vertex and lines} \label{blow6} \end{figure} Putting the four `blown up' tetrahedra pieces together, the faces of $\sba^3$ make up a two dimensional sphere tiled by 24 pentagons, with 8 hexagons (with antipodal maps) cut out. This turns out to be \dM{5}; projectifying $\sba^3$ to $\pba^3$ yields \M{5} as shown in Figure~\ref{dm05}. \begin{figure} [h] \centering {\includegraphics {dm05.eps}} \caption{\protect{$\dM{5} \rightarrow \M{5}$}} \label{dm05} \end{figure} \end{exmp} This pattern seems to indicate that for \M{n}, blowing up along the point $\ba^{n-3}$ will yield \M{n-1}. But what happens in general, when a codim $k$ cell $\ba^k$ is blown up? A glimpse of the answer was seen above with regard to the hexagons and triangles showing up in \M{6}. \subsection{} To better understand \M{n}, we analyze the structure of $\ba^k \in \Pj(V^n)$ before blow-ups and $\pba^k \in \M{n}$ after blow-ups. This is done through the eyes of mosaics, looking at the faces of associahedra surrounding each blown up component of \Ba{n}. The following is a corollary of Proposition~\ref{p:icells}. \begin{cor} \label{c:icells} Each irreducible cell \,$\ba^k$ corresponds to a choice of $k+1$ elements from the set $\{1, \ldots, n-1\}$. \end{cor} Choose an arbitrary $\ba^k$ and assign it such a choice, say $\{p_1, \ldots, p_{k+1}\}$, where $p_i \in \{1, \ldots, n-1\}$. 
We can think of this as an $n$-gon having a diagonal $d$ partitioning it such that $k+1$ labeled sides $\{p_1, \ldots, p_{k+1}\}$ lie on one side and $n-k-1$ labeled sides $\{p_{k+2}, \ldots, p_{n-1}, n\}$ lie on the other. Using the mosaic operad structure, $d$ decomposes the $n$-gon into $G_1 \circ \,G_2$, where $G_1 \in \G^L(k+2)$ and $G_2 \in \G^L(n-k)$, with the new sides $d_i$ of $G_i$ coming from $d$. Note that $G_1 \circ \,G_2$ corresponds to the product of associahedra $K_{k+1} \times K_{n-k-1}$. There are $(k+1)!$ different ways in which $\{p_1, \ldots, p_{k+1}\}$ can be arranged to label $G_1$. However, since {\em twisting} is allowed along $d_1$, we get $\frac{1}{2}(k+1)!$ different labelings of $G_1$, each corresponding to a $K_{k+1}$. But observe that this is {\em exactly} how one gets \M{k+2}, where the associahedra glue as defined in \S\ref{twisting}. Therefore, a fixed labeling of $G_2$ gives $\M{k+2} \times K_{n-k-1}$; all possible labelings result in \begin{thm} In \,\M{n}, each irreducible cell \,$\ba^k$ in \,\Ba{n} becomes \begin{equation} \M{k+2} \times \M{n-k}. \label{e:mxm} \end{equation} \end{thm} \begin{exmp} Since \M{3} is a point, the blown up $\ba^{n-3}$ cell becomes \M{n-1}, matching the earlier observations of \S\ref{ss:observe}. Furthermore, \eqref{e:countirr} shows there to be \mbox{$n-1$} such structures. \end{exmp} \begin{exmp} Although blowing up along codim one components does not affect the resulting manifold, we observe their presence in \M{5}. From \eqref{e:countirr}, we get six such $\ba^1$ cells which become \M{4} after blow-ups. The \M{4}'s are seen in Figure~\ref{pvm05} as the six lines cutting through $\R\Pj^2$. Note that every line is broken into six parts, each part being a $K_3$. 
\end{exmp} \begin{exmp} The space \M{6}, illustrated in Figure~\ref{m06c}, moves a dimension higher.\footnote{Although this figure is not constructed from the braid arrangement, it is homeomorphic to the structure described by the braid arrangement.} There are ten $\ba^2$ cells, each becoming $\M{4} \times \M{4}$. These are the hexagonal prisms that cut through the three torus as described in Example~\ref{e:m06}. \end{exmp} \subsection{} The question arises as to {\em why} $\, \M{n-k}$ appears in \M{n}. The answer lies in the braid arrangement of hyperplanes. Taking \M{6} as an example, blowing up along each point $\ba^3$ in \Ba{6} uses the following procedure: A small spherical neighborhood is drawn around $\ba^3$ and the inside of the sphere is removed, resulting in $\sba^3$. Observe that this sphere (which we denote as $\Sh$) is engraved with great arcs coming from \Ba{6}. Projectifying, $\sba^3$ becomes $\pba^3$, and $\Sh$ becomes the projective sphere $\Pj\Sh$. Amazingly, the engraved arcs on $\Pj\Sh$ are \Ba{5}, and $\Pj\Sh$ can be thought of as $\Pj(V^5)$. Furthermore, blowing up along the lines $\ba^2$ of \Ba{6} corresponds to blowing up along the points $\ba^2$ of \Ba{5} in $\Pj\Sh$. As before, this new etching on $\Pj\Sh$ translates into an even lower dimensional braid arrangement, \Ba{4}. It is not hard to see how this generalizes in the natural way: For \M{n}, the iterated blow-ups along the cells $\{\ba^{n-3}\}$ up to $\{\ba^2\}$ in turn create braid arrangements within braid arrangements. Therefore, $\M{n-k}$ is seen in $\M{n}$. \subsection{} \label{ss:truncate} So far we have been looking at the structure of the irreducible cells $\ba^k$ before and after the blow-ups. 
We now study how the $n-3$ simplex (tiling $\Pj(V^n)$) is truncated by blow-ups to form $K_{n-1}$ (tiling \M{n}).\footnote{For a detailed construction of this truncation from another perspective, see Appendix B of~\cite{jds2}.} Given a regular $n$-gon with one side marked $\infty$, define $\Gc$ to be the set of such polygons with one diagonal. \begin{defn} For $G_1, G_2 \in \Gc$, create a new polygon $G_{1,2}$ (with {\em two} diagonals) by {\em superimposing} the images of $G_1$ and $G_2$ on each other (Figure~\ref{f:simpose}). $G_1$ and $G_2$ are said to satisfy the {\em \SI\ condition} if $G_{1,2}$ has non-intersecting diagonals. \end{defn} \begin{figure} [h] \centering {\includegraphics {simpose.eps}} \caption{{\em Superimpose}} \label{f:simpose} \end{figure} \begin{rem} It follows from \S\ref{ss:gl-mon} that elements of $\Gc$ correspond bijectively to the codim one faces of $K_{n-1}$. They are {\em adjacent} faces in $K_{n-1}$ if and only if they satisfy the \SI\ condition. Furthermore, the codim two cell of intersection in $K_{n-1}$ corresponds to the superimposed polygon. \end{rem} The diagonal of each element $G_i \in \Gc$ partitions the $n$-gon into two parts, with one part {\em not} having the $\infty$ label; call this the {\em free part of $G_i$}. Define the set $\Gc^i$ to be elements of $\Gc$ having $i$ sides on their free parts. It is elementary to show that the order of $\Gc^i$ is $n-i$ (for $1 < i < n-1$). In particular, the order of $\Gc^2$ is $n-2$, the number of sides (codim one faces) of an $n-3$ simplex. Arbitrarily label each face of the simplex with an element of $\Gc^2$. \begin{rem} For some adjacent faces of the $n-3$ simplex, the \SI\ condition is not satisfied. This is an obstruction to the simplex becoming $K_{n-1}$. As we continue to truncate the cell, more faces will begin to satisfy the \SI\ condition. We note that once a particular labeling is chosen, the labels of all the new faces coming from truncations (blow-ups) will be forced.
\end{rem} When the zero dimensional cells are blown up, two vertices of the simplex are truncated. The labeling of the two new faces corresponds to the two elements of $\Gc^{n-2}$. We choose the vertices and the labels such that the \SI\ condition is satisfied with respect to the {\em new} faces and {\em their} adjacent faces. Figure~\ref{f:trunk4} shows the case for the $2$-simplex and $K_4$ (compare with Figure~\ref{pvm05}). \begin{figure} [h] \centering {\includegraphics {trunk4.eps}} \caption{Truncation of $K_4$ by blow-ups} \label{f:trunk4} \end{figure} The blow-up of one dimensional cells results in the truncation of three lines. As before, the labels of the three new faces correspond to the three elements of $\Gc^{n-3}$, choosing edges and the labels such that the \SI\ condition is satisfied with respect to the new faces and their adjacent faces. Figure~\ref{f:trunk5} shows the case for the $3$-simplex and $K_5$ (compare with Figures~\ref{braid6} and~\ref{blow6}). \begin{figure} [h] \centering {\includegraphics {trunk5.eps}} \caption{Truncation of $K_5$ by blow-ups} \label{f:trunk5} \end{figure} As we iterate the blow-ups in Proposition~\ref{p:kap}, we jointly truncate the \mbox{$n-3$} simplex using the above process. The blow-ups of the codim $k$ irreducible cells $\ba^k$ add \mbox{$n-k-1$} new faces to the polytope, each labeled with an element from $\Gc^{k+1}$. Note that Corollary~\ref{c:icells} is in agreement with this procedure: Each irreducible cell $\ba^k$ corresponds to a choice of $k+1$ labels which are used on the elements of $\Gc^{k+1}$. In the end, we are left with $\sum |\Gc^i|$ faces of the truncated polytope, matching the number of codim one faces of $K_{n-1}$. \section{The Fundamental Group} \label{quasi} \subsection{} Coming full circle, we look at connections between the little cubes and the mosaic operads. 
We would like to thank M.\ Davis, T.\ Januszkiewicz, and R.\ Scott for communicating some of their results in preliminary form~\cite{djs2}. Their work is set up in the full generality of Coxeter groups and reflection hyperplane arrangements, but we explain how it fits into the notation of polygons and diagonals. \begin{defn} Let $G_a, G_d \in \Gc$, with diagonals $a, d$ respectively, satisfy the \SI\ condition. Let $G_b$ be the element in $\Gc$ after removing diagonal $d$ from $\widetilde \nabla_d (G_{a,d})$. We then say that $G_a$ and $G_b$ are {\em conjugate in $G_d$}. Figure~\ref{f:conjugate} shows such a case. \end{defn} \begin{figure} [h] \centering {\includegraphics {conjugate.eps}} \caption{{\em Conjugate}} \label{f:conjugate} \end{figure} \begin{defn} Let \Cox{n-1} be a group generated by elements $\{s_i\}$, in bijection with the elements $\{G_i\}$ of $\Gc$, with the following relations: \vspace{3pt} \begin{tabular}{cl} $s_i^2 = 1$ & \\ $s_d s_a = s_b s_d$ & if $G_a$ and $G_b$ are conjugate in $G_d$ \\ $s_a s_b = s_b s_a$ & if $G_a$ and $G_b$ satisfy the \SI\ condition {\em and} $\widetilde \nabla_a (G_{a,b}) = \widetilde \nabla_b (G_{a,b}).$ \end{tabular} \end{defn} The machinery above is introduced in order to understand $\pi_1(\M{n})$. Fix an ordering of $\{1, 2, \ldots, n-1\}$ and use it to label the sides of each element in $\Gc$. We define a map $\phi: \Cox{n-1} \rightarrow \Sg_{n-1}$ as follows: Let $\phi(s_i)$ be the product of transpositions corresponding to the permuted labels of $G_i$ under $\widetilde \nabla_d (G_i)$. Figure~\ref{f:mapphi} gives a few examples. \begin{figure} [h] \centering {\includegraphics {mapphi.eps}} \caption{Examples of $\Gc \rightarrow \Sg_6$} \label{f:mapphi} \end{figure} It is not too difficult to show that the relations of $\Cox{n-1}$ carry over to $\Sg_{n-1}$. 
Furthermore, the transpositions form a set of generators for $\Sg_{n-1}$, showing $\phi$ to be surjective.\footnote{To see this, simply consider the elements of $\Gc^2$.} This leads to the following \begin{thm} \textup{\cite[\S4]{djs2}} \;$ker \, \phi \times \Z_2 \,=\, \pi_1 (\dM{n}) \times \Z_2 \,=\, \pi_1 (\M{n}).$ \end{thm} \subsection{} The {\em pair-of-pants} product (Figure~\ref{pairpants}) takes $m+1$ and $1+n$ marked points on $\R\Pj^1$ to $m+1+n$ marked points. The operad structure on the spaces \M{n+1}, its simplest case corresponding to the pair-of-pants product, defines composition maps \: $\Cox{m} \times \Cox{n} \rightarrow \Cox{m+n}$ \: analogous to the juxtaposition map of braids. \begin{figure} [h] \centering {\includegraphics {pairpants.eps}} \caption{Pair-of-pants} \label{pairpants} \end{figure} We can thus construct a monoidal category which has finite ordered sets as its objects and the group \Cox{n} as the automorphisms of a set of cardinality $n$, all other morphism sets being empty. Note the following similarity between the braid group $\B_n$ obtained from the little cubes operad and the `quasibraids' \Cox{n} obtained from the mosaic operad: \centerline{ \begin{tabular}{ccccc} $\pi_1 (\C^n - \Delta)$ & $\rightarrowtail$ & $\B_n$ & $\twoheadrightarrow$ & $\Sg_n$ \\ [.4 cm] $\pi_1 (\dM{n+1})$ & $\rightarrowtail$ & $\Cox{n}$ & $\twoheadrightarrow$ & $\Sg_n$ \end{tabular}} \medskip \noindent There are deeper analogies between these structures which have yet to be studied. \bibliographystyle{amsplain}
\section{Introduction} The Prym map $\mathcal P_{g,r}$ assigns to a degree $2$ morphism $\pi: D \longrightarrow C$ of smooth complex irreducible curves ramified in an even number of points $r\ge 0$, a polarized abelian variety $P(\pi)=P(D,C)$ of dimension $g - 1 + \frac r2 $, where $g$ is the genus of $C$. We assume $g>0$ throughout the paper. The variety $P(\pi)$ is called the Prym variety of $\pi$ and is defined as the connected component of the origin of the kernel of the norm map $\operatorname{Nm}_{\pi }:JD \longrightarrow JC.$ Hence, denoting by $\mathcal R_{g,r} $ the moduli space of isomorphism classes of the morphisms $\pi$, we have maps: \begin{equation*} \mathcal P_{g,r} : \mathcal R_{g,r} \longrightarrow \mathcal A^\delta_{g-1+\frac{r}{2}}, \end{equation*} to the moduli space of abelian varieties of dimension $g-1+\frac{r}{2}$ with polarization type $\delta:=(1,\ldots, 1,2, \ldots ,2)$, with $2$ repeated $g$ times if $r>0$ and $g-1$ times if $r=0$. The case $r=0$ is very classical. Indeed, Prym varieties of unramified coverings are principally polarized abelian varieties and they have been studied for over one hundred years, initially by Wirtinger, Schottky and Jung (among others) in the second half of the $19$th century from the analytic point of view. They were studied later from an algebraic point of view in the seminal work of Mumford \cite{mumford} in 1974. We refer to \cite[section 1]{Farkas} for a historical account. Since Mumford's work, a lot of information has been obtained about the unramified (or ``classical'') Prym map $\mathcal P_{g,0}$. This theory is strongly related to the study of the Jacobian locus, Schottky equations and rationality problems, among other topics. It is known that $\mathcal P_{g,0}$ is generically injective for $g\ge 7$ but never injective (see \cite{donagi} and the references therein). 
Moreover, in low genus, a detailed study of the structure of the fibre was provided by the works of Verra (\cite{verra}, for $g=3$), Recillas (\cite{ReciTrig} for $g=4$), Donagi (\cite{donagi} for $g=5$) and Donagi and Smith (\cite{ds} for $g=6$). All these results have been summarized under a uniform presentation in the fundamental work of Donagi \cite{donagi}. As we explain below, the aim of this paper is to carry out an analogous study for the fibres of the ramified Prym map in low genus. Although some specific cases were considered previously in \cite{nr} and \cite{bcv}, a systematic study of the properties of the ramified Prym map in full generality starts with the work of Marcucci and Pirola \cite{mp}. Combining their results with the main theorems in \cite{mn} and \cite{naranjo-ortega}, the generic Torelli theorem is proved for all the cases where the dimension of the source $\mathcal R_{g, r}$ is smaller than the dimension of the target $\mathcal A_{g-1+\frac r2}^{\delta}$. In fact, recently, a global Torelli theorem has been announced for all $g$ and $r\ge 6$ (\cite{ikeda} for $g=1$ and \cite{naranjo-ortega2} for all $g$). In this paper we address the opposite side of the study of the ramified Prym map: the structure of the generic fibre when \begin{equation}\label{disuguaglianza} \dim \mathcal R_{g, r}=3g-3+r > \dim \mathcal A_{g-1+\frac r2}^{\delta}=\frac{1}{2}(g-1+\frac{r}{2})(g+\frac{r}{2}). \end{equation} Notice that this inequality still holds in case $g=1$: using translations we can always assume that one of the branch points is at the origin, hence $\dim \mathcal R_{1, r}=1+(r-1)=r$. Condition \eqref{disuguaglianza} is only possible in six cases, which will be considered throughout the paper: $r=2$ with $1\le g\le 4$, and $r=4$ with $1\le g \le 2$. The case $g=1, r=4$ was considered by Barth in his study of abelian surfaces with polarization of type $(1,2)$ (see \cite{Barth}). 
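For concreteness, the six cases can be checked directly against \eqref{disuguaglianza}; the following verification (added here, using the dimension formulas above) also shows why no further cases occur:

```latex
% dim R_{g,r} = 3g-3+r  (= r for g=1)  versus
% dim A = (1/2)(g-1+r/2)(g+r/2):
(g,r)=(1,2):\; 2>1, \qquad (1,4):\; 4>3, \qquad (2,2):\; 5>3,
(g,r)=(2,4):\; 7>6, \qquad (3,2):\; 8>6, \qquad (4,2):\; 11>10,
% while the next cases already fail:
(g,r)=(5,2):\; 14\not>15, \qquad (3,4):\; 10\not>10.
```

Since both sides grow with $g$ and $r$ (the right-hand side quadratically), the inequality fails for all larger values as well.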
In \cite{fgs} and \cite{gm} infinitely many examples of totally geodesic and of Shimura subvarieties of ${\mathcal A}_g$ generically contained in the Torelli locus have been constructed as fibres of ramified Prym maps. In \cite{moonen-special}, \cite{moonen-oort}, \cite{fgp}, \cite{fpp} examples of Shimura subvarieties of ${\mathcal A}_g$ generically contained in the Torelli locus have been constructed as families of Jacobians of Galois covers of ${\mathbb P}^1$ or of elliptic curves. Some of them are contained in fibres of ramified Prym maps. In particular, the images in ${\mathcal M}_2$ and in ${\mathcal M}_3$ of ${\mathcal R}_{1,2}$, respectively ${\mathcal R}_{1,4}$, are the bielliptic loci and, in \cite{fgs}, it is proven that the irreducible components of the fibres of the Prym maps ${\mathcal P}_{1,2}$, ${\mathcal P}_{1,4}$ yield totally geodesic curves in ${\mathcal A}_2$ and ${\mathcal A}_3$ and that countably many of them are Shimura curves. Moreover in \cite{fgs} it is shown that the family $(7) = (23) = (34)$ of \cite{fgp} is a fibre of the Prym map ${\mathcal P}_{1,4}$, which is a Shimura curve. In this paper (section 8) we give an explicit example of a totally geodesic curve which is an irreducible component of a fibre of the Prym map ${\mathcal P}_{1,2}$. It is worth mentioning that degree $2$ coverings ramified in $2$ points can be seen as the normalization of coverings of nodal curves. Beauville extended in \cite{beau} the classical Prym map to some coverings of stable curves (called ``admissible'' coverings) in such a way that the extended Prym map \[ \overline {\mathcal P}_{g}:\overline {\mathcal R}_{g} \longrightarrow \mathcal A_{g-1} \] becomes proper (to simplify the notation, $\mathcal P_{g,0}$ and $\mathcal R_{g,0}$ are denoted by $ \mathcal P_{g} $ and $ \mathcal R_{g} $). Then the moduli space $\mathcal{R}_{g-1,2}$ can be identified with an open set of a boundary divisor of $\overline{\mathcal{R}}_{g}$. 
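A quick dimension count (added here) is consistent with this identification:

```latex
% dim R_{g-1,2} = 3(g-1) - 3 + 2, while dim \overline{R}_g = dim M_g = 3g-3,
% since the covering datum of an unramified double cover is a 2-torsion
% point of the Jacobian, hence discrete:
\dim \mathcal R_{g-1,2} \;=\; 3g-4 \;=\; \dim \overline{\mathcal R}_{g} - 1 ,
```

so $\mathcal R_{g-1,2}$ has exactly the dimension of a divisor in $\overline{\mathcal R}_{g}$.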
With this strategy the works of Verra, Recillas and Donagi mentioned above could help to understand the cases $r=2$ and $2\le g \le 4$. Unfortunately this approach becomes cumbersome, since the intersection of the generic fibre with the boundary is usually difficult to describe. We have found (except for the case $g=4$) direct procedures to study the fibre, mainly based on the bigonal construction (see \cite{donagi}) and the extended trigonal construction (see \cite{lange-ortega}). We review both in the next section. The results obtained in this paper can be summarized in the Theorem below. To state our theorem in the case $r=2$, $g=4$ we need to recall that Donagi found a birational map \[ \kappa: \mathcal A_{4} \longrightarrow \mathcal {RC}^+, \] where $\mathcal {RC}^+$ is the moduli space of pairs $(V, \delta )$, $V$ being a smooth cubic threefold and $\delta$ an ``even'' $2$-torsion point in the intermediate Jacobian $JV$ (see \cite[section 5]{donagi}). \begin{teo} We have the following description of the fibres of the ramified Prym map in the cases $(g,r) \in \{(1,2),(1,4),(2,2),(2,4),(3,2),(4,2)\}$: \begin{enumerate} \item [a)] For a generic elliptic curve $E$ the fibre $\mathcal P_{1,2}^{-1}(E)$ is isomorphic to $L_1 \sqcup \ldots \sqcup L_4$, where each $L_i$ is the complement of three points in a projective line. \item [b)] (Barth) Let $(A,L)$ be a generic abelian surface with a polarization of type $(1,2)$. Then there is a natural polarization $L ^* $ of type $(1,2)$ on the dual abelian variety $A^*$ and the fibre $\mathcal P_{1,4}^{-1}(A)$ is canonically isomorphic to the linear system $\vert L^*\vert$. \item [c)] The generic fibre of $\mathcal P_{2,2}$ is isomorphic to the complement of $15$ lines in a projective plane. \item [d)] The generic fibre of $\mathcal P_{2,4}$ is isomorphic to the complement of $15$ points in an elliptic curve. 
\item [e)] Let $X$ be a generic quartic plane curve, consider the variety $\mathsf G^1_4(X)$ of the $g^1_4$ linear series on $X$, and denote by $i$ the involution $L\mapsto \omega_X^{\otimes2}\otimes L^{-1}$. Then $\mathcal P_{3,2}^{-1}(JX)$ is isomorphic to the quotient by $i$ of an explicit $i$-invariant open subset of $\mathsf G^1_4(X)$. \item [f)] Let $(V, \delta)$ be a generic element in $\mathcal {RC}^+$ and let $\Gamma \subset JV$ be the curve of lines $l$ in $V$ such that there is a $2$-plane $\Pi$ containing $l$ with $\Pi \cdot V=l+2r$. Then $\mathcal P_{4,2}^{-1}(V,\delta )$ is isomorphic to a precise open set $\Gamma_0$ of $\Gamma $ (see \ref{open_set}). \end{enumerate} \end{teo} For more details see Theorems \eqref{teo1}, \eqref{teo2}, \eqref{teo3}, \eqref{teo4}, \eqref{teo5}, \eqref{teo6}. The paper is organized as follows: in the next section we give some preliminaries about the bigonal and trigonal constructions and also on the differential of the ramified Prym map. In particular, we prove that the maps we are considering are dominant. Next we devote one section to each of the six cases. We include for completeness a description of Barth's results for $g=1$ and $r=4$. The most involved cases are $g=3$, $r=2$ and $g=4$, $r=2$. By means of the trigonal construction, the case $g=3$, $r=2$ is related to the determination of the tetragonal series on a generic quartic plane curve which do not contain divisors of type $2p+2q$. This is studied in detail in section 6. In the case $g=4$, $r=2$ we need to take care of the behaviour at the boundary of Donagi's rather sophisticated description of the fibre of $\overline {\mathcal P}_{5}$ (see \cite[section 5]{donagi}). In particular we have to take care of the quadrics containing a nodal canonical curve of genus $5$, which we consider interesting in its own right. This is the content of section 7. 
In section 8 we describe some examples of irreducible components of fibres of ramified Prym maps that yield totally geodesic or Shimura subvarieties of ${\mathcal A}_g$. \\ \textbf{Acknowledgements:} The authors would like to thank A. Verra for his suggestions and references concerning section 6. We also thank Andr\'es Rojas for detecting a mistake in section $7$ of a previous version. The third author thanks IMUB (Institut de Matemàtica Universitat de Barcelona) for the hospitality it offered when the first draft of this project started. \section{Preliminaries} \subsection{The differential of the ramified Prym map}\label{DIffeCov} By the theory of double coverings, the moduli space $\mathcal R_{g,r}$ can be alternatively described as the following moduli space of triples $(C, \eta, B)$: \[ \mathcal R_{g,r}=\{ (C, \eta, B) \mid [C] \in \mathcal M_g, \eta \in Pic^{\frac{r}{2}}(C), B \text{ reduced divisor in } |\eta^{\otimes 2}| \}/\cong. \] The codifferential of $\mathcal P_{g,r}$ at a point $[(C, \eta, B)]\in \mathcal R_{g,r}$ is given by the multiplication map (\cite{mp}) $$ d\mathcal P_{g,r}^* (C, \eta, B): Sym^2 H^0(C, \omega_C \otimes \eta) \longrightarrow H^0(C, \omega_C^2 \otimes \mathcal O(B)). $$ Let us now recall the definition of admissible covers given by Beauville in \cite[Lemma 3.1]{beau}. \begin{defin} Let $\tilde C$ be a connected curve with only ordinary double points and arithmetic genus $2g-1$, and let $\sigma$ be an involution on $\tilde C$. Then $\tilde C\rightarrow \tilde C/\sigma$ is an admissible covering of type $(\ast)$ if the fixed points of $\sigma$ are exactly the singular points and at a singular point the two branches are not exchanged under $\sigma$. \end{defin} Under these conditions, Beauville shows that the Prym variety attached to the covering $\tilde C\rightarrow \tilde C/\sigma$ can be defined in a similar way to the standard Prym construction and it is a principally polarized abelian variety. 
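The dimension bookkeeping behind the codifferential above is worth recording (a check added here; both computations use that the line bundles have degree $>2g-2$ when $r>0$, so the $h^0$'s are computed by Riemann-Roch):

```latex
% deg(omega_C ⊗ eta) = 2g-2+r/2 > 2g-2 gives h^0 = g-1+r/2, hence
\dim \operatorname{Sym}^2 H^0(C,\omega_C\otimes\eta)
  = \tfrac12\Bigl(g-1+\tfrac r2\Bigr)\Bigl(g+\tfrac r2\Bigr)
  = \dim \mathcal A^{\delta}_{g-1+\frac r2},
% while deg(omega_C^2 ⊗ O(B)) = 4g-4+r > 2g-2 gives
\dim H^0(C,\omega_C^{2}\otimes\mathcal O(B)) = 3g-3+r = \dim \mathcal R_{g,r}.
```

So the codifferential maps the cotangent space of the target to that of the source; in the six cases above the source of this map is smaller than the target, which is why the fibres of $\mathcal P_{g,r}$ are positive dimensional.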
Let $\pi: D\rightarrow C$ be an element of $\mathcal R_{g,2}$. By glueing the two branch points in $C$ and the two ramification points in $D$, we get an admissible covering of type $(\ast)$ in $\bar{\mathcal{R}}_{g+1}$. \begin{prop} \label{dominant} Assume that \[ (g,r) \in \{(1,2),(1,4),(2,2),(2,4),(3,2),(4,2)\}, \] then the ramified Prym map $\mathcal P_{g,r}$ is dominant. \end{prop} \begin{proof} It is enough to show that for a generic $(C,\eta, B)$ there are no quadrics containing the image of $\varphi_{\omega_C \otimes \eta }:C \rightarrow \mathbb P H^0(C, \omega_C \otimes \eta )^*$. Notice that there is nothing to prove in the cases $(g,r)\in\{(1,2), (2,2), (1,4)\}$. For $(g,r)=(3,2)$ and $(g,r)=(2,4)$ the curve $\varphi_{\omega_C \otimes \eta }(C) $ is a plane curve (with nodes) of degree $5$ and $4$ respectively. For the case $(g,r)=(4,2)$ we identify elements $(C,\eta, p+q)\in\mathcal{R}_{4,2}$ with coverings $(C^*=C/p\sim q ,\eta^*)$ of type $(\ast)$ in $\overline{\mathcal{R}}_5$. Izadi proved in \cite[Theorem (3.3), Remark (3.10)]{izadi} that the fiber at an abelian fourfold $A$, outside the closure of the Jacobian locus and of some specific locus of high codimension, intersects the boundary in dimension $1$. Therefore, for dimensional reasons, either $\Delta^n$ dominates $\mathcal A_4$ or it maps into the Jacobian locus. The fiber at a generic Jacobian is given in \cite[Theorem (5.14), (4)]{donagi}. It is formed by two surfaces, one in the locus of coverings of trigonal curves and the other in the divisor of Wirtinger coverings. Hence the $11$-dimensional divisor $\Delta^n$ cannot map into the $9$-dimensional Jacobian locus.\\ \end{proof} \begin{cor} The assumptions of the previous proposition imply that the dimension of the generic fibre $F_{g,r}$ of $\mathcal P_{g,r}$ is: \[ \begin{aligned} &\dim F_{1,2}=1, \, \dim F_{2,2}=2, \,\dim F_{3,2}=2, \\ &\dim F_{4,2}=1, \,\dim F_{1,4}=1, \,\dim F_{2,4}=1. 
\end{aligned} \] \end{cor} \subsection{Dual Abelian Variety}\label{dualpol} Here we recall the main result of Birkenhake and Lange concerning dual abelian varieties and dual polarizations. \begin{teo}[\cite{birkenhake-langepol}, Theorem 3.1] There is a canonical isomorphism of coarse moduli spaces \begin{align}\label{lbiso} \mathcal{A}_g^{(d_1,\ldots ,d_g)}&\rightarrow\mathcal{A}_g^{(\frac{d_1d_g}{d_g},\frac{d_1d_g}{d_{g-1}},\ldots,\frac{d_1d_g}{d_1})}\\ (A,L)&\mapsto (A^*,L^*)\notag \end{align} sending a polarized abelian variety to its polarized dual abelian variety. \end{teo} Here $L^*$ is the polarization on the dual abelian variety $A^*$ which satisfies \[\lambda_{L^*}\circ\lambda_{L}=(d_g)_{A} \quad \text{and}\quad \lambda_L\circ\lambda_{L^*}=(d_g)_{A^*},\] where $\lambda_L: A\rightarrow A^* $, $\lambda_{L^*}: A^*\rightarrow (A^*)^*=A$ are the polarization maps and $ (d_g)_{A}:A\rightarrow A $, $ (d_g)_{A^*}:A^*\rightarrow A^*$ are the multiplications by $d_g$. Notice that the dual polarization $L^*$ satisfies $(L^*)^*=L$. \subsection{The polygonal construction}\label{polygonal} This section is devoted to the description of the so-called polygonal construction. It provides a very useful tool which starts from a ``tower'' of coverings \[A\rightarrow B\rightarrow C\] and produces new ones, $ A'\rightarrow B'\rightarrow C', A''\rightarrow B''\rightarrow C'',\ldots$, determining relations among the Prym varieties. All details of this construction are borrowed from \cite{donagi}.\\ Let us consider a curve $C$ of genus $g$ with a map $f:C\to\mathbb{P}^1$ of degree $n$ and a $2$-sheeted ramified covering $\pi: D\rightarrow C$. Then we can always associate to these data a degree $2^n$ covering \begin{equation*} D'\to \mathbb{P}^1 \end{equation*} defined in the following way: the fibre over a point $p\in \mathbb{P}^1$ is given by the $2^n$ sections $s$ of $\pi$ over $p$. This means that: \begin{equation}\label{sezioni} s: f^{-1}(p)\to\pi^{-1}f^{-1}(p) \quad \text{and}\; \pi\circ s=id . 
\end{equation} The curve $D'$ is best described inside $D^{(n)}$, where $D^{(n)}$, as usual, denotes the $n$-th symmetric product of the curve $D$, parametrizing effective divisors of degree $n$. Indeed $D'$ fits in the following fibre product diagram: \begin{equation}\label{poligconstr} \begin{tikzcd} D'\arrow{d}{2^n:1}\arrow[hook]{r}&D^{(n)}\arrow{d}{\pi^{(n)}}\\ \PP\arrow[hook]{r}&C^{(n)} \end{tikzcd} \end{equation} where $\PP$ is embedded in $C^{(n)}$ by sending a point $ p $ to its fibre $f^{-1}(p)$. $D'$ carries a natural involution $i':D'\to D'$ defined as follows: \begin{equation}\label{involutione} q_1+...+q_n\mapsto i(q_1)+...+i(q_n) \end{equation} where $i$ is the involution of $D$ which induces the covering $\pi$. Moreover we can define an equivalence relation on $D'$ identifying two sections $s_i,s_j$ if they differ by an even number of switches $q_k\mapsto i(q_k)$. This gives another tower \begin{equation*} D'\rightarrow O\rightarrow \PP, \end{equation*} where $O$ is the quotient obtained by considering this equivalence. It is known as the orientation cover of $f\circ \pi$. We conclude by recalling, without proof, a result shown in \cite{donagi}. \begin{prop}\label{Reducible} If $D$ is \textit{orientable}, that is, if the orientation cover $O\rightarrow\PP$ is trivial, then $D'$ is reducible: $D'=D^0\cup D^1$. \begin{itemize} \item[-] If $n$ is even then $i'$ acts on each $D^j$ and the quotient has a degree $2^{n-2}$ map to $\PP$; \item[-] If $n$ is odd then $i'$ exchanges the two branches $D^j$. Each $D^j$ has a map of degree $2^{n-1}$ to $\PP$. \end{itemize} \end{prop} \subsection{The bigonal construction}\label{bigonal} Let us see an application of the polygonal construction described above in the case $n=2$. 
Starting from a tower \[D\xrightarrow{\pi} C\xrightarrow{f}\PP,\] where $\pi$ and $f$ both have degree 2, we get \[D'\xrightarrow{\pi'} C'\xrightarrow{f'}\PP,\] by means of a fibre product diagram as in diagram \eqref{poligconstr}. Denote by $h=f'\circ\pi':D'\to\PP$ the composed degree $4$ map of the new tower. For $k\in\PP$ the possible situations are the following (see \cite{donagi}, pp. 68-69): \begin{itemize} \item[1)] If $\pi,f$ are \'{e}tale over $k$, then so are $\pi',f'$; \item[2)] If $f$ is \'{e}tale while $\pi$ is branched at one point of $f^{-1}(k)$, then $h$ inherits two critical points of order 2 in the fibre which are exchanged by $i'$. This means that $\pi'$ is \'{e}tale, while $f'$ is branched; \item[3)] Vice versa, if $\pi$ is \'{e}tale while $f$ is branched at $k$, then $h$ has a critical point of order 2 in the fibre and 2 more points which are exchanged by $i'$. This means that $\pi'$ has a critical point of order 2 while $f'$ is \'{e}tale; \item[4)] If $\pi,f$ are both branched, then so are $\pi',f'$ (in particular $h$ has a single critical point of order 4); \item[5)] If $f$ is \'{e}tale while $\pi$ is branched at both points, then $C'$ will have a node over $k$. \end{itemize} We call an element $D\rightarrow C\rightarrow\PP$ \textit{general} if it avoids situations of type 5) (where the bigonal construction induces singular coverings). \begin{prop} Assume that $f\circ\pi$ is general; then $g(D')=r+g-2$ and $g(C')=\frac{r}{2}-1$. \begin{proof} Denote by $r'$ the number of branch points of $\pi$ lying over the $2g+2$ branch points of $f$ (situation 4)). By the assumption of generality, a straightforward application of the Riemann-Hurwitz formula for $h$ gives: \begin{equation*} 2g(D')-2=4(-2)+2(r-r')+(2g+2-r')+3r'. \end{equation*} Similarly for $\pi'$, we get: \begin{equation*} 2g(D')-2=2(2g(C')-2)+(2g+2-r')+r'. \end{equation*} \end{proof} \end{prop} \begin{lemma}[\cite{donagi}, Lemma 2.7] The bigonal construction is symmetric: if it takes $D\xrightarrow{\pi} C\xrightarrow{f}\PP$ to $ D'\xrightarrow{\pi'} C'\xrightarrow{f'}\PP$ then it takes $ D'\xrightarrow{\pi'} C'\xrightarrow{f'}\PP$ back to $ D\xrightarrow{\pi} C\xrightarrow{f}\PP$. 
\end{lemma} Moreover the following holds: \begin{teo}[Pantazis,\cite{Pantazis}]\label{Pantazis} The Prym varieties $P(D,C)$ and $P(D',C')$ associated to the two bigonally-related covering maps $D\rightarrow C$ and $D'\rightarrow C'$ are dual to each other as polarized abelian varieties. \end{teo} \subsection{The trigonal construction} Thanks to the work of Recillas (\cite{ReciTrig}), we have a theorem concerning the polygonal construction in the case $n=3$. It is known as the \textit{trigonal construction} and it deals with \'{e}tale double covers of smooth trigonal curves. Denote by $\mathcal{R}_{g+1}^{tr}$ the moduli space of 2:1 \'{e}tale coverings of trigonal curves $\tilde C$ of genus $g+1$. Each point in $\mathcal{R}_{g+1}^{tr}$ corresponds to a triple $(\tilde C, \eta, M)$, where $\eta\in\text{Pic}^0(\tilde C)$, with $\eta\neq\mathcal{O}_{\tilde C}$ and $\eta^2=\mathcal{O}_{\tilde C}$, gives the double covering and $M$ is the $g^1_3$ which gives the map to $\PP.$ This means that we consider towers \begin{equation} \tilde D\xrightarrow{\tilde \pi}\tilde C\xrightarrow{3:1} \PP. \end{equation} Now call $\mathcal{M}^{tet}_{g,0}$ the locus in $\mathcal{M}_g$ given by tetragonal curves $X$ with the property that above each point of $\PP$ the associated linear series $g^1_4$ has at least one \'{e}tale point. In \cite{ReciTrig} Recillas showed (see \cite{lange-birkenhake} for details of the construction) that: \begin{teo}The trigonal construction gives the following isomorphism:\begin{align*} T_0: \mathcal{R}_{g+1}^{tr}&\rightarrow\mathcal{M}^{tet}_{g,0}\\ (\tilde C, \eta, M)&\mapsto(X,F), \end{align*} where $F$ is a $g^1_4$.\\ Moreover, calling $P(\tilde \pi)$ the Prym variety associated to $\tilde \pi$, we have: \begin{equation*} P(\tilde \pi)\cong JX \label{Isom} \end{equation*} an isomorphism of principally polarized abelian varieties (ppav from now on). 
\end{teo} Notice that the polygonal construction in the case $n=3$, applied to an unbranched covering $\pi$, gives a reducible curve $X=X^0\cup X^1$ (see \cite[Corollary 2.2]{donagi}). The two components are isomorphic tetragonal curves of genus $g$. Take one of them to define the image $(X,F)$ of $( \tilde C, \eta, M)$ through $T_0$. \\ A similar statement also holds in the case of double covers of trigonal curves with two ramification points. This has been proved by Lange and Ortega in \cite{lange-ortega}. Let $\mathcal{R}b_g^{tr}$ be the moduli space of pairs $ (\pi: D\rightarrow C, M )$ where $\pi$ is a ramified double cover of a smooth trigonal curve $C$ of genus $g$ and $M$ is a $g^1_3$ on $C$. Suppose that the branch locus of $\pi$ is disjoint from the ramification locus of the degree 3 map $f:C\rightarrow\PP$. As in \cite{lange-ortega}, we will call an element $D\xrightarrow{\pi}C\xrightarrow{f}\PP$ \textit{special} if the branch locus of $\pi$ (given by two points $p_1,p_2$) is contained in a fibre of $f$; otherwise we will call it \textit{general}. Let $\mathcal{R}b_{g,sp}^{tr}$ be the moduli space of special elements. Moreover we denote by $\mathcal{M}_{g,*}^{tet}$ the moduli space of pairs $(X,k)$ of smooth tetragonal curves with a 4:1 map $k:X\rightarrow\PP$ with at least one \'{e}tale point on each fibre, with the exception of exactly one fibre, which consists of two simple ramification points. \begin{teo}[\cite{lange-ortega}, Theorem 4.3]\label{ramifiedtrigonal} The map \begin{equation} \mathcal{R}b_{g,sp}^{tr}\rightarrow\mathcal{M}_{g,\ast}^{tet}\label{IsoLangeOrtega} \end{equation} is an isomorphism. Moreover if $D\xrightarrow{\pi}C\xrightarrow{f}\PP$ is an element of $\mathcal{R}b_{g,sp}^{tr}$ and $X$ is the corresponding tetragonal curve, then we have an isomorphism of ppav:\begin{equation*} P(\pi)\cong JX. 
\end{equation*} \end{teo} \section{Case $g=1$, $r=2$} Let us consider a ramified double covering $\pi: D\rightarrow C$ in $ \mathcal{R}_{1,2}$ and take $$\mathcal{P}_{1,2}: \mathcal{R}_{1,2}\rightarrow\mathcal{A}_1,$$ the corresponding Prym map. We denote by $b_1+b_2$ the branch divisor (on $C$) and by $r_1+r_2$ the ramification divisor (on $D$). The covering $\pi$ is determined by the data $(C,\eta, b_1+b_2)$ where $\eta \in Pic^1(C)=C$ satisfies $ \eta^{\otimes 2} =\mathcal O_C(b_1+b_2)$. The linear series $\vert b_1+b_2 \vert $ gives a map $f:C\xrightarrow{2:1}\mathbb P^1$ ramified at four points $p_1,p_2,p_3,p_4 \in C$. By construction $\eta $ is one of the sheaves $\mathcal O_C(p_i)$. Calling $\sigma $ the involution on $C$ attached to $f$, we have $$\sigma(b_1)=b_2 \quad \text{and}\quad \sigma(p_i)=p_i,\;\; i=1,...,4.$$ Hence $\sigma$ leaves invariant $b_1+b_2$ and $\eta$. The following is well-known: \begin{lemma} Let $\sigma $ be an involution on a curve $C$ leaving invariant a reduced divisor $B$ and a sheaf $\eta \in Pic(C)$ such that $\eta ^{\otimes 2}\cong \mathcal O_C(B)$. Let $\pi: D\rightarrow C$ be the double covering attached to $(C,\eta, B)$; then there exists an involution $\tilde \sigma $ on $D$ lifting $\sigma$, that is, $\sigma \circ \pi= \pi \circ \tilde \sigma $. \begin{proof} The curve $D$ is defined as $D:= Spec (\mathbf{\mathcal A})$, where $\mathbf{\mathcal A}$ is the $\mathcal O_C$-algebra $\mathcal O_C \oplus \eta ^{-1}$ and the multiplication is defined in the obvious way using that $\eta ^{-2}\cong \mathcal O_C(-B)\subset \mathcal O_C$ (see \cite[section 1]{mumford}). Then $\sigma $ induces an involution on $\mathbf {\mathcal A}$ and therefore an involution $\tilde \sigma $ on $D$ which lifts $\sigma $ by construction. 
\\\end{proof} \end{lemma} We have the following Cartesian diagram: \begin{equation} \begin{tikzcd} & D\arrow{dl}\arrow{dr}\arrow[d,"\pi"]\\ \PP\arrow{dr} &C \arrow[d,"f"]&E\arrow[dl,"f'"]\\ &\PP \end{tikzcd} \end{equation} Here $E$ is the quotient of $D$ by $\tilde \sigma$ while $\PP$ is the quotient of $D$ by $\tau\tilde{\sigma}$, where $\tau$ is the involution attached to $\pi$. Indeed, without loss of generality, assume that $\pi$ corresponds to the point $p_1$ (i.e. $\eta \cong \mathcal O_C(p_1)$). Then the preimages of $p_1 $ by $\pi$ are fixed points of $\tilde \sigma $ and they are the only ones. On the other hand, $\tau\tilde{\sigma}$ fixes the preimages by $\pi$ of $p_2,p_3,p_4$. Therefore $E$ is an elliptic curve, while $D/\langle\tau\tilde{\sigma}\rangle$ is $\PP$ (note that the lift $\tilde\sigma$ is defined only up to composition with $\tau$; with the other choice of lift the roles of the two quotients are exchanged). Using \cite[section 7]{mumford}, we get $P(D,C)\cong E$. \\ Calling $a_i=f(p_i)$ and $b=f(b_1)=f(b_2)$, we obtain that the branch locus of $f': E\rightarrow \PP$ is given by $b, a_2, a_3, a_4$. We have the following: \begin{teo}\label{teo1} Fix a generic elliptic curve $E\in\mathcal{A}_1$. The preimage of $E$ by the ramified Prym map $\mathcal{P}_{1,2}$ is isomorphic to $L_1 \sqcup \ldots \sqcup L_4$, where each $L_i$ is the complement of three points in a projective line. \begin{proof} Start with $E$ represented as a double covering of $\mathbb P^1$ branched at four points $c_1, c_2, c_3, c_4$ and put $$L_i=\mathbb P^1\backslash \{ c_{1},...,\hat{c}_{i},...,c_{4}\}.$$ Then for any $q\in L_1$ we get a unique element in $\mathcal{P}_{1,2}^{-1}(E)$ in the following way: $C$ is the covering of $\mathbb P^1$ branched at $q, c_2, c_3, c_4$. Denote by $b_1,b_2$ the preimages of $c_1$ via this covering. Then $D\rightarrow C$ is determined by $b_1+b_2$ and by $\eta=\mathcal{O}_C(p)$, where $p$ is the ramification point in $C$ attached to $q$. 
\\ Doing the same for the other $L_i$'s, we conclude.\\ \end{proof} \end{teo} \section{Case $g=1$, $r=4$} The case \begin{equation}\label{Prym1,4} \mathcal{P}_{1,4}: \mathcal{R}_{1,4}\to \mathcal{A}_2^{(1,2)} \end{equation} is completely studied in \cite{Barth}. Here we include the main result without proof for the sake of completeness.\\ Actually, instead of \eqref{Prym1,4}, it is easier to study the composition: \begin{equation} \mathcal{R}_{1,4}\to \mathcal{A}_2^{(1,2)}\xrightarrow{\cong}\mathcal{A}_2^{(1,2)} \end{equation} where the isomorphism sends a polarized abelian surface to its dual (see subsection \ref{dualpol}). Fix a general polarized abelian surface $(A,L)$ of type (1,2). We have the following: \begin{prop}[\cite{Barth}, pp. 46-48] The pencil $\vert L\vert $ has no fixed component and its base locus consists of four points $e_1,...,e_4$. The general member $D\in \vert L\vert $ is an irreducible smooth curve of genus 3. Moreover, $L$ is symmetric and the same occurs for all $D\in \vert L\vert$. \end{prop} Furthermore if $D \in |L|$ is smooth, the multiplication by $-1$ has exactly 4 fixed points on it. This means that the quotient $D/\langle -1 \rangle$ is an elliptic curve. \begin{teo}[Duality Theorem 1.12,\cite{Barth}]\label{teo2} The fibre of the Prym map is parametrized by the linear system $|L^*|$, where $L^*$ is the dual polarization of $L$ defined as in subsection \ref{dualpol}. \end{teo} \section{Cases $g=2$ and $r=2$} This section is devoted to the analysis of the fibres of \[ \mathcal{P}_{2,2}: \mathcal{R}_{2,2}\to \mathcal{A}_2. \] Taking an element $(C,\eta, B)\in \mathcal{R}_{2,2} $, we can apply the bigonal construction using the hyperelliptic involution of $C$. 
Thus we pass from towers: \begin{equation*} \label{startbigonal} D\rightarrow C\rightarrow\PP \end{equation*} to towers \begin{equation*}\label{resbigonal} D'\xrightarrow{\pi'} C'\xrightarrow{f'}\PP, \end{equation*} where $\pi'$ is a degree 2 map branched at $6$ points and $C'$ is a curve of genus $0$, possibly with one node. In order to understand the nodal case, the map $f'$ can be seen as the choice of two different points in the projective line; the limit case appears when the two points come together. Denote by $\mathcal M_{0,6}$ the moduli space of 6 unordered distinct points in the projective line and by $\mathcal M_{0,6,2}$ the moduli space of two collections of points in the line: $6$ unordered distinct points and $2$ further unordered distinct points. A partial compactification $\overline {\mathcal M}_{0,6,2}$ of this last space is obtained by allowing the two points of the second collection to coincide. As described in subsection \ref{bigonal}, the bigonal map yields an injective map: \[ b: \mathcal R_{2,2} \rightarrow \overline {\mathcal{M}}_{0,6,2}. \] This is an isomorphism of $\mathcal R_{2,2}$ with $b(\mathcal R_{2,2})$, where the inverse map is the bigonal construction again (according to \cite[Section 2.3]{donagi}, more precisely possibility (vi) on page 69, the bigonal map extends to nodal admissible coverings). We denote the image $b(\mathcal R_{2,2})$ by $\overline {\mathcal M}_{0,6,2}^{0}$. In order to give a precise description of this moduli space we identify the symmetric product $Sym^2 \mathbb P^1$ with a projective plane in the standard way: we see $\PP\hookrightarrow\mathbb{P}^2$ as a conic via the Veronese embedding of degree $2$. The pairs of points (possibly equal) correspond to lines and therefore $Sym^2 \mathbb P^1$ can be identified with $\mathbb P^{2 \vee}$. 
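In coordinates (a standard identification, spelled out here for concreteness):

```latex
% Veronese embedding of degree 2:
\nu:\PP\hookrightarrow\mathbb P^2,\qquad [s:t]\longmapsto [s^2:st:t^2],
% with image the conic C_0 = {y^2 = xz}. The unordered pair {[a:b],[c:d]}
% corresponds to the binary quadric (bs-at)(ds-ct), i.e. to the line
bd\,x-(ad+bc)\,y+ac\,z=0 \quad\subset\;\mathbb P^2,
% which meets C_0 exactly at nu([a:b]) and nu([c:d]); pairs with a
% repeated point correspond to the tangent lines of C_0.
```

A direct substitution shows that $\nu([a:b])=(a^2,ab,b^2)$ satisfies the displayed linear equation, and likewise for $\nu([c:d])$.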
The way in which each pair of distinct points $x_1,x_2$ determines a $2:1$ map on the conic is easy to describe: two points $z_1,z_2$ correspond under the involution $\sigma_{x_1+x_2}$ with fixed points $x_1,x_2$ if and only if they form a harmonic ratio: $\vert x_1,x_2;z_1,z_2 \vert =-1$. Geometrically, viewing the points in the plane, this means that the pole of the line $x_1x_2$ is aligned with $z_1$ and $z_2$.\\ We have $6$ marked points $p_1,...,p_6$ in the line and we have to avoid property (5) of subsection \ref{bigonal}: we have to eliminate the pairs $x_1 + x_2 \in Sym^2\mathbb P^1$ such that $\sigma _{x_1+x_2}(p_i)=p_j$ for some $i\neq j$. Hence: \[ \overline {\mathcal M}_{0,6,2}^{0}=\{[(p_1+\ldots +p_6,x_1+x_2)]\in \overline {\mathcal M}_{0,6,2} \mid \vert x_1,x_2;p_i,p_j\vert \neq -1, \forall i\neq j \}. \] Then we have a commutative diagram: \begin{equation} \begin{tikzcd}\label{Fibra2,2} \mathcal{R}_{2,2}\arrow{r}\arrow{d}{b} & \mathcal{A}_2\\ \overline {\mathcal{M}}_{0,6,2}^0 \arrow{r}{\phi} &\mathcal{M}_{0,6}\arrow{u} \end{tikzcd} \end{equation} where $\phi$ is the forgetful map and $\mathcal{M}_{0,6} \rightarrow \mathcal{A}_2$ is just the Torelli morphism. Therefore, studying the fibre of $\phi$, we conclude with the following: \begin{teo}\label{teo3} The fibre of the Prym map $\mathcal{P}_{2,2}$ over a general principally polarized abelian surface $S$ is isomorphic to a projective plane minus $15$ lines. \begin{proof} Let $S$ be a general principally polarized abelian surface, assume that $S$ is the Jacobian of a genus $2$ curve $H$ and represent $H$ as an element in $\mathcal{M}_{0,6}$, where the six marked points $p_1,...,p_6$ are all different and correspond to the branch locus of the hyperelliptic involution. Diagram \eqref{Fibra2,2} says that we must look at the fibre of $\phi$ over $H$. The harmonic condition $\vert x_1,x_2;p_i,p_j\vert = -1$ says that $x_1$, $x_2 $ and the pole $p_{i j}$ of $p_ip_j$ are in a line.
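For instance, in an affine coordinate $z$ on $\PP$ with $x_1=0$ and $x_2=\infty$, the involution is $\sigma_{x_1+x_2}(z)=-z$, and indeed $\vert 0,\infty;z,-z\vert=-1$. Under the Veronese embedding $[s:t]\mapsto [s^2:st:t^2]$ the pair $z,-z$ goes to the points $[z^2:\pm z:1]$, which lie on the line $\{y_0=z^2y_2\}$; this line passes through $[0:1:0]$, the pole of the line $x_1x_2=\{y_1=0\}$ with respect to the conic $\{y_1^2=y_0y_2\}$, in agreement with the geometric description above.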
Therefore, looking at the dual, we have to rule out the points of the $15$ lines $(p_{i j})^*\subset \mathbb P^{2 \vee}$. Notice that the limit case $x_1=x_2$ means that $p_{i j}$ belongs to the tangent line at that point, and this case is not excluded from the fibre. \\ \end{proof} \end{teo} \section{Case $g=2$ and $r=4$} We study now the map $$\mathcal{P}_{2,4}: \mathcal{R}_{2,4}\to \mathcal{A}_3. $$ In this case the bigonal construction produces an injective map: \[ b: \mathcal{R}_{2,4} \rightarrow \tilde{\mathcal{R}}_{1,6}, \] where $\tilde{\mathcal{R}}_{1,6} $ is the moduli space of isomorphism classes of pairs $(\pi',f')$. Here $\pi':D' \rightarrow C'$ is a degree 2 map branched on 6 smooth points $p_1,...,p_6$. The curve $D'$ has at most one node, fixed by the involution on $D'$ attached to $\pi'$, which determines an admissible singularity for $\pi'$ of type $(\ast)$ by \cite[Section 2.3, possibility (vi)]{donagi}. The map $f'$ is a $g^1_2$ on $C'$. We denote by $\tilde{\mathcal{R}}_{1,6}^0 $ the image of $b$. This is an open set that, as in the previous section, can be described explicitly: \[ \tilde{\mathcal{R}}_{1,6}^0=\{ (\pi',f')\in \tilde {\mathcal R}_{1,6} \mid f'(p_i)\neq f'(p_j) \;\forall i\neq j \}. \] The symmetry of the bigonal construction makes $b$ an isomorphism onto its image.\begin{remark} In order to state the next diagram, we first need to extend $\mathcal P_{1,6}$ to the partial compactification $\mathcal R_{1,6}'$ of the double coverings of curves of (arithmetic) genus $1$ which satisfy our assumptions on $\pi'$. This is possible, although we are working with ramified coverings, by doing a local analysis around the singular points of $\pi'$ and imitating Beauville's construction of the extension of the Prym map to admissible coverings. Indeed both $JD'$ and $JC'$ turn out to be $\mathbb{C}^*$-extensions of abelian varieties and a diagram similar to that in \cite{beau} p.
174 shows that the kernel of the Norm map induced by $\pi'$ (that is $P(D'\rightarrow C')$) is an abelian variety. \end{remark} Similarly to the previous section we have the following diagram: \begin{equation} \begin{tikzcd}\label{Fibra2,4} \mathcal{R}_{2,4}\arrow{r}\arrow{d}{b} & \mathcal{A}_3^{(1,2,2)}\cong\mathcal{A}_3^{(1,1,2)}\\ \tilde{\mathcal{R}}_{1,6}^0\arrow{r}{\phi} &\mathcal{R}_{1,6}'\arrow{u} \end{tikzcd} \end{equation} where $\phi$ is the forgetful map. The isomorphism $\mathcal{A}_3^{(1,2,2)}\cong\mathcal{A}_3^{(1,1,2)}$ is given by \eqref{lbiso} and sends a polarized abelian threefold to its dual (endowed with the dual polarization). The remaining vertical arrow $\mathcal{P}_{1,6}': \mathcal{R}_{1,6}' \rightarrow \mathcal{A}_3^{(1,1,2)} $ is the extension of the Prym map $\mathcal P_{1,6}$. From a result of Ikeda (\cite{ikeda}), we know that $\mathcal{P}_{1,6}$ is injective and in fact an embedding (see \cite{naranjo-ortega2}). Therefore the extension to $\mathcal R_{1,6}'$ is generically injective. As before, Theorem \ref{Pantazis} guarantees the commutativity of \eqref{Fibra2,4}. We conclude with the following: \begin{teo}\label{teo4} The fibre of the Prym map $\mathcal{P}_{2,4}$ over a general $A\in\mathcal{A}_3$ is isomorphic to an elliptic curve $E$ minus 15 points. \begin{proof} Let us consider a generic polarized abelian threefold in $\mathcal{A}_3^{(1,2,2)} $ and let $(E, \eta, p_1+\ldots + p_6)$ be its unique preimage in $\mathcal R'_{1,6}$. Call $B=p_1+\ldots+p_6$ the branch divisor. Diagram \eqref{Fibra2,4} says that the fibre over $A$ is isomorphic to the fibre of $\phi$ over $(E, \eta, p_1+\ldots + p_6)$, hence it is isomorphic to: \[Pic^2(E)\setminus\bigcup_{\substack{p_i,p_j\in B,\\p_i\neq p_j}}\mathcal{O}_E(p_i+p_j). \] The isomorphism $Pic^2(E)\cong E$ concludes the proof.\\ \end{proof} \end{teo} \section{Case $g=3, r=2$} Let us now look at the map \begin{equation} \mathcal{P}_{3,2}: \mathcal{R}_{3,2}\to \mathcal{A}_3.
\end{equation} Note that here the associated Prym varieties are principally polarized. Each non-hyperelliptic curve of genus $3$ admits a $1$-dimensional space of $g^1_3$'s. In particular, seeing $C$ as a quartic plane curve and considering the line $l=p_1+p_2$ passing through $p_1$ and $p_2$, we can always get two degree 3 maps: they are defined by projecting from each of the two remaining points $x,y$ of the intersection $C\cdot l$. In fact if we consider the canonical divisor $$K_C=p_1+p_2+x+y$$ we get $h^0(C,\omega_C(-x))=h^0(C,\omega_C(-y))=2$ (by Riemann-Roch, since $h^0(C,\mathcal O_C(x))=h^0(C,\mathcal O_C(y))=1$) and we can use the associated linear systems to define the 3:1 maps to $\PP$. Call them $f_x$ and $f_y$. Both have, by definition, the two branch points $p_1,p_2$ on the same fibre and they are the unique trigonal maps on $C$ with this property. \\ We will use the following diagram to describe the fibres of $\mathcal{P}_{3,2}$: \begin{equation}\label{Diag} \begin{tikzcd} &\mathcal{M}_{3,\ast}^{tet}\arrow{r}{2:1}&\mathcal{M}q_{3,\ast}^{tet}\arrow{dr}\\ \mathcal{R}b_{3,sp}^{tr}\arrow{d}{2:1}\arrow{ur}{\cong}& & &\mathcal{M}_{3}\arrow{d}{j}\\ \mathcal{R}_{3,2} \arrow{rrr}{\mathcal{P}_{3,2}}& & &\mathcal{A}_3 \end{tikzcd} \end{equation} Let $\mathcal{R}b_{3,sp}^{tr}$ be the moduli space of pairs $(\pi: D\rightarrow C, M)$, where $\pi$ is a ramified double cover of a smooth trigonal curve $C$ of genus 3 and $M$ is a $g^1_3$ on $C$ such that the branch locus of $\pi$ is contained in one of its fibres. By the above considerations, the forgetful map $\mathcal{R}b_{3,sp}^{tr}\rightarrow\mathcal{R}_{3,2} $ is a 2:1 map. The ramified trigonal construction (as recalled in \eqref{ramifiedtrigonal}) gives the isomorphism between $\mathcal{R}b_{3,sp}^{tr}$ and $\mathcal{M}_{3,\ast}^{tet}$. \\ We will study the fibre of $\mathcal{P}_{3,2}$ using the map $ \mathcal{M}_{3,\ast}^{tet}\rightarrow \mathcal{M}_{3}$. Note that in the above diagram it factors as the composition of two maps.
The first one is $ \mathcal{M}_{3,\ast}^{tet}\rightarrow \mathcal{M}q_{3,\ast}^{tet}$ defined as the quotient map associated with an involution that acts on $\mathcal{M}_{3,\ast}^{tet}$. We will describe this action later. The second map $ \mathcal{M}q_{3,\ast}^{tet}\rightarrow \mathcal{M}_{3}$ is the forgetful map. Finally, $j$ is just the Torelli morphism.\\ Let us take a general abelian threefold $A\in\mathcal{A}_3$. We can assume that $A$ is the Jacobian of a general curve $X$ of genus $3$. In order to study the fibres of $ \mathcal{M}_{3,\ast}^{tet}\rightarrow \mathcal{M}_{3}$ we need to recall some facts on $g^1_4$'s on $X$. \subsection{The blow-up} Let $X\subset \mathbb P^2=\mathbb P (H^0(X,\omega_X)^{*}) $ be a non-hyperelliptic curve of genus $3$ canonically embedded. Let $\mathsf G^1_4(X)$ be the variety of all $g^1_4$ linear series on $X$, complete or not (see \cite{ACGH}, chapter IV). Then by Riemann-Roch \[ \psi:\mathsf G^1_4(X) \longrightarrow W^1_4(X) =Pic ^4(X) \] is a birational surjective map which is an isomorphism out of $W^2_4(X)=\{\omega_X\}$. In fact $$\text{Supp}(\mathsf G^1_4(X))=\{(L,V)\mid L\in Pic ^4(X), V\in Gr(2,H^0(X,L))\}.$$ Thus over $L\neq \omega_X$ the fibre is just the complete linear series $(L,H^0(X,L))$. Call $E$ the preimage of the canonical sheaf, then $\mathsf G^1_4(X)\smallsetminus E \cong Pic^4(X)\smallsetminus \{\omega_X\}$. The set $E$ parametrizes all the non-complete $g^1_4$ linear series on $X$ which correspond to \[ Gr(2, H^0(X ,\omega_X))\cong \{ \text{lines in }\mathbb P H^0(X,\omega_X)\} = \mathbb P (H^0(X, \omega_X)^*). \] In other words $\mathsf G^1_4(X)$ is the blow-up of $Pic^4(X)$ at $\omega_X$ and the points of the exceptional divisor correspond to points in the plane $\mathbb P^2$ where the curve $X$ is canonically embedded. 
The linear series is the projection from this point and if the point belongs to $X$ itself, then the linear series has a base point.\\ We can assume that $X$ has exactly $28$ bitangents, that is, that there are no hyperflexes on $X$ (points $p$ such that the tangent line at $p$ intersects $X$ in $4p$). In fact, the curves with hyperflexes define a divisor in $\mathcal M_3$. Each bitangent defines a divisor of the form $2p_i+2q_i$ in the canonical linear series of $X$. Denote by $\mathcal B \subset X^{(2)}$ the set $\{p_i+q_i \mid i=1,\ldots , 28\}$ and let $$S:=\text{Bl}_\mathcal{B}X^{(2)} $$ be the surface obtained by blowing-up $X^{(2)}$ at $\mathcal B$. By the universal property of the blow-up we have a diagram: \begin{equation}\label{blowup} \begin{tikzcd} S \ar[d] \ar{r}{\varphi } & \mathsf {G}^1_4(X) \ar{d}{\psi}\\ X^{(2)} \ar{r}{\varphi_0 }& Pic^4(X), \end{tikzcd} \end{equation} where $\varphi_0(x+y)=\mathcal O_X(2x+2y)$. Let us now consider the involution \begin{align*} i: Pic^4(X)&\rightarrow Pic^4(X)\\ L&\mapsto \omega_X^{\otimes2}\otimes L^{-1}. \end{align*} \begin{prop}\label{lift} The involution $i$ on $Pic^4(X)$ lifts to an involution on $\mathsf G^{1}_4 (X)$ and it acts as the identity on the exceptional divisor $E$. Moreover, by construction, $i$ leaves $\varphi (S)$ invariant. \begin{proof} To simplify the notation put $Pic= Pic^4(X)$. The involution $i$ has an isolated fixed point at $\omega_X$. In fact the fixed points are the line bundles $L$ such that $L^{\otimes2}=\omega_X^{\otimes2}$ and this happens if and only if $L =\omega_X\otimes \eta$, where $\eta$ is a $2$-torsion point. The exceptional divisor $E$ is equal to $ \mathbb P (T_{\omega_X}Pic)$, so the action of $i$ on $E$ is given by the projectivisation of the differential of $i$ at $\omega_X$, $di_{\omega_X}$. We claim that $di_{\omega_X}$ is $-Id$, hence it is the identity on $E = {\mathbb P}(T_{\omega_X}Pic)$.
In fact, by Cartan's linearisation theorem, there exist local coordinates $z$ in a neighborhood $U$ of $\omega_X$ such that in these coordinates, $i(z) = Az$, where $A$ is a matrix such that $A^2 = I$. Thus the eigenvalues of $A$ are $\pm1$. But if there were an eigenvalue equal to $1$, there would exist a positive-dimensional space of fixed points, a contradiction. Finally let us take $x+y\in X^{(2)}$, as in diagram \eqref{blowup}, and let us denote by $x',y'$ the two remaining points of the intersection of $X$ with the line $l=x+y$. In this way $K_X=x+y+x'+y'$ and thus the last statement follows from $i(\varphi_0(x+y))=\mathcal O_X(2x'+2y')$. \end{proof} \end{prop} Proposition \ref{lift} guarantees that $i$ naturally induces an involution on $ \mathcal{M}_{3,\ast}^{tet}$ (the moduli space of pairs $(C,g^1_4)$ of curves $C$ of genus 3 with a 4:1 map to $\PP$ with at least an \'etale point on each fibre and a special fibre of type $2p+2q$). We still denote this involution by $i$. \subsection{Geometric description of the complete linear series} In the case of complete linear series $g^1_4$, we can describe geometrically the divisors in the image of $\varphi$: fix two distinct points $r, s \in X$ such that the line $l=r+s$ intersects $X$ in four distinct points. Put $$l\cdot X= r+s+u+v.$$ Denote by $t_r, t_s, t_u, t_v$ the tangent lines to $X$ at the points $r, s, u, v$ respectively. Let us define: \[ \mathcal F_{r ,s}=\{\text{conics through } u, v \text{ tangent to } t_u, t_v \text{ at } u, v \text{ resp.}\}\cong \mathbb P^1. \] If $Q \in \mathcal F_{r,s}$, then $Q\cdot X= 2u+2v+p_1+p_2+p_3+p_4$. All these degree $8$ divisors are linearly equivalent on $X$ (they belong to $\vert 2 K_X \vert $ since we are intersecting with a conic). One of these conics is the double line $l^2\in \mathcal F_{r,s}$ which intersects $X$ in the divisor $2u+2v+2r+2s$. Therefore: \[ 2u+2v+2r+2s\sim 2u+2v+ p_1+p_2+p_3+p_4, \] hence $2r+2s\sim p_1+p_2+p_3+p_4$.
The description of the $g^1_4$ is now simple: given a point $p_1 \in X$, there is a unique $Q \in \mathcal F_{r,s}$ passing through $p_1$. Then there is a map $f_{r,s}: X \longrightarrow \mathcal F_{r,s}\cong \mathbb P^1$, sending $p_1$ to this conic. The fibre is the divisor $p_1+p_2+p_3+p_4$ considered above. Notice that one of the fibres is $2r+2s$, hence $f_{r,s}$ is one of the $g^1_4$'s we are looking for. In the same way taking the pencil $\mathcal F_{u,v}$ of conics tangent to $t_r$ (resp. $t_s$) at $r$ (resp. $s$), intersecting the conics with $X$ and subtracting the divisor $2r+2s$ we obtain the linear series $f_{u, v}:X\longrightarrow \mathcal F_{u, v}\cong \mathbb P^1$. Observe that the involution $i$ sends $f_{r,s}$ to $f_{u,v}$. \subsection{The curve of $g^1_4$'s with two special fibres} We need to determine the curve on $\varphi (S)\subset \mathsf G^1_4(X)$ given by the $g^1_4$'s on $X$ with two fibres of type $2p+2q$. In the case of linear series in $E$ (the non-complete linear series), these clearly correspond to points lying on two bitangents. In the other cases, we have to understand when a map $f_{r, s}$ as above has a second fibre of the form $2x+2y$. Thanks to the above description, we know that we only have to look at \[ \begin{aligned} \Gamma :=\{r+s \in X^{(2)} \mid \exists\, \text{a conic}\, Q \,\text{(of rank at least 2)} \\ \text{ with } Q\cdot X =2u+2v+2x+2y \}. \end{aligned} \] Consider the composition of maps \[ X^{(2)}\times X^{(2)} \xrightarrow{m} X^{(4)} \xrightarrow{s} Pic^8(X), \] where $m$ is the addition of divisors and $s$ is the ``square'' map $\sum p_i \mapsto \mathcal O_X(2\sum p_i)$. The map $s$ is surjective since it is the composition of two surjective maps: $X^{(4)}\longrightarrow Pic^4(X)$ and $Pic^4 (X) \longrightarrow Pic^8(X)$, $L\mapsto L^{\otimes2}$. We observe that $s^{-1}(\omega_X^{\otimes2})$ is the disjoint union of $2^{2\cdot g(X)}=2^6=64$ components.
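In fact, the components are easy to enumerate: a divisor $D\in X^{(4)}$ satisfies $\mathcal O_X(2D)\cong \omega_X^{\otimes2}$ if and only if $\alpha:=\mathcal O_X(D)\otimes \omega_X^{-1}$ is a $2$-torsion point, so \[ s^{-1}(\omega_X^{\otimes2})=\bigsqcup_{\alpha\in JX_2}\vert \omega_X\otimes \alpha\vert, \] and $\vert JX_2\vert =2^{2\cdot 3}=64$.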
One is isomorphic to a projective plane and it is simply the canonical linear series. This component is rather uninteresting since it gives only double lines $l^2$. The other $63$ components are projective lines corresponding to the paracanonical systems $\vert \omega_X \otimes \alpha\vert $, $\alpha \in JX_2\smallsetminus \{0\}$. A divisor $D$ in one of these lines is formed by $4$ points not on a line and such that there is a conic intersecting $X$ in $2D$. Define $\Gamma_{\alpha }:=m^{-1}(\vert \omega_X \otimes \alpha \vert) $ for a non-trivial $2$-torsion point $\alpha $. Then \[ \Gamma = \bigcup_{\alpha \in JX_2\smallsetminus \{0\}} \Gamma _{\alpha}. \] Since $\Gamma $ does not contain points of $\mathcal B$, its preimage in $S$ is isomorphic to $\Gamma$, hence it is a disjoint union of curves in $S$ that we still denote by $\Gamma $. Call $\mathsf U_X$ the open set obtained by removing from $S$ the set $\Gamma $ and the set of points in the exceptional divisors corresponding to points lying on two bitangents. \subsection{The involution on $\mathsf G^1_4(X)$} We want to prove that the natural involution in $\mathcal Rb_{3,sp}^{tr}$, which exchanges the trigonal maps $f_x$ and $f_y$, corresponds, via the trigonal construction, to the involution \[ i:(X,L) \mapsto (X,\omega_X^{\otimes2}\otimes L^{-1}) \] in $\mathcal M^{tet}_{3 ,*}$. \begin{remark} The involution in $\mathcal Rb_{3, sp}^{tr}$ does not change the covering (it acts only on the trigonal series). Since the Prym variety of the covering is isomorphic to the Jacobian of the associated tetragonal curve we know that the involution $i$ has to leave the curve $X$ invariant. \end{remark} To prove the equality of the involutions we go in the opposite direction: we fix the quartic $X$ as above and the two complete linear series $f_{r, s}$, $f_{u,v}$ such that $r, s, u, v$ are on a line. These linear series correspond under the involution $i$.
It is enough to prove the coincidence of both involutions for these examples since they are the generic elements. \\ Define (following Recillas, see \cite{lange-birkenhake}): \[ \tilde D_{r,s} =\{ a+b \in X^{(2)} \mid f_{r,s}(a)=f_{r,s}(b) \}. \] Notice that there are involutions $\sigma_{r, s}$ (resp. $\sigma_{u, v}$) on the curves $\tilde D_{r,s}$ (resp. $ \tilde D_{u,v} $) sending each pair of points to the complement in the corresponding linear series. We denote by $\tilde C_{r, s}$ and $\tilde C_{u, v}$ the quotient (trigonal) curves. Recillas' trigonal construction says that there are isomorphisms of principally polarized abelian varieties: \[ P(\tilde D_{r, s}, \tilde{C}_{r, s}) \cong JX \cong P(\tilde D_{u, v}, \tilde{C}_{u, v}). \] The assignment $(X,f_{r, s}) \mapsto (\tilde D_{r, s}, \tilde{C}_{r, s}, M)$ is the inverse of the trigonal construction. Here $M$ is the $g^1_3$ on $\tilde{C}_{r, s}$ which sends $[p_1+p_2]=[p_3+p_4]$ to the corresponding conic in $\mathcal F_{r, s}$. Its fibre is of the form $\{[p_1+p_2], [p_1+p_3], [p_1+p_4]\}$. Notice that here the tetragonal maps $f_{r, s}$ (resp. $ f_{u, v} $) come with two simple ramification points over $l^2$. This implies that, unlike what occurs in Recillas' construction, $(\tilde D_{r, s}, \tilde{C}_{r, s})$ (resp. $(\tilde D_{u, v}, \tilde{C}_{u, v})$) is an admissible double cover of type $(\ast)$ in the sense of Beauville. Donagi extended the trigonal construction to admissible double covers (for details see \cite[Theorem 2.9]{donagi}). Call $(D_{r, s}, {C}_{r, s})$ (resp. $(D_{u, v}, {C}_{u, v})$) the normalizations of these coverings. \begin{prop}\label{Propinvoluzione} \begin{enumerate} \item [a)] There is a canonical isomorphism $\tilde D_{r, s}\xrightarrow{\lambda } \tilde D_{u, v}$ compatible with the involutions: $\lambda \circ \sigma_{r, s}=\sigma_{u, v} \circ \lambda $. In particular there is an isomorphism $\tilde{C}_{r, s} \xrightarrow{\overline \lambda} \tilde{C}_{u, v}$.
\item [b)] Let $b \in \tilde{C}_{r,s}$ be the branch point of $\tilde D_{r, s} \xrightarrow{\pi_{r,s}} \tilde{C}_{r,s}$, then $b':=\overline \lambda (b)$ is the branch point of $\tilde D_{u, v} \longrightarrow \tilde{C}_{u, v}$. \item [c)]There exist points $x \in C_{r,s}$ and $y\in C_{u, v}$ such that $\vert b_1+b_2+x \vert $ and $\vert b_1'+b_2'+y \vert $ are the corresponding trigonal series (where $b_i$ and $b_i'$ are the preimages of $b$ and $b'$ in the normalizations of $\tilde{C}_{r,s}$ and $\tilde{C}_{u,v}$). \item [d)] There is an isomorphism $\mathcal O_{C_{u, v}}(b_1'+b_2'+\overline \lambda (x)+y)\cong \omega_{C_{u,v}}$. \end{enumerate} \end{prop} \begin{proof} Let $p_1+p_2 \in \tilde D_{r,s}$. By definition $h^0(X,\mathcal O_X(2r+2s-p_1-p_2))=1$. Thus, by Serre duality we have that \[ \begin{aligned} 1= & h^0(X,\omega_X(p_1+p_2-2r-2s))=\\ &h^0(X,\mathcal O_X(r+s+u+v+p_1+p_2-2r-2s))= \\ &h^0(X,\mathcal O_X(u+v+p_1+p_2-r-s)). \end{aligned} \] Let $q_1+q_2\in \vert u+v+p_1+p_2-r-s \vert$. Let us see that $q_1+q_2 \in \tilde D_{u,v}$. Indeed: \[ \begin{aligned} & h^0(X,\mathcal O_X(2u+2v-q_1-q_2))=\\ &h^0(X,\mathcal O_X(2u+2v-u-v-p_1-p_2+r+s))\\ =&h^0(X,\mathcal O_X(u+v+r+s-p_1-p_2))=1. \end{aligned} \] Therefore the map $\tilde D_{r, s}\xrightarrow{\lambda } \tilde D_{u, v}$ given by \[ \lambda (p_1+p_2)=q_1+q_2\sim p_1+p_2+u+v-r-s, \] is well defined and the compatibility with the involutions is an exercise. This proves a). Observe that b) is an obvious consequence once we notice that $\sigma_{r, s}$ has a unique fixed point given by $r+s$. The same occurs in $u+v$ for $\sigma_{u, v}$. From point a) we know that $\lambda(r+s)=u+v$. Thus, calling $b$ and $b'$ the images of $r+s$ (resp. $u+v$) in $\tilde{C}_{r,s}$ (resp. in $\tilde{C}_{u,v}$), we get $\overline \lambda(b)=b'.$ To prove c) we refer to the description of the extended trigonal construction given by Donagi. 
Indeed, we have that the fibre of the 3:1 map $\tilde{C}_{r,s}\rightarrow\PP$ over $l^2$ consists of a node at $b$ and an additional point $x=\pi_{r,s}(2r)=\pi_{r,s}(2s)$. The normalization of $\tilde{C}_{r,s}$ gives the trigonal series $|b_1+b_2+x|.$ The same occurs for $\tilde{C}_{u,v}$ calling $y=\pi_{u,v}(2u)=\pi_{u,v}(2v)$. Finally we conclude with d). First notice that, with an abuse of notation, we still call $\overline{\lambda}$ the induced isomorphism between the normalized curves ${C}_{r,s}\rightarrow{C}_{u,v}$. Then consider $C_{r,s}$ and $C_{u,v}$ as quartic plane curves and the canonical divisors obtained intersecting $C_{r,s}$ (resp. $C_{u,v}$) with the line $b_1+b_2$ (resp. $b_1'+b_2'$). Thus we get $$K_{C_{r,s}}=x+b_1+b_2+z\quad \text{and} \quad K_{C_{u,v}}=y+b'_1+b'_2+w.$$ Now we have two possibilities: $$w=\overline{\lambda}(x)\quad \text{or} \quad w=\overline{\lambda}(z).$$ We claim that $ w=\overline{\lambda}(x) $. In fact, if $ w=\overline{\lambda}(z) $, then, since by construction $x=\pi_{r,s}(2r)=\pi_{r,s}(2s)$, we would have $\lambda(2r)=2u$ or $\lambda(2r)=2v$. But this contradicts the definition of $\lambda$ given above. Hence we get $\mathcal O_{C_{u, v}}(b_1'+b_2'+\overline \lambda (x)+y)\cong \omega_{C_{u,v}}$.\\ \end{proof} \begin{remark} The isomorphism of Proposition \ref{Propinvoluzione} d) gives the compatibility between the two trigonal maps $f_x$ and $f_y$ defined for the general element of $\mathcal{R}b_{3,sp}^{tr}$ and the two trigonal maps obtained on $C_{r,s}$ (resp. $C_{u,v}$) projecting from $x$ or from $z$ (resp. from $\overline{\lambda}(x)$ or from $y$).\\ \end{remark} \begin{teo}\label{teo5} The fibre of $\mathcal{P}_{3,2}$ at a generic $JX$ is isomorphic to the quotient of $\varphi (\mathsf U_X)\subset \mathsf G^1_4(X)$ by the involution $i$. \begin{proof} Starting with a general 3-dimensional abelian variety, i.e.
the Jacobian of a curve $X$, diagram \eqref{Diag} says that the fibre of $\mathcal{P}_{3,2}$ over $JX$ is described by the fibre over $X$ of the map $\mathcal{M}_{3,*}^{tet}\rightarrow\mathcal{M}_{3}$. Thus we need to look for all tetragonal maps $k:X\rightarrow\PP$ which have an \'{e}tale point on every fibre and only one fibre with exactly two ramification points of order 2. We consider the map $\varphi_0$ in \eqref{blowup} and we look at its image in $Pic^4(X)$. The blow-up $S$ of $X^{(2)}$ at $\mathcal{B}$ recovers all tetragonal maps obtained as projections from points on bitangent lines. Hence, considering the open set $\mathsf{U}_X$, we avoid tetragonal maps which have two fibres of type $2p+2q$ (which are not allowed by the trigonal construction). Finally, since $\mathcal{R}b_{3,sp}^{tr}$ has an involution which exchanges the two special trigonal series, we let $i$ act on $\mathcal{M}_{3,*}^{tet}$ to identify the two tetragonal maps on $X$ which correspond (by the isomorphism \eqref{IsoLangeOrtega}) to the trigonal maps $f_x$ and $f_y$ and we denote by $\mathcal{M}q_{3,*}^{tet}$ the corresponding moduli space. Letting $i$ act on $\varphi (\mathsf U_X)$, we obtain the fibre over $JX$.\\ \end{proof} \end{teo} \section{Case $g=4,r=2$} In this last case we identify $\mathcal R_{4,2}$ with $\Delta^{n,0} $, the set of isomorphism classes of irreducible admissible coverings of curves of arithmetic genus $5$ with exactly one node. Notice that $\Delta^{n,0}$ is a dense open set of an irreducible divisor $\Delta^n$ in the boundary of $\overline {\mathcal R}_5$. In \cite{donagi} Donagi describes the generic fibre of the extended classical Prym map \[ \overline {\mathcal{P}}_{5}: \overline {\mathcal{R}}_5 \rightarrow \mathcal{A}_4.
\] He defines a birational map \[ \kappa: \mathcal A_4 \rightarrow \mathcal {RC}^+, \] where $\mathcal {RC}^+$ is the moduli space of pairs $(V,\delta)$, where $V$ is a smooth cubic threefold and $\delta \in JV_2$ is a non-zero $2$-torsion point in the intermediate Jacobian $JV$ satisfying a ``parity'' condition. An explicit open set in $\mathcal A_4$ where $\kappa $ is an isomorphism is given in \cite{izadi}. The main theorem in section $5$ of \cite{donagi} says that the fibre of $\kappa \circ \overline {\mathcal P}_5 $ at a generic $(V,\delta)$ is isomorphic to the surface $\widetilde {F(V)}$, which is the unramified double covering of the Fano surface $F(V)$ attached to $\delta $ (recall that $Pic^0(F(V))\cong JV$). Our aim is to identify which elements of $\widetilde {F(V)}$ correspond to admissible irreducible double coverings of nodal curves. In other words, we want to find the intersection: \[ \widetilde {F(V)}\cap \Delta ^{n,0}. \] We will prove that the image of this intersection by the double covering \[ \tau: \left(\kappa \circ \overline {\mathcal P}_5\right)^{-1}(V,\delta)=\widetilde {F(V)} \rightarrow F(V) \] lies in a curve $\Gamma $ already considered in the literature and which is defined as follows: \[ \Gamma :=\{ l \in F(V) \mid \exists\; \text{a plane}\; \Pi \;\text{and a line}\; r\in F(V)\; \text{with} \;V\cdot \Pi = l+2r\}. \] \begin{remark}\label{open_set} It is stated in \cite[Proposition 2.6]{naranjo-ortega} that the curve $\Gamma $ is smooth for a generic cubic threefold. There is a mistake in the parameter count of the proof in that paper and in fact this curve always has nodes. Nevertheless, for a generic element of $\Gamma $ the plane $\Pi $ in the definition above is unique. Hence for a general element $l\in \Gamma$ the discriminant quintic $Q_l$ of the conic bundle structure provided by $l$ has only one node. We denote by $\Gamma_0 \subset \Gamma$ the open set of the points with this property.
\end{remark} Let us denote by $\widetilde \Gamma $ the curve $\tau ^{-1}(\Gamma ).$ Izadi developed in \cite{izadi} the ideas outlined by Donagi in \cite[section 5]{donagi}. In section 3 she studied in detail the action of the involution $\lambda $ associated with $\tau$ and the intersection of $\widetilde{F(V)}$ with the boundary. This intersection is a curve that we call $\Gamma '$ as explained in Proposition \ref{dominant}. The curve $\Gamma'$ is interchanged with a curve in the smooth locus of $\widetilde{F(V)}$, the locus of Prym curves with odd vanishing theta null (\cite[p. 121]{izadi}). These two curves map to $\Gamma $. Hence the preimage of $\Gamma$ breaks into two components. Our aim is to determine the intersection of $\Gamma'$ with $\Delta^{n,0}.$ \begin{prop} For a generic cubic threefold $V$ we have that \[ \tau( \widetilde {F(V)}\cap \Delta ^{n,0})\subset \Gamma. \] In particular $\widetilde {F(V)}\cap \Delta ^{n,0} \subset \Gamma'$. \end{prop} We have two proofs for this fact. The first follows closely Donagi's description of the fibre and concludes that if $l\in F(V) \smallsetminus \Gamma $ then $\tau ^{-1}(l)$ is given by coverings of smooth curves, that is $\tau^{-1}(l) \subset \mathcal R_5$. Let us briefly recall this description: let $A\in \mathcal A_4$ be a generic abelian fourfold and put $\kappa (A)=(V,\delta)$. Choose a generic line $l\in F(V)$ and denote by $\pi_l:\widetilde {Q_l}\longrightarrow Q_l$ the admissible double covering attached to the conic bundle structure on $V$ provided by $l$. Then $Q_l$ is a smooth quintic plane curve and $P(\widetilde {Q_l},Q_l)\cong JV$ (see \cite{beauJacInt} or \cite[Appendix C]{clemgriff}). Let $\sigma \in ({JQ_l})_2$ be the $2$-torsion point that determines $\pi_l$.
Then, by the general theory of Prym varieties (see \cite[page 332, Corollary 1]{mumford}), there is an exact sequence \begin{equation}\label{two-torsion} 0 \longrightarrow \langle \sigma \rangle \longrightarrow \langle \sigma \rangle ^\perp \longrightarrow P(\widetilde {Q_l}, Q_l)_2=JV_2 \longrightarrow 0, \end{equation} where $\langle \sigma \rangle ^\perp \subset ({JQ_l})_2 $ is the orthogonal with respect to the Weil pairing. Denote by $\nu $ a preimage of the fixed $2$-torsion point $\delta $ in $JV_2$, then the other preimage is $\nu':=\nu + \sigma $. Hence $\sigma,\nu,\nu'$ define an isotropic subgroup $W_l$ of rank $2$ in $({JQ_l})_2$. The parity condition of $(V, \delta) \in \mathcal {RC}^+$ means that $h^0(Q_l,\mathcal O_{Q_l}(1)\otimes \nu )$ and $h^0(Q_l,\mathcal O_{Q_l}(1)\otimes \nu' )$ are even. Thus there are two curves of genus $5$, $C$ and $C'$, such that $J C\cong P(Q_l, \nu)$ and $JC'\cong P(Q_l, \nu')$. This is due to the existence of a bijection between non-hyperelliptic genus 5 curves $C$ and admissible coverings of quintic plane curves with an even 2-torsion point. The quintic appears as the quotient of $W^1_4(C)$ by the natural involution. Using for $P(Q_l,\nu)$ an exact sequence similar to (\ref{two-torsion}), we get: \[ 0 \longrightarrow \langle \nu \rangle \longrightarrow \langle \nu \rangle ^\perp \longrightarrow P(Q_l,\nu)_2=J{C}_2 \longrightarrow 0. \] Therefore the rank 2 subgroup $W_l\subset \langle \nu \rangle ^\perp $ determines on $JC$ a $2$-torsion point $\mu$. Similarly there is a $\mu ' \in J C'_2$. Denoting by $\lambda $ the sheet interchange for the covering $\tau$, Donagi proves that: \[ \lambda (C,\mu)=( C',\mu') \qquad \text {and } \qquad P( C,\mu) \cong P(C',\mu')\cong A. \] Hence the preimages of $l$ by $\tau $ are the elements $( C,\mu), (C',\mu')$ obtained previously. In particular, since they are smooth, we find that $\tau^{-1}(l) \subset \mathcal R_5$, as claimed. This concludes the first proof.
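As a consistency check on the orders in \eqref{two-torsion}: $Q_l$ is a smooth plane quintic, so $g(Q_l)=6$ and $({JQ_l})_2\cong (\mathbb Z/2\mathbb Z)^{12}$; since the Weil pairing is non-degenerate and $\sigma\neq 0$, the subgroup $\langle \sigma \rangle ^\perp$ has index $2$, hence order $2^{11}$, and its quotient by $\langle \sigma \rangle$ has order $2^{10}=\vert JV_2\vert$, in accordance with $\dim JV=5$.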
\\ The second proof is more constructive and more useful for our purposes. We show directly that for a covering in $\Delta ^{n,0}$ the corresponding line $l$ belongs to $\Gamma $. This approach relies on the following result of Izadi (see \cite[Theorem 6.13]{izadi}): \begin{teo}\label{izadi} Let $(V,\delta)$ be a generic smooth cubic threefold endowed with a non-zero 2-torsion point and let $\pi^*: D^* \rightarrow C^*$ be an admissible covering in the fibre of $(V,\delta)$. Assume that $\tau ( \pi^*)=l\in F(V)$. Then the discriminant quintic $Q_l$ of the conic bundle structure attached to $l$ parametrizes the set of singular quadrics through the canonical model of $ C^*$. \end{teo} By canonical model we mean the image of $C^*$ by the morphism attached to the dualizing sheaf. \begin{remark} The line $l$ attached to $\pi^*$ is defined in \cite{izadi} in a different way. However, it is proved in 6.30 of loc. cit. that it equals $\tau (\pi^*)$. \end{remark} Let $(V,\delta)$ be a generic smooth cubic threefold endowed with a non-zero 2-torsion point and let $\pi : D\rightarrow C$ be the generic element in $\mathcal R_{4,2}$ of the fibre of $\mathcal P_{4,2}$ above $(V,\delta)$. We denote by $\pi^* : D^*\rightarrow C^*$ the corresponding admissible covering in $\Delta ^{n,0} \subset \overline{ \mathcal{R}}_5$. By definition $C^*=C/(b_1\sim b_2)$ is a curve of arithmetic genus $5$ with a node at $p$ obtained by glueing the two branch points $b_1, b_2$ of $\pi$. \begin{lemma}\label{Lemmaquintiche}Under the above assumptions:\begin{itemize} \item[a)] The quintic plane curve parametrizing the singular quadrics containing the image of the canonical map of $C^*$ is a quintic with exactly one node. In particular $\tau (\pi ^*) \in \Gamma_0.$ \item[b)] The quintic plane curve parametrizing the singular quadrics containing the canonical image of an arithmetic genus 5 curve with at least two nodes is a quintic with at least two nodes.
\end{itemize} \end{lemma} \begin{proof} Since the general fibre of $\mathcal P_{4,2}$ has pure dimension 1, for dimensional reasons it does not intersect the locus where $C$ is hyperelliptic and it intersects the locus where $C$ admits a unique $g^1_3$ in at most finitely many points. Therefore it is enough to prove part $ a) $ assuming $C$ not hyperelliptic and with two distinct $g^1_3$'s. The map $\varphi: C \rightarrow \mathbb P(H^0(C, \omega_C(b_1+b_2))^*)$ satisfies $\varphi (b_1)=\varphi (b_2)$ and it is an isomorphism outside these two points. Hence $\varphi (C)=C^*$ and $\varphi$ can be seen as the normalization $n:C\rightarrow C^*$ composed with the inclusion $C^*\subset \mathbb{P}(H^0(C, \omega_C(b_1+b_2))^*)=\mathbb{P}^4$. We have the following exact sequence: \begin{equation}\label{normalization} 0\rightarrow \omega_{C^*}\rightarrow n_*(\omega_C(b_1+b_2))\rightarrow \mathbb{C}_p\ra0, \end{equation} which induces \begin{equation}\label{coomnormalization} 0\rightarrow H^0(C^*,\omega_{C^*})\rightarrow H^0(C,\omega_C(b_1+b_2))\xrightarrow{res}\mathbb{C}\rightarrow\mathbb{C}\ra0, \end{equation} where $res$ is the map $\omega\mapsto \text{res}_{b_1}\omega+\text{res}_{b_2}\omega$. By the residue theorem it vanishes identically. Therefore \[ H^0(C^*,\omega_{C^*})\cong H^0(C,\omega_C(b_1+b_2)). \] Now let $L$ be a $g^1_3$ on $C$ and consider bases \[ H^0(C,L)=\langle t_1,t_2\rangle, \qquad H^0(C,\omega_C\otimes L^{-1})=\langle s_1,s_2\rangle. \] Put \[ \omega_1=t_1s_1 \quad \omega_2=t_2s_1 \quad \omega_3=t_1s_2 \quad \omega_4=t_2s_2, \] to get $$ H^0(C,\omega_C)=\langle \omega_1,\omega_2, \omega_3, \omega_4\rangle $$ and then completing the basis we get $ H^0(C,\omega_C(b_1+b_2))=\langle \omega_1,\omega_2, \omega_3, \omega_4, \omega_5\rangle $. 
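Note that, by construction, these sections satisfy the relation
\[ \omega_1\omega_4 - \omega_2\omega_3 = (t_1s_1)(t_2s_2)-(t_2s_1)(t_1s_2) = 0 \quad \text{on } C, \]
so, in the coordinates $x_1,\dots,x_5$ dual to this basis, the canonical model of $C$ lies on the rank $4$ quadric $\{x_1x_4-x_2x_3=0\}$; this is the quadric $Q$ appearing below.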
We obtain the following diagram: \begin{equation}\label{cono} \begin{tikzcd} C\arrow[r,"\varphi"]\arrow[dr,hook, "g"]&\mathbb{P}^4\arrow[d,dashed]\\ &\mathbb{P}^3 \end{tikzcd} \end{equation} where $g$ is the canonical map and the vertical rational map is given by dualizing the inclusion $H^0(\omega_C)\subset H^0(\omega_C(b_1+b_2))$. It corresponds to the projection from the point $p$. Since $C$ is a general curve of genus 4, there exists a unique quadric $Q$ containing its canonical model and it has rank $4$, namely (here $\odot$ denotes the symmetric product): $Q=\omega_1\odot\omega_4-\omega_2\odot\omega_3.$ In particular, in the chosen coordinates, $\varphi(b_i)=p=[0:0:0:0:1]$ ($i=1,2$) and $Q=\{x_1x_4-x_2x_3=0\}.$ The preimage of $Q$ by the projection is a cone with vertex $p$ which contains $C^*$ and has rank four (and in fact the same equation). We still call it $Q$. Using now (see e.g. \cite[p. 90]{ACGH2}) \begin{equation}\label{normalizationtwisted} 0\rightarrow \omega_{C^*}^{\otimes2}\rightarrow n_*(\omega_C^{\otimes2}(2b_1+2b_2))\rightarrow \mathbb{C}_p\ra0 \end{equation} and its corresponding long exact sequence in cohomology, we obtain that, also in the case of a nodal curve of arithmetic genus 5, \[\dim \ker (Sym^2H^0(C^*, \omega_{C^*})\rightarrow H^0(C^*, \omega_{C^*}^{\otimes2}))=\dim I_2(\omega_{C^*})=3.\] Taking $ Q, Q_1,Q_2$ as a basis, we would like to show that the discriminant curve $\Delta$ of the family of quadrics \[ \mathbb{P}(I_2(\omega_{C^*}))=\mathbb P(\langle Q, Q_1,Q_2\rangle)\] is nodal. By the above considerations $p\in S(Q)\cap C^*$, where $ S(\cdot) $ denotes the singular locus. In the paper \cite{wall}, Wall studied the discriminant locus of nets of quadrics. 
In particular \cite[Lemma 1.1]{wall} ensures that $([1:0:0], p)$ belongs to $S(N)$, where \[ N:=\{([\lambda_0:\lambda_1:\lambda_2],x)\mid x^t(\lambda_0 A_0+\lambda_1 A_1+\lambda_2 A_2)x=0 \}\subset \mathbb P(\langle Q, Q_1,Q_2\rangle)\times \mathbb P^4 \] is the universal family of the net of quadrics containing $C^*$ ($A_i, i=0,1,2$ are the matrices associated to $Q,Q_1,Q_2$). To be more precise we would have to write $Q=Q_{[1:0:0]}$ and analogously for the other $Q_i$. We will omit the subscript when possible. Assuming that every point in $S(C^*)$ is tame (we give the definition below), the map $S(N)\rightarrow S(C^*)$, which sends $(\lambda=[\lambda_0:\lambda_1:\lambda_2],x)$ to $ x$, becomes bijective. Since, in our case, $S(C^*)=\{p\}$, we obtain that $S(N)=\{([1:0:0],p)\}$. Moreover $p\in S(C^*)$ and $([1:0:0],p) \in S(N)$ have the same type of singularity (by \cite[Proposition 1.3]{wall}). \begin{claim}\label{mapproj} The map \begin{align*} \rho: N&\rightarrow\mathbb{P}^2\\ (\lambda, x)&\mapsto\lambda \notag \end{align*} sends $S(N)$ to $S(\Delta)$. \begin{proof} First observe that since\[\rho^{-1}(\Delta)=\{(\lambda,x) : x \in Q_{\lambda}\;\text{and}\; Q_{\lambda}\;\text{is singular}\},\] we get \[S(N)\subseteq \rho^{-1}(\Delta).\] We conclude with the Jacobian criterion. Indeed, using local coordinates for which a singular point of $N$ is $ ([1:0:0],[0:0:0:0:1]) $, we have: \begin{equation*} Q\leftrightarrow\begin{pmatrix} a_1 &0 &0 &0 &0\\ 0 &a_2 &0 &0 &0\\ 0 &0 &a_3 &0 &0\\ 0 &0 &0 &a_4 &0\\ 0 &0 &0 &0 &0\\ \end{pmatrix}, \; Q_1\leftrightarrow\begin{pmatrix} & & & & \\ & & & &\\ & &\ast & &\\ & & & &\\ & & & &0\\ \end{pmatrix}, \; Q_2\leftrightarrow\begin{pmatrix} & & & & \\ & & & &\\ & &\ast & &\\ & & & &\\ & & & &0\\ \end{pmatrix}, \end{equation*} since $ p $ is a point in all quadrics of the net and $ Q $ is singular in $p$. 
Therefore $ \lambda_0A_0+\lambda_1A_1+\lambda_2A_2 $ has a $ 0 $ in position $(5,5)$, homogeneous linear polynomials $l=l(\lambda_1,\lambda_2)$ off the diagonal and $l+\lambda_0a_i$ on the diagonal ($i=1,2,3,4$). Put $ G(\lambda_0,\lambda_1,\lambda_2)=\det(\lambda_0A_0+\lambda_1A_1+\lambda_2A_2 ) $. Then it is possible to write $$G=f_5+\lambda_0f_4+\lambda_0^2f_3+\lambda_0^3f_2,$$ where $ f_i $ are homogeneous polynomials in $(\lambda_1,\lambda_2)$ of degree $i$. Since every $f_i$ has degree at least $2$ in $(\lambda_1,\lambda_2)$, all the partial derivatives of $G$ vanish at $[1:0:0]$, i.e. $\Delta=\{G=0\}$ is singular there, and thus $[1:0:0]$ belongs to $S(\Delta)$.\\ \end{proof} \end{claim} Applying \cite[Theorem 1.4]{wall} to $[1:0:0] \in S(\Delta)$ (resp. $([1:0:0],p) \in S(N)$), we conclude that the discriminant locus of $N$ has a unique nodal point, as claimed. It only remains to show that $p$ is tame. By definition a point of $C^*$ is \textit{tame} if the tangent planes to $C^*$ at the point span a $2$-dimensional vector space. We check that $p$ is tame: call $\pi_i, i=0,1,2$ the tangent planes of the three quadrics at $p$. In coordinates: \[ \pi_i: (0:0:0:0:1)A_i \textbf{y}=0,\qquad i=0,1,2. \] Thus: \[ p A_0 \textbf{y}=(0:0:0:0:1) \begin{pmatrix} 0 &0 &0 &1 &0\\ 0 &0 &-1 &0 &0\\ 0 &-1 &0 &0 &0\\ 1 &0 &0 &0 &0\\ 0 &0 &0 &0 &0\\ \end{pmatrix} \textbf{y}=0 \] and \[ p A_i \textbf{y}=0 \Leftrightarrow (a_{5,1}^i,a_{5,2}^i,a_{5,3}^i,a_{5,4}^i,a_{5,5}^i) \textbf{y}=0, \] where $a^i_{5,j}$ are the coefficients of the last row of the matrices $A_i, i=1,2$. Call these vectors $\mathbf{a_1},\mathbf{a_2}$. Then $p$ is tame if \[ \dim \langle \mathbf{a_1},\mathbf{a_2}\rangle=2. \] Suppose, by contradiction, that $\mathbf{a_2}=\mu\mathbf{a_1}$. Then the quadric $Q'=\mu Q_1-Q_2$, whose matrix is $\mu A_1-A_2$, belongs to $I_2(K_C)$, so $Q'=\nu Q$. Therefore \[ 0=\nu Q -Q'=\nu Q-\mu Q_1+Q_2, \] which is impossible since $Q, Q_1, Q_2$ are linearly independent. This concludes the proof of part $a)$. In order to prove part $b)$, let us start with an admissible double-nodal covering $\pi^{2*}: D^{2*}\rightarrow C^{2*}$, i.e. $S(C^{2*})=\{p_1,p_2\}$. 
Consider the following partial normalization maps: \begin{equation*} \begin{tikzcd} N_1\arrow[r,"\alpha"]\arrow[rr,bend left, "n"]&\tilde{N}_1\arrow[r,"\beta"]&C^{2*} \end{tikzcd} \end{equation*} where $n$ is the normalization, $\beta$ is the partial normalization of the node at $p_2$, while $\alpha$ is that of the node at $p_1$. For dimensional reasons we can assume that $N_1$ is not hyperelliptic. A short exact sequence for $\omega_{\tilde{N}_1}$ similar to \eqref{normalization} ensures that $$\dim I_2(\omega_{\tilde{N}_1})=\dim I_2(\omega_{N_1}(q_1+q_1'))=1,$$ where $q_1,q_1'$ are the two points of $N_1$ sent to $p_1$ by $\alpha$. We remark that the unique quadric $Q$ containing the image of \[N_1\rightarrow\tilde{N}_1\subset \mathbb{P}(H^0(\omega_{N_1}(q_1+q_1'))^*) \] cannot be singular in $p_1$. Otherwise, if we write in coordinates $$Q=\sum_{i,j\leq 4}a_{ij}x_ix_j\quad \text{and} \quad p_1=[0:0:0:1],$$ we would get $\partial_iQ(p_1)=a_{i4}=0$ for every $i$. But this would imply $Q\in I_2(\omega_{N_1})$, which is impossible since $I_2(\omega_{N_1})=0$. Then, dualizing the inclusion $H^0(\omega_{\tilde{N}_1})\subset H^0(\omega_{\tilde{N}_1}(q_2+q_2'))$ (the points $q_2,q_2'$ are identified to $p_2$ in $C^{2*}$), we obtain a diagram as \eqref{cono} where the rational map $\mathbb{P}^4\dashrightarrow\mathbb{P}^3$ is given by the projection from $p_2$. The preimage of $Q$ is a cone with vertex $p_2$ which is smooth in $p_1$ and which contains our curve with two nodes. With an abuse of notation, we still denote it by $Q$. The short exact sequence \eqref{normalizationtwisted} for the bicanonical $\omega_{C^{2*}}^{\otimes2}$ of $C^{2*}$ shows that $\dim I_2(\omega_{C^{2*}})=3.$ Call, as before, $N\subset \mathbb{P}(I_2(\omega_{C^{2*}}))\times \mathbb{P}^4$ the universal family of the net of quadrics containing $C^{2*}$. Thus, $Q$ is the point $\lambda=[1:0:0]$ in $\mathbb{P}^2$. 
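The dimension count here is the same as in part $a)$: the multiplication map $Sym^2H^0(C^{2*}, \omega_{C^{2*}})\rightarrow H^0(C^{2*}, \omega_{C^{2*}}^{\otimes2})$ is surjective and, by Riemann--Roch on the nodal curve of arithmetic genus $5$ (so that $\deg \omega_{C^{2*}}^{\otimes2}=16$ and $h^1(C^{2*},\omega_{C^{2*}}^{\otimes2})=0$),
\[ \dim I_2(\omega_{C^{2*}}) = \dim Sym^2H^0(C^{2*},\omega_{C^{2*}}) - h^0(C^{2*},\omega_{C^{2*}}^{\otimes2}) = 15-12 = 3. \]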
Since $C^{2*}$ has two singular points, \cite[Lemma 1.2]{wall} shows that there exists $\mu\in \Delta$ (the discriminant curve of the net $\mathbb{P}(I_2(\omega_{C^{2*}}))$) such that $p_1\in S(Q_{\mu})$. Hence $Q\neq Q_{\mu}$. This concludes the proof: $(\lambda, p_2) $ and $(\mu, p_1) $ belong to $S(N)$. Therefore the map $\rho$ of Claim \ref{mapproj} (which also works when $Q$ has rank 3, that is $a_4=0$) determines two different singular points in $\Delta$. The cases of three or four nodes are similar: the partial normalization at one point leads to a curve of arithmetic genus 4 with singular points. Call one of them $p_1$. As above we find a quadric $Q$ in $I_2$ which is a cone with vertex $p_1$ over a quadric which is smooth in at least one of the remaining nodes. Applying Wall's theorems, we deduce the existence of another quadric which is singular in at least one of the other nodes. This leads to a discriminant curve $\Delta$ which has at least two singular points. \\ \end{proof} Thus the following holds (see \ref{open_set} for the definition of $\Gamma_0$): \begin{teo}\label{teo6} The generic fibre of $\mathcal P_{4,2}$ at $(V,\delta)$ is isomorphic to $ \Gamma_0$. \begin{proof} Take $\pi: D\rightarrow C$ in $\mathcal R_{4,2}$ and denote, as above, by $\pi^*$ the corresponding element in $\Delta^{n,0}$. Lemma \ref{Lemmaquintiche}$a)$ shows that $\tau(\pi^*)$ belongs to $\Gamma_0 $. Therefore, in order to show that the generic fibre of $\mathcal P_{4,2}$ at $(V,\delta)$ is isomorphic to $\Gamma_0$, it remains to prove that an element in $\Gamma '$ with two or more nodes maps to $\Gamma \setminus \Gamma_0$. Since $A\in \mathcal{A}_4$ is generic, we can suppose $A$ simple and hence, using \cite{beau}, we can just take into account coverings of irreducible curves. Finally, the inclusion $\Delta^{n,0}\subset \Delta^n$ guarantees that we only have to take care of admissible coverings of irreducible curves with more than one node. 
Therefore, suppose by contradiction that $\widetilde \Gamma$ contains an admissible covering of an irreducible curve with (at least) two nodes. Lemma \ref{Lemmaquintiche}$b)$ gives us a quintic plane curve with at least two nodes. This contradicts the assumption on $\Gamma_0$ and thus we can conclude. \\ \end{proof} \end{teo} \section{Fibres of the Prym map and Shimura varieties} In this section we give some examples of irreducible components of some fibres of the ramified Prym maps which yield Shimura subvarieties of ${\mathcal A}_g$. Recall that in \cite{moonen-special}, \cite{moonen-oort}, \cite{fgp}, \cite{fpp} examples of Shimura subvarieties of ${\mathcal A}_g$ generically contained in the Torelli locus have been constructed as families of Jacobians of Galois covers of ${\mathbb P}^1$ or of elliptic curves. Some of them are contained in fibres of ramified Prym maps. In \cite{fgs} and \cite{gm} infinitely many examples of totally geodesic and of Shimura varieties generically contained in the Torelli locus have been constructed as fibres of ramified Prym maps. In particular, the images in ${\mathcal M}_2$ and in ${\mathcal M}_3$ of ${\mathcal R}_{1,2}$, respectively ${\mathcal R}_{1,4}$, are the bielliptic loci and in \cite{fpp} it is shown that their images in ${\mathcal A}_2$, resp. ${\mathcal A}_3$, via the Torelli maps are Shimura subvarieties. In \cite{fgs} it is proven that the irreducible components of the fibres of the Prym maps ${\mathcal P}_{1,2}$, ${\mathcal P}_{1,4}$ are totally geodesic curves and countably many of them are Shimura curves. Moreover in \cite{fgs} the authors show that family $(7) = (23) = (34)$ of \cite{fgp} is a fibre of the Prym map ${\mathcal P}_{1,4}$, which is a Shimura curve. It is easy to see that the Shimura family (24) of \cite{fgp} is contained in a fibre of the Prym map ${\mathcal P}_{2,2}$. 
In fact it is a family of curves $D$ of genus 4 with an action of a group $G = {\mathbb Z}/2 \times {\mathbb Z}/2 \times {\mathbb Z}/3$, such that $D/G \cong {\mathbb P}^1$ and the map $D \to D/G \cong {\mathbb P}^1 $ is branched over $B$ which consists of 4 distinct points. In terms of the generators $g_1, g_2, g_3$ of $G$, with $o(g_1) = o(g_2) = 2$, $o(g_3) = 3$, the monodromy of the covering $\theta: \pi_1({\mathbb P^1} \setminus B) \cong \langle \gamma_1,...,\gamma_4 \ | \ \gamma_1 \gamma_2 \gamma_3 \gamma_4 = 1 \rangle \to G$ is $\theta(\gamma_1) = g_2$, $\theta(\gamma_2) = g_1g_2$, $\theta(\gamma_3) = g_3 $, $\theta(\gamma_4) = g_1g_3^2$ (note that, $G$ being abelian, the product of these images is $g_1^2g_2^2g_3^3 = 1$, as required). One easily checks that the map $D \to D/\langle g_1 \rangle$ is a double covering of a genus 2 curve, ramified over 2 points. Moreover the Prym variety $P(D,C)$, where $C = D/\langle g_1 \rangle$, is isogenous to $E \times E'$ where $E = D/\langle g_2 \rangle$ and $E' = D/\langle g_1g_2 \rangle$ and $E$ and $E'$ do not move, since the Galois covers $E \to E/(G/\langle g_2 \rangle) = D/G = {\mathbb P}^1$ and $E' \to E'/(G/\langle g_1g_2 \rangle) = D/G = {\mathbb P}^1$ both have only 3 critical values. This shows that the family of covers $D \to C$ is contained in a fibre of the Prym map ${\mathcal P}_{2,2}$. Finally, we give an explicit new example of a totally geodesic curve which is an irreducible component of a fibre of the Prym map ${\mathcal P}_{1,2}$. \\ {\bf Example.} Consider a family of Galois covers $\psi_{\lambda} : D_{\lambda} \to D_{\lambda}/G \cong {\mathbb P}^1$, ramified over $B = \{P_1 = \lambda, P_2=1, P_3=0, P_4 = \infty\}$ with $G \cong (\mathbb Z/4 \times \mathbb Z/4) \ltimes \mathbb Z/2$ and with $g(D_{\lambda} ) =11$. We use the following presentation of $G$: \begin{gather*} G \cong \langle g_1, g_2, g_3, g_4, g_5 \ | \ g_1^8=g_2^2=g_3^4=g_4^4=g_5^2=1,\ g_1^2 = g_4, \\ g_3^2 = g_4^2 = g_5, \ g_1^{-1} g_2 g_1 = g_2g_3, \ g_1^{-1} g_3 g_1 = g_3g_5, \ g_2^{-1} g_3 g_2 = g_3g_5 \rangle. 
\end{gather*} Notice that $G = (\langle g_1g_2g_3 \rangle \times \langle g_4 \rangle) \ltimes \langle g_2 \rangle \cong (\mathbb Z/4 \times \mathbb Z/4) \ltimes \mathbb Z/2$. For simplicity, as above, we omit the index $\lambda$ and we denote an element of the family of Galois covers simply by $\psi: D \to D/G \cong {\mathbb P}^1$. The monodromy of the cover $\theta: \pi_1({\mathbb P^1} \setminus B) \cong \langle \gamma_1,...,\gamma_4 \ | \ \gamma_1 \gamma_2 \gamma_3 \gamma_4 = 1 \rangle \to G$ is \[[\theta(\gamma_1) = g_2g_3g_5, \ \theta(\gamma_2) = g_3g_4g_5, \ \theta(\gamma_3) = g_1g_2g_4g_5, \ \theta(\gamma_4) = g_1g_3g_4g_5]\] and these elements have orders $[2,2,4,8]$ in $G$. Consider the subgroup $H = \langle g_2, g_5\rangle \cong \mathbb Z/2 \times \mathbb Z/2$ of $G$. By the Riemann--Hurwitz formula one easily computes that the quotient $D/H$ has genus 2. Set $K : = \langle g_2, g_3g_4 \rangle \cong D_4$, $K_1 := \langle g_2, g_4 \rangle \cong \mathbb Z/2 \times \mathbb Z/4 $, $K_2 := \langle g_2, g_3 \rangle \cong D_4$. The genus 2 curve $C$ admits three distinct double covers: \begin{gather*} f: C = D/H \to D/ K \cong {\mathbb P}^1,\\ f_1: C = D/H \to D/ K_1=:E_1, \ f_2: C = D/H \to D/ K_2 =: E_2, \end{gather*} where $E_1$ and $E_2$ are elliptic curves. The double covers $\pi_1: E_1 = D/ K_1 \to D/ \langle g_2, g_3, g_4\rangle \cong {\mathbb P}^1$, $\pi_2: E_2 = D/ K_2 \to D/ \langle g_2, g_3, g_4\rangle \cong {\mathbb P}^1$ allow us to express the elliptic curves $E_1$ and $E_2$ in Legendre form: $$E_1: y^2 = x(x-\mu)(x^2 -1), \ \ E_2: y^2 = x(x^2-1),$$ where $\mu^2 = \lambda$. Therefore the elliptic curve $E_2$ does not move and $J(C)$ is isogenous to $E_1 \times E_2$. So the Prym varieties $P(C,E_1)$ are isogenous to the fixed elliptic curve $E_2$. 
Thus the 1-dimensional family of double covers $ \pi_1: C \to E_1$ is contained in a fibre of the Prym map ${\mathcal P}_{1,2}$, hence it gives an irreducible component of a fibre of ${\mathcal P}_{1,2}$.
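For completeness, the genus $g(D_{\lambda})=11$ stated above follows from the Riemann--Hurwitz formula applied to $\psi$, which has degree $|G|=32$ and is branched over the $4$ points of $B$ with local monodromies of orders $(2,2,4,8)$:
\[ 2g(D)-2 = 32(2\cdot 0-2)+\sum_{i=1}^{4}\frac{32}{m_i}(m_i-1) = -64+16+16+24+28 = 20, \]
whence $g(D)=11$.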
\section{Introduction} The fundamental notion of geometric phase \cite{berry1984quantal,berry1988geometric,shapere1989geometric} has been highly influential in physics and related sciences. It lies at the heart of many quantum phenomena pertaining to basic science and, at the same time, underlies important technological applications in quantum sensing and quantum computation. Various geometric phases were originally introduced as a result of physical systems undergoing adiabatic cyclic evolution. However, this notion was extended with the removal of the adiabatic and even the cyclic conditions. In this work, we intend to review fundamental aspects underlying these phases and present some of their practical applications, such as in gyroscopes and Laser Interferometer Gravitational-Wave Observatory (LIGO) detectors. Unlike other review articles on the subject (see, e.g., Refs. \cite{olariu1985quantum, zwanziger1990berry,anandan1992geometric, cohen2019geometric, karnieli2022geometric}), we focus on the relation between geometric phases and accelerating as well as gravitational systems. In particular, we place special emphasis on quantum applications of the Sagnac effect in such systems. At the same time, much more attention is given to geometric phases here than in review articles specialized on Sagnac interferometers, such as Refs. \cite{post1967sagnac, anderson1994sagnac, schreiber2013invited}. The paper is organized as follows. Section \ref{sec:geometry} discusses the geometry underlying quantum systems. From this structure, geometric phases are introduced in Section \ref{sec:geom-phases}. This section also compares quantum geometric phases with classical ones. The Aharonov-Bohm (AB) effect, which is associated with a special --- i.e., topological --- type of geometric phase, is presented in Section \ref{sec:ab-effect}. Following that, Section \ref{sec:sagnac-theory} discusses geometric phases in non-inertial frames and introduces the Sagnac effect. 
Then, Section \ref{sec:sagnac-application} presents various applications of the Sagnac effect for quantum sensing and metrology. Section \ref{sec:ab-gravitational} presents geometric and gravitational AB-like effects and discusses their relevance in the present context. Finally, Section \ref{sec:discussion} concludes the paper with some future outlook. \section{Geometry of quantum states} \label{sec:geometry} In this section, we present the geometry underlying the mathematical structures of quantum states. Although the formal name of each structure is mentioned, we restrict our presentation to the intuition behind them. For a more technical exposition, we refer the reader to, e.g., Refs. \cite{marsden1990reduction, chruscinski2004geometric}. Consider a pure quantum system represented by a state in $\mathcal{H}_{n+1}\setminus\mathcal{O}$, where $\mathcal{H}_{n+1}$ is a complex Hilbert space of dimension $n+1$, $n\in\mathbb{N}$, and $\mathcal{O}$ denotes the null vector, which does not correspond to a physical state. Because of the probabilistic interpretation, it is conventional to work with normalized vectors. Also, states that differ only by a global phase are indistinguishable. Then, two non-null vectors $|\psi\rangle$ and $|\varphi\rangle$ in $\mathcal{H}_{n+1}$ represent the same physical state if there exists $\lambda\in\mathbb{C}$ such that \begin{equation} |\psi\rangle = \lambda |\varphi\rangle. \label{eq:relation-cp} \end{equation} Then, while $n+1$ complex parameters are needed to characterize a vector in $\mathcal{H}_{n+1}$, $n$ suffice to characterize a pure quantum state. Eq. \eqref{eq:relation-cp} effectively defines an equivalence relation in $\mathcal{H}_{n+1}$, subdividing the space into \textit{equivalence classes}. In fact, denoting this relation by $\sim$, we can write \begin{equation} |\psi\rangle \sim |\varphi\rangle \end{equation} whenever there exists $\lambda\in\mathbb{C}$ such that Eq. \eqref{eq:relation-cp} is satisfied. Here, again, $|\psi\rangle$ and $|\varphi\rangle$ are non-null vectors in $\mathcal{H}_{n+1}$. 
The resultant space after the operation $\sim$ is applied to the entire $\mathcal{H}_{n+1}\setminus\mathcal{O}$ is the \textit{projective space}, also known as \textit{ray space}, \begin{equation} \mathbb{CP}(n)\equiv \frac{\mathcal{H}_{n+1}\setminus\mathcal{O}}{\sim}, \end{equation} which is an $n$-dimensional complex space. \begin{figure*} \centering \includegraphics[width=\textwidth]{geometry.pdf} \caption{Geometric representation of pure quantum systems whose state is an element of a complex Hilbert space $\mathcal{H}_{n+1}$, which is isomorphic to a real vector space of dimension $2n+2$. \textbf{(a)} An arbitrary direction of the Hilbert space, like the red line, represents a single physical state. Then, restricting to normalized vectors with an equivalence relation $\sim_S$, the resulting space is isomorphic to a real hypersphere $S_{2n+1}$. Finally, a projection $\pi$ maps each set of points that differ by a global phase to a single element of a complex projective space $\mathbb{CP}(n)$. \textbf{(b)} The global phases, thus, can be represented by tangent fibers to each point of $\mathbb{CP}(n)$. \textbf{(c)} Using the fiber bundle structure, a notion of distance can be naturally introduced in $\mathbb{CP}(n)$ since the projector onto the direction orthogonal to a certain state $|\psi\rangle$, represented by $\Pi_{|\psi\rangle}^\perp$, is well-defined. \textbf{(d)} Moreover, a system that completes a loop in $\mathbb{CP}(n)$ does not always return to its initial point in the bundle structure. This lack of holonomy constitutes the geometric origin of geometric phases.} \label{fig:geometry} \end{figure*} It is typically convenient to treat $\sim$ as a composition of two distinct equivalence relations. 
One of them, denoted by $\sim_S$, is such that two vectors $|\psi\rangle$ and $|\varphi\rangle$ in $\mathcal{H}_{n+1}\setminus\mathcal{O}$ are equivalent, i.e., \begin{equation} |\psi\rangle \sim_S |\varphi\rangle, \end{equation} whenever there exists $\rho>0$ such that \begin{equation} |\psi\rangle = \rho |\varphi\rangle. \end{equation} It can be checked that the resultant space is isomorphic to the unit $(2n+1)$-dimensional real hypersphere $S_{2n+1}$, as represented in Figure \ref{fig:geometry}a. In fact, with the relation $\sim_S$, the $n+1$ complex parameters (or $2n+2$ real ones) necessary to characterize a state in $\mathcal{H}_{n+1}$ are constrained by an equation that establishes the unit norm, which results in $2n+1$ free real parameters necessary for the characterization of a state. Because of this isomorphism, the space associated with $\sim_S$ will be referred to as the hypersphere $S_{2n+1}$. To complete the relation $\sim$, a map $\pi$, also associated with an equivalence relation, is applied to the elements of $S_{2n+1}$, mapping them into a representative element of their class in $\mathbb{CP}(n)$, as illustrated in Figure \ref{fig:geometry}a. Observe that if two points associated with the normalized vectors $|\psi\rangle$ and $|\varphi\rangle$ are such that \begin{equation} \pi(|\psi\rangle) = \pi(|\varphi\rangle), \end{equation} then there exists $\theta\in\mathbb{R}$ such that \begin{equation} |\psi\rangle = e^{i\theta} |\varphi\rangle, \end{equation} i.e., they differ by a global phase. Hence, these phases can be seen as fibers with circular topology at each point of $\mathbb{CP}(n)$, as represented in Figure \ref{fig:geometry}b. In fact, this constitutes a \textit{fiber bundle} structure, where $\mathbb{CP}(n)$ is the base space, the space of the phases is the fiber, $S_{2n+1}$ is the total space, and $\pi$ is the projection map, also known as bundle projection. 
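The simplest instance of this bundle structure is a qubit, $n=1$. Writing a normalized state as
\[ |\psi\rangle = e^{i\alpha}\left(\cos\frac{\theta}{2}\,|0\rangle + e^{i\varphi}\sin\frac{\theta}{2}\,|1\rangle\right), \]
the pair $(\theta,\varphi)$ parametrizes the physical state, i.e., a point of $\mathbb{CP}(1)\cong S_{2}$ (the Bloch sphere), while the global phase $\alpha$ parametrizes the fiber $S_1$. The resulting bundle $S_1\hookrightarrow S_3 \xrightarrow{\pi} S_2$ is the well-known Hopf fibration.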
It can be shown that the space $\mathbb{CP}(n)$ inherits the symplectic structure of $\mathcal{H}_{n+1}$, i.e., a well-defined notion of hypervolume \cite{chruscinski2004geometric}. Thus, the projection of the temporal evolution of the unitary dynamics on $\mathbb{CP}(n)$ is a Hamiltonian dynamics. In addition to this structure, $\mathbb{CP}(n)$ also acquires the metric induced by the inner product in $\mathcal{H}_{n+1}$. It is, then, possible to compute the distance between two points in $\mathbb{CP}(n)$. In fact, let $|\psi\rangle$ and $|\psi+d\psi\rangle$ be infinitesimally close in $S_{2n+1}$. Then, as illustrated in Figure \ref{fig:geometry}c, a natural definition for the distance in $\mathbb{CP}(n)$ between them is \begin{equation} ds^2(\mathbb{CP}(n)) \equiv K \langle d\psi| \Pi_{|\psi\rangle}^\perp |d\psi\rangle, \label{eq:metric-def} \end{equation} where $K$ is a positive real constant, $\Pi_{|\psi\rangle}^\perp \equiv I-|\psi\rangle\langle\psi|$ is the projector in the orthogonal direction to $|\psi\rangle$, and $|d\psi\rangle = |\psi+d\psi\rangle - |\psi\rangle$. Interestingly, if $|\psi+d\psi\rangle$ is such that $|\psi+d\psi\rangle=|\psi(t+dt)\rangle$, i.e., is the result of an evolution in time generated by a Hamiltonian $H$, we obtain \begin{equation} |d\psi(t)\rangle = |\psi(t+dt)\rangle - |\psi(t)\rangle = -\frac{i}{\hbar} H |\psi(t)\rangle dt \end{equation} from the Schr\"odinger equation. 
Then, the distance between these two states in $\mathbb{CP}(n)$ is \begin{equation} \begin{aligned} ds^2(\mathbb{CP}(n)) &= K \left(\langle d\psi|d\psi\rangle - \langle d\psi|\psi\rangle\langle\psi|d\psi\rangle\right) \\ &= \frac{K}{\hbar^2} \left(\langle \psi|H^2|\psi\rangle - \langle\psi|H|\psi\rangle^2\right) dt^2 \\ &= \frac{K}{\hbar^2} \left(\Delta H\right)^2 dt^2, \end{aligned} \end{equation} i.e., the ``velocity'' of the system in $\mathbb{CP}(n)$ is proportional to the uncertainty of its energy, as shown by Anandan and Aharonov \cite{anandan1990geometry}. For instance, for a qubit with $H=(\hbar\omega/2)\sigma_z$ in an equal superposition of $|0\rangle$ and $|1\rangle$, $\Delta H=\hbar\omega/2$, and the state moves along an equator of the Bloch sphere with constant speed $ds/dt=\sqrt{K}\,\omega/2$. Interestingly, it follows from the Schr\"odinger equation that the distance between two states $|\psi(t)\rangle$ and $|\varphi(t)\rangle$ evolving under the same Hamiltonian does not change in time since \begin{equation} \frac{d}{dt} \left(\langle\psi(t)|\varphi(t)\rangle\right) = 0 \label{eq:const-prob-trans} \end{equation} and Eq. \eqref{eq:metric-def} implies that, in $\mathbb{CP}(n)$, the distance $s(|\psi\rangle,|\varphi\rangle)$ between two states $|\psi\rangle$ and $|\varphi\rangle$ is such that \begin{equation} s^2(|\psi\rangle,|\varphi\rangle) = K \left(1-|\langle\psi|\varphi\rangle|^2\right). \end{equation} Physically, this means that the probability of transition between these two states is also preserved. In particular, if $|\psi(t)\rangle = |\varphi(t)\rangle$, it follows that the norm of vectors is kept constant in time. In other words, the evolution of a state creates a trajectory in $S_{2n+1}$, which can, then, be projected into $\mathbb{CP}(n)$. Another important aspect is that, although the global phase of a state vector has no physical meaning, the phase difference between two states has. 
It is even possible to define a \textit{connection} in $S_{2n+1}$ by establishing that, given two states $|\psi\rangle$ and $|\psi+d\psi\rangle$ projected into points infinitesimally close to each other in $\mathbb{CP}(n)$, they are parallel to each other in $S_{2n+1}$ if the phase difference between them is null, i.e., \begin{equation} \arg\left(\langle\psi|\psi+d\psi\rangle\right) = 0. \end{equation} With the connection, the concept of \textit{parallel transport} can be introduced. A given curve in $\mathbb{CP}(n)$ represents an infinite number of curves in $S_{2n+1}$. A subset of special interest is the one for which the curves are defined on geodesics in $S_{2n+1}$, i.e., the curves built through parallel transport in $S_{2n+1}$. These trajectories are characterized by \begin{equation} \langle\psi(s)| \frac{d}{ds} |\psi(s)\rangle = 0 \end{equation} along the curve defined by $|\psi(s)\rangle$ in terms of a real parameter $s$. Each of these curves is called a \textit{geodesic lifting} of the curve in $\mathbb{CP}(n)$. A geodesic lifting does not have intrinsic physical meaning --- only the family of geodesic liftings has. This is illustrated in Figure \ref{fig:geometry}d, where all geodesic liftings (green curves) have an equivalent physical meaning. Besides its significance to the study of geometric phases, as will be discussed in the next section, the geometric structure presented here was used for a pedagogical approach to the quantum adiabatic theorem \cite{lobo2012geometry} and signaling in Weinberg's non-linear quantum mechanics \cite{paiva2014alguns}. Also, this structure provides a geometric interpretation to von Neumann's measurement interaction model, weak values, and quantum erasers \cite{tamate2009geometrical, lobo2014weak}, and can be used for the introduction of a type of time-energy uncertainty relation \cite{anandan1990geometry}. 
Finally, this is also the same structure that appears in Yang-Mills theories, which underlie the standard model of particle physics. \section{Geometric phases} \label{sec:geom-phases} As a consequence of the geometry discussed in the previous section, when a system completes a closed cycle in $\mathbb{CP}(n)$, its trajectory is not necessarily closed in $S_{2n+1}$. A measure of this lack of \textit{holonomy} can be locally defined at each point of $\mathbb{CP}(n)$ by considering an infinitesimal closed curve around each point. This quantity corresponds to the \textit{curvature} of the connection and is the mathematical structure in which we are interested. In fact, this structure stands behind the geometric (or non-integrable) phase, as represented in Figure \ref{fig:geometry}d. To see that, assume a system, initially in the state $|\psi(0)\rangle$, completes a cyclic evolution of period $\tau$ in $\mathbb{CP}(n)$. Then, we can write its final state as \begin{equation} |\psi(\tau)\rangle = e^{i\phi} |\psi(0)\rangle. \end{equation} Part of the phase it acquires is the dynamical phase \begin{equation} \phi_\text{dyn}(\tau) = -\frac{1}{\hbar} \int_0^\tau \langle\psi(t)|H(t)|\psi(t)\rangle \ dt, \end{equation} where $H(t)$ is the Hamiltonian that governs the dynamics of the system. However, Berry noted that, in the adiabatic regime, the system in fact accumulates an extra phase, which became known as the \textit{Berry phase} \cite{berry1984quantal}. This extra phase is geometric and corresponds to the lack of holonomy we just discussed, as shown by Simon \cite{simon1983holonomy}. Because of this, the natural connection introduced above is known as the Berry-Simon connection. Later on, the adiabatic condition was removed by Aharonov and Anandan \cite{aharonov1987phase}. 
In fact, defining \begin{equation} |\tilde{\psi}(t)\rangle \equiv e^{-i \left[f(t) + \phi_\text{dyn}(t)\right]} |\psi(t)\rangle, \end{equation} where $f$ is a function such that $\phi-\phi_\text{dyn}(\tau)=f(\tau)-f(0)$, it follows that $|\tilde{\psi}(\tau)\rangle=|\tilde{\psi}(0)\rangle$ and \begin{equation} i\hbar \frac{d}{dt} |\tilde{\psi}(t)\rangle = \left[ H(t) + \hbar \frac{d}{dt} f(t) - \langle\psi(t)|H(t)|\psi(t)\rangle \right] |\tilde{\psi}(t)\rangle, \end{equation} which, in turn, implies that \begin{equation} \frac{d}{dt}f(t) = \langle \tilde{\psi}(t) | \left(i \frac{d}{dt} \right) | \tilde{\psi}(t) \rangle, \end{equation} regardless of $H$. Then, after ending the cycle, the state $|\psi\rangle$ accumulates an extra phase $\phi_\text{geom} \equiv f(\tau) - f(0)$ that can be expressed as \begin{equation} \phi_\text{geom} = \int_0^\tau \langle \tilde{\psi}(t) | \left(i \frac{d}{dt} \right) | \tilde{\psi}(t) \rangle \ dt = i \oint_C \langle \tilde{\psi}(R) | d\tilde{\psi}(R) \rangle, \label{def-geom-phase} \end{equation} which can be shown to depend only on geometric properties associated with the cycle and thus, in particular, is not a function of $\tau$. The last integral in Eq. \eqref{def-geom-phase} is taken over a curve $C$ in the parameter space, which has a parameter $R$ associated with it. These phases are present and play a fundamental role in the properties of many physical systems in nature, such as molecular systems \cite{mead1992geometric} and crystalline dielectrics \cite{resta1994macroscopic}. Geometric phases can be introduced for mixed states \cite{sjoqvist2000geometric, singh2003geometric}. In fact, they are studied in open and non-Hermitian systems \cite{garrison1988complex, berry1990geometric, zwanziger1991measuring, ning1992geometrical, bliokh1999appearance, carollo2003geometric, tong2004kinematic, lombardo2006geometric, dietz2011exceptional}. 
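A standard illustration of Eq. \eqref{def-geom-phase} is a spin-$1/2$ system evolving under $H=(\hbar\omega/2)\sigma_z$ with its Bloch vector at polar angle $\theta$. After one precession period $\tau=2\pi/\omega$, the state completes a cycle in $\mathbb{CP}(1)$ and acquires the total phase $\phi=\pi$ (mod $2\pi$), while
\[ \phi_\text{dyn} = -\frac{1}{\hbar}\langle H\rangle \tau = -\pi\cos\theta, \qquad \phi_\text{geom} = \phi - \phi_\text{dyn} = \pi(1+\cos\theta) \equiv -\frac{\Omega}{2} \pmod{2\pi}, \]
where $\Omega = 2\pi(1-\cos\theta)$ is the solid angle enclosed by the circuit on the Bloch sphere. As expected, $\phi_\text{geom}$ depends only on the geometry of the cycle, not on $\omega$ or $\tau$.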
However, an important remark that should be presented here is that, depending on how it is extended to the dynamics of non-Hermitian Hamiltonians, the phase introduced by Aharonov and Anandan always assumes real values. In this case, the adiabatic limit of $\phi_\text{geom}$ does not always correspond to the Berry phase \cite{wu1996berry}. Moreover, in general scenarios, these phases can be studied in systems with non-cyclic evolution and even when intermediate measurements are taken into account \cite{samuel1988general}. Also, geometric phases can be introduced for classical systems undergoing adiabatic evolution, as shown by Hannay \cite{hannay1985angle}. In fact, for integrable systems, one can represent the Hamiltonian that governs the evolution in terms of action-angle coordinates \cite{arnol1989mathematical}. It can then be concluded that the evolution is defined on a torus in phase space. In cyclic adiabatic evolutions, action coordinates are preserved while angle coordinates may change. Hannay observed that these changes in angle coordinates during a cycle have dynamical and geometrical contributions, in a manner similar to what Berry had first observed for quantum systems. The geometrical contribution is what became known as the Hannay angle. From a more geometrical perspective, the Cartesian product between the parameter space and the phase space defines a trivial fiber bundle over the parameter space. Within this structure, the Hannay angle is, then, obtained from liftings in the total space according to a connection known as the Hannay-Berry connection \cite{golin1989hannay, marsden1990reduction, chruscinski2004geometric}. Then, like for Berry phases, the notion of a Hannay angle for open paths can also be considered \cite{pati1998adiabatic}. It should be noted that the notion of geometric phases can be extended to classical dissipative systems \cite{kepler1991geometric}.
Moreover, perturbative methods can be used for nonadiabatic corrections in some scenarios \cite{andersson2005nonadiabatic}. However, the corrections are not purely geometric since, in general, they depend on the time-parametrization of the trajectories. Furthermore, geometric phases are also studied in nonlinear classical field theories \cite{garrison1988geometrical, anandan1988comment, latmiral2020berry}. Given a Hamiltonian of an integrable system and denoting the action variables by $I=\{I_0,\cdots, I_{n-1}\}$, it is possible to find a direct mathematical relation between the Hannay angle $h(I;C)$ and the Berry phase $\phi(C)$ acquired by a physical system during a cyclic evolution through a path $C$. This follows from the use of the Bohr-Sommerfeld quantization rule, in which each $I_j$ is quantized as $I_j = \hbar (n_j + \mu_j/4)$, where $\mu_j$ is a \textit{Maslov index}, which is a topological quantity, and $n_j$ is an integer. This, combined with further analysis of the geometrical structure permeating both phases, leads to \cite{berry1985classical, chruscinski2004geometric} \begin{equation} \frac{\partial\phi(C)}{\partial n_j} = -h(I;C) + O(\hbar). \end{equation} While this implies that the Hannay angle may vanish in scenarios where the Berry phase differs from zero, the converse does not hold: the Berry phase cannot vanish for every quantum number if the Hannay angle is nonzero. This type of relation makes the ability to discriminate quantum and classical contributions to interference experiments an important issue, which was already considered, e.g., in optomechanical systems \cite{armata2016quantum}. Also, it can be argued that geometric phases observed in various experiments with light-waves --- not matter-waves --- are Hannay angles \cite{agarwal1990berry}. This is the case because, in these setups, the Berry phase becomes proportional to the Hannay angle.
In this regard, the use of squeezing Hamiltonians was suggested as a possible way to make Berry phases detectable in these experimental setups \cite{chaturvedi1987berry, chiao1988lorentz, fuentes2000proposal}. Before concluding this section, we call attention to the fact that, in cases of Hamiltonians with degeneracy, geometric phases (both quantum and classical) become non-Abelian \cite{wilczek1984appearance, anandan1988non}. \section{Aharonov-Bohm effect} \label{sec:ab-effect} An important type of geometric phase is the phase associated with the AB effect. In classical physics, the dynamics of a particle with charge $q$ is only affected by a magnetic field that directly interacts with it, i.e., if the particle travels in a region where the magnetic field is non-zero. However, in quantum mechanics, this is not always the case. In fact, suppose a charge encircles a region in space that contains a magnetic field. Then, it accumulates a quantum phase proportional to the magnetic flux inside the region enclosed by its trajectory, regardless of whether there was any magnetic field along the particle's trajectory. In 1939, this effect might have been hinted at by Franz in a talk at a physical society meeting in Danzig \cite{franz1939}. Later, in 1949, the effect was presented by Ehrenberg and Siday \cite{ehrenberg1949refractive}, although the influence of the magnetic field on the particle's dynamics seemed to be a peculiar feature of the optical configuration they considered. The authors themselves wrote: ``One might therefore expect wave-optical phenomena to arise which are due to the presence of a magnetic field but not due to the magnetic field itself, i.e., which arise whilst the rays are in field-free regions of space.'' It was only in 1959 that the fundamental nature of the effect was revealed in a seminal article by Aharonov and Bohm \cite{Aharonov1959}, which is why it is known as the AB effect.
More details on early historical aspects of the discovery of the effect can be found in Ref. \cite{hiley2013early}. Since Aharonov and Bohm's work, the AB effect has been vastly investigated in theoretical and experimental works \cite{chambers1960shift, mollenstedt1962kontinuierliche, liebowitz1965significance, aharonov1969modular, boyer1973classical, berry1980wavefront, tonomura1982observation, tonomura1986evidence, berry1986aharonov, berry1986statistics, osakabe1986experimental, berry1989quantum, ford1994aharonov, aharonov1994, aharonov2004effect, aharonov2005quantum, tonomura2006aharonov, recher2007aharonov, russo2008observation, peng2010aharonov, berry2010semifluxon, fang2012photonic, bardarson2013quantum, noguchi2014aharonov, duca2015aharonov, kang2015locality, cohen2015measure, aharonov2016nonlocality, mukherjee2018experimental, paiva2019topological, paiva2020magnetic}. This effect is usually presented by considering a charge encircling a solenoid whose axis lies, say, on the $z$ axis. For simplicity, the solenoid is taken to be infinitely thin and is sometimes referred to as a \textit{flux line}. Also, for simplicity, the particle is assumed to travel in the $xy$ plane, having the superposition state \begin{equation} |\Psi\rangle = \frac{1}{\sqrt{2}} \left(|\psi_L\rangle + |\psi_R\rangle\right), \end{equation} where $|\psi_L\rangle$ is a wavepacket that passes to the left and $|\psi_R\rangle$ is a wavepacket that passes to the right of the flux line. This superposition can be achieved, for instance, with the use of a double-slit or a beamsplitter. The Hamiltonian of the particle, in this case, can be written as \begin{equation} H = \frac{1}{2m} \left[\vec{P}- q\vec{A}(\vec{Q})\right]^2, \label{eq-ham-vp} \end{equation} where $\vec{P}=P_x\hat{x}+P_y\hat{y}$, $\vec{Q}=X\hat{x}+Y\hat{y}$, and $\vec{A}$ is the vector potential associated with the solenoid, which can take different forms according to the choice of gauge. 
To solve for the dynamics generated by this Hamiltonian, let the states $|\psi_L^0\rangle$ and $|\psi_R^0\rangle$ be the solutions in the case where there is no magnetic field, i.e., for $H_0 = P^2/2m$. Then, based on a procedure introduced by Dirac \cite{dirac1931quantised}, one finds that, after the left and right wavepackets travel along the trajectories $\gamma_L$ and $\gamma_R$, respectively, \begin{equation} |\psi_L\rangle = e^{iq\int_{\gamma_L}\vec{A}\cdot d\vec{\ell}/\hbar}|\psi_L^0\rangle \label{eq-phase-l} \end{equation} and \begin{equation} |\psi_R\rangle = e^{iq\int_{\gamma_R}\vec{A}\cdot d\vec{\ell}/\hbar}|\psi_R^0\rangle. \label{eq-phase-r} \end{equation} Now, recall that quantum states are equivalent up to a global phase and observe that \begin{equation} \int_{\gamma_R}\vec{A}\cdot d\vec{\ell} - \int_{\gamma_L}\vec{A}\cdot d\vec{\ell} = \oint \vec{A}\cdot d\vec{\ell}, \end{equation} which is independent of deformations of the closed path $\gamma_R-\gamma_L$ (as long as the path encircles the flux line once) and has physical significance since it corresponds to the magnetic flux $\Phi_B$ inside the region enclosed by the charge. Then, the state of the system after it encircles the flux line is \begin{equation} |\Psi\rangle = \frac{1}{\sqrt{2}}\left(|\psi_L^0\rangle + e^{i\Delta\phi_{AB}}|\psi_R^0\rangle\right), \label{shifted-state} \end{equation} where \begin{equation} \Delta\phi_{AB} \equiv \frac{q}{\hbar} \oint \vec{A}\cdot d\vec{\ell} \label{AB effect} \end{equation} is the quantum phase accumulated by the charge, usually called the AB phase. To verify that the AB phase is indeed a geometric phase, observe that, for a wavepacket $\Psi$ with center at $\vec{\ell}$ encircling the solenoid, the inner product $\langle\Psi(\vec{r}-\vec{\ell})|\vec{\nabla}_{\vec{\ell}}\Psi(\vec{r}-\vec{\ell})\rangle$ results in \begin{equation} \int \Psi^*(\vec{r}-\vec{\ell}) \left[-i\frac{q}{\hbar} \vec{A}(\vec{\ell})+\vec{\nabla}_{\vec{\ell}}\right] \Psi(\vec{r}-\vec{\ell}) \ d^3r = -i \frac{q}{\hbar} \vec{A}(\vec{\ell}).
\end{equation} Then, replacing it in Eq. \eqref{def-geom-phase}, \begin{equation} \phi_\text{geom} = i \oint_C \langle \Psi| \vec{\nabla}_{\vec{\ell}} \Psi\rangle\cdot d\vec{\ell} = \Delta\phi_{AB}. \label{eq-ref-geophase} \end{equation} Differently from various geometric phases, however, the AB phase does not depend on the path over which $\vec{A}$ is integrated, as already mentioned, which is one of the reasons it can be considered a \textit{topological phase}. Also, because there always exists a gauge in which the vector potential vanishes in an arbitrary region that does not completely enclose the solenoid, \textit{a priori}, the AB effect cannot be seen as the result of the local interaction between the charge and the vector potential. As a result, it is typically considered to be of nonlocal nature. However, there is no final consensus on this issue and, thus, the origins of this phase are investigated to this day, with works that include the idea of modular variables \cite{aharonov1969modular} and models that consider the source of the magnetic field as part of the dynamics or quantize the magnetic field itself \cite{peshkin1961quantum, aharonov1991there, santos1999microscopic, choi2004exact, Vaidman2012, aharonov2015comment, vaidman2015reply, aharonov2016nonlocality, pearle2017quantized, li2018transition, marletto2020aharonov, saldanha2020shielded, horvat2020probing, paiva2021aharonov}. Furthermore, the interplay between Berry and AB phases in various scenarios can also be of interest. For instance, the phases acquired by a quantum charge with a large spreading while it encircles one or multiple solenoids were studied in Ref. \cite{aharonov1994aharonov}. To conclude this section, we note that an electric AB effect was also proposed in the seminal article by Aharonov and Bohm \cite{Aharonov1959}.
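The path independence of the loop integral in Eq. \eqref{AB effect} can be checked numerically. For the vector potential of an infinitely thin flux line, $\vec{A}=\Phi\,\hat{\phi}/(2\pi r)$, any loop encircling the line once returns the enclosed flux $\Phi$. A small sketch (the flux value and the off-centre loop shape are arbitrary choices of ours):

```python
import numpy as np

PHI = 2.5  # flux through the line, arbitrary units (our choice)

def A(x, y):
    # Vector potential of an infinitely thin flux line along the z axis,
    # A = (PHI / 2 pi r) phi_hat, written in Cartesian components.
    r2 = x * x + y * y
    return np.array([-PHI * y, PHI * x]) / (2.0 * np.pi * r2)

def loop_integral(path):
    # Midpoint-rule approximation of the closed line integral of A.
    total = 0.0
    n = len(path)
    for k in range(n):
        p, q = path[k], path[(k + 1) % n]
        total += A(*(0.5 * (p + q))) @ (q - p)
    return total

t = np.linspace(0.0, 2.0 * np.pi, 20000, endpoint=False)
# An off-centre elliptical loop that still encircles the flux line once:
path = np.stack([1.5 * np.cos(t) + 0.3, 0.8 * np.sin(t) - 0.2], axis=1)
print(loop_integral(path))  # ~ 2.5, i.e., the enclosed flux PHI
```

Deforming the loop (without crossing the flux line) leaves the result unchanged, which is the topological character of the AB phase discussed above.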
Moreover, an effect similar to the (magnetic) AB effect for neutral particles with a magnetic moment was later proposed by Aharonov and Casher \cite{aharonov1984topological}. This effect, often referred to as the Aharonov-Casher effect, was further studied and experimentally verified in various scenarios \cite{kaiser1988neutron, goldhaber1989comment, cimmino1989observation, dalibard2011colloquium}. \section{Geometric phases and non-inertial effects} \label{sec:sagnac-theory} \begin{figure} \centering \includegraphics[width=8.6cm]{circuit1.pdf} \caption{\textbf{Schematic representation of a standard Sagnac interferometer.} Two beams or wavepackets travel an interferometer in opposite directions. If the interferometer is rotating with angular speed $\omega$, the interference pattern is shifted by an amount that depends on $\omega$ and the area $a$ enclosed by the interferometer.} \label{fig:sagnac-generic} \end{figure} In 1913, Sagnac showed that angular rotation could be detected using interferometers in a classical setup \cite{sagnac1913preuve}. The configuration introduced by him is now known as the Sagnac interferometer. His result had been anticipated in two publications by Lodge \cite{lodge1893xv, lodge1897vi}, a prediction that, in turn, can be traced back to unpublished correspondence between Lodge and Larmor \cite{anderson1994sagnac}. This interferometer typically consists of two light beams enclosing a certain region with area $a$ while the apparatus rotates with angular speed $\omega$, as represented in Figure \ref{fig:sagnac-generic}. Because of the rotation, the interference of the wavepackets is shifted by a phase \begin{equation} \Delta\phi_S = \frac{8\pi a\omega}{c\lambda}, \label{original-phase-s} \end{equation} where $\lambda$ is the initial wavelength of the light. Based on this idea, in 1925, Michelson, Gale, and Pearson conducted an experiment to measure the effects of Earth's rotation on the speed of light \cite{michelson1925effect}.
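To get a feeling for the magnitude of Eq. \eqref{original-phase-s}, consider a set of hypothetical parameters (our own, purely illustrative): a single loop of area $1\,\mathrm{m^2}$ sensing Earth's rotation with near-infrared light:

```python
import math

# Illustrative parameters (our own choices, not taken from the text):
a     = 1.0        # enclosed area, m^2
omega = 7.292e-5   # Earth's rotation rate, rad/s
lam   = 1.55e-6    # wavelength, m
c     = 2.998e8    # speed of light, m/s

# Sagnac phase shift of Eq. (original-phase-s):
delta_phi = 8.0 * math.pi * a * omega / (c * lam)
print(delta_phi)   # ~ 3.9e-6 rad
```

The smallness of this shift is one reason fiber gyroscopes wind many turns of fiber, multiplying the effective enclosed area.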
One could, then, wonder whether such an effect has a quantum analog. This is indeed the case, as was experimentally verified, for instance, by Colella, Overhauser, and Werner \cite{colella1975observation}, based on a previous theoretical proposal \cite{overhauser1974experimental}, and by Werner, Staudenmann, and Colella \cite{werner1979effect}. Nevertheless, it is possible to make distinctions between the ``classical'' and ``quantum'' Sagnac effects \cite{anandan1981sagnac, frauendiener2018notes, frauendiener2020gravitational}. In particular, in the nonrelativistic regime, there is no classical Sagnac effect, while the quantum version of it exists. However, in the relativistic limit, they become equivalent. Also, in the relativistic case, rotations of the interferometer or any other non-inertial influences are not required for the existence of a phase difference between the beams or wavepackets. In fact, because of the Doppler shift, the Sagnac effect can be obtained from a loop in space and relative motion --- even if both of the systems of reference involved are inertial \cite{tartaglia2015sagnac}. The Sagnac effect is, of course, not exclusive to light-waves. In fact, it was observed in neutrons \cite{colella1975observation}, electrons \cite{hasselbach1993sagnac}, and atoms \cite{gustavson1997precision}. One may even consider a Sagnac effect for the superfluid Josephson interferometer \cite{anandan1981gravitational, *anandan1984gravitational}. This variety of systems is particularly relevant because, for precision purposes, we may benefit from the use of matter-waves since they have shorter wavelengths, which increases the Sagnac phase-shift and the resulting sensitivity of the interferometer \cite{delgado2002quantum}.
An analogous way to see this is by observing that the ratio between the rest mass $m$ of a particle and the effective mass $\hbar\omega/c^2$ of a light-wave with frequency $\omega$ amounts to about eleven orders of magnitude if the particle is an atom and the light-wave is an optical photon in the visible regime \cite{gustavson1997precision, barrett2014sagnac}. Then, if the Sagnac effect is seen as the result of the rotation of the interfering systems, matter-wave systems highly outperform light-wave systems in interferometers with equal areas. At the same time, optical systems have the advantage of higher particle fluxes and, typically, larger enclosed areas. Still, generally speaking, matter-wave systems are expected to outperform optical systems by several orders of magnitude \cite{gustavson1997precision}. To see how the Sagnac effect is manifest for massive particles, we start from a classical analysis of a particle in a rotating system. Such a particle is subject to a Coriolis force given by \begin{equation} \vec{F} = 2m \vec{u} \times \vec{\omega}, \end{equation} where $\vec{u}$ indicates the velocity of the particle (in the rotating frame). Considering the vector potential $\vec{A}_r=\frac{1}{2}\,\vec{\omega}\times\vec{r}$, we can write $\vec{\omega}=\vec{\nabla}\times \vec{A}_r$ and \begin{equation} \vec{F} = 2m \vec{u} \times \left(\vec{\nabla}\times \vec{A}_r\right). \end{equation} From that, a Lagrangian can be defined, followed by its associated Hamiltonian \begin{equation} H=\frac{1}{2m}\left(\vec{P}-2m\vec{A}_r\right)^2 = \frac{1}{2m}\left(\vec{P} - \frac{4\pi\hbar}{c\lambda} \vec{A}_r\right)^2, \label{hamil-ang-vel} \end{equation} where the last equality uses the effective mass $m=2\pi\hbar/(c\lambda)$ of a light-wave with wavelength $\lambda$. This Hamiltonian, in turn, can be quantized. In this case, it implies that, according to the direction of motion, a wavepacket acquires a phase of $\pm 4\pi a\omega/c\lambda$. Then, a system in a superposition of wavepackets moving in opposite directions ends up with a relative phase given by Eq. \eqref{original-phase-s}.
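The mass ratio mentioned above can be checked directly. A sketch with our own illustrative choices (a rubidium-87 atom and a 780-nm photon, neither of which is specified in the text):

```python
# Ratio between the rest mass of an atom and the effective mass
# hbar*omega/c^2 = h/(lambda*c) of an optical photon.
h = 6.626e-34   # Planck constant, J s
c = 2.998e8     # speed of light, m/s
u = 1.661e-27   # atomic mass unit, kg

m_atom   = 87.0 * u          # mass of a rubidium-87 atom (illustrative choice)
m_photon = h / (780e-9 * c)  # effective mass of a 780-nm photon

ratio = m_atom / m_photon
print(ratio)   # ~ 5e10, i.e., roughly eleven orders of magnitude
```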
A more direct way to see this type of influence of non-inertial frames on the dynamics of quantum systems follows the reasoning presented in Ref. \cite{aharonov2014measure}. There, the authors showed that the dynamics of a free particle, i.e., a particle whose Hamiltonian is $H=(P_x^2+P_y^2)/2m$, in a rotating frame is described by a Hamiltonian with a term that is similar to a vector potential. In fact, if $\omega$ is the angular speed of the frame about the $z$ axis, the Hamiltonian $H'$ of the particle from the perspective of the rotating frame is a transformation of $H$ by the unitary $\exp(-i \omega t L_z/\hbar)$, which, to first order in $\omega$, becomes \begin{equation} H'\approx\frac{1}{2m} \left[\left(P_x+m\omega Y\right)^2+\left(P_y-m\omega X\right)^2\right] \label{hamil-ang-vel-alt} \end{equation} if the particle has an orbit with a well-defined radius. From Eqs. \eqref{hamil-ang-vel} and \eqref{hamil-ang-vel-alt}, we see that the rotation takes the usual place of the vector potential in the Hamiltonian. From this discussion, we should expect a relation between the interference in the Sagnac interferometer and the AB effect. This is, in fact, the case, and it was presented in Ref. \cite{wang2004generalized}. Before discussing this result, we point out some differences between the two effects: e.g., while the AB effect is given in terms of $\hbar$ and vanishes in the classical limit, this is not the case for the Sagnac effect. In addition, the AB effect is topological while the Sagnac effect is geometric. By considering a setup with a rotating single-mode optical fiber loop, the authors of Ref. \cite{wang2004generalized} first observed that the relative phase accumulated by each wavepacket is given by \begin{equation} \Delta\phi_S = \frac{4\pi \vec{v}\cdot\Delta \vec{\ell}}{c\lambda}, \end{equation} where $\vec{v}$ is the velocity of the fiber and $\Delta\vec{\ell}$ is the length vector of a segment of the optical fiber.
However, the important novelty of their work is that they also considered a more general scenario where only a portion of the circuit was moving. With that, they were able to gather evidence for the (expected) validity of the general expression for the phase $\Delta\phi_S$, which is \begin{equation} \Delta\phi_S = \frac{4\pi}{c\lambda} \oint_\ell \vec{v}\cdot d\vec{\ell}. \end{equation} Comparing with the AB phase in Eq. \eqref{AB effect}, it can be seen that the parallel works with the mapping \begin{equation} \vec{A} \rightarrow \frac{4\pi \hbar}{qc\lambda} \vec{v}. \end{equation} However, it should be noted that the vector potential and the Berry connection are gauge-dependent quantities, while the velocity is not typically tied to a specific gauge. Still, this difference between these two quantities may not be as well defined as one may think if we consider that a choice of gauge is ultimately associated with a choice of frame \cite{aharonov1991there}. This aspect deserves to be further investigated since it is associated with a different discussion in the literature. On the one hand, in the nonrelativistic limit, the Sagnac effect can be seen as a manifestation of the lack of holonomy of the underlying geometry of the encircling particle, as we just showed --- and as previously argued by, e.g., Chiao \cite{chiao1989berry} and Hendriks and Nienhuis \cite{hendricks1990sagnac}. As such, it is a Berry phase. On the other hand, this analogy between the Sagnac effect and the AB effect or, more generally, Berry phases is not unanimous. For instance, while reviewing this effect, Malykin noted that Berry phases appear in addition to the Sagnac effect in different scenarios \cite{malykin2000sagnac}. Moreover, the AB and Berry phases are related to quantum systems, while the Sagnac effect can be attributed to classical systems. Because of this, Malykin concludes that, although valid, the analogy with AB and Berry phases is not of a fundamental nature.
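Consistency between the general expression above and Eq. \eqref{original-phase-s} can be checked for a rigidly rotating circular fiber, where $\vec{v}=\vec{\omega}\times\vec{r}$ and $\oint\vec{v}\cdot d\vec{\ell}=2\omega a$. A numeric sketch (angular speed and radius are arbitrary choices of ours):

```python
import numpy as np

OMEGA, R = 0.5, 2.0  # angular speed and loop radius (arbitrary choices)

def v(p):
    # Velocity field of a rigidly rotating fiber, v = omega x r (z-axis rotation).
    return OMEGA * np.array([-p[1], p[0]])

t = np.linspace(0.0, 2.0 * np.pi, 10000, endpoint=False)
path = R * np.stack([np.cos(t), np.sin(t)], axis=1)

# Midpoint-rule approximation of the closed line integral of v.
integral = 0.0
n = len(path)
for k in range(n):
    p, q = path[k], path[(k + 1) % n]
    integral += v(0.5 * (p + q)) @ (q - p)

area = np.pi * R * R
print(integral)            # ~ 4 pi
print(2.0 * OMEGA * area)  # = 2 omega a, matching the line integral
```

Multiplying $2\omega a$ by $4\pi/(c\lambda)$ recovers $\Delta\phi_S = 8\pi a\omega/(c\lambda)$, the standard Sagnac phase.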
In any case, the interplay between different geometric phases in different implementations of the Sagnac effect seems worthy of further exploration. In the next section, we present various applications of the Sagnac effect in quantum metrology and sensing, with special emphasis on the detection of gravitational effects. \section{Applications of the Sagnac interferometer in quantum sensing} \label{sec:sagnac-application} In quantum sensing and metrology, one attempts to employ unique quantum properties such as single-particle interference, squeezing, and entanglement to improve classical techniques in terms of sensitivity, precision, or resolution. The probe particles can be massive or massless, but the goal in quantum metrology is typically to go beyond the shot-noise limit (SNL), towards the Heisenberg limit. For achieving this goal, interferometers traversed by quantum states of light or matter are known to be invaluable tools \cite{giovannetti2004quantum, mitchell2004super, nagata2007beating, resch2007time, cooper2010entanglement, eberle2010quantum, wolf2019motional}. To utilize uniquely quantum states, various methods have been suggested and implemented. For instance, in the case of optical interferometers, the preferred method for this purpose is the employment of quantum squeezing \cite{caves1981quantum, xiao1987precision, schnabel2017squeezed} due to the simple generation of squeezed light by parametric amplification and the ability to seed a standard, high-power classical interferometer with squeezed light. This method thus enjoys both the quantum advantage of squeezing and the classical advantage of high power \cite{ligo2011gravitational, barsotti2018squeezed, tse2019quantum}.
However, all sub-shot-noise methods require a high detection efficiency (greater than 90\% for the SNL to be considerably overcome \cite{schnabel2017squeezed}) and can be applied only in limited spectral ranges, where high-efficiency, low-noise detectors are available. The Sagnac interferometer is a major interferometric tool of high scientific and technological importance that can also be enhanced by squeezing. The Sagnac interferometer is the basis of optical and matter-wave gyroscopes, which are critical for military and civilian applications. In addition, the zero-area Sagnac interferometer was suggested \cite{traeger2000polarization,chen2003sagnac,eberle2010quantum} for the next generation of gravitational wave detectors. \subsection{Quantum gyroscopes} Since the Sagnac interferometer is sensitive to rotations, a natural application that takes advantage of this characteristic is its use in the construction of gyroscopes. As such, this type of application became common after gyroscopes were first suggested or implemented in optical \cite{vali1976fiber, bergh1981all, bergh1981all2} and matter-wave \cite{riehle1991optical} setups. Assuming the G\"odel metric, Sagnac interferometers were even proposed as devices capable of placing an upper bound on the rotation of the universe \cite{delgado2002quantum}. An important advantage of Sagnac interferometers over other configurations relies on their geometric nature. In fact, geometric phases depend exclusively on global variables and are, thus, intrinsically immune to local noise disturbances that preserve these geometric features \cite{carollo2003geometric, che2018phase}. Because of that, measurements of effects based on geometric phases can provide high-precision quantum sensing in real-world scenarios. Here, we briefly present some advances in the development of Sagnac-based gyroscopes. To start, we focus on matter-wave gyroscopes.
Since the first experimental application of the Sagnac effect with atoms \cite{riehle1991optical}, sensitivity improvements have continued to be investigated \cite{gustavson1997precision, dowling1998correlated, gustavson2000rotation, wu2007demonstration}. In particular, the use of trapped atoms was suggested \cite{ketterle1992trapping, sauer2001storage, arnold2003adaptable} to overcome the complexity of implementing atom gyroscopes \cite{moan2020quantum}. Because they support the atoms against gravity, the traps allow longer measurement times without requiring large dropping distances, which otherwise would be challenging for matter-waves. With that, configurations using trapped atomic clocks \cite{stevenson2015, che2018phase} were shown to saturate the quantum Cram\'er-Rao bound. Moreover, entanglement-enhanced atomic gyroscopes with atoms trapped in an optical ring potential were proposed to achieve a precision of the order given by the Heisenberg limit \cite{cooper2010entanglement}. On the optical side, it was first observed that single-mode fiber optic gyroscopes offered increased simplicity, stability, and the potential of very high rotation sensitivity \cite{vali1976fiber, bergh1981all, bergh1981all2}. However, the experiments began to truly exploit quantum advantages after it was shown that the use of squeezed light in these gyroscopes could raise the sensitivity beyond the SNL \cite{marte1987enhanced, mehmet2010demonstration}. Entanglement was also shown to improve sensitivity in various schemes \cite{tian2018theoretical, fink2019entanglement, grace2020quantum}. \subsection{Third generation of gravitational wave detectors} Gravitational wave detectors have to meet harsh standards to function properly. For instance, in order to increase the information obtained from the measured interference pattern, they should use sophisticated feedback mechanisms allowing for major frequency and power stability.
Moreover, they have to be extremely long to detect the minute spacetime variations induced by gravitational waves. Also, as in most high-precision experiments with squeezed light, strict limits are imposed on the system's losses. \begin{figure} \centering \includegraphics[width=\columnwidth]{circuit3.pdf} \caption{\textbf{Schematic representation of a simple zero-area Sagnac interferometer.} The disjoint oriented areas enclosed by systems travelling the interferometer (in yellow and orange) cancel each other. As a result, such an interferometer is insensitive to rotations.} \label{fig:circuit3} \end{figure} The Sagnac effect was suggested for detecting gravitational waves \cite{anandan1981sagnac}. With that and envisioning it as a possible replacement for the Michelson-Fabry-P\'erot-based LIGO, the zero-area Sagnac interferometer was introduced \cite{sun1996sagnac}. In this variation of the standard Sagnac interferometer, the waves still travel in opposite directions. However, the circuit is constructed in such a way as to have area cancellation, as schematically illustrated in Figure \ref{fig:circuit3}. At first, it may seem that such an interferometer loses the most remarkable characteristic of Sagnac interferometers: rotation sensitivity. However, some of the main advantages of this interferometer are its insensitivity to variations of laser frequency and its peak response in the frequency band of interest for LIGO applications. Moreover, this interferometer is also insensitive to mirror displacement at dc, thermally induced birefringence, and reflectivity imbalance in the arms. With all that, the optical tolerance requirements of the system are reduced and the system is more easily controlled. Nevertheless, these advantages did not suffice to overcome some of the disadvantages of the Sagnac topology, like its low tolerance to beamsplitter reflectivity error and beamsplitter tilt \cite{petrovichev1998simulating}.
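The area cancellation behind the zero-area design can be illustrated with the signed (oriented) area of a closed path: a figure-eight circuit encloses two lobes of opposite orientation whose contributions cancel, so the net Sagnac phase vanishes. A toy sketch (the example paths are our own choices):

```python
import numpy as np

def signed_area(path):
    # Shoelace formula: oriented area enclosed by a closed planar path.
    x, y = path[:, 0], path[:, 1]
    return 0.5 * np.sum(x * np.roll(y, -1) - y * np.roll(x, -1))

t = np.linspace(0.0, 2.0 * np.pi, 100000, endpoint=False)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
eight  = np.stack([np.sin(2.0 * t), np.sin(t)], axis=1)  # figure-eight path

print(signed_area(circle))  # ~ pi: a standard loop is rotation sensitive
print(signed_area(eight))   # ~ 0: the two lobes cancel, as in a zero-area design
```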
Furthermore, the ideal Sagnac interferometer did not present sensitivity advantages when compared to signal-recycled Michelson interferometers for the astrophysical needs of this type of application \cite{mizuno1997frequency}. \begin{figure} \centering \includegraphics[width=\columnwidth]{circuit4.pdf} \caption{\textbf{Schematic representation of the ring-Sagnac interferometer.} The inner Sagnac interferometer, composed of the three white beamsplitters and a mirror (black element), is attached to two resonant cavities, one attached to its top left and the other to its bottom right. The elements inside the area delimited by the dashed line are used for homodyne readout. The orange beamsplitter separates a small portion of the signal. Then, this portion is reflected by a mirror and goes through a phase shifter (yellow circle). The blue box represents a homodyne detector. This setup was analysed in Ref. \cite{huttner2016candidates}.} \label{fig:circuit4} \end{figure} Despite these challenges, much effort was put into pushing the Sagnac scheme into an implementable state for the third generation of gravitational-wave detectors \cite{traeger2000polarization, chen2003sagnac, eberle2010quantum, voronchev2014sagnac, huttner2016candidates,huttner2020comparison}. For instance, it was shown that the zero-area Sagnac interferometer is a speedmeter, which can have advantages over position meters, like more conventional Michelson interferometers \cite{chen2003sagnac}. Moreover, the sensitivity of the zero-area Sagnac interferometer can be improved upon using ring cavities in the arms and signal recycling, similarly to the illustration in Figure \ref{fig:circuit4} without the elements inside the area delimited by the dashed line, which were a later addition. Although this would make the sensitivity of this interferometer and the Michelson scheme comparable, the implementation of the Sagnac device had the advantage of not requiring the addition of any extra Fabry-P\'erot cavities.
Based on these results, it was suggested that a further increase in the zero-area Sagnac interferometer's sensitivity could be achieved with the input of squeezed vacuum on its open port combined with a standard homodyne measurement \cite{eberle2010quantum}. However, such a measurement is inherently narrowband and requires near-ideal photodetectors as well as precise, technically demanding, active phase-locking of the squeezed vacuum to the local oscillator. Nevertheless, when compared to different interferometers, a Sagnac speedmeter is less susceptible to loss in a filter cavity \cite{voronchev2014sagnac}. This seems to establish the Sagnac topology as a good candidate for the third generation of gravitational wave detectors since it simplifies the development of the filter cavity, reducing its implementation costs. More improvements were suggested with the introduction of a sloshing-Sagnac interferometer, which is made out of two resonant Fabry-P\'erot arm cavities in a Michelson configuration linked to a similar, antiresonant cavity running parallel to them \cite{huttner2016candidates}. This constitutes the Sagnac configuration represented in Figure \ref{fig:circuit5}. When compared to the ring-Sagnac interferometer shown in Figure \ref{fig:circuit4}, this device offers an extra degree of freedom for optimizations in the form of the finesse of the sloshing cavity, which is separated from the arm cavity. In fact, with the use of squeezing, the sloshing-Sagnac interferometer presented a better performance in a lossy environment when compared to various other interferometers \cite{huttner2020comparison}. \begin{figure} \centering \includegraphics[width=\columnwidth]{circuit5.pdf} \caption{\textbf{Schematic representation of the sloshing-Sagnac interferometer.} The main part of the interferometer is composed of a beamsplitter (white element) and the horizontal and vertical arms. In each arm, there are two mirrors with equal reflectance and transmittance (gray elements).
The signal leaked at the end of each of these cavities is reflected by a mirror (black element) and linked to an antiresonant cavity (in the diagonal). Lenses, represented by the elliptical elements, are used to match the cavity modes. Finally, the elements inside the area delimited by the dashed line are used for homodyne readout, like in the ring-Sagnac interferometer. This setup was investigated in Ref. \cite{huttner2016candidates}.} \label{fig:circuit5} \end{figure} Before concluding this subsection and proceeding to a broader class of sensing techniques, it should be noted that alternatives to Sagnac interferometers are also being considered in the literature. For instance, Michelson interferometers can also be adapted to become speedmeters, which may give them some advantages compared to the more conventional Michelson interferometers \cite{purdue2002analysis, purdue2002practical, freise2019simplified}. \subsection{Enhanced sensing with weak value amplification} Weak value amplification \cite{dixon2009ultrasensitive, susa2012optimal, dressel2013strengthening, jordan2014technical, pang2014entanglement, alves2015weak, harris2017weak} is a broad set of sensing techniques with diversified applications. It often consists of weakly coupling a quantum pointer to the quantum system of interest. The latter is pre- and post-selected, i.e., it has both initial and final boundary conditions $|\psi\rangle$ and $|\phi\rangle$, respectively. In this scenario, the shift of the pointer is, to first order in the coupling strength, proportional to the \textit{weak value} \cite{aharonov1988result} \begin{equation} A_w=\frac{\langle \phi| A | \psi\rangle}{\langle \phi | \psi\rangle} \end{equation} of the measured operator $A$. As can be easily seen, the weak value of $A$ is not confined to its spectrum and can thus be very large. 
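As a minimal numerical illustration of this point (the operator, the states, and the small overlap parameter below are hypothetical choices made for illustration, not taken from the cited experiments), consider the weak value of the Pauli-$Z$ operator for nearly orthogonal pre- and post-selected states:

```python
import numpy as np

# Weak value A_w = <phi|A|psi> / <phi|psi> for Pauli-Z, whose spectrum is
# {+1, -1}.  Nearly orthogonal pre- and post-selection drives A_w far
# outside that spectrum.  The states and eps are hypothetical choices.
A = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli-Z

eps = 0.01
psi = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)   # pre-selection
phi = np.array([1.0, -(1.0 - eps)], dtype=complex)       # post-selection
phi /= np.linalg.norm(phi)

# Here A_w evaluates to (2 - eps)/eps = 199, far beyond the eigenvalues.
A_w = (phi.conj() @ A @ psi) / (phi.conj() @ psi)
```

Shrinking `eps` (i.e., making the overlap $\langle\phi|\psi\rangle$ smaller) makes the amplification arbitrarily large, at the cost of a vanishing post-selection probability.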
This amplification was shown to be particularly helpful in the presence of technical noise and detector saturation \cite{jordan2014technical, harris2017weak}. Related schemes use intermediate-strength or strong measurements for assessing weak values \cite{elitzur2011retrocausal,cohen2018determination, pan2020weak, dziewior2019universality} or, alternatively, rely on the inverse weak values \cite{martinez2017ultrasensitive}. Weak values and weak measurements also bear interesting relations with the geometric phase \cite{sjoqvist2006geometric, cho2019emergence, gebhart2020topological, wang2021observing}. Post-selection of the dark port of a Sagnac interferometer was shown to yield ultra-sensitive deflection \cite{dixon2009ultrasensitive} and tilt measurements \cite{martinez2017ultrasensitive}, as well as sensitive gravimetry \cite{jordan2019gravitational}. Another work used weak value amplification for improved sensing of angular rotations \cite{magana2014amplification}. Enhancement of angular velocity measurements based on weak value amplification was recently demonstrated in Refs. \cite{fang2021weak, huang2021amplification}. \section{Aharonov-Bohm effect and non-inertial systems} \label{sec:ab-gravitational} In this section, we discuss two effects that, to the best of our knowledge, have not been widely considered in the literature and deserve more attention in practical applications. From Larmor's theorem \cite{larmor1900aether}, it is known that a magnetic field $B$ acting on a particle can be emulated by the rotation of the frame with an angular speed proportional to $B$. It could be asked, then, if this also holds in the case of the AB effect since the particle only travels in regions with a null field. This question was answered in 1973 by Aharonov and Carmi \cite{aharonov1973quantum}, and their solution was further studied and employed in Refs. \cite{harris1980review, aharonov2014measure, aharonov2015comment}. 
They presented a direct analogy between the \textit{vector potential} and the angular velocity. This analogy allows a geometric understanding of the AB phase. To understand the result presented by Aharonov and Carmi, consider a lab given by a narrow ring with an inner radius $R_1$ and an outer radius $R_2$, as represented by the blue region in Figure \ref{fig6}. Also, assume that the ring rotates with an angular velocity $\omega$ and, for simplicity, that the ratio between the charge and mass is the same for every particle inside the ring. Moreover, the disc with radius $R_1$ is taken to be massive. Because of its rotation, the ring experiences two pseudo-forces, namely the centrifugal and the Coriolis force. These forces, however, can be canceled by external electromagnetic forces. Indeed, the Coriolis force $\vec{F}_C$ acting on an object with mass $m$ and velocity $\vec{v}$ (measured in the lab) can be written as $\vec{F}_C=m\vec{v}\times\vec{C}$, where $\vec{C}$ is the field associated with $\vec{F}_C$. Moreover, because $\vec{C}$ satisfies $\vec{\nabla}\cdot\vec{C} = 0$, $\vec{F}_C$ derives from a field that is the curl of a vector potential $\vec{A}_C$. Also, if $\vec{F}_c$ is the centrifugal force, $\vec{\nabla}\times\vec{F}_c =0$, i.e., $\vec{F}_c$ can be written as the gradient of a scalar potential. Then, suppose an electromagnetic field is applied only \textit{inside} the ring to remove the pseudo-forces. Even in this case, a quantum experiment with a particle enclosing the disk $D_1$ (with radius $R_1$) in a superposition of wavepackets traveling in different paths, as represented in Figure \ref{fig6}, can detect that the ring is not an inertial frame. 
In fact, denoting by $\vec{A}_T$ the vector potential associated with the rotating mass after the inclusion of the magnetic field in the lab, it is possible to write the Hamiltonian of the system as \begin{equation} H = \frac{1}{2m} \left(\vec{P}-m\vec{A}_T\right)^2, \label{hamil-ang-vel} \end{equation} which implies that the relative phase accumulated by the wavepackets is proportional to \begin{equation} \Delta\phi_R = m\oint \vec{A}_T \cdot d\vec{\ell} = m\int_{D_1} \vec{C} \cdot d\vec{S} = 2\pi m R_1^2 \omega, \end{equation} i.e., the accumulated phase is proportional to the angular speed $\omega$ of the lab. The Hamiltonian in Eq. \eqref{hamil-ang-vel} compares to the Hamiltonian associated with the AB effect in Eq. \eqref{eq-ham-vp}. Moreover, a computation similar to the one performed in Eq. \eqref{eq-ref-geophase} shows that $\Delta\phi_R$ is indeed a geometric phase. \begin{figure} \centering \includegraphics[width=5cm]{fig6.pdf} \caption{\textbf{Representation of a laboratory given by a narrow ring.} The massive disk (gray circle) and the laboratory (blue region) are assumed to rotate as a single system. Even if an electromagnetic field is used to counterbalance the gravitational field generated by the massive disk and the fictitious force created by its rotation inside the laboratory, a quantum experiment can still detect that the laboratory is not an inertial frame.} \label{fig6} \end{figure} This phenomenon is an AB-like effect. It suggests the necessity for a modification of the equivalence principle in quantum theories. In fact, the charge travels in regions where the effective field is null, although its potential is not. In particular, we can deduce from the previous example that, in a general relativistic treatment, curvature effects, and not just accelerations caused by them, can be detected in quantum interference experiments. 
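The flux argument behind $\Delta\phi_R$ can be sketched numerically. In this solenoid-like picture (all values below are hypothetical, in natural units), the Coriolis field $\vec{C}=2\omega\hat{z}$ is confined to the disk $r<R_1$, while outside the vector potential falls off as $A_\phi=\omega R_1^2/r$; a loop that never enters the field region still acquires the phase $2\pi m R_1^2\omega$:

```python
import numpy as np

# Sketch (hypothetical values): the loop integral of the vector potential
# along a circle of radius R > R1, entirely in the field-free region,
# equals the flux of C = 2*omega*z_hat through the disk D1.
m, omega, R1 = 1.0, 0.5, 1.0
R = 3.0                                    # loop radius, R > R1

A_phi = omega * R1**2 / R                  # tangential component on the loop
loop_integral = A_phi * (2.0 * np.pi * R)  # closed line integral of A_T
flux = (2.0 * omega) * np.pi * R1**2       # flux of C through the disk D1
phase = m * loop_integral                  # Delta(phi_R) = 2*pi*m*R1^2*omega
```

The loop integral is independent of $R$, mirroring the path independence of the AB phase discussed below.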
Indeed, low-order curvature effects were studied in an interferometer with thermal neutrons that only makes use of horizontal mirrors \cite{anandan1984curvature}. Moreover, another similarity between the example studied in this section and the magnetic AB effect is that the trajectories of the wavepackets can be deformed and, as long as they do not enter the gray region in Figure \ref{fig6}, the acquired relative phase between them is unchanged. However, unlike the magnetic AB effect, the phase considered here is acquired in a manifestly local manner. One can go even further and discuss the gravitational AB effect \cite{anandan1977gravitational, anandan1979interference, anandan1983interferometry, ford81, chiao2014gravitational}. In particular, the authors of Ref. \cite{chiao2014gravitational} replace the magnetic flux with a Lense-Thirring field. Then, they show a relation between this effect and gravitational radiation and parametric oscillators. Generally speaking, it seems that the gravitational AB effect may benefit from the improved sensitivity of the proposed squeezing-enhanced Sagnac interferometer. However, this is a practical question that deserves further exploration. \section{Discussion and outlook} \label{sec:discussion} The geometry of quantum states is remarkable in breadth and depth. We have only described a small selection of its various fundamental and practical merits in this review article. On the fundamental side, it arises from a rich mathematical structure that can be related to a classical counterpart via the Bohr-Sommerfeld quantization rule, as we explained briefly in Section \ref{sec:geom-phases}. Moreover, AB-like non-inertial and gravitational effects introduce interesting quantum phenomena and even suggest the necessity of a reformulation of basic concepts, like the equivalence principle, as seen in Section \ref{sec:ab-gravitational}. 
On the application side, particular attention was given to gravitational and non-inertial measurements. However, geometric phases are also an important player in quantum information and computation \cite{vedral2003geometric, sjoqvist2015geometric, chen2020observable}, chemical physics \cite{zwanziger1990berry, mead1992geometric, kuppermann1993geometric, kendrick2015geometric}, and many other areas. Before concluding, we outline some topics for further research. From a practical perspective, the geometric and gravitational AB effects do not seem to be vastly studied in the literature and may lead to new quantum-enhanced precision measurements. These effects and their relation to other relativistic ones, such as frame-dragging, seem to deserve further attention. On the fundamental level, there are still some open questions such as: Does the analogy between various geometric phases and the Sagnac effect represent a genuine physical relation between them? Is the AB effect local? If so, in which sense? Can constructions of a complete quantum mechanical description of systems which does not utilize potentials help in this investigation? Regarding this last question, it indeed seems that an interesting direction is the study of fully quantized systems where gauge-dependence is linked to frame-dependence \cite{aharonov1991there, paiva2021aharonov}. This approach could reveal similarities and also highlight important differences between the AB and other geometric phases. In this approach and other similar ones, geometric and topological phases were linked to the creation of entanglement. For instance, in the case of the AB effect, if the source of the magnetic field is not an eigenstate of a relevant observable, the source and the charge encircling it become entangled \cite{paiva2021aharonov}. However, as argued by Vaidman \cite{Vaidman2012}, it is possible to conceive a model in which entanglement is present even if initially there is no uncertainty in the flux. 
This suggests that a general interplay may exist between dynamical nonlocality and kinematical nonlocality (see also \cite{marletto2020aharonov}). The latter is a broad category that includes, for instance, Bell nonlocality. The former is a type of nonlocality that emerges from the dynamical equations of motion \cite{aharonov1969modular, aharonov2005quantum, aharonov2017finally} and can be associated, e.g., with the AB effect. Then, a better understanding of how entanglement is related to these phases could have fundamental and practical consequences. Finally, one may also study to what extent other effects associated with geometric phases, like the optical Magnus effect \cite{bolotovskii1977optical, zel1990rotation, dooghin1992optical, bliokh2004topological, bliokh2004modified}, are fundamentally related or can benefit from the Sagnac effect. \acknowledgements{We thank Ady Arie, Avi Pe'er, and Michael Rosenbluh for many helpful discussions. This research was supported by grant number FQXi-RFP-CPW-2006 from the Foundational Questions Institute and Fetzer Franklin Fund, a donor-advised fund of Silicon Valley Community Foundation, by the Israeli Innovation authority (grants 70002 and 73795), by the Pazy foundation, and by the Quantum Science and Technology Program of the Israeli Council of Higher Education.}
\section{Aircraft model and baseline controller} The dynamics that we will consider in this report is a linearized version of the pitch dynamics of the ADMIRE aircraft \citep{Forssell2005} of the form \begin{equation} \dot{x} = Ax + Bu, \quad y = Cx \label{eq:sys} \end{equation} where $x = \begin{bmatrix} \alpha & q \end{bmatrix}^T$, $\alpha$ is the angle of attack, $q$ is the pitch rate of the aircraft (see Figure~\ref{fig:aircraft}), and $u$ is the elevator control surface deflection. \begin{figure}[hbt] \centering \includegraphics[width=0.3\textwidth]{ShortPeriodDyn.png} \caption{Definition of angles for aircraft control} \label{fig:aircraft} \end{figure} The matrices $A$ and $B$ vary when the c.g. shifts from its most forward position to its most aft (backward) position. With the c.g. in the most forward position the matrices are \begin{equation} A = \begin{bmatrix} -1.453 & 0.9672 \\ 5.181 & -1.639 \end{bmatrix}, \quad B = \begin{bmatrix} 0.4467 \\ 34.79 \end{bmatrix} \label{eq:sys_fwd} \end{equation} and when the c.g. is in the most aft position \begin{equation} A = \begin{bmatrix} -1.45 & 0.9673 \\ 15.08 & -1.414 \end{bmatrix}, \quad B = \begin{bmatrix} 0.4461 \\ 31.77 \end{bmatrix} \label{eq:sys_bwd} \end{equation} From this we can see that the force equation (first row of the matrices) is almost unaffected by the c.g. shift, while the moment equation is strongly affected by it. To stress the adaptive controller as much as possible we have designed the baseline controller for the most forward c.g. case and then simulate the total system with the model of the most aft c.g. case. The baseline controller consists of an LQ feedback term, a static feedforward term giving a static gain of one between the reference and the output (angle of attack), and finally an integral part that integrates the error between the output and the nominal closed loop response (without the integral part). 
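The effect of the c.g. shift on the open-loop poles, and the structure of the feedback-plus-static-feedforward design just described, can be sketched numerically. The $A$ and $B$ matrices below are the quoted ones; the pole locations and the resulting gain $K$ are a hypothetical pole placement standing in for the report's LQ design:

```python
import numpy as np

A_fwd = np.array([[-1.453, 0.9672],
                  [ 5.181, -1.639]])
A_aft = np.array([[-1.450, 0.9673],
                  [15.08,  -1.414]])
B0 = np.array([[0.0], [34.79]])     # simplified B: elevator gives no lift
C  = np.array([[1.0, 0.0]])

# Both configurations are open-loop unstable, and the aft c.g. makes the
# unstable pole roughly three times faster:
lam_fwd = np.sort(np.linalg.eigvals(A_fwd).real)   # about [-3.79, 0.69]
lam_aft = np.sort(np.linalg.eigvals(A_aft).real)   # about [-5.25, 2.39]

# Hypothetical placement of the forward-c.g. closed-loop poles at -3 +/- 3j
# (an assumed choice, not the report's LQ weights):
p1, p0 = 6.0, 18.0                  # desired char. poly s^2 + p1*s + p0
b2 = B0[1, 0]
k2 = (p1 + A_fwd[0, 0] + A_fwd[1, 1]) / b2
k1 = (A_fwd[1, 0] - (A_fwd[0, 0]*(A_fwd[1, 1] - b2*k2) - p0) / A_fwd[0, 1]) / b2
K  = np.array([[k1, k2]])
Acl = A_fwd - B0 @ K                # closed-loop A - BK

# Static feedforward for a unit static gain from r to y = alpha:
F = 1.0 / (C @ np.linalg.inv(-Acl) @ B0)[0, 0]
```

With this (assumed) $K$, the static feedforward $F$ gives the nominal closed loop a unit DC gain from the reference to the angle of attack, which is what the integral term then regulates against.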
The baseline control signal is thus \[ u_{bl} = -Kx + Fr + \int (y - y_{ref}) dt \] where $y = \alpha$ and $y_{ref} = C(sI-A+BK)^{-1}BFr$. The baseline controller is designed using the matrices $A$ and $B$ from \eqref{eq:sys_fwd}, but the $B$ matrix is simplified by setting the element in the force equation to zero, i.e., assuming that the control surface deflection does not generate any lift force, only a moment. This approximation is not necessary at this stage but will have some nice implications in the adaptive design. In Figure~\ref{fig:nom_response} the response of the closed loop system with the nominal controller and different c.g. positions is shown. We can see that the response is good when the c.g. is at its nominal position (the blue line) but as the c.g. moves backwards (green and red lines) there is a large overshoot. It is this overshoot that we want to minimize with an adaptive augmentation without destroying the nominal performance of the baseline controller. \begin{figure}[hbt] \centering \includegraphics[width=0.6\textwidth]{nom_response.pdf} \caption{Step response of the closed loop system with nominal controller for different c.g. positions} \label{fig:nom_response} \end{figure} \section{Robust MRAC design} In Model Reference Adaptive Control (MRAC) one compares the output ($y$) of the closed loop system with that of a reference model ($y_m$). The controller parameters are then updated such that the closed loop system response is as close as possible to that of the reference system. The parameter update can be done in several different ways, e.g., by using the \emph{MIT-rule} or by using Lyapunov stability theory. In this project we have chosen to use Lyapunov stability theory to derive the update laws for the controller parameters. This is mainly due to the theoretical stability guarantees that come with the method. In this report we will only briefly describe the Lyapunov design process. 
For more information on the theoretical background of MRAC and the MIT and Lyapunov update rules we refer the reader to the books of \cite{Astrom2008,Ioannou2012,Lavretsky2013}. In the MRAC design technique that we have adopted the uncertain system is modeled as \begin{equation} \label{eq:unc_sys} \dot{x} = Ax + B\Lambda (u + \theta^T\phi(x) ) \end{equation} where $A$ is an unknown matrix, $\Lambda$ is an unknown diagonal matrix and $B$ is known. The vector $\theta$ contains the unknown coefficients of the general nonlinear function $\theta^T\phi(x)$, where $\phi(x)$ is a set of basis functions. The aim of the adaptive controller is to have the system \eqref{eq:unc_sys} follow a reference model \begin{equation} \label{eq:ref_sys} \dot{x} = A_m x + B_m r \end{equation} as closely as possible with the use of the control signal $u = u_{ad} = -\hat{K}_x x + \hat{K}_r r$. This is only possible if there exists a set of ideal controller parameters $K_x^*$ and $K_r^*$ such that \[ A - B\Lambda K_x^* = A_m, \quad B\Lambda K_r^* = B_m \] These are the so-called \emph{model matching conditions}. Unfortunately the uncertainties in the model \eqref{eq:sys} due to the c.g. variations do not fulfil the model matching conditions, i.e., we cannot use \eqref{eq:unc_sys} applied to system \eqref{eq:sys} to model the uncertainties. 
Instead we consider a model in the new state variable $z = [ \alpha \;\; \dot{\alpha}]^T$ which, if we use the same approximation of the $B$ matrix as used in the nominal controller, is a simple linear transformation of the original states \[ z = Tx = \begin{bmatrix} 1 & 0 \\ a_{11} & a_{12} \end{bmatrix}x \] and we get the following model of the uncertain system \begin{equation} \label{eq:unc_sys_mod} \dot{z} = \underbrace{\begin{bmatrix} 0 & 1 \\ \tilde{a}_{21} & \tilde{a}_{22} \end{bmatrix}}_{\tilde{A}} z + \underbrace{\begin{bmatrix} 0 \\ \tilde{b}_2 \end{bmatrix}}_{\tilde{B}} u \end{equation} For this system the model matching conditions are fulfilled and we can model it with the structure \eqref{eq:unc_sys}, choosing $\Lambda = \lambda$ (scalar) and $\theta^T\phi(x) = 0$. The error between the closed loop system and the reference model can be written, using the model matching conditions and crudely ignoring the integral term in the nominal controller, as \begin{equation} \label{eq:error} \dot{e}(t) =\dot{z}(t) - \dot{z}_m(t) = \tilde{A}_me(t) - \tilde{B}^0\lambda\Delta\hat{K}_zz + \tilde{B}^0\lambda\Delta\hat{K}_r r \end{equation} where $\tilde{A}_m = TA_mT^{-1} = T(A-BK)T^{-1}$ and $\tilde{B}_m = TB_m = TBF$. Using the Lyapunov function candidate \begin{equation} \label{eq:lyapunov_cand} V(e,\Delta K_z,\Delta K_r) = \half \left(e^TPe + \frac{\abs{\lambda}}{\gamma_z} \Delta K_z \Delta K_z^T + \frac{\abs{\lambda}}{\gamma_r} \Delta K_r^2 \right) \end{equation} and differentiating w.r.t. 
time, using \eqref{eq:error}, we obtain \begin{align*} \dot{V} & = \half \left( \dot{e}^TPe + e^TP\dot{e} + \frac{\abs{\lambda}}{\gamma_z} \Delta \dot{K}_z \Delta K_z^T + \frac{\abs{\lambda}}{\gamma_z} \Delta K_z \Delta \dot{K}_z^T + 2\frac{\abs{\lambda}}{\gamma_r} \Delta K_r \Delta \dot{K}_r \right)\\ & = \half e^T\left(\tilde{A}_m^TP + P\tilde{A}_m\right)e + \frac{\abs{\lambda}}{\gamma_z}\Delta \dot{K}_z \Delta K_z^T - \lambda e^TP\tilde{B}^0\Delta K_z z + \frac{\abs{\lambda}}{\gamma_r} \Delta \dot{K}_r \Delta K_r + \lambda e^TP\tilde{B}^0\Delta K_r r\\ & = - \half e^TQe + \left(\frac{\abs{\lambda}}{\gamma_z}\Delta \dot{K}_z - \lambda e^TP\tilde{B}^0\right) \Delta K_z^T + \left( \frac{\abs{\lambda}}{\gamma_r} \Delta \dot{K}_r + \lambda e^TP\tilde{B}^0 \right) \Delta K_r \end{align*} If the derivative is negative then \eqref{eq:lyapunov_cand} is a valid Lyapunov function and the closed loop system is stable. A negative derivative is obtained if $\tilde{A}_m^TP + P\tilde{A}_m = -Q$ for some $Q > 0$ and if we select the adaptive controller gains as \begin{subequations} \label{eq:mrac_update} \begin{align} \dot{K}_z &= \gamma_z\text{sgn}(\lambda) e^TP\tilde{B}^0z^T \\ \dot{K}_r & = - \gamma_r\text{sgn}(\lambda) e^TP\tilde{B}^0r \end{align} \end{subequations} To be able to cope with disturbances and sensor noise, and to not interfere with the nominal controller, we need to add some additional ingredients to the update laws \eqref{eq:mrac_update}. To prevent the parameters from drifting due to noise we limit them with a projection operator \citep{Ioannou2012}. In addition to this we also add a dead zone for small errors, $e(t)$. This also reduces the sensitivity to noise and, additionally, prevents the adaptive controller from interfering with the performance of the nominal controller \citep{Lavretsky2013}. 
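A discrete-time sketch of these two modifications (one Euler step; the simple box clipping below stands in for the smooth projection operator of \cite{Ioannou2012}, and all numerical values are hypothetical):

```python
import numpy as np

def mrac_step(K, grad, e_norm, eps, K_max, dt):
    """One Euler step of the modified update law (sketch).

    grad plays the role of gamma*sgn(lambda)*e'*P*B0*z' (or the reference
    term); e_norm is ||e(t)||, eps the dead-zone level, K_max a box bound."""
    if e_norm <= eps:                  # dead zone: freeze adaptation
        return K
    K_new = K + dt * grad
    return np.clip(K_new, -K_max, K_max)   # crude projection onto a box

K = np.array([0.4, 0.05])
# inside the dead zone nothing happens:
K1 = mrac_step(K, grad=np.array([10.0, 10.0]), e_norm=0.01, eps=0.05,
               K_max=1.0, dt=0.01)
# outside it, the gains move but stay within the projection bounds:
K2 = mrac_step(K, grad=np.array([10.0, 200.0]), e_norm=0.2, eps=0.05,
               K_max=1.0, dt=0.01)
```

The dead zone guarantees that, near the nominal operating point, the adaptive gains are frozen and the baseline controller acts alone.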
With these modifications the adaptive laws become \begin{subequations} \label{eq:robust_mrac_update} \begin{align} \dot{K}_z &= \begin{cases} \text{Proj} \left(K_z, \gamma_z\text{sgn}(\lambda) e^TP\tilde{B}^0z^T \right) & \norm{e(t)}{} > \epsilon \\ 0 & \norm{e(t)}{} \leq \epsilon \end{cases} \\ \dot{K}_r & = \begin{cases} \text{Proj} \left(K_r, - \gamma_r\text{sgn}(\lambda) e^TP\tilde{B}^0r \right) & \norm{e(t)}{} > \epsilon \\ 0 & \norm{e(t)}{} \leq \epsilon \end{cases} \end{align} \end{subequations} Finally, the adaptive augmentation control used is $u_{ad} = - K_z z +K_r r$, where $K_z$ and $K_r$ are updated according to \eqref{eq:robust_mrac_update}. We have validated the design by simulating the system \eqref{eq:sys} with the c.g. in its most aft position, i.e., with the use of matrices \eqref{eq:sys_bwd}. This system has been controlled with the baseline controller designed for the case with matrices \eqref{eq:sys_fwd} and with the adaptive augmentation. In Figure~\ref{fig:simulation} we see the result of the simulation. In the first part of the figure we see the angle of attack response of the closed loop adaptive system (blue) compared to using only the baseline controller (magenta). We can see that already in the beginning, when the adaptive controller has not converged, the performance is better with the adaptive augmentation. At the end of the simulation, when the adaptive controller has ``converged'' we can see that the response is really good. \begin{figure}[hbt] \centering \includegraphics[width = 0.6\textwidth]{mrac_sim.pdf} \caption{Simulation of the closed loop system with both baseline controller and adaptive augmentation. The simulated system is the aircraft with the c.g. in its most aft position.} \label{fig:simulation} \end{figure} In the second part of the figure we can see the evolution of the adaptive parameters. 
The blue solid line is the angle of attack feedback term, the blue dashed line is the pitch rate feedback and the green line is the reference feedforward term. An interesting observation is that at the end, when the performance is best, the pitch rate feedback and the feedforward gains are almost zero. Therefore it might be interesting to try to use only an angle of attack feedback term in the adaptive augmentation. In the third part of the figure the baseline control signal, the adaptive augmentation control signal and the total control signal are shown. Here we can see that the adaptation causes ripples on the control signal. The magnitude and frequency of the ripples depend on the tuning, but it is difficult to remove them completely. In the last part of the figure we plot the 2-norm of the model following error, $e(t) = z(t) - z_m(t)$, together with the dead zone level, $\epsilon$ (red dashed line). \section{Open issues and future work} Even though the validating simulation looks good there are still some issues that need to be further investigated before we can conclude that adaptive augmentation of this form is a good methodology for c.g. adaptation in aircraft. The first issue is that even though the Lyapunov design method should guarantee stability, it was possible to make the closed loop unstable for certain tunings and reference signals. Secondly, the adaptive control laws are not scale invariant with respect to the reference signal amplitude. A tuning that works well for small reference signal values makes the closed loop unstable for large reference signals, and if the adaptive laws are tuned for large reference signals then the performance is poor for small signals. Additionally, the dead zone level, $\epsilon$, is dependent on the signal levels. This phenomenon is pointed out in \cite{Astrom2008} and both they and \cite{Ioannou2012} suggest a normalisation scheme for the update laws. 
However, the normalised adaptive laws in both \cite{Astrom2008} and \cite{Ioannou2012} are designed in a transfer function framework and we have found no equivalent in the state space setting. \cite{Lavretsky2013} suggests, as an alternative, using an integral feedback term instead of the reference feedforward term in the adaptive control signal. The most important issue to continue working on is probably the scale invariance of the controller, since this affects both stability and the tuning of the dead zone. The idea is to find a suitable way to normalise the adaptive laws in the state space setting. The possibility of using only an angle of attack feedback as adaptive augmentation is also a very interesting idea to investigate further. \bibliographystyle{abbrvnat}
\section{INTRODUCTION} The even-order dispersion cancellation effect based on nonclassical frequency-anticorrelated entangled photons has been known in quantum optics for some time \cite{franson92, steinberg92a}. The nonlinear optical process of spontaneous parametric down conversion (SPDC) traditionally provides a reliable source of frequency-entangled photon pairs with anticorrelated spectral components, as a consequence of energy conservation. If the frequency of the signal photon is $\omega_s$, then the frequency of its twin idler photon must be $\omega_i=\Omega_p - \omega_s$, where $\Omega_p$ is the frequency of the pump beam. A quantum interferometer records the modulation in the rate of coincidence between pulses from two photon-counting detectors at the output ports of a beamsplitter in response to a temporal delay between two spectrally correlated photons entering its input ports symmetrically. This type of quantum optics intensity correlation measurement, exhibited in the Hong-Ou-Mandel (HOM) interferometer \cite{hom87}, is manifested by an observed dip in the rate of coincidences. In previous demonstrations of dispersion cancellation, one photon of the downconverted pair travels through a dispersive material in one arm of the HOM interferometer while its twin travels only through air. The final coincidence interference dip is not broadened in this case, demonstrating insensitivity to even-order dispersion coefficients \cite{steinberg92a,qoct02}. Even-order dispersion cancellation has been used in quantum information processing, quantum communication, and in quantum optical metrology. For example, it enhances the precision of measuring photon tunneling time through a potential barrier \cite{steinberg92b} and improves the accuracy of remote clock synchronization \cite{giovannettiNAT01}. 
The same effect provides superior resolution in quantum optical coherence tomography \cite{nasr03} by eliminating the broadening of the interference envelope resulting from group velocity dispersion. The potential of quantum even-order dispersion cancellation has recently stimulated efforts to mimic this effect by use of classical nonlinear optical analogues \cite{shapiroOCT06,resch07, reschNature08}. In this Letter we introduce a novel type of quantum interferometer that enables demonstration of odd-order dispersion cancellation as part of a new dispersion management technique. In our design, both even-order and odd-order dispersion cancellation effects can be recorded as parts of a single quantum interference pattern. \begin{figure} [ht] \centering \includegraphics [width = 8 cm] {Fig1.eps} \caption{Schematic diagram of the optical setup. The SPDC source produces pairs of frequency anticorrelated photons combining on a beamsplitter in a HOM interferometer configuration. Photons exiting one HOM port are fed into a MZ interferometer. Coincidence events are registered between two single-photon detectors at the output ports of the MZ interferometer. A dispersive sample in one arm of the MZ interferometer generates a phase delay ($\phi $).} \label {setup} \end{figure} HOM interferometers are commonly used to produce either the $ \left | \Psi \right \rangle \sim \left | 2,0 \right \rangle - \left | 0,2 \right \rangle $ state, when the delay $\tau_1$ is set to balance the two paths, ensuring destructive interference in the middle of the interference dip, or a superposition of $ \left | 1,1 \right \rangle$, $ \left | 0,2 \right \rangle $ and $ \left | 2,0 \right \rangle $ states, when the delay $\tau_1$ significantly unbalances the two paths and shifts coincidences to the shoulder of the HOM interference pattern. Mach-Zehnder (MZ) interferometers fed by a particular quantum state have also been studied in detail \cite{Campos90}. 
In the new design two interferometers work together: one output port of a HOM interferometer provides input to a MZ interferometer. The state of light introduced into the MZ interferometer is continuously modified when the delay $\tau_1$ in the HOM interferometer is scanned. A signal from one of the HOM output ports is fed into a MZ interferometer with a dispersive sample providing a phase shift $ \phi$ in one arm, as shown in Fig.1. The delay $\tau_2$ inside the MZ interferometer is kept at a fixed value. A peculiar quantum interference pattern is observed in the rate of coincidences between two photon-counting detectors $D1$ and $D2$ at the output ports of the MZ interferometer as a function of $\tau_1$. The interference profile has two distinct patterns. The central interference pattern depends only on even-order dispersion coefficients, while the peripheral pattern depends only on odd-order terms. This ability to manipulate and evaluate odd-order and even-order dispersion terms independently in a single quantum interferometer opens new perspectives in quantum communication and in precise optical measurement. \section{THEORETICAL MODEL} For detectors $D_1$ and $D_2$ much slower than the temporal coherence of the downconverted photons, the coincidence rate in such intensity correlation measurements is \cite{SPDC_theory}: \begin{equation} \label{eq:Rc} R_{c}(\tau_{1}, \tau_{2}) = \int dt_{1}\int dt_{2} G^{(2)}(t_{1},t_{2}), \end{equation} \noindent where $G^{(2)}(t_{1},t_{2})$ is the second-order correlation function: \begin{equation} \label{eq:G2} G^{(2)}(t_{1},t_{2}) = \mid \left \langle 0 \right | \hat{E}_{1}^{(+)}(t_{1}) \hat{E}_{2}^{(+)}(t_{2}) \left | \Psi \right \rangle \mid^{2}. \end{equation} \noindent $ \hat{E}_{1}^{(+)}(t_{1})$ and $ \hat{E}_{2}^{(+)}(t_{2})$ are the electric field operators at the surfaces of detectors $D_1$ and $D_2$, respectively. 
\begin{equation} \label{eq:fields} \hat{E}_{j}^{(+)}(t_{j}) = \frac{1}{\sqrt{2\pi}}\int d\omega_{j}e^{-i\omega_{j}t_{j}}\hat{b}_{j}(\omega_{j}), \end{equation} \noindent where $\hat{b}_{j}(\omega_{j})$ is the mode operator at detector $j$, expressed in terms of the input field operators $\hat{a}_{j}(\omega_{j})$ \cite{SPDC_theory}. The quantum state of light emitted in a frequency-degenerate non-collinear type-I phase-matching SPDC process with a monochromatic pump $\Omega_{p}$ is: \begin{equation} \label{eq:state} \left | \Psi \right \rangle \propto\int d\omega f(\omega) \hat{a}_{1}^{\dagger}(\Omega_{0}+\omega)\hat{a}_{2}^{\dagger}(\Omega_{0}-\omega) \left | 0 \right \rangle, \end{equation} \noindent where $f(\omega)$ is a photon wavepacket spectral function defined by the phase matching condition in the nonlinear material, $\Omega_{0} = \Omega_{p}/2$ is the central frequency of each wavepacket, $ \omega_{s}= \Omega_{0}+\omega$ is the signal photon frequency, and $ \omega_{i}=\Omega_{0}-\omega$ is the idler frequency. The phase shift $\phi (\omega)$ acquired by the broadband optical wavepacket as it travels through a dispersive material can be expanded in a Taylor series \cite{book_Diels_Rudolf_ultrashort}: \begin{equation} \label{eq:dispersion} \phi(\omega_{s,i}) = c_0 + c_1 (\omega_{s,i}-\Omega_{0}) + c_2( \omega_{s,i}-\Omega_{0})^2+c_3( \omega_{s,i}-\Omega_{0})^3 + \cdots \end{equation} \noindent where the linear term $c_1$ represents the group delay and the second-order term $c_2$ is responsible for group delay dispersion. In a conventional white-light interferometer, $c_1$ is responsible for a temporal shift of the interference pattern envelope, $c_2$ causes its temporal broadening, while $c_3$ produces a non-symmetric deformation of the wavepacket envelope. Higher-order terms might be included when a strongly dispersive material is used or in the case of extremely broadband optical wavepackets. 
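The distinct roles of the even and odd coefficients can be seen directly from this expansion: the sum of the phases at the anticorrelated frequencies $\Omega_{0}\pm\omega$ retains only the even-order terms, while their difference retains only the odd-order ones. A short numeric check with made-up coefficients:

```python
import numpy as np

# Parity check of the Taylor expansion (hypothetical coefficients c0..c3).
def phase(dw, c):                       # dw = omega_{s,i} - Omega_0
    return sum(ck * dw**k for k, ck in enumerate(c))

c = [0.3, 1.2, -0.7, 0.05]              # c0, c1 (group delay), c2, c3
w = np.linspace(-1.0, 1.0, 11)

phase_sum  = phase(w, c) + phase(-w, c)  # only even orders survive
phase_diff = phase(w, c) - phase(-w, c)  # only odd orders survive
```

This parity splitting is what ultimately lets the interferometer record even-order and odd-order dispersion in two separate parts of the coincidence pattern.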
In the optical setup of Fig.1, the dispersive material providing the phase shift $\phi(\omega)$ can be placed in three possible locations. When the sample is placed in an arm of the HOM interferometer, it leads to the well-known even-order dispersion cancellation effect \cite{qoct02}. It may be shown that the presence of a dispersive material between the two interferometers does not affect the coincidence interferogram. We thus concentrate on the most interesting case: we place the dispersive sample of phase shift $\phi(\omega)$ inside the MZ interferometer, with delay $\tau_2$ set to a fixed value, and $\tau_1$ as the variable parameter. Following the usual formalism \cite{SPDC_theory}, one can show that the coincidence rate between the detectors is: \begin{equation} \label{eq:final} \begin {split} R_{c}(\tau_{1}, \tau_{2}) =& \int d\omega (\Phi_{0} - \Phi_\alpha (\omega, \tau_{2}) - \Phi_\beta (\omega,\tau_{2}))\cdot \\ &(f(\omega)f^{*}(\omega) + f(\omega)f^{*}(-\omega)e^{-2 i \omega \tau_{1}} ) , \end {split} \end{equation} \noindent where $ \Phi_{0}$ is a constant, \begin{equation} \label{eq:odd} \begin {split} \Phi_\alpha (\omega,\tau_{2})= &e^{-2 i \omega \tau_{2}} e^{i \phi (\Omega_{0} - \omega)} e^{-i \phi (\Omega_{0} + \omega)} + c.c., \end {split} \end{equation} \noindent and \begin{equation} \label{eq:even} \begin {split} \Phi_\beta (\omega,\tau_{2})= &e^{-2 i \Omega_{0} \tau_{2}} e^{-i \phi (\Omega_{0} - \omega)} e^{-i \phi (\Omega_{0} + \omega)} + c.c. \end {split} \end{equation} Although not obvious from the form of equation (\ref{eq:final}), $R_{c}(\tau_{1}, \tau_{2})$ is a real function for any spectrum $f(\omega)$, as can be seen by rewriting Eq. (\ref{eq:final}) in manifestly real form: \begin{eqnarray} \label{eq:final2} R_{c}(\tau_{1}, \tau_{2})&=&\int d\omega\left\{ |f(\omega )|^2 +|f(-\omega )|^2 \right. 
\nonumber \\&& \qquad +\left.\left[ e^{-2i\omega \tau_1}f(\omega )f^\ast (-\omega )+c.c.\right]\right\} \nonumber\\&& \qquad \qquad\times \left[ \Phi_0 -\Phi_\alpha (\omega )-\Phi_\beta (\omega )\right] \end{eqnarray} This fact ensures that the technique demonstrated here applies to all types of broadband frequency-anticorrelated states of light, including those with nonsymmetric spectral profiles produced in chirped periodically-poled nonlinear crystals. The final coincidence counting rate $R_{c} (\tau_{1},\tau_{2})$ of Eq. (\ref{eq:final}) may also be written as a linear superposition: \begin{equation} \label{eq:coincidence2} R_{c} (\tau_{1},\tau_{2}) = B+ R_{0}(\tau_{1})-R_{even}(\tau_{1},\tau_{2})-R_{odd}(\tau_{1},\tau_{2}). \end{equation} \noindent The first term $B$ incorporates all contributions that do not depend on the variable delay $\tau_{1}$ and reduces to a constant after integration; it establishes the baseline level of the quantum interferogram. The following terms: \begin{equation} \label{eq:term1} R_{0}(\tau_{1}) = 4 \int d\omega f(\omega)f^{*}(-\omega)e^{-2 i \omega \tau_{1}}, \end{equation} \begin{equation} \begin {split} R_{even}(\tau_{1},&\tau_{2})=\int d\omega f(\omega)f^{*}(-\omega)\cdot\\ e^{-2 i \omega \tau_{1}}& [ e^{-2i\Omega_{0} \tau_{2}} e^{-i \phi (\Omega_{0} - \omega)} e^{-i \phi (\Omega_{0} + \omega)} \\ &+ e^{2 i \Omega_{0} \tau_{2}} e^{i \phi (\Omega_{0} - \omega)} e^{i \phi (\Omega_{0} + \omega)}], \end {split} \end{equation} \begin{equation} \begin {split} R_{odd}(\tau_{1},&\tau_{2})= \int d\omega f(\omega)f^{*}(-\omega)\cdot\\ &[ e^{-2 i \omega (\tau_{1}+\tau_{2})} e^{i \phi (\Omega_{0} - \omega)} e^{-i \phi (\Omega_{0} + \omega)} +\\ &e^{-2 i \omega (\tau_{1}-\tau_{2})} e^{-i \phi (\Omega_{0} - \omega)} e^{i \phi (\Omega_{0} + \omega)}] \end {split} \end{equation} \noindent are responsible for the shape of the interference pattern. 
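The structure of $R_{even}$ and $R_{odd}$ above can be checked with a short numerical sketch (our own illustration; the Gaussian spectrum and all parameter values are arbitrary toy assumptions): the phase sum $\phi(\Omega_0-\omega)+\phi(\Omega_0+\omega)$ entering $R_{even}$ retains only even-order dispersion coefficients, while the phase difference entering $R_{odd}$ retains only odd-order ones.

```python
import numpy as np

# Toy evaluation of R_even and R_odd for an assumed real symmetric Gaussian
# spectrum f(omega); sigma, Omega0, tau2, c2, c3 are arbitrary illustrative values.
w = np.linspace(-3.0, 3.0, 2001)
dw = w[1] - w[0]
f = np.exp(-w**2 / 0.5)                     # assumed f(omega), arbitrary units
Omega0, tau2 = 50.0, 4.0

def phi(x, c2, c3):                         # toy phase at Omega0 + x, c0 = c1 = 0
    return c2 * x**2 + c3 * x**3

def R_even(tau1, c2, c3):
    s = phi(-w, c2, c3) + phi(w, c2, c3)    # odd orders cancel in the sum
    z = f * f * np.exp(-2j*w*tau1) * np.exp(-2j*Omega0*tau2 - 1j*s)
    return ((z + z.conj()).sum() * dw).real

def R_odd(tau1, c2, c3):
    d = phi(-w, c2, c3) - phi(w, c2, c3)    # even orders cancel in the difference
    z = f * f * (np.exp(-2j*w*(tau1 + tau2) + 1j*d)
                 + np.exp(-2j*w*(tau1 - tau2) - 1j*d))
    return (z.sum() * dw).real

# Varying c3 leaves R_even unchanged; varying c2 leaves R_odd unchanged.
print(np.isclose(R_even(0.3, 0.2, 0.0), R_even(0.3, 0.2, 0.7)))   # True
print(np.isclose(R_odd(0.3, 0.0, 0.1),  R_odd(0.3, 0.9, 0.1)))    # True
```

Both printed values are True, mirroring the separation of even-order and odd-order dispersion contributions in the two terms.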
The term $R_0 (\tau_1)$ represents a peak centered at $\tau_1=0$ that is simply a Fourier transform of the down-converted radiation spectrum and is insensitive to the dispersion associated with $\phi(\omega)$. Since $R_{even} (\tau_1,\tau_2)$ depends on the sum $\phi (\Omega_{0} - \omega) + \phi (\Omega_{0} + \omega)$, it is sensitive only to even-order terms in the expansion Eq. (\ref{eq:dispersion}). This manifests odd-order dispersion cancellation and generates a dispersion-broadened function centered around $\tau_1 = 0$. The last term $R_{odd} (\tau_1,\tau_2)$, in contrast, is sensitive only to odd-order dispersion terms in $\phi(\omega)$. This term demonstrates the well-known even-order dispersion cancellation. The phase factors $e^{-2 i \omega (\tau_{1}+\tau_{2})}$ and $e^{-2 i \omega (\tau_{1}-\tau_{2})}$ shift the two dips away from the center of the interference pattern in opposite directions. Such a decomposition of quantum interference terms makes it possible to observe odd-order and even-order dispersion cancellation effects in two distinct regions of the coincidence interferogram. \section{EXAMPLE} Our results are illustrated by a numerical example of quantum interference for a 3-mm thick slab of the strongly dispersive optical material ZnSe, inserted in one arm of the MZ interferometer to provide the phase shift $\phi(\omega)$. We assume the use of frequency-entangled down-converted photons with a 100-nm wide spectrum. As illustrated in Fig. 2, one can identify the narrow peak $R_0 (\tau_1)$ in the center, which is insensitive to dispersion, along with the component $R_{even} (\tau_1,\tau_2)$, which is broadened by even-order dispersion contributions only. This central component of the interferogram illustrates the odd-order dispersion cancellation effect. 
\begin{figure} [ht] \centering \includegraphics [width = 8 cm] {Fig2.eps} \caption{The normalized coincidence rate as a function of $\tau_{1}$ when a 3-mm thick ZnSe sample is placed in the MZ interferometer. The fixed delay $\tau_{2} = 26 $ ps is used. The inset illustrates the odd-order dispersion contribution.} \label {ZnSe} \end{figure} Two symmetric side dips $R_{odd} (\tau_1,\tau_2)$ appear shifted far away from the central peak by the group delay $c_1$ acquired by the entangled photons inside the dispersive material. However, this shift can be controlled by properly adjusting the value of the fixed delay $\tau_{2}$. Such a simple adjustment moves both dips back closer to the center and makes it convenient to observe both dispersion cancellation features in a single scan of the variable delay $\tau_1$ inside the HOM interferometer (see Fig. 2). The appearance of asymmetric fringes on the sides of the two dips is a clear sign of third-order dispersion \cite{book_Diels_Rudolf_ultrashort}. \section{DISCUSSION} This result can also be understood physically by analyzing all possible probability amplitudes that lead to measured coincidence events between $D1$ and $D2$. The MZ interferometer input is a pair of spectrally-entangled photons separated by time delay $\tau_1$; if the leading photon has a high frequency, the lagging photon will have a low frequency, and vice-versa. We consider first the case when no dispersive element is present, so that the MZ interferometer introduces only a time delay $\tau_2$ between its two arms. We assume that $\tau_2$ is much greater than the photon wave packet width, $\tau_c$. 
To explain the dependence of the photon coincidence rate on $\tau_1$, as shown in Fig.\ref{ZnSe}, we consider three processes occurring at the input ports of the last beam splitter in the MZ interferometer: 1) If $|\tau_1|> \tau_c$ and $|\tau_2 - \tau_1| >\tau_c$, then the two photons arriving at the final beam splitter will be distinguishable, so that no quantum interference is exhibited. 2) If $ |\tau_1| \approx |\tau_2|$, so that $|\tau_2-\tau_1| < \tau_c$, then quantum interference can occur when the leading photon takes the long path of the MZ interferometer and the lagging photon takes the short path. The two photons arrive almost simultaneously (within a time $ \tau_c$) at the two ports of the final beam splitter. Then the Hong-Ou-Mandel (HOM) effect is exhibited at the beam splitter, albeit with only 25\% visibility because of the presence of the other possibility that both photons arrive at a single port, leading to a background coincidence rate independent of $\tau_1$. From a different perspective, one may regard this scenario as similar to that obtained in a Franson interferometer \cite{Franson89}, for which photon pairs follow long-long or short-short paths. This scenario explains the components of the coincidence interferogram near $\tau_1 = \pm \tau_2$, and in this case the two spectrally-entangled photons entering separate ports of the final beam splitter lead to quantum interference accompanied by even-order dispersion cancellation. 3) Finally, when $|\tau_1| < \tau_c$, one possibility is that the photons arrive at separate input ports of the final beam splitter. Since these photons are separated by a time $\tau_2 \gg \tau_c$, they are distinguishable and do not contribute to quantum interference. The other possibility is that both photons arrive at the same beam splitter input port. 
In this case, upon transmission or reflection at the beam splitter there are two alternatives for producing a coincidence: transmission of the high-frequency photon and reflection of the low-frequency photon, or vice-versa. This explains the component of the coincidence interferogram near $\tau_1 \approx 0$. In this scenario, which involves two spectrally-entangled photons entering a single port of a beam splitter, quantum interference is accompanied by odd-order dispersion cancellation. We thus see that the quantum interference effects exhibited in scenarios 2) and 3) are accompanied by dispersion cancellation -- although in opposite manners in the two cases. In conclusion, we have demonstrated a new effect in which even- and odd-order dispersion cancellations appear in different regions of a single interferogram. This is achieved via frequency-anticorrelated photons in a new quantum interferometer formed by a variable-delay HOM interferometer followed by a single-input, fixed-delay Mach-Zehnder interferometer. The possibility of independently evaluating even- and odd-order dispersion coefficients of a medium has potential for applications in quantum communication and in quantum metrology of complex dispersive photonic structures. In particular, the ability to accurately characterize higher-order dispersion coefficients is of great interest in the study of flattened-dispersion optical fibers \cite{ferrando2001,reeves2002} and in dispersion engineering with metamaterials \cite{elef2005}. The demonstrated potential of even-order dispersion cancellation has stimulated the search for classical analogues \cite{shapiroOCT06,resch07}. We expect that the scheme presented here will also trigger a similar development of nonlinear optical techniques mimicking this quantum effect. Finally, note that our apparatus may be extended by adding a second Mach-Zehnder interferometer to the unused HOM output port, allowing the investigation of new four-photon interference effects. 
\section{ACKNOWLEDGMENTS} We would like to thank Andrey Antipov from SUNY Buffalo for assistance with numerical simulations. This work was supported by a U. S. Army Research Office (ARO) Multidisciplinary University Research Initiative (MURI) Grant; by the Bernard M. Gordon Center for Subsurface Sensing and Imaging Systems (CenSSIS), an NSF Engineering Research Center; by the Intelligence Advanced Research Projects Activity (IARPA) and ARO through Grant No. W911NF-07-1-0629.
\section{Introduction} Elliptic problems with critical Trudinger-Moser nonlinearities have been widely investigated in the literature. We refer the reader to the survey paper of de Figueiredo et al.\! \cite{MR2772124} for an overview of recent results on Trudinger-Moser type inequalities and related critical problems. A model critical problem of this type is \[ \left\{\begin{aligned} - \Delta_N\, u & = \lambda\, |u|^{N-2} u\, e^{\beta\, |u|^{N'}} && \text{in } \Omega\\[10pt] u & = 0 && \text{on } \bdry{\Omega}, \end{aligned}\right. \] where $\Omega$ is a smooth bounded domain in $\R^N,\, N \ge 2$, $\Delta_N\, u = \divg \left(|\nabla u|^{N-2}\, \nabla u\right)$ is the $N$-Laplacian of $u$, $N' = N/(N - 1)$, and $\lambda, \beta > 0$. This problem is a natural analog of the Br\'{e}zis-Nirenberg problem for the $p$-Laplacian in the borderline case $p = N$, where the critical growth is of exponential type and is governed by the Trudinger-Moser inequality \begin{equation} \label{2} \sup_{u \in W^{1,N}_0(\Omega),\, \norm{u} \le 1} \int_\Omega e^{\alpha_N |u|^{N'}} dx < \infty. \end{equation} Here $W^{1,N}_0(\Omega)$ is the usual Sobolev space with the norm \[ \norm{u} = \left(\int_\Omega |\nabla u|^N\, dx\right)^{1/N}, \] $\alpha_N = N \omega_{N-1}^{1/(N-1)}$, and $\omega_{N-1}$ is the area of the unit sphere in $\R^N$ (see Trudinger \cite{MR0216286} and Moser \cite{MR0301504}). A result of Adimurthi \cite{MR1079983} gives a positive solution of this problem for $\lambda \in (0,\lambda_1)$, where $\lambda_1 > 0$ is the first Dirichlet eigenvalue of $- \Delta_N$ in $\Omega$ (see also do {\'O} \cite{MR1392090}). Theorem 1.4 in de Figueiredo et al.\! \cite{MR1386960,MR1399846} gives a nontrivial solution for $\lambda \ge \lambda_1$ in the semilinear case $N = 2$. More recently, Yang and Perera \cite{MR3616328} obtained a nontrivial solution in the general quasilinear case $N \ge 3$ when $\lambda > \lambda_1$ is not an eigenvalue. 
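As a quick numerical aside (our own, using the standard formula $\omega_{N-1} = 2\pi^{N/2}/\Gamma(N/2)$ for the area of the unit sphere in $\R^N$), the constant $\alpha_N$ is easily evaluated; for $N = 2$ it reduces to Moser's sharp exponent $4\pi$.

```python
import math

def sphere_area(N):
    """omega_{N-1}: area of the unit sphere S^{N-1} in R^N."""
    return 2.0 * math.pi ** (N / 2) / math.gamma(N / 2)

def alpha(N):
    """Trudinger-Moser constant alpha_N = N * omega_{N-1}^{1/(N-1)}."""
    return N * sphere_area(N) ** (1.0 / (N - 1))

print(alpha(2))   # 12.566... = 4*pi, Moser's sharp constant in the plane
print(alpha(3))   # 10.635... = 3*sqrt(4*pi)
```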
In the present paper we study the related semipositone problem \begin{equation} \label{1} \left\{\begin{aligned} - \Delta_N\, u & = \lambda u^{N-1} e^{\beta u^{N'}} - \mu && \text{in } \Omega\\[5pt] u & > 0 && \text{in } \Omega\\[5pt] u & = 0 && \text{on } \bdry{\Omega}, \end{aligned}\right. \end{equation} where $\mu > 0$. Since $- \mu < 0$, $u = 0$ is not a subsolution of this problem, which makes finding a positive solution rather difficult (see Lions \cite{MR678562}). This compounds the usual difficulties arising from the lack of compactness associated with critical growth problems. Our main result here is that this problem has a weak positive solution for all sufficiently small $\mu$ when $\lambda < \lambda_1$. \begin{theorem} \label{Theorem 1} If $\lambda \in (0,\lambda_1)$, then there exists $\mu^\ast > 0$ such that for all $\mu \in (0,\mu^\ast)$, problem \eqref{1} has a weak solution $u_\mu \in C^{1,\alpha}_0(\closure{\Omega})$ for some $\alpha \in (0,1)$. \end{theorem} This result seems to be new even in the semilinear case $N = 2$. The outline of the proof is as follows. We consider the modified problem \begin{equation} \label{3} \left\{\begin{aligned} - \Delta_N\, u & = \lambda f(u^+) - \mu\, g(u) && \text{in } \Omega\\[10pt] u & = 0 && \text{on } \bdry{\Omega}, \end{aligned}\right. \end{equation} where $f(t) = t^{N-1} e^{\beta t^{N'}}$ for $t \ge 0$, $u^+(x) = \max \set{u(x),0}$, and \[ g(t) = \begin{cases} 0, & t \le -1\\[5pt] 1 + t, & -1 < t < 0\\[5pt] 1, & t \ge 0. \end{cases} \] Weak solutions of this problem coincide with critical points of the $C^1$-functional \[ E_\mu(u) = \int_\Omega \left[\frac{|\nabla u|^N}{N} - \lambda F(u^+) + \mu\, G(u)\right] dx, \quad u \in W^{1,N}_0(\Omega), \] where \[ F(t) = \int_0^t f(s)\, ds, \quad t \ge 0, \qquad G(t) = \int_0^t g(s)\, ds, \quad t \in \R. 
\] The functional $E_\mu$ satisfies the \PS{c} condition for all $c \ne 0$ satisfying \[ c < \frac{1}{N} \left(\frac{\alpha_N}{\beta}\right)^{N-1} - \frac{\mu}{2} \vol{\Omega}, \] where $\vol{\cdot}$ denotes the Lebesgue measure in $\R^N$, and it follows from the mountain pass theorem that $E_\mu$ has a uniformly positive critical level below this threshold for compactness for all sufficiently small $\mu > 0$ (see Lemmas \ref{Lemma 2} and \ref{Lemma 6}). This part of the proof is more or less standard. The novelty of the paper lies in the fact that the solution $u_\mu$ of the modified problem \eqref{3} thus obtained is positive, and hence solves our original problem \eqref{1}, if $\mu$ is further restricted. Note that this does not follow from standard arguments based on the maximum principle since the perturbation term $- \mu < 0$. This is precisely the main difficulty in finding positive solutions of semipositone problems as was pointed out in Lions \cite{MR678562}. We will prove that for every sequence $\mu_j > 0,\, \mu_j \to 0$, a subsequence of $u_j = u_{\mu_j}$ is positive in $\Omega$. The idea of the proof is to show that a subsequence of $u_j$ converges in $C^1_0(\closure{\Omega})$ to a solution of the limit problem \[ \left\{\begin{aligned} - \Delta_N\, u & = \lambda f(u^+) && \text{in } \Omega\\[10pt] u & = 0 && \text{on } \bdry{\Omega}. \end{aligned}\right. \] This requires a uniform $C^{1,\alpha}_0(\closure{\Omega})$ estimate of $u_j$ for some $\alpha \in (0,1)$. It is well-known that each $u_j$ belongs to $C^{1,\alpha}_0(\closure{\Omega})$. However, proving that the sequence $\seq{u_j}$ remains bounded in $C^{1,\alpha}_0(\closure{\Omega})$ is a nontrivial task in the critical case. We will obtain the required estimate by proving the following compactness result, which is of independent interest. 
\begin{theorem} \label{Theorem 2} If $\mu_j > 0,\, \mu_j \to \mu \ge 0$, $\seq{u_j} \subset W^{1,N}_0(\Omega)$, and \[ E_{\mu_j}(u_j) \to c, \qquad E_{\mu_j}'(u_j) \to 0 \] for some $c \ne 0$ satisfying \begin{equation} \label{4} c < \frac{1}{N} \left(\frac{\alpha_N}{\beta}\right)^{N-1} - \frac{\mu}{2} \vol{\Omega}, \end{equation} then a subsequence of $\seq{u_j}$ converges to a critical point of $E_\mu$ at the level $c$. \end{theorem} This theorem implies that \[ \sup_j \int_\Omega e^{b\, |u_j|^{N'}} dx < \infty \] for all $b$ (see Lemma \ref{Lemma 8}). This together with the H\"{o}lder inequality implies that $f(u_j^+)$ is bounded in $L^s(\Omega)$ for all $s > 1$, so $\seq{u_j}$ is bounded in $L^\infty(\Omega)$ by Guedda and V{\'e}ron \cite[Proposition 1.3]{MR1009077}. The global regularity result in Lieberman \cite{MR969499} then gives the desired $C^{1,\alpha}_0(\closure{\Omega})$ estimate. Theorem \ref{Theorem 2} is proved in Section \ref{Section 2} and Theorem \ref{Theorem 1} in Section \ref{Section 3}. In closing the introduction we remark that we have confined ourselves to the model problem \eqref{1} only for the sake of simplicity. The arguments given in this paper can easily be adapted to obtain positive solutions of more general semipositone problems with critical Trudinger-Moser nonlinearities. \section{Proof of Theorem \ref{Theorem 2}} \label{Section 2} In this section we prove Theorem \ref{Theorem 2}. First we collect some elementary estimates for easy reference. 
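Alongside these, the proofs below use two elementary properties of the cutoff $g$, namely $G(t) \ge -1/2$ and $tg(t) \le N(G(t) + 1/2)$ for all $t \in \R$; since these are invoked without proof, we record the verification (our own computation, directly from the definition of $g$):

```latex
% Integrating g gives the piecewise form of G:
\[
G(t) =
\begin{cases}
-\dfrac{1}{2}, & t \le -1,\\[5pt]
t + \dfrac{t^{2}}{2}, & -1 < t < 0,\\[5pt]
t, & t \ge 0,
\end{cases}
\]
% so G(t) >= -1/2 on all of R, since t + t^2/2 is increasing on (-1,0)
% with infimum -1/2. Moreover t g(t) <= N (G(t) + 1/2) for all t:
% for t <= -1:    t g(t) = 0 = N (G(t) + 1/2);
% for -1 < t < 0: t g(t) = t + t^2 < 0 <= (N/2)(t + 1)^2 = N (G(t) + 1/2);
% for t >= 0:     t g(t) = t <= N (t + 1/2) = N (G(t) + 1/2).
```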
\begin{lemma} \label{Lemma 1} For all $t \ge 0$, \begin{enumroman} \item \label{Lemma 1 (i)} $F(t) \le \ds{\frac{N - 1}{\beta N}\, \frac{tf(t)}{t^{N/(N-1)}}}$, \item \label{Lemma 1 (ii)} $F(t) \le F(1) + \dfrac{N - 1}{N(N + \beta - 1)}\, tf(t)$, \item \label{Lemma 1 (iii)} $F(t) \le \dfrac{1}{N}\, tf(t)$, \item \label{Lemma 1 (iv)} $F(t) \le \dfrac{1}{N}\, t^N + \dfrac{\beta}{N}\, t^{N^2/(N-1)} e^{\beta t^{N'}}$, \item \label{Lemma 1 (v)} $F(t) \ge \ds{\frac{1}{N}\, t^N + \frac{\beta (N - 1)}{N^2}\, t^{N^2/(N-1)}}$. \end{enumroman} \end{lemma} \begin{proof} \ref{Lemma 1 (i)}. Integrating by parts, \begin{align*} F(t) & \le \frac{N - 1}{\beta N}\, t^{N-N/(N-1)} e^{\beta t^{N'}} - \frac{N - 2}{\beta} \int_0^t s^{N-N/(N-1)-1} e^{\beta s^{N'}}\, ds\\[3pt] & \le \frac{N - 1}{\beta N}\, \frac{t^N e^{\beta t^{N'}}}{t^{N/(N-1)}}\\[3pt] & = \frac{N - 1}{\beta N}\, \frac{tf(t)}{t^{N/(N-1)}}. \end{align*} \ref{Lemma 1 (ii)}. For $t \le 1$, $F(t) \le F(1)$. For $t > 1$, $F(t) = F(1) + \dint_1^t f(s)\, ds$. Integrating by parts, \begin{align*} \int_1^t f(s)\, ds & \le \frac{1}{N}\, t^N e^{\beta t^{N'}} - \frac{\beta}{N - 1} \int_1^t s^{N-1+N/(N-1)} e^{\beta s^{N'}}\, ds\\[3pt] & \le \frac{1}{N}\, tf(t) - \frac{\beta}{N - 1} \int_1^t f(s)\, ds, \end{align*} and hence $\ds{\int_1^t f(s)\, ds \le \frac{N - 1}{N(N + \beta - 1)}\, tf(t)}$. \ref{Lemma 1 (iii)}. Integrating by parts, \begin{align*} F(t) & = \frac{1}{N}\, t^N e^{\beta t^{N'}} - \frac{\beta}{N - 1} \int_0^t s^{N+N/(N-1)-1} e^{\beta s^{N'}}\, ds\\[3pt] & \le \frac{1}{N}\, tf(t). \end{align*} \ref{Lemma 1 (iv)}. Since $e^t \le 1 + te^t$ for all $t \ge 0$, \begin{align*} F(t) & \le \int_0^t s^{N-1} \left(1 + \beta s^{N/(N-1)} e^{\beta s^{N'}}\right) ds\\[3pt] & \le \frac{1}{N}\, t^N \left(1 + \beta t^{N/(N-1)} e^{\beta t^{N'}}\right). \end{align*} \ref{Lemma 1 (v)}. 
Since $e^t \ge 1 + t$ for all $t \ge 0$, \begin{align*} F(t) & \ge \int_0^t s^{N-1} \left(1 + \beta s^{N/(N-1)}\right) ds\\[3pt] & = \frac{1}{N}\, t^N + \frac{\beta (N - 1)}{N^2}\, t^{N^2/(N-1)}. \QED \end{align*} \end{proof} Next we prove the following lemma. \begin{lemma} \label{Lemma 4} If $\seq{u_j}$ is a sequence in $W^{1,N}_0(\Omega)$ converging a.e.\! to $u \in W^{1,N}_0(\Omega)$ and \begin{equation} \label{5} \sup_j \int_\Omega u_j^+ f(u_j^+)\, dx < \infty, \end{equation} then \[ \int_\Omega F(u_j^+)\, dx \to \int_\Omega F(u^+)\, dx. \] \end{lemma} \begin{proof} For $M > 0$, write \[ \int_\Omega F(u_j^+)\, dx = \int_{\{u_j^+ < M\}} F(u_j^+)\, dx + \int_{\{u_j^+ \ge M\}} F(u_j^+)\, dx. \] By Lemma \ref{Lemma 1} \ref{Lemma 1 (i)} and \eqref{5}, \[ \int_{\{u_j^+ \ge M\}} F(u_j^+)\, dx \le \frac{N - 1}{\beta NM^{N/(N-1)}} \int_\Omega u_j^+ f(u_j^+)\, dx = \O\! \left(\frac{1}{M^{N/(N-1)}}\right) \text{ as } M \to \infty. \] Hence \[ \int_\Omega F(u_j^+)\, dx = \int_{\{u_j^+ < M\}} F(u_j^+)\, dx + \O\! \left(\frac{1}{M^{N/(N-1)}}\right), \] and the conclusion follows by first letting $j \to \infty$ and then letting $M \to \infty$. \end{proof} We will also need the following result of Lions \cite{MR834360} (see {\em Remark} I.18 $(i)$). \begin{lemma} \label{Lemma 5} If $\seq{u_j}$ is a sequence in $W^{1,N}_0(\Omega)$ with $\norm{u_j} = 1$ for all $j$ and converging a.e.\! to a nonzero function $u \in W^{1,N}_0(\Omega)$, then \[ \sup_j \int_\Omega e^{b\, |u_j|^{N'}}\, dx < \infty \] for all $b < \alpha_N/(1 - \norm{u}^N)^{1/(N-1)}$. \end{lemma} We are now ready to prove Theorem \ref{Theorem 2}. 
\begin{proof}[Proof of Theorem \ref{Theorem 2}] We have \begin{equation} \label{6} E_{\mu_j}(u_j) = \frac{1}{N} \norm{u_j}^N - \lambda \int_\Omega F(u_j^+)\, dx + \mu_j \int_\Omega G(u_j)\, dx = c + \o(1) \end{equation} and \begin{equation} \label{7} E_{\mu_j}'(u_j)\, u_j = \norm{u_j}^N - \lambda \int_\Omega u_j^+ f(u_j^+)\, dx + \mu_j \int_\Omega u_j\, g(u_j)\, dx = \o(\norm{u_j}). \end{equation} Since \[ \int_\Omega F(u_j^+)\, dx \le F(1) \vol{\Omega} + \frac{N - 1}{N(N + \beta - 1)} \int_\Omega u_j^+ f(u_j^+)\, dx \] by Lemma \ref{Lemma 1} \ref{Lemma 1 (ii)}, $\seq{\mu_j}$ is bounded, and \begin{equation} \label{8} \abs{\int_\Omega u_j\, g(u_j)\, dx} \le \int_\Omega |u_j|\, dx, \qquad \abs{\int_\Omega G(u_j)\, dx} \le \int_\Omega |u_j|\, dx, \end{equation} it follows from \eqref{6} and \eqref{7} that $\seq{u_j}$ is bounded in $W^{1,N}_0(\Omega)$. Hence a renamed subsequence converges to some $u$ weakly in $W^{1,N}_0(\Omega)$, strongly in $L^p(\Omega)$ for all $p \in [1,\infty)$, and a.e.\! in $\Omega$. Moreover, \begin{equation} \label{9} \sup_j \int_\Omega u_j^+ f(u_j^+)\, dx < \infty \end{equation} by \eqref{7} and \eqref{8}, and hence \begin{equation} \label{10} \int_\Omega F(u_j^+)\, dx \to \int_\Omega F(u^+)\, dx \end{equation} by Lemma \ref{Lemma 4}. Clearly, \begin{equation} \label{11} \mu_j \int_\Omega u_j\, g(u_j)\, dx \to \mu \int_\Omega u\, g(u)\, dx, \qquad \mu_j \int_\Omega G(u_j)\, dx \to \mu \int_\Omega G(u)\, dx. \end{equation} We claim that the weak limit $u$ is nonzero. Suppose $u = 0$. Then \begin{equation} \label{12} \int_\Omega F(u_j^+)\, dx \to 0, \qquad \mu_j \int_\Omega u_j\, g(u_j)\, dx \to 0, \qquad \mu_j \int_\Omega G(u_j)\, dx \to 0 \end{equation} by \eqref{10} and \eqref{11}, and hence $c > 0$ and \[ \norm{u_j} \to (Nc)^{1/N} \] by \eqref{6}. Let $(Nc)^{1/(N-1)} < \gamma < \alpha_N/\beta$. Then $\norm{u_j} \le \gamma^{(N-1)/N}$ for all $j \ge j_0$ for some $j_0$. Let $q = \alpha_N/\beta \gamma > 1$. 
By the H\"{o}lder inequality, \[ \int_\Omega u_j^+ f(u_j^+)\, dx \le \left(\int_\Omega |u_j|^{Np}\, dx\right)^{1/p} \left(\int_\Omega e^{q \beta\, |u_j|^{N'}}\, dx\right)^{1/q}, \] where $1/p + 1/q = 1$. The first integral on the right-hand side converges to zero since $u = 0$, while the second integral is bounded for $j \ge j_0$ since $q \beta\, |u_j|^{N'} = \alpha_N\, |\widetilde{u}_j|^{N'}$ with $\widetilde{u}_j = u_j/\gamma^{(N-1)/N}$ satisfying $\norm{\widetilde{u}_j} \le 1$, so \[ \int_\Omega u_j^+ f(u_j^+)\, dx \to 0. \] Then $u_j \to 0$ by \eqref{7} and \eqref{12}, and hence $c = 0$ by \eqref{6} and \eqref{12}, a contradiction. So $u$ is nonzero. Since $E_{\mu_j}'(u_j) \to 0$, \[ \int_\Omega |\nabla u_j|^{N-2}\, \nabla u_j \cdot \nabla v\, dx - \lambda \int_\Omega f(u_j^+)\, v\, dx + \mu_j \int_\Omega g(u_j)\, v\, dx \to 0 \] for all $v \in W^{1,N}_0(\Omega)$. For $v \in C^\infty_0(\Omega)$, an argument similar to that in the proof of Lemma \ref{Lemma 4} using the estimate \[ \abs{\int_{\{u_j^+ \ge M\}} f(u_j^+)\, v\, dx} \le \frac{\sup |v|}{M} \int_\Omega u_j^+ f(u_j^+)\, dx \] and \eqref{9} shows that $\dint_\Omega f(u_j^+)\, v\, dx \to \dint_\Omega f(u^+)\, v\, dx$, and $\mu_j \dint_\Omega g(u_j)\, v\, dx \to \mu \dint_\Omega g(u)\, v\, dx$ since $g$ is bounded, so \[ \int_\Omega |\nabla u|^{N-2}\, \nabla u \cdot \nabla v\, dx = \lambda \int_\Omega f(u^+)\, v\, dx - \mu \int_\Omega g(u)\, v\, dx. \] Then this holds for all $v \in W^{1,N}_0(\Omega)$ by density, and taking $v = u$ gives \begin{equation} \label{13} \norm{u}^N = \lambda \int_\Omega u^+ f(u^+)\, dx - \mu \int_\Omega u\, g(u)\, dx. \end{equation} Next we claim that \begin{equation} \label{14} \int_\Omega u_j^+ f(u_j^+)\, dx \to \int_\Omega u^+ f(u^+)\, dx. \end{equation} We have \begin{equation} \label{15} u_j^+ f(u_j^+) \le |u_j|^N e^{\beta\, |u_j|^{N'}} = |u_j|^N e^{\beta\, \norm{u_j}^{N'} |\widetilde{u}_j|^{N'}}, \end{equation} where $\widetilde{u}_j = u_j/\norm{u_j}$. 
Setting $\kappa = \lambda \dint_\Omega F(u^+)\, dx - \mu \dint_\Omega G(u)\, dx$, \[ \norm{u_j}^N \to N(c + \kappa) \] by \eqref{6}, \eqref{10}, and \eqref{11}, so $\widetilde{u}_j$ converges a.e.\! to $\widetilde{u} = u/[N(c + \kappa)]^{1/N}$. Then \begin{equation} \label{16} \norm{u_j}^N (1 - \norm{\widetilde{u}}^N) \to N(c + \kappa) - \norm{u}^N. \end{equation} By Lemma \ref{Lemma 1} \ref{Lemma 1 (iii)}, \[ \int_\Omega u^+ f(u^+)\, dx \ge N \int_\Omega F(u^+)\, dx, \] and it is easily seen that $tg(t) \le N(G(t) + 1/2)$ for all $t \in \R$ and hence \[ \int_\Omega u\, g(u)\, dx \le N \left(\int_\Omega G(u)\, dx + \half \vol{\Omega}\right), \] so it follows from \eqref{13} that $\norm{u}^N \ge N(\kappa - (\mu/2) \vol{\Omega})$. Hence \begin{equation} \label{17} N(c + \kappa) - \norm{u}^N \le N \bigg(c + \frac{\mu}{2} \vol{\Omega}\bigg) < \left(\frac{\alpha_N}{\beta}\right)^{N-1} \end{equation} by \eqref{4}. We are done if $\norm{\widetilde{u}} = 1$, so suppose $\norm{\widetilde{u}} \ne 1$ and let \[ \frac{[N(c + (\mu/2) \vol{\Omega})]^{1/(N-1)}}{(1 - \norm{\widetilde{u}}^N)^{1/(N-1)}} < \widetilde{\gamma} - 2 \eps < \widetilde{\gamma} < \frac{\alpha_N/\beta}{(1 - \norm{\widetilde{u}}^N)^{1/(N-1)}}. \] Then $\norm{u_j}^{N/(N-1)} \le \widetilde{\gamma} - 2 \eps$ for all $j \ge j_0$ for some $j_0$ by \eqref{16} and \eqref{17}, and \begin{equation} \label{18} \sup_j \int_\Omega e^{\beta\, \widetilde{\gamma}\, |\widetilde{u}_j|^{N'}} dx < \infty \end{equation} by Lemma \ref{Lemma 5}. 
For $M > 0$ and $j \ge j_0$, \eqref{15} then gives \begin{align*} & \phantom{\le \text{ }} \int_{\{u_j^+ \ge M\}} u_j^+ f(u_j^+)\, dx\\[2pt] & \le \int_{\{u_j^+ \ge M\}} u_j^N e^{\beta\, (\widetilde{\gamma} - 2 \eps)\, \widetilde{u}_j^{N'}} dx\\[2pt] & = \norm{u_j}^N \int_{\{u_j^+ \ge M\}} \widetilde{u}_j^N e^{- \eps \beta\, \widetilde{u}_j^{N'}} e^{- \eps \beta\, (u_j/\norm{u_j})^{N'}} e^{\beta\, \widetilde{\gamma}\, \widetilde{u}_j^{N'}} dx\\[2pt] & \le \left(\max_{t > 0}\, t^N e^{- \eps \beta\, t^{N'}}\right) \norm{u_j}^N e^{- \eps \beta\, (M/\norm{u_j})^{N'}} \int_\Omega e^{\beta\, \widetilde{\gamma}\, \widetilde{u}_j^{N'}} dx. \end{align*} The last expression goes to zero as $M \to \infty$ uniformly in $j$ since $\norm{u_j}$ is bounded and \eqref{18} holds, so \eqref{14} now follows as in the proof of Lemma \ref{Lemma 4}. Now it follows from \eqref{7}, \eqref{14}, \eqref{11}, and \eqref{13} that \[ \norm{u_j}^N \to \lambda \int_\Omega u^+ f(u^+)\, dx - \mu \int_\Omega u\, g(u)\, dx = \norm{u}^N, \] and hence $\norm{u_j} \to \norm{u}$. So $u_j \to u$ by the uniform convexity of $W^{1,N}_0(\Omega)$. Clearly, $E_\mu(u) = c$ and $E_\mu'(u) = 0$. \end{proof} \section{Proof of Theorem \ref{Theorem 1}} \label{Section 3} In this section we prove Theorem \ref{Theorem 1}. Recall that $E_\mu$ satisfies the Palais-Smale compactness condition at the level $c \in \R$, or the \PS{c} condition for short, if every sequence $\seq{u_j}$ in $W^{1,N}_0(\Omega)$ such that $E_\mu(u_j) \to c$ and $E_\mu'(u_j) \to 0$, called a \PS{c} sequence, has a convergent subsequence. The following lemma is immediate from the general compactness result in Theorem \ref{Theorem 2}. \begin{lemma} \label{Lemma 2} $E_\mu$ satisfies the {\em \PS{c}} condition for all $c \ne 0$ satisfying \[ c < \frac{1}{N} \left(\frac{\alpha_N}{\beta}\right)^{N-1} - \frac{\mu}{2} \vol{\Omega}. 
\] \end{lemma} First we show that $E_\mu$ has a uniformly positive mountain pass level below the threshold for compactness given in Lemma \ref{Lemma 2} for all sufficiently small $\mu > 0$. We may assume that $0 \in \Omega$ without loss of generality. Take $r > 0$ so small that $\closure{B_r(0)} \subset \Omega$ and let \[ v_j(x) = \frac{1}{\omega_{N-1}^{1/N}}\, \begin{cases} (\log j)^{(N-1)/N}, & |x| \le r/j\\[10pt] \dfrac{\log (r/|x|)}{(\log j)^{1/N}}, & r/j < |x| < r\\[10pt] 0, & |x| \ge r. \end{cases} \] It is easily seen that $v_j \in W^{1,N}_0(\Omega)$ with $\norm{v_j} = 1$ and \begin{equation} \label{20} \int_\Omega v_j^N\, dx = \O(1/\log j) \quad \text{as } j \to \infty. \end{equation} \begin{lemma} \label{Lemma 6} There exist $\mu_0, \rho, c_0 > 0$, $j_0 \ge 2$, $R > \rho$, and $\vartheta < \dfrac{1}{N} \left(\dfrac{\alpha_N}{\beta}\right)^{N-1}$ such that the following hold for all $\mu \in (0,\mu_0)$: \begin{enumroman} \item \label{Lemma 6 (i)} $\norm{u} = \rho \implies E_\mu(u) \ge c_0$, \item \label{Lemma 6 (ii)} $E_\mu(Rv_{j_0}) \le 0$, \item \label{Lemma 6 (iii)} denoting by $\Gamma = \set{\gamma \in C([0,1],W^{1,N}_0(\Omega)) : \gamma(0) = 0,\, \gamma(1) = Rv_{j_0}}$ the class of paths joining the origin to $Rv_{j_0}$, \begin{equation} \label{21} c_0 \le c_\mu := \inf_{\gamma \in \Gamma}\, \max_{u \in \gamma([0,1])}\, E_\mu(u) \le \vartheta + C_\lambda\, \mu^{N'}, \end{equation} where $C_\lambda = (1 - 1/N) \vol{\Omega}/\lambda^{1/(N-1)}$, \item \label{Lemma 6 (iv)} $E_\mu$ has a critical point $u_\mu$ at the level $c_\mu$. \end{enumroman} \end{lemma} \begin{proof} Set $\rho = \norm{u}$ and $\widetilde{u} = u/\rho$. 
By Lemma \ref{Lemma 1} \ref{Lemma 1 (iv)} and since \[ \lambda_1 = \inf_{u \in W^{1,N}_0(\Omega) \setminus \set{0}}\, \frac{\dint_\Omega |\nabla u|^N\, dx}{\dint_\Omega |u|^N\, dx}, \] we have \begin{align*} \int_\Omega F(u^+)\, dx & \le \int_\Omega \left[\frac{1}{N}\, |u|^N + \frac{\beta}{N}\, |u|^{N^2/(N-1)} e^{\beta\, |u|^{N'}}\right] dx\\[2pt] & \le \frac{1}{N \lambda_1} \norm{u}^N + \frac{\beta}{N} \pnorm[2N^2/(N-1)]{u}^{N^2/(N-1)} \left(\int_\Omega e^{2 \beta\, |u|^{N'}} dx\right)^{1/2}\\[2pt] & = \frac{\rho^N}{N \lambda_1} + \frac{\beta \rho^{N^2/(N-1)}}{N} \pnorm[2N^2/(N-1)]{\widetilde{u}}^{N^2/(N-1)} \left(\int_\Omega e^{2 \beta \rho^{N'} |\widetilde{u}|^{N'}} dx\right)^{1/2}\\[2pt] & = \frac{\rho^N}{N \lambda_1} + \O(\rho^{N^2/(N-1)}) \quad \text{as } \rho \to 0 \end{align*} since $W^{1,N}_0(\Omega) \hookrightarrow L^{2N^2/(N-1)}(\Omega)$ and $\dint_\Omega e^{2 \beta \rho^{N'} |\widetilde{u}|^{N'}} dx$ is bounded by \eqref{2} when $2 \beta \rho^{N'} \le \alpha_N$. Since $G(t) \ge -1/2$ for all $t \in \R$, then \[ E_\mu(u) \ge \frac{1}{N} \left(1 - \frac{\lambda}{\lambda_1}\right) \rho^N + \O(\rho^{N^2/(N-1)}) - \frac{\mu}{2} \vol{\Omega}. \] Since $\lambda < \lambda_1$, \ref{Lemma 6 (i)} follows from this for sufficiently small $\rho, \mu, c_0 > 0$. Since $v_j \ge 0$, \[ E_\mu(tv_j) = \int_\Omega \left[\frac{t^N}{N}\, |\nabla v_j|^N - \lambda F(tv_j) + \mu tv_j\right] dx \] for $t \ge 0$. By the H\"{o}lder and Young's inequalities, \[ \mu t \int_\Omega v_j\, dx \le \mu t \vol{\Omega}^{1-1/N} \left(\int_\Omega v_j^N\, dx\right)^{1/N} \le C_\lambda\, \mu^{N'} + \frac{\lambda t^N}{N} \int_\Omega v_j^N\, dx, \] so \[ E_\mu(tv_j) \le H_j(t) + C_\lambda\, \mu^{N'}, \] where \[ H_j(t) = \frac{t^N}{N} \left(1 + \lambda \int_\Omega v_j^N\, dx\right) - \lambda \int_\Omega F(tv_j)\, dx. 
\] By Lemma \ref{Lemma 1} \ref{Lemma 1 (v)}, \[ \int_\Omega F(tv_j)\, dx \ge \frac{t^N}{N} \int_\Omega v_j^N\, dx + \frac{\beta (N - 1)}{N^2}\, t^{N^2/(N-1)} \int_\Omega v_j^{N^2/(N-1)}\, dx, \] so \begin{equation} \label{22} H_j(t) \le \frac{t^N}{N} - \frac{\lambda \beta (N - 1)}{N^2}\, t^{N^2/(N-1)} \int_\Omega v_j^{N^2/(N-1)}\, dx \to - \infty \quad \text{as } t \to \infty. \end{equation} So to prove \ref{Lemma 6 (ii)} and \ref{Lemma 6 (iii)}, it suffices to show that $\exists j_0 \ge 2$ such that \[ \vartheta := \sup_{t \ge 0}\, H_{j_0}(t) < \frac{1}{N} \left(\frac{\alpha_N}{\beta}\right)^{N-1}. \] Suppose $\sup_{t \ge 0}\, H_j(t) \ge (\alpha_N/\beta)^{N-1}/N$ for all $j$. Since $H_j(t) \to - \infty$ as $t \to \infty$ by \eqref{22}, there exists $t_j \ge 0$ such that \begin{equation} \label{23} H_j(t_j) = \frac{t_j^N}{N}\, (1 + \eps_j) - \lambda \int_\Omega F(t_j v_j)\, dx = \sup_{t \ge 0}\, H_j(t) \ge \frac{1}{N} \left(\frac{\alpha_N}{\beta}\right)^{N-1} \end{equation} and \begin{equation} \label{24} H_j'(t_j) = t_j^{N-1} \left(1 + \eps_j - \lambda \int_\Omega v_j^N e^{\beta t_j^{N'}\! v_j^{N'}} dx\right) = 0, \end{equation} where \[ \eps_j = \lambda \int_\Omega v_j^N\, dx. \] Since $F(t) \ge 0$ for all $t \ge 0$, \eqref{23} gives \[ \beta t_j^{N'} \ge \frac{\alpha_N}{1 + \eps_j}, \] and then \eqref{24} gives \begin{equation} \label{25} \frac{1 + \eps_j}{\lambda} = \int_\Omega v_j^N e^{\beta t_j^{N'}\! v_j^{N'}} dx \ge \int_{B_{r/j}(0)} v_j^N e^{\alpha_N v_j^{N'}/(1 + \eps_j)}\, dx = \frac{r^N}{N}\, \frac{(\log j)^{N-1}}{j^{N \eps_j/(1 + \eps_j)}}. \end{equation} By \eqref{20}, $\eps_j \to 0$ and \[ j^{N \eps_j/(1 + \eps_j)} \le j^{N \eps_j} = e^{N \eps_j \log j} = \O(1), \] so \eqref{25} is impossible for large $j$. 
By \ref{Lemma 6 (i)}--\ref{Lemma 6 (iii)}, $E_\mu$ has the mountain pass geometry and the mountain pass level $c_\mu$ satisfies \[ 0 < c_\mu \le \vartheta + C_\lambda\, \mu^{N'} < \frac{1}{N} \left(\frac{\alpha_N}{\beta}\right)^{N-1} - \frac{\mu}{2} \vol{\Omega} \] for all sufficiently small $\mu > 0$, so $E_\mu$ satisfies the \PS{c_\mu} condition by Lemma \ref{Lemma 2}. So $E_\mu$ has a critical point $u_\mu$ at this level by the mountain pass theorem. \end{proof} Now we show that $u_\mu$ is positive in $\Omega$, and hence a weak solution of problem \eqref{1}, for all sufficiently small $\mu \in (0,\mu_0)$. It suffices to show that for every sequence $\mu_j > 0,\, \mu_j \to 0$, a subsequence of $u_j = u_{\mu_j}$ is positive in $\Omega$. By \eqref{21}, a renamed subsequence of $c_{\mu_j}$ converges to some $c$ satisfying \[ 0 < c < \frac{1}{N} \left(\frac{\alpha_N}{\beta}\right)^{N-1}. \] Then a renamed subsequence of $\seq{u_j}$ converges in $W^{1,N}_0(\Omega)$ to a critical point $u$ of $E_0$ at the level $c$ by Theorem \ref{Theorem 2}. Since $c > 0$, $u$ is nontrivial. \begin{lemma} \label{Lemma 7} A further subsequence of $\seq{u_j}$ is bounded in $C^{1,\alpha}_0(\closure{\Omega})$ for some $\alpha \in (0,1)$. \end{lemma} \begin{proof} Since \[ \left\{\begin{aligned} - \Delta_N\, u_j & = \lambda f(u_j^+) - \mu_j\, g(u_j) && \text{in } \Omega\\[10pt] u_j & = 0 && \text{on } \bdry{\Omega}, \end{aligned}\right. \] it suffices to show that $\seq{u_j}$ is bounded in $L^\infty(\Omega)$ by the global regularity result of Lieberman \cite{MR969499}, and this will follow from Proposition 1.3 of Guedda and V{\'e}ron \cite{MR1009077} if we show that $f(u_j^+)$ is bounded in $L^s(\Omega)$ for some $s > 1$. Let $s > 1$. By the H\"{o}lder inequality, \[ \left(\int_\Omega |f(u_j^+)|^s\, dx\right)^{1/s} \le \left(\int_\Omega |u_j|^p\, dx\right)^{(N-1)/p} \left(\int_\Omega e^{q \beta\, |u_j|^{N'}} dx\right)^{1/q}, \] where $(N - 1)/p + 1/q = 1/s$. 
The first integral on the right-hand side is bounded since $W^{1,N}_0(\Omega) \hookrightarrow L^p(\Omega)$, and so is the second integral by Lemma \ref{Lemma 8} below. \end{proof} \begin{lemma} \label{Lemma 8} If $\seq{u_j}$ is a convergent sequence in $W^{1,N}_0(\Omega)$, then \[ \sup_j \int_\Omega e^{b\, |u_j|^{N'}} dx < \infty \] for all $b$. \end{lemma} \begin{proof} The case $b \le 0$ is trivial, so suppose $b > 0$ and let $u \in W^{1,N}_0(\Omega)$ be the limit of $\seq{u_j}$. We have \[ |u_j|^{N'} \le (|u| + |u_j - u|)^{N'} \le 2^{N'} \big(|u|^{N'} + |u_j - u|^{N'}\big), \] so \[ \int_\Omega e^{b\, |u_j|^{N'}} dx \le \left(\int_\Omega e^{2^{N'+1} b\, |u|^{N'}} dx\right)^{1/2} \left(\int_\Omega e^{2^{N'+1} b\, |u_j - u|^{N'}} dx\right)^{1/2}. \] The first integral on the right-hand side is finite, and the second integral equals \[ \int_\Omega e^{2^{N'+1} b\, \norm{u_j - u}^{N'} |v_j|^{N'}} dx, \] where $v_j = (u_j - u)/\norm{u_j - u}$. Since $\norm{v_j} = 1$ and $\norm{u_j - u} \to 0$, this integral is bounded by \eqref{2}. \end{proof} By Lemma \ref{Lemma 7}, a renamed subsequence of $u_j$ converges to $u$ in $C^1_0(\closure{\Omega})$. Since $u$ is a nontrivial weak solution of the problem \[ \left\{\begin{aligned} - \Delta_N\, u & = \lambda\, (u^+)^{N-1} e^{\beta\, (u^+)^{N'}} && \text{in } \Omega\\[10pt] u & = 0 && \text{on } \bdry{\Omega}, \end{aligned}\right. \] $u > 0$ in $\Omega$ and its interior normal derivative $\partial u/\partial \nu > 0$ on $\bdry{\Omega}$ by the strong maximum principle and the Hopf lemma for the $p$-Laplacian (see V{\'a}zquez \cite{MR768629}). Since $u_j \to u$ in $C^1_0(\closure{\Omega})$, then $u_j > 0$ in $\Omega$ for all sufficiently large $j$. This concludes the proof of Theorem \ref{Theorem 1}. \subsection*{Acknowledgement} The second author was supported by the 2018-0340 Research Fund of the University of Ulsan. \def\cdprime{$''$}
\section{Introduction} Major advances in metrology and precision spectroscopy were led by the introduction \cite{Udem:99, *Reichert199959} and development \cite{PhysRevLett.82.3568, *PhysRevLett.84.3232, *Jones28042000, *PhysRevLett.84.5102, *PhysRevLett.84.5496, *PhysRevLett.85.2264} of optical frequency combs \cite{Nature.416.233, *RevModPhys.75.325}. The spectrum of a frequency comb consists of a series of equally spaced teeth, i.e., modes of a train of femtosecond pulses spaced by the repetition frequency of a mode-locked laser. By counting the number of teeth between an unknown optical frequency and an optical reference line, this comb is used as a fine ruler to measure an optical frequency instead of the corresponding wavelength, which can be determined much less precisely. This allows one to reach relative accuracies up to $10^{-18}$ \cite{1402-4896-86-6-068101}. By precisely counting optical oscillations, e.g., in trapped-atom and \mbox{-ion} standards, optical frequency combs play a crucial role in the realization of all-optical atomic clocks \cite{RevModPhys.78.1279, Diddams03082001}. In light of the success of optical-frequency-comb metrology, it is desirable {as an ultimate aim} to render this technology available for extreme ultraviolet (XUV) and \mbox{x-ray} frequencies \cite{RevModPhys.78.1297}. \mbox{X-ray} frequency combs promise to enable precise measurements of high-energy transitions paralleling the accuracy achieved for optical frequencies, with an improvement of several orders of magnitude. This is anticipated to allow, to name but a few examples, even more stringent experimental tests of quantum electrodynamics and astrophysical models \cite{BernittNature}, and the search for the variability of the fine-structure constant, to which transitions in highly charged ions are predicted to be more sensitive \cite{PhysRevLett.106.210802}. One may also {eventually} envision ultraprecise \mbox{x-ray} atomic clocks. 
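The tooth-counting idea above can be sketched in a few lines. In the following toy example all numbers (repetition rate, carrier-envelope offset, unknown line, tooth index) are hypothetical placeholders chosen for illustration, not values from the text:

```python
# Toy frequency-comb "ruler": tooth n sits at f_n = f_ceo + n * f_rep,
# and both f_ceo and f_rep are radio frequencies measurable with counters.
f_rep = 100e6    # repetition rate, 100 MHz (assumed)
f_ceo = 20e6     # carrier-envelope-offset frequency (assumed)

def tooth(n):
    return f_ceo + n * f_rep

f_unknown = 350.0173e12                  # hypothetical optical line (~350 THz)
n = round((f_unknown - f_ceo) / f_rep)   # tooth index, found by counting teeth
f_beat = f_unknown - tooth(n)            # measured beat note against tooth n
f_reconstructed = tooth(n) + f_beat      # optical frequency from RF quantities

assert abs(f_reconstructed - f_unknown) < 1.0   # sub-Hz agreement
```

In an actual measurement $f_{\mathrm{beat}}$ is the quantity read off a photodetector, so the reconstructed optical frequency inherits the accuracy of the radio-frequency references.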
XUV frequency combs have been generated via intracavity high-order harmonic generation (HHG) \cite{PhysRevLett.94.193201, *Nature.436.234}. While in conventional HHG an optical pulse in a gas produces a spectrum of odd harmonics of the optical frequency, in intracavity HHG a train of coherent optical pulses generates a spectrum which in each harmonic line is structured into a fine comb. Based on this scheme, Ref.~\cite{Nature.482.68} reported the observation of frequency combs at wavelengths of $\sim$\,$40\,\unit{nm}$ (photon energy of $\sim$\,$30\,\unit{eV}$). The required optical peak intensity of $\sim$\,$10^{14}\,\unit{W/cm^2}$ was obtained with a femtosecond enhancement cavity. Yet relativistic effects limit the range in which HHG operates efficiently \cite{Kohler2012159}, {i.e., where \mbox{x-ray} frequency combs are presently advisable with HHG-based methods. Investigations of alternative schemes are, therefore, timely.} Here, we put forward a scheme for coherent \mbox{x-ray} pulse shaping to imprint the structure of an optical frequency comb onto the resonance fluorescence spectrum that is emitted on an \mbox{x-ray} transition. We refer to previous investigations of many-color schemes of resonance fluorescence in multi-level systems \cite{PhysRevA.42.1630, *PhysRevA.43.3748, PhysRevLett.76.388, *PhysRevLett.77.3995, *PhysRevLett.81.293, *PhysRevLett.83.1307, *PhysRevLett.91.123601, *Kiffner_review, *PhysRevLett.106.033001} and to examples of \mbox{x-ray} pulse shaping such as, e.g., studies of electromagnetically induced transparency for x~rays \cite{PhysRevLett.98.253001, *Glover_Nature}, for which an optical field is used to control \mbox{x-ray} absorption. {In our scheme, the imprinting of the optical frequency comb onto the \mbox{x-ray} spectrum takes advantage of a driving \mbox{x-ray} field influencing the precision with which the position of the peaks in the \mbox{x-ray} frequency comb is known. 
This comb is valuable as a relative ``ruler,'' e.g., to bridge an energy difference between an \mbox{x-ray} reference level and an unknown \mbox{x-ray} frequency at high energies for which, owing to the inefficiency of HHG at high harmonic orders, \mbox{x-ray} frequency-comb generation via HHG-based methods would encounter significant obstacles \cite{Kohler2012159}.} The paper is structured as follows. In Sec.~\ref{Theoretical model} we present our theory in terms of an ensemble of four-level systems used to model the driven particles. We analyze the properties of the coherent and incoherent parts of the spectrum of resonance fluorescence related to many-particle effects and to periodic driving. The four-level scheme is applied in Sec.~\ref{Results and discussion} to He-like $\mathrm{Be}^{2+}$ ions to predict a frequency comb in the coherent part of the spectrum centered on the atomic transition at $\sim$\,$120\unit{eV}$. Section~\ref{Conclusion} concludes the paper. Atomic units are used throughout unless otherwise stated. \section{Theoretical model} \label{Theoretical model} \subsection{Four-level model} In this section, we present the four-level model used throughout the paper. The experimental setup is displayed in Fig.~\ref{fig:TheModel}(a). An \mbox{x-ray} field $\boldsymbol{\mathcal{E}}_{\mathrm{X}}(\boldsymbol{r},\,t)$, an optical continuous-wave (cw) auxiliary field $\boldsymbol{\mathcal{E}}_{\mathrm{L}}(\boldsymbol{r},\,t)$, and a periodic train of optical pulses $\boldsymbol{\mathcal{E}}_{\mathrm{C}}(\boldsymbol{r},\,t)$ of an optical-frequency-comb laser, irradiate an ensemble of ions. 
The fields copropagate in the $y$~direction; at time $t$ and position $\boldsymbol{r}$, for $q\in\{\mathrm{X,\,L,\,C}\}$, the incident fields are given by \begin{equation} \boldsymbol{\mathcal{E}}_{q}(\boldsymbol{r},\,t) = \mathcal{E}_{q,0}({ t'})\cos{[\omega_{q}t + \varphi_{q}({ t'}) + \varphi_{q,0} - \boldsymbol{k}_{q}\cdot\boldsymbol{r}]}\,\hat{\boldsymbol{e}}_q, \label{eq:electricfields} \end{equation} with envelope $\mathcal{E}_{q,0}(t)$, carrier frequency $\omega_{q}$, phase $\varphi_{q}(t)$, carrier-envelope phase (CEP) $\varphi_{q,0}$, wavevector $\boldsymbol{k}_{q} = (\omega_{q}/c)\,\hat{\boldsymbol{e}}_{y}$, and linear polarization vector $\hat{\boldsymbol{e}}_q$. The intensity is \cite{diels2006ultrashort} \begin{equation} I_{q} = \frac{|\mathcal{E}_{q,0}|^2}{ 8\pi\alpha}, \label{eq:intensita} \end{equation} with the speed of light {in vacuum} $c$ and the fine-structure constant $\alpha = 1/c$. Furthermore, {$t' = t-y/v_{q}$ is the retarded time due to the propagation of the pulses along the coordinate $y$, with $v_q$ being the group velocity of the $q$th field. We assume that the \mbox{x-ray} field $\boldsymbol{\mathcal{E}}_{\mathrm{X}}(\boldsymbol{r},\,t)$ and the optical cw auxiliary field $\boldsymbol{\mathcal{E}}_{\mathrm{L}}(\boldsymbol{r},\,t)$ are linearly polarized in the $z$ direction, $\hat{\boldsymbol{e}}_{\mathrm{X}} = \hat{\boldsymbol{e}}_{\mathrm{L}} = \hat{\boldsymbol{e}}_z $, while the train of pulses $\boldsymbol{\mathcal{E}}_{\mathrm{C}}(\boldsymbol{r},\,t)$ giving rise to the optical frequency comb is linearly polarized in the $x$ direction, $\hat{\boldsymbol{e}}_{\mathrm{C}} = \hat{\boldsymbol{e}}_x $, where $\hat{\boldsymbol{e}}_x$, $\hat{\boldsymbol{e}}_y$, and $\hat{\boldsymbol{e}}_z$ are unit vectors in the $x$, $y$, and $z$ directions. 
{Finally, for $q\in\{\mathrm{X,\,L,\,C}\}$ we set the CEPs $\varphi_{q,0} = 0$ and we assume a dilute-gas setting, such that the phase velocity of all electric fields, to a very good approximation, equals $c$, resulting in good phase matching \cite{arXiv:1203.4127, PhysRevA.78.043409, JPSJ.30.518}.} \begin{figure}[t] \centering% \includegraphics[width=\columnwidth, keepaspectratio]{TheModelTotal4LevelsBis.eps} \caption{(Color online) (a) An ensemble of ions is driven by {narrow-bandwidth} x~rays ($\boldsymbol{k}_{\mathrm{X}}$, brown), an auxiliary optical laser ($\boldsymbol{k}_{\mathrm{L}}$, red), both linearly polarized along the $z$~direction, and an optical frequency comb ($\boldsymbol{k}_{\mathrm{C}}$, green), linearly polarized along the $x$~direction. All fields propagate in the $y$~direction. The resonance fluorescence spectrum ($\boldsymbol{k}_{\mathrm{F}}$, blue) exhibits an induced \mbox{x-ray} frequency-comb structure. (b) Four-level scheme of $\mathrm{He}$-like ions interacting with the three light fields.} \label{fig:TheModel} \end{figure} The bandwidth of $\boldsymbol{\mathcal{E}}_{\mathrm{L}}(\boldsymbol{r},\,t)$ is so small that it can be entirely neglected. We initially assume that $\boldsymbol{\mathcal{E}}_{\mathrm{X}}(\boldsymbol{r},\,t)$ has constant amplitude, $\mathcal{E}_{\mathrm{X},0}(t) = \bar{\mathcal{E}}_{\mathrm{X},0}$, and phase, $\varphi_{\mathrm{X}}(t) = 0$; the effect of the \mbox{x-ray} bandwidth is later taken into account by a stochastic approach \cite{PhysRevLett.37.1383}. 
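For orientation, Eq.~(\ref{eq:intensita}) is easily inverted numerically to convert a laboratory intensity into a peak field strength in atomic units. The sketch below uses standard atomic-unit conversion factors as the only external inputs (they are not values from the text):

```python
import math

ALPHA = 1 / 137.035999     # fine-structure constant
I_AU_W_CM2 = 6.436409e15   # one atomic unit of intensity, in W/cm^2

def peak_field_au(intensity_w_cm2):
    """Invert Eq. (intensity): E_0 = sqrt(8*pi*alpha*I) in atomic units."""
    return math.sqrt(8 * math.pi * ALPHA * intensity_w_cm2 / I_AU_W_CM2)

# Sanity check: ~3.50945e16 W/cm^2 corresponds to a peak field of 1 a.u.
assert abs(peak_field_au(3.50945e16) - 1.0) < 1e-4
```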
Finally, the optical frequency comb has, to a very good approximation, constant phase $\varphi_{\mathrm{C}}(t) \equiv 0$ and a periodic amplitude \begin{equation} \mathcal{E}_{\mathrm{{C}},0}(t) = \sum_{k=-\infty}^{+\infty} A_k\,\mathrm{e}^{- \mathrm{i} \frac{2\pi k}{T_{\mathrm{p}}}t}, \end{equation} with {repetition period $T_{\mathrm{p}}$ and} Fourier coefficients \begin{equation} A_k = \frac{1}{T_{\mathrm{p}}}\,\int_{0}^{T_{\mathrm{p}}} \mathcal{E}_{\mathrm{{C}},0}(t) \, \mathrm{e}^{\mathrm{i} \frac{2\pi k}{T_{\mathrm{p}}}t}\,\mathop{}\!\mathrm{d} t. \end{equation} In other words, the envelope can be written as the following sum of identical pulses, \begin{subeqnarray} \label{eq:opticalfrequencycomb} \mathcal{E}_{\mathrm{{C},0}}(t) = \mathcal{E}_{\mathrm{{C},max}}\,\sum_{n = -\infty}^{+\infty} \mathcal{G}(t - nT_{\mathrm{p}}) , \\ \mathcal{G}(t) = \cos^2\biggl[ \frac{\pi}{T_{\mathrm{d}}} \Bigl(t - \frac{T_{\mathrm{d}}}{2}\Bigr)\biggr]\, \mathrm{R}\biggl[ \frac{1}{T_{\mathrm{d}}} \Bigl(t - \frac{T_{\mathrm{d}}}{2}\Bigr)\biggr] , \end{subeqnarray} where, from Eq.~(\ref{eq:intensita}), \begin{equation} \mathcal{E}_{\mathrm{{C},max}} = \sqrt{8\pi\alpha I_{\mathrm{{C},max}}} \end{equation} is the maximum electric-field strength of the train of optical pulses, associated with the maximum intensity $I_{\mathrm{{C},max}}$, and the rectangular function $\mathrm{R}(x)$ is defined in terms of the Heaviside step function $\theta(x)$ as \begin{equation} \mathrm{R}(x) = \theta(x+1/2) - \theta(x-1/2). \end{equation} The full width at half maximum (FWHM) of $\mathcal{G}^2(t)$ is \cite{0953-4075-42-23-235101} \begin{equation} T_{\mathrm{FWHM}} = 2\,T_{\mathrm{d}}\arccos{(\sqrt[4]{1/2})}/\pi, \end{equation} with $T_{\mathrm{d}}$ being the interval in which $\mathcal{G}(t)$ is different from 0, $T_{\mathrm{d}}\ll T_{\mathrm{p}}$. 
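The quoted FWHM relation for the $\cos^2$ envelope can be verified numerically; a minimal sketch with an arbitrary pulse duration $T_{\mathrm{d}}$:

```python
import math

T_D = 1.0   # single-pulse duration (arbitrary units)

def envelope(t):
    """One cos^2 pulse G(t), nonzero only for 0 <= t <= T_D."""
    if 0.0 <= t <= T_D:
        return math.cos(math.pi / T_D * (t - T_D / 2))**2
    return 0.0

# Locate the half-maximum crossings of the intensity profile G(t)^2.
n = 200_000
above = [k * T_D / n for k in range(n + 1) if envelope(k * T_D / n)**2 >= 0.5]
fwhm_numeric = above[-1] - above[0]

fwhm_formula = 2 * T_D * math.acos(0.5**0.25) / math.pi   # text's expression
assert abs(fwhm_numeric - fwhm_formula) < 2e-5            # ~0.364 * T_D
```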
The electric fields $\boldsymbol{\mathcal{E}}_{\mathrm{X}}(\boldsymbol{r},\,t)$, $\boldsymbol{\mathcal{E}}_{\mathrm{L}}(\boldsymbol{r},\,t)$, and $\boldsymbol{\mathcal{E}}_{\mathrm{C}}(\boldsymbol{r},\,t)$, drive isolated, electric-dipole (E1) transitions in the four-level system of Fig.~\ref{fig:TheModel}(b), where level $i$ has energy $\omega_i$ and the energy difference between levels $i$ and $j$ is given by \begin{equation} \omega_{ij} = \omega_i - \omega_j, \end{equation} with $i,\,j\in S = \{1,\,2_{0,\pm},\,3,\,4_{0,\pm}\}$. The four-level model is applied to $\mathrm{He}$-like ions, with transition energies in the optical and \mbox{x-ray} ranges \cite{PhysRevA.81.022507}. In this case, $|1\rangle$ represents the ground state $1s^2$~$^1S_0$, with total-angular-momentum quantum number $J=0$ and positive parity. The states $|2_-\rangle$, $|2_0\rangle$, and $|2_+\rangle$, constitute the level $1s 2p$~$^3P_1$, with $J=1$ and negative parity. The quantum number $M_J$ associated with the $z$ component of the total-angular-momentum operator is, in the previous states, respectively equal to $-1$, $0$, and $1$. Furthermore, the state $|3\rangle$ is associated with the positive-parity level $1s 2s$~$^1S_0$, with $J=0$, whereas the three states $|4_-\rangle$, $|4_0\rangle$, and $|4_+\rangle$, represent the level $1s 2p$~$^1P_1$, with $J=1$ and negative parity \footnote{In $\mathrm{He}$-like ions, level $3$ has higher (lower) energy than level $2$ for a nuclear charge $Z\geq 7$ ($Z< 7$) \cite{PhysRevA.81.022507}.}. Other levels, such as $1s 2s$~$^3S_1$, $1s 2p$~$^3P_0$, and $1s 2p$~$^3P_2$, are not included in our description because they do not couple via an E1 interaction to the levels in Fig.~\ref{fig:TheModel}(b) and spontaneous-decay times from higher-energy levels to them are orders of magnitude larger than the repetition period $T_{\mathrm{p}}$ of the optical frequency comb.
All three excited levels, i.e., $2$, $3$, and $4$, are nonautoionizing, since, in all configurations, the passive electron remains in the $1s$ orbital, implying that the levels are energetically below the autoionizing threshold \cite{PhysRevA.81.022507}. Furthermore, other norm-nonconserving processes such as single-photon ionization due to $\boldsymbol{\mathcal{E}}_{\mathrm{X}}(\boldsymbol{r},\,t)$ \cite{Als-Nielsen:EM-01, *LANL} and multi-photon ionization due to $\boldsymbol{\mathcal{E}}_{\mathrm{C}}(\boldsymbol{r},\,t)$ \cite{1063-7869-47-9-R01, *Perelomov1, *Perelomov3} are safely negligible for the moderate, near-resonant fields employed here. \subsection{Hamiltonian and equations of motion} \label{Electric-dipole Hamiltonians} The interaction of the electric fields $\boldsymbol{\mathcal{E}}_{q}(\boldsymbol{r},\,t)$ in Eq.~(\ref{eq:electricfields}), with $q\in\{\mathrm{X,\,L,\,C}\}$, and $N$ ions, respectively at positions $\boldsymbol{r}_n$, with $n\in\{1,\,\ldots,\,N\} $, is described by the Hamiltonian \begin{equation} \hat{H} = \hat{H}_{0} + \sum_{q \in\{\mathrm{X,\,L,\,C}\}}\hat{H}_{\mathrm{E1},q}, \label{eq:totalHamiltonian} \end{equation} where \begin{equation} \hat{H}_0 = \sum_{n=1}^N \sum_{i\in S}\omega_{i}\,\hat{\sigma}_{ii}^n \end{equation} is the atomic electronic structure Hamiltonian and \begin{equation} \hat{H}_{\mathrm{E1},q} = \sum_{n=1}^N \hat{H}^n_{\mathrm{E1},q} = \sum_{n=1}^N\hat{\boldsymbol{d}}_n\cdot \boldsymbol{\mathcal{E}}_q(\boldsymbol{r}_n,\,t) \label{eq:E1Hamiltonian} \end{equation} are the E1 interaction Hamiltonians \cite{Scully:QuantumOptics, johnson2007atomic}. 
In the previous equations, \begin{equation} \hat{\sigma}_{ij}^n = |i\rangle_n\,_n\langle j| \end{equation} are the ladder operators, where $i,\,j \in S$ and $n\in\{1,\ldots,\,N\}$, while $\hat{\boldsymbol{d}}_n$ represents the dipole operator of an ion at position $\boldsymbol{r}_n$, \begin{equation} \hat{\boldsymbol{d}}_n = \sum_{i,j\in S} \boldsymbol{d}_{ij,n} \, \hat{\sigma}^n_{ij} , \label{eq:atomicoperator} \end{equation} with matrix elements \begin{equation} \boldsymbol{d}_{ij,n} =\,_n\langle i| \hat{\boldsymbol{d}}_n |j\rangle_n. \end{equation} Since the dipole moment is a property of the ion species, i.e., of the atomic number and the charge of the ion, the matrix elements $\boldsymbol{d}_{ij,n}= \boldsymbol{d}_{ij}$ do not explicitly depend on $n$. Furthermore, because $\hat{\boldsymbol{d}}_n$ is an irreducible tensor operator of rank 1 \cite{johnson2007atomic}, its vector components $\boldsymbol{d}_{ij}$, with $i,\,j \in S$, can be written as \begin{equation} {\boldsymbol{d}}_{ij} = {d}^{\,-1}_{ij}\hat{\boldsymbol{e}}_{\sigma_-} + {d}^{\,0}_{ij}\hat{\boldsymbol{e}}_{z} + {d}^{\,1}_{ij}\hat{\boldsymbol{e}}_{\sigma_+}, \label{eq:rank1operator} \end{equation} where, in addition to the Cartesian unit vectors $\hat{\boldsymbol{e}}_{x}$, $\hat{\boldsymbol{e}}_{y}$, and $\hat{\boldsymbol{e}}_{z}$, we define the circular-polarization vectors \begin{equation} \hat{\boldsymbol{e}}_{\sigma_{\pm}} = {(\mp\hat{\boldsymbol{e}}_x + \mathrm{i}\hat{\boldsymbol{e}}_y)}/{\sqrt{2}}, \label{eq:polarizationvec} \end{equation} with the positive or negative sign for polarizations $\lambda = \pm 1$. These are complex vectors, $\hat{\boldsymbol{e}}_{\sigma_{\pm}}^* = - \hat{\boldsymbol{e}}_{\sigma_{\mp}}$, satisfying the orthogonality relations $\hat{\boldsymbol{e}}_{\sigma_{\pm}}\cdot \hat{\boldsymbol{e}}_{\sigma_{\pm}}^* = 1$ and $\hat{\boldsymbol{e}}_{\sigma_{\pm}}\cdot \hat{\boldsymbol{e}}_{\sigma_{\mp}}^* = 0$. 
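The normalization, orthogonality, and conjugation relations of $\hat{\boldsymbol{e}}_{\sigma_{\pm}}$ follow directly from Eq.~(\ref{eq:polarizationvec}); a quick numerical check, representing each vector by its complex Cartesian components $(x,\,y,\,z)$:

```python
# e_{sigma_pm} = (∓e_x + i e_y)/sqrt(2) as complex (x, y, z) triples.
s = 2**-0.5
e_plus  = (-s, 1j * s, 0.0)
e_minus = ( s, 1j * s, 0.0)

def dot_conj(a, b):
    """Hermitian product a . b*, the convention used in the text."""
    return sum(ai * bi.conjugate() for ai, bi in zip(a, b))

assert abs(dot_conj(e_plus, e_plus) - 1) < 1e-12     # normalization
assert abs(dot_conj(e_minus, e_minus) - 1) < 1e-12
assert abs(dot_conj(e_plus, e_minus)) < 1e-12        # orthogonality
# conjugation relation e_{sigma_pm}* = -e_{sigma_mp}
assert all(a.conjugate() == -b for a, b in zip(e_plus, e_minus))
```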
From angular-momentum selection rules, the component $d^{\,k}_{ij}$ in Eq.~(\ref{eq:rank1operator}), with $i,\,j \in S$ and $k\in\{0,\pm1\}$, is nonvanishing only if $k$ is equal to the difference $M_{J,i} - M_{J,j}$ between the angular-momentum quantum numbers of the states $i$ and $j$ \cite{johnson2007atomic}. The three driving fields are assumed to be tuned to the respective transition energies, i.e., $\omega_{\mathrm{X}} = \omega_{21}$, $\omega_{\mathrm{L}} = |\omega_{32}|$, and $\omega_{\mathrm{C}} = \omega_{43}$. The effect of a field on the transitions to which it is not tuned is negligible \footnote{For example, the effect of the \mbox{x-ray} driving on the E1-allowed $1 \leftrightarrow 4_0$ transition can be, in the case of $\mathrm{Be}^{2+}$ ions, safely neglected, as it corresponds to a detuning of $\varDelta = 1.8\,\unit{eV}$, whereas the natural decay width of the excited state is $\varGamma_{41} = 5.05\times10^{-4}\unit{eV}$ and the assumed \mbox{x-ray} bandwidth $\gamma_{\mathrm{c}}$ is smaller than $2\pi/T_{\mathrm{p}} =4.14\times10^{-6}\,\unit{eV}$.} and the relevant interactions are highlighted in Fig.~\ref{fig:TheModel}(b). The states $|2_{\pm}\rangle$ and $|4_{0}\rangle$ are neglected, because they are not driven and the decay from higher-energy levels to them is orders of magnitude smaller than to the ground state.
With the previously described assumptions and in the rotating-wave approximation \cite{Scully:QuantumOptics}, the Hamiltonian $\hat{H}_{\mathrm{E1,X}}^n$ [Eq.~(\ref{eq:E1Hamiltonian})] describing the interaction of the $n$th ion with the \mbox{x-ray} field $\boldsymbol{\mathcal{E}}_{\mathrm{X}}(\boldsymbol{r},\,t)$ in Eq.~(\ref{eq:electricfields}) is given by \begin{equation} \hat{H}_{\mathrm{E1,X}}^n = \frac{d_{12_0}\bar{\mathcal{E}}_{\mathrm{X},0}}{2} \hat{\sigma}^n_{12_0}\,\mathrm{e}^{\mathrm{i}(\omega_{\mathrm{X}}t - \boldsymbol{k}_{\mathrm{X}}\cdot\boldsymbol{r}_n)} \,+\,\mathrm{H.c.} \label{eq:Hamiltonian12} \end{equation} The cw optical field $\boldsymbol{\mathcal{E}}_{\mathrm{L}}(\boldsymbol{r},\,t)$ interacts with the $n$th ion in a completely analogous way \footnote{In the following discussion, we assume that $\omega_{32}>0$; this is, however, not an essential element of the derivation, and the following calculations can be easily modified for $\omega_{32}<0$ (as necessary in order to apply our theory to $\mathrm{Be}^{2+}$ ions).}.
Finally, the train of optical pulses \begin{equation} \boldsymbol{\mathcal{E}}_{\mathrm{C}}(\boldsymbol{r},\,t) = \mathcal{E}_{\mathrm{C},0}(t - \boldsymbol{r}\cdot\hat{\boldsymbol{e}}_y/v_{\mathrm{C}})\,\cos{(\omega_{\mathrm{C}}t - \boldsymbol{k}_{\mathrm{C}}\cdot\boldsymbol{r})}\,\hat{\boldsymbol{e}}_x, \end{equation} tuned to the transition $|3\rangle \rightarrow |4_{\pm}\rangle$ and linearly polarized along the $\hat{\boldsymbol{e}}_x$ direction, $\hat{\boldsymbol{e}}_x = (\hat{\boldsymbol{e}}^*_{\sigma_{-}} - \hat{\boldsymbol{e}}^*_{\sigma_+})/\sqrt{2}$, interacts with the $n$th ion via the E1 interaction Hamiltonian \begin{equation} \begin{aligned} \hat{H}_{\mathrm{E1,C}}^n =\,&\frac{1}{2} \sum_{j\in\{\pm\}}\hat{\sigma}^n_{34_{j}}\, \boldsymbol{d}_{34_{j}}\,\cdot\,\frac{\hat{\boldsymbol{e}}^*_{\sigma_{-}} - \hat{\boldsymbol{e}}^*_{\sigma_+}}{\sqrt{2}} \\ \,&\times\, \mathcal{E}_{\mathrm{C},0}(t - \boldsymbol{r}_n\cdot\hat{\boldsymbol{e}}_y/{v_{\mathrm{C}}})\,\mathrm{e}^{\mathrm{i}(\omega_{\mathrm{C}}t -\boldsymbol{k}_{\mathrm{C}}\cdot\boldsymbol{r}_n)}\,+\,\mathrm{H.c.}, \end{aligned} \label{eq:HC1} \end{equation} with $\boldsymbol{d}_{34_{\pm}} = d^{\mp 1}_{34_{\pm}}\,\hat{\boldsymbol{e}}_{\sigma_{\mp}}$. Here, $d^{- 1}_{34_{+}}$ and $d^{+ 1}_{34_{-}}$ are the matrix elements of the electric-dipole momentum operator $\hat{\boldsymbol{d}}_n$ [Eq.~(\ref{eq:rank1operator})], which are related via the Clebsch-Gordan coefficients \cite{johnson2007atomic}. 
In this case, the explicit calculation of the Clebsch-Gordan coefficients allows one to observe that $d^{- 1}_{34_{+}} / d^{+ 1}_{34_{-}} =1$ and, therefore, to define the constant $\tilde{d}_{34} = d^{- 1}_{34_{+}} = d^{+ 1}_{34_{-}}$, in terms of which we rewrite the E1 interaction Hamiltonian~(\ref{eq:HC1}) as \begin{equation} \begin{aligned} \hat{H}_{\mathrm{E1,C}}^n =\,& \frac{\tilde{d}_{34}\mathcal{E}_{\mathrm{C},0}(t- \boldsymbol{r}_n\cdot\hat{\boldsymbol{e}}_y/{v_{\mathrm{C}}})}{2}\\ \,&\times\,\sum_{j\in\{-1,\,+1\}} \frac{j}{\sqrt{2}}\hat{\sigma}^n_{34_j}\,\mathrm{e}^{\mathrm{i}(\omega_{\mathrm{C}}t -\boldsymbol{k}_{\mathrm{C}}\cdot\boldsymbol{r}_n)}\,+\,\mathrm{H.c.} \end{aligned} \label{eq:Hamiltonian34} \end{equation} The ensemble of ions driven by the external fields is described via the density operator $\hat{\rho}(t)$, with elements $\rho_{ij}^n(t) = \langle\hat{\sigma}^n_{ji}(t)\rangle$, where $\langle \cdots \rangle$ stands for the expectation value of a quantum operator. 
The time evolution of the density operator is given by the Liouville--von Neumann equation with system-reservoir interaction, i.e., by the master equation \cite{PhysRevLett.76.388, *PhysRevLett.77.3995, *PhysRevLett.81.293, *PhysRevLett.83.1307, *PhysRevLett.91.123601, *Kiffner_review, *PhysRevLett.106.033001, Scully:QuantumOptics} \begin{equation} \frac{\mathop{}\!\mathrm{d} \hat{\rho}}{\mathop{}\!\mathrm{d} t} = - \mathrm{i}\, [\hat{H}, \hat{\rho}] + \mathcal{L}[\hat{\rho}], \label{eq:master} \end{equation} where $\hat{H}$ is the Hamiltonian~(\ref{eq:totalHamiltonian}) and the Lindblad operator $\mathcal{L}[\hat{\rho}]$ represents the norm-conserving spontaneous decay of the system, \begin{equation} \mathcal{L}[\hat{\rho}] = \sum_{\substack{i,\,j\in S \\ \omega_{i} < \omega_{j}}} \sum_{n=1}^{N} -\frac{\varGamma_{ji}}{2}(\hat{\sigma}^n_{ji}\hat{\sigma}^n_{ij} \hat{\rho} \, -\, \hat{\sigma}^n_{ij}\hat{\rho}\hat{\sigma}^n_{ji} ) +\,\mathrm{H.c.}, \end{equation} where the decay rates are given by $\varGamma_{ji} = 4\omega_{ji}^3\alpha^3|{\boldsymbol{d}}_{ij}|^2/3$ \cite{Scully:QuantumOptics}, with $i,\,j\in S$. Norm-nonconserving terms such as those from autoionization {or photoionization} are not present in our situation involving moderate, near-resonant fields. 
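The decay-rate formula $\varGamma_{ji} = 4\omega_{ji}^3\alpha^3|\boldsymbol{d}_{ij}|^2/3$ can be exercised numerically. As a sanity check we use the textbook hydrogen $2p_0 \rightarrow 1s$ transition, which is not part of the four-level scheme but has a well-known analytic dipole matrix element and decay rate:

```python
ALPHA = 1 / 137.035999   # fine-structure constant (CODATA, rounded)
T_AU_S = 2.418884e-17    # atomic unit of time in seconds

def decay_rate_au(omega_au, d2_au):
    """Gamma = 4 * omega^3 * alpha^3 * |d|^2 / 3, everything in atomic units."""
    return 4 * omega_au**3 * ALPHA**3 * d2_au / 3

# Hydrogen Lyman-alpha, 2p_0 -> 1s: omega = 3/8 hartree and
# |<1s|z|2p_0>|^2 = 2**15 / 3**10 a.u. (standard analytic results).
gamma = decay_rate_au(3 / 8, 2**15 / 3**10)
rate_per_s = gamma / T_AU_S
assert abs(rate_per_s - 6.27e8) / 6.27e8 < 0.01   # known rate ~6.27e8 1/s
```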
The equations of motion (EOMs) from Eq.~(\ref{eq:master}), satisfied by the matrix elements $\rho_{ij}^n(t) = \langle\hat{\sigma}^n_{ji}(t)\rangle$ of the density operator $\hat{\rho}(t)$, can be more easily solved by introducing the slowly varying operators \cite{Scully:QuantumOptics, PhysRevA.22.2098, PhysRevA.45.4706, *PhysRevA.52.525} \begin{subeqnarray} \hat{\varsigma}^n_{2_01}(t) &= &\hat{\sigma}^n_{2_01}(t)\,\mathrm{e}^{-\mathrm{i}(\omega_{21}t - \boldsymbol{k}_{\mathrm{X}}\cdot \boldsymbol{r}_n)},\\ \hat{\varsigma}^n_{32_0}(t) &= &\hat{\sigma}^n_{32_0}(t)\,\mathrm{e}^{-\mathrm{i}(\omega_{32}t - \boldsymbol{k}_{\mathrm{L}}\cdot \boldsymbol{r}_n)},\\ \hat{\varsigma}^n_{31}(t) &= &\hat{\sigma}^n_{31}(t)\,\mathrm{e}^{-\mathrm{i}[\omega_{31}t - (\boldsymbol{k}_{\mathrm{X}} + \boldsymbol{k}_{\mathrm{L}})\cdot \boldsymbol{r}_n]},\\ \hat{\varsigma}^n_{4_{\pm}3}(t) &=& \hat{\sigma}^n_{4_{\pm}3}(t)\,\mathrm{e}^{-\mathrm{i}(\omega_{43}t - \boldsymbol{k}_{\mathrm{C}}\cdot \boldsymbol{r}_n)},\\ \hat{\varsigma}^n_{4_{\pm}2_0}(t) &= &\hat{\sigma}^n_{4_{\pm}2_0}(t)\,\mathrm{e}^{-\mathrm{i}[\omega_{42}t - (\boldsymbol{k}_{\mathrm{L}} + \boldsymbol{k}_{\mathrm{C}})\cdot \boldsymbol{r}_n]},\\ \hat{\varsigma}^n_{4_{\pm}1}(t) &= &\hat{\sigma}^n_{4_{\pm}1}(t)\,\mathrm{e}^{-\mathrm{i}[\omega_{41}t - (\boldsymbol{k}_{\mathrm{X}} + \boldsymbol{k}_{\mathrm{L}} + \boldsymbol{k}_{\mathrm{C}}) \cdot \boldsymbol{r}_n]}, \label{eq:slowlyvarying} \end{subeqnarray} and $\hat{\varsigma}^n_{ij}(t) = [\hat{\varsigma}^n_{ji}(t)]^{\dagger}$. With these operators, we introduce the slowly varying density operator $\hat{\varrho}(t)$ of elements \begin{equation} \varrho_{ij}^n(t) = \langle\hat{\varsigma}^n_{ji}(t)\rangle. \end{equation} From Eqs.~(\ref{eq:master}) and (\ref{eq:slowlyvarying}), the EOMs satisfied by the matrix elements $\varrho_{ij}^n(t)$ of the slowly varying density operator $\hat{\varrho}(t)$ are a set of coupled, linear differential equations with time-dependent coefficients. 
However, the only coefficients in the EOMs which explicitly depend on time are those associated with the envelope $\mathcal{E}_{\mathrm{C},0}(t- \boldsymbol{r}_n\cdot\hat{\boldsymbol{e}}_y/{v_{\mathrm{C}}})$ of the pulse train $\boldsymbol{\mathcal{E}}_{\mathrm{C}}(\boldsymbol{r},\,t)$ appearing in Eq.~(\ref{eq:Hamiltonian34}). From Eq.~(\ref{eq:master}) it follows that the matrix elements of two density operators $\hat{\varrho}^n(t)$ and $\hat{\varrho}^{n'}(t)$, respectively associated with two ions at positions $\boldsymbol{r}_n$ and $\boldsymbol{r}_{n'}$, assume the same values at different times, i.e., \begin{equation} \varrho_{ij}^n(t) = \varrho_{ij}^{n'}[t + (\boldsymbol{r}_{n'} -\boldsymbol{r}_n )\cdot\hat{\boldsymbol{e}}_y/v_{\mathrm{C}}], \label{eq:ritardo} \end{equation} where the retardation effect displayed in Eq.~(\ref{eq:ritardo}) is due to the propagation of the train of pulses $\boldsymbol{\mathcal{E}}_{\mathrm{C}}(\boldsymbol{r},\,t)$ through the ensemble of particles. Finally, we note that, for the cw~driving fields $\boldsymbol{\mathcal{E}}_{\mathrm{X}}(\boldsymbol{r},\,t)$ and $\boldsymbol{\mathcal{E}}_{\mathrm{L}}(\boldsymbol{r},\,t)$ and the periodic train of optical pulses $\boldsymbol{\mathcal{E}}_{\mathrm{C}}(\boldsymbol{r},\,t)$, the set of coupled, linear differential equations from~(\ref{eq:master}) exhibits periodic, time-dependent coefficients, with the repetition period $T_{\mathrm{p}}$ of the pulse train $\boldsymbol{\mathcal{E}}_{\mathrm{C}}(\boldsymbol{r},\,t)$. As a result, the master equation~(\ref{eq:master}) admits a periodic solution, in the following denoted as $\hat{\varrho}^{\mathrm{eq}}(t) = \hat{\varrho}^{\mathrm{eq}}(t+ T_{\mathrm{p}})$, which is asymptotically reached after turn-on effects have ceased. 
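The approach to the periodic quasi-steady state $\hat{\varrho}^{\mathrm{eq}}(t)$ can be illustrated on a much simpler toy model: a resonantly driven, spontaneously decaying two-level system whose Bloch vector $(u,\,v,\,w)$ obeys optical Bloch equations with a $\cos^2$ pulse-train Rabi frequency. All parameters below are arbitrary, and the sketch shows the generic mechanism (turn-on transients decaying at rates set by the relaxation constants), not an integration of the four-level EOMs:

```python
import math

GAMMA = 1.0                       # spontaneous decay rate (arbitrary units)
T_P, T_D, OMEGA0 = 2.0, 0.5, 8.0  # pulse period, duration, peak Rabi frequency

def rabi(t):
    """cos^2 pulse train, one pulse of duration T_D per period T_P."""
    s = t % T_P
    if s < T_D:
        return OMEGA0 * math.cos(math.pi / T_D * (s - T_D / 2))**2
    return 0.0

def deriv(t, y):
    """Resonant optical Bloch equations with T1 = 1/GAMMA, T2 = 2/GAMMA."""
    u, v, w = y
    W = rabi(t)
    return (-GAMMA / 2 * u,
            -GAMMA / 2 * v + W * w,
            -W * v - GAMMA * (w + 1.0))

def rk4_step(t, y, h):
    k1 = deriv(t, y)
    k2 = deriv(t + h / 2, tuple(a + h / 2 * b for a, b in zip(y, k1)))
    k3 = deriv(t + h / 2, tuple(a + h / 2 * b for a, b in zip(y, k2)))
    k4 = deriv(t + h, tuple(a + h * b for a, b in zip(y, k3)))
    return tuple(a + h / 6 * (p + 2 * q + 2 * r + s)
                 for a, p, q, r, s in zip(y, k1, k2, k3, k4))

y, t, h = (0.0, 0.0, -1.0), 0.0, 1e-3   # start in the ground state (w = -1)
n_steps = round(T_P / h)
snapshots = []
for _ in range(40):                      # integrate 40 pulse periods
    for _ in range(n_steps):
        y = rk4_step(t, y, h)
        t += h
    snapshots.append(y)                  # stroboscopic record, once per period

# turn-on transients have died out: the stroboscopic state is stationary
drift = max(abs(a - b) for a, b in zip(snapshots[-1], snapshots[-2]))
assert drift < 1e-6
```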
\subsection{Spectrum of resonance fluorescence} \label{Many-atom spectrum of resonance fluorescence from periodic EOMs} While not coherently driven, the E1-allowed transition $4\rightarrow 1$ undergoes spontaneous decay [Fig.~\ref{fig:TheModel}(b)]: these photons, decaying from the states $|4_{\pm}\rangle$ with $M_J\in\{+1,\,-1\}$ to the state $|1\rangle$ with $M_J=0$, differ in energy and polarization from those decaying via the $2\rightarrow 1$ transition, i.e., they occupy a different region of the spectrum and can be distinguished via a polarization-dependent detector. In the following, therefore, we are allowed to directly focus on the elements of the emitted electric field and of the ensuing spectrum of resonance fluorescence which are associated with photon emission from the $4\rightarrow 1$ transition. Furthermore, we show that, by writing an optical frequency comb onto the driving cw \mbox{x-ray} field, a scheme connected to four-wave mixing \cite{Scully:QuantumOptics} is developed to imprint a frequency comb onto the coherent part of the spectrum in the forward direction. In order to calculate the spectrum of resonance fluorescence emitted by a set of driven ions, we introduce the electric-field operator \begin{equation} \hat{\boldsymbol{E}}(\boldsymbol{r},\,t) = \hat{\boldsymbol{E}}^+(\boldsymbol{r},\,t) + \hat{\boldsymbol{E}}^-(\boldsymbol{r},\,t), \end{equation} with positive- and negative-frequency components $\hat{\boldsymbol{E}}^+(\boldsymbol{r},\,t)$ and $\hat{\boldsymbol{E}}^-(\boldsymbol{r},\,t)$, respectively, where $\hat{\boldsymbol{E}}^-(\boldsymbol{r},\,t) = [\hat{\boldsymbol{E}}^+(\boldsymbol{r},\,t)]^{\dagger}$. The driven ions behave like oscillating dipole moments, emitting waves with a dipole radiation pattern. 
In the far-field region, the electric-field operator can be related to the dipole response of the atomic system via the following equality \cite{Scully:QuantumOptics}: \begin{equation} \begin{aligned} \hat{\boldsymbol{E}}^+(\boldsymbol{r}, \, t) = \,& \sum_{j\in \{4_+,\,4_-\}}\biggl[\ \frac{\omega_{41}^2}{c^2\,r}\,[{\boldsymbol{d}}_{j1} - \hat{\boldsymbol{e}}_r\,({\boldsymbol{d}}_{j1}\cdot \hat{\boldsymbol{e}}_r)]\\ \,&\times\,\sum_{n=1}^N \hat{\sigma}^n_{1j}(t - |\boldsymbol{r}- \boldsymbol{r}_n|/c)\biggr]. \label{eq:totalelectricfield} \end{aligned} \end{equation} Here, $\boldsymbol{r} = r\,\hat{\boldsymbol{e}}_r$ is the detection point with respect to the ensemble of ions, at distance $r$ and along the observation direction given by the unit vector $\hat{\boldsymbol{e}}_r$. The power spectrum of resonance fluorescence \cite{Scully:QuantumOptics} is given by \begin{equation} \begin{aligned} S(\boldsymbol{r}, \omega) =&\, \frac{1}{4\pi^2\alpha}\lim_{T\rightarrow\infty}\frac{1}{T}\int_{-T/2}^{T/2}\int_{-T/2}^{T/2} \\ \,&\times\bigl[\bigl\langle \hat{\boldsymbol{E}}^-(\boldsymbol{r},\,t_1)\cdot \hat{\boldsymbol{E}}^+(\boldsymbol{r},\,t_2) \bigr\rangle\mathrm{e}^{-\mathrm{i}\omega (t_2 - t_1)}\bigr]\mathop{}\!\mathrm{d} t_1\mathop{}\!\mathrm{d} t_2, \end{aligned} \label{eq:spectrum} \end{equation} where $S(\boldsymbol{r}, \omega)\,\mathop{}\!\mathrm{d} \omega\,r^2\mathop{}\!\mathrm{d}\varOmega$ represents the power emitted into the energy interval $[\omega,\,\omega+\mathop{}\!\mathrm{d}\omega]$ and detected in a solid angle $\mathop{}\!\mathrm{d}\varOmega$ centered at the observation point $\boldsymbol{r}$. Because of Eqs.~(\ref{eq:totalelectricfield}) and (\ref{eq:spectrum}), the spectrum depends on the two-time atomic expectation values $\langle\hat{\sigma}^n_{j1}(t_1)\,\hat{\sigma}^{n'}_{1j'}(t_2)\rangle$ \cite{Scully:QuantumOptics}, with $j,\,j'\in\{4_+,\,4_-\}$. 
Two contributions can thus be distinguished: a coherent spectrum $S^{\mathrm{coh}}(\boldsymbol{r}, \omega)$, depending on the product of single-time expectation values $\langle\hat{\sigma}^n_{j1}(t_1)\rangle\,\langle\hat{\sigma}^{n'}_{1j'}(t_2)\rangle$, and an incoherent spectrum $S^{\mathrm{inc}}(\boldsymbol{r}, \omega)$, related to $\langle \updelta\hat{\sigma}^n_{j1}(t_1)\,\updelta\hat{\sigma}^{n'}_{1j'}(t_2)\rangle$, where $\updelta\hat{\sigma}_{1j}^n = \hat{\sigma}_{1j}^n - \langle\hat{\sigma}_{1j}^n\rangle$. In the following, we calculate the coherent part of the spectrum of resonance fluorescence which is emitted by the ensemble of ions in the forward direction, with $\hat{\boldsymbol{e}}_r\approx \hat{\boldsymbol{e}}_y$. Because of the emitted electric-field operator given in Eq.~(\ref{eq:totalelectricfield}), the spectrum of resonance fluorescence~(\ref{eq:spectrum}) exhibits position-dependent prefactors given by \begin{equation} [{\boldsymbol{d}}_{j1} - \hat{\boldsymbol{e}}_r\,({\boldsymbol{d}}_{j1}\cdot \hat{\boldsymbol{e}}_r)]\ [{\boldsymbol{d}}_{1j'} - \hat{\boldsymbol{e}}_r\,({\boldsymbol{d}}_{1j'}\cdot \hat{\boldsymbol{e}}_r)], \label{eq:productss} \end{equation} with $j,\,j'\in\{4_+,\,4_-\}$, where the dipole-moment matrix elements $\boldsymbol{d}_{j1}$ and $\boldsymbol{d}_{1j'}$ are rank-1 tensors given by Eq.~(\ref{eq:rank1operator}). 
In particular, a close inspection of the associated Clebsch-Gordan coefficients allows one to notice that \begin{subeqnarray} \boldsymbol{d}_{4_+1} = \tilde{d}_{41}\,\hat{\boldsymbol{e}}_{\sigma^+}, &\boldsymbol{d}_{4_-1} = \tilde{d}_{41}\,\hat{\boldsymbol{e}}_{\sigma^-},\\ \boldsymbol{d}_{14_+} = -\tilde{d}_{41}^*\,\hat{\boldsymbol{e}}_{\sigma^-}, &\boldsymbol{d}_{14_-} = -\tilde{d}_{41}^*\,\hat{\boldsymbol{e}}_{\sigma^+}, \label{eq:dtildes} \end{subeqnarray} where $\tilde{d}_{41}$ is the amplitude of the dipole-moment matrix element and the circular-polarization vectors $\hat{\boldsymbol{e}}_{\sigma^{\pm}}$ were defined in Eq.~(\ref{eq:polarizationvec}). For observation directions $\hat{\boldsymbol{e}}_r$ close to the forward direction $\hat{\boldsymbol{e}}_y$ along which the three incident fields propagate, Eqs.~(\ref{eq:totalelectricfield}) and (\ref{eq:dtildes}), together with the definition of the resonance fluorescence spectrum~(\ref{eq:spectrum}), lead to \begin{widetext} \begin{equation} \begin{aligned} S^{\mathrm{coh}}(\boldsymbol{r}, \omega) =& \frac{\omega_{41}^4\,|\tilde{d}_{41}|^2}{8\pi^2 c^3 r^2}\,\sum_{j,\,j'\in\{4_+,\,4_-\}} \sum_{n=1}^N \sum_{n'=1}^N \lim_{T\rightarrow\infty}\frac{1}{T}\int_{-T/2}^{T/2}\int_{-T/2}^{T/2} \bigl[\bigl\langle \hat{\varsigma}^n_{j1}(t_1 - |\boldsymbol{r} - \boldsymbol{r}_n|/c)\bigr\rangle\,\bigl\langle\hat{\varsigma}^{n'}_{1j'}(t_2-|\boldsymbol{r} - \boldsymbol{r}_{n'}|/c)\bigr\rangle\\ &\times\,(-1)^{\delta_{jj'}+1}\,\mathrm{e}^{-\mathrm{i}(\omega - \omega_{41})(t_2 - t_1)}\, \mathrm{e}^{\mathrm{i}[(\omega_{41}/c)\,\hat{\boldsymbol{e}}_r -(\boldsymbol{k}_{\mathrm{X}}+ \boldsymbol{k}_{\mathrm{L}}+ \boldsymbol{k}_{\mathrm{C}})]\cdot(\boldsymbol{r}_n -\boldsymbol{r}_{n'})}\bigr]\,\mathop{}\!\mathrm{d} t_1\,\mathop{}\!\mathrm{d} t_2, \end{aligned} \label{eq:cohspectrumfirst} \end{equation} where $\delta_{jj'}$ is the Kronecker $\delta$~symbol. 
The position-dependent exponential function in the second line of Eq.~(\ref{eq:cohspectrumfirst}) renders the coherent part of the spectrum $S^{\mathrm{coh}}(\boldsymbol{r}, \omega)$ nonvanishing only in a small solid angle centered on $\hat{\boldsymbol{e}}_y$, as we discuss in the following [see, e.g., Eq.~(\ref{eq:eta})]. In this region, the space-dependent factors~(\ref{eq:productss}) do not display appreciable modifications and are therefore approximated with the value they exhibit at $\hat{\boldsymbol{e}}_r = \hat{\boldsymbol{e}}_y$. These factors, calculated by employing the definition of the complex polarization vectors $\hat{\boldsymbol{e}}_{\sigma^{\pm}}$ from Eq.~(\ref{eq:polarizationvec}), are responsible for the term $(-1)^{\delta_{jj'}+1}$ in the second line of Eq.~(\ref{eq:cohspectrumfirst}). In order to proceed with the calculation of the spectrum of resonance fluorescence, we notice that the two states $|4_+\rangle$ and $|4_-\rangle$ are driven with opposite sign by the optical frequency comb. This is apparent from the factor $j$ in the E1 interaction Hamiltonian~(\ref{eq:Hamiltonian34}). 
As a result, the solution of the EOMs~(\ref{eq:master}) can be shown to satisfy $\varrho_{14_+}(t) = - \varrho_{14_-}(t)$, which can be employed to simplify the previously calculated spectrum~(\ref{eq:cohspectrumfirst}) to \begin{equation} \begin{aligned} S^{\mathrm{coh}}(\boldsymbol{r}, \omega) =& 4 \frac{\omega_{41}^4\,|\tilde{d}_{41}|^2}{8\pi^2 c^3 r^2}\,\sum_{n=1}^N \sum_{n'=1}^N \lim_{T\rightarrow\infty}\frac{1}{T}\int_{-T/2}^{T/2}\int_{-T/2}^{T/2} \bigl[\bigl\langle \hat{\varsigma}^n_{4_+1}(t_1 - |\boldsymbol{r} - \boldsymbol{r}_n|/c)\bigr\rangle\,\bigl\langle\hat{\varsigma}^{n'}_{14_{+}}(t_2-|\boldsymbol{r} - \boldsymbol{r}_{n'}|/c)\bigr\rangle\\ &\times\,\mathrm{e}^{-\mathrm{i}(\omega - \omega_{41})(t_2 - t_1)}\, \mathrm{e}^{\mathrm{i}[(\omega_{41}/c)\,\hat{\boldsymbol{e}}_r -(\boldsymbol{k}_{\mathrm{X}}+ \boldsymbol{k}_{\mathrm{L}}+ \boldsymbol{k}_{\mathrm{C}})]\cdot(\boldsymbol{r}_n -\boldsymbol{r}_{n'})}\bigr]\,\mathop{}\!\mathrm{d} t_1\,\mathop{}\!\mathrm{d} t_2. \end{aligned} \label{eq:cohspectrum} \end{equation} The constructive interference among the four ``paths'' in Eq.~(\ref{eq:cohspectrumfirst}) leads to a reinforcement of the total spectrum, given by the factor four in Eq.~(\ref{eq:cohspectrum}). Similar interference effects in resonance fluorescence were described in Ref.~\cite{PhysRevLett.96.100403, *PhysRevA.73.063814}. Although each ion emits independently of the other ones, in the forward direction, i.e., in the $\hat{\boldsymbol{e}}_y$ direction along which the three driving fields propagate, phase matching of emission from different ions is achieved \cite{arXiv:1203.4127}. This allows one to assume that $k_q = \omega_q/c$, for $q\in\{\mathrm{X,\,L,\,C}\}$. 
As a result, for $\boldsymbol{r} = r\, \hat{\boldsymbol{e}}_y$, the argument of the second exponential function in the second line of Eq.~(\ref{eq:cohspectrum}) vanishes, since $(\omega_{41}/c)\,\hat{\boldsymbol{e}}_r -(\boldsymbol{k}_{\mathrm{X}}+ \boldsymbol{k}_{\mathrm{L}}+ \boldsymbol{k}_{\mathrm{C}}) = 0$, and the spectrum reduces to \begin{equation} \begin{aligned} S^{\mathrm{coh}}(r\, \hat{\boldsymbol{e}}_y, \omega) =& \frac{\omega_{41}^4\,|\tilde{d}_{41}|^2}{2\pi^2 c^3 r^2}\,\sum_{n=1}^N \sum_{n'=1}^N\,\lim_{T\rightarrow\infty}\frac{1}{T} \int_{-T/2}^{T/2}\int_{-T/2}^{T/2} \bigl[\bigl\langle \hat{\varsigma}^n_{4_+1}(t_1 - |\boldsymbol{r} - \boldsymbol{r}_n|/c)\bigr\rangle\,\bigl\langle \hat{\varsigma}^{n'}_{14_{+}}(t_2-|\boldsymbol{r} - \boldsymbol{r}_{n'}|/c)\bigr\rangle\\ &\times\,\mathrm{e}^{-\mathrm{i}(\omega - \omega_{41})(t_2 - t_1)}\bigr]\,\mathop{}\!\mathrm{d} t_1\,\mathop{}\!\mathrm{d} t_2. \end{aligned} \label{eq:spectrumintermediate} \end{equation} In Eq.~(\ref{eq:spectrumintermediate}), the product of two complex conjugate terms can be recognized, which leads to \begin{equation} \begin{aligned} S^{\mathrm{coh}}(r\, \hat{\boldsymbol{e}}_y, \omega) & = \frac{\omega_{41}^4\,|\tilde{d}_{41}|^2}{2\pi^2 c^3 r^2}\,\lim_{T\rightarrow\infty}\frac{1}{T}\,\left|\sum_{n=1}^N \int_{-T/2}^{T/2} \bigl\langle \hat{\varsigma}^n_{4_+1}(t_1 - |\boldsymbol{r} - \boldsymbol{r}_n|/c)\bigr\rangle\, \mathrm{e}^{\mathrm{i}(\omega - \omega_{41}) t_1}\,\mathop{}\!\mathrm{d} t_1\right|^2\\ & = \frac{\omega_{41}^4\,|\tilde{d}_{41}|^2}{2\pi^2 c^3 r^2}\,N^2\,\lim_{T\rightarrow\infty}\frac{1}{T}\,\left|\int_{-T/2}^{T/2} \bigl\langle \hat{\varsigma}_{4_+1}(t)\bigr\rangle\, \mathrm{e}^{\mathrm{i}(\omega - \omega_{41}) t}\,\mathop{}\!\mathrm{d} t\right|^2. \end{aligned} \label{eq:spectrumfinalforw} \end{equation} The just-described many-atom effect is essential to guarantee the directionality of the emitted coherent radiation. 
In the following, we describe the properties of the coherent part of the spectrum of resonance fluorescence for observation directions $\hat{\boldsymbol{e}}_r$ around the forward direction $\hat{\boldsymbol{e}}_y$ along which the three driving fields propagate. As discussed in Ref.~\cite{PhysRevA.45.4706, *PhysRevA.52.525}, the intensity of the coherent spectrum of resonance fluorescence rapidly falls for $\hat{\boldsymbol{e}}_r \neq \hat{\boldsymbol{e}}_y $, i.e., in a region for which the position-dependent factors~(\ref{eq:productss}) do not vary significantly and can therefore be assumed constant. From Eq.~(\ref{eq:cohspectrum}), this allows one to identify the rapidly varying, position-dependent term \begin{equation} \begin{aligned} \eta(\hat{\boldsymbol{e}}_r) &= \frac{S^{\mathrm{coh}}(r\hat{\boldsymbol{e}}_r,\omega)}{S^{\mathrm{coh}}(r\hat{\boldsymbol{e}}_y,\omega)} = \Bigl|\frac{1}{N}\sum_{n=1}^N \mathrm{e}^{\mathrm{i}\left[\frac{\omega_{41}}{c}\,\hat{\boldsymbol{e}}_r -(\boldsymbol{k}_{\mathrm{X}}+ \boldsymbol{k}_{\mathrm{L}}+ \boldsymbol{k}_{\mathrm{C}})\right]\cdot\boldsymbol{r}_n }\Bigr|^2 \approx \Bigl|\frac{1}{L} \int_{-L/2}^{L/2}\mathrm{e}^{\mathrm{i} \left[\frac{\omega_{41}}{c}\,(\hat{\boldsymbol{e}}_r -\hat{\boldsymbol{e}}_y)\right]\cdot \,(y \,\hat{\boldsymbol{e}}_y) }\mathop{}\!\mathrm{d} y \Bigr|^2 \\ &= \mathop{}\!\mathrm{sinc}^2{\left\{\frac{\omega_{41} L }{2c}[\cos(\phi) - 1]\right\}}. \end{aligned} \label{eq:eta} \end{equation} \end{widetext} In Eq.~(\ref{eq:eta}), $\phi$ is the angle that the unit vector $\hat{\boldsymbol{e}}_r$, associated with the direction of observation, forms with the $y$ axis, i.e., $\cos\phi = \hat{\boldsymbol{e}}_r \cdot \hat{\boldsymbol{e}}_y$, and $\mathop{}\!\mathrm{sinc}{(x)} = \sin{(x)}/x$. 
Furthermore, while going from the first to the second line in Eq.~(\ref{eq:eta}), we have approximated the sum over the $N$ ions with an integral over the coordinate $y = \boldsymbol{r}_n\cdot\hat{\boldsymbol{e}}_y$, assuming a length $L$ of the ion sample and a constant linear density $N/L$ \cite{PhysRevA.45.4706, *PhysRevA.52.525}. For $\phi = 0$, $\eta(\hat{\boldsymbol{e}}_r)$ is clearly equal to 1. However, the function $\eta(\hat{\boldsymbol{e}}_r)$ determines an emission cone with opening angle $\phi^*$. This is here defined as the angle at which the magnitude of the argument of the $\mathop{}\!\mathrm{sinc}$ function in Eq.~(\ref{eq:eta}) reaches $\pi/2$, i.e., as the angle satisfying the identity \begin{equation} \frac{\omega_{41} L }{2c}[1 - \cos(\phi^*)] = \frac{\pi}{2}. \end{equation} The resulting opening angle of the emission cone \begin{equation} \phi^* \approx \sqrt{\frac{2c \pi}{\omega_{41} L}} \end{equation} and the distance $r$ at which the spectrum is observed allow one to define the area \begin{equation} \Delta A = r^2 \int_{\Delta\varOmega} \mathop{}\!\mathrm{d} \varOmega = \pi [r \sin(\phi^*)]^2 \approx \frac{2\pi^2 c r^2}{\omega_{41} L} \label{eq:area} \end{equation} in the solid angle $\Delta\varOmega$ about the forward direction $\hat{\boldsymbol{e}}_y$ in which the radiation is emitted. In contrast to the just-described part of the spectrum of resonance fluorescence, for which the coherent emission in the forward direction gives rise to a multiplication factor of $N^2$ in Eq.~(\ref{eq:spectrumfinalforw}), the incoherent part of the spectrum $S^{\mathrm{inc}}(r\, \hat{\boldsymbol{e}}_r, \omega)$ is only proportional to $N$ and completely lacks space-directionality contributions from many-particle effects \cite{PhysRevA.45.4706, *PhysRevA.52.525}. No terms such as $\eta(\hat{\boldsymbol{e}}_r)$ are present in the incoherent spectrum and the only position-dependent contribution is given by the terms in Eq.~(\ref{eq:productss}) \cite{PhysRevA.45.4706, *PhysRevA.52.525}.
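As a rough numerical cross-check (our addition, not part of the derivation), the opening angle $\phi^*$ and the beam area $\Delta A$ can be evaluated for the sample parameters quoted later in the paper, $\omega_{41} = 123.7\,\unit{eV}$ and $L = 1\,\unit{cm}$; the observation distance $r$ below is a hypothetical placeholder.

```python
import math

# Physical constants (hbar in eV*s for convenience).
hbar_eVs = 6.582119569e-16  # reduced Planck constant [eV s]
c = 2.99792458e8            # speed of light [m/s]

# Parameters quoted later in the paper; r is a hypothetical choice.
E41_eV = 123.7              # transition energy omega_41 [eV]
L = 1e-2                    # length of the ion sample [m]
r = 1.0                     # observation distance [m]

omega41 = E41_eV / hbar_eVs              # angular frequency [rad/s]

# Opening angle phi* ~ sqrt(2 pi c / (omega_41 L)).
phi_star = math.sqrt(2 * math.pi * c / (omega41 * L))

# Beam area Delta A ~ 2 pi^2 c r^2 / (omega_41 L), Eq. (area).
delta_A = 2 * math.pi**2 * c * r**2 / (omega41 * L)

print(f"phi* = {phi_star * 1e3:.2f} mrad")    # ~1 mrad
print(f"Delta A = {delta_A * 1e6:.2f} mm^2")  # at r = 1 m
```

The emission is thus confined to a cone of roughly a milliradian for a centimeter-long sample at this transition energy.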
In the forward direction, the incoherent part of the spectrum is smaller than the coherent spectrum by a factor $N$ and will hence be neglected in the following. We conclude this section by focusing on the effects on the coherent part of the spectrum of resonance fluorescence due to the periodicity of the EOMs obtained from the master equation~(\ref{eq:master}). As we previously mentioned, since the linear differential equations determining the time evolution of the density operators $\hat{\varrho}^n(t)$ have coefficients which are periodic in time, there exists a periodic solution $\hat{\varrho}^{\mathrm{eq}}(t)$ of the EOMs. This solution has the same period $T_{\mathrm{p}}$ as the repetition time of the train of pulses $\boldsymbol{\mathcal{E}}_{\mathrm{C}}(\boldsymbol{r},\,t)$ from the optical frequency comb which drives the ensemble of ions. When turn-on effects have ceased, any solution $\hat{\varrho}^{n}(t)$ converges to $\hat{\varrho}^{\mathrm{eq}}(t)$, independent of the initial state of the system. As discussed in Ref.~\cite{PhysRevA.22.2098}, we can take advantage of this periodic solution in Eq.~(\ref{eq:spectrumfinalforw}) to show that the coherent part of the spectrum emitted on the $4\rightarrow 1$ transition {in the forward direction} consists of an \mbox{x-ray} frequency comb centered on the frequency $\omega_{41}$ with the same tooth spacing as the driving optical frequency comb, \begin{subeqnarray} \label{eq:xraycomb} \slabel{eq:S4114tot} S^{\mathrm{coh}}(r\,\hat{\boldsymbol{e}}_y, \omega) = \,\sum_{m=-\infty}^{+\infty} \mathcal{S}_m\, \delta\Bigl( \omega - \omega_{41} - \frac{2\pi m}{T_{\mathrm{p}}}\Bigr),\\ \slabel{eq:Sm} \mathcal{S}_m = \frac{\omega_{41}^4\,|\tilde{d}_{41}|^2}{\pi c^3 r^2}\,N^2 \ \Bigl|\frac{1}{T_{\mathrm{p}}} \int_{0}^{T_{\mathrm{p}}}\varrho^{\mathrm{eq}}_{4_{+}1}(t)\,\mathrm{e}^{\mathrm{i} \frac{2\pi m} {T_{\mathrm{p}}} t }\,\mathop{}\!\mathrm{d} t\Bigr|^2. 
\end{subeqnarray} {Here, $\delta(x)$ is the Dirac $\delta$~function, $\varrho_{14_+}^{\mathrm{eq}}(t)$ the relevant matrix element of the periodic, slowly varying density operator $\hat{\varrho}^{\mathrm{eq}}(t)$, and $\tilde{d}_{41}$ was defined in Eq.~(\ref{eq:dtildes}). Because of {many-ion effects}, the photons emitted in the forward direction are focused in a beam whose mean area is given by Eq.~(\ref{eq:area}) \cite{PhysRevA.45.4706, *PhysRevA.52.525}. By recalling that the spectrum of resonance fluorescence is defined as the emitted power per unit area $A$ and unit frequency $\omega$, it follows that \begin{equation} \begin{split} P_m &= \int_{\Delta\varOmega} \int_{\omega_m - \pi/T_{\mathrm{p}}}^{\omega_{m} + \pi/T_{\mathrm{p}}} S^{\mathrm{coh}}(r\,\hat{\boldsymbol{e}}_r, \omega) \,\mathop{}\!\mathrm{d}\omega \,r^2\,\mathop{}\!\mathrm{d}\varOmega = \mathcal{S}_m \, \Delta A \\ &= \frac{ 2\pi \omega_{41}^3\,|\tilde{d}_{41}|^2}{ L c^2 }\,N^2 \, \Bigl|\frac{1}{T_{\mathrm{p}}} \int_{0}^{T_{\mathrm{p}}}\varrho^{\mathrm{eq}}_{4_{+}1}(t)\,\mathrm{e}^{\mathrm{i} \frac{2\pi m} {T_{\mathrm{p}}} t }\,\mathop{}\!\mathrm{d} t\Bigr|^2 \end{split} \label{eq:Pm} \end{equation} describes the power of the $m$th peak in the spectrum at frequency $\omega_m =\omega_{41} + {2\pi m}/{T_{\mathrm{p}}} $. \section{Results and discussion} \label{Results and discussion} In this section, we apply the previously described theoretical model to predict a frequency comb at \mbox{x-ray} frequencies. In particular, we aim at generating a comb with the same number of peaks, i.e., overall width, as the driving optical frequency comb, and with emitted power comparable to the power of present-day XUV combs generated via HHG \cite{Nature.482.68}. In order to bridge an energy difference between two \mbox{x-ray} levels, a sufficiently wide comb is needed. 
Furthermore, powers of the same order of magnitude guarantee that our predicted comb could be detected and used in the same way as the XUV combs produced by presently explored methods. In the following, we describe how we proceed to maximize the number and the power $P_m$ of the peaks in the comb~(\ref{eq:xraycomb}). From Eq.~(\ref{eq:Pm}), the power associated with the $m$th peak is proportional to the modulus squared of the $m$th Fourier coefficient of the periodic function $\varrho^{\mathrm{eq}}_{4_{+}1}(t)$. The properties of a Fourier-series expansion \cite{arfken2011mathematical} imply that the overall width of the spectrum is inversely proportional to the duration of $\varrho_{4_{+}1}^{\mathrm{eq}}(t)$. In order to produce an \mbox{x-ray} frequency comb with as many teeth as in the driving optical frequency comb, the matrix element $\varrho_{4_{+}1}^{\mathrm{eq}}(t)$ needs to consist of pulses closely following the envelope ${\mathcal{E}}_{\mathrm{{C}},0}(t)$ of the pulse train $\boldsymbol{\mathcal{E}}_{\mathrm{C}}(\boldsymbol{r},\,t)$ of the driving optical-frequency-comb laser. This can be better understood by introducing the pulse area of a single pulse in the train, $Q = \int_{0}^{T_{\mathrm{p}}} |\boldsymbol{d}_{34_+}|\,{\mathcal{E}}_{\mathrm{{C}},0}(t)\,\mathop{}\!\mathrm{d} t $. When the envelope of the driving pulse satisfies the condition $Q = 2n\pi$, the atomic variables of the system perform an integer number of Rabi cycles \cite{Scully:QuantumOptics}, after which population and coherence of the highest level are brought back to $0$ exactly at the end of the pulse \cite{PhysRev.40.502, *PhysRevA.23.2496, *0022-3700-17-15-005, PhysRevA.17.247, *Lewenstein:86, *PhysRevA.86.033402}. Conversely, for $Q\neq 2n\pi$, the atomic variables are led to nonvanishing values at the end of the pulse, such that spontaneous decay of the highest level follows.
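The pulse-area argument can be illustrated with a minimal two-level sketch (our simplification, not the full four-level EOMs): for resonant driving in the rotating-wave approximation with a real Rabi frequency $\Omega(t)\propto\mathcal{E}_{\mathrm{C},0}(t)$, the excited population after a pulse of area $Q$ is $\sin^2(Q/2)$, so a $2\pi$ pulse returns it to zero while a $\pi$ pulse leaves the system fully excited.

```python
import numpy as np

def excited_pop_after_pulse(Q, T=1.0, n=20000):
    """Final excited-state population of a resonant two-level system (RWA)
    after a Gaussian pulse of area Q = integral Omega(t) dt."""
    t = np.linspace(-T, T, n)
    dt = t[1] - t[0]
    omega = np.exp(-0.5 * (t / (T / 6.0)) ** 2)  # Gaussian envelope
    omega *= Q / (omega.sum() * dt)              # normalize pulse area to Q
    cg, ce = 1.0 + 0j, 0.0 + 0j
    for w in omega:
        # Exact propagator for constant Omega over dt: rotation by w*dt/2.
        theta = w * dt / 2.0
        cg, ce = (np.cos(theta) * cg - 1j * np.sin(theta) * ce,
                  -1j * np.sin(theta) * cg + np.cos(theta) * ce)
    return abs(ce) ** 2

p_2pi = excited_pop_after_pulse(2 * np.pi)  # ~0: population returns to ground
p_pi = excited_pop_after_pulse(np.pi)       # ~1: fully excited
print(p_2pi, p_pi)
```

In the full four-level system the same mechanism ensures that $|4_{\pm}\rangle$ is emptied at the end of each $2\pi$ comb pulse, suppressing the post-pulse decay discussed above.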
By choosing the peak intensity $I_{\mathrm{{C}, max}}$ of the pulse train $\boldsymbol{\mathcal{E}}_{\mathrm{C}}(\boldsymbol{r},\,t)$ to fulfill the condition $Q = 2n \pi$, we guarantee that, in the absence of an optical pulse, the population of the states $|4_{\pm}\rangle$ and the off-diagonal terms ${\varrho}_{4_{\pm}1}^{\mathrm{eq}}(t)$ vanish exactly. The emitted spectrum [Eq.~(\ref{eq:xraycomb})] consists of peaks whose power, from Eq.~(\ref{eq:Pm}), is proportional to the Fourier coefficient of a function which is different from 0 only in the presence of the optical pulses, i.e., in an interval of duration $T_{\mathrm{FWHM}}$ given by the FWHM duration of the pulses giving rise to the optical frequency comb. As a result of the properties of a Fourier-series expansion, the overall width of the spectrum is thus given by $\sim$\,$2\pi/T_{\mathrm{FWHM}}$. Conversely, if the intensity of the pulse train $\boldsymbol{\mathcal{E}}_{\mathrm{C}}(\boldsymbol{r},\,t)$ is not properly chosen, every pulse from the optical-frequency-comb laser gives rise to a subsequent long decay of the atomic variables, ${\varrho}_{4_{+}1}^{\mathrm{eq}}(t) \sim \mathrm{e}^{-\varGamma_{41} t}$, which affects the amplitude of the peaks in Eq.~(\ref{eq:Pm}) and, therefore, results in a spectrum of smaller width, $\varGamma_{41} \ll 2\pi/T_{\mathrm{FWHM}}$, and smaller number of relevant teeth. \begin{figure}[tb] \centering% \includegraphics[width=\linewidth, keepaspectratio]{HeLikeBeTotalNoEqShorterPeriod.eps} \caption{(Color online) Time evolution of the {periodic} density operator $\hat{\varrho}^{\mathrm{eq}}(t)$ and spectrum of resonance fluorescence for $\mathrm{Be^{2+}}$ ions. 
Present-day parameters are used to model the optical frequency comb [Eq.~(\ref{eq:opticalfrequencycomb})], $T_{\mathrm{FWHM}} = 120\,\unit{fs}$, {$T_{\mathrm{p}} = 1\,\unit{ns}$, $1/T_{\mathrm{p}} = 1\,\unit{GHz}$} \cite{Nature.482.68, Hartl:07, *nphoton.2008.79, *Eidam:10, *Ruehl:10}, i.e., {$2\pi/T_{\mathrm{p}} = 4.1\times 10^{-6}\,\unit{eV}$}. The ion sample has $N = 10^6$ particles over a length of $L = 1\,\unit{cm}$ {and area $1\,\mathrm{mm^2}$}. The driving fields have intensities {$I_{\mathrm{X}} = 1.5\times10^{4}\,\unit{W/cm^2}$, $I_{\mathrm{L}} = 1.7\times10^8\,\unit{W/cm^2}$}, and $I_{\mathrm{C, max}} = 3.0\times10^{10}\,\unit{W/cm^2}$, associated with $2\pi$ optical-frequency-comb pulses. The periodic solutions are (a) $\varrho^{\mathrm{eq}}_{4_+1}(t)$ for $nT_{\mathrm{p}} < t < nT_{\mathrm{p}} + T_{\mathrm{d}}$, with $ \,T_{\mathrm{d}} = \pi T_{\mathrm{FWHM}} /[2\arccos{(\sqrt[4]{1/2})}]$, and (b) $\varrho^{\mathrm{eq}}_{31}(t)$ for $nT_{\mathrm{p}} < t < (n+1)T_{\mathrm{p}} $. The power $P_m$ of each peak in the spectrum of Eq.~(\ref{eq:xraycomb}) is displayed (c) for the whole comb, centered on $\omega_{41} = 123.7\,\unit{eV}$, and (d) around the maximum. In panel (d), $a_1 = 10^5\, \unit{nW^{-1}}$, {$a_2 = 1.86\,\unit{nW}$}.}% \label{fig:HeLikeBeTotal} \end{figure} In Fig.~\ref{fig:HeLikeBeTotal} we show results obtained by applying our four-level scheme to model isolated transitions in $\mathrm{He}$-like $\mathrm{Be}^{2+}$ ions. The decay rates $\varGamma_{ji}$ are calculated with \texttt{grasp2K} \cite{DBLP:journals/cphysics/JonssonHFG07}, while the transition energies $\omega_{21} = 121.9\,\unit{eV}$, $\omega_{23} = 0.2699\,\unit{eV}$, $\omega_{43} = 2.018\,\unit{eV}$, and $\omega_{41} = 123.7\,\unit{eV}$, are taken from Ref.~\cite{PhysRevA.81.022507}. {We assume a density of $\mathrm{Be}^{2+}$ ions of $10^8\,\mathrm{cm^{-3}}$, which can be reached with an electron-beam ion trap \cite{PhysRevLett.107.143002, BernittNature}. 
For such a dilute sample, good phase matching is achieved \cite{arXiv:1203.4127, PhysRevA.78.043409, JPSJ.30.518}. Alternative experimental settings, e.g., by gas discharge or photoionization by an \mbox{x-ray} pre-pulse \cite{PhysRevLett.107.233001, Rohringer.Nature.481.2012}, may allow for higher densities, but one ought to ensure that a stable environment is obtained, such that all pulses in the optical frequency comb encounter a constant density of ions, atoms, and free electrons. This is discussed in the appendix}. By using an optical frequency comb composed of $2\pi$ pulses, the matrix elements of the periodic density operator $\hat{\varrho}^{\mathrm{eq}}(t)$ related to the states $|4_{\pm}\rangle$ vanish after each pulse. The time evolution of $\varrho^{\mathrm{eq}}_{4_+1}(t)$ in the presence of an optical pulse is exhibited in Fig.~\ref{fig:HeLikeBeTotal}(a), where it is apparent that the vanishing initial value is reached again at the end of the interaction. In the interval in between two optical pulses, when the excited states $|4_{\pm}\rangle$ are completely depopulated, the remaining states $|1\rangle$, $|2_0\rangle$, and $|3\rangle$, behave like a three-level system \cite{PhysRevA.42.1630, *PhysRevA.43.3748} driven by the two cw fields $\boldsymbol{\mathcal{E}}_{\mathrm{X}}(\boldsymbol{r},\,t)$ and $\boldsymbol{\mathcal{E}}_{\mathrm{L}}(\boldsymbol{r},\,t)$. These fields stimulate oscillations of the remaining elements $\varrho^{\mathrm{eq}}_{ij}(t)$ of the density operator, with $i,\,j\in\{1,\,2_0,\,3\}$, as shown in Fig.~\ref{fig:HeLikeBeTotal}(b), and affect the periodic behavior of the entire density operator. In other words, the intensities $I_{\mathrm{X}}$ and $I_{\mathrm{L}}$ determine the amplitude of the oscillating function $\varrho^{\mathrm{eq}}_{31}(t)$ [Fig.~\ref{fig:HeLikeBeTotal}(b)], indirectly influencing also the peak value displayed by $\varrho_{4_+1}^{\mathrm{eq}}(t)$ in Fig.~\ref{fig:HeLikeBeTotal}(a). 
Given the relationship appearing in Eq.~(\ref{eq:Pm}) between the amplitude of $\varrho_{4_+1}^{\mathrm{eq}}(t)$ and the intensity of the peaks in the emitted \mbox{x-ray} frequency comb, it is important to properly set the peak intensities $I_{\mathrm{X}}$ and $I_{\mathrm{L}}$ in order to maximize the peak value of $\varrho_{4_+1}^{\mathrm{eq}}(t)$ and, consequently, the emitted photon number. Having suppressed the post-pulse decay of $\varrho^{\mathrm{eq}}_{4_+1}(t)$ by choosing a train of $2\pi$-area pulses, the resulting spectrum of resonance fluorescence [Eq.~(\ref{eq:xraycomb})] is shown in Fig.~\ref{fig:HeLikeBeTotal}(c); it is centered on $\omega_{41} = 123.7\,\unit{eV}$ and contains {$\sim$\,$10^4$} peaks with an energy spacing of {$2\pi/T_{\mathrm{p}} = 4.1\times10^{-6}\,\unit{eV}$}. Figure~\ref{fig:HeLikeBeTotal}(d) highlights the comb structure of the spectrum. The peak intensity of $I_{\mathrm{{C}, max}} = 3.0\times10^{10}\,\unit{W/cm^2}$ is much lower than those needed for the generation of XUV frequency combs via HHG \cite{Hartl:07, *nphoton.2008.79, *Eidam:10, *Ruehl:10} and the power of each peak in the emitted spectrum \footnote{This also holds for lower repetition frequencies, $1/T_{\mathrm{p}}=100\,\mathrm{MHz}$, where peak powers of the order of tens of picowatts are predicted.} is comparable to the power which was measured in Ref.~\cite{Nature.482.68}. The results presented so far were obtained by assuming cw x~rays with a vanishing bandwidth. To incorporate the effect of a finite bandwidth $\gamma_{\mathrm{c}}$ of the \mbox{x-ray} light source, we adopt the approach from Ref.~\cite{PhysRevLett.37.1383} which includes the influence of the temporal fluctuations of the driving field on the spectrum of resonance fluorescence. In this case, the \mbox{x-ray} field $\boldsymbol{\mathcal{E}}_{\mathrm{X}}(t)$ is a stochastic variable which varies in the ensemble of all possible realizations of the stochastic process.
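The scaling of the comb width can be checked with a toy Fourier-series calculation (our illustration; the Gaussian burst below is a hypothetical stand-in for $\varrho^{\mathrm{eq}}_{4_+1}(t)$, which is nonzero only while an optical pulse is present). The number of teeth above half maximum scales as $T_{\mathrm{p}}/T_{\mathrm{FWHM}}$; for the quoted values, $1\,\unit{ns}/120\,\unit{fs}\approx 8\times 10^3$, consistent with the $\sim$\,$10^4$ peaks above.

```python
import numpy as np

# Toy model: one Gaussian burst of duration T_fwhm per period T_p
# stands in for the pulsed coherence rho_eq_{4+1}(t) (scaled units).
T_p, T_fwhm = 1.0, 0.01                    # T_fwhm << T_p
sigma = T_fwhm / (2 * np.sqrt(2 * np.log(2)))

n = 2**16
t = np.linspace(0.0, T_p, n, endpoint=False)
rho = np.exp(-0.5 * ((t - T_p / 2) / sigma) ** 2)

# Fourier coefficients c_m = (1/T_p) int rho(t) exp(2i pi m t / T_p) dt;
# |c_m|^2 plays the role of the tooth power S_m in Eq. (Sm).
power = np.abs(np.fft.fft(rho) / n) ** 2

# Teeth above half the maximum: of order T_p / T_fwhm, i.e., the comb
# spans ~2 pi / T_fwhm while the tooth spacing is 2 pi / T_p.
teeth = int(np.sum(power > 0.5 * power.max()))
print(teeth)
```

With the burst a hundred times shorter than the period, the toy comb indeed contains on the order of a hundred significant teeth.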
Thereby, it is possible to derive the EOMs for the ensemble-averaged density operator and thus obtain the ensemble-averaged spectrum of resonance fluorescence. By following this approach, one obtains an ensemble-averaged spectrum which is a continuous function still displaying peaks at frequencies $\omega_m = \omega_{41} + 2\pi m/T_{\mathrm{p}}$, as in Eq.~(\ref{eq:xraycomb}). However, because of the finite bandwidth $\gamma_{\mathrm{c}}$ of the driving \mbox{x-ray} field, the $\delta$ peaks exhibited by Eq.~(\ref{eq:xraycomb}) are broadened and each peak in the ensemble-averaged spectrum features a spectral FWHM of $\sim$\,$2\gamma_{\mathrm{c}}$. To preserve the frequency-comb structure predicted in Eq.~(\ref{eq:xraycomb}) also for $\gamma_{\mathrm{c}}\neq 0$, we need to ensure that the spectral width of the teeth in the imprinted comb is smaller than their energy separation. From the previous considerations, this implies that the \mbox{x-ray} bandwidth ought to be smaller than the repetition frequency of the optical frequency comb, i.e., {$2\gamma_{\mathrm{c}}< 2\pi/T_{\mathrm{p}} = 4.1\times10^{-6}\,\unit{eV}$}. The many-peak structure is otherwise washed out and the peaks in the spectrum cannot be clearly distinguished. X~rays with such a small bandwidth are not available at present. Yet, by increasing the repetition frequency $2\pi/T_{\mathrm{p}}$ of the optical frequency comb, a wider \mbox{x-ray}-comb tooth spacing would result and a larger \mbox{x-ray} bandwidth may be accommodated. {With a peak intensity of $I_{\mathrm{C,max}} = 3\times 10^{10}\,\mathrm{W/cm^2}$, such an increase in the repetition rate of the optical frequency comb is feasible \cite{PhysRevLett.94.193201, *Nature.436.234, Nature.482.68, Sander:12}.} Furthermore, we notice that the quality and the coherence of \mbox{x-ray} sources have dramatically improved during the last decades.
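To put the bandwidth condition in perspective (our estimate, from the numbers quoted above), the requirement $2\gamma_{\mathrm{c}} < 2\pi/T_{\mathrm{p}}$ at the comb center $\omega_{41} = 123.7\,\unit{eV}$ can be recast as a required resolving power of the \mbox{x-ray} source:

```python
import math

hbar_eVs = 6.582119569e-16   # reduced Planck constant [eV s]

E41_eV = 123.7               # comb center, transition energy [eV]
T_p = 1e-9                   # repetition period of the comb [s]

tooth_spacing_eV = 2 * math.pi * hbar_eVs / T_p   # 2 pi hbar / T_p
max_fwhm_eV = tooth_spacing_eV                    # 2*gamma_c must stay below this
resolving_power = E41_eV / max_fwhm_eV            # E / Delta E

print(f"tooth spacing ~ {tooth_spacing_eV:.2e} eV")        # ~4.1e-6 eV
print(f"required resolving power ~ {resolving_power:.1e}")  # ~3e7
```

A resolving power of order $10^7$ is well beyond current monochromatized \mbox{x-ray} sources, which motivates the move toward larger repetition rates discussed above.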
Although present \mbox{x-ray} sources do not provide the resolving powers required here \cite{nphoton.2010.176, *nphoton.2007.76, *nphoton.2011.178, *RepXFEL}, new schemes \cite{nphoton.2012.180, Rohringer.Nature.481.2012, Gabor_paper, *Buth:11, PhysRevLett.100.244802} show the strong need for narrower-bandwidth x~rays and the remarkable attempts to reach them. \section{Conclusions} \label{Conclusion} In this paper, we present an \mbox{x-ray} pulse-shaping method to directly access the time evolution of a driven atomic system and stimulate the periodic emission of x~rays via a three-color scheme in a four-level system. This is investigated by calculating the coherent part of the spectrum of resonance fluorescence which is emitted by an ensemble of ions in the forward direction. The model is applied to imprint an optical frequency comb onto cw x~rays. We employ $\mathrm{He}$-like $\mathrm{Be}^{2+}$ ions as an atomic implementation of the model. We show that a frequency comb is generated, which is centered on the \mbox{x-ray} transition energy at $123.7\,\unit{eV}$ and which requires peak intensities of the driving optical frequency comb which are lower by several orders of magnitude than those presently needed for HHG-based comb-generation methods \cite{Nature.482.68}. Although the four-level model developed in this paper was applied to He-like $\mathrm{Be}^{2+}$ ions, the scheme has general validity and can be employed to describe different systems with potentially higher \mbox{x-ray} transition energies. Similar results, for example, can be obtained from other $\mathrm{He}$-like ions, such as $\mathrm{Ne}^{8+}$. In this case, for $\omega_{41} = 922.0\,\unit{eV}$ \cite{PhysRevA.81.022507}, a comb in the keV~range can be predicted, yet for the transition energy $\omega_{43} = 6.679\,\unit{eV}$ intense optical frequency combs are not available. 
Our model can also be applied to different atomic transitions, e.g., $1s^2 \rightarrow 1s\,np$ with $n\geq 3$ in heavier ions, for which experimentally accessible \mbox{x-ray} and optical energies can be found; or even to nuclear transitions up to the $\gamma$~range. Our scheme takes advantage of narrow-bandwidth \mbox{x-ray} sources. We recognize that the assumption of a very narrow \mbox{x-ray} bandwidth does not allow an implementation of our scheme with currently available \mbox{x-ray} technology. Nevertheless, we are confident that the advances in \mbox{x-ray} science and the constant improvement in the quality and coherence of \mbox{x-ray} sources will soon provide the experimental conditions necessary to demonstrate the viability of the scheme. Not only does the model tackle the problem of \mbox{x-ray} comb generation, with the advantage of being applicable at energies for which existing methods would not be adequate, but it also represents an example of how the resonance fluorescence spectrum emitted by an ensemble of driven particles can be manipulated by directly controlling the time evolution of the atomic system.\\ \acknowledgments S.M.C.~and Z.H.~acknowledge helpful discussions with J\"org~Evers, Christian~Ott, and Thomas~Pfeifer. The work of Z.H.~was supported by the Alliance Program of the Helmholtz Association (HA216/EMMI).
\section{Introduction} \label{Intro} Predicting the lifetime of safety-critical components in gas turbine engines is crucial to continued improvements in flight safety. Despite a global increase of 90\% in passenger journeys and 40\% in freight transport by air in the decade to 2018 \cite{WB}, the accident rate across the same time period dropped by 30\% \cite{ICAO}. Sustaining this safety improvement relies on mechanistic understanding of the materials serving in such applications, and this same knowledge can steer the development of more capable alloys. Titanium alloys form a key materials system for aerospace applications, with \textalpha{}+\textbeta{} alloys such as Ti--64 (Ti--6Al--4V, wt.\%) widely used in fan and compressor components \cite{Boyer1996}. Deeper understanding of these alloys' response to fatigue loading has been an area of intense effort for several years \cite{Evans1994,Brandes2010}. In \textalpha{}+\textbeta{} alloys, the elastic anisotropy and limited slip system availability of the hcp \textalpha{} phase have a significant impact on the polycrystalline material's overall response to static and cyclic loading regimes \cite{Dunne2007}. This means that, during high cycle or dwell fatigue loading, variations in elastic and plastic behaviour from grain to grain can initiate yielding near the boundary of particularly mismatched grains. The nature of this intragranular plasticity plays a key role in fatigue crack initiation. If easy cross-slip is possible, dislocations are able to travel across an individual grain to intersect its grain boundaries at any location. If dislocations are instead restricted in their ability to cross-slip and travel homogeneously, slip bands are formed and eventually intersect a grain boundary. This results either in slip transmission, where the next grain in its path is well oriented for deformation, or in a dislocation pile-up \cite{Joseph2018}. 
A sufficiently large pile-up may impose enough stress to nucleate a fatigue crack \cite{Dunne2008}. Groups of similarly-oriented grains (macrozones) and slip bands extending across millimetres have been implicated in dwell fatigue \cite{PilchakFAA,Evans1994,Neal1988}, including in service issues \cite{BEAReport}. Slip band formation and factors promoting the even distribution of slip across the \textalpha{} microstructure are hence of significant interest. Aluminium is commonly included at around \SI{6}{\wtpercent} in \textalpha{}+\textbeta{} alloys, stabilising the \textalpha{} phase and providing solid solution strengthening. Phase segregation produces an \textalpha{} composition closer to \SI{7}{\wtpercent} (\SI{11.8}{\atpercent}), meaning that, at temperatures of 500--\SI{700}{\celsius}, crystallographic ordering of Al can occur and lead to precipitation of the \textalpha{}$_2$ phase (Ti$_3$Al, D0$_{19}$ structure) \cite{Gehlin1970}. The position of the \textalpha{}/\textalpha{}+\textalpha{}$_2$ boundary has been the subject of several studies, with successive iterations of the Ti--Al phase diagram shifting it towards lower Al content \cite{Namboodhiri1973,Sircar1986}. A region of short-range ordering (SRO) between truly disordered \textalpha{} and the \textalpha{}+\textalpha{}$_2$ field has also been proposed \cite{Namboodhiri1983}. Recent calculated diagrams place the boundary at around \SI{10}{\atpercent}, while Schuster and Palm's assessment, which draws together numerous experimental observations, places it at \SI{12}{\atpercent} \cite{Witusiewicz2008,Wang2012,Schuster2006}. Even in the early stages of \textalpha{}$_2$ formation, where some form of ordering (evidenced by faint superlattice reflections) exists prior to precipitates being resolvable in dark field TEM images, there is a significant impact on slip behaviour \cite{Neeraj2001}. 
Cross-slip is hindered, such that the first dislocations passing across a grain disrupt the local ordering, leaving a trail of disrupted structure that offers an easier route for subsequent dislocations. This results in slip band formation and the associated deleterious micromechanical effects, with notable implications for tensile and fatigue response \cite{Brandes2010}. The earliest stages of ordered domain formation have been shown to restrict primary creep in \textalpha{}-Ti \cite{Neeraj2000,Neeraj2001}. Dislocation pinning by \textalpha{}$_2$ precipitates has also been observed \cite{Williams1972}. In macroscopic plastic deformation, \textalpha{}$_2$ causes initial strain hardening followed by localised strain softening \cite{Neeraj2001,Williams1969}, as the initial resistance to slip from the ordered domains is overtaken by the establishment of slip bands as easy paths for slip. Studies on the formation mechanism of \textalpha{}$_2$ are challenging due to the nanometre length scales and small compositional variations between matrix and precipitate that are involved, especially during the early stages of phase separation. Long-term ageing at temperatures around 500--\SI{600}{\celsius} typically produces precipitates 5--\SI{10}{\nano\metre} in size. Morphology is spheroidal, with elongation along the shared \textit{c}-axis occurring as growth proceeds and the precipitates undergo coarsening \cite{Blackburn1967}. The earliest stages of phase separation remain less well understood. Possible mechanisms include homogeneous nucleation or spinodal decomposition, and a spinodal decomposition triggered by SRO has also been suggested \cite{Blackburn1967,Liew1999,Wood1998}. The influence of additional alloying elements on \textalpha{}$_2$ formation is a significant consideration in trying to gauge the propensity of commercial alloys to deleterious Al ordering. 
Interstitial oxygen content has been shown to promote \textalpha{}$_2$ formation, shifting the \textalpha{}/\textalpha{}+\textalpha{}$_2$ phase boundary to lower Al content and higher temperature \cite{Schuster2006,Waterstrat1988,Lim1976,Gray1990,Bagot2018}. This is suggested to be caused by a reduction in the solubility of Al in Ti with increasing O content \cite{Lim1976}. The presence of \textbeta{} stabilisers in \textalpha{}+\textbeta{} alloys is also thought to influence \textalpha{}$_2$ formation \cite{Radecka2016}. This work investigates the factors controlling \textalpha{}$_2$ formation in an isothermal ageing study of a model alloy series based on Ti--7Al (wt.\%), with additions of O, V and Mo. Microstructures were observed in TEM, and local compositions were measured in atom probe tomography. Precipitate dispersion parameters such as number density were analysed using small angle X-ray scattering. Insights are then drawn regarding the role of vacancies in nucleation, a hypothesis which is then tested, allowing the role of solutes in enhancing \textalpha{}$_2$ precipitation to be understood. \section{Experimental methodology} \label{experimental} Alloys listed in Table~\ref{table:process} were melted from Ti sponge (Toho, Japan), TiO powder and pure Al, V and Mo pellets in an Arcast200 \SI{27}{\kilo\watt} low pressure argon arc melter and cast to produce 23$\times$23$\times$\SI{55}{\milli\metre} ingots. The alloys were then rolled and recrystallised in the \textalpha{} phase, followed by ice water quenching (IWQ). Samples of each alloy in this IWQ (disordered) condition were taken, and \SI{10}{\milli\metre} cubes were then encapsulated under an Ar atmosphere in quartz and aged at \SI{550}{\celsius} for up to \SI{120}{\day} and furnace cooled, to evolve the \textalpha{}$_2$ precipitate dispersions. 
Separately, samples of each material from the quenched condition were subjected to 2~hours ageing at \SI{550}{\celsius} before air cooling (AC condition), to capture the early stages of ordering. Microstructures were initially observed in backscatter electron imaging on a Zeiss Sigma~300 FEG-SEM operated at \SI{8}{\kilo\volt}. SEM specimens were prepared by electropolishing with a 3\% perchloric acid solution at \SI{-35}{\celsius} and \SI{20}{\volt}. Bulk compositions were confirmed using ICP-OES and combustion analysis provided by TIMET UK Ltd, Table~\ref{table:process}. \begin{table*}[h]\begin{small} \centering \caption{Compositions of the Ti--7Al model alloy series measured by ICP-OES and combustion analysis by TIMET (Witton, UK). The hydrogen content in each alloy was measured to be \SI{0.01}{\wtpercent} or less. Alloys in this study are referred to by their nominal compositions. Rolling ($T_{\mathrm{roll}}$) and recrystallisation ($T_{\mathrm{RX}}$) temperatures for these steps were chosen according to the \textbeta{} transus for each alloy, identified by iterative heat treatments and metallography. 
Recrystallisation times ($t_{\mathrm{RX}}$) were chosen to account for the varying recrystallisation kinetics in each system.} \begin{tabular}{l | c c c c c | c c c c c | c c c} \hline Alloy (nominal & \multicolumn{5}{c|}{Measured composition / wt.\%} & \multicolumn{5}{c|}{Measured composition / at.\%} & $T_{\mathrm{roll}}$ & $T_{\mathrm{RX}}$ & $t_{\mathrm{RX}}$\\ composition) & Al & V & Mo & O & N & Al & V & Mo & O & N & /\si{\celsius} & /\si{\celsius} & /h\\ \hline Ti--7Al & 6.58 & $<$0.01 & $<$0.01 & 0.05 & 0.02 & 11.09 & $<$0.01 & $<$0.01 & 0.14 & 0.06 & 900 & 980 & 1 \\ ~--0.25O & 7.14 & $<$0.01 & $<$0.01 & 0.24 & 0.05 & 11.95 & $<$0.01 & $<$0.01 & 0.68 & 0.16 & 900 & 980 & 1 \\ ~--1.1V & 7.01 & 1.21 & $<$0.01 & 0.07 & 0.04 & 11.79 & 1.08 & $<$0.01 & 0.20 & 0.13 & 900 & 850 & 1 \\ ~--1.1V--0.25O & 7.04 & 1.17 & $<$0.01 & 0.26 & 0.08 & 11.80 & 1.04 & $<$0.01 & 0.73 & 0.26 & 900 & 850 & 1 \\ ~--0.8Mo & 7.17 & $<$0.01 & 0.83 & 0.10 & 0.08 & 12.08 & $<$0.01 & 0.39 & 0.28 & 0.26 & 850 & 850 & 18 \\ ~--0.8Mo--0.25O & 7.16 & $<$0.01 & 0.90 & 0.30 & 0.01 & 12.01 & $<$0.01 & 0.42 & 0.85 & 0.03 & 850 & 850 & 18 \\ \hline \end{tabular} \label{table:process} \end{small}\end{table*} Formation of \textalpha{}$_2$ was observed with conventional TEM methods, with specimens prepared by jet electropolishing with a 3\% perchloric acid solution at \SI{-35}{\celsius} and \SI{20}{\volt} to perforation. Using a JEOL 2100F TEM operated at \SI{200}{\kilo\volt}, selected area electron diffraction patterns were collected for each sample along \hkl<0 1 -1 1> directions. Dark field images of the \textalpha{}$_2$ precipitates were then made using the \hkl{2 -1 -1 0}$_{\alpha_{2}}$ reflections. This provided qualitative observation of precipitate and dispersion characteristics, and allowed measurement of precipitate aspect ratios by measuring several wholly contained, clearly visible precipitates in images for each sample. 
Measurement of compositional features of the precipitates at sub-nanometre resolution was performed using atom probe tomography (APT). Specimens were prepared by conventional Ga$^+$ FIB lift-out methods \cite{Thompson2007}. Specific grain orientations were targeted using EBSD mapping prior to FIB work. This produced specimens with the APT analysis direction oriented within a few degrees of the \hkl<2 -1 -1 0> zone axis. Titanium and Ti-based alloys are prone to forming hydrides during specimen preparation for TEM and APT, an artefact that can be avoided by performing the final thinning or sharpening of specimens at cryogenic temperatures \cite{Chang2019}. Here, we used the infrastructure described in \cite{Stephenson2018,Rivas2020} for cryogenic preparation, yet upon comparing with specimens obtained on the same FIB at room temperature, no significant differences in the H uptake were noticed. The low solubility of H in \textalpha{}-Ti and the targeted preparation far from any interfaces likely explain these observations. APT samples were then run on a Cameca LEAP 5000 XS operated in voltage mode at \SI{50}{\kelvin}, with a pulse frequency of 200--\SI{250}{\kilo\hertz}, pulse fraction of 20\% and detection rate of 0.20--0.40\%. The data collected were then analysed using Cameca's IVAS analysis suite for reconstruction and MATLAB scripts for further analysis. Quantification of dispersion statistics and precipitate evolution was achieved using small angle X-ray scattering (SAXS). Specimens were prepared by electrodischarge machining of a \SI{3}{\milli\meter} diameter cylinder, from which discs of \SI{300}{\micro\metre} thickness were cut with a precision saw. The discs were then ground by hand to the appropriate thickness ($\sim$\SI{100}{\micro\metre}) using SiC grit papers up to a 4000 grit finish, followed by polishing with a neutralised colloidal silica solution. 
After thorough cleaning with detergent to remove colloidal silica and isopropanol to remove surface contaminants, specimens were suspended in an amorphous, transparent tape for handling during measurement. SAXS measurements were then taken on the USAXS beamline \cite{Ilavsky2018} at the Advanced Photon Source at Argonne National Laboratory, using a \SI{21}{\kilo\electronvolt} beam and an 800$\times$\SI{800}{\micro\metre} beam area. The data were reduced using the Nika package \cite{Ilavsky2012} and analysed using the Irena package \cite{Ilavsky2009} within Igor Pro. Morphological information from TEM imaging and compositional information from APT were used to guide the fitting of SAXS data with known shapes, aspect ratios and phase compositions, allowing deconvolution of volume fraction and contrast. \section{Results} \label{results} \subsection{Microstructural characterisation} Equiaxed \textalpha{} microstructures were produced, Fig.~\ref{fig:backscatter}, with grain sizes of 10--\SI{50}{\micro\metre} depending on alloy composition. Alloys containing no \textbeta{} stabilisers showed a larger grain size due to the limited opportunity to restrict grain size during processing of material with a very narrow \textalpha{}+\textbeta{} phase field. The Mo-containing alloys contained a small fraction of \textbeta{} due to the very low solubility for Mo in \textalpha{}; it was later demonstrated in APT results that the \textalpha{} phase contained a small amount of Mo, as intended. \begin{figure}[t!] \centering \includegraphics[width=90mm]{p1fig1_bsei_v03.pdf} \caption{Backscatter micrographs of alloy microstructures, showing the intended equiaxed \textalpha{} microstructure. 
In the Mo-containing alloys, the very limited solubility of Mo in \textalpha{} led to the formation of small, micron-scale \textbeta{} domains at grain boundary triple points, which also contributed to grain size refinement during processing.} \label{fig:backscatter} \end{figure} \subsection{Transmission electron microscopy} Selected area electron diffraction patterns taken for \textbf{B}~=~\hkl<0 1 -1 1>, Fig.~\ref{fig:saedp}, show the development of superlattice reflections as the ordered \textalpha{}$_2$ phase forms and grows. After a short hold of 2~hours at \SI{550}{\celsius}, a small amount of intensity was observable at superlattice spot locations. Upon further ageing for 10~days or more, superlattice reflections became distinct spots and increased in intensity as ageing progressed. \begin{figure}[h!] \centering \includegraphics[width=80mm]{p1fig2_sadp_v04.pdf} \caption{Selected area electron diffraction patterns (\textbf{B}~=~\hkl<0 1 -1 1>) obtained for Ti--7Al--0.05O~(wt.\%), in the ice water quenched (disordered) state, and in selected subsequent ageing states. Diffuse superlattice reflections are faintly visible after 2~hours ageing at \SI{550}{\celsius} (AC condition), which intensify as phase separation progresses at 10 days and 120 days.} \label{fig:saedp} \end{figure} Dark field images provide a qualitative view of trends in precipitate morphology, size and number density during ageing, Fig.~\ref{fig:darkfield}. Imaging was attempted for the 2~hour aged specimens, but no image contrast was evident. The base alloy Ti--7Al--0.05O showed formation of nanoscale precipitates after 10~days, which coarsened over time whilst growing in size and increasing in aspect ratio. The addition of oxygen to the alloy system causes an increase in precipitate number density, and produces smaller precipitates. 
The effect of oxygen on volume fraction of the \textalpha{}$_2$ phase is unclear from qualitative micrographs; it should be recalled that these are projections of contrast through the foil thickness. TEM images do not show evidence of a significant effect of vanadium on the precipitate dispersion. The addition of molybdenum considerably restricts precipitate sizes, and precipitate aspect ratio does not increase as significantly over the duration of the study. However, these micrographs only provide a qualitative impression of the alloying trends; for a quantitative comparison we turn next to atom probe tomography and SAXS. \begin{figure*}[p] \centering \includegraphics[width=0.9\textwidth]{p1fig3_dfgrid_v06.pdf} \caption{Dark field transmission electron micrographs recorded for specimens of each alloy at different ageing times, using a two-beam condition with the \hkl[2 -1 -1 0] reflection for \textbf{B}~=~\hkl[0 1 -1 1]. The base alloy Ti--7Al--0.05O~(wt.\%) shows formation of spheroidal precipitates that increase in size and aspect ratio as ageing progresses. Additional solutes modify the way in which precipitate size, aspect ratio, spacing and number density evolve over time.} \label{fig:darkfield} \end{figure*} \subsection{Atom probe tomography} Atom probe tomography results provided a quantitative analysis of local compositional features. Of specific interest were the compositions of phases present, segregation of solutes between these phases, compositional features of the \textalpha{}/\textalpha{}$_2$ interface, and the crystallographic site partitioning of V and Mo on the \textalpha{}$_2$ D0$_{19}$ lattice. Measurements were performed for Ti--7Al--0.05O in the quenched condition as a reference dataset, for this alloy aged for 49~days, and for this alloy and Ti--7Al--0.25O, Ti--7Al--1.1V--0.25O and Ti--7Al--0.8Mo--0.25O in the 120-day aged condition to observe \textalpha{}$_2$ precipitates. 
Data reconstruction was informed by TEM observations of precipitate morphology and by crystallographic information about the material. Since specimens were prepared from a known crystallographic orientation, partial indexing of desorption maps was possible and confirmed that the pole approximately parallel to the analysis direction was \hkl<2 -1 -1 0> in each specimen. This allowed calibration of reconstruction parameters by guiding the reconstruction according to the known interplanar spacing of \hkl{2 -1 -1 0} planes in \textalpha{}-Ti. Analysis of Ti--7Al--0.05O in the quenched condition revealed a homogeneous distribution of all solutes in species density maps, with no indications of phase separation. It is noted that the presence of short-range ordering versus true disorder cannot necessarily be inferred from APT data due to sub-100\% ion detection efficiency. Upon ageing to 49 days, a dispersion of \textalpha{}$_2$ was evident in the specimen as regions of increased Al content, displaying the ellipsoidal morphology observed in TEM, Fig.~\ref{fig:atmapiso}. After 120 days at \SI{550}{\celsius}, this alloy displayed the expected coarsening of precipitates. APT observations of alloys containing O, V and Mo additions showed precipitate dispersions with characteristics as expected from earlier TEM observations, with increased number density upon adding these solutes and reduced precipitate size in the Mo-containing alloys. \begin{figure}[t!] \centering \includegraphics[width=88mm]{p1fig4_atmapiso_v09.pdf} \caption{Examples of atom probe tomography results. (a) Atom maps showing the distribution of ion types detected for a specimen of Ti--7Al--1.1V--0.25O aged for 120 days at \SI{550}{\celsius}. Note the domains of increased Al content, which are indicative of \textalpha{}$_2$ phase formation. O was detected only within TiO$^{n+}$ complex ions. 8.5\% Al concentration isosurfaces were used to indicate phase boundaries in aged specimens. 
The coarsening of \textalpha{}$_2$ precipitates is evident in comparing Ti--7Al--0.05O after ageing at \SI{550}{\celsius} for 49 days and 120 days, (b). The precipitate dispersion characteristics seen in dark field TEM (Fig.~\ref{fig:darkfield}) are reflected in the Al isosurfaces shown for different alloys after ageing for 120 days, (c).} \label{fig:atmapiso} \end{figure} Proximity histograms (composition profiles calculated as a function of the distance to a specific isosurface \cite{Hellman2000}) were produced to analyse the nature and extent of phase segregation for each solute, Fig.~\ref{fig:proxi}. The \textalpha{}$_2$ phase was identified according to its Al enrichment to around \SI{25}{\atpercent}. Elements seen to promote \textalpha{}$_2$ formation in dark field TEM observations were expected to show segregation to this phase. For both O and V, segregation to the \textalpha{} matrix was instead observed. Previously we have shown \cite{Bagot2018} that O enhances \textalpha{}$_2$ formation whilst segregating to the \textalpha{} phase, owing to the curvature of the phase boundary in the Ti--Al--O ternary system. Mo showed no segregation between the phases despite its significant effect on the \textalpha{}$_2$ precipitate dispersions. For each specimen, proximity histograms were used to choose values for a set of Al concentration isosurfaces at 6.5\% to select \textalpha{} and 10.5\% to select \textalpha{}$_2$ without including the interfacial region. This approach was used to obtain phase compositions, Table~\ref{table:aptcomp}. These phase compositions were used to provide contrast values for SAXS analysis, allowing deconvolution of the volume fraction and compositional contributions to peak size in the SAXS data. The analysed \textalpha{} compositions appear low in Ti (or, equivalently, high in Al) compared to the bulk ICP-OES analysis (Table~\ref{table:process}). 
This is a direct and unavoidable consequence of the large difference in evaporation field between Ti and Al, causing Ti to be preferentially under-counted due to multiple evaporation events \cite{Kingham1982,Peng2018}. This bias is more pronounced in the \textalpha{} phase, and it should thus be noted that this could cause a slight underestimate (on the order of a few per cent) of SAXS contrast and hence a slight overestimate of the volume fractions presented in the below analysis of SAXS data. This consideration was incorporated in estimates of uncertainty for the dispersion characteristics calculated from the SAXS data. \begin{figure}[t!] \centering \includegraphics[width=80mm]{p1fig5_proxi_v07.pdf} \caption{Proximity histograms constructed for the \textalpha{}/\textalpha{}$_2$ interface in APT datasets: (a) Ti--7Al--0.05O, aged 49~d; (b) Ti--7Al--0.05O, aged 120~d; (c) Ti--7Al--0.25O, aged 120~d; (d) Ti--7Al--1.1V--0.25O, aged 120~d; (e) Ti--7Al--0.8Mo--0.25O, aged 120~d. Segregation of aluminium to the \textalpha{}$_2$ at a concentration of approximately \SI{25}{\atpercent} is evident, corresponding well with the expected Ti$_3$Al stoichiometry. Notably, oxygen is found to segregate to the matrix \textalpha{} phase in each alloy, despite the fact that it promotes \textalpha{}$_2$ formation. Vanadium shows a similar behaviour to oxygen, while molybdenum showed no clear segregation to either phase. (Ti, Al shown against black axis; O, V, Mo shown against red axis.)} \label{fig:proxi} \end{figure} \begin{table}[h!]\begin{small} \centering \setlength{\tabcolsep}{4pt} \caption{Bulk and phase compositions measured in APT (no background correction applied). Bulk compositions were analysed counting all ions in a dataset. 
For phase compositions, proximity histograms were used to determine values for a set of aluminium isoconcentration surfaces to isolate each phase, excluding the interfacial region.} \label{table:aptcomp} \begin{tabular}{l l l r r r r r r} \hline Material & Ageing & & \multicolumn{6}{c}{Composition / \si{\atpercent}} \\ & state & & \multicolumn{1}{c}{Ti} & \multicolumn{1}{c}{Al} & \multicolumn{1}{c}{O} & \multicolumn{1}{c}{V} & \multicolumn{1}{c}{Mo} & \multicolumn{1}{c}{N} \\ \hline Ti--7Al & IWQ & Bulk & 84.0 & 14.4 & 0.8 & & & 0.7 \\ & 49 d & Bulk & 83.3 & 15.2 & 0.8 & & & 0.6 \\ & & \textalpha{} & 84.2 & 14.2 & 0.9 & & & 0.7 \\ & & \textalpha{}$_2$ & 74.6 & 24.6 & 0.3 & & & 0.7 \\ & 120 d & Bulk & 82.9 & 15.4 & 1.0 & & & 0.7 \\ & & \textalpha{} & 83.3 & 14.8 & 1.1 & & & 0.7 \\ & & \textalpha{}$_2$ & 74.1 & 24.7 & 0.4 & & & 0.8 \\ ~--0.25O & 120 d & Bulk & 81.8 & 16.4 & 1.1 & & & 0.7 \\ & & \textalpha{} & 83.7 & 14.4 & 1.3 & & & 0.6 \\ & & \textalpha{}$_2$ & 72.7 & 25.7 & 0.9 & & & 0.6 \\ ~--1.1V--0.25O & 120 d & Bulk & 82.2 & 15.2 & 1.0 & 1.2 & & 0.4 \\ & & \textalpha{} & 83.6 & 13.7 & 1.1 & 1.3 & & 0.4 \\ & & \textalpha{}$_2$ & 72.4 & 25.5 & 0.6 & 0.9 & & 0.6 \\ ~--0.8Mo--0.25O & 120 d & Bulk & 83.7 & 14.5 & 1.0 & & 0.4 & 0.4 \\ & & \textalpha{} & 85.2 & 12.9 & 1.1 & & 0.4 & 0.4 \\ & & \textalpha{}$_2$ & 75.8 & 22.7 & 0.6 & & 0.4 & 0.4 \\ \hline \end{tabular} \end{small}\end{table} \subsection{Small angle X-ray scattering} SAXS curves for each alloy in the ageing study, Fig.~\ref{fig:saxstime}, showed evolution of a peak at high scattering vector (\textit{Q}) over time, along with a low-\textit{Q} peak that showed no systematic variation with the ageing process. These are ascribed to \textalpha{}$_2$ precipitation and to the presence of grain boundaries as large scatterers respectively. The \textalpha{}$_2$ peak is distinctly visible for larger precipitates, but for very fine dispersions as in the Mo-containing materials it is less easily discerned. 
For all alloys, a slight difference in curve shape at high \textit{Q} between quenched and air-cooled states is seen. \begin{figure*}[t!] \centering \includegraphics[width=135mm]{p1fig7_saxstime_v04.pdf} \caption{SAXS data obtained for the ageing study of the Ti--7Al model alloy series. In each case, a low-\textit{Q} peak is present due to scattering from grain boundaries, and a peak at high \textit{Q} develops with ageing time as the \textalpha{}$_2$ phase forms and grows. The variable influence of structure factor effects can be seen in different specimens, e.g. the Ti--7Al--1.1V--0.05O 10-day sample, as a dip in intensity at the low-\textit{Q} shoulder of the \textalpha{}$_2$ peak.} \label{fig:saxstime} \end{figure*} The raw data were fitted using two scatterer populations for each of the main features. A prolate spheroidal model was used for \textalpha{}$_2$ precipitates, based on TEM images showing that precipitate aspect ratio increases with time. This shape is described with a shorter equatorial radius $r_e$ and longer polar radius $r_p$, so that particle aspect ratio is given by $A = r_p/r_e$ and precipitate volume can be calculated as $V = \frac{4}{3}\pi{}r_{e}^{3}A$. As seen in Fig.~\ref{fig:saxstime}, a structure factor effect occurs in some of the samples (appearing as a dip in measured intensity at the low-\textit{Q} shoulder of the \textalpha{}$_2$ peak). This effect is strengthened or subdued according to the competing effects of precipitate growth and coarsening on the extent to which each dispersion can be considered dilute. Contrast values were calculated using the Irena analysis package \cite{Ilavsky2009}, using composition data from APT. The \textalpha{} and \textalpha{}$_2$ compositions for each sample were used to calculate the average atomic weight of each phase. 
This was then used to estimate the density of the phase, assuming no difference in unit cell volume compared to pure Ti, and these phase compositions and densities were then used to calculate scattering length density contrast. After calculating this for the different samples, values between 1.7 and \SI{2.7E20}{\centi\metre^{-4}} were obtained but showed no systematic variation with alloy composition, so an average value of \SI{2.2E20}{\centi\metre^{-4}} was taken for fitting of all SAXS datasets in this study. In the fitting process, $A$ is an input parameter along with phase contrasts calculated in the Irena analysis suite using APT data. Model outputs include $r_e$ and volume fraction of \textalpha{}$_2$ phase, $f$. These can be used to calculate precipitate number density, $n = f/V$, and if, in the absence of any specific model or description, a simple cubic array of precipitates in the matrix is assumed, the average spacing may be calculated as $s_\mathit{eff} = n^{-1/3}$. The low-\textit{Q} peak associated with grain boundaries was modelled as a cylindrical disc of appropriate diameter and thickness, and fitted using an arbitrary contrast that was not deconvolved from volume fraction for this microstructural component. Direct comparison of $f$ between alloys is possible but, due to the variation in precipitate aspect ratio with both alloy and ageing time, the model output parameter $r_e$ is not an appropriate metric for comparing precipitate sizes across the study. A directly comparable quantity is the average volume of a precipitate, $V = \frac{4}{3}\pi{}r_{e}^{3}A$. For easy cross-reference with TEM images and APT reconstructions, the diameter of a sphere having equal volume to the modelled spheroid can be calculated as $d_{eq} = 2r_eA^{1/3}$. This equivalent sphere diameter provides a directly comparable metric of precipitate size across ageing times and alloys. 
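The dispersion statistics above are simple functions of the three fitted quantities. A minimal sketch of the chain of calculations (the function name and the input values are illustrative only, not fits from this study):

```python
import math

def dispersion_metrics(r_e, A, f):
    """Derive dispersion statistics from SAXS fit outputs:
    equatorial radius r_e (nm), aspect ratio A = r_p/r_e,
    and alpha_2 volume fraction f (dimensionless)."""
    V = (4.0 / 3.0) * math.pi * r_e**3 * A  # prolate spheroid volume, nm^3
    n = f / V                               # precipitate number density, nm^-3
    s_eff = n ** (-1.0 / 3.0)               # simple-cubic effective spacing, nm
    d_eq = 2.0 * r_e * A ** (1.0 / 3.0)     # equal-volume sphere diameter, nm
    return V, n, s_eff, d_eq

# Illustrative inputs only: r_e = 3 nm, A = 2, f = 0.06
V, n, s_eff, d_eq = dispersion_metrics(3.0, 2.0, 0.06)
```

Note that $d_{eq}$ follows from equating the spheroid volume to $\frac{4}{3}\pi(d_{eq}/2)^3$, which gives $d_{eq} = 2r_eA^{1/3}$.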
Fitting results for \textalpha{}$_2$ volume fraction, size and spacing showed similar trends with time for each alloy composition, but clear differences were apparent between different alloys, Fig.~\ref{fig:saxsoutcome}. Volume fraction $f$ showed the expected rapid initial increase followed by plateauing towards the equilibrium volume fraction for each system. An equilibrium volume fraction was reached for the base alloy and variants containing additions of O, V or both. A higher fraction of \textalpha{}$_2$ was observed in Ti--7Al--0.25O compared to Ti--7Al. The addition of V alone caused no significant difference in volume fraction, and the addition of O to the V-containing alloys did not influence the fraction of \textalpha{}$_2$. The Mo-containing alloys did not appear to reach an equilibrium state, with volume fraction still apparently increasing after 120 days at \SI{550}{\celsius}. The final measured volume fractions for these alloys after 120 days were slightly higher than for the base alloy. As with the V-containing alloys, no significant difference in volume fraction was seen between the Mo alloys with different O levels. Precipitate size and spacing were reduced compared to the Ti--7Al base alloy upon adding any of the three solutes investigated. Mo had the most significant effect, producing very fine dispersions of small, closely-spaced precipitates. As with volume fraction, precipitate size and spacing showed no significant influence from oxygen content in the V- and Mo-containing materials. In each system, number density was found to vary as expected for precipitate coarsening behaviour, with an initial rapid increase corresponding to nucleation and early growth of precipitates. This was followed by a more gradual decrease as the microstructure underwent coarsening, with larger precipitates growing at the expense of smaller ones. Comparing IWQ and AC datasets (i.e. 
quenched/disordered and SRO/early nucleation stage), a difference in curve shape is consistently seen across the different alloys. This takes the form of an increased intensity across a broad \textit{Q} range from around 0.01 to \SI{0.05}{\per\angstrom}. Modelling of the AC datasets was attempted using a low aspect ratio spheroid, but produced unphysical modelling results. \begin{figure}[t!] \centering \includegraphics[width=90mm]{p1fig8_saxsoutcome_v11.pdf} \caption{SAXS-derived quantitative analysis of the evolution of volume fraction $f_{\alpha_2}$, precipitate equivalent diameter $d_{eq}$, number density $n$ and effective square spacing $s_\mathit{eff}$. N.B. the $10\times$ change in scale for the number density in the Mo-containing samples (shown with red axes); this solute has the most significant effect on \textit{n}.} \label{fig:saxsoutcome} \end{figure} \subsection{APT crystallography} Attempts were made to analyse the APT data for crystallographic information, specifically regarding site partitioning of substitutional solutes V and Mo on the \textalpha{}$_2$ lattice. Spatial distribution maps (SDMs) were calculated for individual precipitates that had been identified as being located directly on a \hkl{2 -1 -1 0} pole, for Ti--Ti, Ti--Al and Ti--V species pairs, Fig.~\ref{fig:aptx}. However, due to the significant differences in evaporation field between Ti and Al under the measurement conditions applicable to these alloys, artefacts were seen in the interplanar spacing both in atom maps and in SDMs. This artefact has been described by Vurpillot \textit{et al.} \cite{Vurpillot2000}. By comparison, in Ni--Al \textgamma{}--\textgamma{}$^\prime$ alloys, the evaporation field difference is smaller so that site partitioning is more easily accessible through on-zone APT \cite{Bagot2017}. 
The low solubility of V and Mo in the \textalpha{} and \textalpha{}$_2$ phases also made this analysis challenging due to the limited number of V or Mo atoms available for measurement and SDM analysis. \begin{figure}[h] \centering \includegraphics[width=80mm]{p1fig10_aptx_v03.pdf} \caption{Crystallographic analysis was attempted for on-zone APT datasets containing ordered \textalpha{}$_2$ Ti$_3$Al precipitates, shown here for Ti--7Al--1.1V--0.25O (wt.\%). For samples analysed parallel to a \hkl<2 -1 -1 0> direction, \textalpha{}$_2$ precipitates lying on a crystallographic pole in the reconstruction were analysed to produce spatial distribution maps, (a). An artefact previously described by Vurpillot \textit{et al.} \cite{Vurpillot2000} was seen in the interplanar spacings analysed for this dataset in both the spatial distribution maps (a) and the Al atom maps (b).} \label{fig:aptx} \end{figure} \section{Discussion} The use of TEM, APT and SAXS in combination has allowed the analysis of volume fraction, size and spacing of \textalpha{}$_2$ in this set of model alloys. Comparisons between the various alloying elements may be made. \subsection{Volume fractions} First, considering volume fraction, the increase from 6\% to 10\% due to increased oxygen content supports the small shift in the position of the \textalpha{}/\textalpha{}+\textalpha{}$_2$ boundary upon adding oxygen that has been previously suggested \cite{Lim1976,Waterstrat1988,Gray1990}. Based on the results of this study, vanadium does not appear to significantly alter the volume fraction of \textalpha{}$_2$ produced after 120~days ageing at \SI{550}{\celsius}. Molybdenum causes a slight increase from 6\% to 8\%, but due to retardation of phase separation kinetics by this solute, the Mo-containing systems did not reach equilibrium volume fractions during the 120~days of this study.
\subsection{Size and spacing} Regarding the size and spacing of precipitates, these were largest at all times for the base alloy Ti--7Al--0.05O. Additions of any of the three solutes investigated caused refinement of the \textalpha{}$_2$ dispersion. Molybdenum had the most significant effect on this, followed by oxygen, while vanadium had a fairly minimal effect on the size and spacing of precipitates. The varying degrees of refinement are reflected in the extent to which each sample's scattering curve shows structure factor effects. Considering also the number density at short and long times for the different alloys, all solutes are seen to increase \textit{n} during the early stages of phase separation. This suggests that adding any of these solutes causes increased nucleation density. Molybdenum produces an order of magnitude increase in early number density compared to the other alloys in the study. It is suggested that the resulting reduction in interparticle distances then causes smaller precipitate sizes due to soft impingement. In the V- and Mo-containing alloys, there is little difference in volume fraction, size and spacing of \textalpha{}$_2$ precipitates between the low- and high-oxygen variants in each case. This indicates that the \textalpha{}/\textalpha{}+\textalpha{}$_2$ boundary in the Ti--Al--O--V and Ti--Al--O--Mo quaternary systems becomes less sensitive to O as the \textbeta{} stabiliser content increases. \subsection{Coarsening and LSW modelling} In order to compare the precipitate growth rate and coarsening between alloys, the Lifshitz--Slyozov--Wagner (LSW) model was applied for the precipitate effective radius, \textit{r} = \textit{d}$_{\mathrm{eq}}/2$ (for direct comparison between samples with different precipitate aspect ratios). 
This model describes the evolution of precipitate size with time according to \[r^{3}(t)-r_{0}^{3} = \frac{8\mathit{\Gamma{}}DCV_{m}^{2}}{9RT}t = K_{\mathrm{LSW}}t,\] \noindent where $\mathit{\Gamma{}}$ is the precipitate/matrix interfacial energy, $D = D_{0}\exp(-Q/RT)$ is the diffusion coefficient of the rate-limiting species through the matrix, $C$ is the equilibrium concentration of the rate-limiting species in the matrix, $V_{m}$ is the molar volume of the precipitate phase, $R$ is the ideal gas constant and $T$ is the absolute temperature at which phase separation has been observed \cite{Lifshitz1961,Wagner1961}. The gradients of linear fits hence provide a rate constant, $K_{\mathrm{LSW}}$, that can be compared between alloys. There may be overlap between the growth and coarsening regimes for the \textalpha{}$_2$ dispersions observed, Fig.~\ref{fig:saxsoutcome}; in the first few days and in the Mo-containing alloys the phase fraction increases with time. The LSW approach does not deconvolve these two processes, such that the extracted $K_{\mathrm{LSW}}$ are not pure coarsening rates, especially in the case of Mo, but also incorporate growth rate to an extent depending on the degree of overlap between growth and coarsening stages. Nonetheless, this analysis allows some comparison of the phase separation kinetics between alloys. It is interesting to note that in the Ti--7Al--0.25O and V-containing alloys, a fairly consistent coarsening rate of 4--\SI{7}{\nano\meter\cubed\per\day} is obtained. \begin{figure}[t!] \centering \includegraphics[width=88mm]{p1fig9_lsw_v05.pdf} \caption{Lifshitz--Slyozov--Wagner modelling assumes that a precipitation coarsening process is controlled by the diffusion of a rate-limiting species through the matrix, leading to a linear proportionality between precipitate volume and time.
Plotting $r^{3}$ against time, good linear fits were obtained for each alloy in the study, supporting a matrix diffusion-controlled coarsening mechanism for \textalpha{}$_2$ in \textalpha{}-Ti--Al.} \label{fig:lsw} \end{figure} \begin{table}[t!!]\begin{small} \centering \setlength{\tabcolsep}{2pt} \caption{Lifshitz--Slyozov--Wagner modelling was successfully applied to the SAXS data, Fig.~\ref{fig:lsw}, with least-squares goodness of fit \textit{R}$_{LS}^{2}$ of 0.91 or better. This model provides coarsening rate constants $K_{\mathrm{LSW}}$ and an estimate of the coarsening rate of precipitates in each alloy in terms of volume per unit time. It is noted that, for these alloys, there is likely a significant overlap of the growth and coarsening regimes. The peak precipitate number density during the coarsening process, as analysed in SAXS, is shown for comparison.} \begin{tabular}{l|c c c|c} \hline Material & $K_{\mathrm{LSW}}$ & \textit{R}$_{LS}^{2}$ & Coarsening rate & Peak $n$\\ & / \SI{E-31}{\meter\cubed\per\second} & & / \si{\nano\meter\cubed\per\day}\ & / 10$^{22}$~m$^{-3}$\\ \hline Ti--7Al & 1.7$\pm$0.1 & 0.98 & 15$\pm$1 & 3.7 \\ ~--0.25O & 0.46$\pm$0.03 & 0.98 & 4.0$\pm$0.3 & 19.3 \\ ~--1.1V & 0.83$\pm$0.07 & 0.97 & 7.2$\pm$0.6 & 7.9 \\ ~--1.1V--0.25O & 0.79$\pm$0.07 & 0.96 & 6.8$\pm$0.6 & 17.7 \\ ~--0.8Mo & 0.08$\pm$0.01 & 0.96 & 0.68$\pm$0.06 & 34.6 \\ ~--0.8Mo--0.25O & 0.13$\pm$0.02 & 0.91 & 1.1$\pm$0.2 & 73.3 \\ \hline \end{tabular} \label{table:lsw} \end{small}\end{table} A nonzero value of $r_0$ would indicate an incubation period between the introduction of isothermal ageing conditions and the onset of precipitation. In this study, $r_{0} = 0$ was set for LSW fitting as no incubation period is expected nor evident in the data. 
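As a quick consistency check (an illustrative Python sketch, not part of the original analysis), the coarsening-rate column of Table~\ref{table:lsw} should follow from the $K_{\mathrm{LSW}}$ column by unit conversion alone, since $1\times10^{-31}$~m$^{3}$~s$^{-1}$ corresponds to 8.64~nm$^{3}$~day$^{-1}$:

```python
# Unit-conversion check: 1e-31 m^3/s * 86400 s/day * 1e27 nm^3/m^3
# = 8.64 nm^3/day, so the two rate columns should agree.
K_LSW = {  # central values, in units of 1e-31 m^3/s (alloy keys abbreviated)
    "Ti-7Al": 1.7, "-0.25O": 0.46, "-1.1V": 0.83,
    "-1.1V-0.25O": 0.79, "-0.8Mo": 0.08, "-0.8Mo-0.25O": 0.13,
}
rate_nm3_day = {  # quoted coarsening rates, nm^3/day
    "Ti-7Al": 15.0, "-0.25O": 4.0, "-1.1V": 7.2,
    "-1.1V-0.25O": 6.8, "-0.8Mo": 0.68, "-0.8Mo-0.25O": 1.1,
}

FACTOR = 86400.0 * 1e27 * 1e-31  # (s/day) * (nm^3/m^3) * (1e-31 m^3/s) = 8.64

for alloy, k in K_LSW.items():
    converted = k * FACTOR  # nm^3/day
    assert abs(converted - rate_nm3_day[alloy]) / rate_nm3_day[alloy] < 0.05, alloy
```

All six central values agree to within the quoted rounding, confirming that the two columns express the same fitted quantity in two unit systems.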
Least squares linear regression for each alloy gave goodness of fit \textit{R}$_{LS}^{2}$ values of 0.9 or above for a \nicefrac{1}{3} power law, Fig.~\ref{fig:lsw}, indicating a diffusion-limited growth process, rather than an interface-controlled mechanism \cite{Ardell2013}. The $K_{\mathrm{LSW}}$ values obtained for these alloys, Table~\ref{table:lsw}, are reasonable when compared to those reported in an analogous study of \textgamma{}--\textgamma{}$^{\prime}$ Ni superalloys \cite{vorontsov2016}, considering the much slower formation of \textalpha{}$_2$ in Ti--Al alloys than of \textgamma{}$^{\prime}$ in Ni--Al alloys. Comparing the rates obtained for the different alloy compositions, Table~\ref{table:lsw}, and considering that a matrix diffusion-controlled growth mechanism is supported by the good fits obtained for the LSW model, it may be anticipated that the growth rate would depend only on the diffusivity of the key species, aluminium. Although Mo has been seen to slow phase separation kinetics for the \textalpha{}$\rightarrow$\textbeta{} transformation \cite{Ackerman2020}, in this instance the Mo concentration is very low and considered unlikely to have such a stark effect on number density through modification of Al diffusivity alone. It is suggested instead that, due to soft impingement of solute fields around growing \textalpha{}$_2$ precipitates, the coarsening rate is controlled by the nucleation density of the precipitate dispersion, reflected by a correspondence between rate and interprecipitate spacing $s_{\mathrm{eff}}$. A further outcome of this study is that a value for the \textalpha{}/\textalpha{}$_2$ interfacial energy may be calculated, since this is a term contained in the rate constant $K_{\mathrm{LSW}}$. Using the values given in Table~\ref{table:lswdata}, a value of $\mathit{\Gamma{}} =$~107~mJ~m$^{-2}$ was obtained for the base alloy Ti--7Al--0.05O.
Since coarsening of Ti$_3$Al in Ti has not been quantified before, literature estimates for comparison are not available. It should be noted that the value inferred for $\mathit{\Gamma{}}$ depends strongly on that assumed for $D$, which is itself an extrapolation from literature measurements at higher temperatures, and may be strongly affected by e.g. vacancy and minor solute content in the samples studied. Since $K_{\mathrm{LSW}}$ is linear in $\mathit{\Gamma{}}$, attributing the factor of $\sim$10 difference in coarsening rate entirely to interfacial energy would imply an order of magnitude difference in $\mathit{\Gamma{}}$ between the alloys in this series. This is unlikely given only minor differences in composition between alloys; a more sophisticated coarsening and growth analysis would be required to fully deconvolve the effects for the Mo-containing alloys, rather than the simplest possible dilute coarsening LSW analysis performed here. \begin{table}[h!]\begin{small} \centering \caption{Parameters used in the calculation to estimate \textalpha{}/\textalpha{}$_2$ interfacial energy based on LSW fitting for the Ti--7Al--0.05O alloy. The equilibrium concentration $C_{\mathrm{Al}}$ corresponds to the \SI{14.8}{\atpercent}~Al measured for the \textalpha{} phase of this alloy after 120~d ageing at \SI{550}{\celsius}.
$D_{\mathrm{Al}}(550~^{\circ{}}\mathrm{C})$ and $V_{\mathrm{m}}$ are calculated using data from \cite{LandW}.} \begin{tabular}{c c c c} \hline $K_{\mathrm{LSW}}$ & $D_{\mathrm{Al}}(550~^{\circ{}}\mathrm{C})$ & $C_{\mathrm{Al}}$ & $V_{\mathrm{m}}$\\ / \SI{E-31}{\meter\cubed\per\second} & / \si{\metre\squared\per\second} & / \si{\mol\per\metre\cubed} & / \si{\metre\cubed\per\mol}\\ \hline 1.7 & $6\times{}10^{-24}$ & $12.1\times{}10^{3}$ & $10.7\times{}10^{-6}$\\ \hline \label{table:lswdata} \end{tabular} \end{small}\end{table} \subsection{Effect of quenching temperature on \textalpha{}$_2$ formation} A second heat treatment study was conducted in order to establish a clearer mechanistic connection between the tertiary solutes (O, V, Mo) and differences in \textalpha{}$_2$ formation. Upon adding any of the three solutes, refinement of the \textalpha{}$_2$ dispersion was observed, to a greater or lesser extent depending on the solutes included. Noting the homogeneous distribution of \textalpha{}$_2$ precipitates across \textalpha{} grains in these alloys, it was suggested that the nucleation points must correspond to a homogeneously distributed lattice defect. A plausible candidate is the vacancy concentration in each alloy. It was proposed that, upon adding any solute, the resulting increased entropy of the alloy causes an increase in vacancy concentration, and that this is the common underlying mechanism controlling nucleation density. To establish whether a link exists between vacancy concentration and \textalpha{}$_2$ nucleation density, a single alloy composition was used, Ti--7Al--0.05O, while the thermal history of the samples prior to \textalpha{}$_2$ ageing was varied. The vacancy concentration in metals is empirically known to have an Arrhenius-type dependence on temperature.
Aiming to control the vacancy concentration in samples prior to ageing, pieces of the alloy in the IWQ starting condition were annealed at \SI{750}{\celsius} and \SI{950}{\celsius} to generate two different vacancy concentrations while remaining within the \textalpha{} phase field and staying above \textalpha{}$_2$ formation temperatures. The samples were then ice water quenched, cleaned to remove any oxide, and encapsulated under argon in a quartz ampoule before ageing at \SI{550}{\celsius} for 23~days. The resulting \textalpha{}$_2$ dispersions were then characterised using dark field TEM imaging and SAXS measurements, Fig.~\ref{fig:vacresults}. The SAXS data were fitted using a spheroidal precipitate shape with an aspect ratio of 2.0 and a contrast value of 2.2~$\times{}~10^{20}$~cm$^{-4}$, following the same methodology as for the main SAXS dataset. In order to establish whether these results are consistent with a vacancy-controlled nucleation mechanism, the number densities in each sample were compared to predicted vacancy concentration behaviour. The vacancy concentration at a temperature $T$ is empirically given by \[ N = N_{0}e^{-E_{\mathrm{f}}/k_{\mathrm{B}}T},\] where $N_{0}$ is a constant prefactor and $E_{\mathrm{f}}$ is the vacancy formation energy. For two different temperatures, $T_1$ and $T_2$, \[ \frac{N_1}{N_2} = \frac{\mathrm{exp}(-E_{\mathrm{f}}/k_{\mathrm{B}}T_{1})}{\mathrm{exp}(-E_{\mathrm{f}}/k_{\mathrm{B}}T_{2})},\] such that \[ E_{\mathrm{f}} = k_{\mathrm{B}} \left(\frac{1}{T_{2}}-\frac{1}{T_{1}}\right)^{-1} \mathrm{ln}\left(\frac{N_{1}}{N_{2}}\right).\] If it is assumed that $n \propto{} N$, with a proportionality constant independent of temperature, then the \textalpha{}$_2$ number densities $n$ may be used to estimate the vacancy formation energy in Ti--7Al--0.05O.
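Evaluating this expression with the two quench temperatures and the SAXS-derived number densities measured for them gives the quoted estimate directly (a short illustrative Python sketch; temperatures converted to kelvin):

```python
import math

# E_f = k_B * ln(N1/N2) / (1/T2 - 1/T1), with the alpha_2 number densities
# standing in for the relative vacancy concentrations (n assumed
# proportional to N, as in the text).
k_B = 8.617e-5                       # eV/K
T1, T2 = 950 + 273.15, 750 + 273.15  # quench temperatures, K
n1, n2 = 3.5e22, 2.2e21              # measured number densities, m^-3

E_f = k_B * math.log(n1 / n2) / (1 / T2 - 1 / T1)
print(f"E_f = {E_f:.2f} eV")  # ~1.5 eV, as quoted
```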
For the samples quenched from \SI{750}{\celsius} and \SI{950}{\celsius}, number densities of $2.2\times{}10^{21}$~m$^{-3}$ and $3.5\times{}10^{22}$~m$^{-3}$ were obtained from SAXS data fitting respectively, Table~\ref{table:vacancy}. This gives an estimate of $E_{\mathrm{f}} = 1.5\pm{}0.4$~eV. Previous experimental studies of vacancy formation in \textalpha{}-Ti are relatively scarce, but comparison may be made with the findings of Hashimoto~\textit{et~al.} \cite{Hashimoto1984} who measured a value of $1.27\pm{}0.05$~eV using positron annihilation. First-principles calculations for $E_{\mathrm{f}}$ in \textalpha{}-Ti have found values of 1.87~eV \cite{Connetable2011} and 1.97~eV \cite{Raji2009}. The value obtained in the present work is consistent with these earlier studies, lending support to a vacancy-mediated nucleation mechanism for \textalpha{}$_2$ in \textalpha{}-Ti--Al--X alloys, as a second-order effect of tertiary solute additions. \begin{table}[h!]\begin{small} \centering \caption{SAXS fitting results for specimens solutionised at and quenched from different solutionising temperatures, \textit{T}$_{sol}$, prior to ageing for 23~days at \SI{550}{\celsius} in order to compare the effects of different vacancy concentrations on volume fraction \textit{f}$_{\alpha_{2}}$, precipitate size as equivalent sphere diameter \textit{d}$_{\mathrm{eq}}$ and precipitate number density $n$. For both materials, a contrast value of \SI{2.2E20}~cm$^{-4}$ and an aspect ratio of 2.0 were used.} \begin{tabular}{c c c c} \hline \textit{T}$_{sol}$ / \si{\celsius} & \textit{f}$_{\alpha_{2}}$ & \textit{d}$_{\mathrm{eq}}$ / nm & \textit{n} / $10^{21}$ m$^{-3}$\\ \hline 750 & 0.012 & 21.8 & 2.21 \\ 950 & 0.058 & 14.7 & 34.9 \\ \hline \label{table:vacancy} \end{tabular} \end{small}\end{table} \begin{figure}[b!]
\centering \includegraphics[width=80mm]{p1fig11_vactem_v09.pdf} \caption{Effect of quenching temperature on $\alpha_2$ formation, where Ti--7Al--0.05O specimens were aged for 23~days after quenching from annealing temperatures $T_{\mathrm{anneal}}$ (a) \SI{750}{\celsius} and (b) \SI{950}{\celsius}, freezing in lower and higher vacancy concentrations prior to ageing respectively. Dark field TEM shows the expected spheroidal morphology and a qualitative indication of different dispersion characteristics. SAXS measurements of these samples showed clear differences in the position and intensity of the \textalpha{}$_2$ peak, as well as the influence of structure factor (associated with higher number density) for the \SI{950}{\celsius} sample.} \label{fig:vacresults} \end{figure} \section{Conclusions}\label{conclusions} In this study, the precipitation and coarsening of \textalpha{}$_2$ Ti$_3$Al in a Ti--Al alloy series were investigated, using TEM to identify the existence of the precipitates and their morphology, APT to characterise their composition, and SAXS to quantify their number density, size and fraction. The effects of interstitial solute O, substitutional solutes V and Mo, and quenching temperature were examined. The following conclusions are drawn. \begin{itemize}\setlength{\itemsep}{-1mm} \item Interstitial O increases the volume fraction of \textalpha{}$_2$ formed at equilibrium, which is in the region of 8--10 vol\%, while V and Mo have a relatively small effect.
The precipitates grow to up to \SI{30}{\nano\metre} after \SI{120}{\day} at \SI{550}{\celsius} (Ti--7Al--0.05O), with interparticle spacing of a similar magnitude; \item Addition of O, V or Mo increases the nucleation density of \textalpha{}$_2$, and leads to a finer precipitate dispersion; \item Growth of \textalpha{}$_2$ can be described using an LSW model, indicating diffusion control (rather than interface coherency control); \item A secondary study comparing \textalpha{}$_2$ formation between samples differing only by quenching temperature showed a difference in nucleation number density. This gave an activation energy consistent with a vacancy nucleation mechanism, $E_{\mathrm{f}} = 1.5\pm{}0.4$~eV; \item This leads to the inference that the effect of solute O, V and Mo is, broadly, to increase the nucleation number density and thereby slow coarsening due to an earlier onset of soft impingement. \end{itemize} \section*{Acknowledgements} \noindent\small FFD was funded by Rolls-Royce plc and by the EPSRC Centre for Doctoral Training in the Advanced Characterisation of Materials (EP/L015277/1). DD was funded by a Royal Society Industrial Fellowship and EPSRC (EP/K034332/1). BG and PK are grateful for funding from the Max Planck Society through the Laplace project. The authors are grateful to U. Tezins, C. Bross and A. Sturm for their technical support of the APT and FIB facilities at the Max-Planck Institut f\"{u}r Eisenforschung, and for useful discussions with S. Balachandran. This research used resources of the Advanced Photon Source, a U.S. Department of Energy (DOE) Office of Science User Facility operated for the DOE Office of Science by Argonne National Laboratory under Contract No. DE-AC02-06CH11357. Useful conversations and technical assistance are also gratefully acknowledged from D. Isheim at Northwestern and A. Minor, R. Zhang and R. Traylor at UC Berkeley and Lawrence Berkeley National Laboratory, along with the help of K.M. Rahman and I.
Bantounas at Imperial with the alloy processing. \bibliographystyle{model1-num-names}
\section{Introduction} \label{intro} Since the late 1990s, considerable effort has been made to describe the observable universe as a brane embedded in a higher dimensional space \cite{sundrum/1999}-\cite{flanagan/2000}. Some results obtained from such a setup are remarkable. Braneworld models of dark energy were recently presented in References \cite{jawad/2015}-\cite{rani/2016}. In Reference \cite{sahni/2005}, the possibility that the $\Lambda$CDM cosmological model is a braneworld model in disguise was investigated. In the context of the astrophysics of compact objects, braneworld models are able to predict some deviations from standard General Relativity (GR) outcomes and make contact with some peculiar observations \cite{germani/2001}-\cite{lugones/2017}. For recent applications of braneworld models, we suggest References \cite{prasetyo/2018}-\cite{barbosa-cendejas/2014}. The braneworld scenario was originally proposed as an approach to the hierarchy problem, as can be checked, for instance, in References \cite{yang/2012}-\cite{das/2008}. The concept of extra dimensions has also been used in attempts to unify the four fundamental forces of nature \cite{hall/2002}-\cite{appelquist/1984}. Extra-dimensional universe configurations are not restricted to braneworld scenarios. There are also the renowned Kaluza-Klein (KK) models \cite{visser/1985}-\cite{hohm/2013}. The relic density of KK dark matter in universal extra dimensions was calculated \cite{kong/2006}. The virtual effects of KK states on Higgs physics in universal extra dimensional models were examined \cite{petriello/2002}. F. Darabi and P.S. Wesson introduced a generalized gravitational conformal invariance in the context of non-compactified five-dimensional (5D) KK theory \cite{darabi/2002}. In astrophysics, the stability of strange stars in extra dimensions has been investigated recently \cite{malheiro/2019}. P.S.
Wesson has contributed significantly to KK cosmology as well as to the interpretation of the extra dimension \cite{wesson/1992}-\cite{wesson/1992b}. Together with collaborators, Wesson has also investigated the effective properties of matter in KK theory \cite{liu/1994}, outlined a Machian interpretation of KK gravity \cite{mashhoon/1994}, applied some classical tests to the theory \cite{kalligas/1995} and derived the associated equation of motion \cite{wesson/1995}. The outcomes of some more recent articles by Wesson et al. on the 5D universe can be appreciated in the following. J.M. Overduin et al. have used measurements of geodesic precession from the Gravity Probe B experiment to constrain possible departures from Einstein's GR for a spinning test body in KK theory \cite{overduin/2013}. C. Zhang and collaborators have used Wetterich's parameterized equation of state (EoS) to obtain cosmological solutions in a 5D Ricci-flat Universe \cite{zhang/2006}. In \cite{seahra/2002}, some relations for the embedding of spatially flat Friedmann-Lema\^itre-Robertson-Walker (FLRW) cosmological models in flat KK manifolds were presented. The cosmological constant problem, namely, the huge discrepancy between theoretical and observed values of the cosmological constant in standard $\Lambda$CDM cosmology, was investigated in KK gravity by P.S. Wesson and H. Liu \cite{wesson/2001}. F. Darabi et al. have derived a quantum cosmology from KK theory with a non-compactified extra dimension \cite{darabi/2000}. In \cite{wesson/2000}, Wesson et al. have obtained an exact solution of the 5D field equations that describes a shock wave moving in time and in the extra KK coordinate. Such a solution suggested that the four-dimensional (4D) big bang was a 5D shock wave.
Particularly regarding the interpretation of the extra dimension in the 4D observable universe, Wesson has proposed the so-called Induced Matter Model (IMM), which can be appreciated in \cite{liu/1994},\cite{moraes/2015}-\cite{fukui/2001}. It is based on the following concept. The KK field equations read \begin{equation}\label{i1} G_{AB}=0, \end{equation} with $G_{AB}$ being the Einstein tensor and the indices $A,B$ running from $0$ to $4$. From Eq.(1), it can be seen that the KK field equations depend only on the 5D metric $g_{AB}$. Wesson's idea consists in collecting in Eq.(1) the terms that depend on the extra coordinate and making them play the role of an induced energy-momentum tensor in 4D. Further applications of the IMM can be appreciated in Refs.\cite{ponce_de_leon/2010}-\cite{halpern/2000}. In the present article we intend to derive and investigate, by means of Wesson's model, the Friedmann-like equations obtained from a 5D metric. The field equations will be taken as Eq.(1) in the presence of a 5D cosmological constant $\Lambda$, that is \cite{sahni/2003} \begin{equation}\label{ie} G_{AB}+\Lambda g_{AB}=0. \end{equation} We will be particularly concerned with the role of the extra-dimension scale factor in the metric \cite{mm/2012}-\cite{la_camera/2010} \begin{equation}\label{i3} ds^{2}=dt^{2}-a(t)^{2}[dr^{2}+r^{2}(d\theta^{2}+\sin^{2}\theta d\phi^{2})]-\xi(t)^{2}dl^{2}. \end{equation} In Eq.(\ref{i3}), $a(t)$ is the scale factor of the observable universe and $\xi(t)$ is the extra-dimension scale factor. Moreover, we are assuming the spatial curvature of the universe to be null, in accordance with recent observational data on the temperature fluctuations of the cosmic microwave background radiation \cite{hinshaw/2013}. Still in (\ref{i3}), $t$ is the time coordinate, $r,\theta$ and $\phi$ are the polar spherical coordinates and $l$ is the extra spatial coordinate. Throughout the article, natural units will be assumed, unless otherwise stated.
In the present article, we shall investigate the cosmological solutions obtained from the substitution of (\ref{i3}) in (\ref{i1}) and in (\ref{ie}). We will seek agreement with recent cosmological observational data, which naturally constrain the extra-dimensional features of the model. \section{4D dynamics from 5D empty space} In the present section we will substitute Eq.(\ref{i3}) in Eqs.(\ref{i1}) and (\ref{ie}). For all cases we will consider that matter in the 4D observable universe is a manifestation of a 5D universe devoid of matter, through the application of the IMM. That is to say that the terms in the 5D Einstein tensor for (\ref{i3}) which depend on the extra coordinate will ``be moved'' to the {\it rhs} of Eqs.(\ref{i1}) and (\ref{ie}) to play the role of an induced energy-momentum tensor. Throughout the whole article, the energy-momentum tensor of a perfect fluid will be assumed, that is, $T_{A}^{B}=\mbox{diag}(\rho,-p,-p,-p,0)$, with $\rho$ being the matter-energy density and $p$ the pressure of the universe. Note that $T_4^4=0$ since we will consider, as in braneworld models, that matter is restricted to the 4D universe. \subsection{Field equations without cosmological constant}\label{ss:fewcc} The non-null components of the Einstein tensor obtained when substituting metric (3) in field equations (1) read: \begin{equation} G_{0}^{0}=3\left[\left(\frac{\dot{a}}{a}\right)^{2}+\left(\frac{\dot{a}}{a}\right)\left(\frac{\dot{\xi}}{\xi}\right)\right], \label{t4} \end{equation} \begin{equation} G_{1}^{1}=G_{2}^{2}=G_{3}^{3}=\left(\frac{\dot{a}}{a}\right)^{2}+2\left(\frac{\dot{a}}{a}\right)\left(\frac{\dot{\xi}}{\xi}\right)+2\left(\frac{\ddot{a}}{a}\right)+\frac{\ddot{\xi}}{\xi}, \end{equation} \begin{equation} G_{4}^{4}=3\left[\left(\frac{\dot{a}}{a}\right)^{2}+\left(\frac{\ddot{a}}{a}\right)\right], \label{t6} \end{equation} where dots indicate time derivatives.
By applying the IMM to such components, one has \begin{eqnarray} \rho= - \frac{3}{8\pi}\frac{\dot{a}}{a}\frac{\dot{\xi}}{\xi}, \label{fewcc1}\\ p= \frac{1}{4\pi}\left(\frac{\dot{a}}{a}\frac{\dot{\xi}}{\xi}+\frac{1}{2}\frac{\ddot{\xi}}{\xi}\right)\label{fewcc2}, \end{eqnarray} and from $G_{44}=0$ we also obtain the constraint equation \begin{equation} \left(\frac{\dot{a}}{a}\right)^{2}+\frac{\ddot{a}}{a}=0. \label{fewcc3} \end{equation} By solving Eq.(\ref{fewcc3}), we have \begin{equation}\label{scalefactor} a(t) = c_{1}\sqrt{t}. \end{equation} Throughout the article, $c_i$, with $i=1,2,3,...$, are constants. It is worth remarking that Eq.(10) is in agreement with a radiation-dominated universe, since in standard cosmology, $a\sim t^{1/2}$ occurs exactly for such a stage of the universe evolution \cite{ryden/2003}. Remarkably, when Kaluza developed his extradimensional theory of gravity, today called KK gravity, his intent was to obtain from $G_{AB}=0$ alone both the 4D Einstein field equations with matter and Maxwell's equations of electromagnetism, as can be checked in \cite{overduin/1997}. We can substitute (10) in the $G_{00}$ component of Eq.(1) for metric (3) and derive the solution for $\xi(t)$ from \begin{equation}\label{00} \frac{\dot{a}}{a}+\frac{\dot{\xi}}{\xi}=0. \end{equation} The result is \begin{equation}\label{ecscalefactor} \xi(t) = \frac{c_2}{\sqrt{t}}. \end{equation} It can be verified that Eqs. (10) and (12) are solutions of Eq.(1). It is interesting to remark that the solution obtained for $\xi(t)$ may indicate a compactification of the extra coordinate as time passes. This can be clearly verified by deriving the corresponding Hubble parameter $H_l=\dot{\xi}/\xi$, which reads \begin{equation}\label{echubble} H_l(t) = -\frac{1}{2\, t}, \end{equation} and a negative Hubble parameter indicates compactification rather than expansion of the corresponding space.
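The solutions above are straightforward to verify numerically. The following Python sketch (illustrative, with $c_1=c_2=1$) checks the constraint equation, the relation $\dot{a}/a+\dot{\xi}/\xi=0$ and the induced matter sector by finite differences:

```python
import math

# Finite-difference check of the vacuum solutions a(t) = sqrt(t),
# xi(t) = 1/sqrt(t) (c_1 = c_2 = 1).
def d1(f, t, h=1e-6):
    return (f(t + h) - f(t - h)) / (2 * h)

def d2(f, t, h=1e-4):
    return (f(t + h) - 2 * f(t) + f(t - h)) / h**2

a  = lambda t: math.sqrt(t)
xi = lambda t: 1 / math.sqrt(t)

t = 2.0
Ha, Hxi = d1(a, t) / a(t), d1(xi, t) / xi(t)

# constraint equation: (da/a)^2 + dda/a = 0
assert abs(Ha**2 + d2(a, t) / a(t)) < 1e-5
# G_00 = 0 reduces to da/a + dxi/xi = 0
assert abs(Ha + Hxi) < 1e-6
# induced matter from the Friedmann-like equations: radiation EoS, p/rho = 1/3
rho = -(3 / (8 * math.pi)) * Ha * Hxi
p = (1 / (4 * math.pi)) * (Ha * Hxi + 0.5 * d2(xi, t) / xi(t))
assert abs(p / rho - 1 / 3) < 1e-4
```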
Solutions (10) and (12) when substituted in (7) and (8) yield, respectively, \begin{equation}\label{rho1} \rho(t) = \frac{3}{32\pi \, t^2}, \end{equation} \begin{equation}\label{p1} p(t) = \frac{1}{32\pi \, t^2}. \end{equation} We can see from Eqs.(14) and (15) that $\rho$ and $p$, in this model, decay quadratically with time, $\rho,p\propto t^{-2}$. Such a behaviour can also be seen in braneworld models \cite{sahni/2003,keresztes/2007}. We can also note that, remarkably, by dividing (15) by (14), one has $\omega=p/\rho=1/3$, which is the EoS parameter of a radiation-dominated universe \cite{ryden/2003}. This result can also be verified in the Friedmann-like equations (7)-(8). \subsection{Field equations with cosmological constant}\label{ss:fewcc2} \subsubsection{Case I: $\Lambda>0$}\label{sss:lg0} Let us now work with Eq.(2). By substituting metric (3) in (2), we can write, through the IMM application, the following Friedmann-like equations: \begin{eqnarray} \rho = - \frac{3}{8\pi}\left(\frac{\dot{a}}{a}\frac{\dot{\xi}}{\xi}+\frac{\Lambda}{3}\right), \label{t21}\\ p = \frac{1}{8\pi}\left(2\frac{\dot{a}}{a}\frac{\dot{\xi}}{\xi}+\frac{\ddot{\xi}}{\xi}+\Lambda\right), \label{t22}\\ \left(\frac{\dot{a}}{a}\right)^{2}+\frac{\ddot{a}}{a}=-\frac{\Lambda}{3}. \label{t23} \end{eqnarray} Eq.(18) can be solved for the scale factor, yielding \begin{equation}\label{t24} a(t) = c_3\sqrt{\Big\vert\sin\left(\sqrt{\frac{2}{3}\Lambda} \, t\right)\Big\vert}. \end{equation} The evolution of the scale factor (19) in time can be appreciated in Fig.\ref{fig1}. \begin{figure*}[] \centering \includegraphics[width=105mm]{alg0} \caption{Evolution of the scale factor as a function of time in natural units, for $c_3=\Lambda=1$.} \label{fig1} \end{figure*} By analysing Fig.\ref{fig1} we are led to conclude that a positive 5D cosmological constant yields a cyclic or bouncing universe \cite{steinhardt/2002}-\cite{battefeld/2015}.
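As before, the solution can be checked numerically. The sketch below (Python, illustrative, with $c_3=\Lambda=1$) verifies that (19) satisfies the constraint (18) at several times inside the first cycle:

```python
import math

# Finite-difference check that a(t) = sqrt(|sin(omega t)|), with
# omega = sqrt(2*Lambda/3), satisfies (da/a)^2 + dda/a = -Lambda/3
# (here Lambda = c_3 = 1).
Lam = 1.0
omega = math.sqrt(2 * Lam / 3)
a = lambda t: math.sqrt(abs(math.sin(omega * t)))

def d1(f, t, h=1e-6):
    return (f(t + h) - f(t - h)) / (2 * h)

def d2(f, t, h=1e-4):
    return (f(t + h) - 2 * f(t) + f(t - h)) / h**2

for t in (0.5, 1.0, 2.0):  # inside the first cycle, where sin(omega t) > 0
    lhs = (d1(a, t) / a(t)) ** 2 + d2(a, t) / a(t)
    assert abs(lhs + Lam / 3) < 1e-5
```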
In possession of Eq.(19), we can use the non-null components of Eq.(2) to write \begin{equation}\label{lp1} \xi(t) = c_4\frac{\big\vert\cos\left(\sqrt{\frac{2}{3}\Lambda} \, t\right)\big\vert}{\sqrt{\big\vert \sin \left(\sqrt{\frac{2}{3}\Lambda} \, t\right) \big\vert}}. \end{equation} \begin{figure*}[h!] \centering \includegraphics[width=105mm]{xilg0} \caption{Evolution of the extra-dimension scale factor as a function of time in natural units, for $c_4=\Lambda=1$.} \label{fig2} \end{figure*} From Fig.\ref{fig2}, we can see that $\xi$ completes each cycle on the same time scale as $a$ does. We can also see that, by keeping in mind that $a=1$ at present, the length scale of the extra dimension is minimum today, which could justify the absence of evidence for extra dimensions at the Large Hadron Collider \cite{chatrchyan/2012}-\cite{datta/2013}. From (19) and (20), we can write the explicit solutions for $\rho(t)$ and $p(t)$ as \begin{eqnarray} \rho(t) = \frac{\Lambda}{16 \pi } \cot^2\left(\sqrt{\frac{2}{3}\Lambda} \, t\right), \\ p(t) = \frac{\Lambda}{48 \pi }\left[\cot^2\left(\sqrt{\frac{2}{3}\Lambda} \, t \right)+4\right]. \end{eqnarray} \label{sec:1} Although bouncing models have their importance especially because they evade the Big-Bang singularity, we should discard the present model due to the impossibility of predicting the late-time accelerated expansion regime of the universe \cite{riess/1998,perlmutter/1999} from Eq.(19). \subsubsection{Case II: $\Lambda<0$}\label{sss:ll0} Following the same approach as in the previous section, now for $\Lambda<0$, we obtain the scale factors as \begin{eqnarray}\label{t25} a(t) &=& c_5\sqrt{\sinh\left(\sqrt{\frac{2}{3}\vert\Lambda\vert} \, t \right)} \, , \\ \xi(t) &=& c_6 \frac{\cosh \left(\sqrt{\frac{2}{3} \vert\Lambda\vert} \, t \right)}{\sqrt{\sinh \left(\sqrt{\frac{2}{3} \vert\Lambda\vert } \, t \right)}}.
\label{5DscaleNegConstant} \end{eqnarray} The evolution of those scale factors can be appreciated in Figures \ref{fig3}-\ref{fig4} below. \begin{figure*}[h!] \centering \includegraphics[width=105mm]{all0} \caption{Evolution of the scale factor as a function of time in natural units, for $c_5=1$ and $\Lambda=-1$.} \label{fig3} \end{figure*} \begin{figure*}[h!] \centering \includegraphics[width=105mm]{xill0} \caption{Evolution of the extra-dimension scale factor as a function of time in natural units, for $c_6=1$ and $\Lambda=-1$.} \label{fig4} \end{figure*} We can see from Fig.\ref{fig3} that $a(t)$ assumes an exponential behaviour as time grows, which may be an indication of the recent cosmic acceleration \cite{riess/1998,perlmutter/1999}. This will be clarified in Fig.\ref{fig5} below. From Fig.\ref{fig4}, we can see that the extra dimension is apparently large at the primordial stages of the universe. Then, it naturally suffers a process of compactification, assuming its minimum value for $t\sim1$. Later, it maximizes its length scale once again. It is possible to derive a relation between the scale factors $a(t)$ and $\xi(t)$. Starting from (4) and (6) for $\Lambda<0$, we obtain the system of equations \begin{eqnarray} \frac{\dot{a}}{a}\frac{\dot{\xi}}{\xi}+\left(\frac{\dot{a}}{a}\right)^{2} &=& \frac{\vert\Lambda\vert}{3} \, , \label{t29} \\ \frac{\ddot{a}}{a}+\left(\frac{\dot{a}}{a}\right)^{2} &=& \frac{\vert\Lambda\vert}{3} \, . \label{t30} \end{eqnarray} Subtracting (27) from (28) leads to \begin{equation} \frac{\dot{\xi}}{\xi}=\frac{\ddot{a}}{\dot{a}} \, . \end{equation} Thus, we obtain a relation between the extra-dimension scale factor $\xi$ and the time derivative of $a$ as \begin{equation} \xi=K\dot{a}, \end{equation} with $K$ a constant. Therefore, \begin{equation} \frac{\xi}{a}=KH=\frac{c_6}{c_5}\sqrt{\frac{6}{\vert\Lambda\vert}}H, \label{t39} \end{equation} where $H=H(t)=\frac{\dot{a}}{a}$ is the Hubble parameter. 
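The relation between the two scale factors can be verified directly from the explicit solutions; the SymPy sketch below (with $L$ standing for $\vert\Lambda\vert$) confirms both $\dot{\xi}/\xi=\ddot{a}/\dot{a}$ and the value of the constant $K$:

```python
import sympy as sp

t, L, c5, c6 = sp.symbols('t L c_5 c_6', positive=True)  # L = |Lambda|
alpha = sp.sqrt(sp.Rational(2, 3) * L)

# Scale factors of Eqs.(23)-(24)
a = c5 * sp.sqrt(sp.sinh(alpha * t))
xi = c6 * sp.cosh(alpha * t) / sp.sqrt(sp.sinh(alpha * t))

adot = sp.diff(a, t)

# xi'/xi should equal a''/a'
rel = sp.simplify(sp.diff(xi, t) / xi - sp.diff(a, t, 2) / adot)
print(rel)  # 0

# xi = K*adot with K = (c6/c5)*sqrt(6/|Lambda|), as in Eq.(31)
K = sp.simplify(xi / adot)
K_residual = sp.simplify(K - (c6 / c5) * sp.sqrt(6 / L))
print(K_residual)  # 0
```

Both residuals vanish identically, so $\xi/\dot{a}$ is indeed time-independent.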
The solutions for the induced matter content read \begin{eqnarray} \rho(t) &=& \frac{\vert\Lambda\vert}{16 \pi} \coth ^2\left(\sqrt{\frac{2}{3}\vert\Lambda\vert} \, t \right) \, ,\label{rholl0} \\ p(t) &=& \frac{\vert\Lambda\vert} {48 \pi } \left[\coth ^2\left(\sqrt{\frac{2}{3}\vert\Lambda\vert} \, t \right)-4\right] \, ,\label{pll0} \end{eqnarray} and the induced density can be rewritten as \begin{equation} \rho=\frac{\vert\Lambda\vert}{16\pi}\left(\frac{c_5}{c_6}\frac{ \, \xi}{ \, a}\right)^{2} \end{equation} or \begin{equation} \rho=\frac{3H^{2}}{8\pi}, \end{equation} where \begin{equation} H(t)=\sqrt{\frac{\left|\Lambda\right|}{6}}\, \coth\left(\sqrt{\frac{2}{3}\left|\Lambda\right|} \, t\right). \end{equation} The Hubble parameter has its evolution in time shown in Figure \ref{fig5}. \begin{figure*}[h!] \centering \includegraphics[width=105mm]{Hll0} \caption{Evolution of the Hubble parameter as a function of time in natural units, for $\Lambda=-1$.} \label{fig5} \end{figure*} We see from Fig.\ref{fig5} that the predicted Hubble parameter starts evolving as $\sim1/t$, which is, indeed, expected from standard model predictions \cite{ryden/2003}. After a period of time, $H(t)$ becomes approximately constant. It is known that an exponential scale factor describes the cosmic acceleration. From the definition of the Hubble parameter, an exponential scale factor yields a constant Hubble parameter. In this way, the nearly constant behaviour that $H(t)$ assumes for high values of time is an indication of the recent cosmic acceleration. \section{Bag model-like equation of state for the universe evolution and the deceleration parameter}\label{sec: eos} In this section we will investigate more deeply the solutions obtained in the previous section for the negative 5D cosmological constant case. We will show that the induced matter-energy density and pressure can be related through a bag model-like EoS. We will also derive, from the scale factor solution, the deceleration parameter of the model. 
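Before moving on, the closed-form expressions above for $\rho(t)$ and $p(t)$ can be cross-checked against the Friedmann-like equations (16)-(17) with $\Lambda=-\vert\Lambda\vert$; the snippet below is a SymPy sketch, writing $L$ for $\vert\Lambda\vert$ and setting $c_5=c_6=1$, since the integration constants cancel out of the logarithmic derivatives:

```python
import sympy as sp

t, L = sp.symbols('t L', positive=True)  # L = |Lambda|
alpha = sp.sqrt(sp.Rational(2, 3) * L)
pi = sp.pi

a = sp.sqrt(sp.sinh(alpha * t))                        # c5 = 1
xi = sp.cosh(alpha * t) / sp.sqrt(sp.sinh(alpha * t))  # c6 = 1
H = sp.diff(a, t) / a
Hxi = sp.diff(xi, t) / xi

# Friedmann-like equations (16)-(17) evaluated with Lambda = -L
rho = -3 / (8 * pi) * (H * Hxi - L / 3)
p = 1 / (8 * pi) * (2 * H * Hxi + sp.diff(xi, t, 2) / xi - L)

r1 = sp.simplify(rho - L / (16 * pi) * sp.coth(alpha * t)**2)      # density above
r2 = sp.simplify(p - L / (48 * pi) * (sp.coth(alpha * t)**2 - 4))  # pressure above
r3 = sp.simplify(rho - 3 * H**2 / (8 * pi))                        # rho = 3H^2/8pi
print(r1, r2, r3)  # 0 0 0
```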
\subsection{Unified equation of state for the universe evolution} Considering the cases in which $\Lambda\neq0$, the expressions for the density and pressure can be remarkably written in a unified form as \begin{equation} p=\frac{\left(\rho\pm4B\right)}{3}, \label{t36} \end{equation} where the constant $B=\frac{|\Lambda|}{16\pi}$ and the positive sign stands for $\Lambda>0$ while the negative sign stands for $\Lambda<0$. Eq.(\ref{t36}) remarkably resembles the MIT bag model equation of state (check, for instance, \cite{maieron/2004,nicotra/2006,dkmkmr/2019}), for which $B$ is the so-called bag constant. Naturally, we are not claiming that the universe is made of quarks confined inside a bag, but that the EoS for the universe has the same mathematical form. The bag constant here is in fact the bulk energy density necessary to create the vacuum in the flat 5D space, and in this sense, it plays the same role as the bag constant in the MIT model, that is, the energy density necessary to create a bag in the QCD vacuum. As we will see, for the universe evolution given by this EoS, the constant $B$ will be identified with the dark energy in the 4D universe. For the case of cosmological interest, namely the case $\Lambda<0$, we can separate the density into matter-radiation and dark energy components, so that \begin{eqnarray} \label{t541} \rho &=& \rho_{m}+\rho_{\Lambda} \, , \\ \rho_{m} &=& \frac{|\Lambda|}{16\pi} \, \mathrm{cosech}^{2}\left(\sqrt{\frac{2}{3}|\Lambda|} \, t\right) \, , \\ \rho_{\Lambda} &=& \frac{|\Lambda|}{16\pi} \, . \label{t54} \end{eqnarray} We can see from (35) and (38) that the bag energy constant $B$ in this bag model-like unified EoS for the universe plays the role of dark energy in the 4D universe. One can write the EoS parameter as \begin{equation} \omega=-1+\frac{4}{3} \, \mathrm{sech}^{2}\left(\sqrt{\frac{2}{3}|\Lambda|} \, t\right) \end{equation} whose evolution in time can be appreciated in Fig.\ref{fig6}. \begin{figure*}[h!] 
\centering \includegraphics[width=105mm]{omegall0} \caption{Evolution of the equation of state parameter as a function of time in natural units, for $\Lambda=-1$.} \label{fig6} \end{figure*} The model prediction for the evolution of the EoS parameter in time, according to Fig.\ref{fig6}, is remarkable. One can note that for small values of time, $\omega\sim1/3$. According to the standard model (as mentioned above), the primordial value of $\omega$ is indeed $1/3$, as the primordial universe dynamics is dominated by radiation, such that $p=\rho/3$ \cite{ryden/2003}. As the universe expands and cools down, it allows pressureless matter to be formed. This represents the matter-dominated stage of the universe, for which $\omega\sim0$, which is also depicted in Fig.\ref{fig6}. Last, but definitely not least, Fig.\ref{fig6} indicates that for high values of time, $\omega\sim-1$. According to recent observations of the cosmic microwave background radiation temperature fluctuations, $\omega=-1.073^{+0.090}_{-0.089}$ \cite{hinshaw/2013}. This negative pressure fluid is responsible for the cosmic acceleration in the standard model. Therefore, our present approach has also revealed a dominant negative pressure fluid for high values of time, but remarkably, it has also predicted the other stages of the universe evolution, namely the radiation- and matter-dominated eras, in a continuous and analytical form. It is important to show that the expression (36) for the density satisfies the continuity equation in 4D. Starting from the continuity equation \begin{equation} \dot{\rho}+3\frac{\dot{a}}{a}(\rho+p)=0, \, \end{equation} substituting (35) and integrating both sides leads to \begin{equation} \rho(t) = \frac{c_{7}}{a(t)^{4}} + B. \end{equation} Comparing the last expression with Eqs. (36) and (38) and using the expression (23) for $a(t)$, we obtain $c_{7}={c_{5}}^{4} \frac{|\Lambda|}{16\pi}$, which proves that the continuity equation in 4D is satisfied in our model. 
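The continuity-equation check above can also be reproduced symbolically; a SymPy sketch (with $L$ standing for $\vert\Lambda\vert$):

```python
import sympy as sp

t, L, c5 = sp.symbols('t L c_5', positive=True)  # L = |Lambda|
alpha = sp.sqrt(sp.Rational(2, 3) * L)
pi = sp.pi

a = c5 * sp.sqrt(sp.sinh(alpha * t))
H = sp.diff(a, t) / a

B = L / (16 * pi)                    # bag-like constant
rho = B * sp.coth(alpha * t)**2      # induced density for Lambda < 0
p = (rho - 4 * B) / 3                # unified EoS, negative-sign branch

# 4D continuity equation: rho' + 3H(rho + p) = 0
cont = sp.simplify(sp.diff(rho, t) + 3 * H * (rho + p))
print(cont)  # 0

# rho = c7/a^4 + B with c7 = c5^4 |Lambda| / (16 pi)
c7 = c5**4 * L / (16 * pi)
residual = sp.simplify(rho - (c7 / a**4 + B))
print(residual)  # 0
```

The first residual confirms the conservation law; the second confirms the integration constant $c_7$ quoted above.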
\subsection{The deceleration parameter} The deceleration parameter is defined as \begin{equation} q(t) =-\frac{\ddot{a} \, a}{\dot{a}^{2}} \end{equation} so that $q>0$ indicates a decelerated expansion and $q<0$ indicates an accelerated expansion. In the present model, it can be shown that \begin{equation} q=-\frac{\dot{\xi}}{\xi}\frac{a}{\dot{a}}=-\frac{H_{l}}{H}. \end{equation} Therefore, remarkably, the deceleration parameter in our model is the negative of the ratio between the Hubble parameter of the extra-dimension scale factor, $H_{l}=\dot{\xi}/\xi$, and the Hubble parameter in 4D. Explicitly, the deceleration parameter for $\Lambda<0$ reads \begin{equation} q(t) = 1-2 \tanh^2\left(\sqrt{\frac{2}{3}|\Lambda|}\, t \right). \, \label{t48} \end{equation} \section{Cosmological parameters in terms of redshift and the observational analysis} \label{Hubble} With the purpose of confronting our solutions with observational data, we will study the behavior of the Hubble parameter and other cosmological parameters in terms of the redshift rather than of time. We will concentrate our attention on the case $\Lambda<0$. Taking into account the scale factor obtained in (23), the redshift can be written as \begin{equation}\label{redshiftLambdaPositive} z(t) = -1 + \frac{1}{c_5\sqrt{\sinh\left(\sqrt{\frac{2}{3}\vert\Lambda\vert} \, t \right)}} . \end{equation} The Hubble parameter is then expressed in terms of redshift as follows \begin{equation} H(z)= \sqrt{\frac{\vert\Lambda\vert}{6}}\coth\left\{\mathrm{arcsinh}\left[\frac{1}{c_5^{2}(1+z)^{2}}\right]\right\} \, . \end{equation} The above equation gives a relation between the Hubble constant, the cosmological constant and the integration constant $c_5$ as \begin{equation} H_0=\sqrt{\frac{\vert\Lambda\vert}{6}}\coth\left[\mathrm{arcsinh}\left(\frac{1}{c_5^{2}}\right)\right] \, . \label{H0c5L} \end{equation} \subsection{Observational constraints} Hubble parameter data as a function of redshift yield one of the most straightforward cosmological tests available today. 
It consists in constraining cosmological models with values of the expansion rate as a function of redshift. It is even more interesting when the Hubble parameter data come from estimates of differential ages of objects at high redshifts, because they are inferred from astrophysical observations alone, not depending on any background cosmological model (check References \cite{SternEtAl10,zt}). The data we use here come from the 51 $H(z)$ data compilation from Maga\~na {\it et al.} \cite{Magana2018}. This compilation consists of 20 clustering (from Baryon Acoustic Oscillations and Luminous Red Galaxies) and 31 differential age $H(z)$ data. We choose to work here only with the 31 differential age $H(z)$ data\footnote{Marked as ``DA'' in Table 1 of Ref. \cite{Magana2018}.}, because they do not depend on any background cosmological model. The age estimates depend only on models of chemical evolution of objects at high redshifts. $H(z)$ estimates from clustering like Baryon Acoustic Oscillations usually assume a standard cosmological model in order to obtain the data from surveys. In all analyses here, we have written a $\chi^2$ function for the parameters, with the likelihood given by ${\mathcal L}\propto e^{-\chi^2/2}$. The $\chi^2$ function for $H(z)$ data is given by the following: \begin{equation} \chi^2_H = \sum_{i = 1}^{31}\frac{{\left[ H_{obs,i} - H(z_i,\mathbf{s})\right] }^{2}}{\sigma^{2}_{H_i,obs}} , \label{chi2H} \end{equation} where $\mathbf{s}$ is the parameter vector, which we choose to be $\mathbf{s}=(c_5,H_0)$. $\Lambda$ can be related to these parameters through Eq.(\ref{H0c5L}). In Figure \ref{Hz31} below, we can see the 31 $H(z)$ data used here and the best fit $H(z)$ we have found by minimizing $\chi^2_H$. \begin{figure*}[h!] \centering \includegraphics[width=105mm]{Hz31cl.pdf} \caption{Hubble parameter as a function of redshift for the best fit parameters from the 31 $H(z)$ data ($H_0=72.2$ km/s/Mpc, $c_5=0.60$). 
We also show a curve with $H_0=67.4$ km/s/Mpc, in agreement with Planck data \cite{BAO,Planck} for the Hubble parameter and for a universe age of $13.8$ Gyr, which corresponds to $c_5=0.58$. The blue region corresponds to a 2$\sigma$ (95.4\%) c.l. around the best fit.} \label{Hz31} \end{figure*} In order to find the constraints over the free parameters, we have assumed flat priors for $c_5$ and $H_0$ and have sampled the posteriors with the so-called Affine Invariant Monte Carlo Markov Chain Ensemble Sampler of \cite{GoodWeare}, which was implemented in {\sffamily Python} language with the {\sffamily emcee} software by \cite{ForemanMackey13}. In order to plot all the constraints on each model, we have used the freely available software {\sffamily getdist}\footnote{{\sffamily getdist} is distributed as part of the {\sffamily COSMOMC} package \cite{cosmomc}.}, in its {\sffamily Python} version. The results of this analysis can be seen in Fig.\ref{Hz31-triangle} and Table \ref{tabHz31}. \begin{figure*}[h!] \centering \includegraphics[width=.8\linewidth]{gmitbagHz31-triangle.pdf} \caption{Confidence contours from the 31 $H(z)$ data analysis of the free parameters of the model, $c_5$ and $H_0$. We also show the constraints over the total age, $t_0$, which is a derived parameter ($H_0$ in km/s/Mpc, $t_0$ in Gyr). The contours correspond to 68\% and 95\% c.l.} \label{Hz31-triangle} \end{figure*} \begin{table}[h!] \centering \begin{tabular} { l c} Parameter & 95\% limits\\ \hline {\boldmath$c_5 $} & $0.600^{+0.061}_{-0.058} $\\ {\boldmath$H_0 $} & $72.2^{+5.3}_{-5.5} $\\ $t_0 $ & $12.59^{+0.69}_{-0.62} $\\ \hline \end{tabular} \caption{Mean value and 95\% limits of the model parameters. In bold face are the free parameters and $t_0$ is a derived parameter. $H_0$ is in units of km/s/Mpc and $t_0$ in Gyr.} \label{tabHz31} \end{table} In Fig. 
\ref{fig41} we present the scale factor of the extra dimension as a function of the redshift, which can be written as \begin{equation} \xi(z) = c_5 \, c_6 \, (1+z) \sqrt{1+ \frac{1}{c_5^4 \, (1+z)^4}} \, . \label{xiz} \end{equation} \begin{figure*}[h!] \centering \includegraphics[width=115mm]{lambdanegativoxi_z_small} \caption{Evolution of the extra-dimension scale factor as a function of the redshift in natural units, for $c_5=0.60$ and $c_6=1$.} \label{fig41} \end{figure*} It is interesting to note that the extra-dimension scale factor has a free constant $c_6$, which is not fixed by the cosmological analysis. This happens because the extra-dimensional dependence of the cosmological parameters only appears through the fraction $\dot{\xi}/\xi$. This means that, in this model, even if the scale of the extra dimension is very small, the cosmological effects would still be measurable. As a consequence, the extra-dimensional length scale can be arbitrarily small. The deceleration parameter as a function of the redshift is given by \begin{equation} q(z) = 1 - \frac{2}{1+c_5^4 \, (1+z)^4} \, , \end{equation} whose behavior can be seen in Fig.\ref{fig44}. We can see that the model gives an accelerated expansion of the universe ($q<0$) for the present epoch. Also, the solution obtained within the IMM prescription gives a transition from a decelerated to an accelerated expansion of the universe, as expected from supernova observations, in particular the photometric observations of SN 1997f by the \emph{Hubble Space Telescope} \cite{decelerate2001}. In the analysis of \cite{decelerate2002}, the transition is expected to occur for $z\sim0.5$, which is qualitatively compatible with our model prediction ($z\sim 0.66$). \begin{figure*}[h!] 
\centering \includegraphics[width=115mm]{lambdanegativoq_z_} \caption{Evolution of the universe deceleration parameter as a function of the redshift, for $c_5=0.60$.} \label{fig44} \end{figure*} The analytical expression for the EoS parameter as a function of redshift is \begin{equation} \omega(z) = \frac{1}{3}\left[1- \frac{4}{1+c_5^4(1+z)^4}\right], \end{equation} whose pattern is shown in Fig.\ref{fig43}. It presents the EoS parameter evolution for different epochs of the universe. As expected from the standard cosmological model \cite{PDG}, for recent redshifts the parameter is $<-1/3$ (dark energy era), and for earlier times the EoS parameter passes through zero, which is compatible with the matter-dominated phase. \begin{figure*}[h!] \centering \includegraphics[width=115mm]{lambdanegativoomega_z_} \caption{Evolution of the EoS parameter as a function of the redshift in natural units, for $c_5=0.60$.} \label{fig43} \end{figure*} \section{Discussion and Conclusion}\label{sec:dis} In the present article we have applied the IMM to a general 5D metric with scale factors acting on the usual three space coordinates and on the extra spatial coordinate. We have considered 5D field equations with null and non-null (namely, positive and negative) cosmological constant. The IMM is a purely geometrical approach in the sense that matter in the 4D universe appears as a manifestation of a 5D empty space. The mechanism which describes that is the collection of the terms depending on the extra dimension in the 5D Einstein tensor, which are ``moved'' to the {\it rhs} of the field equations, playing the role of an induced energy-momentum tensor. In this sense, what we have is a realization of Mach's principle \cite{sciama/1953}-\cite{liu/1995}, which was desired by Einstein for a theory of gravity. From a quite general approach, we have obtained some particularly interesting cosmological features. 
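The transition redshift obtained in the previous section follows immediately from the expression for $q(z)$: the sign change occurs where $c_5^4(1+z)^4=1$, i.e. at $z_t=1/c_5-1$. A minimal numerical illustration, assuming the best-fit $c_5=0.60$ (a sketch, not part of the statistical pipeline):

```python
# Quick numerical check of q(z) = 1 - 2/(1 + c5^4 (1+z)^4), using the
# best-fit c5 = 0.60 from the H(z) analysis.
c5 = 0.60

def q(z):
    return 1.0 - 2.0 / (1.0 + c5**4 * (1.0 + z)**4)

# q changes sign where c5^4 (1+z)^4 = 1, i.e. at z_t = 1/c5 - 1
z_t = 1.0 / c5 - 1.0
print(f"z_t = {z_t:.2f}")  # z_t = 0.67 (z ~ 0.66 with an unrounded c5)
print(q(0.0) < 0.0)        # True: accelerated expansion today
print(q(2.0) > 0.0)        # True: decelerated expansion in the past
```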
We have shown in Section \ref{ss:fewcc} that general KK models with null cosmological constant are restricted to a radiation-dominated universe, which evolves as $a\sim t^{1/2}$. Since the matter content obtained from the IMM application has a traceless energy-momentum tensor, this is in agreement with Kaluza's original idea of unifying gravitation and electromagnetism. Also, we have shown that the extra-dimension scale factor yields a negative Hubble parameter for the extra coordinate, i.e., the extra coordinate length compactifies. In Section \ref{sss:lg0}, we have inserted a positive 5D cosmological constant in the field equations. The approach has led to a cyclic or bouncing universe, i.e., a universe that goes from a collapsing era to an expanding era without displaying the singularity that the standard model carries. Bouncing cosmological models are well-known alternatives to inflation and also provide the cosmological perturbations we see today. For a deeper understanding of bouncing cosmological models, besides \cite{steinhardt/2002}-\cite{battefeld/2015}, we refer the reader to \cite{brandenberger/2017}. In Section \ref{sss:ll0}, we considered $\Lambda<0$. It is interesting to remark here that braneworld models usually contain a negative bulk cosmological constant as a consequence of the appearance of terms $\sim\sqrt{-\Lambda}$ in their Friedmann-like equations \cite{ida/2000,bajc/2000}. Our negative cosmological constant model has been shown to be able to uniquely describe the radiation, matter and dark energy eras of the universe evolution in a continuous and analytical form, which can be clearly seen, for instance, in Fig. \ref{fig6}. This is a quite non-trivial result. Cosmological models able to describe the whole history of the universe evolution with a single analytical equation of state are rarely obtained in the literature \cite{ms/2016,lima/2013}. 
References \cite{ms/2016,lima/2013} show cosmological scenarios obtained from $f(R,T^\phi)$ gravity, with $R$ being the Ricci scalar and $T^\phi$ the trace of the energy-momentum tensor of a scalar field $\phi$, and decaying vacuum models, respectively. This interesting feature is a consequence of the remarkable hyperbolic solution obtained for the scale factor. While we have obtained such a feature from the model, some other approaches use this solution as a prior {\it ansatz} \cite{chawla/2012,pradhan/2014,mishra/2013,maurya/2017,mishra/2016,nagpal/2019}. This kind of hyperbolic solution is also found in the flat $\Lambda$CDM concordance model, by neglecting radiation. However, in this case, the time dependence of the scale factor is like $a(t)\sim\left[\sinh(t)\right]^{2/3}$, and not $a(t)\sim\left[\sinh(t)\right]^{1/2}$, as we have found here \cite{weinberg/1989,KolbTurner90,LimaBasilakos11,Piattella18}. Furthermore, from our solution for the matter-energy density (25), it is clear that $\rho\rightarrow\mathrm{constant}$ for high values of time, which is also in agreement with the standard model. Here, this constant reads $\vert\Lambda\vert/16\pi$, while in the standard model it is $\Lambda/8\pi$. The factor $2$ between these energy densities may be due to the fact that the former refers to a 5D space-time, and therefore should be more diluted than a 4D cosmological constant. Our model also satisfactorily fits the observational data for the experimental measurement of the Hubble parameter, as shown in Section 4.1. The adopted method resulted in $H_0 = 72.2^{+5.3}_{-5.5}$ km/s/Mpc, which is in agreement with the most recent estimate from local observations, $H_0=74.03\pm1.42$ km/s/Mpc \cite{RiessEtAl19}, and, in the limit, also in agreement with the Planck collaboration estimate, $H_0=67.4\pm0.5$ km/s/Mpc \cite{Planck}, in the context of flat $\Lambda$CDM cosmology. 
As a derived parameter, we have obtained the total age of the Universe as $t_0=12.59^{+0.69}_{-0.62}$ Gyr, which is in agreement with most age estimates of objects today. Jimenez {\it et al.} \cite{JimenezEtAl19} have obtained, with 22 globular cluster (GC) age estimates from \cite{OMalleyEtAl17}, a weighted average of $t_{GC}=13.0\pm0.4$ Gyr, which is in agreement with our upper limit ($t_0=13.28$ Gyr at 95\% c.l.). Our result is also in agreement with estimates of absolute ages of very-low-metallicity stars, estimated in the range of 13.0 -- 13.535 Gyr, as explained in \cite{JimenezEtAl19} and references therein. Finally, since our unified EoS for the universe was obtained from the AdS$_5$ space-time and $|\Lambda|$ is of the same order as $\Lambda_{4}$, the very small value observed for the cosmological constant has its origin in the energy to create the vacuum in the 5D space-time and is not necessarily related to the vacuum energy of quantum fields in 4D \cite{weinberg/1989}. \bigskip \begin{acknowledgements} M.M.Lapola thanks CAPES for financial support. PHRSM would like to thank S\~ao Paulo Research Foundation (FAPESP), grants 2015/08476-0 and 2018/20689-7, for financial support. WP thanks CAPES, grant 88881.309870/2018-01, and CNPq, grants 313236/2018-6 and 438562/2018-6. M.Malheiro thanks CAPES, CNPq and the FAPESP thematic project 2013/26258-5 for financial support. R. Valentim thanks Funda\c{c}\~ao de Amparo \`a Pesquisa do Estado de S\~ao Paulo (FAPESP) for support through thematic project process no. 2013/26258-2 and regular project process no. 2016/09831-0. JFJ is supported by Funda\c{c}\~ao de Amparo \`a Pesquisa do Estado de S\~ao Paulo - FAPESP (Process number 2017/05859-0). \end{acknowledgements}
\section{Introduction} \IEEEPARstart{A}{dvances} in wireless communications and low-power sensing are enabling a new generation of ``smart cities,'' which promise to improve the performance of municipal services and reduce operating costs through real-time analytics and control \cite{caragliu_2011}. While some applications of ``smart'' infrastructure have received a great deal of attention---such as autonomous vehicles\cite{Dimitrakopoulos_2010, Zanella_2014}, energy grid management \cite{Zanella_2014}, and structural health monitoring\cite{lynch_2005, Zanella_2014}---integration of these technologies into water systems has lagged behind. However, ``smart'' water systems offer new inroads for dealing with many of our most pressing urban water challenges, including flash flooding, aquatic ecosystem degradation, and runoff pollution. The goal of this paper is to provide an end-to-end blueprint for the next generation of autonomous water systems, with a particular focus on managing urban stormwater. Towards this goal, we introduce \textit{open storm}, an open source framework that combines sensing, real-time control, wireless communications, web-services and domain-specific models. We illustrate the potential of \textit{open storm} through two real-world case studies: 1) a 2,200 km$^2$ wireless flood forecasting network in Texas, and 2) an 11 km$^2$ real-time stormwater control network in Michigan. Most importantly, to encourage broader adoption by the water resources community, this paper is accompanied by extensive supplementary materials on \texttt{open-storm.org}, including videos, photos, source code, hardware schematics, manufacturing guides, and deployment instructions. These materials make it possible for newcomers to implement their own ``smart'' stormwater systems, without extensive experience in programming or embedded systems design. 
\section{Background} \begin{figure*}[!ht] \centering \includegraphics[width=\textwidth]{./img/Figure1c.png} \caption{The \textit{open storm} hardware layer. The left panel shows the complete sensor node along with a representative schematic of its placement in an urban watershed. The right panel shows typical sensors and actuators used in \textit{open storm} research projects.} \label{fig:1} \end{figure*} \subsection{Motivation} Effective management of water supply and water excess are some of the largest engineering problems faced by cities today \cite{mays_2010}, and in the wake of rapid urbanization, aging infrastructure, and a changing climate, these challenges are expected to intensify in the decades to come \cite{bronstert_2002, stocker_2014}. Floods are the leading cause of severe weather fatalities worldwide, accounting for roughly 540,000 deaths between 1980 and 2009 \cite{doocy_2013}. Furthermore, large quantities of metals, nutrients, and other pollutants are released during storm events, making their way via streams and rivers into lakes and coastal zones \cite{Ahn_2005, Carey_2014}. The need to manage pollutant loads in stormwater has persistently been identified as one of our greatest environmental challenges \cite{V_r_smarty_2010}. To contend with these concerns, most communities maintain dedicated gray infrastructure (pipes, ponds, basins, wetlands, etc.) to convey and treat water during storm events. However, many of these systems are approaching the end of their design life \cite{epa_2016}. At the same time, stormwater systems are being placed under greater stress due to larger urban populations, changes in land use, and the increasing frequency of extreme weather events\cite{mays_2010, stocker_2014}. In some communities, stormwater and wastewater are combined, meaning they share the same pipes. 
For these systems, large storms often lead to combined sewer overflows, which release viruses, bacteria, nutrients, pharmaceuticals, and other pollutants into estuaries downstream \cite{Sercu_2011}. When coupled with population stressors, it comes as little surprise that the current state of stormwater infrastructure in the United States has been given a near-failing grade by the American Society of Civil Engineers \cite{asce_2013}. Engineers have traditionally responded to increasing demands on stormwater systems by expanding and constructing new \textit{gray} infrastructure. However, the upsizing of pipes and storage elements can prove expensive, time-consuming, and may even result in deleterious long-term side effects. Benefits from stormwater conveyance facilities can be diminished if individual sites are not designed in a global context. Even when best management practices are followed, discharges from individual sites may combine to induce downstream flows that are more intense than those produced under unregulated conditions \cite{Emerson_2005}. Without system-level coordination, gray infrastructure expansion may lead to overdesigned solutions that adversely impact flooding, increase stream erosion, and impair water quality\cite{Hawley_2016}. In response to these concerns, \textit{green} infrastructure (GI) has been proposed as an alternative to traditional ``steel and concrete'' stormwater solutions. These systems use smaller, distributed assets---such as bioswales, green roofs and rain gardens---to condition flows and improve water quality. However, recent research has raised questions about the scalability and maintenance requirements of green infrastructure \cite{epa_2013}. Regardless of the choice between ``gray'' or ``green'', new construction is limited by cost, and often cannot keep pace with evolving community needs. 
To preserve watershed and ecological stability, there is an urgent need to incorporate systems thinking into stormwater designs and to engineer solutions that can optimize stormwater system performance---not only for individual sites, but for entire watersheds. \subsection{The promise of sensing and control} ``Smart'' water systems promise to improve the study and management of water resources by extending monitoring and control beyond centralized facilities and into watersheds as a whole. With increased access to inexpensive sensors and wireless communications, the feasibility of deploying and maintaining large sensor networks across urban landscapes is now within reach for many public utilities and research groups. While many of the technologies have existed for some time, it was not until the integration of wireless sensor networks with web services (i.e. the \textit{Internet of Things}) that large networks consisting of hundreds or thousands of heterogeneous devices could be managed reliably\cite{wong_2016b}. This in turn has enabled watersheds to be studied at spatial and temporal scales that were previously unattainable. By densely instrumenting urban watersheds, researchers can finally begin to understand the complex and spatially variable feedbacks that govern water flow and quality across the built environment. A system-level understanding of urban watershed dynamics will provide decision makers with actionable insights to alert the public, and improve stewardship of water resources. Beyond new insight gained through sensing, the ability to dynamically regulate water levels across a watershed will reduce flooding, preserve riparian ecosystems, and allow for distributed treatment of stormwater. 
While these functions were previously achieved only through construction of static gray infrastructure or centralized treatment facilities, the addition of remotely-controlled valves and pumps promises to realize the same benefits while at the same time reducing costs, expanding coverage, and allowing system performance to scale flexibly with changing hydrologic conditions. Adding valves to existing stormwater facilities, for instance, can extend hydraulic retention time, thereby promoting the capture of sediment-bound pollutants\cite{kerkez_2016, Mullapudi_2017}. Modulation of flows (hydrograph shaping) may reduce erosion at downstream locations by ensuring that discharges do not exceed critical levels\cite{kerkez_2016}. More fundamentally, distributed control will enable operators to coordinate outflows from stormwater sites (tens to hundreds) across an entire city. Along with reducing flooding, this will allow water managers to utilize the latent treatment capacity of existing ponds and wetlands---effectively allowing a watershed to function as a distributed wastewater treatment plant\cite{Mullapudi_2017}. Such a vision for ``smart'' stormwater systems is no longer limited by technology. Rather, adoption of smart water systems has been hindered by (i) a reliance on proprietary technologies, (ii) a lack of proven case studies, and (iii) an absence of end-to-end solutions that are specifically designed and tested for water resources applications. To enable truly holistic management and control, there is an urgent need to combine modern technologies with domain knowledge from water sciences---something which present solutions do not address or make transparent. These solutions are reviewed next, after which the \textit{open storm} framework is introduced as an end-to-end blueprint for ``smart'' water systems management. 
This open-source framework combines low-power wireless hardware with modern cloud computing services and domain-specific applications to enable scalable, real-time control of urban water systems. \section{Existing technologies} Real-time sensing and control of water infrastructure is not a new idea. Supervisory control and data acquisition (SCADA) systems have long been used to monitor and control critical water infrastructure \cite{mays_2000}. In addition to these traditional technologies, there has been a recent explosion in the development of wireless sensor networks (WSNs) for water resources management. While these technologies have made great strides in enabling monitoring and control of water systems, a lack of end-to-end solutions has inhibited system-scale management of watersheds. In this section, we review existing technological solutions for water system monitoring and control, and describe how \textit{open storm} advances the state of the art by providing the first open source, end-to-end solution for distributed water systems management. \subsection{SCADA systems} Most water utilities use supervisory control and data acquisition (SCADA) systems to manage the conveyance, treatment and distribution of water \cite{mays_2000}. These systems comprise collections of devices, communication protocols, and software that enable remote monitoring and control of water assets \cite{mays_2000}. Most commonly applied in water distribution systems, SCADA systems typically monitor parameters that indicate service quality---such as flows, pressures, and chemical concentrations---and then use this information to control the operation of pumps and valves in real-time \cite{mays_2000}. Control may be manual or automatic, and in some cases may integrate optimization algorithms, decision support systems and advanced control logic \cite{mays_2000}. 
While legacy SCADA systems remain popular among water utilities, they suffer from limitations in three major areas: interoperability, scalability and security. Perhaps the most critical limitation of legacy SCADA systems is the lack of interoperability between systems, reliance on proprietary protocols, and non-extensible software \cite{powell_1999}. Traditional SCADA systems are often isolated and incapable of intercommunication \cite{powell_1999}. Systems that manage water in one municipality, for instance, may be incapable of communicating with those in another municipality, despite sharing the same service area. Moreover, different SCADA systems within the same jurisdiction may also be isolated, meaning that management of stormwater systems may not in any way inform the operation of wastewater treatment facilities downstream. This lack of communication between water management architectures makes it difficult to coordinate control actions at the watershed scale. Proprietary SCADA systems are also often unable to interface with modern software layers, like Geographic Information Systems (GIS), network analysis software, or hydrologic models \cite{powell_1999}. For this reason, SCADA-based control often cannot take advantage of modern domain-specific tools that would enable system-scale optimization of watershed resources. The capacity of SCADA systems to implement watershed-scale control is also limited by a lack of spatial coverage. Due to their large power footprint and maintenance requirements, traditional SCADA systems are typically limited to centralized water assets with dedicated line power, such as drinking water distribution systems and wastewater treatment facilities \cite{awwa_2001}. Sensors are usually deployed at a select few locations within the network---like treatment plants, pump stations and boundaries with other systems---and in many cases plant and pump station discharges are not even recorded \cite{mays_2000}. 
For decentralized applications, such as stormwater networks or natural river systems, the cost and power usage of traditional SCADA systems are prohibitive. As such, these distributed resources often go unmonitored and uncontrolled. Recent studies have also raised concerns about the security of SCADA systems, many of which were designed and installed decades ago \cite{igure_2006,mays_2004}. Many legacy SCADA systems rely on specialized protocols without built-in support for authentication, such as MODBUS/TCP, EtherNet/IP and DNP3 \cite{igure_2006,mays_2004}. The use of unsecured protocols means that it is possible for unauthorized parties to execute commands remotely on a device in the SCADA network \cite{igure_2006}. To cope with this problem, SCADA networks are often isolated from public networks, such as the internet. However, remote attacks are still possible---particularly through the use of unsecured radio channels \cite{mays_2004}. Moreover, isolation from public networks limits the use of modern web services such as cloud computing platforms. Reliance on closed networks and proprietary interfaces may also lend a false sense of security to legacy SCADA systems---a concept known as security through obscurity \cite{igure_2006}. For these reasons, SCADA systems have gained the reputation of being relatively closed and only manageable by highly-trained operators or specialized local consultants. While SCADA systems remain the most popular platform for managing urban water systems, new tools are needed to improve security, expand coverage, and encourage integration with modern software. 
\subsection{Wireless sensor networks} The past decade has witnessed a large reduction in the cost and power consumption of wireless electronics; leveraging these advances, wireless sensor networks (WSNs) have opened up new frontiers in environmental monitoring, with applications including biodiversity monitoring \cite{cerpa_2001}, forest fire detection \cite{hefeeda_2007, soliman_2010}, precision agriculture \cite{kim_2008}, glacier research \cite{martinez_2004}, and structural health monitoring \cite{lynch_2005}. Unlike SCADA systems, WSNs are ideal for low-cost, low-power, and low-maintenance applications, making them well-suited for the monitoring of large water systems like rivers and watersheds. WSNs have been applied with great success in applications ranging from flood monitoring to real-time stormwater control; however, current implementations are generally experimental or proprietary, resulting in a lack of discoverability, limited interoperability, and duplication of effort among projects. Within the water sciences, flood monitoring represents a particularly important application area for WSNs. While several groups have worked to expand the capabilities of existing legacy flood detection networks \cite{bonnet_2000, imielinski_2000, chen_2014}, only a small number of groups have designed and deployed their own flood monitoring WSNs. Hughes et al. (2008) describe a 15-node riverine flood monitoring WSN in the United Kingdom, which interfaces with remote models, performs on-site computation, and sends location-specific flood warnings to stakeholders \cite{hughes_2008, smith_2009}. Other riverine flood monitoring networks include a 3-node river monitoring network in Massachusetts, a 4-node network in Honduras \cite{basha_2008}, and---perhaps the largest unified flood monitoring network in the US---the Iowa Flood Information System (IFIS), which draws on a network of over 200 cellular-enabled sensor nodes \cite{demir_2013}. 
While most existing flood-monitoring networks focus on large-scale river basins, flash-flooding has received considerably less attention in the WSN community. Marin-Perez et al. (2012) construct a 9-node WSN for flash flood monitoring in a 660 km$^2$ semiarid watershed in Spain \cite{marin_perez_2012}, while See et al. (2011) use a Zigbee-based WSN to monitor gully-pot overflows in an urban sewer system \cite{see_2011}. While most deployments are still pilot-scale, these projects demonstrate the potential of WSNs for distributed flood monitoring across a variety of scales and environments. In addition to monitoring watershed hazards, a limited---but promising---number of projects are illustrating the potential of WSNs for real-time control. Web-enabled sensor nodes have been used to develop adaptive green infrastructure at a select number of pilot sites---for instance, by using weather forecasts to facilitate predictive rainwater harvesting and capture of sediment-bound pollutants\cite{ewri_2015}. At larger scales, a combined sewer network in South Bend, Indiana uses over 120 flow and depth sensors along with nine valves to actively modulate flows into the city's combined sewer system \cite{montestruque_2015}. This network optimizes the use of existing in-line storage and has achieved a roughly five-fold reduction in combined sewer overflows from 2006-2014 \cite{montestruque_2015}---all without the construction of additional infrastructure. While distributed control of storm and sewer systems shows promise, most existing implementations are proprietary. A lack of transparency makes these solutions inaccessible to decision makers and the water resources community at large. Although many research groups have realized the potential for real-time watershed monitoring, existing WSN deployments are generally small-scale and experimental in nature. 
In order for these networks to be accepted as ``critical infrastructure'' by the water resources community at large, consistent standards for design, deployment and functionality are needed. In designing their own WSNs, researchers tend to look towards previous research projects \cite{basha_2008}. However, research papers rarely include the detailed documentation needed to implement an end-to-end sensor platform \cite{basha_2008}. As a result, researchers are often forced to design and deploy their own WSNs from scratch. To prevent duplication of effort and ensure best practices, a community-driven \textit{how-to guide} is urgently needed. Moreover, while proprietary control networks have proven their effectiveness in improving the performance of stormwater systems, an open source alternative is needed to encourage transparency, interoperability, and extensibility. Without open software, standards, and documentation, these new technologies risk becoming like the SCADA systems of old: isolated, proprietary, and incapable of intercommunication. \begin{figure*}[!htb] \centering \includegraphics[width=\textwidth]{./img/open-storm.png} \caption{The \textit{open storm} stack. The hardware layer (left) comprises the sensor node along with auxiliary sensors and actuators. The cloud services layer (center) includes the database backend, along with a series of publication and subscription services for controlling sensor node behavior and interfacing with applications. The applications layer (right) allows for real-time supervision and control of field-deployed devices. The rightmost panel shows an example dashboard, including sensor feeds and network status visualizations.} \label{fig:2} \end{figure*} \section{The \textit{open storm} platform} \textit{Open storm} provides a transparent and unified framework for sensing and control of urban watersheds. 
To our knowledge, it is the only open-source, end-to-end platform that combines real-time sensing, control and cloud services for the purpose of water resources management. The project is designed to foster engagement by lowering the technological barriers for stakeholders, decision makers, and researchers. To this end, the \textit{open storm} framework is accompanied by a body of reference material that aims to make it easy for non-experts to deploy their own sensors and controllers. This living document, available at \texttt{open-storm.org}, provides tutorials, documentation, supported hardware, and case studies for end-to-end sensor network management. In addition to documenting core features, this guide details the (literal) \textit{nuts-and-bolts} of sensor network deployment, including information that is typically not available in journal articles---such as mounting hardware, assembly instructions and deployment techniques. The \textit{open storm} framework can broadly be divided into three layers: hardware, cloud services, and applications (Figure \ref{fig:2}). The \textit{hardware} layer includes devices that are deployed in the field---such as sensors for collecting raw data, actuators for controlling water flows, microprocessors, and wireless transmitters. The \textit{cloud services} layer includes processing utilities that receive, store and process data, and interact with field-deployed devices through user-defined applications. Finally, the \textit{application} layer defines how users, algorithms, and real-time models interact with field-deployed devices. This three-tier architecture allows for applications to be developed at a high level, without the need for low-level firmware programming. Together, these layers comprise a scalable framework that can easily be adapted to the needs of a wide variety of users and applications. 
\subsection{Hardware} \subsubsection{The sensor node} At its core, the \textit{open storm} hardware layer (Figure 1) is enabled by the sensor node---a custom low-power embedded computer with wireless capabilities. The sensor node collects measurements from attached sensors, transmits and receives data from a remote server, and executes control actions. A microcontroller (PSOC5-LP by Cypress Semiconductor) serves as the processing unit for the board. This microcontroller is programmed with a simple operating system that schedules the tasks to be executed, and interfaces with a series of device drivers that control the behavior of attached sensors and actuators. The operating system is designed to minimize power use and consists of a single routine which (i) wakes the device from sleep mode, (ii) downloads pending instructions from the cloud server, (iii) takes sensor readings and triggers actuators, (iv) transmits sensor data to the server, and (v) puts the device back into sleep mode. The sensor node spends the majority of its deployment in sleep mode, allowing it to conserve battery power and remain in the field for an extended period of time. The sensor node uses wireless telemetry to transmit and receive data from a remote server. While internet connectivity can be achieved through a number of wireless protocols, \textit{open storm} nodes currently use a cellular communications protocol, which enables telemetry through 2G, 3G and 4G LTE cellular networks. Cellular connectivity is implemented through the use of a cellular module (by Telit), along with a small antenna for broadcasting the wireless signal. Compared to other protocols (such as satellite or wi-fi), cellular telemetry is especially suitable for urban and suburban environments due to (i) consistent coverage, (ii) relatively low cost, and (iii) high data throughput. 
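The single-routine operating system described above can be sketched as a toy simulation (the real firmware runs in C on the PSoC microcontroller; the class, field names, and server stand-in below are purely illustrative):

```python
class FakeServer:
    """Stand-in for the remote database endpoint (no real telemetry here)."""
    def __init__(self):
        self.data = []
        self.commands = {"sample_interval": 60}

    def get_commands(self):
        return dict(self.commands)

    def push(self, readings):
        self.data.append(readings)


class SensorNode:
    """Toy simulation of one wake/transmit/sleep cycle of the node firmware."""
    def __init__(self, server, sample_interval=300):
        self.server = server
        self.sample_interval = sample_interval  # seconds between wakeups

    def read_sensors(self):
        # In firmware this would poll the attached depth and battery sensors
        return {"depth_m": 0.42, "battery_v": 3.9}

    def run_cycle(self):
        # (i) wake from sleep (implicit); (ii) download pending instructions
        commands = self.server.get_commands()
        self.sample_interval = commands.get("sample_interval", self.sample_interval)
        # (iii) take sensor readings (and trigger any commanded actuators)
        readings = self.read_sensors()
        # (iv) transmit sensor data to the server
        self.server.push(readings)
        # (v) return to sleep until the next scheduled wakeup
        return self.sample_interval


server = FakeServer()
node = SensorNode(server)
sleep_seconds = node.run_cycle()  # server command shortens the interval to 60 s
```

Because configuration travels with the command download in step (ii), the server can retune a node (here, its sampling interval) without any firmware change in the field.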
At the time of writing, IoT cellular data plans can be purchased for under \$5 per month per node (1-10 MB), making it financially feasible for even small research groups to maintain large-scale networks. The sensor node is equipped with a power regulation subsystem to provide power to the microcontroller and attached devices. The power supply system consists of four components: (i) a battery, (ii) a solar panel, (iii) a charge controller, and (iv) a voltage converter. The voltage converter permits the sensor node to be powered across a range of 3-40V. While most sensor nodes are powered by a 3.7V Lithium Ion battery, 12V batteries can also be used for higher-voltage sensors and actuators. The solar panel and charge controller are used to recharge the battery, allowing the device to remain in the field without routine maintenance. At the time of writing, many field-deployed sensor nodes have reported data for over a year without loss of power. Detailed technical information regarding the sensor node---including parts, schematics and programming instructions---is available online at \texttt{open-storm.org/node}. Excluding the cost of auxiliary sensors, the sensor node can currently be assembled from off-the-shelf parts for a price of approximately \$350 per node. \subsubsection{Sensors and actuators} The \textit{open storm} platform supports an extensive catalog of digital and analog environmental sensors. Typical sensors include (i) ultrasonic and pressure-based water level sensors, (ii) soil moisture sensors, (iii) tipping-bucket and optical rain gages, (iv) automated grab samplers for assessing pollutant loads, and (v) in-situ water quality sensors, including probes for dissolved oxygen, pH, temperature, conductivity, dissolved solids, and oxidation-reduction potential. While many sensors are known to work ``out of the box'', new sensors can be quickly integrated by adding device drivers to the sensor node firmware. 
Support for arbitrary sensors is provided by the microcontroller's system-on-chip (SoC), which allows for analog and digital peripherals---like analog-to-digital converters, multiplexers, and logic gates---to be generated using programmable blocks in the device firmware. In addition to environmental sensors, the sensor node also includes internal sensors that report device health statistics, including battery voltage, cellular reception strength, and connection attempts. These device health statistics help to diagnose network issues, and can be used as inputs to remote trigger routines. Sensors can be configured remotely using web services (see \textit{cloud services} section). This capability allows users to turn sensors on or off, or to change the sampling frequency of a sensor without reprogramming the device in the field. The \textit{open storm} platform also supports an array of actuators that can be used to move mechanical devices in the field. These devices are used to guide the behavior of water systems in real-time, by controlling the flow of water in ponds, channels and pipes. Butterfly valves are one common type of actuating device, and are typically used to control discharge from storage elements such as retention basins. Valves can be opened, closed, or configured across any number of partially opened configurations to modulate flows. As with onboard sensors, these devices are operated remotely using commands sent from a server. Control signals can be specified manually, or through automated control algorithms. Detailed technical information regarding supported sensors and actuators, along with guides for integrating new devices are provided online at \texttt{open-storm.org/sensors}. \subsection{Cloud services} While sensor nodes can function independently by storing data and making decisions on a local level, integration with cloud services enables system-scale supervision, configuration, and control of field-deployed devices. 
Like a traditional SCADA system, the cloud services layer facilitates telemetry and storage of sensor data, provides visualization capabilities, and enables remote control of devices---either through manual input or through automated routines. However, unlike a traditional SCADA system, the cloud services layer also allows sensor nodes to communicate with a wide variety of user-defined web applications---including advanced data visualization tools, control algorithms, GIS software, external data ingesters, alert systems, and real-time hydrologic models. By combining real-time supervision and control with domain-specific tools, this architecture enables flexible system-scale control of water assets. In brief, the cloud services layer performs the following core functions: (1) stores and processes remotely-transmitted data, (2) simplifies management and maintenance of field-deployed sensor nodes, and (3) enables integration with a suite of real-time models, control algorithms, and visualizations. These services are environment-agnostic, meaning that they can be deployed on a local server or a virtual server in the cloud. In practice, however, current \textit{open storm} projects are deployed on popular cloud services---such as Amazon Elastic Compute Cloud (EC2)\cite{amazon_2017} or Microsoft Azure\cite{microsoft_2017}---to ensure that computational resources flexibly scale with demand. In the following section, we describe the basic architecture, and present example applications that are included with the \textit{open storm} platform. The cloud services layer follows a simple design pattern, in which applications communicate with sensor nodes through a central database. On the device side, sensor nodes push sensor measurements to the database, and then query the database to determine the latest desired control actions. 
On the server side, applications query the latest sensor readings from the database, feed these sensor readings into user-defined applications, and then write commands to the database to control the behavior of field hardware remotely. This architecture allows field-deployed sensors to be managed through a single endpoint, and also allows new applications to be developed without modifying critical device firmware. The database serves dual purposes as both a storage engine for sensor data, and as a communication layer between field-deployed sensors and web applications. The primary purpose of the database is to store incoming measurements from field-deployed sensors. Sensor nodes report measurements directly to the database via a secure web connection---using the same protocol that one might use to access web pages in a browser (HTTPS). The database address (URL) is specified in the sensor node firmware, allowing the user to write data to an endpoint of their choosing. In addition to storing sensor measurements, the database also enables bidirectional communication between the node and cloud-based applications by storing device configuration data, command signals, and data from external sources. Server applications communicate with the sensor node by writing commands to the database. These commands are then downloaded by the sensor node on each wakeup cycle. For example, a real-time control application might adjust outflow from a storage basin by writing a sequence of valve positions to the database. At each sampling interval, the sensor node will query the latest desired valve position and enact the appropriate control action. This system enables bidirectional communication with field-deployed sensor nodes without the need for complex middleware. For its database backend, the \textit{open storm} project uses InfluxDB, a time-series database that is optimized for high availability and throughput of time-series data\cite{influxdata_2017}. 
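The database-as-message-bus pattern described above can be illustrated with an in-memory stand-in (the actual deployment uses InfluxDB over HTTPS; the node identifier and field names below are hypothetical):

```python
import time

class CommandStore:
    """In-memory stand-in for the time-series database used as a message bus."""
    def __init__(self):
        self.rows = []  # each row: (timestamp, node_id, field, value)

    def write(self, node_id, field, value, ts=None):
        self.rows.append((time.time() if ts is None else ts, node_id, field, value))

    def latest(self, node_id, field):
        # Most recent value written for this node/field pair, or None
        matches = [(t, v) for (t, n, f, v) in self.rows if n == node_id and f == field]
        return max(matches)[1] if matches else None


db = CommandStore()

# Application side: write a sequence of desired valve positions for one node
db.write("node-7", "valve_position", 0.0, ts=1)
db.write("node-7", "valve_position", 0.5, ts=2)

# Device side: on each wakeup cycle, the node queries the latest desired position
position = db.latest("node-7", "valve_position")  # the node enacts this opening
```

Note that the node and the application never talk directly: both only read and write time-stamped rows, which is what lets new applications be added without touching device firmware.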
Communications with the database backend are secured through the use of basic authentication (i.e. a username and password), as well as Transport Layer Security encryption (TLS/SSL). The use of basic authentication prevents unauthorized parties from executing malicious commands on the network, while the use of encryption prevents attackers from intercepting sensitive data. Because applications communicate with the sensor node through the database, this means that applications are secured automatically against attackers as well. Altogether, this system comprises a data backend that is secure, maintainable, and extensible. \subsection{Applications} The \textit{open storm} platform features a powerful application layer that enables users to process and analyze data, build user interfaces, and control sensor nodes remotely. Applications are implemented by creating a series of subscriptions on the central database. These subscriptions perform one of three actions: (i) \textit{read} from the database, (ii) \textit{write} new entries to the database, and (iii) \textit{trigger} actions based on user-specified conditions. While seemingly simple, this system allows for the development of a wide range of applications. A data visualization platform, for instance, is implemented by continuously querying sensor streams from the database; similarly, automated control is implemented by writing a continuous stream of commands. In the following section, we demonstrate the potential of the open storm application platform by presenting example applications, including a data visualization portal, a push alert system, adaptive control, and real-time integration with hydrologic models. \subsubsection{Network supervision and maintenance tools} Much like a traditional SCADA system, the \textit{open storm} platform provides a web-based graphical user interface for real-time visualization and device configuration. 
Figure \ref{fig:2} shows an example dashboard, with time series of cellular connection strength (top), radial gauges for monitoring battery voltage (center), and real-time depth readings from two sensor nodes (bottom). Time series visualizations are implemented using the Grafana analytics platform\cite{grafana_2017}, which allows users to develop customized dashboards that suit their individual needs. To facilitate remote configuration of sensor nodes, \textit{open storm} also includes a web portal that allows users to change device parameters (such as sampling frequency), control actuator behavior, and set event triggers using a web browser. \subsubsection{Automated alerts and adaptive control} In addition to enabling manual supervision and control, \textit{open storm} also provides a rich interface for triggering automatic actions based on user-specified conditions. Push alerts are one common type of trigger event. Alerts can be used to notify stakeholders of hazardous field conditions, such as flooding, or to recommend control strategies to operators in real time. Alerts are also used to notify the user about the health of the network---for instance, by sending push warnings when node battery voltages drop below a threshold, or by emitting a critical alert when data throughput ceases. These system health alerts allow network outages to be promptly diagnosed and serviced. Alerts can be pushed to a variety of endpoints, including email, text messages, or to social media platforms such as Twitter and Slack\cite{twitter_2017, slack_2017}. The wide variety of available push notification formats means that the \textit{open storm} platform is suited to handling both (i) confidential alerts for system operators, and (ii) public emergency broadcasts. In addition to the alert system, subscriptions are also used to implement adaptive sampling and automatic control. 
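As a concrete (and purely illustrative) example of such automatic control, a discrete set-point loop mapping a measured basin depth to a valve-position command might look like the following; the gains, set-point, and clamping to a valve opening in $[0, 1]$ are our assumptions, not values from any deployment:

```python
class PID:
    """Discrete PID controller producing a valve-position command in [0, 1]."""
    def __init__(self, kp, ki, kd, setpoint, dt=60.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint   # target water depth (m); illustrative value
        self.dt = dt               # sampling interval (s)
        self.integral = 0.0
        self.prev_error = None

    def update(self, depth):
        error = depth - self.setpoint  # positive when the basin is too full
        self.integral += error * self.dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        u = self.kp * error + self.ki * self.integral + self.kd * deriv
        return min(1.0, max(0.0, u))   # clamp to a physically valid valve opening


pid = PID(kp=2.0, ki=0.001, kd=0.0, setpoint=0.5)
command = pid.update(0.8)  # depth above set-point, so the valve opens partway
```

In a subscription, `update` would be fed each incoming depth measurement and its output written back to the database as the node's next valve command.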
Adaptive sampling allows the sampling frequency of the node to be changed remotely in response to weather forecasts, data anomalies, or manual user input\cite{wong_2016}. This in turn allows hydrologically interesting events---such as storm events and dam releases---to be measured at an enhanced resolution. To manipulate sampling frequencies in response to changing weather conditions, for instance, weather forecasts are first downloaded into the \textit{open storm} database using an external data ingester. Next, the subscription service parses the incoming data. If the service detects a probability of rain, the sampling frequency of a node is increased. When no precipitation is anticipated, the sampling frequency is decreased, allowing the node to conserve battery power. The same principle is used to implement automated control. The subscription service can be configured as a simple set-point or PID controller, for instance, by computing a control signal based on an input data stream\footnote{An example script for a PID controller is included in the Supplementary Information document}. This controller can in turn be used to optimize outflow from a retention pond, by controlling the position of an outlet valve. More sophisticated control schemes can be implemented by attaching the subscription service to an online model, which optimizes control strategies over an entire stormwater network, achieving system-level benefits. Examples include the MatSWMM and pySWMM software packages\cite{Ria_o_Brice_o_2016, pyswmm_2017}, which are used to simulate real-time control strategies for urban drainage networks. Detailed information regarding cloud services and applications can be found on \texttt{open-storm.org/cloud}. \begin{figure*}[!htb] \centering \includegraphics[width=\textwidth]{./img/fig3_attempt4.png} \caption{Flood monitoring network in the Dallas--Fort Worth metroplex. 
The map (left) shows current and proposed sensor sites, while the detail photos (bottom-right) show an example bridge-mounted depth sensor node. Time series (top-right) show the response in stream depth to a series of storm events from August 5-6, 2016. From these stage hydrographs, it can be seen that the response varies widely even within a relatively small geographic area.} \label{fig:3} \end{figure*} \section{Case Studies} To demonstrate the capabilities of the \textit{open storm} platform, we present two ongoing case studies. The first is a real-time flash flood warning network for the Dallas--Fort Worth metroplex in Texas. This deployment detects flash floods at the level of individual roadways, allowing targeted alerts for motorists and improved routing of emergency services during storm events. The second case study is a ``smart'' stormwater control network in the City of Ann Arbor, Michigan. This deployment aims to improve water quality and mitigate stormwater damage by adaptively timing releases from retention basins across an entire watershed. \subsection{Case study 1: Flood monitoring} Located in ``flash-flood alley'', the Dallas--Fort Worth (DFW) metroplex has historically been one of the most flood-prone areas in the United States \cite{sharif_2014}. Chronic flooding results in an average of 17 fatalities per year in the state of Texas, with a majority of deaths arising from flash floods \cite{sharif_2014}. Despite recent efforts to improve stormwater management \cite{lee_leslie_2016}, lack of fine-scale runoff measurements inhibits prediction and communication of flash flood risks. To address this problem, we are using the \textit{open storm} platform to build a real-time flash flood monitoring network. 
Drawing on the \textit{open storm} real-time alert system, this network aims to improve disaster response by communicating flood risks to emergency managers in real-time, and by generating targeted alerts that will allow motorists to safely navigate around inundated roadways. To date, urban flash flooding remains a poorly-understood phenomenon. There is currently no model that is capable of generating reliable flash flood estimates in urban areas \cite{Hapuarachchi_2011}. Modeling of urban flash floods is complicated by an absence of natural flow paths and interaction of runoff with man-made structures \cite{Hapuarachchi_2011}. However, lack of data at appropriate spatial and temporal scales also presents a major challenge. For reliable modeling of flash floods, Berne (2004) recommends using rainfall data at a minimum spatial resolution of 500 meters \cite{Berne_2004}, while Smith (2007) recommends a temporal resolution of 1-15 minutes \cite{Smith_2007}. Existing rain gages and river stage monitors are often too sparsely distributed to meet these requirements. Within the DFW metroplex, NWS maintains 12 quality-controlled gages \cite{nws_2017}, while USGS provides precipitation data at 30 sites \cite{usgs_2017}. This means that the current spatial resolution of validated rain gages within the DFW metroplex is roughly 1 gage per 600 km$^2$---too sparse for reliable prediction of flash floods. Likewise, current river stage monitors for the DFW region are largely deployed along mainstems of creeks and rivers with contributing areas ranging from 20 km$^2$ to 21,000 km$^2$ (and a median contributing area of 220 km$^2$). While these gages provide excellent coverage of riverine flooding, they offer limited potential for capturing flash floods. 
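The quoted density of roughly one validated gage per 600 km$^2$ can be sanity-checked with a back-of-the-envelope calculation; the metroplex area used below ($\sim$24,000 km$^2$) is our assumed round figure, not a number from the text:

```python
# Back-of-the-envelope check of validated rain gage density in the DFW metroplex.
# The metroplex area (~24,000 km^2) is an assumed round figure.
area_km2 = 24_000
gages = 12 + 30                    # 12 NWS quality-controlled gages + 30 USGS sites
km2_per_gage = area_km2 / gages    # roughly 570 km^2 per gage

# For comparison, the 500 m grid recommended for flash-flood modeling
# corresponds to 0.25 km^2 per observation -- orders of magnitude denser.
```

Under this assumption the arithmetic lands close to the stated 600 km$^2$ per gage, underscoring how far current coverage falls short of the recommended resolution.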
To fill coverage gaps and enable real-time flash flood forecasting, we are building a wide-area flood monitoring network that is specifically tailored to monitoring flash floods over small-scale catchments (ranging from about 3 to 80 km$^2$ in size). Our approach is to leverage a large array of inexpensive depth sensors to capture runoff response at the scale of individual roadways, creeks, and culverts. By using inexpensive hardware, we are able to scale our network to a size that would be infeasible with state-of-the-art stage monitoring stations (such as those used by NOAA or USGS). At the time of writing, 40 sensor nodes have been allocated and built for the DFW flood monitoring project, with over 15 nodes currently deployed and reporting. These 40 sensor nodes have been built for a cost of \$20,000 USD---less than the cost of a single USGS gaging station \cite{sheehan_2017}.\footnote{The installation cost for a USGS stage-discharge streamgaging station is roughly \$20,000, with an annual recurring cost of approximately \$16,000.} Figure \ref{fig:3} presents an overview of the DFW flood monitoring network. The left panel shows a map of the DFW metroplex, with current and proposed sensor node locations. The bottom-right panel shows a detail of a typical sensor node installation. Like most nodes in the network, this node is mounted to a bridge deck with an ultrasonic depth sensor pointed at the stream surface below. The sensor node records the depth to the water surface at a typical time interval of 3-15 minutes. The top-right plot shows a time series of stream depth during two distinct storm events for a sample of nodes on the network. From this plot, it can be seen that the runoff response varies widely between sensor locations, even in a relatively concentrated geographic area. During the second event, for instance, Node 2 (yellow) reports a large increase in stage, while Node 9 (purple) reports no change in stage. 
Comparison of the hydrographs with NEXRAD\cite{Crum_Alberty_1993} radar data shows that the variability in stage is largely explained by spatial variability in the rainfall fields\footnote{See https://github.com/open-storm/docs.open-storm.org/wiki/Case-study:-Flood-Monitoring-in-Dallas-Fort-Worth}. This result confirms the need for increased spatial resolution in stream stage measurements for flash flood monitoring. The \textit{open storm} platform enables detection and communication of flood risks on spatial and temporal scales appropriate for real-time disaster response and control. Adaptive management of traffic during extreme weather events represents one important application of this technology. The Dallas--Fort Worth flood monitoring network could improve disaster response by communicating flood risks to motorists in real-time, thereby allowing them to safely navigate around flooded roadways. This is especially important given that in the US, roughly 74\% of fatalities from flooding are motor-vehicle related \cite{doocy_2013}, and in Texas, as much as 93\% of flood-related deaths result from walking or driving into floodwaters \cite{sharif_2014}. Current alert systems are to a large extent insensitive to spatial variability in flood response \cite{smith_2009}. However, the \textit{open storm} framework enables targeted alerts that can be integrated into existing mobile navigation apps. In a future that may be characterized by autonomous vehicles and vehicle-to-infrastructure communication \cite{jiang_2008}, this technology could one day be used to adaptively route traffic during extreme weather events. \begin{figure}[!htb] \centering \includegraphics[height=7cm]{./img/arb_map_with_inset.png} \caption{Map of the Ann Arbor stormwater control network. Sensor nodes are concentrated within the impervious southern reach of the Malletts Creek watershed (blue). 
The outlet of the watershed drains into the Huron River mainstem (upper-right).} \label{fig:4} \end{figure} \begin{figure*}[!htb] \centering \includegraphics[width=\textwidth]{./img/fig5_3.png} \caption{Malletts Creek control experiment in Ann Arbor. The left panel shows time series of water depth from 12:00 pm on December 2 to 6:00 am on December 4, 2016. The right panel shows the location of the three sites in the watershed, with the partitioned contributing areas of each location corresponding to the colors of the time series plots.} \label{fig:5} \end{figure*} \subsection{Case study 2: Controlling Watersheds} As illustrated by the Dallas--Fort Worth flood-monitoring network, real-time measurements can play a pivotal role in providing alerts to stakeholders and improving our understanding of watershed dynamics. However, with the addition of active control, it is possible to not only monitor adverse events, but also to prevent them. The \textit{open storm} platform is capable of enacting control on a watershed scale using distributed valve controllers, adaptive control schemes, and cloud-hosted hydrologic models. Instead of building bigger stormwater systems, operators may use real-time control to make better use of existing water infrastructure, mitigate flooding, and decrease contaminant loads into sensitive ecosystems. The \textit{open storm} framework is presently being used to control an urban watershed in the City of Ann Arbor, Michigan. The \textit{Malletts Creek} watershed---a 26.7 km$^2$ tributary of the Huron River---has traditionally served as a major focal point in the city's strategy to combat flooding and reduce runoff-driven water quality impairments \cite{hrwc_2011}. Given its proximity to the Great Lakes, water resource managers have placed an emphasis on reducing nutrient loads from urban runoff. 
A majority of the discharge in Malletts creek originates from the predominantly impervious upstream (southwestern) reach of the watershed, while a significant, but smaller portion of the discharge originates from the central reach of the watershed. For this reason, local water resource managers have constructed a number of flood-control basins in the upstream segments of the catchment. It is these basins that are now modified to allow for real-time control of the watershed. The watershed is modified for control at two locations by retrofitting existing basin outlets with remotely-operated valves (Figure \ref{fig:4}). The first control point is a stormwater retention pond in the southern part of the watershed (shown in red in Figure \ref{fig:5}). While originally designed as a flow-through (detention) pond, the addition of two 30 cm diameter gate valves allows for an additional 19 million liters of water to be actively retained or released. The second control point is a smaller retention pond, located in the central reach of the watershed (shown in green in Figure \ref{fig:5}). This control site is retrofitted with a rugged 30 cm diameter butterfly valve. The position of each valve is controlled via an attached sensor node, which relays commands from a remote server. Each sensor node is equipped with a pair of ultrasonic sensors: one to measure the water depth at the pond, and one to measure the depth of the outflow stream. The control sites operate entirely on 12V battery power, along with a solar panel to recharge the battery during daylight hours. This configuration allows the controller to remain in the field permanently, without the need for a dedicated external electricity source.\footnote{With two people, installation at each site takes approximately one day. 
This includes time dedicated to mounting valves, sensors, and remotely-testing the equipment.} In addition to the two control sites, the Ann Arbor network is also instrumented with more than twenty sensor nodes that monitor system performance and characterize real-time site conditions. Using a combination of ultrasonic depth sensors, optical rain gages, and soil conductivity sensors, these nodes report stream stage, soil moisture, soil temperature, and precipitation accumulation approximately once every 10 minutes (with an increased resolution of 2-3 minutes during storms). An additional set of nodes is deployed to measure water quality---including dissolved oxygen, pH, temperature, oxidation reduction potential, and conductivity---along with an automated grab sampler for capturing contaminants of interest (such as heavy metals and microbes). These nodes are deployed at the inlet and outlet of constructed wetlands to determine how real-time control affects the removal of pollutants. Measurements from the sensor network are validated using an external United States Geological Survey flow measurement station (USGS station 4174518), located at the watershed outlet. These federally-certified measurements are available freely on the web, making them relatively easy to ingest into the \textit{open storm} framework as an external data source. Furthermore, localized weather forecasts are ingested from public forecasting services (darksky.net) to provide daily, hourly, and minute-level forecasts to inform the control of each site in the network\cite{darksky_2017}. These external data sources allow for near-instant validation of sensor data, and provide a holistic ``snapshot'' of system states. We confirm the effectiveness of the control network through a simple experiment. In this experiment, stormwater is retained at an upstream control site, then released gradually to maximize sedimentation and reduce erosion downstream. 
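Returning to the external data sources mentioned above: ingesting a validation feed amounts to fetching a document and extracting the newest reading. A minimal sketch; the payload below is a simplified, hypothetical structure for illustration (the real USGS web service returns a more deeply nested JSON document):

```python
# Schematic sketch of ingesting an external gage reading for validation.
# The payload structure here is hypothetical and simplified; it is not the
# actual schema returned by the USGS web services.
import json

sample_payload = json.dumps({
    "site": "04174518",            # USGS station at the Malletts Creek outlet
    "parameter": "discharge_cms",  # hypothetical parameter name
    "readings": [
        {"time": "2016-12-02T16:00:00Z", "value": 0.28},
        {"time": "2016-12-02T16:15:00Z", "value": 0.31},
    ],
})

def latest_reading(payload: str) -> tuple[str, float]:
    """Return (timestamp, value) of the most recent reading in the payload."""
    doc = json.loads(payload)
    newest = max(doc["readings"], key=lambda r: r["time"])
    return newest["time"], newest["value"]

ts, value = latest_reading(sample_payload)
print(f"latest discharge at {ts}: {value} m^3/s")
```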
While it is known that the addition of control valves affords many localized benefits---such as the ability to increase retention and capture sediments \cite{gaborit_2013}---the goal of this experiment is to test the extent to which control of individual sites can improve watershed-scale outcomes. The control experiment takes place on a river reach that stretches across three sites: a retention pond (upstream), a constructed wetland (center), and the watershed outlet. Figure \ref{fig:5} (right) shows the three test sites within the watershed, with the fractional contributing area of each site indicated by color. In this system, runoff flows from the retention pond (red) to the watershed outlet (blue) by way of an end-of-line constructed wetland (green) designed to treat water, capture sediments, and limit downstream erosion. Erosion, in particular, has been shown to be a primary source of phosphorus in the watershed \cite{wong_2016}, thus emphasizing the need to reduce flashy flows. While the wetland serves a valuable purpose in improving water quality, it is sized for relatively small events. Specifically, the basin is designed to hold up to 57 million liters of stormwater but experiences as much as 760 million liters during a ten-year storm. Thus, it often overflows during storms, meaning that treatment benefits are bypassed. To maximize treatment capacity, a sensor node is placed into the wetland to measure the local water level and determine the optimal time to release from the retention pond upstream. At the outset of the experiment, water is held in the upstream retention pond following a storm on December 1, 2016. Residual discharge from the original storm event can be observed as a falling hydrograph limb at the USGS gaging station (blue) during the first 10 hours of the experiment (Figure \ref{fig:5}). 
The sensor located at the wetland is used to determine the time at which it is safe to release upstream flows without overflowing the wetland (Figure \ref{fig:5}). Water is initially released from the pond at 4:00 pm on December 2, as indicated by a drop in the water level of the pond. Two hours later, the water level in the wetland begins to rise due to the discharge arriving from upstream. Finally, after another three hours, the discharge wave reaches the outlet, where it is detected by the USGS flow station. Over the course of the controlled release, the station registers roughly 19 million liters of cumulative discharge. The control experiment shows demonstrable improvements in system performance compared to the uncontrolled case. While the water quality benefits will be measured in the coming year, a number of likely benefits can be posited. As measured, over 19 million liters were removed from the storm window and retained in the basin following the storm event. The residence time of the water in the pond increased by nearly 48 hours, increasing the potential for sedimentation \cite{gaborit_2013}. The removal of stormwater flows also resulted in attenuation of the downstream hydrograph. The peak flows at the watershed outlet were measured to be 0.28 $m^3 / s$ during the storm, but would have been nearly 0.60 $m^3 / s$ had the valves in the basin not been closed. Based on prior studies in the watershed---which showed that flows in the stream correlate closely with suspended sediment concentrations---it can be estimated that the flows from the basin were discharged at roughly 60 mg/L, rather than 110 mg/L, thus nearly halving the concentration of suspended solids and total phosphorus in the flows originating from the controlled basin\cite{wong_2016}. Moreover, the controlled experiment enhanced the effective treatment capacity at the wetland downstream, which would have overflowed during the storm, thus not treating the flows from the upstream pond. 
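The release decision described above amounts to a threshold rule on the wetland's free capacity: hold water upstream until the wetland can absorb the release. A minimal sketch, where the safety margin and the rule itself are illustrative assumptions rather than the deployed control logic:

```python
# Minimal threshold-based release rule for the upstream retention pond.
# The safety margin is an illustrative placeholder, not a deployed value;
# the 57 ML wetland design capacity is the figure cited in the text.
WETLAND_CAPACITY_ML = 57.0   # wetland design capacity (million liters)
SAFETY_MARGIN_ML = 25.0      # assumed headroom reserved for incoming runoff

def valve_should_open(wetland_storage_ml: float, pond_storage_ml: float) -> bool:
    """Open the upstream valve only if the wetland can absorb the release."""
    free_capacity = WETLAND_CAPACITY_ML - wetland_storage_ml
    return pond_storage_ml > 0 and free_capacity >= SAFETY_MARGIN_ML

# During the storm the wetland is nearly full, so the pond holds its water;
# once the wetland drains, the retained ~19 ML can be released.
print(valve_should_open(wetland_storage_ml=50.0, pond_storage_ml=19.0))  # False
print(valve_should_open(wetland_storage_ml=10.0, pond_storage_ml=19.0))  # True
```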
As such, the simple addition of one upstream valve provided additive benefits across a long chain of water assets, demonstrating firsthand how system-level benefits can be achieved beyond the scale of individual sites. While the water quality impacts of active control deserve further assessment, this study opens the door for adaptive stormwater control at the watershed scale. Rather than optimizing the performance of isolated sites, the \textit{open storm} platform can be used to determine the optimal control strategy for an entire watershed, then enact it in real-time. \section{Conclusion} \textit{Open storm} is an all-in-one, ``batteries included'' platform for monitoring and managing urban water systems. Its emphasis on extensive configurability, real-time response, and automated control make it an ideal choice for water system managers and environmental researchers alike. While many open hardware platforms exist, \textit{open storm} is the first open-source, end-to-end platform that combines sensing, control and cloud computing in service of water resources management. Aside from providing a technological blueprint, \textit{open storm} addresses the real-world requirements that can be expected in water resources applications, such as field-robustness, low-power operation and system-scale coordination. The \textit{open storm} project has demonstrated results in extending the capabilities of existing stormwater systems: both by increasing the spatiotemporal resolution of measurements, and by actively improving water quality through real-time control. However, \textit{open storm} is not just a platform---it is also a community of researchers, stakeholders and decision-makers who are dedicated to realizing smarter water systems. 
To assist in the dissemination and development of smart water systems, we are creating a living document at \texttt{open-storm.org} in order to share standards, reference materials, architectures, use cases, evaluation metrics, and other helpful resources. We invite users to participate in this project by sharing their experiences with designing, deploying and maintaining smart water systems.
\section{Introduction} We consider the system of two coupled nonlinear Schr\"odinger equations \begin{eqnarray} &&\imath {\mathcal U}_t+{\mathcal U}_{xx}+(\kappa {\mathcal U}{\mathcal U}^{\ast} +\chi {\mathcal V}{\mathcal V}^{\ast}){\mathcal U}=0, \nonumber \\ && \label{manakov} \\ &&\imath {\mathcal V}_t+{\mathcal V}_{xx} +( \chi{\mathcal U}{\mathcal U}^{\ast}+ \rho {\mathcal V}{\mathcal V}^{\ast}){\mathcal V}=0, \nonumber \end{eqnarray} where $\kappa,\chi,\rho$ are constants. The integrability of this system was proven by Manakov \cite{ma74} only for the case $\kappa=\chi=\rho$, which we shall refer to as the {\it Manakov system}. The equations (\ref{manakov}) are important for a number of physical applications when $\chi$ is positive and all remaining constants equal 1. For example, for two-mode optical fibers $\chi=2$ \cite{ccp82}, for propagation of two modes in fibers with strong birefringence $\chi=\frac{2}{3}$ \cite{me87}, and in the general case $\frac{2}{3}\leq \chi \leq 2$ for elliptical eigenmodes. The special value $\chi=1$ (Manakov system) corresponds to at least two possible cases, namely a purely electrostrictive nonlinearity or, for elliptical birefringence, an angle between the major and minor axes of the birefringence ellipse of approximately $35^{\circ}$. The experimental observation of Manakov solitons in crystals is reported in \cite{ksaa96}. Recently the Manakov model has appeared in the Kerr-type approximation of photorefractive crystals \cite{kpsv98}. Pulse-pulse collisions between wavelength-division-multiplexed channels of optical fiber transmission systems are described by equations (\ref{manakov}) with $\chi=2$ \cite{meg91, kmw96, hk95,ko97}. General quasi-periodic solutions in terms of $n$-phase theta functions for the integrable Manakov system are derived in \cite{ahh93}, while a series of special solutions are given in \cite{allss95,puc98,phf98,pp99}. 
The authors of this paper have already discussed quasi-periodic and periodic solutions associated with Lam\'e and Treibich-Verdier potentials for the nonintegrable system of coupled nonlinear Schr\"odinger equations in the framework of a special ansatz \cite{ceek95}. We also mention the method of constructing elliptic finite-gap solutions of the stationary KdV and AKNS hierarchies, based on a theorem due to Picard, proposed in \cite{gw96,gw98b,gw98a}, as well as the method developed by Smirnov in a series of publications; see the review paper \cite{sm94} and \cite{sm97,sm97a}. In the present paper we investigate the integrable Manakov system restricted, by introducing a special ansatz, to a system integrable in terms of ultraelliptic functions; this ansatz was recently applied by {\it Porubov and Parker} \cite{pp99} to analyse special classes of elliptic solutions of the Manakov system $(\kappa=\chi=\rho=1)$. More precisely, we seek a solution of (\ref{manakov}) in the form \begin{eqnarray} {\mathcal U}(x,t)=q_1(x) \,\mathrm{ exp}\left \{\imath a_1 t+\imath C_1\int\limits_{\cdot}^x \frac{{\mathrm d}x}{q_1^2(x)}\right\},\label{ansatz}\\ {\mathcal V}(x,t)=q_2(x) \,\mathrm{ exp}\left \{\imath a_2 t+\imath C_2\int\limits_{\cdot}^x \frac{{\mathrm d}x}{q_2^2(x)}\right\},\nonumber \end{eqnarray} where the functions $q_{1,2}(x)$ are supposed to be real and $a_1,a_2,C_1,C_2$ are real constants. Substituting (\ref{ansatz}) into (\ref{manakov}) we reduce the system to the equations \begin{eqnarray} \frac{\partial^2 q_1 }{\partial x^2} +\kappa q_1^3+\chi q_1q_2^2-a_1q_1-\frac{C_1^2}{q_1^3}=0 \label{system2}\\ \frac{\partial^2 q_2 }{\partial x^2} +\rho q_2^3+\chi q_2q_1^2-a_2q_2-\frac{C_2^2}{q_2^3}=0. 
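For completeness, here is a short check that the ansatz cancels the imaginary terms for the first component (the second component is analogous); the phase notation $\phi_1$ is introduced for this sketch only:

```latex
% Write \mathcal{U}=q_1(x)\,e^{\imath\phi_1}, with
% \phi_1 = a_1 t + C_1\int^x \mathrm{d}x/q_1^2(x). Then
\begin{align*}
\imath\,{\mathcal U}_t &= -a_1 q_1\, e^{\imath\phi_1},\qquad
{\mathcal U}_x = \Bigl(q_1' + \frac{\imath C_1}{q_1}\Bigr)e^{\imath\phi_1},\\
{\mathcal U}_{xx} &= \Bigl(q_1'' - \frac{\imath C_1 q_1'}{q_1^{2}}
 + \frac{\imath C_1 q_1'}{q_1^{2}} - \frac{C_1^{2}}{q_1^{3}}\Bigr)e^{\imath\phi_1}
 = \Bigl(q_1'' - \frac{C_1^{2}}{q_1^{3}}\Bigr)e^{\imath\phi_1},
\end{align*}
% so the imaginary contributions cancel identically and the first equation
% of the system becomes
% q_1'' + (\kappa q_1^{2} + \chi q_2^{2})q_1 - a_1 q_1 - C_1^{2}/q_1^{3} = 0.
```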
\nonumber \end{eqnarray} The system (\ref{system2}) is a natural two-particle hamiltonian system with the hamiltonian \begin{eqnarray} H&=& \frac12p_1^2+ \frac12p_2^2+\frac14( \kappa q_1^4+2\chi q_1^2q_2^2+\rho q_2^4) \cr&-&\frac12 a_1q_1^2 -\frac12a_2q_2^2+\frac12\frac{C_1^2}{q_1^2}+\frac12\frac{C_2^2}{q_2^2}, \end{eqnarray} where $p_i= {\mathrm d}q_i/{\mathrm d}x$. These equations describe the motion of particles interacting through the quartic potential $Aq_1^4+Bq_1^2q_2^2+Cq_2^4$ perturbed by inverse-square terms. Nowadays four nontrivial cases of complete integrability are known for the unperturbed potential: (i) $A:B:C=1:2:1$, (ii) $A:B:C=1:12:16$, (iii) $A:B:C=1:6:1$, (iv) $A:B:C=1:6:8$. Cases (i), (ii) and (iii) are separable in ellipsoidal, paraboloidal and Cartesian coordinates respectively, while case (iv) is separable in a generalized sense \cite{rrg94}. Case (ii) appears as one of the entries of the polynomial hierarchy discussed in \cite{eekl93aa}; cases (iii) and (iv) are proved to be canonically equivalent under the action of the Miura map restricted to the stationary flows of the coupled KdV systems associated with a fourth-order Lax operator \cite{bef95}. Moreover, all the cases (i)-(iv) permit a deformation of the potential by a linear combination of inverse squares and squares, with certain limitations on the coefficients \cite{eekl93aa,bef95}. Lax representations are also known for all these cases; they yield hyperelliptic algebraic curves in cases (i) and (ii) and a 4-gonal curve in cases (iii) and (iv). Although each of the systems enumerated yields nontrivial classes of solutions of the system (\ref{manakov}), we shall discuss further only the case (i). The integrability of this case and its separability in ellipsoidal coordinates was proved by {\it Wojciechowski} \cite{w85} (see also \cite{k89,t95}). 
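That the reduced equations are Hamiltonian can be verified mechanically. A sympy sketch with generic quartic coefficients $A,B,C$ (so the check covers any of the four cases above); the symbol names are chosen here for illustration:

```python
# Verify that Hamilton's equations for a generic quartic-plus-inverse-square
# Hamiltonian reproduce equations of motion of the form
#   q1'' + A q1^3 + B q1 q2^2 - a1 q1 - C1^2/q1^3 = 0.
import sympy as sp

q1, q2, p1, p2 = sp.symbols('q1 q2 p1 p2')
A, B, C, a1, a2, C1, C2 = sp.symbols('A B C a1 a2 C1 C2')

H = (sp.Rational(1, 2)*(p1**2 + p2**2)
     + sp.Rational(1, 4)*(A*q1**4 + 2*B*q1**2*q2**2 + C*q2**4)
     - sp.Rational(1, 2)*(a1*q1**2 + a2*q2**2)
     + sp.Rational(1, 2)*(C1**2/q1**2 + C2**2/q2**2))

# q_i'' = p_i' = -dH/dq_i, so -q1'' must equal dH/dq1.
eom1 = sp.expand(sp.diff(H, q1))
target1 = A*q1**3 + B*q1*q2**2 - a1*q1 - C1**2/q1**3
print(sp.simplify(eom1 - target1))  # 0
```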
We employ this result to integrate the system in terms of ultraelliptic functions (hyperelliptic functions of a genus two curve) and then carry out the reduction of hyperelliptic functions to elliptic ones by imposing additional constraints on the parameters of the system. The paper is organised as follows. In the first section we construct the Lax representation of the system, derive the genus two algebraic curve associated to the system, and reduce the problem to the solution of the Jacobi inversion problem associated with this curve. In the second section we develop the integration of the system in terms of {\it Kleinian hyperelliptic functions}, which represent a natural generalization of the Weierstrass elliptic functions to hyperelliptic curves of higher genera; this realization of abelian functions was recently discussed in \cite{bel97b,bel97c,eel99}. We outline in that section the Kleinian realization of hyperelliptic functions and give the principal formulae for the case of a genus two curve. In Section 4 we develop the reduction of Kleinian hyperelliptic functions to elliptic functions in terms of {\it Darboux coordinates} for a curve admitting an additional involution. In this way a quasiperiodic solution in terms of elliptic functions is obtained. In the last section we construct a set of elliptic periodic solutions on the basis of the spectral theory of the Hill equation with elliptic potential. 
\section{Lax representation} The system $1:2:1$ $(\kappa=\chi=\rho=1)$ is a completely integrable hamiltonian system \begin{eqnarray} \frac{\partial^2 q_1 }{\partial x^2} +(q_1^2+q_2^2)q_1-a_1q_1-\frac{C_1^2}{q_1^3}=0,\cr \label{system}\\ \frac{\partial^2 q_2 }{\partial x^2} +(q_1^2+q_2^2)q_2-a_2q_2-\frac{C_2^2}{q_2^3}=0 \nonumber \end{eqnarray} with the Hamiltonian \begin{equation} H=\frac12\sum_{i=1}^2 p_i^2+\frac14\left(q_1^{2}+q_2^{2}\right)^2-\frac12a_1 q_1^2- \frac12 a_2 q_2^2+ \frac12\frac{C_1^2}{q_1^2}+ \frac12\frac{C_2^2}{q_2^2}, \label{H}\end{equation} where the variables $(q_1,p_1;q_{2},p_{2})$ are canonically conjugate with respect to the standard Poisson bracket $\{\cdot\;;\;\cdot\}$. This system admits the Lax representation (a special case of the Lax representation given in \cite{k98}) \begin{eqnarray} \frac{\partial L(\lambda)}{\partial \zeta}&=&[M(\lambda),L(\lambda)],\cr \quad L(\lambda)&=&\left( \begin{array}{cc} V(\lambda) & U(\lambda) \\ W(\lambda) & -V(\lambda) \end{array} \right),\quad M=\left( \begin{array}{cc} 0 & 1 \\ Q(\lambda) & 0 \end{array} \right) \label{lax} \end{eqnarray} which is equivalent to (\ref{system}), where $U(\lambda),V(\lambda),W(\lambda),Q(\lambda)$ have the form \begin{eqnarray*} U(\lambda)&=&-a(\lambda)\left(1+\frac{1}{2}\frac{q_1^2}{\lambda-a_1} +\frac{1}{2}\frac{q_2^2}{\lambda-a_2}\right), \label{u} \\ V(\lambda)&=&-\frac12\frac{\mathrm{d}} {\mathrm{d\zeta}} U(\lambda) , \label{v} \\ W(\lambda)&=&a(\lambda)\left(-\lambda+\frac{q_1^2}{2}+\frac{q_2^2}{2} +\frac12\left(p_1^2+\frac{C_1^2}{ q_1^2}\right)\frac{1}{\lambda-a_1}\right.\cr&+&\left. \frac12\left(p_2^2+\frac{C_2^2}{q_2^2}\right) \frac{1}{ \lambda-a_2} \right) , \label{w} \\ Q(\lambda)&=&\lambda-q_1^2-q_2^2, \label{q} \end{eqnarray*} where $a(\lambda)=(\lambda-a_1)(\lambda -a_2)$. 
The Lax representation yields the hyperelliptic curve $V$ with coordinates $(\nu,\lambda)$, \[ \det(L(\lambda)-\frac12\nu 1_2)=0, \] where $1_2$ is the $2\times2$ unit matrix; the curve is given explicitly as \begin{eqnarray} \nu^2&=&4(\lambda-a_1)(\lambda-a_2)(\lambda^3-\lambda^2(a_1+a_2) +\lambda(a_1a_2-H)-F)\cr &-&C_1^2(\lambda-a_2)^2-C_2^2(\lambda-a_1)^2, \label{curve} \end{eqnarray} where $H$ is the hamiltonian (\ref{H}) and the second independent integral of motion $F$, $\{F;H\}=0$, is given as \begin{eqnarray} F&=&\frac14(p_1q_2-p_2q_1)^2+\frac12(q_1^2+q_2^2)(a_1a_2-\frac12a_2q_1^2 -\frac12a_1q_2^2)\cr &-&\frac12p_1^2a_2-\frac12p_2^2a_1 -\frac14\frac{(2a_2-q_2^2)C_1^2}{q_1^2} -\frac14\frac{(2a_1-q_1^2)C_2^2}{q_2^2}. \label{F}\end{eqnarray} We remark that the parameters $C_i$ are linked with the coordinates of the points $(a_i,\nu(a_i))$ by the formula \begin{equation} C_i^2=-\frac{\nu(a_i)^2}{(a_i-a_j)^2},\quad i,j=1,2,\quad i\neq j.\label{ccc} \end{equation} Let us write the curve (\ref{curve}) in the form \begin{eqnarray} \nu^2=4\lambda^5+\alpha_4\lambda^4+\alpha_3\lambda^3+\alpha_2\lambda^2 +\alpha_1\lambda+\alpha_0,\label{curvecan} \end{eqnarray} where the {\it moduli} $\alpha_i$ of the curve are expressible in terms of the physical parameters, namely the energy level $H$ and the constants $a_1,a_2$, $C_1,C_2$, as follows \begin{eqnarray} \alpha_4&=&-8(a_1+a_2),\cr \alpha_3&=&-4H+4(a_1+a_2)^2+8a_1a_2,\cr \alpha_2&=&4H(a_1+a_2)-4F-C_1^2-C_2^2-8a_1a_2(a_1+a_2),\cr \alpha_1&=&4F(a_1+a_2)-4a_1a_2H+2C_1^2a_2+2C_2^2a_1 +4a_1^2a_2^2,\cr \alpha_0&=&-4a_1a_2F-C_1^2a_2^2-C_2^2a_1^2.\nonumber \end{eqnarray} Let us define new coordinates $\mu_1,\mu_2$ as the zeros of the entry $U(\lambda)$ of the Lax operator. 
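The moduli follow by direct expansion of (\ref{curve}); the identification can be checked mechanically with sympy (the symbol names below are chosen here for the sketch):

```python
# Expand the spectral curve and compare its coefficients with the moduli
# alpha_4..alpha_0 listed in the text.
import sympy as sp

lam, a1, a2, H, F, C1, C2 = sp.symbols('lam a1 a2 H F C1 C2')

nu2 = (4*(lam - a1)*(lam - a2)
         *(lam**3 - lam**2*(a1 + a2) + lam*(a1*a2 - H) - F)
       - C1**2*(lam - a2)**2 - C2**2*(lam - a1)**2)

coeffs = sp.Poly(sp.expand(nu2), lam).all_coeffs()  # degree 5 downwards

alpha4 = -8*(a1 + a2)
alpha3 = -4*H + 4*(a1 + a2)**2 + 8*a1*a2
alpha2 = 4*H*(a1 + a2) - 4*F - C1**2 - C2**2 - 8*a1*a2*(a1 + a2)
alpha1 = 4*F*(a1 + a2) - 4*a1*a2*H + 2*C1**2*a2 + 2*C2**2*a1 + 4*a1**2*a2**2
alpha0 = -4*a1*a2*F - C1**2*a2**2 - C2**2*a1**2

assert coeffs[0] == 4  # leading coefficient of 4*lambda^5
for got, want in zip(coeffs[1:], [alpha4, alpha3, alpha2, alpha1, alpha0]):
    assert sp.expand(got - want) == 0
print("moduli verified")
```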
Then \begin{eqnarray} q_1^2=2\frac{(a_1-\mu_1)(a_1-\mu_2)}{a_1-a_2},\quad q_2^2=2 \frac{ (a_2-\mu_1)(a_2-\mu_2)}{a_2-a_1}.\label{qcoord} \end{eqnarray} The definition of $\mu_1,\mu_2$ in combination with the Lax representation leads to the equations \begin{equation} \nu_i=V(\mu_i)=-\frac12\frac{\partial}{\partial x}U(\mu_i),\quad i=1,2, \end{equation} which can be transformed to equations of the form\footnote{In what follows we denote the integral bounds by the second coordinate of a point of the curve $V=V(\nu,\lambda)$ (\ref{curve}).} \begin{eqnarray} u_1=\int_{a_1}^{\mu_1}\mathrm{d}u_1 +\int_{a_2}^{\mu_2}\mathrm{d}u_1, \\ u_2=\int_{a_1}^{\mu_1}\mathrm{d}u_2 +\int_{a_2}^{\mu_2} \mathrm{d}u_2\label{jip} \end{eqnarray} where $\mathrm{d}u_{1,2}$ denote the independent canonical holomorphic differentials \begin{equation} \mathrm{d}u_1= \frac{\mathrm{d}\lambda}{\nu},\quad \mathrm{d}u_2=\frac{\lambda \mathrm{d}\lambda}{\nu} \label{hodbas} \end{equation} and $u_1=a$, $u_2=2x+b$ with the constants $a,b$ defined by the initial conditions. The integration of the problem then reduces to solving the {\it Jacobi inversion problem} associated with the curve, which consists in expressing the symmetric functions of $(\mu_1,\mu_2,\nu_1,\nu_2)$ as functions of the two complex variables $(u_1,u_2)$. \section[Kleinian hyperelliptic functions]{Exact solutions in terms of Kleinian hyperelliptic functions} In this section we give the trajectories of the system under consideration in terms of Kleinian hyperelliptic functions (see, e.g. \cite{ba97,bel97c}) associated with the algebraic curve of genus two (\ref{curvecan}), which can also be written in the form \begin{eqnarray} \nu^2&=&4\prod_{i=0}^{4}(\lambda-\lambda_{i}), \label{gen2} \end{eqnarray} where $\lambda_{i}\neq \lambda_{j}$ for $i\neq j$ are the branching points. When all the branching points are real, the closed intervals $[\lambda_{2i-1},\lambda_{2i}]$, $i=1,2$, will be referred to further as lacunae \cite{zmnp80,mm75}. 
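The formulas (\ref{qcoord}) follow because $-U(\lambda)$ is a monic quadratic in $\lambda$ with roots $\mu_1,\mu_2$, so $(a_i-\mu_1)(a_i-\mu_2)=-U(a_i)$. A sympy check of the two relations behind this (again with symbol names chosen for the sketch):

```python
# Check the relations behind (qcoord): U(lambda) is quadratic in lambda with
# leading coefficient -1, so -U(lambda) = (lambda-mu1)(lambda-mu2), and
# evaluating at lambda = a_i gives q_i^2 = 2(a_i-mu1)(a_i-mu2)/(a_i-a_j).
import sympy as sp

lam, a1, a2, q1, q2 = sp.symbols('lam a1 a2 q1 q2')

U_expr = -(lam - a1)*(lam - a2)*(1 + q1**2/(2*(lam - a1))
                                   + q2**2/(2*(lam - a2)))
U = sp.cancel(sp.together(U_expr))  # cancel the (lam - a_i) factors

Upoly = sp.Poly(U, lam)
assert Upoly.degree() == 2
assert Upoly.LC() == -1  # hence -U = (lam - mu1)(lam - mu2)

# U(a1) = -q1^2 (a1 - a2)/2 and U(a2) = -q2^2 (a2 - a1)/2:
assert sp.simplify(U.subs(lam, a1) + q1**2*(a1 - a2)/2) == 0
assert sp.simplify(U.subs(lam, a2) + q2**2*(a2 - a1)/2) == 0
print("qcoord relations verified")
```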
Let us equip the curve with a homology basis $({\mathfrak a}_1,{\mathfrak a}_2; {\mathfrak b}_1, {\mathfrak b}_2)\in H_1(V,{\mathbb Z})$ and fix the basis in the space of holomorphic differentials as in (\ref{hodbas}). The associated canonical meromorphic differentials of the second kind $\mathrm{ d}\boldsymbol {r}^T=(\mathrm{ d}r_1,{\mathrm d} r_2)$ have the form \begin{equation}{\mathrm d}r_1=\frac{\alpha_3\lambda+2\alpha_4\lambda^2+12\lambda^3}{ 4\nu}d\lambda,\qquad {\mathrm d}r_2=\frac{\lambda^2}{ \nu}d\lambda.\label{rr} \end{equation} The $2\times 2$ matrices of their periods, \begin{eqnarray*} 2\omega&=&\left(\oint_{{\mathfrak a}_k}{\mathrm d} u_l\right)_{k,l=1,2},\quad 2\omega'=\left(\oint_{{\mathfrak b}_k}{\mathrm d} u_l\right)_{k,l=1,2},\\ 2\eta&=&\left(\oint_{{\mathfrak a}_k}{\mathrm d} r_l\right)_{k,l=1,2},\quad 2\eta'=\left(\oint_{{\mathfrak b}_k}{\mathrm d} r_l\right)_{k,l=1,2} \end{eqnarray*} satisfy the equations \[\omega'\omega^T-\omega{\omega'}^T=0,\quad \eta'\omega^T-\eta{\omega'}^T=-\frac{\imath\pi}{2}1_2,\quad \eta'\eta^T-\eta{\eta'}^T=0, \] which generalize the Legendre relations between complete elliptic integrals to the case $g=2$. 
The fundamental $\sigma$ function in this case is a natural generalization of the Weierstrass elliptic $\sigma$ function and is defined as follows \begin{eqnarray*} \sigma(\boldsymbol{u})&=&\frac{\pi}{\sqrt{\mathrm{det}(2\omega)}} \frac{\epsilon}{\sqrt[4]{\prod_{1\leq i<j\leq 5}(a_i-a_j)}}\\ &\times&\exp\left\{\boldsymbol{ u}^T\eta(2\omega)^{-1}\boldsymbol{u}\right\} \theta[\varepsilon]((2\omega)^{-1} \boldsymbol{ u}|\omega'\omega^{-1}), \end{eqnarray*} where $\epsilon^8=1$ and $\theta[\varepsilon](\boldsymbol{ v}|\tau)$ is the $\theta$ function with an odd characteristic $[\varepsilon]=\left[\begin{array}{cc}\varepsilon_1&\varepsilon_2\\ \varepsilon_1'&\varepsilon_2'\end{array}\right]$, which is the characteristic of the vector of Riemann constants, \[\theta[\varepsilon](\boldsymbol{ v}|\tau)=\sum_{\boldsymbol{ m}\in{\mathbb Z}^2}\mathrm{\exp}\;\imath\pi\left\{ (\boldsymbol{ m}+{\boldsymbol\varepsilon})^T\tau (\boldsymbol{ m}+{\boldsymbol \varepsilon})+2 (\boldsymbol{ v}+{\boldsymbol \varepsilon}')^T (\boldsymbol{ m}+{\boldsymbol\varepsilon})\right\}. \] Alternatively, the $\sigma$ function can be defined by its expansion near $\boldsymbol{u}=0$, \begin{equation} \sigma (\boldsymbol{u})=u_1+\frac{1}{24} \alpha_2u_1^3-\frac{1}{3}u_2^3+o(\boldsymbol{u}^5 )\label{ex} \end{equation} and the further terms can be computed with the help of the bilinear differential equations \cite{ba07}. 
The $\sigma$-function possesses the following periodicity property: put \[ \boldsymbol{E}(\boldsymbol{ m},\boldsymbol{ m}')=\eta\boldsymbol{ m} +\eta'\boldsymbol{ m}',\quad\text{and}\quad \boldsymbol{\Omega}(\boldsymbol{ m},\boldsymbol{ m }') =\omega\boldsymbol{ m} +\omega' \boldsymbol{ m}', \] where $\boldsymbol{ m},\boldsymbol{ m}'\in \mathbb{Z}^{2}$; then \begin{align*} &\sigma[\varepsilon](\boldsymbol{z}+ 2{\boldsymbol\Omega} (\boldsymbol{ m }, \boldsymbol{ m }'),\omega,\omega') =\mathrm{exp} \big\{ 2\boldsymbol{E}^T (\boldsymbol{ m},\boldsymbol{ m}') \big({\boldsymbol z}+ \boldsymbol{\Omega}(\boldsymbol{ m }, \boldsymbol{ m }')\big)\big\}\\ &\times \mathrm{exp} \{ -\pi \imath {\boldsymbol m }^T{\boldsymbol m}' -2\pi \imath {\boldsymbol\varepsilon }^T{\boldsymbol m}' \} \sigma[\varepsilon]( {\boldsymbol z},\omega,\omega'). \end{align*} As a modular function the Kleinian $\sigma$-function is invariant under transformations of the symplectic group, which is an important characteristic feature. We introduce the Kleinian hyperelliptic functions as the second logarithmic derivatives \begin{eqnarray*} \wp_{11}(\boldsymbol{ u})&=&-\frac{\partial^2}{ \partial u_1^2}\mathrm{ ln}\; \sigma(\boldsymbol{ u}),\quad \wp_{12}(\boldsymbol{ u})=-\frac{\partial^2}{\partial u_1\partial u_2}\mathrm{ ln}\; \sigma(\boldsymbol{ u}),\cr \wp_{22}(\boldsymbol{ u})&=&-\frac{\partial^2}{\partial u_2^2} \mathrm{ ln}\; \sigma(\boldsymbol{ u}). \end{eqnarray*} The multi-index symbols $\wp_{ijk}$ etc.\ denote the corresponding higher logarithmic derivatives with respect to the variables $u_i,u_j,u_k$ indicated by the indices $i,j,k$ etc. 
The principal result of the theory is the formula of Klein, which reads in the case of genus two as follows \begin{eqnarray} &&\sum_{k,l=1}^2\wp_{kl}\left(\int_{\infty}^{\mu} {\mathrm d}{\mathbf u}- \int_{\infty}^{\mu_1 } {\mathrm d}{\mathbf u}- \int_{\infty}^{\mu_2 } {\mathrm d}{\mathbf u}\right)\mu^{k-1}\mu_i^{l-1}\cr &=&\frac{F(\mu,\mu_i)+2\nu\nu_i}{4(\mu-\mu_i)^2}, \quad i=1,2,\label{klein} \end{eqnarray} where \begin{equation} F(\mu_1,\mu_2)=\sum_{r=0}^2\mu_1^r\mu_2^r[2\alpha_{2r} +\alpha_{2r+1}(\mu_1+\mu_2)].\label{fx1x2} \end{equation} By expanding these equalities in the vicinity of infinity we obtain the complete set of relations for the hyperelliptic functions. The first group of relations represents the solution of the Jacobi inversion problem in the form \begin{equation} \lambda^2-\wp_{22}(\boldsymbol{u})\lambda -\wp_{12}(\boldsymbol{u})=0,\label{bolza2}\end{equation} that is, the pair $(\mu_1,\mu_2)$ is the pair of roots of (\ref{bolza2}). So we have \begin{equation}\wp_{22}(\boldsymbol{u}) =\mu_1+\mu_2,\;\wp_{12}(\boldsymbol{u})=-\mu_1\mu_2.\label{b1}\end{equation} The corresponding $\nu_i$ are expressed as \begin{equation} \nu_i=\wp_{222}(\boldsymbol{u})\mu_i+\wp_{122}(\boldsymbol{ u}),\quad i=1,2. \label{y2} \end{equation} The functions $\wp_{22},\wp_{12}$ are called basis functions. The function $\wp_{11}(\boldsymbol{u})$ can also be expressed as a symmetric function of $\mu_1,\mu_2$ and $\nu_1,\nu_2$: \begin{equation} \wp_{11}(\boldsymbol{u})= \frac{F(\mu_1,\mu_2)-2\nu_1\nu_2}{4(\mu_1-\mu_2)^2}, \label{b2} \end{equation} where $F(\mu_1,\mu_2)$ is given in (\ref{fx1x2}). 
Further from (\ref{y2}) we have \begin{eqnarray} \wp_{222}(\boldsymbol{ u})&=&\frac{\nu_1-\nu_2}{ \mu_1-\mu_2},\quad \wp_{221}(\boldsymbol{ u})=\frac{\mu_1\nu_2-\mu_2\nu_1}{ \mu_1-\mu_2}, \nonumber \\ \wp_{211}(\boldsymbol{ u})&=&-\frac{\mu_1^2\nu_2-\mu_2^2\nu_1}{ \mu_1-\mu_2}, \nonumber \\ \wp_{111}(\boldsymbol{ u})&=&\frac{\nu_2\psi(\mu_1,\mu_2)-\nu_1 \psi(\mu_2,\mu_1)}{ 4(\mu_1-\mu_2)^3}, \label{thirdder} \end{eqnarray} where \begin{eqnarray} \psi(\mu_1,\mu_2) &=& 4\alpha_0 + \alpha_1(3\mu_1 + \mu_2) + 2\alpha_2\mu_1(\mu_1 +\mu_2) \nonumber \\ &+& \alpha_3\mu_1^2(\mu_1 +3\mu_2) + 4\alpha_4\mu_1^2\mu_2 +4\mu_1^2\mu_2(3\mu_1 +\mu_2). \nonumber \end{eqnarray} The next group of relations, which can be derived by expanding equations (\ref{klein}), expresses the pairwise products of the $\wp_{ijk}$ functions in terms of $\wp_{22},\wp_{12},\wp_{11}$ and the constants $\alpha_s$ of the defining equation (\ref{gen2}). We give here only the basic equations \begin{eqnarray*} \wp_{222}^2&=4\wp_{22}^3+4\wp_{12}\wp_{22}+\alpha_4\wp_{22}^2+4\wp_{11} +\alpha_3\wp_{22}+\alpha_2,\\ \wp_{222}\wp_{122}&=4\wp_{12}\wp_{22}^2 +2\wp_{12}^2-2\wp_{11}\wp_{22}+\alpha_4\wp_{12}\wp_{22} \\ &+\frac12\alpha_3\wp_{12}+\frac12\alpha_1 , \\ \wp_{122}^2&=4\wp_{22}\wp_{12}^2-4\wp_{11}\wp_{12}+ \alpha_4\wp_{12}^2-\alpha_0. \end{eqnarray*} All such expressions may be rewritten in the form of an {\it extended cubic relation} as follows. 
For arbitrary $\boldsymbol{ l},\boldsymbol { k}\in {\mathbb C}^4$ the following formula is valid \cite{ba07} \begin{equation}\boldsymbol{l}^T\pi\pi^T \boldsymbol{k}=\frac14{\det}\;\left(\begin{array}{cc}H&\boldsymbol{l}\\ \boldsymbol{k}^T&0\end{array}\right),\label{kum1}\end{equation} where $\pi^T=(\wp_{222},-\wp_{221},\wp_{211},-\wp_{111})$ and $H$ is the $4\times4$ matrix: \begin{equation} H= \left ( \begin{array}{cccc} \alpha_0& \frac{1}{2} \alpha_1&-2 \wp_{11}&-2 \wp_{12} \\ \frac{1}{2} \alpha_1& \alpha_2+4 \wp_{11}& \frac{1}{2} \alpha_3+ 2 \wp_{12}&-2 \wp_{22} \\-2 \wp_{11}& \frac{1}{2} \alpha_3+2 \wp_{12}& \alpha_4+4 \wp_{22}&2 \\ -2 \wp_{12}&-2 \wp_{22}&2&0 \end{array} \right) . \end{equation} The vector $\pi$ satisfies the equation $H\pi=0$, and so the functions $\wp_{22},\wp_{12}$ and $\wp_{11}$ are related by the equation \begin{equation}\mathrm{ det}\; H=0.\label{kum5} \end{equation} The equation (\ref{kum5}) defines the quartic Kummer surface $\mathbb K$ in $\mathbb C^3$ \cite{hu05}. The next group of equations, derived by expanding the equalities (\ref{klein}), consists of the expressions of the four-index symbols $\wp_{ijkl}$ as quadrics in $\wp_{ij}$ \begin{eqnarray} &\wp_{2222}=6\wp_{22}^2+\frac12\alpha_3+\alpha_4\wp_{22} +4\wp_{12},\label{eeq1}\\ &\wp_{2221}=6\wp_{22}\wp_{12}+\alpha_4\wp_{12}-2\wp_{11},\label{eeq3}\\ &\wp_{2211}=2\wp_{22}\wp_{11}+4\wp_{12}^2+ \frac12\alpha_3\wp_{12},\label{eeq5}\\ &\wp_{2111}=6\wp_{12}\wp_{11}+\alpha_2\wp_{12} -\frac12\alpha_1\wp_{22}-\alpha_0,\label{eeq4}\\ &\wp_{1111}=6\wp_{11}^2-3\alpha_0\wp_{22} +\alpha_1\wp_{12}+\alpha_2\wp_{11}- \frac12\alpha_0\alpha_4+\frac18\alpha_1\alpha_3.\label{eeq2} \end{eqnarray} These equations can be identified with completely integrable partial differential equations and dynamical systems, which are solved in terms of Abelian functions of a hyperelliptic curve of genus two. 
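The identities above can be tested numerically directly from the definitions (\ref{b1}), (\ref{y2}), (\ref{b2}) and (\ref{thirdder}). The following pure-Python sketch (our own illustration, with arbitrarily chosen coefficients $\alpha_s$ and sample points $\mu_1,\mu_2$ on the curve $\nu^2=4\mu^5+\alpha_4\mu^4+\cdots+\alpha_0$) verifies the three basis quadratic relations (in this check the constant term of the last relation enters as $+\alpha_0$) and that the resulting point $(\wp_{22},\wp_{12},\wp_{11})$ satisfies the Kummer condition $\det H=0$:

```python
# Numerical sanity check of the basis relations and the Kummer quartic,
# built only from the definitions (b1), (y2), (b2), (thirdder) in the text.
import math

a0, a1, a2, a3, a4 = 5.0, -1.0, 2.0, 0.5, -3.0   # arbitrary curve coefficients

def nu(mu):                                       # branch of nu on the curve
    return math.sqrt(4*mu**5 + a4*mu**4 + a3*mu**3 + a2*mu**2 + a1*mu + a0)

mu1, mu2 = 1.0, 4.0
nu1, nu2 = nu(mu1), nu(mu2)
F = (2*a0 + a1*(mu1 + mu2) + mu1*mu2*(2*a2 + a3*(mu1 + mu2))
     + mu1**2*mu2**2*(2*a4 + 4*(mu1 + mu2)))
p22, p12 = mu1 + mu2, -mu1*mu2                    # (b1)
p11 = (F - 2*nu1*nu2)/(4*(mu1 - mu2)**2)          # (b2)
p222 = (nu1 - nu2)/(mu1 - mu2)                    # (thirdder)
p122 = (mu1*nu2 - mu2*nu1)/(mu1 - mu2)

checks = [
    (p222**2,
     4*p22**3 + 4*p12*p22 + a4*p22**2 + 4*p11 + a3*p22 + a2),
    (p222*p122,
     4*p12*p22**2 + 2*p12**2 - 2*p11*p22 + a4*p12*p22 + a3*p12/2 + a1/2),
    (p122**2,
     4*p22*p12**2 - 4*p11*p12 + a4*p12**2 + a0),
]

H = [[a0,     a1/2,         -2*p11,        -2*p12],
     [a1/2,   a2 + 4*p11,   a3/2 + 2*p12,  -2*p22],
     [-2*p11, a3/2 + 2*p12, a4 + 4*p22,    2.0],
     [-2*p12, -2*p22,       2.0,           0.0]]

def det(m):
    # Laplace expansion along the first row (fine for a 4x4 matrix)
    if len(m) == 1:
        return m[0][0]
    return sum((-1)**j*m[0][j]*det([r[:j] + r[j+1:] for r in m[1:]])
               for j in range(len(m)))

for lhs, rhs in checks:
    assert abs(lhs - rhs) < 1e-9*max(1.0, abs(rhs))
assert abs(det(H)) < 1e-7
print("basis relations and Kummer quartic verified")
```

The check is insensitive to the choice of branch for $\nu_1,\nu_2$, since the sign flips cancel consistently in the relations.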
In particular, the first two equations represent the KdV hierarchy with ``times" $(t_1,t_2)=(u_2,u_1)=(x,t)$, \begin{equation}{\mathcal X}_{k+1}[{\mathsf U}]={\mathcal R}{\mathcal X}_{k}[{\mathsf U}] \end{equation} where ${\mathcal R}=\partial_x^2- {\mathsf U}+c -\frac12{\mathsf U}_x\partial^{-1}$, with $c=\alpha_4/12$, is the Lenard recursion operator. The first two equations from the hierarchy are \begin{equation} {\mathsf U}_{t_1}={\mathsf U}_{x},\quad {\mathsf U}_{t_2}=\frac12({\mathsf U}_{xxx}-6{\mathsf U}_{x}{\mathsf U}), \label{kdv} \end{equation} the second of which is the KdV equation; it is obtained from (\ref{eeq1}) by differentiation with respect to $x=u_2$ and setting ${\mathsf U}=2\wp_{22}+\alpha_4/6$. The equation (\ref{eeq1}) plays the role of the stationary equation in the hierarchy and is obtained by the action of the recursion operator. Let us finally introduce the {\it Baker-Akhiezer} function, which within the formalism developed is expressible in terms of the Kleinian $\sigma$-function as follows \begin{equation} \Psi(\lambda,\boldsymbol{u})= \frac{ \sigma\left(\int_{\infty}^{\lambda}{\mathrm d} \boldsymbol{ u}- {\mathbf u} \right)} {\sigma(\boldsymbol{u}) } \mathrm {exp}\left\{ \int_{\infty}^{\lambda}{\mathrm d} {\mathbf r}^T \boldsymbol{u} \right\},\label{BAF} \end{equation} where $\lambda$ is arbitrary and $\boldsymbol u$ is the Abel image of an arbitrary point $(\nu_1,\mu_1)\times (\nu_2,\mu_2)\in V \times V $. It is straightforward to show by direct calculation, based on the relations for the three- and four-index Kleinian $\wp$--functions, that $\Psi(\lambda,\boldsymbol{u})$ satisfies the Schr\"odinger equation \begin{equation} (\frac{\partial^2}{{\partial u_2}^2}-2\wp_{22}(\boldsymbol{u})) \Psi(\lambda,\boldsymbol{u})= \left(\lambda+\frac14\alpha_{4}\right)\Psi(\lambda,\boldsymbol{u})\label{sch} \end{equation} for all $(\nu,\mu)$. 
Now we are in a position to write the solution of the system in terms of Kleinian $\sigma$-functions and identify the constants in terms of the moduli of the curve. Using (\ref{b1}),(\ref{qcoord}) the solutions of (\ref{system}) have the following form in terms of the Kleinian functions $\wp_{22}(\boldsymbol{ u}), \wp_{12}(\boldsymbol{ u})$ \begin{eqnarray} q_1^2&=&2\frac{a_1^{2}-\wp_{22}(\boldsymbol u)a_1-\wp_{12}(\boldsymbol u)}{a_1-a_2}, \cr q_2^2&=&2\frac{a_2^{2}-\wp_{22}(\boldsymbol u)a_2-\wp_{12}(\boldsymbol u)}{a_2-a_1}, \label{solution} \end{eqnarray} where the vector $\boldsymbol {u}^T=(a,2x+b)$. \section[Quasi-periodic elliptic solutions] {Periodic solutions expressed in terms of elliptic functions of different moduli} We consider in this section the Jacobi reduction (see e.g.\ \cite{kr03}) of hyperelliptic integrals to elliptic ones, when the hyperelliptic curve $V$ has the form \begin{equation} w^2=z(z -1)(z -\alpha )(z -\beta )( z -\alpha \beta ) \label{curver} \end{equation} The curve (\ref{curver}) is a two-sheeted cover of two tori $$\pi _{\pm }:V=(w,z)\rightarrow E_{\pm }=(\eta _{\pm },\xi _{\pm }),$$ \begin{equation} \eta _{\pm }^2=\xi _{\pm }(1-\xi _{\pm })(1-k_{\pm }^2\xi _{\pm }) \end{equation} with Jacobi moduli \begin{equation} k_{\pm }^2=-\frac{(\sqrt{\alpha }\mp \sqrt{\beta })^2}{(1-\alpha )(1-\beta )} . \label{jacmod} \end{equation} The covers $\pi _{\pm }$ are described by the formulae \begin{eqnarray} \eta _{\pm }=-\sqrt{(1-\alpha )(1-\beta )}\frac{z\mp \sqrt{\alpha \beta }}{ (z-\alpha )^2(z-\beta )^2}w, \label{r1} \\ \xi=\xi _{\pm}=\frac{(1-\alpha )(1-\beta )z}{(z-\alpha )(z-\beta )}. \label{r2} \end{eqnarray} The following formula is valid for the reduction of the holomorphic hyperelliptic differentials to elliptic ones: \begin{equation} \frac{d\xi _{\pm }}{\eta _{\pm }}=-\sqrt{(1-\alpha )(1-\beta )}(z\mp \sqrt{ \alpha \beta })\frac{\mathrm{d} z}{w}. 
\label{r3} \end{equation} Suppose that the spectral curve (\ref{curvecan}) admits the symmetry of (\ref{curver}), and apply the discussed reduction to the problem. Then the equations of the Jacobi inversion problem (\ref{jip}) can be rewritten in the form \begin{eqnarray} \sqrt{(1-\beta)(1-\alpha)}\sum_{i=1}^2\int_{z_0}^{z_i} (z -\sqrt{\alpha\beta})\frac{\mathrm{d}z} {w}=2u_{+}, \label{j11} \\ \sqrt{(1-\beta)(1-\alpha)}\sum_{i=1}^2\int_{z_0}^{z_i} (z +\sqrt{\alpha\beta})\frac{\mathrm{d}z} {w}=2u_{-} ,\label{j22} \end{eqnarray} with $(\nu_i,\mu_i)=(2w_i,z_i)$ and \begin{equation} u_{\pm }=-\sqrt{(1-\alpha )(1-\beta )}(u_2\mp \sqrt{\alpha \beta }u_1). \end{equation} Reducing the hyperelliptic integrals in (\ref{j11}), (\ref{j22}) to elliptic ones according to (\ref{r1}), (\ref{r2}), we obtain \begin{eqnarray*} \int_{0}^{\sqrt{\xi(\mu_1)}}\frac{\mathrm{d}x}{\sqrt{(1-x^2)(1-k^2_{\pm}x^2)}} +\int_{0}^{\sqrt{\xi(\mu_2)}}\frac{\mathrm{d}x} {\sqrt{(1-x^2)(1-k^2_{\pm}x^2)}} =u_{\pm}. \end{eqnarray*} One can further express the symmetric functions of $\mu_1,\mu_2, \nu_1, \nu_2$ on $V\times V$ in terms of elliptic functions on the tori $E_{\pm}$. To this end we introduce the {\it Darboux coordinates} (see \cite{hu05}, p.~105) \begin{eqnarray} X_1=\mbox{sn}(u_{+},k_{+})\mbox{sn}(u_{-},k_{-}),\cr X_2=\mbox{cn}(u_{+},k_{+})\mbox{cn}(u_{-},k_{-}), \label{darb} \\ X_3=\mbox{dn}(u_{+},k_{+})\mbox{dn}(u_{-},k_{-}), \nonumber \end{eqnarray} where $\mbox{sn}(u_{\pm },k_{\pm }),\mbox{cn}(u_{\pm },k_{\pm }),\mbox{dn }(u_{\pm },k_{\pm })$ are standard Jacobi elliptic functions. 
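The covering formulae (\ref{r1}), (\ref{r2}) and the moduli (\ref{jacmod}) admit a direct numerical test: for a point $(w,z)$ of the curve (\ref{curver}), the images $(\xi_{\pm},\eta_{\pm})$ must satisfy $\eta_{\pm}^2=\xi_{\pm}(1-\xi_{\pm})(1-k_{\pm}^2\xi_{\pm})$. A pure-Python sketch (our own illustration, with arbitrarily chosen $\alpha,\beta$ and sample point $z$):

```python
# Numerical check of the covers pi_+ and pi_-: a point (w, z) of the curve
# w^2 = z(z-1)(z-a)(z-b)(z-ab) is mapped to (xi, eta), which must satisfy
# eta^2 = xi (1 - xi)(1 - k^2 xi) with the quoted Jacobi moduli.
import math

a, b = 0.3, 0.07                   # arbitrary sample values of alpha, beta
z = 2.0                            # arbitrary regular point of the curve
w = math.sqrt(z*(z - 1)*(z - a)*(z - b)*(z - a*b))

A = (1 - a)*(1 - b)
xi = A*z/((z - a)*(z - b))         # the same for both covers
for sign in (+1, -1):              # sign = +1 -> pi_+, sign = -1 -> pi_-
    k2 = -(math.sqrt(a) - sign*math.sqrt(b))**2/A
    eta = -math.sqrt(A)*(z - sign*math.sqrt(a*b))*w/((z - a)**2*(z - b)**2)
    assert abs(eta**2 - xi*(1 - xi)*(1 - k2*xi)) < 1e-12
print("both covers verified")
```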
We further apply the addition theorem for Jacobi elliptic functions, \begin{eqnarray} \mbox{sn}(u_1+u_2,k)=\frac{s_1^2-s_2^2}{s_1c_2d_2-s_2c_1d_1},\cr \mbox{cn }(u_1+u_2,k)=\frac{s_1c_1d_2-s_2c_2d_1}{s_1c_2d_2-s_2c_1d_1},\cr \mbox{dn} (u_1+u_2,k)=\frac{s_1d_1c_2-s_2d_2c_1}{s_1c_2d_2-s_2c_1d_1}, \nonumber \end{eqnarray} where we denoted $s_i=\mbox{sn}(u_i,k),c_i=\mbox{cn}(u_i,k),d_i=\mbox{ dn}(u_i,k) $, $i=1,2$, together with the formulae (\ref{b1},\ref{b2}) for the Kleinian hyperelliptic functions. Straightforward calculations lead to the formulae \begin{eqnarray} X_1=-\frac{(1-\alpha )(1-\beta )(\alpha \beta +\wp _{12})}{(\alpha +\beta )(\wp _{12}-\alpha \beta )+\alpha \beta \wp _{22}+\wp _{11}}, \label{x1} \cr X_2=-\frac{(1+\alpha \beta )(\alpha \beta -\wp _{12})-\alpha \beta \wp _{22}-\wp _{11}}{(\alpha +\beta )(\wp _{12}-\alpha \beta )+\alpha \beta \wp _{22}+\wp _{11}}, \label{x2} \\ X_3=\frac{\alpha \beta \wp _{22}-\wp _{11}}{(\alpha +\beta )(\wp _{12}-\alpha \beta )+\alpha \beta \wp _{22}+\wp _{11}}. \label{x3}\nonumber \end{eqnarray} These formulae can be inverted as follows \begin{eqnarray} \wp_{11}=(B-1)\frac{A(X_2+X_3)-B(X_3+1)}{X_1+X_2-1}, \label{wp11} \\ \wp_{12}=(B-1)\frac{1+X_1-X_2}{X_1+X_2-1}, \label{wp12} \\ \wp_{22}=\frac{A(X_2-X_3)+B(X_3-1)}{X_1+X_2-1}, \label{wp22} \end{eqnarray} where $A=\alpha +\beta $, $B=1+\alpha \beta $. The results obtained permit us to present some solutions of the initial problem in elliptic functions, which are quasi-periodic in $\zeta$. 
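These addition formulae can be checked numerically (with the numerator of $\mbox{dn}(u_1+u_2,k)$ read as $s_1d_1c_2-s_2d_2c_1$) by integrating the defining system $\mbox{sn}'=\mbox{cn}\,\mbox{dn}$, $\mbox{cn}'=-\mbox{sn}\,\mbox{dn}$, $\mbox{dn}'=-k^2\,\mbox{sn}\,\mbox{cn}$. The following pure-Python sketch is our own illustration, not part of the original text:

```python
# Numerical check of the second form of the Jacobi addition theorem,
# computing sn, cn, dn from their defining ODE system with classical RK4.
def jacobi(u, k, n_steps=10000):
    """Return (sn(u,k), cn(u,k), dn(u,k)) for real u >= 0."""
    k2 = k*k
    def f(y):
        s, c, d = y
        return (c*d, -s*d, -k2*s*c)
    y = (0.0, 1.0, 1.0)            # sn(0)=0, cn(0)=1, dn(0)=1
    h = u/n_steps
    for _ in range(n_steps):
        k1 = f(y)
        k2s = f(tuple(y[i] + 0.5*h*k1[i] for i in range(3)))
        k3s = f(tuple(y[i] + 0.5*h*k2s[i] for i in range(3)))
        k4s = f(tuple(y[i] + h*k3s[i] for i in range(3)))
        y = tuple(y[i] + h*(k1[i] + 2*k2s[i] + 2*k3s[i] + k4s[i])/6
                  for i in range(3))
    return y

k = 0.6
u1, u2 = 0.7, 0.4
s1, c1, d1 = jacobi(u1, k)
s2, c2, d2 = jacobi(u2, k)
S, C, D = jacobi(u1 + u2, k)

den = s1*c2*d2 - s2*c1*d1
assert abs(S - (s1**2 - s2**2)/den) < 1e-8
assert abs(C - (s1*c1*d2 - s2*c2*d1)/den) < 1e-8
assert abs(D - (s1*d1*c2 - s2*d2*c1)/den) < 1e-8
print("addition formulae verified")
```

At $k=0$ the same check reduces to the trigonometric addition theorems, since then $\mbox{sn}=\sin$, $\mbox{cn}=\cos$, $\mbox{dn}=1$.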
Using (\ref{wp12}) and (\ref{wp22}) for the solutions of (\ref{system}) in the form (\ref{solution}) we have \begin{eqnarray*} &&q_1^2=2\frac{1}{a_1-a_2}\left(a_1^{2}- \frac{A(X_2-X_3)+B(X_3-1)}{X_1+X_2-1}a_1\right.\cr&&\left.\qquad- (B-1)\frac{1+X_1-X_2}{X_1+X_2-1}\right), \\ &&q_2^2=2\frac{1}{a_2-a_1}\left(a_2^{2}- \frac{A(X_2-X_3)+B(X_3-1)}{X_1+X_2-1}a_2\right.\cr&&\left.\qquad- (B-1)\frac{1+X_1-X_2}{X_1+X_2-1}\right),\end{eqnarray*} where \begin{equation} u_{\pm }=-2\sqrt{(1-\alpha )(1-\beta )}(x\mp c ) \end{equation} and $c$ is a constant depending on the initial conditions. We also remark that the derived quasi-periodic solution is associated with the Jacobi reduction case, in which the ultraelliptic integrals were reduced to elliptic ones by means of a second-order substitution. In the language of two-dimensional $\theta$-functions this means that the associated period matrix is equivalent to a matrix with the off-diagonal element $\tau_{12}=\frac12$. This reduction case was considered in various places (see e.g.\ \cite{bbeim94}). Solutions of this type for the nonlinear Schr\"odinger equation ($\sigma=0$) were recently obtained in \cite{ch95}. The analogous technique can be carried out for the other well-documented reduction cases, when $\tau_{12}=1/N$ with $N=3,4,\ldots$. In general such a reduction can be carried out for covers of arbitrary degree within the Weierstrass-Poincar\'e reduction theory (see e.g. \cite{kr03,bbeim94}). \section{Elliptic periodic solutions} In this section we develop a method (see also \cite{k89,ek94,ee94b}) which allows us to construct periodic solutions of (\ref{system}) in a straightforward way, based on the application of the spectral theory for the Schr\"odinger equation with elliptic potentials \cite{amm77,mm75}. We start with the formula (\ref{eeq1}) and with the equation for the Baker function $\Psi(\lambda;\boldsymbol{ u})$. 
\begin{eqnarray} &&\frac {\mathrm{d}^2} {\mathrm{d} x^2} \Psi(\lambda,\boldsymbol{ u}) -{\mathsf U} \Psi(\lambda,\boldsymbol{ u}) = (\lambda+\frac{\alpha_{4}}{4})\Psi(\lambda,\boldsymbol{ u}), \label{baker} \end{eqnarray} where we identify the potential $${\mathsf U}=2\wp_{22}+\frac16\alpha_{4}.$$ We assume, without losing generality, that the associated curve has the property $\alpha_4=0$. To make this assumption applicable to the initial curve of the system (\ref{system}) derived from the Lax representation, we make the shift of the spectral parameter \begin{equation} \lambda\longrightarrow \lambda+\Delta,\qquad \Delta=\frac25a_1+\frac25a_2. \label{shift} \end{equation} Suppose that ${\mathsf U}$ is a two-gap Lam\'e or a two-gap {\it Treibich-Verdier} potential, which means that \begin{equation} {\mathsf U}(x)=2\sum_{i=1}^N \wp(x-x_i) \label{TVP}, \end{equation} where $\wp(x)$ is the standard Weierstrass elliptic function with periods $2\omega,2\omega'$ and the numbers $x_i$ take values from the set $\{ 0,\omega_1=\omega,\omega_2=\omega+\omega',\omega_3=\omega'\}$. It is known that the set of such potentials is exhausted by the six potentials \cite{tv90,ee95a} \begin{eqnarray} {\mathsf U}_3(x)&=&6\wp(x), \label{L3} \\ {\mathsf U}_4(x)&=&6\wp(x)+2\wp(x+\omega_i),\quad i=1,2,3,\label{TV4}\\ {\mathsf U}_5(x)&=&6\wp(x)+2\wp(x+\omega_i)+2\wp(x+\omega_j), \cr&& \qquad\qquad i\neq j=1,2,3, \label{TV5}\\ {\mathsf U}_6(x)&=&6\wp(x)+6\wp(x+\omega_i),\quad i=1,2,3,\nonumber\\ {\mathsf U}_8(x)&=&6\wp(x)+2\sum_{i=1}^3\wp(x+\omega_i), \nonumber\\ {\mathsf U}_{12}(x)&=&6\wp(x)+6\sum_{i=1}^3\wp(x+\omega_i), \nonumber \end{eqnarray} where the subscript shows the number of $2\wp$ functions involved and displays the degree of the cover of the associated genus two curve over the elliptic curve. Because the last three potentials can be obtained from the first three by the Gauss transform, we shall call the first three the {\it basis potentials}. 
The potential (\ref{L3}) is the two-gap Lam\'e potential, which is associated with a three-sheeted cover of the elliptic curve; the potentials (\ref{TV4},\ref{TV5}) are Treibich-Verdier potentials \cite{ve90,tv90} associated with four- and five-sheeted covers respectively. To display the class of periodic solutions of the system (\ref{system}) we introduce the {\it generalized Hermite polynomial} ${\mathcal F}(x,\lambda)$ by the formula \begin{equation} {\mathcal F}(x,\lambda)=\lambda^2-\pi_{22}(x)\lambda-\pi_{12}(x) \end{equation} with $\pi_{22}(x)$ and $\pi_{12}(x)$ given as follows \begin{eqnarray} \pi_{22}(x) &=&\sum_{j=1}^N \wp (x - x_j) + \frac13 \sum_{j=1}^5 \lambda_j,\label{tr1} \cr \pi_{12}(x) &=&-3\,\sum_{i<j} \wp(x - x_i) \wp(x -x_j) - \frac{Ng_2}{8} \nonumber\\ &&-\frac16 \sum_{i<j} \lambda_i \lambda_j + \frac{1}{ 6}\left(\sum_{j=1}^5 \lambda_j^2 \right) \label{tr2} \end{eqnarray} where the $x_i$ are half-periods and $N$ is the degree of the cover (see for example \cite{ek94}). The introduction of this formula is based on the possibility of computing the symmetric function $\mu_1\mu_2$ as a differential polynomial in the first symmetric function with the help of the equation (\ref{eeq1}), which serves in this context as the ``trace formula'' \cite{zmnp80}. 
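The trace formulae can be spot-checked in the Lam\'e case ${\mathsf U}_3=6\wp(x)$, where $N=3$, all $x_j=0$, and the $\lambda_j$ are the roots of $(\lambda^2-3g_2)(\lambda+3e_1)(\lambda+3e_2)(\lambda+3e_3)$; then (\ref{tr1}), (\ref{tr2}) should give $\pi_{22}=3\wp$ and $\pi_{12}=-9\wp^2+\frac94 g_2$. A pure-Python sketch (our own illustration; $\wp(x)$ is replaced by a free sample value):

```python
# Check that the trace formulae (tr1), (tr2) reproduce the Lame values
# pi22 = 3 wp and pi12 = -9 wp^2 + (9/4) g2 for U_3 = 6 wp(x).
import math

e1, e2 = 1.2, -0.3                 # arbitrary sample values, e1+e2+e3 = 0
e3 = -e1 - e2
g2 = -4*(e1*e2 + e1*e3 + e2*e3)    # g2 = 4.68 > 0 for this sample
lam = [math.sqrt(3*g2), -math.sqrt(3*g2), -3*e1, -3*e2, -3*e3]

S1 = sum(lam)
S2 = sum(lam[i]*lam[j] for i in range(5) for j in range(i + 1, 5))
Ssq = sum(l*l for l in lam)

p = 0.7321                         # wp(x) treated as a free sample value
pi22 = 3*p + S1/3                  # (tr1) with N = 3, all x_j = 0
pi12 = -3*(3*p*p) - 3*g2/8 - S2/6 + Ssq/6   # (tr2)

assert abs(pi22 - 3*p) < 1e-9
assert abs(pi12 - (-9*p*p + 9*g2/4)) < 1e-9
print("trace formulae reproduce the Lame Hermite polynomial")
```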
The solutions of the system (\ref{system}) are then given as \begin{equation} q_1^2(x)=2\frac{{\mathcal F}(x,a_{1}-\Delta)} {a_{1}-a_{2}} , \quad q_2^2(x)=2\frac{{\mathcal F}(x,a_{2}-\Delta)} {a_{2}-a_{1}} .\label{answer} \end{equation} The final formula for the solutions of the system (\ref{manakov}) then reads \begin{eqnarray} {\mathcal U}(x,t)=\,\sqrt{2\frac{{\mathcal F}(x,a_{1}-\Delta)} {a_{1}-a_{2}}} \mathrm{ exp}\left \{\imath a_1 t-\frac12\nu(a_1-\Delta)\int\limits_{\cdot}^x \frac{{\mathrm d}x}{{\mathcal F}(x,a_1-\Delta)}\right\},\cr \label{final}\\ {\mathcal V}(x,t)=\sqrt{2\frac{{\mathcal F}(x,a_{2}-\Delta)} {a_{2}-a_{1}}} \mathrm{ exp}\left \{\imath a_2 t-\frac12\nu(a_2-\Delta)\int\limits_{\cdot}^x \frac{{\mathrm d}x}{{\mathcal F}(x,a_2-\Delta)}\right\} ,\nonumber \end{eqnarray} where we used (\ref{answer}) and (\ref{ccc}). It is important for our consideration to remark that if the potential is known, then the associated algebraic curve of genus two can be described with the help of the Novikov equation \cite{no74}. Let us consider the two-gap potential normalized by its expansion near the singular point as \begin{equation} {\mathsf U}(x) = \frac{6}{ x^2} + a x^2 + b x^4 + c x^6 + d x^8 + O(x^{10}). \label{decomposition} \end{equation} Then the algebraic curve associated with this potential has the form \cite{be89b} \begin{eqnarray} \nu^2&=&\lambda^5 - \frac{5\cdot7}{2} a\lambda^3 +\frac{3^2\cdot7}{2} b\lambda^2 \cr &+& \left( \frac{3^4\cdot7}{8} a^2 +\frac{3^3\cdot11}{4}c \right)\lambda -\frac{3^4\cdot17}{4}ab+\frac{3^2\cdot11\cdot13}{2}d\label{curvelame}. \end{eqnarray} We shall consider below examples of genus two curves which are associated with the two-gap elliptic potentials (\ref{L3}), (\ref{TV4}) and (\ref{TV5}). 
Consider the potential ${\mathsf U}_3$ and construct the associated curve (\ref{curvelame}) \begin{equation}{\mathcal L}^2=(\lambda^2-3g_2)(\lambda+3e_1) (\lambda+3e_2)(\lambda+3e_3).\label{curve3}\end{equation} The Hermite polynomial ${\mathcal F}_3(\wp(x),\lambda)$ \cite{ww86} associated with the Lam\'e potential (\ref{L3}), which is already normalized as in (\ref{decomposition}), has the form \begin{equation} {\mathcal F}_3(\wp(x),\lambda)=\lambda^{2}- 3\wp(x)\lambda + 9\wp^{2}(x)-\frac{9}{4}g_{2}. \label{HerPol} \end{equation} Then the finite and real solution to the system (\ref{system}) is given by the formula (\ref{answer}) with the Hermite polynomial taken at the argument $x+\omega'$ (the shift by $\omega'$ ensures holomorphy of the solution). The solution is real under the choice of the arbitrary constants $a_{1,2}$ in such a way that the constants $a_{1,2}-\Delta$ lie in {\it different} lacunae. According to (\ref{ccc}) the constants $C_{i}$ are then given as \[C_i^2=-\frac{4\nu^2(a_i-\Delta)}{(a_i-a_j)^2}\label{cccc}, \] where $\Delta$ is the shift (\ref{shift}) and $\nu$ is the coordinate of the curve (\ref{curve3}); the integrals $H$ and $F$ have the following form \begin{eqnarray} && H=\frac{1}{25}\left(a_1+a_2\right)^3+\frac{21}{4} g_2, \nonumber \\ && F=\frac{1}{25}\left(a_{{1}}+a_{{2}}\right)^{3}-\frac{1}{4}C_{1}^{2} -\frac{1}{4}C_{2}^{2}-\frac{27}{4}g_{{3}}-\frac{21}{20}g_{{2}} \left(a_{1}+a_{2}\right). \nonumber \end{eqnarray} These results are in complete agreement with the solutions obtained in \cite{pp99} by introducing an ansatz of the form $$q_i(x)=\sqrt{A_i\wp(x)^2+B_i\wp(x)+C_i},\quad i=1,2$$ with constants $A_i,B_i,C_i$ defined from the compatibility condition of the ansatz with the equations of motion. In the following examples we consider solutions of the form $$q_i(x)=\sqrt{{\mathcal Q}_i(\wp(x))}, $$ where ${\mathcal Q}_i$ are rational functions of $\wp(x)$. 
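The passage from the expansion (\ref{decomposition}) to the curve (\ref{curvelame}) can be tested against the factored form (\ref{curve3}), using the Laurent series $\wp(x)=x^{-2}+\frac{g_2}{20}x^2+\frac{g_3}{28}x^4+\frac{g_2^2}{1200}x^6+\frac{3g_2g_3}{6160}x^8+\cdots$. A pure-Python sketch (our own illustration, with arbitrarily chosen $e_1,e_2$):

```python
# Compare the curve built from the Laurent data of U_3 = 6 wp(x) via
# (curvelame) with the factored curve (curve3).
e1, e2 = 1.2, -0.3
e3 = -e1 - e2
g2 = -4*(e1*e2 + e1*e3 + e2*e3)
g3 = 4*e1*e2*e3

# Laurent coefficients of U_3 = 6/x^2 + a x^2 + b x^4 + c x^6 + d x^8 + ...
a = 6*g2/20
b = 6*g3/28
c = 6*g2**2/1200
d = 6*3*g2*g3/6160

# Coefficients [lambda^5, ..., constant] of the curve (curvelame)
curve_from_potential = [
    1.0,
    0.0,
    -35.0/2*a,
    63.0/2*b,
    (81*7/8.0)*a**2 + (27*11/4.0)*c,
    -(81*17/4.0)*a*b + (9*11*13/2.0)*d,
]

def polymul(p, q):
    r = [0.0]*(len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi*qj
    return r

# (lambda^2 - 3 g2)(lambda + 3 e1)(lambda + 3 e2)(lambda + 3 e3)
factored = [1.0, 0.0, -3.0*g2]
for e in (e1, e2, e3):
    factored = polymul(factored, [1.0, 3.0*e])

assert max(abs(x - y) for x, y in zip(curve_from_potential, factored)) < 1e-9
print("the two forms of the curve agree")
```

Both coefficient lists equal those of $\lambda^5-\frac{21}{4}g_2\lambda^3+\frac{27}{4}g_3\lambda^2+\frac{27}{4}g_2^2\lambda-\frac{81}{4}g_2g_3$.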
To this end, consider the Treibich-Verdier potential \begin{equation} {\mathsf U}_4(x)=6\wp(x)+2\wp(x+\omega_1)-2e_1, \end{equation} associated with a four-sheeted cover. The potential is normalized according to (\ref{decomposition}). The associated spectral curve is of the form \begin{eqnarray}{\nu}^2&=&4(\lambda+6e_1)\prod_{k=1}^4 (\lambda-\lambda_k) \label{ctv4}\\ \lambda_{1,2}&=&e_3+2e_2\pm 2\sqrt{(5e_3+7e_2)(2e_3+e_2)}\\ \lambda_{3,4}&=&e_2+2e_3\pm 2\sqrt{(5e_2+7e_3)(2e_2+e_3)}.\nonumber \end{eqnarray} The Hermite polynomials are given by the formula \begin{eqnarray} {\mathcal F}(x,\lambda)&=& \lambda^2-(3\wp(x)+\wp(x+\omega_1)-e_1)\lambda\label{hertv4}\\&+& 9\wp(x)(\wp(x)+\wp(x+\omega_1)-e_1)-3e_1\wp(x+\omega_1)\cr&+& \frac{9}{4}g_2-51e_1^2. \nonumber \end{eqnarray} The finite real solution of (\ref{system}) results from substituting this Hermite polynomial, with the argument shifted by the imaginary half-period, ${\mathcal F}(x+\omega',\lambda)$, into (\ref{answer}). To ensure the reality of the solution we fix the parameters $a_i-\Delta$ in the permitted zones. The constants $C_i$ are computed by the formula (\ref{cccc}), in which $\nu$ denotes the coordinate of the curve (\ref{ctv4}). Consider further the Treibich-Verdier potential \begin{equation} {\mathsf U}_5(x)=6\wp(x)+2\wp(x+\omega_2)+2\wp(x+\omega_3)+2e_1, \end{equation} associated with a five-sheeted cover. The potential is normalized according to (\ref{decomposition}). 
The associated spectral curve is of the form \begin{eqnarray}\nu^2&=&(\lambda+6e_2-3e_3)(\lambda+6e_3-3e_2) \nonumber\\ &\times&\left[\lambda^3+3e_1\lambda^2-(29e_2^2-22e_2e_3+29e_3^2)\lambda \right.\cr &+&\left.159(e_2^3+e_3^3)-51e_2e_3(e_2+e_3)\right] \label{ctv5} \end{eqnarray} The associated Hermite polynomials are given by the formula \begin{eqnarray*} {\mathcal F}(x,\lambda) &=&\lambda^2-(3\wp(x)+\wp(x+\omega_2)+\wp(x+\omega_3)+e_1) \lambda\\ &+&9\wp(x)(\wp(x)+\wp(x+\omega_2)+\wp(x+\omega_3 ) )+3\wp(x+\omega_2)\wp(x+\omega_3)\cr&+&3e_1(3\wp(x)+\wp(x+\omega_2) +\wp(x+\omega_3))-\frac{39}{2}g_2+54e_1^2. \end{eqnarray*} The solution of the system results from substituting these expressions into (\ref{answer}) as before, but this solution blows up. We remark that, since the paper of Airault, McKean and Moser \cite{amm77}, it has been well known that all elliptic potentials of the Schr\"odinger equation and their isospectral transformations under the action of the KdV flow have the form \begin{equation} {\mathsf U}(x)=2\sum_{i=1}^N\wp(x-x_i(t)),\label{elpot} \end{equation} where $N>2$ is a positive integer (the number of ``particles'') and the numbers $\boldsymbol{ x}=(x_1(t),\ldots,x_N(t))$ belong to the locus ${\mathcal L}_N$, i.e., the geometrical locus of the points given by the equations \begin{equation} {\mathcal L}_N=\left\{(\boldsymbol{ x});\sum_{i\neq j}\wp'(x_i(t)-x_j(t))=0,\; j=1,\ldots N\right\}.\label{locus} \end{equation} If the evolution of the particles $x_i$ over the locus is given by the equations \[\frac{{\mathrm d} x_i}{ {\mathrm d} t}=6\sum_{j\neq i}\wp(x_i(t)-x_j(t)) \] then the potential (\ref{elpot}) is an elliptic solution of the KdV equation. Hence the elliptic potentials discussed can serve as input for the isospectral deformation along the locus. 
Moreover, these elliptic potentials do not exhaust the whole variety of elliptic potentials; we can mention here the elliptic potentials of Smirnov \cite{sm89,sm94}, for which the shifts $x_i$ are not half-periods. Involving these objects can enlarge the classes of elliptic solutions of the system (\ref{manakov}). \section{Conclusions} In this paper we have described a family of elliptic solutions of the coupled nonlinear Schr\"odinger equations using the Lax pair method and the general method of reduction of Abelian functions to elliptic functions. Our approach is systematic in the sense that special solutions (periodic, soliton etc.) are obtained in a unified way. We considered only the family of elliptic solutions associated with the integrable case $1:2:1$ of the quartic potential; the approach developed can be applied to the other integrable cases enumerated in the introduction. In fiber optics, periodic and quasi-periodic waves are of interest for applications in optical transmission systems.
\section{Introduction} \subsection{Model Description} \label{subs-intro-model} We consider a static game played by $n$ agents $i = 1, \ldots, n$, in which the $i^{\text{th}}$ agent is assigned a type $w_i$ from a type space ${\mathcal W}$, and is allowed to choose an action $x_i$ from a subset ${\mathcal C} (w_i)$ of the action space ${\mathcal X}$, so as to minimize its objective function $J_i^n (w_1, \ldots, w_n, x_1, \ldots, x_n)$, which can be viewed as a cost. Both ${\mathcal W}$ and ${\mathcal X}$ are assumed to be metric spaces, and for $w \in {\mathcal W}$, ${\mathcal C}(w)$ represents the set of admissible actions allowed for an agent of type $w$. We restrict our attention to anonymous games, in which an individual agent's objective function is influenced by its own type and action, but depends on the other agents' types and actions only through the empirical distribution of type-action pairs (rather than on the full configuration of types and actions of individual agents), and in addition, the form of this dependence is the same for all agents. More precisely, if we let $\delta_{(w, x)}$ denote the Dirac delta measure at the point $(w, x) \in {\mathcal W} \times {\mathcal X}$, the objective function of the $i^{\text{th}}$ agent in the $n$-player game takes the form \[ J_i^n (w_1, \ldots, w_n, x_1, \ldots, x_n) = F \left(\frac{1}{n} \sum_{k=1}^n \delta_{(w_k, x_k)}, w_i, x_i \right), \] for a suitable function $F: {\mathcal P} ({\mathcal W} \times {\mathcal X}) \times {\mathcal W} \times {\mathcal X} \mapsto {\mathbb R}$, where ${\mathcal P}({\mathcal W}\times{\mathcal X})$ is the set of Borel probability measures on ${\mathcal W}\times{\mathcal X}$. We are interested in properties of Nash equilibria when the number of players is large and seek to understand the behavior of agents in terms of their \emph{types}, not their \emph{names}. 
Here, a \emph{Nash equilibrium with type vector} $\vec{w} = (w_1,\ldots,w_n) \in {\mathcal W}^n$ is any vector $(x_1,\ldots,x_n) \in {\mathcal X}^n$ such that for each $i$, $x_i$ lies in the set ${\mathcal C}(w_i)$ of admissible actions and the objective function satisfies \[ J^n_i(w_1,\ldots,w_n,x_1,\ldots,x_n) = \inf_{y \in {\mathcal C}(w_i)}J^n_i(w_1,\ldots,w_n,x_1,\ldots,x_{i-1},y,x_{i+1},\ldots,x_n). \] Note that an agent's type $w_i$ influences not only the objective function but also the set ${\mathcal C}(w_i)$ of admissible actions. For simplicity, we work exclusively with pure strategies, although we refer the interested reader to Blanchet-Carlier \cite[Section 4]{blanchet2014nash} for extensions of a similar model setup to cover mixed strategies. For pure strategies, even existence of a Nash equilibrium in an $n$-player game is not always guaranteed, but we will be concerned with the large class of games for which Nash equilibria are known to exist (see Section \ref{se:congestion}). However, such games often admit multiple equilibria, and so we will in general not assume uniqueness of Nash equilibria. \subsection{Discussion of Results and Related Work.} It is in general hard to explicitly identify the set of Nash equilibria, especially in large games. Thus, we instead study the behavior of Nash equilibria in the limit as the number of agents goes to infinity. Specifically, under the assumption that the types of agents in the $n$-player game are sampled independently from a common type distribution $\lambda_0 \in {\mathcal P}({\mathcal W})$, where ${\mathcal P}({\mathcal W})$ denotes the space of probability measures on ${\mathcal W}$, the goal of this work is to study the asymptotic behavior of corresponding Nash equilibria as the number of agents goes to infinity. 
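As a toy illustration of this large-$n$ behavior (our own example, not one from the paper): in a two-route anonymous congestion game with a single type and cost $F(m,w,x)=c_x\, m(\{x\})$, best-response dynamics reaches a Nash equilibrium whose empirical action distribution approaches the proportion $c_1/(c_0+c_1)$ on route $0$ as $n$ grows. A minimal pure-Python sketch:

```python
# Toy anonymous congestion game: two routes X = {0, 1}, one type, and the
# cost of route x is c[x] times the fraction of agents using it. This is a
# potential game, so best-response dynamics terminates at a Nash equilibrium.
n = 100
c = [1.0, 2.0]
x = [0]*n                          # start with every agent on route 0

changed = True
while changed:                     # best-response dynamics
    changed = False
    counts = [x.count(0), x.count(1)]
    for i in range(n):
        cur, alt = x[i], 1 - x[i]
        cur_cost = c[cur]*counts[cur]/n
        alt_cost = c[alt]*(counts[alt] + 1)/n   # cost after a unilateral switch
        if alt_cost < cur_cost:
            counts[cur] -= 1
            counts[alt] += 1
            x[i] = alt
            changed = True

counts = [x.count(0), x.count(1)]
# Nash check: no agent can strictly improve by switching routes
for i in range(n):
    cur, alt = x[i], 1 - x[i]
    assert c[alt]*(counts[alt] + 1)/n >= c[cur]*counts[cur]/n - 1e-12
print(counts[0]/n)                 # close to the nonatomic proportion 2/3
```

Here the empirical action distribution of the $n$-player equilibrium concentrates near the deterministic proportions of the associated nonatomic game, the phenomenon formalized below.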
To emphasize that they are random, we will use capital letters $\{W_i\}$ to denote the sequence of i.i.d.\ types and the array $\{X^n = (X_1^n,\ldots, X_n^n)\}$ of associated agent actions in the sequence of $n$-player games. Under fairly general conditions (see the standing assumption in Section \ref{sec-mainres} below), we first state a strong law of large numbers (Theorem \ref{th:intro-limit}) that shows that almost sure limit points of sequences of (random) empirical type-action distributions $\frac{1}{n}\sum_{i=1}^n\delta_{(W_i,X_i^n)}$ associated with Nash equilibria can be characterized as Cournot-Nash equilibria of a certain nonatomic game associated with the type distribution $\lambda_0$. When there is a unique Cournot-Nash equilibrium for the nonatomic game, this implies almost-sure convergence, as $n \rightarrow \infty$, of the empirical type-action distributions of Nash equilibria of $n$-player games to the corresponding Cournot-Nash equilibrium. Our precise framework is related to several existing models in the literature. In particular, the nonatomic game is similar to the model considered by Blanchet and Carlier \cite{blanchet2014nash}, which is itself a reparametrization of the seminal framework of Mas-Colell \cite{mas1984theorem}. The particular definition that we use (see Section \ref{subs-nonatomic}) is a slight generalization that has two new features. First, it allows for the incorporation of a constraint map ${\mathcal C}$ that specifies the admissible set of actions associated with each agent type. This extension is necessary to cover interesting examples such as the class of congestion games described in Section \ref{subs-intro-mot}. Theorem \ref{th:intro-limit} is one of many related (and largely equivalent) laws of large numbers in the literature on large games, notably \cite{green1984continuum,housman1988infinite,carmona2004nash,kalai2004large,blanchet2014nash}. 
Second, our model involves \emph{unknown} or \emph{random} types, whereas all of these papers work with \emph{known} or \emph{deterministic} sequences of type vectors $(w^n_1,\ldots,w^n_n)$ satisfying $\frac{1}{n}\sum_{i=1}^n\delta_{w^n_i} \rightarrow \lambda_0$. Although limited to complete information and homogeneous beliefs, our model setup is nonetheless also reminiscent of Harsanyi's formalism of \emph{Bayesian games} \cite{harsanyi1967games}. The primary focus of this work is the estimation of the probability that a Nash equilibrium of an $n$-player game makes a large deviation from the law of large numbers limit when $n$ is large. Our first main set of results, stated in Section \ref{subs-ld1}, concerns the large deviations behavior of any sequence of (random) empirical type-action distributions associated with Nash equilibria, under the assumption that there is a unique Cournot-Nash equilibrium for the corresponding nonatomic game. Specifically, Theorem \ref{th:intro-LDP} establishes a large deviations principle (LDP) for such a sequence, which provides precise asymptotic estimates of the exponential rate of decay of probabilities of the occurrence of rare Nash equilibria (i.e., those that are far from the Cournot-Nash equilibrium), and the exponential decay rate is expressed in terms of quantities that are derived from the more tractable nonatomic game. Establishing an LDP (as opposed to just obtaining large deviation bounds) sheds light on the behavior of Nash equilibria conditioned on a rare event, as exemplified by the conditional limit result in Theorem \ref{th:intro-conditional-limit}. Uniqueness of the Cournot-Nash equilibrium holds for many nonatomic games, including the important class of potential games with strictly convex potential. This covers many congestion games, discussed in more detail in Section \ref{se:congestion}. 
The foundational work on finite potential games is \cite{monderer-shapley}, but we refer to \cite{blanchet2015optimal} for interesting developments on potential games for general (possibly uncountable) type and action spaces. In more recent work \cite{blanchet2014remarks,blanchet2016computation}, Blanchet et al.\ exploit a connection with optimal transport to develop methods for computing Cournot-Nash equilibria even for non-potential games. However, there are also cases of interest for which the nonatomic game admits multiple equilibria. To address this situation, in Section \ref{se:intro:LDP-set} we also consider the large deviation behavior of the set of empirical distributions induced by all the Nash equilibria of an $n$-player game. We first state an analogous law of large numbers result in Theorem \ref{th:intro-limit-setvalued} that shows convergence of the sequence of sets of Nash equilibria to the corresponding set of Cournot-Nash equilibria for the nonatomic game, and then establish a corresponding LDP in Theorem \ref{th:LDP-setvalued}. The choice of topology on the space of sets of distributions for this LDP is rather subtle. One needs to identify a topology that is weak enough for the LDP to hold, but strong enough that the LDP can provide useful information. We show in Corollary \ref{co:setvalued} that, indeed, our LDP provides interesting information on the probability of outliers or rare equilibria even in the non-unique setting. Additionally, as elaborated below in Section \ref{se:PoA}, we show that the LDP is useful for obtaining interesting asymptotic results on the price of anarchy. Our results appear to be the first LDPs of any kind for large games. Philosophically, the paper that is closest to ours is that of Menzel \cite{menzel2016inference}, which adopts a similar statistical perspective to large-$n$ asymptotics in order to derive a central limit theorem in addition to a law of large numbers like Theorem \ref{th:limit}. 
Although the model specification in \cite{menzel2016inference} is very different from our own, Menzel interprets his results as ``expansions'' of the $n$-player games around the nonatomic game ``limit'', which is useful because the latter is typically more tractable. Likewise, our results provide asymptotics for $n$-player quantities in terms of quantities derived from the associated nonatomic game, namely, the rate functions in Theorems \ref{th:intro-LDP} and \ref{th:intro-LDP-setvalued}. However, rather than addressing econometric questions as in Menzel, we focus on the probabilistic nature of equilibria arising from a large number of random types. Finally, we apply our large deviation analysis to derive high-probability bounds on the so-called \emph{price of anarchy} as the number of agents grows. The \emph{price of anarchy}, a term first introduced by Koutsoupias and Papadimitriou \cite{koutsoupias1999worst}, is a measure of the degradation of efficiency in a system due to the selfish behavior of its agents, and it is defined roughly as follows. Given a type vector $\vec{w} \in {\mathcal W}^n$, the socially optimal cost is the least average cost of all players over all associated admissible type-action pairs, and the price of anarchy of the $n$-player game is the ratio of the worst-case (or highest) average cost induced by any Nash equilibrium to the corresponding socially optimal cost (see Section \ref{se:PoA} for a precise definition). 
Firstly, our results apply to a class of games introduced by Rosenthal in \cite{rosenthal1973class} called \emph{congestion games}, which have found widespread application in modeling traffic routing, in both physical and communications networks, particularly in the field of algorithmic game theory \cite{algorithmicgametheory}. In the context of traffic modeling, the congestion game is played on a network, represented by a finite graph, and the type of an agent is associated with a certain source-destination pair, represented by a pair of vertices in the graph. The distribution of types could be assumed to be known from historical data. Given a realization of these types, the agents of the game can be viewed as drivers who competitively choose their routes (between their associated source and destination) to minimize travel time, leading to a corresponding traffic outcome determined by the Nash equilibrium (or the set of Nash equilibria, when multiple exist). In managing network traffic, an important quantity is the average travel time (latency) faced by the agents. A central planner managing the network might prefer to assign to each agent a \emph{socially optimal} route, which minimizes the average travel time, but this is rarely feasible. When agents choose routes selfishly, to minimize their own travel times, the resulting social cost or average travel time is typically socially suboptimal, and the price of anarchy is a popular measure of this suboptimality \cite{roughgarden2002bad}. In Section \ref{se:congestion}, we describe the class of congestion games, and illustrate how our main results can be used to provide new probabilistic bounds on the price of anarchy for such games. In particular, Corollary \ref{co:PoA-congestion} shows how to translate a bound on the price of anarchy in a nonatomic game into a high probability bound on the price of anarchy in the corresponding finite (but large) game. 
In particular, our results complement existing worst-case bounds such as those of Christodoulou and Koutsoupias \cite{christodoulou2005price} on the price of anarchy for $n$-player congestion games determined by a class of linear cost functions, by providing high-probability bounds on the price of anarchy arising from a fixed cost function in that class. While we focus on congestion games as a motivating example, our framework encompasses many different types of large static games appearing in applications, with notable examples including \emph{entry games} \cite{berry1992estimation,bresnahan1991empirical,bajari2010identification} and \emph{auctions} \cite{krishna2009auction,klemperer-guide}. In both of these examples, it is more natural to interpret agents as \emph{maximizing a payoff}, which we identify with $-F$, the negative of the cost function. A prototypical entry game, borrowed from \cite{berry1992estimation}, has ${\mathcal X}=\{0,1\}$, an arbitrary type space ${\mathcal W}$, and payoff $-F(m,w,x) = x[f(m^x\{1\}) + g(w)]$, for some functions $f$ and $g$, where $f$ is decreasing. The action $x=1$ means the agent ``enters the market.'' An agent that does not enter receives no payoff, while an agent that enters receives a payoff which is decreasing in the fraction $m^x\{1\}$ of agents that enter. All of our main results apply to entry games, as long as $f$ and $g$ are continuous. We discuss entry games in somewhat more detail in Section \ref{se:conditional-limit}, as an illustration of our conditional limit theorem. On the other hand, our results do not apply to many models of auctions, for which the payoff function is discontinuous. More specifically, a typical auction model has ${\mathcal W} = {\mathcal X} \subset [0,\infty)$ and a payoff function $F(m,w,x)$ with discontinuities at points where $x = \max\mathrm{supp}(m^x)$, where $\mathrm{supp}(m)$ represents the support of the distribution $m$. 
For instance, in an auction of a single unit of a single good, the classical first-price auction has payoff $-F(m,w,x) = (w-x)1_{\{x \ge \max\mathrm{supp}(m^x)\}}$, with the type $w$ representing the intrinsic value of the good. That is, when the bid $x$ of a given agent becomes the maximum bid, the payoff of the agent jumps from zero to $w-x$. It is not clear whether our main results should still hold in the presence of such discontinuities. Extending our results to include these other applications would be an interesting problem for future work. An additional motivation for our work relates to the study of Nash equilibria in \emph{dynamic} $n$-player games, on which a vibrant literature has emerged recently. These games arise in a variety of settings and are harder to analyze than static games. Various law-of-large-numbers-type limit theorems and approximation results are now fairly well understood, both in discrete time \cite{weintraub2008markov,adlakha2008oblivious,adlakha2013mean,gomes2010discrete} and in continuous time \cite{lasrylionsmfg,cardaliaguet2015master,lacker2014general,fischer-mfgconnection,carmonadelarue-mfg}, and are expressed in terms of associated dynamic games with a continuum of agents which largely go by the name of \emph{mean field games}. The present work grew in part out of early efforts to understand large deviations in dynamic mean field games, especially in the case when the mean field game admits multiple equilibria. In many dynamic models, the random variables $\{W_i\}$ which we called \emph{types} are better interpreted as \emph{noises}. For instance, the continuous time models typically involve controlled diffusion processes driven by a sequence $\{W_i\}$ of i.i.d.\ Brownian motions. This \emph{noise} interpretation is equally valid for the static games of this paper, if we think of $W_i$ as a random shock to agent $i$. 
We hope that our large deviation analysis of static games will be useful not only on its own merit but also as a first step toward understanding large deviations in dynamic games. \section{Statements of Main Results} \label{sec-mainres} In this section, we precisely state our results, the proofs of which are presented in Section \ref{se:proofs}, with some auxiliary results required for the proofs deferred to Appendices \ref{ap:vietoris} and \ref{ap:congestiongames}. In what follows, given a metric space ${\mathcal S}$, we let ${\mathcal P}({\mathcal S})$ denote the space of Borel probability measures on ${\mathcal S}$, equipped with the topology of weak convergence. We will refer to convergence in this topology as convergence in distribution, and denote this convergence by $m_n \rightarrow m$, which we recall means that $\int\varphi\,dm_n \rightarrow \int\varphi\,dm$ for every bounded continuous function $\varphi$ on ${\mathcal S}$. We will most often consider the case ${\mathcal S} = {\mathcal W}$ or ${\mathcal S} = {\mathcal W} \times {\mathcal X}$. Throughout the paper, we make the following assumptions on the model. \begin{standingassumption*} \label{as-main} The following model parameters are given: \begin{enumerate} \item The \emph{action space} ${\mathcal X}$ is a compact metric space. \item The \emph{type space} is a complete separable metric space ${\mathcal W}$. \item The constraint map ${\mathcal C}$, which maps elements of ${\mathcal W}$ to nonempty closed subsets of ${\mathcal X}$, is continuous. Here, continuity of the set-valued map ${\mathcal C}$ means both that the graph $\mathrm{Gr}({\mathcal C}) = \{(w,x) \in {\mathcal W} \times {\mathcal X} : x \in {\mathcal C}(w)\}$ is closed and that, if $w_n \rightarrow w$ in ${\mathcal W}$ and $x \in {\mathcal C}(w)$, then there exist $n_k$ and $x_{n_k} \in {\mathcal C}(w_{n_k})$ such that $x_{n_k} \rightarrow x$. 
\item The \emph{objective function} $F : {\mathcal P}({\mathcal W}\times{\mathcal X}) \times {\mathcal W} \times {\mathcal X} \rightarrow {\mathbb R}$ is bounded and continuous, where ${\mathcal P}({\mathcal W}\times{\mathcal X}) \times {\mathcal W} \times {\mathcal X}$ is equipped with the product topology. \end{enumerate} \end{standingassumption*} The compactness of ${\mathcal X}$ assumed in (1) is important but could likely be replaced by a coercivity assumption on $F$. In our main application to congestion games, both sets ${\mathcal W}$ and ${\mathcal X}$ are finite, in which case the continuity assumptions (3) and (4) hold automatically. Fix throughout the paper an arbitrary probability space $(\Omega,{\mathcal F},{\mathbb P})$, and assume it is rich enough to support all of the random variables of interest. We also assume throughout that for each $n \ge 1$ and each type vector $\vec{w} = (w_1,\ldots,w_n) \in {\mathcal W}^n$, the set of Nash equilibria with type vector $\vec{w}$ is non-empty. \subsection{Nonatomic games and Cournot-Nash equilibria} \label{subs-nonatomic} Let $\hat{x}^n_i: {\mathcal W}^n \mapsto {\mathcal X}$ be measurable functions such that $(\hat{x}^n_1(\vec{w}),\ldots,\hat{x}^n_n(\vec{w}))$ is a Nash equilibrium with type vector $\vec{w}$, for each $\vec{w} \in {\mathcal W}^n$ (it is shown in Lemma \ref{le:measurable-selection} that such a measurable selection always exists under our assumptions). Now, suppose that $W_1,\ldots,W_n$ are the i.i.d.\ types sampled from a distribution $\lambda_0 \in {\mathcal P}({\mathcal W})$, which we fix once and for all. Let $X^n_i=\hat{x}^n_i(W_1,\ldots,W_n)$ denote the associated random Nash equilibrium vector. The equilibrium type-action distribution is the random probability measure (on ${\mathcal W}\times{\mathcal X}$) given by \[ \mu_n := \frac{1}{n}\sum_{i=1}^n\delta_{(W_i,X^n_i)}. 
\] Our main results concern the asymptotic behavior of $\{\mu_n\}$, which, as mentioned in the introduction, is expressed in terms of equilibria for the corresponding \emph{nonatomic game}, also called \emph{Cournot-Nash equilibria}, defined as follows. \begin{definition}[Cournot-Nash equilibria] \label{def-CNeqb} For $\lambda \in {\mathcal P}({\mathcal W})$, the set ${\mathcal M}(\lambda)$ of \emph{Cournot-Nash equilibria with type distribution $\lambda$} is defined as the set of $m \in {\mathcal P}({\mathcal W}\times{\mathcal X})$ with first marginal equal to $\lambda$ that satisfy \[ m\left\{(w,x) \in {\mathcal W}\times{\mathcal X} : x \in {\mathcal C}(w), \ F(m,w,x) = \inf_{y \in {\mathcal C}(w)}F(m,w,y)\right\}=1, \] that is, $x \in {\mathcal C}(w)$ and $F(m,w,x) = \inf_{y \in {\mathcal C}(w)}F(m,w,y)$ hold for $m$-almost every $(w,x)$. \end{definition} Intuitively, a Cournot-Nash equilibrium $m \in {\mathcal M}(\lambda)$ describes an equilibrium distribution of type-action pairs in a game consisting of a continuum of infinitesimally small agents. Although (pure-strategy) Nash equilibria need not exist in general in an $n$-player game, a standard argument in Proposition \ref{pr:existence} below (adapted from \cite{mas1984theorem}) shows that there always exists a Cournot-Nash equilibrium, i.e., ${\mathcal M}({\lambda}) \neq \emptyset$ for all $\lambda \in {\mathcal P}({\mathcal W})$. \subsection{Large deviation results for sequences of Nash equilibria} \label{subs-ld1} In a Cournot-Nash equilibrium no individual agent has direct influence on the equilibrium distribution $m$. Agents thus optimize independently, facing i.i.d.\ types, and a law of large numbers heuristic suggests that $m$ should, in equilibrium, agree with the distribution of type-action pairs. 
This heuristic is justified by the following rigorous result: \begin{theorem} \label{th:intro-limit} Given that agent types are i.i.d.\ with distribution $\lambda_0 \in {\mathcal P} ({\mathcal W})$, for any metric $d$ on ${\mathcal P}({\mathcal W}\times{\mathcal X})$ compatible with weak convergence, it holds with probability one that $d(\mu_n,{\mathcal M} ({\lambda_0})) := \inf_{m \in {\mathcal M} ({\lambda_0})}d(\mu_n,m) \rightarrow 0$. \end{theorem} We prove a somewhat more general form of this result in Section \ref{se:proofs} (see Theorem \ref{th:limit}(ii) therein), which allows for approximate Nash equilibria and correlated types, although we do not push this result to the utmost generality because it is not the main novelty of the paper. \begin{remark} \label{rem-mixed} For an idea of how to adapt Theorem \ref{th:intro-limit} to mixed strategies, which we do not explore in this paper, see \cite[Theorem 4.2]{blanchet2014nash}. \end{remark} We know from Theorem \ref{th:intro-limit} that the limit points of $\{\mu_n\}$ lie in the set ${\mathcal M}({\lambda_0})$. Our first main result, Theorem \ref{th:intro-LDP} below, lets us estimate how unlikely it is that $\mu_n$ remains ``far'' in some sense from this limiting set. To state the theorem precisely, we introduce some definitions. Write $\lambda \ll \lambda_0$ when $\lambda$ is absolutely continuous with respect to $\lambda_0$, and define the relative entropy as usual by \begin{align} H(\lambda | \lambda_0) := \int_{\mathcal W}\frac{d\lambda}{d\lambda_0}\log\frac{d\lambda}{d\lambda_0}\,d\lambda_0, \text{ for } \lambda \ll \lambda_0, \quad\quad H(\lambda | \lambda_0) = \infty \text{ otherwise}. \label{def:entropy} \end{align} Define \begin{equation} \label{def-cneqb} {\mathcal M} = \bigcup_{\lambda \in {\mathcal P}({\mathcal W})}{\mathcal M}(\lambda), \end{equation} to be the set of all Cournot-Nash equilibria, with any type distribution. 
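For a concrete instance of \eqref{def:entropy} (a standard computation, recorded here only for intuition), take a two-point type space ${\mathcal W} = \{0,1\}$ with $\lambda = \mathrm{Ber}(p)$ and $\lambda_0 = \mathrm{Ber}(q)$, where $p \in [0,1]$ and $q \in (0,1)$:

```latex
% Relative entropy between Bernoulli type distributions:
\[
  H(\mathrm{Ber}(p) \,|\, \mathrm{Ber}(q))
    = p\log\frac{p}{q} + (1-p)\log\frac{1-p}{1-q},
\]
% with the convention 0 log 0 = 0; this vanishes if and only if p = q.
```

In the rate functions below, this quantity prices the deviation of the empirical type distribution from $\lambda_0$: by Sanov's theorem, the probability that $n$ i.i.d.\ $\mathrm{Ber}(q)$ samples exhibit an empirical frequency near $p$ is of order $e^{-nH(\mathrm{Ber}(p)|\mathrm{Ber}(q))}$.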
For $m \in {\mathcal P}({\mathcal W}\times{\mathcal X})$ let $m^w$ and $m^x$ denote the first and second marginals, respectively, of $m$. Throughout the paper, we adopt the convention that $\inf\emptyset = \infty$ and $\sup \emptyset = -\infty$. \begin{theorem} \label{th:intro-LDP} Assume that ${\mathcal M}({\lambda})$ is a singleton for each $\lambda \in {\mathcal P}({\mathcal W})$ with $\lambda \ll \lambda_0$. Then, for every measurable set $A \subset {\mathcal P}({\mathcal W}\times{\mathcal X})$, \begin{align*} \limsup_{n\rightarrow\infty}\frac{1}{n}\log{\mathbb P}(\mu_n \in A) &\le -\inf_{m \in \overline{A} \cap {\mathcal M}}H(m^w|\lambda_0), \\ \liminf_{n\rightarrow\infty}\frac{1}{n}\log{\mathbb P}(\mu_n \in A) &\ge -\inf_{m \in A^\circ \cap {\mathcal M}}H(m^w|\lambda_0), \end{align*} where $A^\circ $ and $\overline{A}$ denote the interior and closure, respectively, of $A$. In other words, $\{\mu_n\}$ satisfies an LDP on ${\mathcal P}({\mathcal W}\times{\mathcal X})$ with (good) rate function \[ m \mapsto \begin{cases} H(m^w | \lambda_0) &\text{if } m \in {\mathcal M}, \\ \infty &\text{otherwise.} \end{cases} \] \end{theorem} Theorem \ref{th:intro-LDP} follows from a more general result, Theorem \ref{th:LDP}, proved in Section \ref{subs-pf-ldp}. In applications, one can use Theorem \ref{th:intro-LDP} to estimate the asymptotic probabilities of what are best interpreted as \emph{rare equilibrium outcomes}. Given an event $A \subset {\mathcal P}({\mathcal W}\times{\mathcal X})$ whose closure is disjoint from ${\mathcal M}({\lambda_0})$, for example, $A = \{m \in {\mathcal P}({\mathcal W}\times{\mathcal X}) : d(m,{\mathcal M}({\lambda_0})) \ge \epsilon\}$, Theorem \ref{th:intro-limit} says that ${\mathbb P}(\mu_n \in A) \rightarrow 0$, and Theorem \ref{th:intro-LDP} says that this happens exponentially quickly, making the event \emph{rare} in the sense that roughly ${\mathbb P}(\mu_n \in A) \approx e^{-nc_A}$ for a constant $c_A > 0$. 
Indeed, it is easy to show (see Lemma \ref{le:inf>0} below) that $c_A := \inf_{m \in A \cap {\mathcal M}}H(m^w|\lambda_0) > 0$ for the particular set $A$ chosen above, so that the upper bound of Theorem \ref{th:intro-LDP} is nontrivial. For a more tangible application, for a closed set $B \subset {\mathcal X}$ we can estimate the probability \[ {\mathbb P}\left(X^n_i \in B \text{ for some } i\right) = {\mathbb P}\left(\mathrm{supp}(\mu_n^x) \cap B \neq \emptyset\right), \] that the action of some agent belongs to the set $B$; here $\mathrm{supp}(m)$ denotes the support of a measure $m$. For instance, in a traffic congestion game, this event could represent some agent utilizing a seemingly inefficient or slow route. This event is ``rare'' as long as $B$ does not intersect the support of $m_0^x$, where $m_0$ is the unique element of ${\mathcal M}({\lambda_0})$. Again, by ``rare'' we mean $\inf\{H(m^w|\lambda_0) : m \in {\mathcal M}, \ \mathrm{supp}(m^x) \cap B \neq \emptyset\} > 0$, so that the upper bound of Theorem \ref{th:intro-LDP} is nontrivial. Theorem \ref{th:intro-LDP} is of course related to Sanov's theorem and indeed reduces to it in degenerate cases (e.g., when ${\mathcal X}$ is a singleton). Our framework also admits an analog of Cram\'er's theorem: if ${\mathcal X}$ is a subset of a Euclidean space, then we can estimate probabilities involving the \emph{average} of agents' actions, such as ${\mathbb P}(\frac{1}{n}\sum_{i=1}^nX^n_i \in B)$ for $B \subset {\mathcal X}$. A full LDP, which explicitly characterizes both asymptotic large deviation upper and lower bounds, provides information about the system that cannot be obtained from one-sided bounds alone. 
Specifically, in the spirit of the so-called Gibbs conditioning principle (see, for instance, \cite{csiszar1984sanov,dembozeitouni}), the LDP of Theorem \ref{th:intro-LDP} can be used to derive the following \emph{conditional limit theorem}, which tells us about the typical behavior of $\mu_n$ given that a rare event of the form $\{\mu_n \in A\}$ occurs: \begin{theorem} \label{th:intro-conditional-limit} Let $A \subset {\mathcal P}({\mathcal W}\times{\mathcal X})$ be measurable and define \begin{eqnarray} \label{IA} I(A) & := & \inf\left\{H(\lambda | \lambda_0) : \lambda \ll \lambda_0, \ \overline{A} \cap {\mathcal M}(\lambda) \neq \emptyset\right\}, \\ S(A) & := & \left\{m \in \overline{A} \cap {\mathcal M} : H(m^w |\lambda_0) = I(A)\right\}. \label{def:S(A)} \end{eqnarray} Suppose $I(A) < \infty$. Then $S(A)$ is nonempty and compact. Assume that \begin{align} I(A) = \inf\left\{H(\lambda | \lambda_0) : \lambda \ll \lambda_0, \ {\mathcal M}(\lambda) \subset A^\circ\right\}, \label{def:conditional-assumption} \end{align} and also that $\mathbb{P}(\mu_n \in A)$ is nonzero for all sufficiently large $n$. Then, letting $d$ denote any metric on ${\mathcal P}({\mathcal W}\times{\mathcal X})$ compatible with weak convergence, for each $\epsilon > 0$ there exists $c > 0$ such that, for all sufficiently large $n$, \begin{align} \label{decay} {\mathbb P}\left(\left. d(\mu_n,S(A)) \ge \epsilon \right| \mu_n \in A\right) \le e^{-cn}. \end{align} In particular, every limit point of the sequence of conditional distributions of $\mu_n$ given $\{\mu_n \in A\}$, $n \in \mathbb{N}$, is supported on the set $S(A)$. If $S(A) = \{\nu\}$ is a singleton, then these conditional distributions converge to the point mass at $\nu$. 
Finally, if ${\mathcal M}(\lambda)$ is a singleton for every $\lambda \ll \lambda_0$, then in fact \eqref{def:conditional-assumption} is equivalent to the following condition: \begin{align} I(A) = \inf_{m \in \overline{A} \cap {\mathcal M}}H(m^w | \lambda_0) = \inf_{m \in A^\circ \cap {\mathcal M}}H(m^w | \lambda_0). \label{def:conditional-assumption-original} \end{align} \end{theorem} The proof of this conditional limit theorem is given in Section \ref{se:conditional-limit}. The challenge in applying Theorem \ref{th:intro-conditional-limit} lies in checking the assumption \eqref{def:conditional-assumption}, or equivalently \eqref{def:conditional-assumption-original} when there is uniqueness, and also showing that the set $S(A)$ of \eqref{def:S(A)} is a singleton. The key difficulty is that the set ${\mathcal M}$ is never convex in nontrivial cases, which makes the minimization problems in \eqref{def:conditional-assumption-original} more difficult than those that arise from the usual Gibbs conditioning principle. However, these assumptions can be verified in several cases of interest. As an illustration, in Section \ref{se:conditional-limit} we discuss in detail a simple example of an entry game in which both assumptions can be verified. Theorem \ref{th:intro-LDP} applies to a given sequence (more precisely, triangular array) of Nash equilibria $\{X^n_i,1 \le i \le n\}_{n \in \mathbb{N}}$, under a crucial uniqueness assumption. Notice that the uniqueness assumption is imposed \emph{only at the limit}, for the Cournot-Nash equilibrium, and no uniqueness is required of the equilibria of $n$-player games. It is evident that some kind of uniqueness assumption at the limit is necessary. Suppose, for instance, that ${\mathcal X}$ contains at least two elements, that ${\mathcal C}(w)={\mathcal X}$ for all $w$, and that the cost function is the trivial $F \equiv 0$. Then there is no hope for an LDP because \emph{any choice of actions} is a Nash equilibrium. 
Uniqueness is known to hold in various particular models as well as for a broad class of games known as \emph{potential games}, at least when the potential is strictly convex, and we will encounter a class of examples in our discussion of congestion games in Section \ref{se:congestion}. Nonetheless, uniqueness is not to be expected in general. \subsection{Large deviation results for the set of equilibria} \label{se:intro:LDP-set} We now address the case when there are multiple Cournot-Nash equilibria for the limiting nonatomic game. Let $\widehat{{\mathcal N}}_n : {\mathcal W}^n \rightarrow 2^{{\mathcal P}({\mathcal W}\times{\mathcal X})}$ denote the set-valued map that assigns to each type vector the corresponding set of equilibrium type-action distributions: \begin{align} \widehat{{\mathcal N}}_n(w_1,\ldots,w_n) := \left\{\frac{1}{n}\sum_{i=1}^n\delta_{(w_i,x_i)} : (x_1,\ldots,x_n) \text{ is Nash for types } (w_1,\ldots,w_n)\right\}. \label{def:intro-Nn} \end{align} Again, let $\{W_i\}$ be a sequence of i.i.d.\ ${\mathcal W}$-valued random variables with distribution $\lambda_0$, and let ${\mathcal N}_n = \widehat{{\mathcal N}}_n(W_1,\ldots,W_n)$ denote the random set of equilibrium type-action distributions. It is shown in Proposition \ref{pr:Nash-UHC} that $\widehat{{\mathcal N}}_n(\vec{w})$ and ${\mathcal M}(\lambda)$ are always closed sets. Thus, in the following theorem, we topologize the space $\mathfrak{C}$ of closed subsets of ${\mathcal P}({\mathcal W}\times{\mathcal X})$ with the \emph{upper Vietoris topology}, generated by the base of open sets of the form $\{A \in \mathfrak{C} : A \subset E\}$, where $E$ is an open subset of ${\mathcal P}({\mathcal W}\times{\mathcal X})$. See Appendix \ref{ap:vietoris} for a short discussion of the basic properties of this topology, the most important of which is that it topologizes \emph{upper hemicontinuity} of set-valued maps. 
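For intuition (an illustrative remark; the formal treatment is in Appendix \ref{ap:vietoris}): in the upper Vietoris topology, a sequence of closed sets $A_n$ converges to $A$ precisely when, for every open $E \supset A$, one has $A_n \subset E$ for all sufficiently large $n$. Taking closed subsets of $[0,1]$ in place of $\mathfrak{C}$ for simplicity:

```latex
% Convergence of closed sets in the upper Vietoris topology on [0,1]:
\[
  \bigl\{0,\ 1 - \tfrac{1}{n}\bigr\} \longrightarrow \{0,1\},
  \qquad \text{but also} \qquad
  \{0\} \longrightarrow \{0,1\},
\]
% since A_n -> A together with A \subset B (closed) implies A_n -> B:
% limits are not unique, which is why the topology is non-Hausdorff.
```

This coarseness is what makes the topology well suited to the present setting: convergence ${\mathcal N}_n \rightarrow {\mathcal M}(\lambda_0)$ asserts that the $n$-player equilibria eventually remain inside every neighborhood of ${\mathcal M}(\lambda_0)$, without requiring that every Cournot-Nash equilibrium be approximated by $n$-player equilibria.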
First, Theorem \ref{th:intro-limit-setvalued} states that ${\mathcal N}_n$ converges almost surely to ${\mathcal M}({\lambda_0})$, thus establishing a law-of-large-numbers result in the upper Vietoris topology, which we prove in Section \ref{subs-pf-lln1}. \begin{theorem} \label{th:intro-limit-setvalued} The sequence of random sets $\{{\mathcal N}_n\}$ converges almost surely to ${\mathcal M}({\lambda_0})$. \end{theorem} The next main result is an LDP for the \emph{set of Nash equilibria}. This not only does away with the uniqueness assumption on Cournot-Nash equilibria imposed in Theorem \ref{th:intro-LDP}, but it carries more information than Theorem \ref{th:intro-LDP} even when there is uniqueness. As shown in Remark \ref{rem-LDP-setvalued}, Theorem \ref{th:intro-LDP-setvalued} follows from a more general result, Theorem \ref{th:LDP-setvalued}, established in Section \ref{subs-pf-ldp}. \begin{theorem} \label{th:intro-LDP-setvalued} For Borel sets $\mathfrak{U} \subset \mathfrak{C}$, \begin{align*} \limsup_{n\rightarrow\infty}\frac{1}{n}\log{\mathbb P}({\mathcal N}_n \in \mathfrak{U}) &\le -\inf\{H(\lambda | \lambda_0) : \lambda \in {\mathcal P}({\mathcal W}), \ {\mathcal M}(\lambda) \in \overline{\mathfrak{U}}\}, \\ \liminf_{n\rightarrow\infty}\frac{1}{n}\log{\mathbb P}({\mathcal N}_n \in \mathfrak{U}) &\ge -\inf\{H(\lambda | \lambda_0) : \lambda \in {\mathcal P}({\mathcal W}), \ {\mathcal M}(\lambda) \in \mathfrak{U}^\circ \}. \end{align*} In other words, $\{{\mathcal N}_n\}$ satisfies an LDP on $\mathfrak{C}$ with (good) rate function \begin{align} A \mapsto \inf\left\{H(\lambda | \lambda_0) : \lambda \in {\mathcal P}({\mathcal W}), \ {\mathcal M}(\lambda) = A\right\}. \label{def:setvalued-rate-function} \end{align} \end{theorem} At first, this theorem may appear too abstract to be useful, especially given that the upper Vietoris topology is rather coarse (even non-Hausdorff). 
On the contrary, it yields several interesting concrete results, a key example of which stems from the following simple corollary. \begin{corollary} \label{co:setvalued} If $E \subset {\mathcal P}({\mathcal W}\times{\mathcal X})$ is closed, then \begin{align} \limsup_{n\rightarrow\infty}\frac{1}{n}\log{\mathbb P}\left({\mathcal N}_n \cap E \neq \emptyset\right) &\le -\inf\left\{H(m^w|\lambda_0) : m \in {\mathcal M} \cap E\right\}. \label{def:outliers} \end{align} If $E$ is open, then \begin{align*} \liminf_{n\rightarrow\infty}\frac{1}{n}\log{\mathbb P}\left({\mathcal N}_n \subset E\right) &\ge -\inf\left\{H(\lambda|\lambda_0) : \lambda \in {\mathcal P}({\mathcal W}), \ {\mathcal M}(\lambda) \subset E\right\}. \end{align*} \end{corollary} Corollary \ref{co:setvalued} can be interpreted in terms of \emph{outliers}, or rare equilibria. Indeed, the left-hand side of \eqref{def:outliers} is the probability that there exists a Nash equilibrium for the $n$-player game that lies in the set $E$. If ${\mathcal M}({\lambda_0})\cap E = \emptyset$, we know from Theorem \ref{th:intro-limit-setvalued} that equilibria in $E$ should be rare when $n$ is large in the sense that ${\mathbb P}({\mathcal N}_n \cap E \neq \emptyset) \rightarrow 0$. The bound \eqref{def:outliers} shows that this probability decays exponentially and quantifies precisely the exponential decay rate. The proofs of the large deviations results in Theorem \ref{th:intro-LDP-setvalued} can be found in Section \ref{subs-pf-ldp} (see Theorem \ref{th:LDP-setvalued}) and hinge on the well-known \emph{contraction principle}, once the $n$-player games and the nonatomic game are set on a common topological space (as in Section \ref{subs-coupling}). It should also be mentioned that a map of the form $\mathfrak{C} \ni A \mapsto G(A) := \sup_{m \in A}g(m) \in {\mathbb R}$ is upper semicontinuous whenever $g$ is upper semicontinuous. If $g$ is continuous, and if it is constant on a set $A$, then $G$ is continuous at $A$. 
These facts (proven in Lemma \ref{le:lhcVietoris}) can be used to derive large deviation bounds for a sequence of random variables of the form $\sup_{m \in {\mathcal N}_n}g(m)$, which we interpret as the worst-case value of $g$ in equilibrium. The following section investigates a somewhat more complex instance of this observation. \subsection{Price of anarchy} \label{se:PoA} We now provide a precise definition of the price of anarchy for both $n$-player and nonatomic games. We assume that $F \geq 0$, which is essentially without loss of generality due to the boundedness assumption (4). For each $n$ and each type vector $\vec{w} = (w_1, \ldots, w_n) \in {\mathcal W}^n$, define the set of \emph{all admissible} type-action distributions by \begin{align} \widehat{{\mathcal A}}_n(w_1,\ldots,w_n) := \left\{\frac{1}{n}\sum_{i=1}^n\delta_{(w_i,x_i)} : x_i \in {\mathcal C}(w_i), \ i=1,\ldots,n\right\}. \label{def:An} \end{align} The \emph{average cost} of the game, for a fixed type-action distribution $m \in {\mathcal P}({\mathcal W}\times{\mathcal X})$, is defined by \begin{align} V(m) := \int_{{\mathcal W}\times{\mathcal X}} F(m,w,x)\,m(dw,dx). \label{def:V} \end{align} Finally, the price of anarchy is the ratio of the worst-case Nash equilibrium cost to the socially optimal cost, or \[ \mathrm{PoA}_n(\vec{w}) := \frac{\sup_{m \in \widehat{{\mathcal N}}_n(\vec{w})}V(m)}{\inf_{m \in \widehat{{\mathcal A}}_n(\vec{w})}V(m)}, \] where we recall the definition of $\widehat{{\mathcal N}}_n$ from \eqref{def:intro-Nn}. Recall that $\widehat{{\mathcal N}}_n(\vec{w})$, and thus $\widehat{{\mathcal A}}_n(\vec{w})$, is non-empty due to our standing assumption on the existence of Nash equilibria for $n$-player games. Assume that $V$ is strictly positive, which by continuity implies that $V$ is bounded from below away from zero on the non-empty compact set $\widehat{{\mathcal A}}_n(\vec{w})$, for each fixed $n$ and $\vec{w} \in {\mathcal W}^n$. 
Moreover, $V$ is bounded since $F$ is bounded by our standing assumption (4). Thus, the numerator above is also a finite positive number. Hence, $\mathrm{PoA}_n(\vec{w})$ is well defined. Finally, define the price of anarchy for the nonatomic game as follows. For $\lambda \in {\mathcal P}({\mathcal W})$, set \begin{align} {\mathcal A}(\lambda) := \left\{m \in {\mathcal P}({\mathcal W}\times{\mathcal X}) : m^w = \lambda, \ m\{(w,x) : x \in {\mathcal C}(w)\}=1\right\}. \label{def:A_lambda} \end{align} This is simply the set of all admissible type-action distributions for the nonatomic game with type distribution $\lambda$. The price of anarchy is then \[ \mathrm{PoA}(\lambda) := \frac{\sup_{m \in {\mathcal M}(\lambda)}V(m)}{\inf_{m \in {\mathcal A}(\lambda)}V(m)}. \] Under our standing assumptions, $V$ is bounded above and, by Proposition \ref{pr:existence}, ${\mathcal M}(\lambda) \neq \emptyset$ for each fixed $\lambda \in {\mathcal P}({\mathcal W})$. Again, if $V > 0$ pointwise then by continuity $V$ is bounded from below away from zero on the non-empty compact set ${\mathcal A}(\lambda)$, for each fixed $\lambda \in {\mathcal P}({\mathcal W})$, and $\mathrm{PoA}(\lambda)$ is well defined. As before, let $\{W_i\}$ be i.i.d.\ ${\mathcal W}$-valued random variables with distribution $\lambda_0$. See Section \ref{se:PoA-proofs} for the proof of the following: \begin{proposition} \label{pr:PoA} Assume $V>0$ pointwise. It holds almost surely that \[ \limsup_{n\rightarrow\infty}\mathrm{PoA}_n(W_1,\ldots,W_n) \le \mathrm{PoA}(\lambda_0). 
\] Moreover,\footnote{Equivalently, $\mathrm{PoA}_n(W_1,\ldots,W_n)$ satisfies an LDP on $({\mathbb R} \cup \{-\infty\},\tau)$ with good rate function $r \mapsto \inf\left\{H(\lambda|\lambda_0) : \lambda \in {\mathcal P}({\mathcal W}), \ \mathrm{PoA}(\lambda) =r\right\}$, where $\tau = \{[-\infty,a) : a \in {\mathbb R} \cup \{-\infty\}\}$ is the lower topology.} for each $r$, \begin{align*} \limsup_{n\rightarrow\infty}\frac{1}{n}\log{\mathbb P}(\mathrm{PoA}_n(W_1,\ldots,W_n) \ge r) &\le -\inf\left\{H(\lambda|\lambda_0) : \lambda \in {\mathcal P}({\mathcal W}), \ \mathrm{PoA}(\lambda) \ge r\right\}, \\ \liminf_{n\rightarrow\infty}\frac{1}{n}\log{\mathbb P}(\mathrm{PoA}_n(W_1,\ldots,W_n) < r) &\ge -\inf\left\{H(\lambda|\lambda_0) : \lambda \in {\mathcal P}({\mathcal W}), \ \mathrm{PoA}(\lambda) < r\right\}. \end{align*} \end{proposition} \subsection{Congestion games} \label{se:congestion} We now introduce the class of congestion games alluded to in Section \ref{subs-intro-mot}. To specify the model, we work with a finite set ${\mathcal W}$ of types. Given a finite set $E$ of \emph{elements}, the action space is the set ${\mathcal X} = 2^E\backslash \{\emptyset\}$ of nonempty subsets. The constraint map ${\mathcal C}$ is arbitrary for the moment. A continuous increasing function $c_e : [0,\infty) \rightarrow [0,\infty)$ is given for each $e \in E$, which represents the \emph{cost} faced by an agent when using element $e$, as a function of the current \emph{load} or \emph{congestion} on that element. The cost function $F$ is defined by \begin{equation} \label{def-costF} F(m,w,x) := \sum_{e \in x}c_e\left(\ell_e(m)\right), \quad \text{ where } \quad \ell_e(m) := m\{(w,x) \in {\mathcal W}\times{\mathcal X} : e \in x\}. \end{equation} Here $\ell_e(m)$ is the \emph{load} on the element $e$ imposed by the type-action distribution $m$, which is defined as the fraction of agents using the element $e$. 
The cost on a route is additive along edges, and the cost at each edge depends on the corresponding load. Notice that the type does not enter explicitly into $F$, and its only role is to govern the constraints. A typical class of examples, representing a traffic network congestion game, originating with the seminal work of Wardrop \cite{wardrop1952road}, is as follows. The set $E$ is the set of edges of some (directed) graph $(V,E)$, so that an action $x \in {\mathcal X}$ is a set of edges. The type space ${\mathcal W}$ is a subset of $V^2$, so that the \emph{type} $w=(i,j)$ of an agent represents the \emph{source} $i$ and the \emph{destination} $j$ of this agent. The constraint set ${\mathcal C}(w)$ is the set of all (simple) paths connecting the source $i$ to the destination $j$, for $w=(i,j)$. \subsubsection{Existence and uniqueness of equilibrium} \label{subsub-pot} Congestion games are well known to belong to the class of \emph{potential games} \cite{monderer-shapley}, for which (pure-strategy) Nash equilibria always exist, and for which the uniqueness assumption of Theorem \ref{th:intro-LDP} can be established simply by proving a certain function is strictly convex. Consider the function $U : {\mathcal P}({\mathcal W}\times{\mathcal X}) \rightarrow {\mathbb R}$ given by \begin{align} U(m) = \sum_{e\in E}\int_0^{\ell_e(m)}\!\!\!c_e(s)\,ds. \label{def:U} \end{align} Because $c_e \ge 0$ is increasing, the function $t \mapsto \int_0^tc_e(s)\,ds$ is convex, and thus $U$ is itself convex. Moreover, recalling the definition of ${\mathcal A}(\lambda)$ from \eqref{def:A_lambda}, it can be shown that for each $\lambda \in {\mathcal P}({\mathcal W})$ the set of minimizers of $U$ on the set ${\mathcal A}(\lambda)$ is precisely ${\mathcal M}(\lambda)$, the set of Cournot-Nash equilibria with type distribution $\lambda$. Hence, when $U$ is strictly convex, the set ${\mathcal M}(\lambda)$ is a singleton for every $\lambda$. 
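To see heuristically why minimizers of $U$ over ${\mathcal A}(\lambda)$ should be Cournot-Nash equilibria (a sketch of the standard potential-game calculation, not the proof given in Appendix \ref{ap:congestiongames}), fix $m$, a type $w$, and actions $x,y \in {\mathcal C}(w)$, and transfer a small mass $t$ of type-$w$ agents from $x$ to $y$ by setting $m_t := m + t(\delta_{(w,y)} - \delta_{(w,x)})$, so that $\ell_e(m_t) = \ell_e(m) + t(1_{\{e \in y\}} - 1_{\{e \in x\}})$:

```latex
% One-sided derivative of the potential U along the swap m_t:
\[
  \frac{d}{dt}\Big|_{t=0^+} U(m_t)
    = \sum_{e \in E} c_e\bigl(\ell_e(m)\bigr)
      \bigl(1_{\{e \in y\}} - 1_{\{e \in x\}}\bigr)
    = F(m,w,y) - F(m,w,x).
\]
% At a minimizer this derivative is nonnegative for every admissible
% swap, i.e., no type can lower its cost by switching actions, which is
% precisely the Cournot-Nash optimality condition.
```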
The following two propositions justify and elaborate on these claims. At least the first of the two is well known, but we provide the short proofs in Appendix \ref{ap:congestiongames} to keep the paper self-contained. In the following, $|E|$ denotes the cardinality of a set $E$ and for a statement $H$, $1_H$ is $1$ if the statement $H$ holds and is zero otherwise. \begin{proposition} \label{pr:congestion-potential} Fix $\lambda \in {\mathcal P}({\mathcal W})$. Then $m$ minimizes $U(\cdot)$ on ${\mathcal A}(\lambda)$ if and only if $m \in {\mathcal M}(\lambda)$. \end{proposition} The final proposition, regarding uniqueness of the Cournot-Nash equilibrium, is likely suboptimal but is merely meant to illustrate that uniqueness is not an unreasonable requirement for a congestion game: \begin{proposition} \label{pr:congestion-unique} Enumerate ${\mathcal W} = \{w_1,\ldots,w_{|{\mathcal W}|}\}$ and ${\mathcal X} = \{x_1,\ldots,x_{|{\mathcal X}|}\}$. Let $\mathbb{T}$ denote the space of $|{\mathcal W}| \times |{\mathcal X}|$ stochastic matrices, i.e., matrices with nonnegative entries whose rows sum to one. Assume $c_e$ is differentiable with a strictly positive derivative. Suppose $\lambda \in {\mathcal P}({\mathcal W})$ is such that the span of $\{(\lambda\{w_i\}1_{\{e\in x_j\}})_{i,j} : e \in E\}$ contains $\mathbb{T}$. Then $U$ has a unique minimizer on ${\mathcal A}(\lambda)$. \end{proposition} \subsubsection{Price of anarchy} There is a rich literature on \emph{worst-case} bounds, which are typically valid for a large class of cost functions and model specifications. For instance, for the class of linear cost functions, the seminal paper of Roughgarden and Tardos \cite[Theorem 4.5]{roughgarden2002bad} provides a worst-case bound of $4/3$ for the PoA in nonatomic games. More precisely, if $c_e$ is linear for each $e$, then $\mathrm{PoA}(\lambda) \le 4/3$ for all $\lambda \in {\mathcal P}({\mathcal W})$.
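The $4/3$ bound is attained by Pigou's classical two-edge example, which we sketch here for concreteness (our own illustration, assuming a single source-destination pair and the linear costs $c_a(t)=t$, $c_b(t)=1$):

```python
# Pigou's example: the average (social) cost when a fraction x of agents
# uses edge a is x * c_a(x) + (1 - x) * c_b(1 - x) = x^2 + (1 - x).

def social_cost(x):
    return x * x + (1 - x)

eq_cost = social_cost(1.0)  # at equilibrium all agents use edge a: c_a(1) = 1 = c_b
opt_cost = min(social_cost(i / 10000) for i in range(10001))  # optimum splits evenly
print(eq_cost / opt_cost)  # 4/3: the optimal split x = 1/2 has social cost 3/4
```

Here the equilibrium social cost is $1$ while the optimum is $3/4$, so this instance realizes the worst-case ratio $4/3$ for linear costs.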
On the other hand, for finite games with linear cost functions, Christodoulou and Koutsoupias showed in \cite[Theorem 1]{christodoulou2005price} that the worst-case bound on the PoA is $5/2$. That is, if $c_e$ is linear for each $e$, then $\mathrm{PoA}_n(\vec{w}) \le 5/2$ for all $n$ and all $\vec{w} \in {\mathcal W}^n$. These PoA bounds are sharp in the sense that there exist linear cost functions and type distributions for which the bound holds with equality. Nonetheless, the following result asserts that for a \emph{fixed} choice of linear cost functions $\{c_e\}_{e \in E}$, the probability of the PoA in the $n$-player game exceeding $4/3$ decays super-exponentially in $n$. \begin{corollary} \label{co:PoA-congestion} In the congestion game model described above, let $R = \sup_{\lambda \ll \lambda_0}\mathrm{PoA}(\lambda)$, and assume that for each $w \in {\mathcal W}$ and every $x \in {\mathcal C}(w)$ there exists $e \in x$ such that $c_e(t) > 0$ for all $t > 0$. Suppose $\{W_i\}$ is an i.i.d.\ sequence of types with distribution $\lambda_0$. Then, for every $\epsilon > 0$ and $c > 0$, there exists $N$ such that, for all $n \ge N$, \[ {\mathbb P}\left(\mathrm{PoA}_n(W_1,\ldots,W_n) \ge R + \epsilon\right) \le e^{-cn}. \] \end{corollary} \begin{remark} The assumption in Corollary \ref{co:PoA-congestion} is not very restrictive; it means that if an admissible route for a given agent has a nonzero load on every edge, then the route has nonzero travel time. This holds, for instance, if $c_e(t) > 0$ for all $t > 0$ and for all $e \in E$. \end{remark} \begin{proof}[Proof of Corollary \ref{co:PoA-congestion}] In this model, since ${\mathcal W}$ and ${\mathcal X}$ are finite, using \eqref{def:V} and \eqref{def-costF}, for any $m \in {\mathcal P}({\mathcal W}\times{\mathcal X})$ we can write \begin{align*} V(m) &= \sum_{w \in {\mathcal W}}\sum_{x \in {\mathcal C}(w)}m\{(w,x)\}\sum_{e \in x}c_e(\ell_e(m)).
\end{align*} Choose $w \in {\mathcal W}$ and $x \in {\mathcal C}(w)$ such that $m\{(w,x)\} > 0$. By assumption, we may find $e \in x$ such that $c_e(t) > 0$ for all $t > 0$. Then \begin{align*} \ell_e(m) = \sum_{w' \in {\mathcal W}}\sum_{x' \in {\mathcal C}(w')}1_{e \in x'}m\{(w',x')\} \ge 1_{e \in x}m\{(w,x)\} > 0, \end{align*} which implies \begin{align*} V(m) &\ge m\{(w,x)\}c_e(\ell_e(m)) > 0. \end{align*} We are now in a position to apply Proposition \ref{pr:PoA}. Because $\inf\emptyset = \infty$ by convention, \begin{align*} \limsup_{n\rightarrow\infty}\frac{1}{n}\log\,&{\mathbb P}\left(\mathrm{PoA}_n(W_1,\ldots,W_n) \ge R + \epsilon\right) \\ &\le -\inf\left\{H(\lambda | \lambda_0) : \lambda \ll \lambda_0, \ \mathrm{PoA}(\lambda) \ge R + \epsilon\right\} \\ &= -\infty. \end{align*} \end{proof} As discussed above, Roughgarden and Tardos showed that the constant $R$ of Corollary \ref{co:PoA-congestion} is at most $4/3$ when $c_e$ is linear for each $e$. Even though the finite $n$-player game worst-case $\mathrm{PoA}_n$ bound of $5/2$ is optimal \emph{among the class of linear cost functions}, our results show that for large $n$, it is highly unlikely for any \emph{fixed collection of linear cost functions $\{c_e\}_{e \in E}$} to produce a PoA over $4/3$ when sampling i.i.d.\ random types. More generally, Corollary \ref{co:PoA-congestion} produces a high-probability PoA bound for a large but finite population game from a PoA bound for the corresponding class of nonatomic congestion games. \section{Extensions and proofs of main results} \label{se:proofs} We begin our analysis in Section \ref{subs-coupling} by embedding the $n$-player games and the associated nonatomic game on a common space, inspired by a construction of Housman \cite{housman1988infinite}. Then, in Sections \ref{subs-pf-lln1} and \ref{subs-pf-ldp} we prove, respectively, the law of large numbers and large deviation results. 
Finally, we prove the conditional limit theorem in Section \ref{se:conditional-limit} and our results on the price of anarchy in Section \ref{se:PoA-proofs}. \subsection{A common embedding of $n$-player and nonatomic games} \label{subs-coupling} Let $\mathrm{Gr}({\mathcal C})$ denote the graph of the constraint set-valued map ${\mathcal C}$: \[ \mathrm{Gr}({\mathcal C}) = \{(w,x) \in {\mathcal W}\times{\mathcal X} : x \in {\mathcal C}(w)\}. \] We wish to define an \emph{equilibrium map} ${\mathcal N} = {\mathcal N}(\lambda,\epsilon,u)$, which maps certain elements of ${\mathcal P}({\mathcal W}) \times [0,\infty) \times [0,1]$ to subsets of ${\mathcal P}({\mathcal W}\times{\mathcal X})$. The first input parameter, $\lambda \in {\mathcal P}({\mathcal W})$, denotes the distribution of types, while the parameter $\epsilon \in [0,\infty)$ signifies that we are interested in $\epsilon$-Nash equilibria (defined precisely in Remark \ref{rem-map}(4) below). Finally, the parameter $u \in [0,1]$ is interpreted as the \emph{size} (or degree of influence) of an agent. We are only interested in sizes belonging to $\overline{{\mathbb N}}^{-1} := \{1/n : n =1,2,\ldots\} \cup \{0\}$. When the size is $1/n$ we are only interested in discrete probability distributions of the form $m = \frac{1}{n}\sum_{i=1}^n\delta_{(w_i,x_i)}$, where $(w_i,x_i) \in \mathrm{Gr}({\mathcal C})$, $i=1, \ldots, n$. Thus, the domain of the map ${\mathcal N}$ is a certain subset of ${\mathcal P}({\mathcal W}) \times [0,\infty) \times [0,1]$, whose definition requires the following notation. For any set ${\mathcal S}$, and positive integer $n$, let $\mathcal{E}_{1/n}({\mathcal S}) := \{\frac{1}{n}\sum_{i=1}^n\delta_{e_i} : e_i \in {\mathcal S}\}$ denote the set of empirical distributions of $n$ points in ${\mathcal S}$. 
When ${\mathcal S}$ is a metric space, the convention $\mathcal{E}_0({\mathcal S}) := {\mathcal P}({\mathcal S})$ will be useful as well, where as usual, ${\mathcal P}({\mathcal S})$ is the set of Borel probability measures on ${\mathcal S}$. Define ${\mathcal D}({\mathcal N})$ to be the set of $(\lambda,\epsilon,u) \in {\mathcal P}({\mathcal W})\times [0,\infty) \times [0,1]$ such that $u \in \overline{\mathbb{N}}^{-1}$ and $\lambda \in \mathcal{E}_u({\mathcal W})$. That is, \begin{align} {\mathcal D}({\mathcal N}) &:= \bigcup_{u \in \overline{{\mathbb N}}^{-1}}\left(\mathcal{E}_{u}({\mathcal W}) \times [0,\infty) \times \{u\}\right) \label{def-dnn} \end{align} Next, define a real-valued function $G$ by \begin{align} G(m,u,w,x) := F(m,w,x) - \inf_{y \in {\mathcal C}(w)}F\left(m + u(\delta_{(w,y)}-\delta_{(w,x)}),w,y\right), \label{def:Gfunction} \end{align} for $((m,u),w,x)$ in ${\mathcal D} \times{\mathcal W}\times{\mathcal X}$, where \begin{align} {\mathcal D} := \bigcup_{u \in \overline{{\mathbb N}}^{-1}}\left(\mathcal{E}_u({\mathcal W}\times{\mathcal X}) \times \{u\}\right). \label{def-dg} \end{align} Finally, define the equilibrium map ${\mathcal N}$ on ${\mathcal D}({\mathcal N})$ by \begin{align} {\mathcal N}(\lambda,\epsilon,u) &= \left\{m \in \mathcal{E}_u({\mathcal W}\times{\mathcal X}) : m(\text{Gr}({\mathcal C})) = 1, \ m^w = \lambda, \ G(m,u,w,x) \le \epsilon \text{ for } m\text{-a.e.\ } (w,x)\right\}. \label{def:N_n} \end{align} \begin{remark} \label{rem-map} Several comments are in order here. \begin{enumerate} \item ${\mathcal N}(\lambda,0,0)$ is precisely the set ${\mathcal M}(\lambda)$ of Cournot-Nash equilibria; here the ``error'' parameter $\epsilon$ and the ``size'' parameter $u$ are both zero, which means that $\mathcal{E}_0({\mathcal W}\times{\mathcal X})={\mathcal P}({\mathcal W}\times{\mathcal X})$ contains all probability distributions on ${\mathcal W} \times {\mathcal X}$. 
\item When $u =1/n > 0$ for some positive integer $n$, there are $n$ agents, each of ``size'' $1/n$, and ${\mathcal N}(\lambda,\epsilon,u)$ is a subset of $\mathcal{E}_u({\mathcal W}\times{\mathcal X})$, the empirical distributions of $n$ points in ${\mathcal W} \times {\mathcal X}$. \item The term $u(\delta_{(w,y)} - \delta_{(w,x)})$ appearing in $F$ in the definition \eqref{def:Gfunction} of $G$ accounts for the effect on the distribution $m$ of agents when an agent of size $u$ changes its strategy. \item If $(w_1,\ldots,w_n) \in {\mathcal W}^n$ is a type vector for the $n$-player game, it is straightforward to see that ${\mathcal N}\left(\frac{1}{n}\sum_{i=1}^n\delta_{w_i},\epsilon,1/n\right)$ is precisely the set of empirical distributions $\frac{1}{n}\sum_{i=1}^n\delta_{(w_i,x_i)}$, where $(x_1,\ldots,x_n)$ is an $\epsilon$-Nash equilibrium with type vector $(w_1,\ldots,w_n)$, in the sense that $G(\frac{1}{n}\sum_{i=1}^n\delta_{(w_i,x_i)},1/n,w_i,x_i) \leq \epsilon$ for every $i$. Most importantly, recalling the definition of $\widehat{{\mathcal N}}_n(w_1,\ldots,w_n)$ from \eqref{def:intro-Nn}, we have \begin{align*} {\mathcal N}\left(\frac{1}{n}\sum_{i=1}^n\delta_{w_i},0,\frac{1}{n}\right) = \widehat{{\mathcal N}}_n(w_1,\ldots,w_n). \end{align*} \end{enumerate} \end{remark} The key result of this section, inspired by \cite{housman1988infinite}, is that the map ${\mathcal N}$ is \emph{upper hemicontinuous}, a crucial property that is used in the proofs of most of the main results. Let us first recall some basic definitions regarding set-valued functions. Let $X$ and $Y$ be topological spaces, and let $\Gamma : X \rightarrow 2^Y$ map points in $X$ to subsets of $Y$. 
We say that the set-valued map $\Gamma$ is \emph{upper hemicontinuous} if $\{x \in X : \Gamma(x) \subset A\}$ is open in $X$ for every open set $A \subset Y$, and we say that $\Gamma$ is \emph{lower hemicontinuous} if $\{x \in X : \Gamma(x) \cap A \neq \emptyset\}$ is open in $X$ for every open set $A \subset Y$. Say that $\Gamma$ is \emph{continuous} if it is both upper and lower hemicontinuous. If $Y$ is compact Hausdorff, and if $\Gamma(x)$ is closed for each $x$, then $\Gamma$ is upper hemicontinuous if and only if its graph $\mathrm{Gr}(\Gamma) = \{(x,y) \in X \times Y : y \in \Gamma(x)\}$ is closed \cite[Theorem 17.11]{aliprantisborder}. On the other hand, if $X$ and $Y$ are metric spaces, there is a useful sequential characterization (c.f.\ Theorems 17.16 and 17.19 of \cite{aliprantisborder}): first, $\Gamma$ is lower hemicontinuous if and only if, whenever $x_n \rightarrow x$ in $X$ and $y \in \Gamma(x)$, there exist integers $1 \le n_1 < n_2 < \ldots$ and $y_{n_k} \in \Gamma(x_{n_k})$ such that $y_{n_k} \rightarrow y$. Second, a map $\Gamma$ with compact values is upper hemicontinuous if and only if, whenever $x_n \rightarrow x$ in $X$ and $y_n \in \Gamma(x_n)$ for all $n$, the sequence $\{y_n\}$ is precompact, and every limit point belongs to $\Gamma(x)$. \begin{proposition} \label{pr:Nash-UHC} The sets ${\mathcal D}({\mathcal N})$ and ${\mathcal D}$ in \eqref{def-dnn} and \eqref{def-dg} are closed. The set-valued map ${\mathcal N}$ in \eqref{def:N_n} is upper hemicontinuous on ${\mathcal D}({\mathcal N})$ with compact values, and the function $G$ in \eqref{def:Gfunction} is continuous. In particular, ${\mathcal N}(\lambda,0,0) = {\mathcal M}(\lambda)$ is closed for all $\lambda \in {\mathcal P}({\mathcal W})$. 
\end{proposition} \begin{proof} Let $(\lambda_n,\epsilon_n,u_n) \in {\mathcal D}({\mathcal N})$ and $(\lambda_\infty,\epsilon_\infty,u_\infty) \in {\mathcal P}({\mathcal W}) \times [0,\infty) \times [0,1]$ with $(\lambda_n,\epsilon_n,u_n) \rightarrow (\lambda_\infty,\epsilon_\infty,u_\infty)$. If $u_\infty=0$, then trivially $\lambda_\infty \in \mathcal{E}_0({\mathcal W}) = {\mathcal P}({\mathcal W})$, and so $(\lambda_\infty,\epsilon_\infty,u_\infty) \in {\mathcal D}({\mathcal N})$. If $u_\infty \neq 0$, then there exists $N$ such that $u_\infty = u_n = u$ for all $n \ge N$. But then $\lambda_n$ belongs to the closed set $\mathcal{E}_{u_\infty}({\mathcal W})$ for all $n \ge N$, and thus so does $\lambda_\infty$. Moreover, by definition $\epsilon_\infty \in [0,\infty)$. This shows that ${\mathcal D}({\mathcal N})$ is closed, and the same argument shows that ${\mathcal D}$ is closed. To show that ${\mathcal N}$ is upper hemicontinuous, we use the sequential characterization described above. Let $(\lambda_n,\epsilon_n,u_n) \in {\mathcal D}({\mathcal N})$ with $(\lambda_n,\epsilon_n,u_n) \rightarrow (\lambda_\infty,\epsilon_\infty,u_\infty)$, and let $m_n \in {\mathcal N}(\lambda_n,\epsilon_n,u_n)$ for every $n$. First, note that $m_n^w = \lambda_n$ for each $n$, which implies that $\{m_n^w\} \subset {\mathcal P}({\mathcal W})$ is tight by Prokhorov's theorem and our standing assumption (2) that ${\mathcal W}$ is a complete separable metric space. Because ${\mathcal X}$ is compact, $\{m_n^x\} \subset {\mathcal P}({\mathcal X})$ is also tight. Thus $\{m_n\} \subset {\mathcal P}({\mathcal W}\times{\mathcal X})$ is tight, and by Prokhorov's theorem it admits a subsequential limit point $m_\infty$. It remains to show that $m_\infty \in {\mathcal N}(\lambda_\infty,\epsilon_\infty,u_\infty)$. We will abuse notation somewhat by assuming $m_n \rightarrow m_\infty$. 
Because $m_n(\mathrm{Gr}({\mathcal C}))=1$ for each $n$ and $\mathrm{Gr}({\mathcal C})$ is closed, the Portmanteau theorem yields $m_\infty(\mathrm{Gr}({\mathcal C}))=1$. It is clear also that \[ \lambda_\infty = \lim_{n\rightarrow\infty}\lambda_n = \lim_{n\rightarrow\infty}m^w_n = m^w_\infty, \] where the limits are in the sense of weak convergence. The continuity of $F$ and ${\mathcal C}$ of standing assumptions (3-4) implies the continuity of $G$ by Berge's theorem \cite[Theorem 17.31]{aliprantisborder}. Define measures $\eta_n$ on ${\mathcal D} \times {\mathcal W}\times{\mathcal X}$ by \begin{align*} \eta_n(dm,du,dw,dx) = \delta_{(m_n,u_n)}(dm,du)m_n(dw,dx). \end{align*} Then $\eta_n \rightarrow \eta_\infty$ because $(m_n,u_n) \rightarrow (m_\infty,u_\infty)$, and it follows from the Portmanteau theorem that for any $\Delta > 0$, \begin{align*} m_\infty\left\{(w,x) : G(m_\infty,u_\infty,w,x) \le \epsilon_\infty+ \Delta\right\} &= \eta_\infty\left\{(m,u,w,x) : G(m,u,w,x) \le \epsilon_\infty + \Delta \right\} \\ &\ge \limsup_{n\rightarrow\infty}\eta_n\left\{(m,u,w,x) : G(m,u,w,x) \le \epsilon_\infty + \Delta \right\} \\ &= \limsup_{n\rightarrow\infty}m_n\left\{(w,x) : G(m_n,u_n,w,x) \le \epsilon_\infty + \Delta \right\} \\ &\geq \limsup_{n\rightarrow\infty} m_n\left\{(w,x) : G(m_n,u_n,w,x) \le \epsilon_n \right\} \\ &= 1, \end{align*} where the last inequality uses the fact that, since $\epsilon_n \rightarrow \epsilon_\infty$, $\epsilon_n \leq \epsilon_\infty + \Delta$ for all sufficiently large $n$, and the last equality holds because $m_n \in {\mathcal N}(\lambda_n,\epsilon_n,u_n)$. Since $\Delta > 0$ is arbitrary, it follows that $G(m_\infty,u_\infty,w,x) \le \epsilon_\infty$ for $m_\infty$ a.e.\ $(w,x)$. It remains to check that $m_\infty$ belongs to $\mathcal{E}_{u_\infty}({\mathcal W}\times{\mathcal X})$. 
If $u_\infty=0$, there is nothing to prove because of the convention $\mathcal{E}_0({\mathcal W}\times{\mathcal X}) = {\mathcal P}({\mathcal W}\times{\mathcal X})$. If $u_\infty > 0$, then there exists $N$ such that $u_n=u_\infty$ for all $n \ge N$. Then $m_n$ belongs to the closed set $\mathcal{E}_{u_\infty}({\mathcal W}\times{\mathcal X})$ for all $n \ge N$, and hence, so does $m_\infty$. \end{proof} \subsection{Existence of Cournot-Nash equilibria} \label{se:existence} Under our standing assumptions, there always exist Cournot-Nash equilibria for the nonatomic game. The proof uses a simple argument due to Mas-Colell \cite{mas1984theorem}, appropriately modified to incorporate the constraint map ${\mathcal C}$. \begin{proposition} \label{pr:existence} For each $\lambda \in {\mathcal P}({\mathcal W})$, ${\mathcal M}(\lambda) \neq \emptyset$. \end{proposition} \begin{proof} Let $\mathrm{Gr}({\mathcal C}) = \{(w,x) \in {\mathcal W}\times{\mathcal X} : x \in {\mathcal C}(w)\}$ as before, and define ${\mathcal A}(\lambda)$ as in \eqref{def:A_lambda}. Note that ${\mathcal A}(\lambda)$ is closed, as $\mathrm{Gr}({\mathcal C})$ is closed by our standing assumption (3). Because ${\mathcal X}$ is compact and ${\mathcal W} \times {\mathcal X}$ is a complete separable metric space by our standing assumptions (1-2), it is straightforward to check that ${\mathcal A}(\lambda)$ is tight and thus compact. Consider the map $\Phi$ from ${\mathcal A}(\lambda)$ into subsets of ${\mathcal A}(\lambda)$, given by \[ \Phi(m) = \left\{\widetilde{m} \in {\mathcal A}(\lambda) : \int_{{\mathcal W} \times {\mathcal X}}G(m,0,w,x)\widetilde{m}(dw,dx) \le 0 \right\} \] for $m \in {\mathcal A}(\lambda)$. Note that $m \in {\mathcal P}({\mathcal W}\times{\mathcal X})$ is a Cournot-Nash equilibrium with type distribution $\lambda$ if and only if $m \in \Phi(m)$, i.e., $m$ is a fixed point of $\Phi$. Clearly ${\mathcal A}(\lambda)$ is convex, and hence $\Phi(m)$ is convex for each $m$. 
The graph $\mathrm{Gr}(\Phi) = \{(m,\widetilde{m}) \in {\mathcal A}(\lambda)\times{\mathcal A}(\lambda) : \widetilde{m} \in \Phi(m)\}$ is easily seen to be closed, using the fact that $G$ is continuous (due to Proposition \ref{pr:Nash-UHC}) and bounded (by standing assumption (4)). To check that $\Phi(m)$ is nonempty for each $m$, note that there exists (e.g., by \cite[Theorem 18.19]{aliprantisborder}) a measurable function $\hat{x} : {\mathcal A}(\lambda) \times {\mathcal W} \rightarrow {\mathcal X}$ such that $\hat{x}(m,w) \in {\mathcal C}(w)$ and $F(m,w,\hat{x}(m,w))=\inf_{y \in {\mathcal C}(w)}F(m,w,y)$ for each $(m,w) \in {\mathcal A}(\lambda) \times {\mathcal W}$. Then $\hat{m}(dw,dx) = \lambda(dw)\delta_{\hat{x}(m,w)}(dx)$ always belongs to $\Phi(m)$. Because $\Phi$ has a closed graph and nonempty convex values, it admits a fixed point by the Kakutani--Fan--Glicksberg theorem \cite[Corollary 17.55]{aliprantisborder}. \end{proof} \subsection{Proof of laws of large numbers} \label{subs-pf-lln1} Using Proposition \ref{pr:Nash-UHC}, we give streamlined proofs of Theorems \ref{th:intro-limit-setvalued} and \ref{th:intro-limit}, and even an extension of the latter. \begin{proof}[Proof of Theorem \ref{th:intro-limit-setvalued}] Proposition \ref{pr:Nash-UHC} shows that ${\mathcal N}$ is upper hemicontinuous. According to Lemma \ref{le:uhcVietoris}, this implies that ${\mathcal N}$ is continuous as a map from ${\mathcal D}({\mathcal N})$ to the space $\mathfrak{C}$ of closed subsets of ${\mathcal P}({\mathcal W}\times{\mathcal X})$ endowed with the upper Vietoris topology. Because $(\frac{1}{n}\sum_{i=1}^n\delta_{W_i},0,\frac{1}{n})$ converges almost surely to $(\lambda_0,0,0)$, it follows (see Remark \ref{rem-map}(4) for the first equality) that, almost surely, \begin{align*} \widehat{{\mathcal N}}_n(W_1,\dots,W_n) &= {\mathcal N}\left(\frac{1}{n}\sum_{i=1}^n\delta_{W_i},0,\frac{1}{n}\right) \rightarrow {\mathcal N}(\lambda_0,0,0) = {\mathcal M}(\lambda_0).
\end{align*} \end{proof} We next turn to the proof of Theorem \ref{th:intro-limit}, which we precede with a preliminary technical lemma. First, for $\epsilon \ge 0$ and $n \in \mathbb{N}$, let $N^\epsilon_n (\vec{w})$ denote the set of $\epsilon$-Nash equilibria with type vector $\vec{w} \in {\mathcal W}^n$. By Remark \ref{rem-map}(4), this can be expressed in terms of the function $G$ of \eqref{def:Gfunction} as \[ N^\epsilon_n (w_1, \ldots, w_n) = \left\{ \vec{x} \in {\mathcal X}^n: x_i \in {\mathcal C}(w_i) \mbox{ and } G\left(\frac{1}{n} \sum_{i=1}^n \delta_{(w_i, x_i)}, \frac{1}{n}, w_i, x_i\right)\leq \epsilon, \, \forall i= 1, \ldots, n\right\}. \] Also, let $D^\epsilon_n$ be the set of $\vec{w} \in {\mathcal W}^n$ for which there exists an associated $\epsilon$-Nash equilibrium: \[ D^\epsilon_n = \left\{ \vec{w} \in {\mathcal W}^n: N^\epsilon_n (\vec{w}) \neq \emptyset\right\}. \] \begin{lemma} \label{le:measurable-selection} For each $n$ and $\epsilon \ge 0$, the set $D^\epsilon_n$ is closed. Moreover, there exists a universally measurable map $\hat{x} : D^\epsilon_n \rightarrow {\mathcal X}^n$ such that $\hat{x}(\vec{w}) \in N^\epsilon_n(\vec{w})$ for each $\vec{w} \in D^\epsilon_n$. \end{lemma} \begin{proof} Continuity of $G$ (proven in Proposition \ref{pr:Nash-UHC}) and closedness of the graph of ${\mathcal C}$ (one of our standing assumptions) together imply that the graph \[ \mathrm{Gr}(N^\epsilon_n) = \left\{(\vec{w},\vec{x}) \in {\mathcal W}^n \times {\mathcal X}^n : \vec{x} \in N^\epsilon_n(\vec{w})\right\} \] is closed. The projection from ${\mathcal W}^n \times {\mathcal X}^n$ to ${\mathcal W}^n$ is a closed map, since ${\mathcal X}^n$ is compact, which shows that $D^\epsilon_n$ is closed. The existence of $\hat{x}$ follows from the Jankov-von Neumann theorem \cite[Proposition 7.49]{bertsekasshreve}. \end{proof} \begin{theorem} \label{th:limit} Let $\epsilon_n \ge 0$ be such that $\epsilon_n \rightarrow 0$.
Suppose, for each $n$ and each $\vec{w} \in {\mathcal W}^n$, we are given an $\epsilon_n$-Nash equilibrium $\hat{x}^n(\vec{w})=(\hat{x}^n_1(\vec{w}),\ldots,\hat{x}^n_n(\vec{w}))$ with type vector $\vec{w}$. By Lemma \ref{le:measurable-selection} we may assume each $\hat{x}^n_i$ is universally measurable. Finally, suppose $\vec{W}^n=(W^n_1,\ldots,W^n_n)$ is a ${\mathcal W}^n$-valued random vector, and define the random empirical distributions \[ \mu_n = \frac{1}{n}\sum_{i=1}^n\delta_{(W^n_i,\hat{x}^n_i(\vec{W}^n))}, \quad\quad\quad \mu^w_n := \frac{1}{n}\sum_{i=1}^n\delta_{W^n_i} . \] Then the following hold: \begin{enumerate}[(i)] \item If the sequence $\{\mu_n^w\}$ is tight, then so is the sequence $\{\mu_n\}$, and every subsequential limit in distribution $\mu$ of $\{\mu_n\}$ satisfies $\mu \in {\mathcal M}(\mu^w)$, almost surely. \item If $\mu^w_n \rightarrow \lambda_0$ in distribution, where $\lambda_0 \in {\mathcal P}({\mathcal W})$ is deterministic, then every subsequential limit in distribution of $\{\mu_n\}$ is supported on ${\mathcal M}({\lambda_0})$. In particular, \begin{align} \lim_{n\rightarrow\infty}{\mathbb P}(d(\mu_n,{\mathcal M} ({\lambda_0})) \ge \epsilon) = 0, \label{def:convergence-in-prob} \end{align} for every $\epsilon > 0$, and any metric $d$ on ${\mathcal P}({\mathcal W}\times{\mathcal X})$ compatible with weak convergence. \item If $W^n_1,\ldots,W^n_n$ are i.i.d.\ with distribution $\lambda_0 \in {\mathcal P}({\mathcal W})$, for each $n$, then $d(\mu_n,{\mathcal M}({\lambda_0})) \rightarrow 0$ almost surely, for $d$ as in (ii). \end{enumerate} \end{theorem} \begin{proof} {\ } \\ \begin{enumerate}[(i)] \item By \cite[Proposition 2.2(ii)]{sznitman}, tightness of $\{{\mathbb P} \circ (\mu_n^w)^{-1}\} \subset {\mathcal P}({\mathcal P}({\mathcal W}))$ is equivalent to tightness of the sequence of mean measures $\{{\mathbb E}[\mu_n^w]\} \subset {\mathcal P}({\mathcal W})$, where ${\mathbb E}[\mu_n^w](\cdot) := {\mathbb E}[\mu_n^w(\cdot)]$. 
The mean measure ${\mathbb E}[\mu_n]$ has first marginal ${\mathbb E}[\mu_n^w]$, and because ${\mathcal X}$ is compact we conclude that $\{{\mathbb E}[\mu_n]\} \subset {\mathcal P}({\mathcal W}\times{\mathcal X})$ is tight. Again using \cite[Proposition 2.2(ii)]{sznitman}, we conclude that $\{{\mathbb P} \circ \mu_n^{-1}\} \subset {\mathcal P}({\mathcal P}({\mathcal W}\times{\mathcal X}))$ is tight. Now, by Skorohod's representation theorem, we may assume that (along a subsequence) $\mu_n$ converges almost surely to a random element $\mu$ of ${\mathcal P}({\mathcal W}\times{\mathcal X})$. This implies $\mu^w_n \rightarrow \mu^w$ a.s. Since $\mu_n \in {\mathcal N}(\mu^w_n,\epsilon_n,1/n)$ by assumption, the upper hemicontinuity of ${\mathcal N}$ implies that a.s., $\mu$ must belong to ${\mathcal N}(\mu^w,0,0) = {\mathcal M}(\mu^w)$, where the last equality holds by Remark \ref{rem-map}(1). \item Suppose the random measure $\mu$ is a subsequential limit in distribution of $\{\mu_n\}$. Because $\mu^w_n \rightarrow \lambda_0$, we must have $\mu^w=\lambda_0$ a.s. We conclude from (i) that ${\mathbb P}(\mu \in {\mathcal M}(\lambda_0))=1$. Thus, for any subsequential limit $\mu$ of $\{\mu_n\}$ and $\epsilon > 0$, we have ${\mathbb P}(d(\mu,{\mathcal M}(\lambda_0)) \ge \epsilon) = 0$. When combined with the Portmanteau theorem, the closedness of the set ${\mathcal M}(\lambda_0)$ established in Proposition \ref{pr:Nash-UHC} and the consequent closedness of $\{m \in {\mathcal P}({\mathcal W}\times{\mathcal X}) : d(m,{\mathcal M}(\lambda_0)) \ge \epsilon\}$ imply the claim \eqref{def:convergence-in-prob}. \item Almost surely, the following holds: Because $\mu^w_n \rightarrow \lambda_0$ due to $\{W_i^n\}$ being i.i.d., and $\mu_n \in {\mathcal N}(\mu^w_n,\epsilon_n,1/n)$, upper hemicontinuity of ${\mathcal N}$ implies that the sequence $\{\mu_n\}$ is precompact and that every limit point belongs to ${\mathcal N}(\lambda_0,0,0)$.
By Remark \ref{rem-map}(1), this is enough to show $d(\mu_n,{\mathcal M}(\lambda_0)) \rightarrow 0$. \end{enumerate} \end{proof} \subsection{Proofs of large deviation results} \label{subs-pf-ldp} We are now prepared to state and prove an extension of our main result (Theorem \ref{th:intro-LDP-setvalued}) that allows for approximate equilibria. Recall from Section \ref{se:intro:LDP-set} the definition of the space $\mathfrak{C}$, equipped with the upper Vietoris topology. Having identified the suitable space, topology and mappings, the proof of this extension follows from a simple application of the contraction principle from large deviations theory. As we will use it on several occasions, it is worth recalling here the general definition of an LDP. We say that a sequence of Borel probability measures $\{\nu_n\}$ on a topological space $S$ satisfies an LDP with good rate function $I : S \rightarrow [0,\infty]$ if the level set $\{s \in S : I(s) \le c\}$ is compact for each $c \ge 0$ and if the following holds for every Borel set $A \subset S$: \begin{align*} \limsup_{n\rightarrow\infty}\frac{1}{n}\log \nu_n(A) &\le -\inf_{s \in \overline{A}}I(s), \\ \liminf_{n\rightarrow\infty}\frac{1}{n}\log \nu_n(A) &\ge -\inf_{s \in A^\circ}I(s), \end{align*} where $\overline{A}$ and $A^\circ$ denote the closure and interior. We say a sequence of $S$-valued random variables satisfies an LDP if the corresponding sequence of probability measures does. In the following, recall the definition of the relative entropy $H$ from \eqref{def:entropy}. \begin{theorem} \label{th:LDP-setvalued} Suppose $\epsilon_n \rightarrow 0$ and $\{W_i\}$ is an i.i.d.\ sequence of ${\mathcal W}$-valued random variables with common type distribution $\lambda_0$. 
Then the sequence of sets of $\epsilon_n$-Nash equilibria \[ {\mathcal N}\left(\frac{1}{n}\sum_{i=1}^n\delta_{W_i},\epsilon_n,\frac{1}{n}\right), \quad n \in \mathbb{N}, \] satisfies an LDP on $\mathfrak{C}$ with good rate function \begin{align} J(A) = \inf\left\{H(\lambda | \lambda_0) : \lambda \in {\mathcal P}({\mathcal W}), \ {\mathcal M}(\lambda) = A\right\}. \label{def:J} \end{align} \end{theorem} \begin{proof} First recall from Proposition \ref{pr:Nash-UHC} that ${\mathcal N}(\lambda,\epsilon,u)$ is closed and thus belongs to $\mathfrak{C}$, for every $(\lambda,\epsilon,u) \in {\mathcal D}({\mathcal N})$. Define two ${\mathcal D}({\mathcal N})$-valued random variables \[ M_n = \left(\frac{1}{n}\sum_{i=1}^n\delta_{W_i},\epsilon_n,\frac{1}{n}\right), \quad\quad\quad M_n^0 = \left(\frac{1}{n}\sum_{i=1}^n\delta_{W_i},0,0\right). \] By Sanov's theorem and the contraction principle \cite[Theorem 4.2.1]{dembozeitouni}, applied to the continuous map ${\mathcal P}({\mathcal W}) \ni \lambda \mapsto (\lambda,0,0) \in {\mathcal P}({\mathcal W})\times [0,\infty)\times [0,1]$, $\{M^0_n\}$ satisfies an LDP on ${\mathcal P}({\mathcal W})\times [0,\infty)\times [0,1]$ with good rate function \begin{align*} (\lambda,\epsilon,u) &\mapsto \begin{cases} H(\lambda | \lambda_0) &\text{if } \epsilon=u=0, \\ \infty &\text{otherwise}. \end{cases} \end{align*} The sequences $\{M_n\}$ and $\{M^0_n\}$ are exponentially equivalent in the sense that (c.f.\ \cite[4.2.10]{dembozeitouni}) \begin{align} \limsup_{n\rightarrow\infty}\frac{1}{n}\log{\mathbb P}(\bar{d}(M_n, M_n^0) \ge a) = -\infty, \text{ for all } a > 0, \label{pf:exp-equiv} \end{align} where we define the metric $\bar{d}$ on ${\mathcal P}({\mathcal W})\times [0,\infty)\times [0,1]$ by \[ \bar{d}((\lambda',\epsilon',u'),(\lambda,\epsilon,u)) = d(\lambda,\lambda') + |\epsilon-\epsilon'| + |u-u'|, \] where $d$ is any metric on ${\mathcal P}({\mathcal W})$ compatible with weak convergence. 
In fact, the probability in \eqref{pf:exp-equiv} is zero for sufficiently large $n$. Thus, $\{M_n\}$ satisfies an LDP with the same rate function \cite[Theorem 4.2.13]{dembozeitouni}. Because ${\mathcal N}$ is upper hemicontinuous as a set-valued map (by Proposition \ref{pr:Nash-UHC}), it is continuous as a map from ${\mathcal D}({\mathcal N})$ to $\mathfrak{C}$, equipped with the upper Vietoris topology (see Lemma \ref{le:uhcVietoris}). Thus, the contraction principle (see \cite[Theorem 4.2.1]{dembozeitouni}, which does not actually need the spaces to be Hausdorff) implies that $\{{\mathcal N}(M_n)={\mathcal N}\left(\frac{1}{n}\sum_{i=1}^n\delta_{W_i},\epsilon_n,\frac{1}{n}\right)\}$ satisfies an LDP on $\mathfrak{C}$ with good rate function \begin{align*} A &\mapsto \inf\left\{H(\lambda | \lambda_0) : \lambda \in {\mathcal P}({\mathcal W}), \ {\mathcal N}(\lambda,0,0) = A\right\} \\ &= \inf\left\{H(\lambda | \lambda_0) : \lambda \in {\mathcal P}({\mathcal W}), \ {\mathcal M}(\lambda) = A\right\} \\ &= J(A), \end{align*} where the first equality uses the fact that ${\mathcal N} (\lambda, 0, 0) = {\mathcal M} (\lambda)$ from Remark \ref{rem-map}(1). \end{proof} \begin{remark} \label{rem-LDP-setvalued} Recalling from Remark \ref{rem-map}(4) that $\widehat{{\mathcal N}}_n(w_1,\ldots,w_n) = {\mathcal N}\left(\frac{1}{n}\sum_{i=1}^n\delta_{w_i},0,1/n\right)$, Theorem \ref{th:intro-LDP-setvalued} is an immediate corollary of Theorem \ref{th:LDP-setvalued}. \end{remark} \begin{remark} Because the proof of Theorem \ref{th:LDP-setvalued} relies on the contraction principle, a similar result holds if we weaken the assumptions on the type sequence $\{W_i\}$. They need not be i.i.d., as long as the sequence of empirical distributions $\frac{1}{n}\sum_{i=1}^n\delta_{W_i}$ satisfies some LDP. \end{remark} We next state an extension of Theorem \ref{th:intro-LDP} and prove it using Theorem \ref{th:LDP-setvalued} and some elementary properties of the upper Vietoris topology.
Interestingly, even without uniqueness we find upper and lower bounds, although they do not match in general. \begin{theorem} \label{th:LDP} Use the notation and assumptions of Theorem \ref{th:limit}, and assume also that $W^n_i=W_i$ for $1 \le i \le n$, where $\{W_i\}$ is an i.i.d.\ sequence with distribution $\lambda_0 \in {\mathcal P}({\mathcal W})$. Then we have the following bounds, valid for every measurable set $A \subset {\mathcal P}({\mathcal W}\times{\mathcal X})$: \begin{align} \limsup_{n\rightarrow\infty}\frac{1}{n}\log{\mathbb P}(\mu_n \in A) &\le -\inf\left\{H(m^w|\lambda_0) : m \in \overline{A} \cap {\mathcal M}\right\}, \label{th:def:LDP-upperbound} \\ \liminf_{n\rightarrow\infty}\frac{1}{n}\log{\mathbb P}(\mu_n \in A) &\ge -\inf\{H(\lambda|\lambda_0) : \lambda \in {\mathcal P}({\mathcal W}), \ {\mathcal M}(\lambda) \subset A^\circ\}. \label{th:def:LDP-lowerbound} \end{align} Moreover, if ${\mathcal M}(\lambda)$ is a singleton for every $\lambda \ll \lambda_0$, then \begin{align} \liminf_{n\rightarrow\infty}\frac{1}{n}\log{\mathbb P}(\mu_n \in A) &\ge -\inf\left\{H(m^w|\lambda_0) : m \in A^\circ \cap {\mathcal M}\right\}. \label{th:def:LDP-lowerbound2} \end{align} \end{theorem} \begin{proof} Suppose $A$ is closed. Then $\mathfrak{U} := \{E \in \mathfrak{C} : E \subset A^c\} = \{E \in \mathfrak{C} : E \cap A = \emptyset\}$ is open in the upper Vietoris topology, so its complement is closed. 
Thus, using the upper bound of Theorem \ref{th:LDP-setvalued}, \begin{align*} \limsup_{n\rightarrow\infty}&\frac{1}{n}\log{\mathbb P}\left({\mathcal N}\left(\frac{1}{n}\sum_{i=1}^n\delta_{W_i},\epsilon_n,\frac{1}{n}\right) \cap A \neq \emptyset\right) \\ &\le -\inf_{B \in \mathfrak{U}^c}J(B) \\ &= -\inf_{B \in \mathfrak{U}^c}\inf\left\{H(\lambda | \lambda_0) : \lambda \in {\mathcal P}({\mathcal W}), \ {\mathcal M}(\lambda) = B\right\} \\ &= -\inf\left\{H(\lambda | \lambda_0) : \lambda \in {\mathcal P}({\mathcal W}), \ {\mathcal M}(\lambda) \cap A \neq \emptyset\right\} \\ &= -\inf\left\{H(m^w | \lambda_0) : m \in {\mathcal M} \cap A\right\}. \end{align*} Indeed, this last equality follows from two simple observations: If ${\mathcal M}(\lambda) \cap A \neq \emptyset$, then there exists $m \in {\mathcal M} (\lambda) \cap A \subset {\mathcal M} \cap A$ such that $m^w = \lambda$. On the other hand, if $m \in {\mathcal M} \cap A$, then $m \in {\mathcal M}(m^w)$, so ${\mathcal M}(m^w)\cap A \neq \emptyset$. Finally, the upper bound \eqref{th:def:LDP-upperbound} follows from the inequality \[ {\mathbb P}\left(\mu_n \in A\right) \le {\mathbb P}\left({\mathcal N}\left(\frac{1}{n}\sum_{i=1}^n\delta_{W_i},\epsilon_n,\frac{1}{n}\right) \cap A \neq \emptyset\right), \] which holds because, by Remark \ref{rem-map}(4), $\mu_n \in {\mathcal N}\left(\frac{1}{n}\sum_{i=1}^n\delta_{W_i},\epsilon_n,\frac{1}{n}\right)$ a.s. To prove the lower bound, let $A$ be open, and notice that $\mathfrak{U} = \{E \in \mathfrak{C} : E \subset A\}$ is open in the upper Vietoris topology. 
Theorem \ref{th:LDP-setvalued} then implies \begin{align*} \liminf_{n\rightarrow\infty}&\frac{1}{n}\log{\mathbb P}\left({\mathcal N}\left(\frac{1}{n}\sum_{i=1}^n\delta_{W_i},\epsilon_n,\frac{1}{n}\right) \subset A\right) \\ &\ge -\inf_{B \in \mathfrak{U}}J(B) \\ &= -\inf_{B \in \mathfrak{U}}\inf\left\{H(\lambda | \lambda_0) : \lambda \in {\mathcal P}({\mathcal W}), \ {\mathcal M}(\lambda) = B\right\} \\ &= -\inf\left\{H(\lambda | \lambda_0) : \lambda \in {\mathcal P}({\mathcal W}), \ {\mathcal M}(\lambda) \subset A\right\}, \end{align*} where the last equality uses the property that ${\mathcal M} (\lambda)$ is closed for every $\lambda \in {\mathcal P}({\mathcal W})$ (see Proposition \ref{pr:Nash-UHC}). Then the lower bound \eqref{th:def:LDP-lowerbound} follows from the inequality \[ {\mathbb P}(\mu_n \in A) \ge {\mathbb P}\left({\mathcal N}\left(\frac{1}{n}\sum_{i=1}^n\delta_{W_i},\epsilon_n,\frac{1}{n}\right) \subset A\right), \] which again holds because $\mu_n \in {\mathcal N}\left(\frac{1}{n}\sum_{i=1}^n\delta_{W_i},\epsilon_n,\frac{1}{n}\right)$ a.s.; see Remark \ref{rem-map}(4). Finally, we deduce \eqref{th:def:LDP-lowerbound2} from \eqref{th:def:LDP-lowerbound}. Again let $A$ be open and note first that $H(\lambda | \lambda_0) < \infty$ only if $\lambda \ll \lambda_0$. Supposing ${\mathcal M}(\lambda) = \{M[\lambda]\}$ is a singleton for all $\lambda \ll \lambda_0$, then trivially $m= M[m^w]$ for all $m \in {\mathcal M}$, and \begin{align*} \inf\left\{H(\lambda | \lambda_0) : \lambda \in {\mathcal P}({\mathcal W}), \ {\mathcal M}(\lambda) \subset A\right\} &= \inf\left\{H(\lambda | \lambda_0) : \lambda \ll \lambda_0, \ M[\lambda] \in A\right\} \\ &= \inf\left\{H(m^w | \lambda_0) : m \in {\mathcal M} \cap A, \ m^w \ll \lambda_0\right\} \\ &=\inf\left\{H(m^w | \lambda_0) : m \in {\mathcal M} \cap A\right\} . 
\end{align*} \end{proof} In applications, it is important to know if the bounds in the large deviation principles of Theorems \ref{th:LDP} and \ref{th:LDP-setvalued} are nonzero. The following straightforward lemmas help to check this. \begin{lemma} \label{le:inf>0-setvalued} Let $J$ be as in \eqref{def:J}, and let $\mathfrak{U} \subset \mathfrak{C}$ be a closed set with ${\mathcal M}(\lambda_0) \notin \mathfrak{U}$. Then $\inf_{A \in \mathfrak{U}}J(A) > 0$. \end{lemma} \begin{proof} Note that \[ \inf_{A \in \mathfrak{U}}J(A) = \inf\left\{H(\lambda | \lambda_0) : \lambda \in {\mathcal P}({\mathcal W}), \ {\mathcal M}(\lambda) \in \mathfrak{U}\right\}. \] Recall that $\lambda \mapsto {\mathcal M}(\lambda) = {\mathcal N}(\lambda,0,0) \in \mathfrak{C}$ is continuous by Proposition \ref{pr:Nash-UHC} and Lemma \ref{le:uhcVietoris}. Hence, the set $S = \{\lambda \in {\mathcal P}({\mathcal W}) : {\mathcal M}(\lambda) \in \mathfrak{U}\}$ is closed because $\mathfrak{U}$ is. Because $\lambda \mapsto H(\lambda | \lambda_0)$ is lower semicontinuous and has compact sub-level sets, there exists $\lambda^* \in S$ such that $H(\lambda^* | \lambda_0) = \inf_{A \in \mathfrak{U}}J(A)$. But ${\mathcal M}(\lambda^*) \in \mathfrak{U}$ implies $\lambda^* \neq \lambda_0$, and thus $H(\lambda^* | \lambda_0) > 0$. \end{proof} For our second observation, Lemma \ref{le:inf>0} below, we need the following simple property: \begin{lemma} \label{le:Mclosed} The set ${\mathcal M}$ is closed. Moreover, the sub-level set $\{m \in {\mathcal P}({\mathcal W}\times{\mathcal X}) : H(m^w | \lambda_0) \le c\}$ is compact for every $c < \infty$. \end{lemma} \begin{proof} Suppose $m_n \in {\mathcal M}$ and $m \in {\mathcal P}({\mathcal W}\times{\mathcal X})$ with $m_n \rightarrow m$. Then $\lambda_n := m_n^w$ converges to $\lambda := m^w$. 
Moreover, $m_n \in {\mathcal M}(\lambda_n)= {\mathcal N}(\lambda_n,0,0)$, and the upper hemicontinuity of ${\mathcal N}$ (proven in Proposition \ref{pr:Nash-UHC}) implies that the unique limit point $m$ must belong to ${\mathcal N}(\lambda,0,0) \subset {\mathcal M}$. This proves that ${\mathcal M}$ is closed. The second statement follows because ${\mathcal X}$ is compact and the sub-level set $\{\lambda \in {\mathcal P}({\mathcal W}) : H(\lambda | \lambda_0) \le c\}$ is compact for each $c < \infty$ \cite[Lemma 1.4.3(c)]{Dupuis-Ellis}. \end{proof} \begin{lemma} \label{le:inf>0} If $A \subset {\mathcal P}({\mathcal W}\times{\mathcal X})$ is closed and $A \cap {\mathcal M}(\lambda_0) = \emptyset$, then $\inf_{m \in A \cap {\mathcal M}}H(m^w|\lambda_0) > 0$. \end{lemma} \begin{proof} Since ${\mathcal M}$ is closed and the sub-level sets $\{m: H(m^w|\lambda_0) \le c\}$ are compact by Lemma \ref{le:Mclosed}, $A \cap {\mathcal M}$ is closed and there exists $m_* \in A \cap {\mathcal M}$ such that $H(m_*^w | \lambda_0) = \inf_{m \in A \cap {\mathcal M}} H(m^w|\lambda_0)$. But $A \cap {\mathcal M}(\lambda_0) = \emptyset$ implies $m_*^w \neq \lambda_0$, and thus $H(m_*^w | \lambda_0) > 0$. \end{proof} \subsection{The conditional limit theorem and entry games} \label{se:conditional-limit} In this section we first prove Theorem \ref{th:intro-conditional-limit}. Then, to illustrate the tractability of the assumptions, we apply the theorem to an example from the class of entry games discussed in Section \ref{subs-intro-mot}. \begin{proof}[Proof of Theorem \ref{th:intro-conditional-limit}] Fix a measurable set $A \subset {\mathcal P}( {\mathcal W} \times {\mathcal X})$. Since $I(A) < \infty$, the closedness of ${\mathcal M}$ and the compactness of the sub-level sets $\{m \in {\mathcal P}({\mathcal W}\times{\mathcal X}) : H(m^w | \lambda_0) \le c\}$ established in Lemma \ref{le:Mclosed} imply that the set $S(A)$ of minimizers in \eqref{def:S(A)} is non-empty and compact.
Next, use the lower bound of Theorem \ref{th:LDP} to get \begin{align*} \liminf_{n\rightarrow\infty}\frac{1}{n}\log{\mathbb P}\left(\mu_n \in A\right) &\ge -\inf\left\{H(\lambda | \lambda_0) : \lambda \in {\mathcal P}({\mathcal W}), \ {\mathcal M}(\lambda) \subset A^\circ\right\} \\ &= -I(A) \\ &= -\inf\left\{H(\lambda | \lambda_0) : \lambda \in {\mathcal P}({\mathcal W}), \ {\mathcal M}(\lambda) \cap \overline{A} \neq \emptyset\right\} \\ &= -\inf\left\{H(m^w | \lambda_0) : m \in {\mathcal M} \cap \overline{A}\right\}, \end{align*} where we have used the assumption \eqref{def:conditional-assumption} in the third line. Because the set \[ A_\epsilon := \{m \in A : d(m,S(A)) \ge \epsilon\} \] is closed, the upper bound of Theorem \ref{th:LDP} yields \begin{align*} \limsup_{n\rightarrow\infty}&\frac{1}{n}\log{\mathbb P}\left(\left. d(\mu_n,S(A)) \ge \epsilon \right| \mu_n \in A\right) \\ &\le \limsup_{n\rightarrow\infty}\frac{1}{n}\log{\mathbb P}\left( \mu_n \in A_\epsilon\right) - \liminf_{n\rightarrow\infty}\frac{1}{n}\log{\mathbb P}\left(\mu_n \in A\right) \\ &\le \inf_{m \in \overline{A} \cap {\mathcal M}}H(m^w | \lambda_0) - \inf_{m \in A_\epsilon \cap {\mathcal M}}H(m^w|\lambda_0) \\ & =: C. \end{align*} Clearly $C \le 0$, as $\overline{A} \supset A_\epsilon$. If $C=0$, then there exists $m \in A_\epsilon \cap {\mathcal M}$ such that $H(m^w|\lambda_0) = \inf_{m \in \overline{A} \cap {\mathcal M}}H(m^w | \lambda_0)$. But this implies $m \in S(A)$, which contradicts the fact that $S(A)$ and $A_\epsilon$ are disjoint. Thus $C < 0$, and the proof of \eqref{decay} is complete.
Finally, if ${\mathcal M}(\lambda) = \{M[\lambda]\}$ is a singleton for every $\lambda \ll \lambda_0$, then the identity $(M[\lambda])^w = \lambda$ implies \begin{align*} \inf\left\{H(\lambda | \lambda_0) : \lambda \ll \lambda_0, \ {\mathcal M}(\lambda) \subset A^\circ\right\} &= \inf\left\{H(\lambda | \lambda_0) : \lambda \ll \lambda_0, \ M[\lambda] \in A^\circ\right\} \\ &= \inf\left\{H(m^w | \lambda_0) : m \in A^\circ \cap {\mathcal M}, \ m^w \ll \lambda_0\right\} \\ &= \inf\left\{H(m^w | \lambda_0) : m \in A^\circ \cap {\mathcal M} \right\}. \end{align*} Similarly, we already noted above that \[ I(A) = \inf\left\{H(\lambda | \lambda_0) : \lambda \ll \lambda_0, \ {\mathcal M}(\lambda) \cap \overline{A} \neq \emptyset\right\} = \inf\left\{H(m^w | \lambda_0) : m \in {\mathcal M} \cap \overline{A}\right\}. \] \end{proof} Let us now consider a simple entry game, specified as follows. There are two types and two actions, with ${\mathcal W} = \{1,2\}$ and ${\mathcal X} = \{0,1\}$. There are no constraints in this model, so ${\mathcal C}(w)={\mathcal X}$ for all $w \in {\mathcal W}$, and the objective function is given by \[ F(m,w,x) = -x(-3m^x\{1\} + w), \] for $m \in {\mathcal P}({\mathcal W} \times {\mathcal X})$, $w \in {\mathcal W}$, and $x \in {\mathcal X}$. Think of each agent as facing a fixed payoff $w$ from entering the market, i.e., choosing $x=1$. This payoff is offset by a loss of $3 m^x\{1\}$, which increases with the fraction of agents entering the market. If the net payoff is negative, the agent will choose $x=0$ and not enter the market. For $q \in [0,1]$, let $\lambda_q = q\delta_{2} + (1-q)\delta_{1}$ denote the type distribution in which the fraction of type-$2$ agents is $q$. Of course, $\{\lambda_q : q \in [0,1]\}$ exhausts all of ${\mathcal P}({\mathcal W})$.
To apply our conditional limit theorem, we first characterize all possible Cournot-Nash equilibria: \begin{proposition} \label{pr:entrygame} For the entry game described above, for each $q \in [0,1]$ the Cournot-Nash equilibrium is unique. That is, ${\mathcal M}(\lambda_q) = \{m_q\}$, where $m_q \in {\mathcal P} ({\mathcal W} \times {\mathcal X})$ is defined by \begin{align*} \left(\begin{matrix} m_q\{(1,0)\} & m_q\{(1,1)\} \\ m_q\{(2,0)\} & m_q\{(2,1)\} \end{matrix}\right) = \begin{cases} \left(\begin{matrix} 2/3 & 1/3-q \\ 0 & q \end{matrix}\right) &\text{if } q \le 1/3,\vspace{0.4em} \\ \left(\begin{matrix} 1-q & 0 \\ 0 & q \end{matrix}\right) &\text{if } 1/3 < q < 2/3,\vspace{0.4em} \\ \left(\begin{matrix} 1-q & 0 \\ q-2/3 & 2/3 \end{matrix}\right) &\text{if } q \ge 2/3. \end{cases} \end{align*} \end{proposition} \begin{proof} We know ${\mathcal M}(\lambda_q) \neq \emptyset$ for each $q \in [0,1]$, thanks to Proposition \ref{pr:existence}. Fix $q \in [0,1]$ and $m \in {\mathcal M}(\lambda_q)$, and abbreviate $p=m^x\{1\}$. We will show that $m=m_q$. Note first that \begin{align*} \arg\min_{x \in \{0,1\}}F(m,w,x) = \begin{cases} \{1\} &\text{if } w > 3p, \\ \{0\} &\text{if } w < 3p, \\ \{0,1\} &\text{if } w=3p. \end{cases} \end{align*} Next, there are three cases to check. First, if $3p \notin \{1,2\}$, then $m\{(w,1)\} = 1_{\{w > 3p\}}$ for each $w$. Thus, \[ p = m^x\{1\} = q1_{\{2 > 3p\}} + (1-q)1_{\{1 > 3p\}} = \begin{cases} 0 &\text{if } p > 2/3 \\ q &\text{if } 2/3 > p > 1/3 \\ 1 &\text{if } p < 1/3. \end{cases} \] This can only hold if $p=q$ and $1/3 < p < 2/3$. For the second case, suppose $p=1/3$. Then all type-$2$ agents enter since $2 > 3p$; that is, $m\{(2,1)\} = q$ and $m\{(2,0)\}=0$. Therefore, we have \[ 1/3 = p = m\{(1,1)\} + m\{(2,1)\} = m\{(1,1)\} + q, \] which implies $m\{(1,1)\} = 1/3-q$, which only makes sense for $q \le 1/3$. For the final case, suppose $p=2/3$. 
Then type-$1$ agents do not enter since $1 < 3p$; that is, $m\{(1,0)\} = 1-q$ and $m\{(1,1)\}=0$. This implies \[ 2/3 = p = m\{(1,1)\} + m\{(2,1)\} = m\{(2,1)\}. \] Since $q = m\{(2,0)\} + m\{(2,1)\} = m\{(2,0)\} + 2/3$, we must have $q \ge 2/3$. \end{proof} Similarly, in the $n$-player game, we can argue that there exists a Nash equilibrium with type vector $\vec{w}$, for every fixed $\vec{w}=(w_1,\ldots,w_n) \in {\mathcal W}^n$. To construct an example, there are three cases, depending again on the fraction $q$ of $(w_1,\ldots,w_n)$ which equal $2$. In each case, we construct one example (though there may be more) of an equilibrium, recalling that agent $i$ \emph{enters the market} if $x_i=1$: \begin{enumerate} \item Suppose $q \in [1/3,2/3]$. All type-$2$ agents enter, while none of the type-$1$ agents enter. \item Suppose $q < 1/3$. All type-$2$ agents enter. Let $k$ be the greatest integer less than or equal to $n(1/3 - q)$. Then $k$ of the type-$1$ agents enter, and the rest do not. \item Suppose $q > 2/3$. All type-$1$ agents choose not to enter. Let $k$ be the greatest integer less than or equal to $2n/3$. Then $k$ of the type-$2$ agents enter, and the rest do not. \end{enumerate} Note that we have constructed multiple equilibria in the latter two cases, although they share a common type-action distribution. Now that we have computed the Cournot-Nash equilibria and are confident that $n$-player equilibria exist, we are ready to apply Theorem \ref{th:intro-conditional-limit}. Let $\mu_n$ denote the empirical type-action distribution as usual, where the types are i.i.d.\ samples from the distribution $\lambda_{2/3}$. That is, each of the $n$ agents is independently assigned type $2$ with probability $2/3$ and type $1$ with probability $1/3$. By Theorem \ref{th:limit} and Proposition \ref{pr:entrygame}, we know that $\mu_n$ converges a.s. to the unique element $m_{2/3}$ of ${\mathcal M}(\lambda_{2/3})$.
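The case analysis of Proposition \ref{pr:entrygame} can also be checked numerically. The sketch below is only an illustrative aid, not part of the formal development; the dictionary encoding of $m_q$ (keys are type-action pairs $(w,x)$) and the tolerance are our own conventions:

```python
def F(p, w, x):
    # entry-game objective F(m, w, x) = -x(-3 m^x{1} + w), with p = m^x{1}
    return -x * (-3 * p + w)

def m_q(q):
    """Candidate equilibrium from the three-case analysis; keys are (w, x)."""
    if q <= 1/3:
        return {(1, 0): 2/3, (1, 1): 1/3 - q, (2, 0): 0.0, (2, 1): q}
    if q < 2/3:
        return {(1, 0): 1 - q, (1, 1): 0.0, (2, 0): 0.0, (2, 1): q}
    return {(1, 0): 1 - q, (1, 1): 0.0, (2, 0): q - 2/3, (2, 1): 2/3}

def is_cournot_nash(m, tol=1e-9):
    # Cournot-Nash check: every action carrying positive mass minimizes F(m, w, .)
    p = m[(1, 1)] + m[(2, 1)]  # m^x{1}: the fraction of agents entering
    return all(
        F(p, w, x) <= min(F(p, w, 0), F(p, w, 1)) + tol
        for (w, x), mass in m.items() if mass > tol
    )
```

For every $q$ in a grid covering all three cases, `is_cournot_nash(m_q(q))` returns `True`, while a profile in which all agents enter fails the check.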
Let us show using Theorem \ref{th:intro-conditional-limit} that, for $r \in (1/3,2/3)$, if we condition on the rare event $\{\mu_n^x\{1\} \le r\}$, then $\mu_n \rightarrow m_r$. More precisely, the conditional law of $\mu_n$ converges to the point mass at $m_r$. Intuitively, this rare event means that no more than a fraction $r$ of the agents enter the market, and the most likely way for this to happen (asymptotically) is for precisely a fraction $r$ of the agents to enter. Let $1/3 < r < 2/3$, and consider the set \[ A = \left\{m \in {\mathcal P}({\mathcal W}\times{\mathcal X}) : m^x\{1\} \le r\right\}. \] We then compute \begin{align*} \inf_{m \in A \cap {\mathcal M}}H(m^w|\lambda_{2/3}) &= \inf\left\{H(m_q^w|\lambda_{2/3}) : q\in [0,1], \ m_q \in A\right\} \\ &= \inf\left\{H(\lambda_q|\lambda_{2/3}) : q \in [0,r]\right\} \\ &= \inf\left\{H(\lambda_q|\lambda_{2/3}) : q \in [0,r)\right\} \\ &= \inf_{m \in A^\circ \cap {\mathcal M}}H(m^w|\lambda_{2/3}), \end{align*} where the second-to-last equality holds by continuity of $q \mapsto H(\lambda_q|\lambda_{2/3})$ at $q=r$. Moreover, the unique minimizer on the left-hand side is $m_r$, since $q \mapsto H(\lambda_q|\lambda_{2/3})$ is strictly decreasing for $0 < q < 2/3$. This shows that the assumption \eqref{def:conditional-assumption-original} of Theorem \ref{th:intro-conditional-limit} holds, and also that the set $S(A)$ therein is simply the singleton $\{m_r\}$. \begin{remark} Interestingly, a simple variant of the above game yields a tractable example in which the Cournot-Nash equilibria are not unique and yet Theorem \ref{th:intro-conditional-limit} can be applied. For instance, suppose ${\mathcal W}=\{-1,1\}$ and ${\mathcal X}=\{0,1\}$, with \[ F(m,w,x) = -x(2m^x\{1\} + w). \] Note that there is no minus sign in front of $2m^x\{1\}$, so it is not really an entry game; agents are now \emph{encouraged} to participate (i.e., choose $x=1$) when other agents participate.
Setting $m^1_q(dw,dx)=\lambda_q(dw)\delta_1(dx)$, it can be checked that $m^1_q \in {\mathcal M}(\lambda_q)$ for each $q \in [0,1]$; that is, it is always an equilibrium for every agent to participate. However, there are two (resp. three) equilibria for $q=1/2$ (resp. $q < 1/2$). Nonetheless, if $\mu_n$ is the empirical type-action distribution when types are sampled from $\lambda_p$, where $p > 1/2$, we can find a limit theorem for the law of $\mu_n$ conditioned on the event $\{\mu_n \in A\}$ where $A = \{m : m\{(-1,1)\} \le 1-r\}$, for $r \in (p,1)$ close to $p$. Indeed, we can check that the assumption \eqref{def:conditional-assumption} holds, and the unique element of $S(A)$ is $m_r$. There is even a critical value of $r$ for which \eqref{def:conditional-assumption} holds, but $S(A)$ is no longer a singleton. We omit the details of these calculations, with the remark mainly serving to illustrate the need for the generality of Theorem \ref{th:intro-conditional-limit}. \end{remark} \subsection{Proof of probabilistic bounds on the price of anarchy} \label{se:PoA-proofs} To prove Proposition \ref{pr:PoA}, we rework the notation of Section \ref{se:PoA} as we did in Section \ref{subs-coupling}. Recall the notation $\mathrm{Gr}({\mathcal C}) = \{(w,x) : x \in {\mathcal C}(w)\}$. For $(\lambda,u)$ belonging to the domain ${\mathcal D}$ defined in \eqref{def-dg}, define ${\mathcal A}(\lambda,u) \subset {\mathcal P}({\mathcal W}\times{\mathcal X})$ by \[ {\mathcal A}(\lambda,u) := \left\{m \in \mathcal{E}_u({\mathcal W}\times{\mathcal X}) : m(\mathrm{Gr}({\mathcal C}))=1, \ m^w = \lambda\right\}, \] Interpret ${\mathcal A}(\lambda,u)$ as the set of admissible type-action distributions. Recall the notation \[ V(m) = \int_{{\mathcal W}\times{\mathcal X}} F(m,w,x)m(dw,dx), \] for $m \in {\mathcal P}({\mathcal W}\times{\mathcal X})$. 
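For intuition, the functional $V$ can be evaluated directly on the two-type entry game analyzed above. The snippet below only illustrates the notation (note that the entry game does not satisfy the positivity $V>0$ assumed later for the price of anarchy, so no PoA is computed here); the dictionary encoding of $m$ is our own convention:

```python
def V(m, F):
    # V(m) = integral of F(m, w, x) dm -- a finite sum over (w, x) pairs here
    return sum(F(m, w, x) * mass for (w, x), mass in m.items())

def F_entry(m, w, x):
    # entry-game objective F(m, w, x) = -x(-3 m^x{1} + w)
    p = m[(1, 1)] + m[(2, 1)]  # m^x{1}, the fraction of agents entering
    return -x * (-3 * p + w)

# equilibrium at q = 1/2: the type-2 half of the population enters
m_half = {(1, 0): 0.5, (1, 1): 0.0, (2, 0): 0.0, (2, 1): 0.5}
# V(m_half, F_entry) evaluates to -0.25
```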
The price of anarchy is now defined as the function $\mathfrak{P} : {\mathcal D} \rightarrow [1,\infty]$ given by \[ \mathfrak{P}(\lambda,u) = \left. \sup_{m \in {\mathcal N}(\lambda,0,u)}V(m) \right\slash \inf_{m \in {\mathcal A}(\lambda,u)}V(m). \] \begin{lemma} \label{le:PoAusc} Suppose $V> 0$ pointwise. Then $\mathfrak{P}$ is upper semicontinuous on ${\mathcal D}$. \end{lemma} \begin{proof} First, $(\lambda,u) \mapsto \sup_{m \in {\mathcal N}(\lambda,0,u)}V(m)$ is upper semicontinuous because $V$ is continuous and because, by Proposition \ref{pr:Nash-UHC}, ${\mathcal N}$ is upper hemicontinuous and has compact values (see \cite[Lemma 17.30]{aliprantisborder}). It suffices (since $V > 0$) to show that the denominator $\inf_{m \in {\mathcal A}(\lambda,u)}V(m)$ is lower semicontinuous and strictly positive. For both of these claims it suffices to show that the set-valued map ${\mathcal A}$ is upper hemicontinuous and has compact values (again by \cite[Lemma 17.30]{aliprantisborder}). To prove this, we again use the sequential characterization of upper hemicontinuity. Fix a convergent sequence $(\lambda_n,u_n) \rightarrow (\lambda,u)$ in ${\mathcal D}$, and let $m_n \in {\mathcal A}(\lambda_n,u_n)$ for each $n$. We must show that there exist $m \in {\mathcal A}(\lambda,u)$ and a subsequence $\{m_{n_k}\}$ which converges to $m$. Because $m_n^w = \lambda_n$, the sequence $\{m_n^w\} \subset {\mathcal P}({\mathcal W})$ is tight. Because ${\mathcal X}$ is compact, the sequence $\{m_n\} \subset {\mathcal P}({\mathcal W}\times{\mathcal X})$ is tight and thus precompact by Prokhorov's theorem. Let $m$ denote any limit point, and abuse notation by assuming $m_n \rightarrow m$. It remains to show that $m$ belongs to ${\mathcal A}(\lambda,u)$. As $\mathrm{Gr}({\mathcal C})$ is closed, the Portmanteau theorem implies $m(\mathrm{Gr}({\mathcal C})) = \lim_n m_n(\mathrm{Gr}({\mathcal C}))=1$. 
Clearly \[ m^w = \lim_n m^w_n = \lim_n \lambda_n = \lambda, \] where the limits are in distribution. Finally, to check that $m \in \mathcal{E}_{u}({\mathcal W}\times{\mathcal X})$, there are two cases. If $u=0$, then $\mathcal{E}_{u}({\mathcal W}\times{\mathcal X})={\mathcal P}({\mathcal W}\times{\mathcal X})$ and there is nothing to prove. Otherwise, $u_n = u$ for all sufficiently large $n$, which implies $m_n$ and thus $m$ belong to the closed set $\mathcal{E}_{u}({\mathcal W}\times{\mathcal X})$. \end{proof} \begin{proof}[Proof of Proposition \ref{pr:PoA}] The notation of Proposition \ref{pr:PoA} translates as follows to the present notation, for $n \ge 1$ and $w_1,\ldots,w_n \in {\mathcal W}$: \[ \mathrm{PoA}_n(w_1,\ldots,w_n) = \mathfrak{P}\left(\frac{1}{n}\sum_{i=1}^n\delta_{w_i},\frac{1}{n}\right), \quad\quad \text{ and } \quad\quad \mathrm{PoA}(\lambda) = \mathfrak{P}(\lambda,0). \] When $(W_i)_{i=1}^\infty$ are i.i.d.\ with distribution $\lambda_0$, we know that $\frac{1}{n}\sum_{i=1}^n\delta_{W_i} \rightarrow \lambda_0$ almost surely. By Lemma \ref{le:PoAusc}, \begin{align*} \limsup_{n\rightarrow\infty}\mathrm{PoA}_n(W_1,\ldots,W_n) \le \mathrm{PoA}(\lambda_0), \ a.s. \end{align*} To prove the second claim, we apply Sanov's theorem. Consider the set \begin{align*} B = \left\{(\lambda,u) \in {\mathcal D}: \mathfrak{P}(\lambda,u) \ge r\right\}. \end{align*} Because $\mathfrak{P}$ is upper semicontinuous on ${\mathcal D}$, the set $B$ is closed in ${\mathcal D}$ and thus in ${\mathcal P}({\mathcal W}) \times [0,1]$. By Sanov's theorem, $\left(\frac{1}{n}\sum_{i=1}^n\delta_{W_i},\frac{1}{n}\right)$ satisfies an LDP on ${\mathcal P}({\mathcal W}) \times [0,1]$ with good rate function \[ J(\lambda,u) = \begin{cases} H(\lambda|\lambda_0) &\text{if } \lambda \in {\mathcal P}({\mathcal W}), \ u =0, \\ \infty &\text{otherwise}. 
\end{cases} \] Thus, we have \begin{align*} \limsup_{n\rightarrow\infty}\frac{1}{n}\log{\mathbb P}(\mathrm{PoA}_n(W_1,\ldots,W_n) \ge r) &= \limsup_{n\rightarrow\infty}\frac{1}{n}\log{\mathbb P}\left(\left(\frac{1}{n}\sum_{i=1}^n\delta_{W_i},\frac{1}{n}\right) \in B\right) \\ &\le -\inf_{(\lambda,u) \in B}J(\lambda,u) \\ &= -\inf\left\{H(\lambda|\lambda_0) : \lambda \in {\mathcal P}({\mathcal W}), \ \mathrm{PoA}(\lambda) \ge r\right\}. \end{align*} To prove the lower bound, simply apply the lower bound of Sanov's theorem to the set $B^c$. \end{proof}
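As a numerical sanity check on the exponential decay rates used throughout this section, the rate $H(\lambda_{1/2}\,|\,\lambda_{2/3})$ can be compared against an exact binomial tail probability. The sketch below is illustrative only; it relies on the classical Chernoff upper bound and method-of-types lower bound for a two-point type space (the choice $n=200$ is arbitrary):

```python
import math

def rel_entropy(a: float, p: float) -> float:
    # H(lambda_a | lambda_p) for a two-point type space, natural logarithms
    return a * math.log(a / p) + (1 - a) * math.log((1 - a) / (1 - p))

def binom_lower_tail(n: int, p: float, k: int) -> float:
    # exact P(Bin(n, p) <= k)
    return sum(math.comb(n, j) * p**j * (1 - p) ** (n - j) for j in range(k + 1))

n, a, p = 200, 1 / 2, 2 / 3
H = rel_entropy(a, p)                   # the rate H(lambda_{1/2} | lambda_{2/3})
P = binom_lower_tail(n, p, int(n * a))  # P(empirical fraction of type 2 <= 1/2)
# Chernoff / method-of-types sandwich: exp(-nH)/(n+1) <= P <= exp(-nH)
```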
\section{Conclusion and Discussion} \label{sec:discussion} Through the use of garlic routing and ring signatures, complete communication and transaction anonymity is achieved. A garlic routing network such as I2P can ensure that no usage, bid, ask, or identifiable data is leaked from the system. By using ring signatures, transactions cannot be traced, but it can still be proven that a bid or an ask has been responded to and that a transaction has taken place. The design we have proposed anonymizes the whole chain of transactions, both on a network communication layer and on a distributed ledger transaction layer. As for the DSO, it receives the same information from the smart meter as in a non-transactive smart grid (i.e., amount of energy produced and consumed). In particular, since price policies are recorded on the ledger (which the smart meters may read), each prosumer's smart meter may calculate and send the prosumer's monthly bill to the DSO, without revealing the prosumer's energy consumption or production. The DSO still gets aggregate information regarding load on the grid, but cannot identify individual users and their energy prosumption. \subsection{Communication Anonymity} \label{comm} The anonymous communication layer is the infrastructure upon which all other anonymity services in PETra are built. The goal of communication anonymity is to allow smart meters and users to exchange transactions and bids without revealing their IP addresses or other information which can be used to identify them. In almost all cases, at the very least the Internet Service Provider (ISP) has information about the users' communications and identities. The goal of this section is to maximize the anonymity to such an extent that not even ISPs can identify users.
Existing protocols for low-latency communication anonymity include onion routing~\cite{reed1998anonymous} or the similar garlic routing \cite{Liu2014EmpiricalMA}, STAC \cite{7986314}, and the decentralized Matrix protocol.\footnote{Open-federated protocol for instant messaging, Voice-over-IP and IoT communications (\url{https://matrix.org/}).} In this section, we present a brief survey of onion and garlic routing, especially with respect to application in PETra. \subsubsection{Onion and Garlic Routing} Onion routing is based on messages being encapsulated in multiple layers of encryption and sent through a number of nodes in a network, called onion routers. It is anonymous because no single node, other than the sender and the receiver, can know both the origin and the recipient of the message. In Figure \ref{fig:garlicrouting}, an example shows how smart meter A routes an encrypted message $m$, with final destination G, through a network of onion routers. A encrypts the message (for example, a confirmation of an energy purchase) a certain number of times, along with the addresses of members of the onion network. Each subsequent node, selected by the sender and specified in the different layers of encryption, decrypts one layer using its private key, revealing the next node to which the encrypted message is forwarded. Finally, the second-to-last node reveals the address of smart meter G and sends the still-encrypted message to G, who can decrypt it safely. No single node in the network, except for the sender, knows how many times the package is re-routed, and no node except for the sender and recipient can know its position in the chain of routing. Another technique for communication anonymity is called \textit{garlic routing}. It differs from onion routing in that multiple messages are encrypted together to counter tracing attacks.
\begin{figure} \centering \includegraphics[width=\columnwidth]{garlicrouting.png} \caption{The principle behind onion and garlic routing. The difference is that in onion routing, \textit{m} is a single message, whereas in garlic routing, \textit{m} is multiple messages packaged together.}\label{fig:garlicrouting} \end{figure} In practice, the deployment of onion routing (or a variant thereof called garlic routing) in the Invisible Internet Project (I2P) works as follows. Each node in the network operates an I2P router, allowing for anonymous communications. A router is distinct from an endpoint application in that it is not a secret who runs a router. By contrast, an application is the destination for the communications and is anonymous. This disconnect allows for a higher degree of anonymity. To communicate between routers, unidirectional tunnels are set up. The tunnels use layered encryption, meaning that each router in the tunnel can only decrypt one layer. In order to transmit a message between two routers, the sender needs to know where to direct the message, i.e., what the address of the entry point of the receiver is. The I2P protocol differs from regular network communications in that, for communications to take place between routers, each router needs to know a structure called the \textit{RouterInfo}. It contains the 2048-bit ElGamal encryption key, a signing key, a certificate, a timestamp, a text field, a signature of the bundle, and the contact addresses where a router can be reached. The RouterInfo is given along with something called a \textit{LeaseSet}, containing a group of tunnel entry points for a particular client destination, the time when the tunnel will expire, the destination itself, an encryption key for end-to-end encryption of garlic messages, a revocation key, and a signature of the LeaseSet data. The LeaseSet identifies an application on the I2P network.
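The record just described can be sketched as a plain data type. The field names below paraphrase the description above and are not I2P's actual wire format; deriving the storage key from the SHA-256 hash of the destination mirrors how the netDb indexes these records:

```python
import hashlib
from dataclasses import dataclass
from typing import List

@dataclass
class LeaseSet:
    """Paraphrase of the LeaseSet fields described above (not a wire format)."""
    tunnel_entry_points: List[bytes]  # entry points for a client destination
    expiry: float                     # when the tunnel will expire
    destination: bytes                # the application's destination
    encryption_key: bytes             # end-to-end encryption of garlic messages
    revocation_key: bytes
    signature: bytes                  # signature of the LeaseSet data

def netdb_key(destination: bytes) -> bytes:
    # netDb stores RouterInfo/LeaseSet records under a key derived from
    # the SHA-256 hash of the destination
    return hashlib.sha256(destination).digest()
```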
The I2P protocol ensures the anonymity of its users because of the disconnect between the identities of the applications communicating over the network and the identities of the routers. This metadata (the RouterInfo and LeaseSet structures) is stored in a distributed directory called the netDb, based on the Kademlia P2P protocol, which describes a provably consistent and fault-tolerant distributed hash table~\cite{kademlia}. The RouterInfo and LeaseSet data are stored on the netDb under a key derived from the SHA-256 hash of the destination. \subsubsection{Threat Vectors in Onion and Garlic Routing} \label{commthreat} Murdoch and Danezis \cite{1425067} show, both theoretically and experimentally, that low-cost traffic analysis of the Tor network is possible. Traffic analyses are based on tracking the sizes of data packages forwarded between computers. For example, if computer A sends a package of exactly 42 bytes to computer J, which then sends a package of exactly 42 bytes to B, it can easily be deduced that A sent a package of unknown content to computer B. This is possible because of the distribution of metadata to all routers in the Tor network~\cite{Hopper:2010:MAN:1698750.1698753}. In what is called a timing analysis attack, an attacker tries to find correlations in the timing of messages moving through the network to gain information about user identities and their communications. Analyses have shown that these types of attacks can be very effective over a wide range of network parameters when specific defences are not employed~\cite{Levine2004,4797313}. To counter timing analysis attacks, the I2P network bundles multiple messages together (the principle of garlic routing), making such analysis more difficult~\cite{Liu2014EmpiricalMA}. Schimmer showed that I2P's bandwidth-opportunistic peer-selection and peer-profiling algorithm prioritizes performance over anonymity~\cite{peerProfiling:2009}.
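The 42-byte example above can be made concrete with a toy size-correlation sketch; the log format and the one-second forwarding window are illustrative assumptions:

```python
from collections import defaultdict

def correlate_by_size(log):
    """Toy traffic analysis: pair a received data package with a forwarded
    package of identical size observed shortly afterwards. `log` holds
    (time, src, dst, size) tuples; returns inferred (origin, target, size)."""
    by_size = defaultdict(list)
    for t, src, dst, size in log:
        by_size[size].append((t, src, dst))
    inferred = []
    for size, pkts in by_size.items():
        pkts.sort()
        for (t1, s1, d1), (t2, s2, d2) in zip(pkts, pkts[1:]):
            if d1 == s2 and 0 < t2 - t1 < 1.0:  # forwarded within one second
                inferred.append((s1, d2, size))
    return inferred

# A -> J -> B relays a 42-byte package; size alone exposes the end-to-end pair
log = [(0.00, "A", "J", 42), (0.05, "J", "B", 42), (0.10, "C", "J", 99)]
# correlate_by_size(log) infers that A communicated with B
```

Bundling multiple messages into one garlic package breaks exactly this size equality, which is why I2P's bundling counters such analyses.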
Herrmann and Grothoff \cite{Herrmann2011} exposed a potential weakness in anonymous HTTP hosting over the I2P network. Arguably the only practical attack against the I2P network targeted its directory, the netDb, and was carried out by Egger \textit{et al.} \cite{Egger2013}. An improvement of the protocol, aimed at countering Egger \textit{et al.}'s attack, was suggested by Timpanaro \textit{et al.} \cite{Timpanaro2015}. Another potential weakness of onion routing and garlic routing is that, even though the actual message is encrypted and the destinations are unknown, there is always a trace of the communication at the ISP level. The fact that a connection took place will be logged and is openly visible at the very least to the ISP. This attack can be countered in PETra by each node transacting and participating in the mixing network, regardless of the need for trading at that time. Trading of ``zero'' assets can help obfuscate the non-zero assets of others. Another liability in onion and garlic routing can be that the legitimacy of the sender cannot be immediately verified. Sender verification can be achieved by the techniques described in Section \ref{trans}. \subsubsection{Proposed Solution} Given the survey of the previous paragraphs, performing P2P energy trading in transactive grids over a garlic routing network protocol such as I2P provides a high degree of communication anonymity for users. Only part of the energy trading in PETra will be anonymized by garlic routing, namely the internet connections. PETra is no different from other network communications in that aspect. The particularity that the trading is local, and thus that IP addresses are geographically close, is a potential weakness that can be countered by creating ``fake'' IP addresses. To apply garlic routing to transactive microgrids, the smart meters, prosumers, and DSO can act as onion routers, and distribution of available routers is done over the netDb.
In practice, this service can be built on the free and open-source I2P software with private Directory Authorities. In this case, anonymous communication identifiers in bids and asks correspond to public keys that identify I2P applications. \section{Concluding Remarks} \section{Introduction} Transactive energy models have been proposed as a set of market-based mechanisms for balancing the demand and generation of energy in communities \cite{kok2016society,cox2013structured,melton2013gridwise}. In this approach, customers on the same feeder (i.e., sharing a power line link) can operate in an open market, trading and exchanging generated energy locally. Distribution System Operators (DSOs) can be the custodians of this market, while still meeting the net demand \cite{7462854}. Blockchains have recently emerged as a foundation for enabling the transactional services in microgrids. For example, the Brooklyn Microgrid (\url{brooklynmicrogrid.com}) is a peer-to-peer market for locally generated renewable energy, which was developed by LO3 Energy as a pilot project. Similarly, RWE and Grid Singularity have developed blockchain-based solutions for incentivizing neighbors to sell excess energy to the grid and for payments for electric car charging. However, those solutions address neither the requirements for an off-blockchain communication network nor the requirements for privacy. Specifically, while blockchains provide the necessary ledger services, we still need a communication network for sending control commands from the DSO to the prosumers as well as for initiating the trade matching mechanisms. Additionally, this communication network and the blockchain itself must preserve the privacy of the prosumers. Energy usage patterns (actual or predicted) are sensitive, personally identifiable data. Legal requirements and security considerations make it mandatory to provide a mechanism to hide the identities and transaction patterns of trade partners.
Additionally, solutions must also satisfy security and safety requirements, which often conflict with privacy goals. For example, to prevent a prosumer from destabilizing the system through careless or malicious energy trading, a transactive grid must check all of the prosumer's transactions. In a decentralized system, these checks require disseminating information, which could be used to infer the prosumer's future energy consumption. In \cite{Laszka17}, we introduced {\it Privacy-preserving Energy Transactions (PETra)}, our distributed-ledger based solution that (1) enables trading energy futures in a secure and verifiable manner, (2) preserves prosumer privacy, and (3) enables distribution system operators to regulate trading and enforce the safety rules. In this paper, we extend the communication and transaction anonymity mechanisms. The key contributions of this paper are (a) a survey of the key concepts required for implementing anonymity across these two dimensions, (b) a discussion of the threats that must be considered when implementing the anonymization mechanisms, and (c) a discussion of implementing the anonymization extensions in PETra. The outline of this paper is as follows. We first present an overview of the PETra workflow described in \cite{Laszka17} in Section \ref{sec:petra}. We then discuss the communication anonymity extensions in Section \ref{comm} and transaction anonymity in Section \ref{trans}. Section \ref{commthreat} discusses the threat vectors for the communication anonymity approach. Section \ref{transthreat} describes the transaction anonymity threats. Finally, we provide concluding remarks in Section~\ref{sec:discussion}. \section*{Acknowledgment} This work was funded in part by a grant from Siemens Corporation, CT.
\bibliographystyle{SIGCHI-Reference-Format} \balance \section{Analysis of State of Art} \section{Privacy-preserving Energy Transactions} \label{sec:petra} \begin{figure*}[ht] \centering \includegraphics[width=\textwidth]{petra.pdf} \caption{The sequence of activities in PETra. The red arrows show off-blockchain communication and the blue arrows show transactions on the blockchain. Producers and consumers request the DSO to allocate the energy production and consumption assets on the blockchain. The consumers receive asynchronous notifications about offers from producers. Thereafter, they can finalize the transaction. The energy transfer happens at a later time and is also recorded in the chain. Financial transactions are also done on the blockchain. These financial transactions are later tallied with the energy transactions.}\label{fig:petrasequence} \end{figure*} There is a systematic pattern emerging in the domain of the Internet of Things (IoT) which requires transactional capabilities. Examples include transactive ride-share systems \cite{yuan2016towards}, transactive health-care systems \cite{azaria2016medrec}, and the transactive energy systems described earlier in this section. As shown in Figure \ref{fig:components}, there are three separate layers to this transaction. The first layer is the distributed ledger, which is responsible for keeping a log of all events of interest; in the energy domain these events are trades, energy transfers, and financial transactions. In the health-care domain, the events record the times of access to the health-care data. The data itself is not stored in the blockchain due to size and privacy concerns. Rather, the data is stored in the second layer, which can be implemented by either a cloud or a decentralized storage service like Storj\footnote{https://storj.io/}. The third layer is the IoT layer, which is responsible for sensing and control. This third layer is typically implemented using messaging middlewares like MQTT, DDS, etc.
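The ledger/storage split described above can be sketched minimally: bulk data goes to a storage service and only its content hash is logged on the ledger. Class names are illustrative stand-ins, not an actual PETra or Storj interface.

```python
import hashlib
import json

# Minimal sketch of the three-layer pattern: bulk data is kept off-chain in
# a storage service; the ledger records only metadata (a content hash).
class StorageService:
    def __init__(self):
        self._blobs = {}

    def put(self, data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()   # content-addressed key
        self._blobs[key] = data
        return key

    def get(self, key: str) -> bytes:
        return self._blobs[key]

class Ledger:
    """Append-only event log standing in for the blockchain layer."""
    def __init__(self):
        self.events = []

    def record(self, event: dict):
        self.events.append(json.dumps(event, sort_keys=True))

storage, ledger = StorageService(), Ledger()
reading = b'{"meter": 7, "kWh": 3.2}'   # bulk IoT-layer data, kept off-chain
ledger.record({"type": "energy_transfer", "data_hash": storage.put(reading)})
assert storage.get(json.loads(ledger.events[0])["data_hash"]) == reading
```

The ledger entry commits to the data (via the hash) without exposing or storing it, which is the point of the layering.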
\definecolor{CustomBlue}{RGB}{88, 154, 214} \definecolor{CustomOrange}{RGB}{238, 124, 33} \begin{figure} \centering \resizebox {\columnwidth} {!} { \centering \begin{tikzpicture}[x=1.5cm, y=1.8cm, font=\small, Component/.style={fill=white, draw, align=center, rounded corners=0.1cm, drop shadow={shadow xshift=0.05cm, shadow yshift=-0.05cm, fill=black}}, Connection/.style={<->, >=stealth, shorten <=0.15cm, shorten >=0.15cm, very thick, CustomOrange}] \foreach \pos/\name in {0/pros1, 0.8/pros2, 1.6/pros3} { \node [Component] (\name) at (\pos - 4, \pos) {\texttt{IoT Device, geth}}; } \fill [fill=black!10] (90:1.5) -- (200:1.5) -- (340:1.5) -- (90:1.5); \foreach \pos in {90, 200, 340} { \node [Component] at (\pos:1.5) {Ethereum\\miner (\texttt{geth})}; } \node [Component, dotted] (contract) at (0, 0) {Smart contract\\(\texttt{Blockchain})}; \draw [Connection, bend left=0] (pros1) to (pros2); \draw [Connection, bend left=0] (pros2) to (pros3); \draw [Connection, bend left=-60] (pros3) to (pros1); \draw [Connection, CustomBlue, bend right=0] (pros1) to (contract); \draw [Connection, CustomBlue, bend right=0, , shorten <=0.5cm] (pros2) to (contract); \draw [Connection, CustomBlue, bend right=0] (pros3) to (contract); \node [Component, minimum width=10.25cm, minimum height=0.7cm] at (-1.39, -1.6) {Decentralized storage service}; \draw [Connection] (-3.2, -1.35) -- (-3.2, -0.32) node [midway, right,black] {bulk data}; \draw [Connection, CustomBlue, ->] (0, -1.35) -- (0, -0.27) node [midway, right, align=left, black, yshift={-0.1cm}] {meta\\[-0.2em]data}; \end{tikzpicture} } \vspace{-0.1in} \caption{Components of IoT Blockchain pattern. Typically the IoT devices communicate with each other over a messaging middleware (red arrows). They also communicate with blockchain and smart contracts (blue arrows) through clients, for example the Ethereum geth client. The miners are entities responsible for validating the events/transactions. 
} \vspace{-0.1in} \label{fig:components} \end{figure} The key aspect of this pattern is the tight integration between the distributed messaging patterns used by the actors and the blockchain-based communication network used for transferring transactional information. For example, in the transactive energy domain, PETra, described in \cite{Laszka17}, involves interactions between the distribution system operator, the prosumers, and a smart contract. The smart contract is responsible for keeping track of the energy and financial assets, enabling prosumers to post trade offers and exchange assets when another prosumer decides to accept. The PETra algorithm uses quantised energy asset tokens\footnote{There are two kinds of energy tokens: Energy Production Assets and Energy Consumption Assets. Token attributes include the power and the time interval for which the token is valid.} that represent a non-negative amount of power to be produced or consumed (for example, measured in watts), the first time interval in which the energy is to be produced (or consumed), and the last such time interval (Figure \ref{fig:petrasequence} describes the full sequence of activities). These assets are withdrawn and submitted to anonymized accounts on behalf of prosumers by the distribution system operator, which is also responsible for validating that the specific prosumer has the energy capacity for feasible trades given the assets. Once the DSO posts the assets on the blockchain, prosumers can trade among themselves using these quantised assets and anonymized addresses, hiding their identities from each other. The DSO is also responsible for releasing and managing the transfer of currency, represented by financial assets, each of which is simply an unsigned integer value denominated in a fiat currency. In this workflow, there are both on- and off-blockchain communications between the DSO and the prosumers.
The off-blockchain communication is required to request the transfer of assets. On-blockchain communication occurs via filters that track the posting of assets. Similarly, prosumers also communicate with each other via the blockchain to indicate when an offer has been posted and when a transaction has cleared. While all transactive IoT systems require communication and transactional anonymity, there are domain-specific requirements and challenges that must be considered. These characteristics and requirements guide the anonymization architecture that we describe in the rest of this paper. Specifically, these characteristics are as follows: (1) transactions in a microgrid must clear in bounded time and any errors must be detected\footnote{Energy trades that have an impact on real-time control (e.g., selling energy production for the near future) must be permanently recorded on the ledger \emph{in time}, since grid control signals cannot be delayed.}; (2) typically, there is a dedicated communication channel available in a microgrid that connects the prosumers and the distribution system operator; (3) the set of participants in the network is fixed and known ahead of time, so a discovery procedure is typically not required; and (4) even though all the transactions are anonymous, there is still a need to maintain the association of properties such as maximum generation capacity\footnote{To prevent destabilization of the grid, a producer should not be allowed to bid more than its maximum generation capacity.} and reputation scores with prosumers as they participate in trades, to maximize the likelihood of success while reducing the likelihood of jeopardizing the stability of the microgrid\footnote{A prosumer with a low reputation score might have a history of not fulfilling its energy transfer obligations.}. In the next two sections, we describe the mechanisms for implementing communication and transaction anonymity in this workflow.
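The quantised energy asset tokens described above can be sketched as a small record type. The field names are hypothetical illustrations, not PETra's actual on-chain encoding (which is defined in \cite{Laszka17}).

```python
from dataclasses import dataclass

# Illustrative encoding of a quantised energy asset token; field names are
# hypothetical, not PETra's actual on-chain representation.
@dataclass(frozen=True)
class EnergyAsset:
    power_w: int         # non-negative amount of power, e.g. in watts
    first_interval: int  # first time interval of production/consumption
    last_interval: int   # last time interval of production/consumption
    kind: str            # "production" or "consumption"

    def __post_init__(self):
        # reject infeasible tokens at construction time
        if self.power_w < 0 or self.last_interval < self.first_interval:
            raise ValueError("infeasible asset")

offer = EnergyAsset(power_w=500, first_interval=10, last_interval=13,
                    kind="production")
print(offer.power_w)  # 500
```

Making the record immutable (`frozen=True`) mirrors the fact that, once posted by the DSO, a token's attributes cannot be altered by prosumers.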
\section{Related Work} \section{Communication and Transactive Infrastructure} \input{transactions.tex} \subsection{Transaction Anonymity} \label{trans} Communication anonymity is necessary but not sufficient for anonymous trading, as the cryptographic objectives of authentication and legitimacy are not fulfilled. We suggest using cryptographic techniques from distributed ledgers, \textit{blockchains}, and cryptocurrencies, the most widely adopted of which is the Bitcoin blockchain and currency. It allows for very simple digital cash spending but has serious privacy and anonymity flaws~\cite{Barber2012,Reid2013,apostolaki2017}. Additionally, Biryukov and Pustogarov, 2015, show that using Bitcoin over the Tor network opens an entirely new attack surface~\cite{biryukov2015}. Solutions to the tracing and identification problems identified by these researchers have been proposed and implemented in alternative cryptocurrency protocols: mixing using ring signatures and zero-knowledge proofs~\cite{miers2013zerocoin,cryptonote}. \subsubsection{Mixing Through Ring Signatures} The CryptoNote protocol prevents tracing assets back to their original owners by mixing together multiple incoming transactions and multiple outgoing transactions. This service thus hides the connections between the prosumers and the anonymous addresses. Mixing requires the ability to create new wallets at will (something that is generally recommended upon any cryptocurrency transfer) and the existence of a sufficient number of participants in the network. These protocols enable participants to mix assets with each other, thereby eliminating the need for a trusted third party. Monero is an example of a cryptocurrency that provides built-in mixing services by implementing the CryptoNote protocol~\cite{cryptoeprint:2015:1098}. There are, however, alternative implementations of mixing protocols such as CoinShuffle~\cite{ruffing2014coinshuffle} or Xim~\cite{bissias2014sybil}.
The CryptoNote protocol achieves two objectives: \begin{compactenum} \item Untraceable transactions - \textit{for each incoming transaction all possible senders are equiprobable}. \item Unlinkable transactions - \textit{for any two outgoing transactions it is impossible to prove they were sent to the same person}~\cite{cryptonote}. \end{compactenum} Group signatures were first introduced by Chaum and van Heyst, 1991~\cite{Chaum1991}, and then built upon by Rivest \textit{et al.}, 2001~\cite{Rivest2001}. The basis for anonymity in the CryptoNote protocol, however, is a slightly modified version of the \textit{traceable ring signature} algorithm by Fujisaki and Suzuki, 2007~\cite{Fujisaki2007}. The algorithm allows a member of a group to send a transaction, without the use of a central authority, in such a way that a receiver cannot learn any more about the sender than that the transaction came from a group member. Unlinkability is achieved by \textit{one-time ring signatures}, making use of four algorithms: \textbf{GEN, SIG, VER, LNK}. The general principle of the unconditional unlinkability is that a sender generates a public key and a key image with \textbf{GEN} and produces a one-time ring signature with \textbf{SIG}, using the key pair and the key image. \textbf{SIG} makes use of a non-interactive zero-knowledge proof, which the verifier(s) then use to check the signature in \textbf{VER}. If the signature is valid, the verifier checks whether the key image has been used in previous transactions, which would mean that the same secret key was used to produce multiple signatures. She does so by running the algorithm \textbf{LNK}. Assuming that the mapping from the secret key to the key image is a one-way injection, it is certain that: \textbf{A.} the signer is not identifiable by way of recovering the secret key from the key image; \textbf{B.} the signer cannot create another key image with the same secret key.
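The \textbf{LNK} step can be illustrated with a toy sketch: a key image that is a deterministic one-way function of the secret key, so double-signing is detected without revealing the key. Real CryptoNote key images are elliptic-curve points, not hashes; the names here are illustrative only.

```python
import hashlib
import secrets

# Toy illustration of the LNK step: the key image is a deterministic one-way
# function of the secret key, so signing twice with the same key is detected
# without revealing the key. Real CryptoNote key images are curve points.
def key_image(secret: bytes) -> str:
    return hashlib.sha256(b"key-image" + secret).hexdigest()

seen_images = set()

def lnk(image: str) -> bool:
    """Return True if this key image was already used (a linked double-sign)."""
    if image in seen_images:
        return True
    seen_images.add(image)
    return False

sk = secrets.token_bytes(32)
assert lnk(key_image(sk)) is False   # first signature is accepted
assert lnk(key_image(sk)) is True    # reuse of the same secret key is linked
```

Properties \textbf{A} and \textbf{B} correspond to the one-wayness and injectivity of the key-image map, respectively.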
Additionally, if the receiver and sender have randomly generated, unique, and new addresses, the Diffie-Hellman protocol can be used to generate a new pair of public-private keys. This is how untraceability of public keys is achieved. The sender should generate ephemeral keys for each transfer, enabling only the receiver to recover the corresponding private key. As an illustrative example, the schematic diagram in Figure \ref{fig:ringsigs} shows households A, B, and C signing a transaction, since they are part of the same ring. A ring would, in reality, contain many more households, not necessarily from the same microgrid. Let us assume that A is the true origin of the transaction. When E receives the transaction, the only thing that E can know with certainty is that one of A, B, or C initiated it. To increase the transaction anonymity further, a second, third, or $n$-th round of ring signatures can be algorithmically imposed upon the network. With each round of signing parties, the group of potential origins grows linearly. Notably, the ring signature algorithm by Fujisaki and Suzuki~\cite{Fujisaki2007} has been published in a peer-reviewed paper. This compares favorably to many cryptocurrency protocols, which are simply published as white papers without any formal review process~\cite{cryptonote}. In practice, a transaction using the mixing service should be performed in the following way to ensure anonymity: It is also possible for household A to prove that it paid prosumer B for energy, either by disclosing to B the random number used in the generation of the one-time public destination key used in that transaction, or by using any other kind of zero-knowledge protocol to prove she knows the random number. The ring signatures would also allow the auditing of transactions by, for example, the DSO. This would be achieved by prosumer B giving the tracking key or truncated address to the DSO, who would then be able to link all incoming transactions to B.
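The Diffie-Hellman derivation of one-time destination keys can be sketched over a toy multiplicative group. CryptoNote's actual stealth addresses use elliptic curves and a pair of long-term keys; the parameters and names below are illustrative assumptions and deliberately insecure.

```python
import hashlib

# Toy sketch of Diffie-Hellman-derived one-time (stealth) destination keys.
p, g = 2**127 - 1, 3          # toy group (a Mersenne prime); NOT secure
b = 123456789                 # receiver's long-term secret key
B = pow(g, b, p)              # receiver's published address key
r = 987654321                 # sender's fresh randomness for this payment
R = pow(g, r, p)              # published alongside the transaction

def h(x: int) -> int:
    """Hash the shared secret down to an exponent."""
    return int.from_bytes(hashlib.sha256(str(x).encode()).digest(), "big") % (p - 1)

shared_sender = pow(B, r, p)     # sender computes B^r = g^(b*r)
shared_receiver = pow(R, b, p)   # receiver computes R^b = g^(r*b)
assert shared_sender == shared_receiver

one_time_pub = pow(g, h(shared_sender), p)   # fresh key the sender pays to
one_time_priv = h(shared_receiver)           # recoverable only by the receiver
assert pow(g, one_time_priv, p) == one_time_pub
```

Each payment lands on a fresh key, and only the holder of the long-term secret $b$ can derive the matching private key, which is the untraceability property described above.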
\subsubsection{Mixing through Zero-Knowledge Proofs} Zero-knowledge proofs (ZKPs) are ways for a person to prove knowledge of some specific fact to a verifier without actually having to disclose the knowledge itself. Blum \textit{et al.} provided non-interactive ZKPs (NIZKs) in 1988 \cite{Blum:1988:NZA:62212.62222}, in which the prover and verifier do not have to interact or communicate directly with each other. The Zerocoin protocol \cite{miers2013zerocoin} outlines how NIZKs can achieve the untraceability objective of the previous section while ensuring that no double-spending is allowed.\footnote{Each coin in the protocol is identified uniquely by a serial number.} Zerocoin is a protocol for the decentralized mixing of coins, so that they cannot be traced, or \textit{tainted}. However, senders and destinations can still be identified~\cite{miers2013zerocoin}. Zerocash~\cite{Sasson:2014:ZDA:2650286.2650810} extends the NIZK functionality to allow for anonymous transactions, anonymous balances and coins, improved transaction performance, and the sending of assets to a receiver's fixed address without action required from the receiver. Zerocash makes use of a more efficient version of the NIZK used in Zerocoin, called ZK \textit{Succinct Non-interactive ARguments of Knowledge} (zk-SNARKs). The Zerocash scheme could be carried out using a simple messaging board, but this would not be safe in practice, since the information might be manipulated or the owner of the board might collude. Therefore, an immutable, decentralized data store, governed by the consensus of its peers, is required to assure the secure transmission of information. The blockchain provides such a structure.
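The serial-number mechanism mentioned in the footnote can be illustrated with a toy spend ledger. This sketches only the double-spend check; in Zerocoin the spender additionally proves, in zero knowledge, that the revealed serial belongs to \emph{some} minted coin without revealing which one, and that proof is omitted here.

```python
# Toy double-spend prevention via unique serial numbers: spending reveals
# the coin's serial, and verifiers accept it only if the serial is unseen.
spent_serials = set()

def spend(serial: str) -> bool:
    """Accept a spend only if this coin's serial number is unseen."""
    if serial in spent_serials:
        return False              # double spend rejected
    spent_serials.add(serial)
    return True

assert spend("coin-0017") is True
assert spend("coin-0017") is False   # second spend of the same serial fails
```

Publishing the serial set on an immutable, consensus-governed ledger is precisely what makes the check trustworthy, as the paragraph above argues.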
\subsubsection{Threats and weaknesses in Ring Signature- and Zero-Knowledge Proof-schemes} \label{transthreat} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{ringsigs.png} \caption{Visualization of untraceability in ring signatures in smart meter-based energy trading and the potential deductions of the origin of the transaction by a single household in the chain of signatures.}\label{fig:ringsigs} \end{figure} When applying either ring signatures or zk-SNARKs to PETra, potential weaknesses and attacks need to be considered. A potential threat to ring signatures arises when a large share of the unspent transactions is owned by an adversary, or when an insufficient number of signatures is included in a ring. When a prosumer A wishes to select a group of signatures to sign her transaction, it is likely that she will select many of the transactions from the adversary. Assuming the adversary spends his outputs without \textit{mixing}\footnote{The number of other signatures used in the ring.}, A's transaction is exposed as well~\cite{monero2014}. Recent research also shows that up to 65\% of Monero transactions are trivially traceable using one attack. It also exposes two more attacks, which have been amended in the latest versions of the protocol, lowering the share of traceable transactions to 20\%~\cite{monero2014,DBLP:journals/corr/MillerMLN17}. One of the main weaknesses of the Zerocash-based protocol is that for each private transaction a costly zk-SNARK needs to be computed. This is not a threat to anonymity, but a practical reason why it might be difficult to run the scheme over a congested public blockchain. In \cite{Sasson:2014:ZDA:2650286.2650810}, experiments show an average time of 3 min to create the zk-SNARK for a private transaction, while verifying it takes only 8.5 ms. Another large practical drawback of Zerocash is the lack of the programmability and functionality that would be required in PETra.
Zhang \textit{et al.} solve some of the practical flaws and amend security issues~\cite{zhangz}. \subsubsection{Proposed Solution to Achieve Transaction Anonymity} Applying the CryptoNote protocol to PETra could be done by performing both energy transactions and monetary transactions using ring signatures. They would be securely logged, tamper-proof, and anonymous through the usage of a blockchain. Even though some security flaws exist, as seen in the previous paragraph, the risk of identification, linking, or tracing of transactions can be minimized by imposing a high minimum number of signatures per transaction. We also propose to connect to the global transaction networks to augment the number of transactions and thereby limit the chance of deduction by elimination. Applying ZKPs to PETra would require that a smart meter can encrypt and sign a transaction and transmit a proof of it to the blockchain, and thus to the receiver of the payment, without having to reveal the actual amount of energy or cost incurred to anyone but the receiver. This is achieved by the Zerocash protocol, which is implemented as a fork of the Bitcoin blockchain. Neither the receiver nor any other participant can gain information about the transactions sent over the blockchain. To provide full functionality for PETra, the Zerocash protocol would need to be implemented for the transmission of bids and asks as well as for the already existing monetary transactions. The second implementation would need to be modified to transmit bids and asks and link them to the payments ledger. A more straightforward but bloated structure would be to create transactions without monetary value to post a bid or an ask and then directly reference the final bid-/ask-transaction in the payment-transaction. \subsection{Transactions}
\section{Introduction} Let ${\mathbb F_{q^2}}$ be the finite field with $q^2$ elements, where $q$ is a power of a prime $p$, and let ${\mathcal X}$ be an ${\mathbb F_{q^2}}$-rational curve, that is a projective, absolutely irreducible, non-singular algebraic curve defined over ${\mathbb F_{q^2}}$. ${\mathcal X}$ is called ${\mathbb F_{q^2}}$-maximal if the number ${\mathcal X}({\mathbb F_{q^2}})$ of its ${\mathbb F_{q^2}}$-rational points attains the Hasse-Weil upper bound $$ q^2+1+2gq, $$ where $g$ is the genus of ${\mathcal X}$. Maximal curves have interesting properties and have also been investigated for their applications in Coding Theory. Surveys on maximal curves are found in \cite{FT,G,G2,GS,V,V2} and \cite[Chapt. 10]{HKT}. The most important example of an ${\mathbb F_{q^2}}$-maximal curve is the Hermitian curve ${\mathcal H}_q$, defined as any ${\mathbb F_{q^2}}$-rational curve projectively equivalent to the plane curve with Fermat equation $$ X^{q+1}+Y^{q+1}+T^{q+1}=0. $$ The norm-trace equation $$ Y^{q+1}=X^qT+XT^q $$ gives another model of ${\mathcal H}_q$, ${\mathbb F_{q^2}}$-equivalent to the Fermat model, see \cite[Eq. (2.15)]{GSX}. For fixed $q$, ${\mathcal H}_q$ has the largest possible genus $g({\mathcal H}_q)=q(q-1)/2$ that an ${\mathbb F_{q^2}}$-maximal curve can have. The automorphism group ${\rm Aut}({\mathcal H}_q)$ is isomorphic to ${\rm PGU}(3,q)$, the group of projectivities of ${\rm PG}(2,q^2)$ commuting with the unitary polarity associated with ${\mathcal H}_q$. By a result commonly attributed to Serre, see \cite[Prop. 6]{L}, any ${\mathbb F_{q^2}}$-rational curve which is ${\mathbb F_{q^2}}$-covered by an ${\mathbb F_{q^2}}$-maximal curve is also ${\mathbb F_{q^2}}$-maximal. 
In particular, ${\mathbb F_{q^2}}$-maximal curves are given by the Galois ${\mathbb F_{q^2}}$-subcovers of an ${\mathbb F_{q^2}}$-maximal curve ${\mathcal X}$, that is by the quotient curves ${\mathcal X}/G$ over a finite ${\mathbb F_{q^2}}$-automorphism group $G\leq{\rm Aut}({\mathcal X})$. Most of the known maximal curves are Galois subcovers of the Hermitian curve, many of which were studied in \cite{CKT,CKT2,GSX}. Garcia and Stichtenoth \cite{GS2} discovered the first example of a maximal curve not Galois covered by the Hermitian curve, namely the curve $Y^7=X^9-X$, maximal over ${\mathbb F}_{3^6}$. It is a special case of the curve $\mathcal X_\ell$ with equation \begin{equation}\label{abq} Y^{\ell^2-\ell+1}=X^{\ell^2}-X, \end{equation} which is ${\mathbb F}_{\ell^{6}}$-maximal for any $\ell \ge 2$. In \cite{GK}, Giulietti and Korchm\'aros showed that the Galois covering of $\mathcal X_\ell $ given by $$ \begin{cases} Z^{\ell^2-\ell+1}=Y^{\ell^2}-Y \\ Y^{\ell+1}=X^\ell+X \end{cases} $$ is also ${\mathbb F}_{\ell^6}$-maximal, for any prime power $\ell$. Remarkably, it is not covered by ${\mathcal H}_{\ell^3}$ for any $\ell>2$. This curve, nowadays referred to as the GK curve, was generalized in \cite{GGS} by Garcia, G\"uneri, and Stichtenoth to the curve $$ {\mathcal C}_{\ell^n}: \begin{cases} Z^\frac{\ell^n+1}{\ell+1}=Y^{\ell^2}-Y \\ X^\ell+X=Y^{\ell+1} \end{cases}, $$ which is ${\mathbb F}_{\ell^{2n}}$-maximal for any prime power $\ell$ and odd $n\geq3$. For $\ell=2$ and $n=3$, ${\mathcal C}_8$ is Galois covered by ${\mathcal H}_8$, see \cite{GK}. Duursma and Mak proved in \cite{DM} that, if $\ell\geq3$, then ${\mathcal C}_{\ell^n}$ is not Galois covered by ${\mathcal H}_{\ell^n}$. In Section $3$, we show that the same holds in the remaining open cases. \begin{theorem}\label{result1} For $\ell=2$ and $n\geq5$, ${\mathcal C}_{2^n}$ is not a Galois subcover of the Hermitian curve ${\mathcal H}_{2^n}$. \end{theorem} Duursma and Mak \cite[Thm.
1.2]{DM} showed that if ${\mathcal C}_{2^n}$ is the quotient curve ${\mathcal H}_{2^n}/G$ for $G$ a subgroup of ${\rm Aut}({\mathcal H}_{2^n})$, then $G$ has order $(2^n+1)/3$ and acts semiregularly on ${\mathcal H}_{2^n}$. We investigate all subgroups $G$ of ${\rm Aut}({\mathcal H}_{2^n})$ satisfying these conditions, relying also on classical results by Mitchell \cite{M} and Hartley \cite{H} (see Section $2$) which provide a classification of the maximal subgroups of ${\rm PSU}(3,q)$ in terms of their order and their action on ${\mathcal H}_q$. For any candidate subgroup $G$, we find another subgroup $\bar G$ of ${\rm Aut}({\mathcal H}_{2^n})$ containing $G$ as a normal subgroup, and such that $\bar G/G$ has an action on ${\mathcal H}_{2^n}/G$ not compatible with the action of any automorphism group of ${\mathcal C}_{2^n}$. In Section $4$ we consider the curve $\mathcal X_\ell$ with equation \eqref{abq}. In \cite{GS2} it was shown that ${\mathcal X}_3$ is not a Galois subcover of ${\mathcal H}_{3^6}$, while ${\mathcal X}_2$ is a quotient of ${\mathcal H}_{2^6}$, as noted in \cite{GT}. Garcia and Stichtenoth \cite[Remark 4]{GS2} raised the same question for any $\ell>3$. The case where $\ell$ is a prime was settled by Mak \cite{Mak}. Here we provide an answer for any prime power $\ell>3$. \begin{theorem}\label{result2} For $\ell>3$, ${\mathcal X}_\ell$ is not a Galois subcover of the Hermitian curve ${\mathcal H}_{\ell^6}$. \end{theorem} In the proof of Theorem \ref{result2} we bound the possible degree of a Galois covering ${\mathcal H}_{\ell^6}\rightarrow{\mathcal X}_\ell$ by means of \cite[Thm. 1.3]{DM}, and then we exclude the three possible values given by the bound. To this aim, we use again the classification results of Mitchell \cite{M} and Hartley \cite{H}, other group-theoretic arguments, and the Riemann-Hurwitz formula (see \cite[Chapt. 3]{Sti}) applied to the Galois coverings ${\mathcal H}_{\ell^6}\rightarrow{\mathcal H}_{\ell^6}/G$.
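As a quick consistency check of the maximality notion recalled above, note that for the Hermitian curve itself the Hasse-Weil upper bound evaluates exactly to its well-known number of rational points:

```latex
% With g({\mathcal H}_q)=q(q-1)/2, the Hasse-Weil upper bound gives
\[
  q^2 + 1 + 2gq \;=\; q^2 + 1 + 2\cdot\frac{q(q-1)}{2}\cdot q \;=\; q^3 + 1,
\]
% which is precisely the number of ${\mathbb F}_{q^2}$-rational points of
% ${\mathcal H}_q$, so the Hermitian curve attains the bound.
```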
\section{Preliminary results} \begin{theorem}\label{MH} {\rm (Mitchell \cite{M}, Hartley \cite{H})} Let $q=p^k$, $d=\gcd(q+1,3)$. The following is the list of maximal subgroups of ${\rm PSU}(3,q)$ up to conjugacy: \begin{itemize} \item[i)] the stabilizer of an ${\mathbb F}_{q^2}$-rational point of ${\mathcal H}_q$, of order $q^3(q^2-1)/d$; \item[ii)] the stabilizer of an ${\mathbb F}_{q^2}$-rational point off ${\mathcal H}_q$ and its polar line (which is a $(q+1)$-secant to ${\mathcal H}_q$), of order $q(q-1)(q+1)^2/d$; \item[iii)] the stabilizer of a self-polar triangle, of order $6(q+1)^2/d$; \item[iv)] the normalizer of a cyclic Singer group stabilizing a triangle in ${\rm PG}(2,q^6)\setminus{\rm PG}(2,q^2)$, of order $3(q^2-q+1)/d$; {\rm for $p>2$:} \item[v)] ${\rm PGL}(2,q)$ preserving a conic; \item[vi)] ${\rm PSU}(3,p^m)$ with $m\mid k$ and $k/m$ odd; \item[vii)] subgroups containing ${\rm PSU}(3,p^m)$ as a normal subgroup of index $3$, when $m\mid k$, $k/m$ is odd, and $3$ divides both $k/m$ and $q+1$; \item[viii)] the Hessian groups of order $216$ when $9\mid(q+1)$, and of order $72$ and $36$ when $3\mid(q+1)$; \item[ix)] ${\rm PSL}(2,7)$ when $p=7$ or $-7$ is not a square in $\mathbb{F}_q$; \item[x)] the alternating group $\mathbf{A}_6$ when either $p=3$ and $k$ is even, or $5$ is a square in $\mathbb{F}_q$ but $\mathbb{F}_q$ contains no cube root of unity; \item[xi)] the symmetric group $\mathbf{S}_6$ when $p=5$ and $k$ is odd; \item[xii)] the alternating group $\mathbf{A}_7$ when $p=5$ and $k$ is odd; {\rm for $p=2$:} \item[xiii)] ${\rm PSU}(3,2^m)$ with $m\mid k$ and $k/m$ an odd prime; \item[xiv)] subgroups containing ${\rm PSU}(3,2^m)$ as a normal subgroup of index $3$, when $k=3m$ with $m$ odd; \item[xv)] a group of order $36$ when $k=1$. \end{itemize} \end{theorem} The previous theorem will be used for a case analysis of the possible unitary groups $G$ such that the quotient curve ${\mathcal H}/G$ realizes the Galois covering.
While dealing with case \textit{ii)}, we will invoke a result by Dickson \cite{D} which classifies all subgroups of the projective special linear group ${\rm PSL}(2,q)$ acting on ${\rm PG}(1,q)$. We remark that ${\rm PSL}(2,q)$ has index $\gcd(q-1,2)$ in the group ${\rm PGL}(2,q)$ of all projectivities of ${\rm PG}(1,q)$. From Dickson's result the classification of subgroups of ${\rm PGL}(2,q)$ is easily obtained. \begin{theorem}{\rm (\cite[Chapt. XII, Par. 260]{D}; see also \cite[Thm. A.8]{HKT})}\label{Di} Let $q=p^k$, $d=\gcd(q-1,2)$. The following is the complete list of subgroups of ${\rm PGL}(2,q)$ up to conjugacy: \begin{itemize} \item[i)] the cyclic group of order $h$ with $h\mid(q\pm1)$; \item[ii)] the elementary abelian $p$-group of order $p^f$ with $f\leq k$; \item[iii)] the dihedral group of order $2h$ with $h\mid(q\pm1)$; \item[iv)] the alternating group $\mathbf{A}_4$ for $p>2$, or $p=2$ and $k$ even; \item[v)] the symmetric group $\mathbf{S}_4$ for $16\mid(q^2-1)$; \item[vi)] the alternating group $\mathbf{A}_5$ for $p=5$ or $5\mid(q^2-1)$; \item[vii)] the semidirect product of an elementary abelian $p$-group of order $p^f$ by a cyclic group of order $h$, with $f\leq k$ and $h\mid(q-1)$; \item[viii)] ${\rm PSL}(2,p^f)$ for $f\mid k$; \item[ix)] ${\rm PGL}(2,p^f)$ for $f\mid k$. \end{itemize} \end{theorem} \section{${\mathcal C}_{2^n}$ is not Galois-covered by ${\mathcal H}_{2^n}$, for any $n\geq5$} The aim of this section is to prove Theorem \ref{result1}. Throughout the section, let $n\geq5$ be odd and $q=2^n$. We rely on the following result by Duursma and Mak. \begin{lemma}\label{gradofisso}{\rm (\cite[Thm. 1.2]{DM})} Let $n\geq5$ be odd. If ${\mathcal C}_{2^n}\cong{\mathcal H}_{2^n}/G$ for some $G\leq{\rm Aut}({\mathcal H}_{2^n})$, then $G$ has order $(2^n+1)/3$ and acts semiregularly on ${\mathcal H}_{2^n}$.
\end{lemma} By Lemma \ref{gradofisso} only subgroups $G$ of ${\rm Aut}({\mathcal H}_{q})$ of order $(q+1)/3$ acting semiregularly on ${\mathcal H}_{q}$ need to be considered. We will also use the fact that the whole automorphism group ${\rm Aut}({\mathcal C}_{2^n})$ fixes a point. \begin{theorem}{\rm (\cite[Thm. 3.10]{GOS},\cite[Prop. 2.10]{GMP})}\label{GGSpuntofisso} For $n\geq5$, the group ${\rm Aut}({\mathcal C}_{2^n})$ has a unique fixed point $P_\infty$ on ${\mathcal C}_{2^n}$, and $P_\infty$ is ${\mathbb F_{q^2}}$-rational. \end{theorem} \begin{corollary}\label{cor1} Let $G\leq{\rm Aut}({\mathcal H}_q)$. If there exists $\bar G\leq{\rm Aut}({\mathcal H}_q)$ such that $G$ is a proper normal subgroup of $\bar G$ and $\bar G$ acts semiregularly on ${\mathcal H}_q$, then $\bar G/G\leq{\rm Aut}({\mathcal H}_q/G)$ acts semiregularly on ${\mathcal H}_q/G$; hence, by Theorem \ref{GGSpuntofisso}, ${\mathcal C}_{2^n}\not\cong{\mathcal H}_q/G$. \end{corollary} The following well-known result about finite groups will be used (see for example \cite[Ex. 16, page 232]{Mac}). \begin{lemma}\label{indice} Let $H$ be a finite group and $K$ a subgroup of $H$ such that the index $[H:K]$ is the smallest prime number dividing the order of $H$. Then $K$ is normal in $H$. \end{lemma} \begin{proposition}\label{punto-retta} Suppose $G\leq{\rm PSU}(3,q)$ and a maximal subgroup of ${\rm PSU}(3,q)$ containing $G$ satisfies case $\mathit{ii)}$ in Theorem \ref{MH}. Then ${\mathcal C}_{2^n}\not\cong{\mathcal H}_q/G$. \end{proposition} \begin{proof} Let $\ell$ be the $(q+1)$-secant to ${\mathcal H}_q$ stabilized by $G$; we show that $G$ is isomorphic to a cyclic subgroup of ${\rm PSL}(2,q^2)$. ${\rm PGU}(3,q)$ is transitive on the points of ${\rm PG}(2,q^2)\setminus{\mathcal H}_q$ (see for example \cite{HP}), hence also on the $(q+1)$-secant lines; therefore we can assume that $\ell$ is the line at infinity $T=0$.
The action on $\ell$ of an element $g\in G$ is given by $(X,Y,0)\mapsto A_g\cdot(X,Y,0)$, where the matrix $A_g=(a_{ij})_{1\leq i,j\leq3}$ satisfies $a_{31}=a_{32}=0$, and we can assume $a_{33}=1$. By direct computation, it is easy to check that the map $$ \varphi:G\rightarrow{\rm PGL}(2,q^2), \qquad \varphi(g): \begin{pmatrix} X \\ Y \end{pmatrix} \mapsto \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} \cdot \begin{pmatrix} X \\ Y \end{pmatrix}, $$ is a well-defined group homomorphism. Moreover, $\varphi$ is injective, since no non-trivial element of $G$ can fix the points of ${\mathcal H}_q\cap\ell$, by the semiregularity of $G$. Hence $G$ is isomorphic to a subgroup of ${\rm PGL}(2,q^2)\cong{\rm PSL}(2,q^2)$. Since $|G|$ is odd, Theorem \ref{Di} implies that $G$ is cyclic. Let $g\in G$ be an element of prime order $d>3$; such a $d$ exists, since it is easy to check that $2^n+1$ is a power of $3$ only when $n=1$ or $n=3$. If we denote by $d^h$ the highest power of $d$ dividing $(q+1)/3$, then $d^{2h}$ is the highest power of $d$ dividing $$|{\rm PGU}(3,q)|=q^3(q^3+1)(q^2-1)=q^3(q+1)^2(q-1)(q^2-q+1).$$ Let ${\mathcal H}_q$ have equation $X^{q+1}+Y^{q+1}+T^{q+1}=0$; then $$ D=\left\{ (X,Y,T)\mapsto(\lambda X,\mu Y,T) \mid \lambda^{d^h}=\mu^{d^h}=1 \right\} $$ is a Sylow $d$-subgroup of ${\rm PGU}(3,q)$, and by Sylow's theorems we can assume up to conjugation that $g\in D$, so the fixed points of the subgroup $\langle g\rangle$ generated by $g$ are the fundamental points $P_i$, $i=1,2,3$. Since $G$ is abelian, $\langle g\rangle$ is normal in $G$, hence $G$ acts on the set $T=\left\{P_1,P_2,P_3\right\}$ of fixed points of $\langle g\rangle$. In fact, for all $k\in G$ and $\bar g\in\langle g\rangle$, $$k(P_i)=k(\bar g(P_i))=\tilde g(k(P_i))$$ for some $\tilde g\in\langle g\rangle$; that is, $k(P_i)$ is fixed by $\tilde g$, hence $k(P_i)$ is a fundamental point $P_j$.
As $|G|$ is odd, we have by the orbit stabilizer theorem that the orbits of any $k\in G$ on $T$ have length $1$ or $3$. If $k$ has a single orbit on $T$, then the matrix representing $k$ is $$ k=\begin{pmatrix} 0 & 0 & \lambda \\ \mu & 0 & 0 \\ 0 & \rho & 0 \end{pmatrix} \quad{\rm or}\quad k=\begin{pmatrix} 0 & \lambda & 0 \\ 0 & 0 & \mu \\ \rho & 0 & 0 \end{pmatrix},\quad{\rm in\; both\; cases}\quad k^3=\begin{pmatrix} \lambda\mu\rho & 0 & 0 \\ 0 & \lambda\mu\rho & 0 \\ 0 & 0 & \lambda\mu\rho \end{pmatrix}, $$ that is, $k^3=1$, hence $G$ cannot be generated by $k$. Therefore a generator $\alpha$ of $G$ has the form $$ \alpha:(X,Y,T)\mapsto(\theta X,\eta Y,T), $$ with $\theta^\frac{q+1}{3}=\eta^\frac{q+1}{3}=1$. If $\theta$ had order $m<(q+1)/3$, then $\alpha^m$ would fix the points of ${\mathcal H}_q\cap(Y=0)$, against the semiregularity of $G$. Then $\theta$ is a primitive $(q+1)/3$-th root of unity, and the same holds for $\eta$, so that \begin{equation}\label{alpha} \alpha=\alpha_\theta:(X,Y,T)\mapsto(\theta X, \theta^i Y,T), \end{equation} where $\theta$ is a primitive $(q+1)/3$-th root of unity, and $i$ is co-prime with $(q+1)/3$. Let $\zeta\in{\mathbb F_{q^2}}$ satisfy $\zeta^3=\theta$, and let $\bar G$ be the group generated by $\alpha_\zeta$, as defined in \eqref{alpha}. Any non-trivial element of $\bar G$ fixes only the fundamental points, hence $\bar G$ is semiregular on ${\mathcal H}_q$; moreover, $G$ is normal in $\bar G$ of index $3$. Then Corollary \ref{cor1} yields the thesis. \qed \end{proof} \begin{proposition}\label{triangolo} Suppose $G\leq{\rm PSU}(3,q)$ and a maximal subgroup of ${\rm PSU}(3,q)$ containing $G$ satisfies case $\mathit{iii)}$ in Theorem \ref{MH}. Then ${\mathcal C}_{2^n}\not\cong{\mathcal H}_q/G$. \end{proposition} \begin{proof} Up to conjugation, the self-polar triangle stabilized by $G$ is the fundamental triangle $T=\left\{P_1,P_2,P_3\right\}$. Let $N$ be the subgroup of $G$ stabilizing $T$ pointwise.
Then $N$ is normal in $G$, since $g^{-1}ng(P_i)=g^{-1}n(g(P_i))=g^{-1}(g(P_i))=P_i$, where $n\in N$, $g\in G$. The group $G/N$ acts faithfully on $T$, hence either $G=N$ or $[G:N]=3$. If $G=N$, then $G$ fixes one fundamental point $P_i$, which is off ${\mathcal H}_q$, and the polar line of $P_i$ passing through the other fundamental points; therefore Proposition \ref{punto-retta} yields the thesis. Now suppose $[G:N]=3$. As in the proof of Proposition \ref{punto-retta}, $N$ is isomorphic to a subgroup of ${\rm PSL}(2,q^2)$; since $|N|$ is odd, we have by Theorem \ref{Di} that $N$ is cyclic, say $N=\langle\alpha_\xi\rangle$, where $\xi$ is a primitive $(q+1)/9$-th root of unity and $\alpha_\xi$ is defined in \eqref{alpha}. Let $h\in G\setminus N$. By arguing as for $k$ in the proof of Proposition \ref{punto-retta}, we have that $h$ has order $3$. Moreover, $G$ is the semidirect product $N\rtimes\langle h\rangle$, because $N\triangleleft G$, $N\cap\langle h\rangle=\{id\}$, and the orders of the subgroups imply $G=\langle h\rangle\cdot N$. Let $\bar N$ be the cyclic group $\langle \alpha_\theta\rangle$, with $\theta\in{\mathbb F_{q^2}}$ such that $\theta^3=\xi$, and let $\bar G$ be the group generated by $\bar N$ and $h$. $\bar G$ is the semidirect product $\bar N\rtimes\langle h\rangle$; in fact, $\bar N$ is normal in $\bar G$ by Lemma \ref{indice}, $\bar N\cap\langle h\rangle=\{id\}$, and the orders of the subgroups imply $\bar G=\bar N\cdot \langle h\rangle$. We have that $G$ is normal in $\bar G$, again by Lemma \ref{indice}. We want to count in two ways the size of the set $$ I=\left\{ (\bar g,P)\mid \bar g\in\bar G\setminus\left\{id\right\},\;P\in{\mathcal H}_q\,,\;\bar g(P)=P \right\}. $$ The diagonal group $\bar N$ is semiregular on ${\mathcal H}_q$, as is $G=\bar G\cap{\rm PSU}(3,q)$.
Then we consider only elements of the form $\bar n h$ or $\bar n h^2$, with $\bar n\in\bar N\setminus N$. We have $$ \bar n= \begin{pmatrix} \rho & 0 & 0 \\ 0 & \rho^i & 0 \\ 0 & 0 & 1 \end{pmatrix},\qquad h=\begin{pmatrix} 0 & \lambda & 0 \\ 0 & 0 & \mu \\ 1 & 0 & 0 \end{pmatrix} $$ where $\lambda^{q+1}=\mu^{q+1}=1$, $\gcd(i,(q+1)/3)=1$, $\rho=\theta^j$ with $0\leq j<(q+1)/3$ (the argument is analogous in case $h$ acts as the other possible $3$-cycle on the fundamental points). Hence $\bar n h$ is $$ \bar n h = \begin{pmatrix} \rho & 0 & 0 \\ 0 & \rho^i & 0 \\ 0 & 0 & 1 \end{pmatrix} \cdot \begin{pmatrix} 0 & \lambda & 0 \\ 0 & 0 & \mu \\ 1 & 0 & 0 \end{pmatrix} = \begin{pmatrix} 0 & A & 0 \\ 0 & 0 & B \\ 1 & 0 & 0 \end{pmatrix}, $$ where $A^{q+1}=B^{q+1}=1$, and $\det(\bar nh)=AB$ is not a cube in ${\mathbb F_{q^2}}$, since $\bar nh\notin{\rm PSU}(3,q)$. The eigenvalues of $\bar n h$ are the zeros of $X^3-AB\in{\mathbb F_{q^2}}[X]$. Since ${\mathbb F_{q^2}}$ has characteristic $2$, we get $3$ distinct eigenvalues in a cubic extension of ${\mathbb F_{q^2}}$, namely $z$, $zx$, and $z(x+1)$, where $x^2+x+1=0$ and $z^3=AB$. Then $\bar n h$ has exactly $3$ fixed points, given by $3$ independent eigenvectors: $$ Q_1=\left(z,\frac{z^2}{A},1\right),\quad Q_2=\left(zx,\frac{z^2x^2}{A},1\right),\quad Q_3=\left(z(x+1),\frac{z^2(x+1)^2}{A},1\right). $$ $Q_1$ is a point of ${\mathcal H}_q$. In fact, since ${\mathcal H}_q$ has equation $X^{q+1}+Y^{q+1}+T^{q+1}=0$, we have $$ z^{q+1}+\left(\frac{z^2}{A}\right)^{q+1}+1 = z^{q+1}+z^{2(q+1)}+1 = \frac{(z^{q+1})^3-1}{z^{q+1}-1} = \frac{A^{q+1}-1}{z^{q+1}-1}=0 $$ as $z\notin{\mathbb F_{q^2}}$ implies $z^{q+1}\neq 1$. Similarly we get $Q_2\in{\mathcal H}_q$ and $Q_3\in{\mathcal H}_q$.
Then each element $\bar n h$ or $\bar n h^2$ with $\bar n\in\bar N\setminus N$ has exactly $3$ fixed points on ${\mathcal H}_q$, and \begin{equation}\label{I} |I|=2\cdot\left(|\bar{N}|-|N|\right)\cdot3=2\cdot\left(\frac{q+1}{3}-\frac{q+1}{9}\right)\cdot3=4\cdot\frac{q+1}{3}. \end{equation} The orbit ${\mathcal O}$ under $\bar G$ of a point $P\in{\mathcal H}_q$ contains the orbit of $P$ under $G$, hence $|{\mathcal O}|\geq(q+1)/3$; by the orbit stabilizer theorem, the stabilizer ${\mathcal S}$ of $P$ under $\bar G$ has size $|{\mathcal S}|\leq3$, in particular $|{\mathcal S}|\in\left\{1,3\right\}$ since $|\bar G|$ is odd. Then $|I|=2m$, where $m$ is the number of points of ${\mathcal H}_q$ which are fixed by some non-trivial element of $\bar G$. By \eqref{I}, we get $$ m=2\cdot\frac{q+1}{3},$$ that is, these $m$ points form $2$ distinct orbits under the action of $G$. Then the quotient group $\bar G/G$ has $2$ fixed points on ${\mathcal H}_q/G$ and any other orbit of $\bar G/G$ is long, with length $3$. By Theorem \ref{GGSpuntofisso}, one of the fixed points of $\bar G/G$ is ${\mathbb F_{q^2}}$-rational, and the other one may or may not be ${\mathbb F_{q^2}}$-rational. Then the number of ${\mathbb F_{q^2}}$-rational points of ${\mathcal H}_q/G$ is congruent to $1$ or $2$ mod $3$. On the other hand, the ${\mathbb F_{q^2}}$-maximal curve ${\mathcal C}_{2^n}$ has genus $g=(3q-4)/2$ and number of ${\mathbb F_{q^2}}$-rational points equal to $$ |{\mathcal C}_{2^n}({\mathbb F_{q^2}})| = q^2+1+2qg = q^2+1+2q\cdot(3q-4)/2 = 4q^2-4q+1 ,$$ see \cite[Prop. 2.2]{GGS}; then $|{\mathcal C}_{2^n}({\mathbb F_{q^2}})|\equiv0\,(\mod\,3)$, as $q\equiv2\,(\mod\,3)$. Therefore, ${\mathcal H}_q/G\not\cong{\mathcal C}_{2^n}$. \qed \end{proof} \begin{proposition}\label{puntorettafuori} Suppose $G\not\subseteq{\rm PSU}(3,q)$ and a maximal subgroup of ${\rm PSU}(3,q)$ containing $G\cap{\rm PSU}(3,q)$ satisfies case $\mathit{ii)}$ in Theorem \ref{MH}.
Then ${\mathcal C}_{2^n}\not\cong{\mathcal H}_q/G$. \end{proposition} \begin{proof} Let $G'=G\cap{\rm PSU}(3,q)$. Since ${\rm PSU}(3,q)$ has prime index $3$ in ${\rm PGU}(3,q)$, we have ${\rm PGU}(3,q)=G\cdot{\rm PSU}(3,q)$, hence $[G:G']=3$, and $G'$ is normal in $G$ by Lemma \ref{indice}. Arguing as in the proof of Proposition \ref{punto-retta}, $G'=\langle\alpha_\xi\rangle$ is cyclic, where $\xi$ is a primitive $(q+1)/9$-th root of unity, $\alpha_\xi$ is defined in \eqref{alpha} and fixes the fundamental points, and $G$ stabilizes the fundamental triangle $T$. Suppose there exists $h\in G\setminus G'$ of order $3$. Arguing as for $k$ in Proposition \ref{triangolo}, $G=G'\rtimes\langle h\rangle$. Let $\theta\in{\mathbb F_{q^2}}$ with $\theta^3=\xi$; we define $\bar{G'}$ as the cyclic group generated by $\alpha_\theta$ (given in \eqref{alpha}), and $\bar G$ as the group generated by $\bar{G'}$ and $h$. Again, it is easily seen that $\bar G=\bar{G'}\rtimes\langle h\rangle$; moreover, $G'$ is normal in $\bar{G'}$ and $G$ is normal in $\bar G$ with indices $[\bar G:G]=[\bar{G'}:G']=3$. We can repeat the same argument as in the proof of Proposition \ref{triangolo} after replacing $N$ with $G'$ and $\bar N$ with $\bar{G'}$; in this way we obtain that $|({\mathcal H}_q/G)({\mathbb F_{q^2}})|\equiv1,2\,(\mod\,3)$, while $|{\mathcal C}_{2^n}({\mathbb F_{q^2}})|\equiv0\,(\mod\,3)$. This yields the thesis. Now suppose there is no $h\in G\setminus G'$ of order $3$. This implies that $G$ consists of diagonal matrices, since $G$ acts on $T$. Then, by Theorem \ref{Di}, $G$ is cyclic and $G=\langle\alpha_\theta\rangle$, where the notations are the same as above. We define the diagonal group $\bar G=\langle\alpha_\zeta\rangle$, with $\zeta^3=\theta$. $G$ is normal in $\bar G$ of index $3$, and $\bar G$ is semiregular on ${\mathcal H}_q$; hence Corollary \ref{cor1} yields the thesis.
\qed \end{proof} \begin{proposition}\label{triangolofuori} Suppose $G\not\subseteq{\rm PSU}(3,q)$ and a maximal subgroup of ${\rm PSU}(3,q)$ containing $G\cap{\rm PSU}(3,q)$ satisfies case $\mathit{iii)}$ in Theorem \ref{MH}. Then ${\mathcal C}_{2^n}\not\cong{\mathcal H}_q/G$. \end{proposition} \begin{proof} As above, $G'=G\cap{\rm PSU}(3,q)$ is normal in $G$ of index $3$. By applying to $G'$ the argument of Proposition \ref{triangolo}, we get that either $G'$ is cyclic and $G'=\langle\alpha_\xi\rangle$, or $G'=\langle\alpha_\eta\rangle\rtimes\langle h\rangle$, where $\eta$ is a primitive $(q+1)/27$-th root of unity and $h$ is an element of order $3$ acting as a $3$-cycle on the fundamental triangle $T$. Consider the case $G'=\langle\alpha_\xi\rangle$. Since $G'$ is normal in $G$, $G$ acts on $T$. If $G$ fixed $T$ pointwise, then $G$ would consist of diagonal matrices whose non-zero entries, being $(q+1)/3$-th roots of unity, are cubes in ${\mathbb F_{q^2}}$; hence $G\leq{\rm PSU}(3,q)$, against the hypothesis. Then, arguing as above, $G=G'\rtimes\langle h\rangle$, where $h\in G\setminus G'$ has order $3$. Let $\theta\in{\mathbb F_{q^2}}$ with $\theta^3=\xi$, and define $\bar G=\langle\alpha_\theta\rangle\rtimes\langle h\rangle$. Arguing as in Proposition \ref{triangolo}, we obtain that $|({\mathcal H}_q/G)({\mathbb F_{q^2}})|\equiv1,2\,(\mod\,3)$, while $|{\mathcal C}_{2^n}({\mathbb F_{q^2}})|\equiv0\,(\mod\,3)$. This yields the thesis. Now consider the case $G'=\langle\alpha_\eta\rangle\rtimes\langle h\rangle$. $\langle\alpha_\eta\rangle$ is the only subgroup of $G'$ of order $(q+1)/27$, hence $\langle\alpha_\eta\rangle$ is a characteristic subgroup of $G'$; also, $G'$ is normal in $G$. Therefore $\langle\alpha_\eta\rangle$ is normal in $G$, hence $G$ acts on the fundamental points. Let $G''$ be the subgroup of $G$ fixing $T$ pointwise; $G''$ is normal in $G$ of index $3$, and $G=G''\rtimes\langle h\rangle$.
Being made of diagonal matrices, $G''$ is abelian, with the subgroup $\langle\alpha_\eta\rangle$ of index $3$. By the primary decomposition of abelian groups, we have either $G''=\langle\alpha_\xi\rangle$ with $\xi^3=\eta$, or $G''=\langle\alpha_\eta\rangle\times\langle h'\rangle$, with $h'\in G''\setminus\langle\alpha_\eta\rangle$ a diagonal matrix of order $3$. In the latter case, by ${h'}^3=id$ we get that $\det(h')^3=1$, and then $\det(h')$ is a cube in ${\mathbb F_{q^2}}$, hence $h'\in G\cap{\rm PSU}(3,q)=G'$; therefore $G'=G''$, against the fact that $h$ is a $3$-cycle on $T$. Then $G''=\langle\alpha_\xi\rangle$, and $G=\langle\alpha_\xi\rangle\rtimes\langle h\rangle$. Let $\bar G=\langle\alpha_\theta\rangle\rtimes\langle h\rangle$, where $\theta\in{\mathbb F_{q^2}}$ satisfies $\theta^3=\xi$. We can repeat the same argument as in the proof of Proposition \ref{triangolo} after replacing $N$ with $\langle\alpha_\xi\rangle$ and $\bar N$ with $\langle\alpha_\theta\rangle$; in this way we obtain that $|({\mathcal H}_q/G)({\mathbb F_{q^2}})|\equiv1,2\,(\mod\,3)$, while $|{\mathcal C}_{2^n}({\mathbb F_{q^2}})|\equiv0\,(\mod\,3)$. \qed \end{proof} \begin{lemma}\label{lemmino} Suppose $G\leq{\rm PSU}(3,q)$ and any maximal subgroup $M$ of ${\rm PSU}(3,q)$ containing $G$ satisfies neither case $\mathit{ii)}$ nor case $\mathit{iii)}$ in Theorem \ref{MH}. Then $M$ satisfies only case $\mathit{xiv)}$, i.e. $G\not\subseteq{\rm PSU}(3,2^m)$ and $M$ contains ${\rm PSU}(3,2^m)$ as a normal subgroup of index $3$, where $n=3m$. \end{lemma} \begin{proof} We can exclude cases $ii)$ and $iii)$ by hypothesis, case $i)$ by the semiregularity of $G$, and cases $iv)$ and $xv)$ because $|G|$ is not a divisor of their orders; cases $v)$ to $xii)$ do not occur, since $p=2$. Then the thesis follows if we exclude case $xiii)$. To this end, we apply again Theorem \ref{MH} to ${\rm PSU}(3,2^m)$, where $n=p'm$ with $p'\geq3$ an odd prime. Note that, since $n\geq5$ is odd, either $p'\geq5$, or $p'=3$ and $m\geq5$. Case $i)$. $G$ fixes a point $P\in{\mathcal H}_{2^m}$.
Then $P\notin{\mathcal H}_q$ by the semiregularity of $G$ on ${\mathcal H}_q$, hence $G$ satisfies case $ii)$ in the list of maximal subgroups of ${\rm PSU}(3,q)$, contradicting the hypothesis. Case $ii)$. By Lagrange's theorem, the order $(2^{p'm}+1)/3$ of $G$ divides $2^m(2^m-1)(2^m+1)^2/3$, hence the odd number $\sum_{i=0}^{p'-1}(-1)^i 2^{im}$ divides $2^{2m}-1$, which is impossible for any odd $p'\geq3$. Case $iii)$. Now $(2^{p'm}+1)/3$ divides $2(2^m+1)^2$, hence $\sum_{i=0}^{p'-1}(-1)^i 2^{im}$ divides $3(2^m+1)$, which is impossible since $\sum_{i=0}^{p'-1}(-1)^i 2^{im}>3(2^m+1)$. Case $iv)$. Now $(2^{p'm}+1)/3$ divides $(2^{2m}-2^m+1)$, which is impossible since $(2^{p'm}+1)/3>2^{2m}-2^m+1$ for any $p'\geq3$, $m\geq3$. Case $xiii)$. $G$ is contained in ${\rm PSU}(3,2^r)$ with $m/r=p''\geq3$ an odd prime, and $n/r=p'p''\geq9$. This is impossible since $|G|$ is greater than the order of any maximal subgroup of ${\rm PSU}(3,2^r)$. Case $xiv)$. $G$ is contained in a group $K$ containing ${\rm PSU}(3,2^r)$ as a normal subgroup of index $3$, where $r=m/3$. If $H\neq{\rm PSU}(3,2^r)$ is a maximal subgroup of $K$, we have $H\cdot{\rm PSU}(3,2^r)={\rm PGU}(3,2^r)$, hence $[H:H\cap{\rm PSU}(3,2^r)]=[{\rm PGU}(3,2^r):{\rm PSU}(3,2^r)]=3$. Therefore, $|H|/3$ divides the order of a maximal subgroup of ${\rm PSU}(3,2^r)$. Then we get a contradiction, since $|G|$ does not divide three times the order of any maximal subgroup of ${\rm PSU}(3,2^r)$. Case $xv)$. $|G|$ divides $36$, and $m=1$, which implies $p'\geq5$. For $p'=5$, we have $|G|=11$, which does not divide $36$; for $p'>5$, we have that $|G|$ is greater than $36$. \qed \end{proof} \begin{proposition}\label{quattordici} Suppose $G\leq{\rm PSU}(3,q)$ and a maximal subgroup $M$ of ${\rm PSU}(3,q)$ containing $G$ satisfies only case $xiv)$ in Theorem \ref{MH}. Then ${\mathcal C}_{2^n}\not\cong{\mathcal H}_q/G$. \end{proposition} \begin{proof} $M$ contains ${\rm PSU}(3,2^m)$ as a normal subgroup of index $3$, where $m=n/3$.
Arguing as in case $xiv)$ of Lemma \ref{lemmino}, $|G|$ divides three times the order of a maximal subgroup of ${\rm PSU}(3,2^m)$. Then we multiply by $3$ the orders of the maximal subgroups of ${\rm PSU}(3,2^m)$ as listed in Theorem \ref{MH}. Case $i)$. The order $(2^{3m}+1)/3$ of $G$ divides $2^{3m}(2^{2m}-1)$, hence $(2^{2m}-2^m+1)$ divides $3(2^m-1)$, which is impossible since $m\geq3$. Case $ii)$. $(2^{3m}+1)/3$ divides $2^m(2^m+1)^2(2^m-1)$, which is impossible as above. Case $iii)$. $(2^{3m}+1)/3$ divides $6(2^m+1)^2$, hence $(2^{2m}-2^m+1)$ divides $9(2^m+1)$, which is impossible for any $m\geq3$. Case $iv)$. $(2^{3m}+1)/3$ divides $3(2^{2m}-2^m+1)$, hence $(2^m+1)\mid9$, which implies $m=3$. Cases $xiii)$ and $xiv)$. $(2^{3m}+1)/3$ divides either $3\cdot|{\rm PSU}(3,2^r)|$ or $3\cdot|{\rm PGU}(3,2^r)|$, where $m/r=p''\geq3$ is an odd prime. As in the proof of Lemma \ref{lemmino}, this is impossible since $|G|$ exceeds three times the order of any subgroup of ${\rm PGU}(3,2^r)$. Case $xv)$. $(2^{3m}+1)/3$ divides $36$, which is impossible for any $m\geq3$. Therefore the only possibility is given in case $iv)$ for $m=3$. Then $G$ has order $171$, $G''=G\cap{\rm PSU}(3,2^m)$ has order $|G|/3=57$, and $G''$ is contained in the normalizer $N$ of a cyclic Singer group $S$, of order $|N|=(2^{2m}-2^m+1)=57$, hence $G''=N$. $G''$ acts on the three non-collinear points $Q_1,Q_2,Q_3$ fixed by $S$, whose coordinates are in a cubic extension of $\mathbb{F}_{2^{2m}}$, hence in $\mathbb{F}_{2^{2n}}$. By the semiregularity of $G$, we have $Q_i\notin{\mathcal H}_q$; then $\left\{Q_1,Q_2,Q_3\right\}$ is a self-polar triangle, and we get the thesis as in the proof of Proposition \ref{triangolofuori}, after replacing $q$ with $2^m$ and $G'$ with $G''$. \qed \end{proof} \begin{theorem} ${\mathcal C}_{2^n}$ is not a Galois subcover of the Hermitian curve ${\mathcal H}_q$. \end{theorem} \begin{proof} Suppose ${\mathcal C}_{2^n}\cong{\mathcal H}_q/G$.
By Propositions \ref{punto-retta}, \ref{triangolo}, \ref{quattordici} and Lemma \ref{lemmino}, we have that $G\not\subseteq{\rm PSU}(3,q)$; then $G'=G\cap{\rm PSU}(3,q)$ has index $3$ in $G$. After replacing $G$ with $G'$, we can repeat the proofs of Propositions \ref{puntorettafuori} and \ref{triangolofuori}, the proof of Lemma \ref{lemmino}, and the first part of the proof of Proposition \ref{quattordici}. In this way, the only possibility we have is that $n=9$ and a maximal subgroup $M$ of ${\rm PSU}(3,2^9)$ containing $G'$ contains ${\rm PSU}(3,2^3)$ as a normal subgroup of index $3$; moreover, $G''=G'\cap{\rm PSU}(3,2^3)$ is contained in the normalizer $N'$ of a cyclic Singer group, of order $|N'|=57$. If $G'\leq{\rm PSU}(3,2^3)$, then we repeat the argument of the proof of Proposition \ref{quattordici}, after replacing $G$ with $G'$. In this way we get a contradiction. If $G'\not\subseteq{\rm PSU}(3,2^3)$, then $G''=G'\cap{\rm PSU}(3,2^3)$ has order $|G'|/3=19$. Since $|G'|=57$, by the third Sylow theorem $G''$ is the only Sylow $19$-subgroup of $G'$, hence $G''$ is the cyclic Singer group normalized by $G'=N'$. Therefore $G''$ fixes a triangle with coordinates in the cubic extension $\mathbb{F}_{2^{18}}$ of $\mathbb{F}_{2^{6}}$, which is the fundamental triangle $T$ up to conjugation in ${\rm PGU}(3,2^9)$. Hence $G'$ acts on $T$, and Proposition \ref{triangolofuori} yields the thesis. \end{proof} \section{${\mathcal X}_{q}$ is not Galois-covered by ${\mathcal H}_{q^3}$, for any $q>3$} The aim of this section is to prove Theorem \ref{result2}. Throughout the section, let $q>3$ be a power of a prime $p$. By direct application of a result by Duursma and Mak, we have the following bound. \begin{proposition}{\rm (\cite[Thm. 1.3]{DM})}\label{possibilivalori} If there exists a Galois-covering ${\mathcal H}_{q^3}\rightarrow{\mathcal X}_q$ of degree $d$, then $$q^2+q\leq d\leq q^2+q+2.$$ \end{proposition} Therefore we have to exclude three possible values of $d$.
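Two arithmetic facts used in the exclusion of the degree $q^2+q+2$ can be checked mechanically: the remainder of $X^9(X^9+1)(X^6-1)$ modulo $X^2+X+2$, and the list of integers $q\geq1$ for which $q^2+q+2$ divides $2128q-1568$. The following sketch is not part of the original argument (the helper function and the search bound are ours):

```python
# Reduce X^n modulo X^2 + X + 2 via the rule X^2 = -X - 2, writing X^n = a*X + b
def x_power_mod(n):
    a, b = 1, 0  # start from X^1
    for _ in range(n - 1):
        a, b = b - a, -2 * a
    return a, b

# |PGU(3, q^3)| as a polynomial in q: q^9 (q^9 + 1)(q^6 - 1) = X^24 - X^18 + X^15 - X^9
terms = [(24, 1), (18, -1), (15, 1), (9, -1)]
rem_a = sum(c * x_power_mod(n)[0] for n, c in terms)
rem_b = sum(c * x_power_mod(n)[1] for n, c in terms)
assert (rem_a, rem_b) == (2128, -1568)  # remainder is 2128 q - 1568

# For q >= 2127 the divisor q^2 + q + 2 exceeds the positive dividend
# 2128 q - 1568, so the divisibility can only hold in a finite range.
sols = [q for q in range(1, 2127) if (2128 * q - 1568) % (q**2 + q + 2) == 0]
assert sols == [1, 2, 3, 10]
```

None of the values $1$, $2$, $3$, $10$ is a prime power greater than $3$, which matches the conclusion of the proof below.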
\begin{proposition}\label{primovalore} There is no Galois-covering ${\mathcal H}_{q^3}\rightarrow{\mathcal X}_q$ of degree $q^2+q+2$. \end{proposition} \begin{proof} If such a Galois-covering exists, then $q^2+q+2$ divides the order $q^9(q^9+1)(q^6-1)$ of ${\rm PGU}(3,q^3)$, hence $q^2+q+2$ divides the remainder $2128q-1568$ of the division of $q^9(q^9+1)(q^6-1)$ by $q^2+q+2$. Then the possible values for $q$ are $1$, $2$, $3$, or $10$, but none of these is a prime power greater than $3$. \qed\end{proof} \vspace*{.2cm} Now we consider the possible value $d=q^2+q+1$. \begin{lemma}\label{lem} Let $G\leq{\rm PGU}(3,q^3)$ with $|G|=q^2+q+1$. Then $G\leq{\rm PSU}(3,q^3)$. \end{lemma} \begin{proof} If ${\rm PGU}(3,q^3)={\rm PSU}(3,q^3)$ there is nothing to prove, hence we can assume that ${\rm PSU}(3,q^3)$ has index $3$ in ${\rm PGU}(3,q^3)$. Then $\gcd(3,q^3+1)=3$, or equivalently $\gcd(3,q+1)=3$, so that $3$ does not divide $q^2+q+1=|G|$. If $G\not\subseteq{\rm PSU}(3,q^3)$, then ${\rm PGU}(3,q^3)=G\cdot{\rm PSU}(3,q^3)$ and $G$ has a subgroup $G\cap{\rm PSU}(3,q^3)$ of index $[{\rm PGU}(3,q^3):{\rm PSU}(3,q^3)]=3$, a contradiction since $3\nmid|G|$. \qed\end{proof} \begin{proposition}\label{secondovalore} There is no Galois-covering ${\mathcal H}_{q^3}\rightarrow{\mathcal X}_q$ of degree $q^2+q+1$. \end{proposition} \begin{proof} Suppose such a Galois-covering exists, say ${\mathcal X}_q\cong{\mathcal H}_{q^3}/G$. Then $G\leq{\rm PSU}(3,q^3)$ by Lemma \ref{lem}, and we can apply Theorem \ref{MH}. {\bf Case \textit{i)}} Let ${\mathcal H}_{q^3}$ have equation $Y^{q^3+1}=X^{q^3}+X$; up to conjugation, $G$ fixes the ideal point $P_\infty$.
The stabilizer $S$ of $P_\infty$ in ${\rm PGU}(3,q^3)$ is the semidirect product $P\rtimes H$, where $P$ is the unique Sylow $p$-subgroup of $S$, of order $q^9$, and $H$ is a cyclic group of order $q^6-1$ generated by $$\alpha_a:(X,Y,T)\mapsto(a^{q^3+1}X,aY,T),$$ where $a$ is a primitive $(q^6-1)$-th root of unity; $H$ fixes two $\mathbb{F}_{q^3}$-rational points of ${\mathcal H}_{q^3}$ and acts semiregularly on the other points (see \cite[Section 4]{GSX}). Since $P\triangleleft S$, $|P|$ and $|H|$ are coprime, and $|G|$ divides $|H|$, we can assume $G\subset H$, and $G=\langle\alpha_b\rangle$, where $b=a^{(q^3+1)(q-1)}$. Now consider the group $\bar G=\langle\alpha_c\rangle\subset H$, where $c=a^{q-1}$; $G$ is normal in $\bar G$ with index $q^3+1$. The automorphism group $\bar G/G$ has two fixed points on ${\mathcal H}_{q^3}/G$ and all other orbits are long; then the number of $\mathbb{F}_{q^6}$-rational points of ${\mathcal H}_{q^3}/G$ is congruent to $2$ modulo $q^3+1$. On the other hand, the $\mathbb{F}_{q^6}$-maximal curve ${\mathcal X}_q$ has genus $(q-1)(q^3-q)/2$, hence the number of $\mathbb{F}_{q^6}$-rational points of ${\mathcal X}_q$ is $q^7-q^5+q^4+1\equiv q^2+1\,(\mod\,q^3+1)$, a contradiction. {\bf Case \textit{ii)}} Let ${\mathcal H}_{q^3}$ have the Fermat equation $X^{q^3+1}+Y^{q^3+1}+1=0$; up to conjugation, $G$ fixes the affine point $(0,0)$ and the line at infinity $\ell:T=0$. The action of $G$ on $\ell$ is faithful. In fact, if $g\in G$ fixes $\ell$ pointwise, then $g:(X,Y,T)\mapsto(X,Y,\lambda T)$ is a homology whose order divides $q^3+1$; on the other hand, the order of $g$ divides $|G|=q^2+q+1$, hence $g$ is the identity, since $\gcd(q^3+1,q^2+q+1)=1$ (indeed $q^2+q+1$ is odd and divides $q^3-1$). Therefore, as in the proof of Proposition \ref{punto-retta}, $G$ is isomorphic to a subgroup of ${\rm PGL}(2,q^6)$. Since $|G|$ is odd and coprime with $p$, by Theorem \ref{Di} $G$ is cyclic.
Moreover, since $|G|$ divides $q^6-1$, $G$ has two fixed points $P_1,P_2\in\ell$ and acts semiregularly on $\ell\setminus\left\{P_1,P_2\right\}$ (see \cite[Hauptsatz 8.27]{Hu}). Since $|\ell\cap{\mathcal H}_{q^3}|=q^3+1\equiv 2\,(\mod\,q^2+q+1)$, this implies $P_1,P_2\in{\mathcal H}_{q^3}$. Therefore we can repeat the argument of Case $i)$ to get a contradiction. {\bf Cases \textit{iii)} and \textit{iv)}} The order of $G$ does not divide the orders of these maximal subgroups. {\bf Case \textit{v)}} $G$ acts on the $q^6+1$ $\mathbb{F}_{q^6}$-rational points of a conic ${\mathcal C}$; as in Case $ii)$, $G=\langle g\rangle$ is isomorphic to a cyclic subgroup $\Gamma\leq{\rm PGL}(2,q^6)$ acting on a line $\ell$ with no short orbits other than two fixed points. The action of $G$ on ${\mathcal C}$ is equivalent to the action of $\Gamma$ on $\ell$, see \cite[Chapt. VIII, Thm. 15]{VY}; hence $G$ has no short orbits on ${\mathcal C}$ other than two fixed points $P_1,P_2$. If $G$ has a fixed $\mathbb{F}_{q^6}$-rational point on ${\mathcal H}_{q^3}$, then we argue as in Case $i)$. Otherwise, $P_1,P_2\notin{\mathcal H}_{q^3}$, and by \cite[Par. 2]{M}, \cite[Page 141]{H}, we have that $G$ fixes a third point $P_3$ such that $T=\left\{P_1,P_2,P_3\right\}$ is a self-polar triangle. Let ${\mathcal H}_{q^3}$ be given in the Fermat form; then, up to conjugation, $T$ is the fundamental triangle and $g$ has the form $g:(X,Y,T)\mapsto(\lambda X,\mu Y,T)$. Then the order of $g$ divides $q^3+1$, contradicting $|G|=q^2+q+1$, since $\gcd(q^3+1,q^2+q+1)=1$. {\bf Cases \textit{viii)} to \textit{xii)} and Case \textit{xv)}} $|G|$ does not divide the orders of these maximal subgroups. {\bf Cases \textit{vi), vii), xiii)}, and \textit{xiv)}} Note that, if $K$ is a group containing ${\rm PSU}(3,p^h)$ as a normal subgroup of index $3$, then the orders of maximal subgroups of $K$ are three times the orders of maximal subgroups of ${\rm PSU}(3,p^h)$.
With this observation, by applying Theorem \ref{MH} to ${\rm PSU}(3,p^m)$, it is easily seen that $|G|$ divides neither the orders of the maximal subgroups of ${\rm PSU}(3,p^m)$ nor three times these orders. \qed\end{proof} \begin{lemma}\label{quantisylow} Let $G\leq{\rm PGU}(3,q^3)$ with $|G|=q(q+1)$. Then the number of Sylow $p$-subgroups of $G$ is either $1$ or $q+1$. \end{lemma} \begin{proof} Let $Q_1,\ldots,Q_{n}$ be the Sylow $p$-subgroups of $G$, and let $P_i\in{\mathcal H}_{q^3}$ be the unique rational point fixed by $Q_i$, $i=1,\ldots,n$. Assume $n>1$; note that $n\leq q+1$, as $Q_i\cap Q_j$ is trivial for $i\neq j$. Since $G$ has no fixed points and $Q_i$ is semiregular on ${\mathcal H}_{q^3}\setminus\left\{P_i\right\}$, the length of the orbit ${\mathcal O}_{P_1}$ of $P_1$ under $G$ is at least $q+1$; on the other hand, the stabilizer of $P_1$ in $G$ has order at least $q$, since it contains $Q_1$. Therefore $|{\mathcal O}_{P_1}|=q+1$ by the orbit-stabilizer theorem. If $P\in{\mathcal O}_{P_1}$, then the stabilizer of $P$ in $G$ has order $q$, hence $P=P_i$ for some $i\in\left\{2,\ldots,n\right\}$. Therefore ${\mathcal O}_{P_1}=\left\{P_1,\ldots,P_n\right\}$ and the thesis follows. \qed\end{proof} \begin{proposition}\label{1sylow} Let $G\leq{\rm PGU}(3,q^3)$ with $|G|=q(q+1)$. If $G$ has a unique Sylow $p$-subgroup $Q$, then ${\mathcal X}_q\not\cong{\mathcal H}_{q^3}/G$. \end{proposition} \begin{proof} Let ${\mathcal H}_{q^3}$ be given in the norm-trace form $Y^{q^3+1}=X^{q^3}+X$. Since $Q$ is normal in $G$, $G$ fixes the unique fixed point of $Q$ on ${\mathcal H}_{q^3}$; up to conjugation, this is the ideal point $P_\infty$. By Hall's theorem, we can assume that $G=Q\rtimes\langle\alpha_\lambda\rangle$, where $$\alpha_\lambda:(X,Y,T)\mapsto(\lambda^{q^3+1}X,\lambda Y,T)=(X,\lambda Y,T),$$ with $\lambda$ a primitive $(q+1)$-th root of unity.
Suppose ${\mathcal X}_q\cong{\mathcal H}_{q^3}/G$; in particular, the genus of ${\mathcal X}_q$ equals the genus of ${\mathcal H}_{q^3}/G$, which is given in \cite[Thm. 4.4]{GSX}. With the notations of \cite[Section 4]{GSX}, this implies $v=0$ and $q=p^w$, that is, the elements of $Q$ have the form $$\beta_c:(X,Y,T)\mapsto(X+cT,Y,T),$$ with $c^{q^3}+c=0$. The set $\left\{c\in\mathbb{F}_{q^6}\mid\beta_c\in Q\right\}$ is an additive group, isomorphic to $Q$. Then $Q\cong\left\{c\in\mathbb{F}_{q^6}\mid L(c)=0\right\}$, where $L\in\mathbb{F}_{q^6}[X]$ is a linearized polynomial of degree $q$ dividing $X^{q^3}+X$, and there is a linearized polynomial $F\in\mathbb{F}_{q^6}[X]$ of degree $q^2$ such that $F(L(X))=X^{q^3}+X$ (see \cite[Theorems 3.62, 3.65]{LN}). Therefore the quotient curve ${\mathcal H}_{q^3}/G$ has equation $Y^{q^2-q+1}=F(X)$. The thesis follows if we show that there cannot exist an $\mathbb F_{q^6}$-isomorphism $\varphi:{\mathcal C}\rightarrow{\mathcal X}_{q}$, where ${\mathcal X}_q: V^{q^2-q+1}=U^{q^2}-U$ and ${\mathcal C}$ is a curve with equation $Y^{q^2-q+1}=F(X)$, with $F\in\mathbb{F}_{q^6}[X]$ a linearized divisor of $X^{q^3}+X$ of degree $q^2$. Suppose such a $\varphi$ exists. By \cite[Thm. 12.11]{HKT}, the ideal points $P_\infty\in{\mathcal X}_q$, $Q_\infty\in{\mathcal C}$ are the unique fixed points of the respective automorphism groups ${\rm Aut}({\mathcal X}_q)$, ${\rm Aut}({\mathcal C})$, hence $\varphi(Q_\infty)=P_\infty$. Moreover, the coordinate functions have pole divisors $$ div(u)_\infty=(q^2-q+1)P_\infty,\; div(v)_\infty=q^2P_\infty,\quad div(x)_\infty=(q^2-q+1)Q_\infty,\; div(y)_\infty=q^2Q_\infty, $$ and the Weierstrass semigroups at the ideal points are $H(P_\infty)=H(Q_\infty)=\langle q^2-q+1,q^2\rangle$ (see \cite[Lemmata 12.1, 12.2]{HKT}). By Riemann-Roch theory (see \cite[Chapt.
1]{Sti}), it is easily seen that $\left\{1,x\right\}$ is a basis of the Riemann-Roch space $\mathcal{L}((q^2-q+1)P_\infty)$ associated to $(q^2-q+1)P_\infty$, and $\left\{1,x,y\right\}$ is a basis of $\mathcal{L}(q^2P_\infty)$. Then there exist $a,b,c,d,e\in\mathbb{F}_{q^6}$, $a,d\neq0$, such that $\varphi^*(u)=ax+b$ and $\varphi^*(v)=cx+dy+e$, where $\varphi^*:\mathbb{F}_{q^6}({\mathcal X}_q)\rightarrow\mathbb{F}_{q^6}({\mathcal C})$ is the pull-back of $\varphi$; equivalently, $\varphi(X,Y,T)=(aX+b,cX+dY+e,T)$. Then the polynomial identity $$ \left(aX+b\right)^{q^2}-\left(aX+b\right)-\left(cX+dY+e\right)^{q^2-q+1}=k\left(F(X)-Y^{q^2-q+1}\right) $$ holds, for some $k\in\bar{\mathbb{F}}_{q^6}$, $k\neq0$. By direct calculation and comparison of the coefficients, we get the constraints $c=e=0$, $b\in\mathbb{F}_{q^2}$, $k=d^{q^2-q+1}$, which imply $$ F(X)=k^{-1}a^{q^2}X^{q^2}-k^{-1}aX. $$ It is easily checked that the conventional $p$-associate of the linearized polynomial $F(X)$ is not a divisor of the conventional $p$-associate of $X^{q^3}+X$, hence $F(X)$ is not a divisor of $X^{q^3}+X$. \qed\end{proof} \begin{lemma}\label{tantisylow} Let $G\leq{\rm PGU}(3,q^3)$ with $|G|=q(q+1)$. If $G$ has $q+1$ distinct Sylow $p$-subgroups $Q_1,\ldots,Q_{q+1}$, then $G\cong(\mathbb{Z}_{p'})^s\rtimes Q_1$, where $p'$ is a prime and $(p')^s=q+1$. \end{lemma} \begin{proof} By Lemma \ref{quantisylow}, the points $P_1,\ldots,P_{q+1}$ fixed respectively by $Q_1,\ldots,Q_{q+1}$ constitute a single orbit ${\mathcal O}$ under the action of $G$. By Burnside's Lemma, $G$ is sharply $2$-transitive on ${\mathcal O}$. Then, by \cite[Thm. 20.7.1]{Hall}, $G$ is isomorphic to the group of affine transformations of a near-field $F$; moreover, $G$ has a regular normal subgroup $N$, hence $G=N\rtimes Q_1$. The order $f$ of $F$ satisfies $q(q+1)=(f-1)f$, which implies $f=q+1$.
By this condition, $F$ cannot be one of the seven exceptional near-fields listed in \cite{Z}, hence $F$ is a Dickson near-field, see \cite[Thm. 20.7.2]{Hall} for a description. In particular, $N$ is isomorphic to the additive group $(\mathbb{Z}_{p'})^s$ of a finite field $\mathbb{F}_{(p')^s}$. \qed\end{proof} \begin{proposition}\label{q+1sylow} Let $G\leq{\rm PGU}(3,q^3)$ with $|G|=q(q+1)$. If $G$ has $q+1$ distinct Sylow $p$-subgroups $Q_1,\ldots,Q_{q+1}$, then ${\mathcal X}_q\not\cong{\mathcal H}_{q^3}/G$. \end{proposition} \begin{proof} We use the notations of Lemma \ref{tantisylow} and assume ${\mathcal X}_q\cong{\mathcal H}_{q^3}/G$. Suppose $q$ is odd. Then all involutions of ${\rm PGU}(3,q^3)$ are conjugate, and they are homologies of ${\rm PG}(2,q^6)$, see \cite[Lemma 2.2]{KOS}. Two homologies commute if and only if the center of each lies on the axis of the other (see for example \cite[Thm. 5.32]{Cox}), hence the maximum number of involutions commuting pairwise is $3$, since their centers are three non-collinear points. Then $(p')^s=4$ and $q=3$, contradicting the assumptions of this section. Suppose $q$ is even. Then $Q_1$ is isomorphic to the multiplicative group of $F$, hence it is a metacyclic group, see for example \cite[Ex. 1.19]{Cam}; moreover, $Q_1$ has exponent $2$ or $4$ by \cite[Lemma 2.1]{KOS}. Therefore $q\in\left\{2,4,8,16\right\}$. The case $q=2$ is excluded. If $q=16$, then $F$ is a Dickson near-field of prime order $17$, hence $F$ is a field, contradicting the fact that $Q_1$ has exponent $2$ or $4$. Then $q=4$ or $q=8$. We use the Riemann-Hurwitz formula \cite[Thm. 3.4.13]{Sti} on the covering ${\mathcal H}_{q^3}\rightarrow{\mathcal X}_q\cong{\mathcal H}_{q^3}/G$, in order to get a contradiction on the degree $\Delta=\left(2g({\mathcal H}_{q^3})-2\right)-|G|\left(2g({\mathcal X}_q)-2\right)$ of the different. By \cite[Thm.
3.8.7]{Sti} $$ \Delta=\sum_{\sigma\in G\setminus\left\{id\right\}}i(\sigma), $$ where the contributions $i(\sigma)\geq0$ to $\Delta$ satisfy the following: \begin{itemize} \item If $\sigma$ has order $2$, then $i(\sigma)=q^3+2$; if $\sigma$ has order $4$, then $i(\sigma)=2$ (see \cite[Eq. (2.12)]{Sti}). \item If $\sigma$ has odd order, then $i(\sigma)$ equals the number of fixed points of $\sigma$ on ${\mathcal H}_{q^3}$, see \cite[Cor. 3.5.5]{Sti}; moreover, by \cite[pp. 141-142]{H}, either $\sigma$ has exactly $3$ fixed points or $\sigma$ is a homology. In the former case $i(\sigma)\leq3$, in the latter $i(\sigma)=q^3+1$. \end{itemize} Let $q=4$, hence $\Delta=470$ and $G=\mathbb{Z}_5\rtimes Q_1$. If $Q_1\cong\mathbb{Z}_2\times\mathbb{Z}_2$, then $G$ has $15$ involutions, whose contributions to $\Delta$ sum up to $990>\Delta$. Then $Q_1\cong\mathbb{Z}_4$, and the contributions to $\Delta$ of the $Q_i$'s sum up to $5\cdot66+10\cdot2=350$. The non-trivial elements of $\mathbb{Z}_5$ are generators of $\mathbb{Z}_5$, hence either all of them are homologies or all of them fix $3$ points. In the former case their contribution to $\Delta$ exceeds $120$, in the latter it is smaller than $120$; in neither case can it equal the required $\Delta-350=120$, a contradiction. Let $q=8$, hence $\Delta=7758$ and $G=(\mathbb{Z}_3\times\mathbb{Z}_3)\rtimes Q_1$. If $Q_1$ has more than one involution, then the involutions of $G$ contribute at least $18\cdot514>\Delta$ to $\Delta$. Then $Q_1$ is the quaternion group, and the $Q_i$'s contribute $9\cdot514+54\cdot2=4734$ to $\Delta$. The contribution to $\Delta$ of any non-trivial element of $\mathbb{Z}_3\times\mathbb{Z}_3$ is either $513$ or less than $4$, hence the contributions cannot sum up to $\Delta-4734$, a contradiction. \qed\end{proof} \vspace*{.2cm} By Lemma \ref{quantisylow} and Propositions \ref{1sylow}, \ref{q+1sylow}, we have shown the following result. \begin{proposition}\label{terzovalore} There is no Galois-covering ${\mathcal H}_{q^3}\rightarrow{\mathcal X}_q$ of degree $q^2+q$.
\end{proposition} \vspace*{.2cm} Finally, Theorem \ref{result2} follows from Propositions \ref{possibilivalori}, \ref{primovalore}, \ref{secondovalore}, and \ref{terzovalore}.
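The Riemann-Hurwitz arithmetic used in the proof of Proposition \ref{q+1sylow} can be checked mechanically. The following sketch (Python, for verification only) recomputes $\Delta$ for $q=4$ and $q=8$ together with the tallies quoted in the proof; the genus $(q^2-q)(q^2-1)/2$ used for ${\mathcal X}_q$ is an assumption, adopted here because it reproduces the stated values $\Delta=470$ and $\Delta=7758$.

```python
# Sanity check (verification only) of the Riemann-Hurwitz arithmetic
# in the proof of Proposition "q+1sylow" for q = 4 and q = 8.

def genus_hermitian(n):
    # The Hermitian curve H_n has genus n(n-1)/2.
    return n * (n - 1) // 2

def genus_X(q):
    # Assumed genus for X_q : V^{q^2-q+1} = U^{q^2} - U, namely
    # (q^2-q)(q^2-1)/2; it reproduces the Delta values quoted in the proof.
    return (q * q - q) * (q * q - 1) // 2

def delta(q):
    # Degree of the different for a Galois covering H_{q^3} -> X_q
    # of degree |G| = q(q+1).
    g_top, g_bot = genus_hermitian(q ** 3), genus_X(q)
    return (2 * g_top - 2) - q * (q + 1) * (2 * g_bot - 2)

print(delta(4))          # 470, as in the proof
print(delta(8))          # 7758, as in the proof
print(5 * 66 + 10 * 2)   # 350: tally for q = 4 with Q_1 cyclic of order 4
print(9 * 514 + 54 * 2)  # 4734: tally for q = 8 with Q_1 the quaternion group
```

The two tallies are the sums of the Sylow $p$-subgroup contributions, leaving $\Delta-350=120$ and $\Delta-4734=3024$ for the complements, as in the proof.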
\section{INTRODUCTION \label{intro}} Indirect detection methods for finding extrasolar planets have yielded in excess of 150 candidate planets over the past decade \citep{Marcy05}. None have been directly imaged; however, two have had their light detected through transit studies with the {\it Spitzer Space Telescope} \citep[HD 209458b and TrES-1b; ][]{Charbonneau05,Deming05}. Recently, an object has been imaged, and resolved, whose properties appear somewhat consistent with being a young extrasolar planet: the companion to 2MASSW J1207334-393254 \citep[2M1207; ][]{Chauvin04}. These objects appear to represent the opening chapter in humanity's quest to study the atmospheres of planets beyond our solar system. As one of the nearest and youngest brown dwarfs yet identified, 2M1207 has received considerable attention since its discovery. \citet{Gizis02} discovered 2M1207 in a spectroscopic survey of red 2MASS sources, and claimed that the object was a $\sim$10 Myr-old, $\sim$25\,M$_{Jup}$\, member of the nearby ($\sim$55 pc) TW Hya association \citep[TWA;][]{Webb99}. Further observations of its radial velocity \citep{Mohanty03} and proper motion \citep{Scholz05} are roughly consistent with TWA membership. With low resolution spectroscopy, \citet{Gizis02} found 2M1207 to show signs of low surface gravity and strong H$\alpha$ emission (EW\,=\,300\,$\AA$). In their echelle spectroscopy survey, \citet{Mohanty03} found the H$\alpha$ emission line to be broad and asymmetric, and accompanied by several other Balmer and \ion{He}{1} emission lines. \citet{Mohanty03} hypothesize that the brown dwarf is probably still accreting from a circumstellar disk. While astrophysically interesting in its own right as a representative of the new class of young, accreting brown dwarfs \citep[e.g.][]{Muzerolle05,Mohanty05}, it appears that 2M1207 may become most famous for being the host ``Sun'' for the first imaged extrasolar planet -- if indeed 2M1207B can be called a ``planet''.
\citet{Chauvin04} discovered a faint companion to 2M1207, which has near-IR photometry and a low signal-to-noise-ratio spectrum consistent with having a late-L spectral type. Recently, \citet{Chauvin05} and Schneider et al. (in prep.) confirmed that the companion B is indeed co-moving with 2M1207 A. Debate on the origin and classification of this object is in its infancy. To help better constrain the physical nature of this object, I present an improved distance estimate to the 2M1207 system through the moving cluster method. The new distance provides more accurate luminosities (and inferred masses) for the components of this interesting substellar binary. \section{ANALYSIS \label{analysis}} Although a trigonometric parallax is not yet available for 2M1207, one can exploit the star's putative membership in the TW Hya association to derive a distance using the cluster parallax (or ``moving-cluster'') method \citep[e.g.][]{Atanasijevic71,deBruijne99a}. With an observed proper motion and radial velocity (as well as other supporting evidence), I test whether the star is consistent with being a TWA group member. To exploit this technique, one needs to take the following steps: (1) estimate the space motion vector for the TWA, (2) test whether the observations for 2M1207 (proper motion, radial velocity) are consistent with the TWA motion vector, and (3) use the moving cluster method to estimate the parallax from the proper motion and TWA space motion data. I address these steps in order. Although the rest of the TWA membership is not the focus of this study, I will briefly mention relevant results for these systems throughout this analysis. I will also examine whether the expansion of the TWA is detectable, and whether it can help constrain the age of 2M1207 and the rest of the association. \subsection{Sample \label{sample}} The initial pool of candidate TWA members considered in this study are listed in Table \ref{tab:TWA}. 
I add to TWA numbers 1 through 25 \citep{Zuckerman04} the three new low-mass candidate members: 2M1207 and 2MASSW J1139511-315921 from \citet{Gizis02}, and SSSPM J1102-3431 from \citet{Scholz05}. The TWA members from \citet[][TWA 1-11]{Webb99} and \citet[][TWA 12, 13]{Sterzik99} comprise what I will tentatively call the ``classic'' membership of the TW Hya association. These are young stars which were mostly selected due to infrared or X-ray excesses within the immediate vicinity of TW Hya. There has been debate regarding the membership of TWA 14-19 \citep{Mamajek01,Lawson05}, since their positions, proper motions, and rotational properties are at variance with TWA 1-13. \citet{Mamajek01} and \citet{Lawson05} have suggested that TWA 14-19 are probably members of the more distant \citep[$\sim$120\,pc; ][]{deZeeuw99} and older \citep[$\sim$16 Myr; ][]{Mamajek02} Lower Centaurus Crux OB association (LCC). TWA 20 was claimed to be a TWA member by \citet[][R03]{Reid03}, but rejected by \citet{Zuckerman04} due to its weak Li. As the Li data are not published, and the similarity in proper motion between TWA 20 and the other TWA members is quite striking, I retain TWA 20 in the candidate pool. TWA 21 through 25 were selected by \citet[][SZB03]{Song03} due to their strong Li and H$\alpha$ emission. From Fig. 6 of SZB03, it appears that TWA 23 and 25 have positions and proper motions very close to those of TWA 1-13, but TWA 21 and 22 are spatially isolated, and TWA 24 has a small proper motion, similar to LCC members. Hence it is not obvious that many of the TWA 14-25 stars were born in the same star-formation event as TWA 1-13. I conservatively include only the classic members (TWA 1-13) in the initial calculations for estimating the convergent point and space motion vector for the TW Hya association.
\subsection{Astrometric Data \label{data}} The adopted proper motion and radial velocity data for proposed TWA members, and their associated references, are presented in Table 1. I searched the literature and on-line catalogs\footnote{ ADS (http://adsabs.harvard.edu/), Vizier (http://vizier.u-strasbg.fr/viz-bin/VizieR), and SIMBAD (http://simbad.harvard.edu/sim-fid.pl)} to find the best values for the proper motions and radial velocities of TWA members. To mitigate the effects of short-term astrometric perturbations by short-period companions, I preferentially adopted the long-baseline proper motion with the smallest error bars \citep[usually Tycho-2 or UCAC2; ][]{Hog00,Zacharias04} over {\it Hipparcos} values \citep{Perryman97}, when available. In a few instances, I calculated new proper motions using published positions. I calculated weighted mean radial velocities when multiple values were available, or adopted systemic velocities for spectroscopic binaries, when available. 2M1207 has two published proper motion estimates in the literature \citep{Gizis02,Scholz05}. The \citet{Gizis02} proper motion ($\mu_{\alpha *}$,\,$\mu_{\delta}$\,=\, --100,\,--30\,mas\,yr$^{-1}$) does not have error bars and is based only on a few plate images in the USNO image archive. \citet{Scholz05} estimated a proper motion for 2M1207 of $\mu_{\alpha *}$\,=\,--78\,$\pm$\,11\,mas\,yr$^{-1}$, $\mu_{\delta}$\,=\,--24\,$\pm$\,9\, mas\,yr$^{-1}$. This proper motion estimate included a Chandra pointing, rather than an actual measured position, and so is invalid. Omitting the Chandra pointing, R.-D. Scholz has calculated a revised proper motion of $\mu_{\alpha *}$\,=\,--67\,$\pm$\,7\,mas\,yr$^{-1}$, $\mu_{\delta}$\,=\,--28\,$\pm$\,11\, mas\,yr$^{-1}$\, using a least-squares fit with equal weighting (R.-D. Scholz, personal communication). As there are large differences in the accuracy between the SuperCOSMOS, 2MASS, and DENIS positions ($\sim$60\,mas vs.
$\sim$500\,mas), I recalculated the proper motion using weighting by the inverse of the square of the positional errors, following the method of \citet{Corbin77}\footnote{The formulae are given in the on-line documentation for the AC2000.2 catalog \citep{Urban98} at http://ad.usno.navy.mil/ac/}. The SuperCOSMOS and 2MASS positions are tied to the International Celestial Reference System (ICRS) via the Tycho-2 catalog, and so for our purposes they are on the same system. In order to estimate a positional error for the SuperCOSMOS positions, I performed a least-squares fit to the 4 SuperCOSMOS points, and found their scatter consistent with positional errors of $\sigma_{\alpha *}$ = 143 mas and $\sigma_{\delta}$ = 196 mas. These errors are very consistent with the SuperCOSMOS positional errors quoted by \citet{Hambly01}. I corrected the DENIS position for the 2MASS-DENIS offset found by \citet{Cabrera-Lavers03}, since the 2MASS positional errors are much smaller than DENIS's \citep[2MASS is tied to the ICRS via the Tycho-2 catalog to an accuracy of $\sim$80 mas; ][]{Cutri03}. For the DENIS positional errors, I adopted the 2MASS-DENIS rms differences added in quadrature with the 2MASS-ICRS rms residuals ($\sim$80\,mas), giving $\sigma_{\alpha *}$ = 430\,mas, and $\sigma_{\delta}$ = 320\,mas. I estimate the proper motion of 2M1207 to be {$\mu_{\alpha *}$\,=\,--72\,$\pm$\,7\,mas\,yr$^{-1}$}, {$\mu_{\delta}$\,=\,--22\,$\pm$\,9\, mas\,yr$^{-1}$\,}, which is within 1$\sigma$ of both of Scholz's estimates. The change in the position of 2M1207 over time is plotted in Fig. \ref{fig:pm}. \subsection{The Space Motion of the TWA \label{spacemotion}} In order to calculate a cluster parallax for 2M1207, we require an accurate convergent point solution for the TW Hya association, to which 2M1207 is proposed to be a member.
Mean space motion vectors and/or convergent point solutions for the TWA were previously estimated by \citet[][F01]{Frink01}, \citet[][MF01]{Makarov01}, R03, and SZB03. Considering the increase in proposed association membership (SZB03), and the wealth of new proper motion data \citep[UCAC2;][]{Zacharias04} and radial velocity data \citep{Torres03} made available since R03, I will briefly discuss and reanalyze the kinematics of TWA. To estimate the space motion for the TWA, I will combine information from two different methods: using what little data there is regarding the 3D space motion vectors for individual members, as well as applying the convergent point method on the classical membership. Only four classic TWA members have sufficient data to reliably calculate the 3D space motion vector (TWA 1, 4, 9, and 11), and these individual determinations have modest errors in any given velocity component \citep[$\sigma$ $\sim$ 1-2\,km\,s$^{-1}$; ][]{Mamajek00}. Three of the systems are binaries, but their systemic velocities are probably accurate to $\sim$1\,km\,s$^{-1}$, or better. The mean barycentric Galactic space motion vector for these four systems ($U, V, W$ = --10.2, --17.1, --5.1\,km\,s$^{-1}$) provides the best estimate of the {\it centroid} velocity vector for the TW Hya association. To help refine the vertex estimate for the TWA, I will use the convergent point method on the classical membership. The convergent point, as calculated only from the proper motion data, will also become important when the question of association expansion is addressed (\S\ref{expansion}). I approximately follow the convergent point grid technique of \citet{Jones71}. In this implementation, I alter Jones's definition of $t^2$, following \citet{deBruijne99a}, and include an intrinsic velocity dispersion term ($\sigma_v$\,=\,1\,km\,s$^{-1}$), and assumed distance (50\,pc) in the definition for $t^2$ (the method is rather insensitive to both input values).
Over the entire hemisphere ${\alpha\,\in\,0^{\circ}-180^{\circ}}$, I calculate the $t^2$ statistic at every 0$^{\circ}$.1 grid step, and find the celestial position which gives the minimum $t^2$ value. For every grid point, the method assumes that this position is the convergent point for the group, and rotates the stellar proper motion components (in $\mu_{\alpha *}$\, and $\mu_{\delta}$) to the proper motion directed toward the convergent point ($\mu_{\upsilon}$) and perpendicular to the great circle joining the star and test convergent point ($\mu_{\tau}$). The method iteratively searches for which test convergent point minimizes the ${\tau}$ components of proper motion for the input sample. Jones's and de Bruijne's $t^2$ value can be treated statistically as the classic ${\chi}^2$ \citep{Bevington92}. In its iterative search for the group convergent point, the method will reject stars contributing the most to the $t^2$ statistic, until the position with lowest $t^2$ value corresponds to a sufficiently high $\chi^2$ probability that the best convergent point cannot be statistically rejected. For a statistical rejection threshold, I adopt a 5\% level of significance (i.e. 5\% probability of falsely rejecting the null hypothesis) following \citet{Trumpler53}. The TWA stars are sufficiently convergent, and the proper motion errors for the faint members are large enough, that a convergent point can be determined for all the classic TWA members (\#1-13) with a low $\chi^2_{\nu}$ ($\chi^2$/$\nu$ = 15.9/13; $\chi^2$ probability = 25\%). For internal velocity dispersions of $\sigma_v$\,$>$\,0.6\,km\,s$^{-1}$, the method is able to find a convergent point with $\chi^2$ probability of $>$5\% without rejecting any of the classic members. If $\sigma_v$\,=\,0.6\,km\,s$^{-1}$\, is adopted, TWA 6 (which contributes the most to the $t^2$ statistic) is rejected, and a sound solution is found with the other nuclear members ($\chi^2$ probability = 21\%).
The internal velocity dispersion is probably near $\sigma_v$\,$\simeq$\,1\,km\,s$^{-1}$, and with this adopted velocity dispersion, 33\% of the classical members contribute $\Delta$$t^2$ $>$ 1 (similar to how MF01 estimate the velocity dispersion). Hence, there is no good reason to remove TWA 6 in the hunt for a statistically satisfactory convergent point for TWA 1-13. I will determine a more refined estimate of the velocity dispersion for TWA in \S\ref{distance}. The ability of the technique to give a statistically sound convergent point solution ($\chi^2_{\nu}$ $\simeq$ 1) with $\sigma_v$\,=\,1\,km\,s$^{-1}$\, already suggests that the velocity dispersion of TWA is similar to that of nearby OB associations \citep{Madsen02}. In Fig. \ref{fig:cvp}, I plot the convergent points for subsamples of the TW Hya association, as well as previous determinations from the literature. I also plot the convergent points for subsamples of the TWA 14-25 membership in Fig. \ref{fig:cvp}. The confidence regions of these subsamples are roughly twice as large as that for TWA 1-13, but not wholly inconsistent given the large error bars. Much of the positional deviance of these subsample convergent points is due to TWA 22, which may not be a kinematic TWA member (\S\ref{distance}). From Fig. \ref{fig:cvp}, one can conclude that the convergent point for TWA 1-13 (within the dashed confidence regions) agrees well with that predicted by the TWA space motion vectors found by R03 and SZB03. The TWA vertex found by MF01 is just outside of the 95\% confidence region, and seems to be deviant when compared to the values from R03, SZB03, F01, and the results of this convergent point analysis. This fact, combined with the finding that most of the stars in the MF01 convergent point analysis are not pre-MS \citep{Song02}, suggests that their convergent point and dynamical age for the TWA (8.3 Myr) are not valid. I will discuss the expansion age further in \S\ref{expansion}.
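The geometry underlying the convergent point analysis reduces to standard spherical trigonometry. The sketch below (a simplified illustration, not the actual $t^2$ grid code) rotates a star's proper motion into the components toward ($\mu_{\upsilon}$) and perpendicular to ($\mu_{\tau}$) a trial convergent point, and evaluates the moving-cluster quantities applied to 2M1207 later in the text; the 2M1207 coordinates used here are approximate, and the sign convention for $\mu_{\tau}$ differs between authors.

```python
import math

def moving_cluster(ra, dec, pm_ra, pm_dec, ra_cp, dec_cp, v_group, A=4.74047):
    """Moving-cluster quantities for one star (angles in degrees, proper
    motions in mas/yr, group speed in km/s). Returns (mu_upsilon, mu_tau,
    lambda in deg, predicted v_rad in km/s, cluster parallax in mas)."""
    ra, dec, ra_cp, dec_cp = (math.radians(x) for x in (ra, dec, ra_cp, dec_cp))
    dra = ra_cp - ra
    # Position angle (from north through east) of the great circle
    # joining the star to the trial convergent point.
    psi = math.atan2(math.cos(dec_cp) * math.sin(dra),
                     math.cos(dec) * math.sin(dec_cp)
                     - math.sin(dec) * math.cos(dec_cp) * math.cos(dra))
    mu_up = pm_ra * math.sin(psi) + pm_dec * math.cos(psi)   # toward the CP
    mu_tau = pm_dec * math.sin(psi) - pm_ra * math.cos(psi)  # perpendicular
    # Angular separation lambda between the star and the convergent point.
    lam = math.acos(math.sin(dec) * math.sin(dec_cp)
                    + math.cos(dec) * math.cos(dec_cp) * math.cos(dra))
    v_rad = v_group * math.cos(lam)               # predicted radial velocity
    plx = A * mu_up / (v_group * math.sin(lam))   # cluster parallax (mas)
    return mu_up, mu_tau, math.degrees(lam), v_rad, plx

# 2M1207 (approximate coordinates), adopted convergent point and group speed:
print(moving_cluster(181.89, -39.55, -72.0, -22.0, 103.2, -30.7, 21.3))
# Reproduces mu_upsilon ~ 75 mas/yr, |mu_tau| ~ 2 mas/yr, lambda ~ 62.9 deg,
# a predicted v_rad ~ +9.7 km/s, and a cluster parallax ~ 18.8 mas (d ~ 53 pc).
```

Only the magnitude of $\mu_{\tau}$ enters the $t^2$ statistic, so the sign convention chosen here is immaterial.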
After considering the agreement between the TWA 1-13 convergent point and that inferred from the mean TWA space motions from R03 and SZB03, I adopted the following fiducial TWA parameters. For the group vertex, I took the weighted mean of the vertices from my convergent point analysis of TWA 1-13 ($\alpha, \delta$ = 100$^{\circ}$\,$\pm$\,10$^{\circ}$, {--28$^{\circ}$\,$\pm$\,4$^{\circ}$}), and the individual vertices inferred from the space motion vectors for TWA 1, 4, 9, and 11 \citep[using eqn. 10 of ][]{deBruijne99a}. This analysis assumes zero association expansion, and assumes that there is no significant offset between spectroscopic and physical radial velocities -- both of which are acceptable assumptions at this level of accuracy. The best estimate of the convergent point for the TWA is calculated to be ($\alpha$\,=\,{103$^{\circ}$.2\,$\pm$\,1$^{\circ}$.5}, $\delta$\,=\,{--30$^{\circ}$.7\,$\pm$\,1$^{\circ}$.5}). For the mean speed of the classic TWA membership, I adopt the weighted mean barycentric speed ($v$\,=\,21.3\,$\pm$\,1.3\,(s.e.m.)\,km\,s$^{-1}$) of TWA 1, 4, 9, and 11, using their astrometry and radial velocities in Table \ref{tab:TWA}, and weighted mean Hipparcos and Tycho parallaxes. \subsection{Is 2M1207 a TWA Member? \label{member}} Given the adopted convergent point solution for the ``classic'' TWA members, and a proper motion for 2M1207, one can estimate a membership probability and predict the star's radial velocity. Using the updated proper motion for 2M1207 (\S\ref{data}), I find that most of the motion is indeed pointed towards the convergent point ($\mu_{\upsilon}$\,=\,75\,$\pm$\,7\,mas\,yr$^{-1}$) and very little of it is in the perpendicular direction ($\mu_{\tau}$\,=\,2\,$\pm$\,8\,mas\,yr$^{-1}$). Using the membership probability equation from \citet[][ his eqn. 23]{deBruijne99a}, and adopting a mean cluster distance of 50\,pc and velocity dispersion of 1\,km\,s$^{-1}$, I estimate a membership probability of 98\%. 
This membership probability should be interpreted as: given the proper motion errors, 98\% of bona fide TWA members are expected to have $\mu_{\tau}$\, values more deviant than 2M1207. That is, the proper motion of 2M1207 is consistent with the null hypothesis ($\mu_{\tau}$\,=\,0) for an ``ideal'' member. One can also use the predicted and observed radial velocity as a check of the moving cluster method. Assuming parallel motion among group members, the method predicts the radial velocity as $v_{rad}\,=\,v\,\cos\lambda$, where $v$\, is the speed of the group, and $\lambda$ is the angular separation (62$^{\circ}$.9\,$\pm$\,1$^{\circ}$.5) between 2M1207 and the convergent point. The predicted radial velocity for 2M1207 (+9.7\,$\pm$\,1.6\,km\,s$^{-1}$) is within 0.6$\sigma$ of the observed radial velocity measured by \citet[][+11.2\,$\pm$\,2\,km\,s$^{-1}$]{Mohanty03}. Both the proper motion and radial velocity data for 2M1207 are quantitatively consistent with TWA membership, and its evidence of membership is as strong as that for most of the classical members. \subsection{Distances \label{distance}} \subsubsection{The Distance to 2M1207} If a star belongs to a moving group, its proper motion can be used to estimate its distance. The star's moving cluster parallax ($\varpi$) is calculated as $\varpi\,=\,A\,\mu_{\upsilon}/(v\,\sin\lambda)$, where $\mu_{\upsilon}$, $v$, and $\lambda$ are as described before, and $A$ (= 4.74047) is the astronomical unit expressed in the convenient units of km\,yr\,s$^{-1}$ \citep{deBruijne99a}. Using the values (and uncertainties) for $\mu_{\upsilon}$, $v$, and $\lambda$ as given in \S\ref{data} and \S\ref{spacemotion}, I calculate a cluster parallax for 2M1207 of $\varpi$\,=\,18.8\,$\pm$\,2.3\,mas, or a corresponding distance of $d$\,=\,53\,$\pm$\,6 pc. The only published distance estimates to 2M1207 are $\sim$70\,pc \citep[][]{Chauvin04} and 70\,$\pm$\,20\,pc \citep[][]{Chauvin05}.
Both are photometric distance estimates which force 2M1207A to be an unreddened M8 star on a 10 Myr-old isochrone. Considering the variations between published evolutionary tracks, especially for stars which are young and low-mass \citep{Hillenbrand04}, the cluster parallax distance should be considered an improved estimate. \subsubsection{Distances to TWA Objects: Implications and Final Membership} The agreement between the trigonometric parallaxes for TWA 1, 4, 9, and 11, and their cluster parallaxes is excellent, as shown in Table \ref{tab:par}. All the parallaxes are within 2$\sigma$ of each other, with an insignificant weighted-mean zero-point offset of --0.8\,$\pm$\,1.2\,mas, in the sense ``cluster minus trigonometric''. Cluster parallax distances for all TWA member candidates are given in column 9 of Table \ref{tab:TWA}. There is a small caveat regarding the cluster parallax distances in Table \ref{tab:TWA} and Fig. \ref{fig:ra_dist} that is worth elaborating upon. There have been suggestions that some TWA stars may actually be background members of the $\sim$16-Myr-old Lower Centaurus-Crux (LCC) OB subgroup at $d$ $\simeq$ 110\,pc \citep[e.g.][]{Mamajek01,Mamajek02}. The space motion vectors for TWA and LCC are very similar, agreeing to within roughly 5\,km\,s$^{-1}$\, \citep{Mamajek00}. If one calculates cluster parallax distances to ``TWA'' objects using the space motion vector of LCC \citep{Madsen02}, the mean distances in column 7 of Table 1 change by less than $\pm$5\% (rms). This is smaller than the quoted distance errors (typically $\sim$11\%). Hence, any conclusions based on the distribution of cluster parallax distances (i.e. Fig. \ref{fig:ra_dist}) are very insensitive to whether individual ``TWA'' objects are co-moving with either TWA or LCC. I plot the cluster parallax distances versus Right Ascension in Fig. \ref{fig:ra_dist}. Fig.
\ref{fig:ra_dist} illustrates that there appears to be a gap in the distances between LCC members and ``classic'' TWA members near $d$\,=\,85\,pc, effectively splitting the groups spatially. Hence, TWA 12, 17, 18, 19, and 24 have distances more consistent with LCC than the other TWA members. Previous investigators (MF01, R03) have suggested that the TWA members are clustered at distances of $\sim$70\,pc; however, Fig. \ref{fig:ra_dist} suggests that what is really being seen is two detached populations of young stars: one at $\sim$50\,pc (TWA) and one at $\sim$110\,pc (LCC). The agreement between the observed and predicted radial velocities for TWA 12, 17, 18, 19, and 24 is probably due to the similarity in space motion between LCC and TWA (see Fig. \ref{fig:cvp}). As it is often not clear how ``TWA'' candidate members have been retained (or rejected) in past studies, there may be an observational bias present for the radial velocities of these more distant objects to agree well with that of the foreground members. As Fig. \ref{fig:ra_dist} suggests that some of the ``TWA'' stars may be more distant members of LCC, it is worth reexamining the vertex of the remaining TWA members, including the new brown dwarf members TWA 26-28. If the convergent point method (\S\ref{spacemotion}) is run on the remaining members (again assuming a mean distance of 50\,pc and $\sigma_v$\,=\,1\,km\,s$^{-1}$), a somewhat poor vertex solution is found ($\chi^2$/$\nu$ = 39.8/23; $\chi^2$ probability = 1.6\%). The biggest contributor to the $\chi^2$ (contributing a third of the quantity) is the closest TWA candidate -- TWA 22. If TWA 22 is dropped, the fitted convergent point shifts by a few $\sigma$ in position, and a much more statistically sound vertex is found: $\alpha\,=\,100^{\circ}.5\,\pm\,5^{\circ}.0$, $\delta\,=\,-27^{\circ}.9\,\pm\,2^{\circ}.3$, $\chi^2$/$\nu$ = 17.5/22; $\chi^2$ probability = 74\%.
Rejecting further members has negligible effect on the vertex, and only pushes the $\chi^2$ probability to absurdly high levels. This remarkable reduction in $\chi^2$, upon removal of TWA 22 from the sample, suggests that TWA 22 should probably be excluded as a TWA member. Clearly it is a nearby, young star; however, it does not appear to be a kinematic TWA member. In the initial calculation of membership probabilities (column 8 of Table \ref{tab:TWA}; \S\ref{member}), TWA 22 had $P$\,=\,2\% -- by far the lowest. This new {\it a posteriori} convergent point estimate is currently the best that can be done purely geometrically, i.e. with proper motions alone. It is in excellent agreement with the original TWA 1-13 vertex determination, and with the individual vertices for TWA 1, 4, 9, and 11. With the sample of ``final'' TWA members (denoted ``Y'' or ``Y?'' in column 11 of Table 1), one can independently estimate the velocity dispersion $\sigma_v$ of TWA based on how well the proper motions determine the convergent point. Considering the range of $\chi^2$ values for an acceptable fit \citep[see discussion in ][]{Gould03}, the final estimate of the velocity dispersion of TWA, from the proper motion data alone, is $\sigma_v$\,=\,$0.8^{+0.3}_{-0.2}$\,km\,s$^{-1}$. By adopting $\sigma_v$\,=\,0.8\,km\,s$^{-1}$, the uncertainties on the proper motion-determined convergent point decrease to $\sigma_{\alpha}$\,=\,4$^{\circ}$.2 and $\sigma_{\delta}$\,=\,1$^{\circ}$.9. Using the revised, proper motion-based convergent point estimate, and the new estimate of the velocity dispersion of the group, has negligible effect on the distance determinations. For these reasons (and clarity of presentation), I have chosen not to list the reevaluated quantities.
After excluding TWA 12, 17, 18, 19, 22, and 24 as TWA members, I characterize the final TWA membership with probability plots \citep[so as to be immune to the effects of outliers;][]{Lutz80}: $d_{TWA}$ = 49\,$\pm$\,3\,(s.e.m.)\,$\pm$\,12\,(1$\sigma$)\,pc, $\alpha_{TWA}$ = 174$^{\circ}$.8\,$\pm$\,12$^{\circ}$.3\,(1$\sigma$), and $\delta_{TWA}$ = --37$^{\circ}$.1\,$\pm$\,7$^{\circ}$.5\,(1$\sigma$). The projected radii in $\alpha$ and $\delta$ correspond to $\sim$7\,pc at $d$ = 49\,pc. Taking into account the typical distance errors ($\sim$5\,pc) and the observed distance dispersion ($\sim$12\,pc), the data are consistent with the radius along the line of sight being $\sim$10\,pc ($\sim$40\% larger than the projected width of $\sim$7\,pc). All three of the new brown dwarf members (TWA 26-28; red open circles in Fig. \ref{fig:ra_dist}) lie between $d$\,=\,40-53\,pc, close to the classic TWA membership. With the membership and characteristics of the TWA better defined, one can ask the question: are there other stars in the vicinity whose astrometric data also suggest that they are TWA members? Dozens of other young, low-mass, field stars have been proposed as TWA members, enough so that assessing their membership is probably worth a separate study. A question that can be answered here is: {\it are there any high-mass TWA members besides HR 4796?} The quoted magnitude limits of the {\it Hipparcos} catalog suggest that it should be complete for unreddened A and B-type stars on, or above, the main sequence within $\sim$85\,pc \citep{Perryman97}. I queried the {\it Hipparcos} database for stars within a 15$^{\circ}$ radius centered on the TWA central position given earlier. I retained the 31 stars with parallaxes of $>$10\,mas and $B-V$ colors of $<$0.30 (consistent with unreddened stars earlier than F0). I calculated membership probabilities and predicted cluster parallaxes for these stars, in the same manner that was done for 2M1207 in \S\ref{member}.
Of these 31 stars, only eight had membership probabilities of $>$5\%. For these eight, I compared the moving cluster parallax values to the {\it Hipparcos} trigonometric parallaxes. Only three of these eight stars had agreement between cluster and trigonometric parallaxes at better than 2$\sigma$: HR 4796 (known member), HIP 54477 (A1V, $d$\,=\,58\,pc), and HIP 53484 (F0V, $d$\,=\,97\,pc). HIP 53484 is $\sim$4$\sigma$ more distant than the mean TWA distance, and nearly $\sim$15$^{\circ}$ from the TWA centroid position, so I reject its TWA membership, and discuss it no further. HIP 54477 is not so easy to dismiss as a TWA member. This A1 dwarf has a high TWA membership probability (90\%), and its trigonometric parallax (17.2\,$\pm$\,0.7\,mas) agrees fairly well with its predicted TWA cluster parallax (20.8\,$\pm$\,1.6\,mas). Its projected position is in the core region near TWA 2, 4, and 8. At $d$ $\simeq$ 56\,pc ({\it Hipparcos}) or $d$ $\simeq$ 48\,pc (predicted cluster parallax distance), it would be slightly further than TWA 2, 4, and 8 (all of which have $d$ $\simeq$ 40\,pc). The radial velocity of HIP 54477 is not well constrained \citep[$v_{rad}$\,=\,+16.2\,$\pm$\,10\,km\,s$^{-1}$;][]{Barbier-Brossat00}, but consistent with that for a TWA member at its position (+12.2\,$\pm$\,1.6\,km\,s$^{-1}$). The star appears to be close to the zero-age main sequence, and so could be as young as the other TWA members. Further observations should be undertaken to see if the object has any low-mass companions which may further constrain its age. The membership of HIP 54477 in the TWA cannot be rejected on kinematic grounds, but one would certainly like to see further data before claiming that it is a true TWA member. 
In summary, {\it the TWA appears to contain at least one (HR 4796), but possibly two (HIP 54477), stars hotter than F0 in its membership.} \subsection{Expansion Age of TWA \label{expansion}} One may be able to put an interesting astrophysical constraint on the age of the 2M1207 system through calculating an ``expansion age'' for the TW Hya association. MF01 claimed that the kinematics of the TWA are consistent with an expansion age of 8.3 Myr. The analysis of MF01 included tens of X-ray-selected stars which have since been shown not to be pre-MS stars \citep[SZB03,][]{Torres03}. As the majority of the stars in the MF01 analysis are not genetically related to TW Hya or its cohort, {\it this expansion age is not a useful constraint on the age of the TWA or 2M1207.} With the best proper motion and radial velocity data currently available, I investigate whether an expansion is still evident in the TWA using a Blaauw expansion model. For discussions on trying to detect the linear expansions of unbound associations, see \citet{Blaauw56,Blaauw64}, \citet{Bertiau58}, \citet{Jones71}, \citet{Brown97}, \citet{Dravins99}, and \citet{Madsen02}. \subsubsection{The Blaauw Linear Expansion Model \label{Blaauw_model}} Linear expansion of an association cannot be demonstrated with proper motions alone \citep{Blaauw64,Brown97}. A group of stars with generally parallel motion vectors, but with a small linear expansion, will simply appear to converge to a point further away (higher $\lambda$) than that demonstrated by a group with strictly parallel motion vectors. The classical convergent point method equations which assume parallel motion are slightly modified to allow for expansion. 
In the \citet{Blaauw64} linear expansion model, the individual cluster parallax ($\varpi$) for an association member is calculated as: \begin{equation} \varpi\,=\,\frac{\mu_{\upsilon}\,A\,}{\,v^{\prime}\,\sin\,\lambda^{\prime}} \end{equation} and the radial velocity is predicted to follow the relation: \begin{equation} v_{rad}\,=\,v^{\prime}\,\cos\,\lambda^\prime\,+\,\kappa\,d\,+\,K \end{equation} where $A$ is the AU as previously defined, $\mu_{\upsilon}$ is the proper motion directed toward the convergent point, $\kappa$ is the expansion term in units of km\,s$^{-1}$\,pc$^{-1}$, $d$ is the distance to the star in pc (where $d_{pc}$\,$\simeq$\,1000\,$\varpi_{mas}^{-1}$), and $K$ is a zero-point term which may reflect gravitational redshift or convective blueshift terms \citep[see][for a detailed discussion]{Madsen03}. The ``expansion age'' $\tau$ of the association in Myr is: \begin{equation} \tau\,=\,\gamma^{-1}\,\kappa^{-1} \end{equation} where $\gamma$ is the conversion factor 1.0227\,pc\,Myr$^{-1}$\,km$^{-1}$\,s. The cluster speed $v^{\prime}$ and star-vertex angular separation $\lambda^\prime$ are defined differently than in the standard case of parallel motions. In the Blaauw model, $v^{\prime}$ is the barycentric speed of a hypothetical association member participating in the expansion, situated at the barycenter of our solar system \citep[see Fig. 3 of][]{Blaauw64}, and $\lambda^\prime$ is the angular separation between a star and the association convergent point as defined solely by the stars' proper motions. If an association is expanding, the convergent point determined from the mean 3D space motion of its members (the ``centroid'' space motion) will define a different ``convergent point'' than the vertex determined through a convergent point analysis of the stars' proper motions. To test whether the association is expanding or not, and possibly assign an ``expansion age'', I analyze the data for TWA members two ways. 
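Read as a recipe, the three relations above are straightforward to evaluate. The following Python sketch (illustrative only; the function names and constants are my own, not from the paper) encodes them, using the $\kappa$ value derived later in the text as a consistency check on the expansion-age formula.

```python
# Sketch of the Blaauw (1964) linear-expansion relations.
# A converts (mas/yr)/mas into km/s; gamma is the pc Myr^-1 km^-1 s factor
# quoted in the text.
import math

A = 4.74047      # astrometric constant, km s^-1 (AU/yr equivalent)
GAMMA = 1.0227   # pc Myr^-1 km^-1 s

def cluster_parallax(mu_upsilon, v_prime, lam_deg):
    """Cluster parallax (mas) from the proper motion toward the vertex
    (mas/yr), cluster speed v' (km/s), and star-vertex angle lambda' (deg)."""
    return mu_upsilon * A / (v_prime * math.sin(math.radians(lam_deg)))

def predicted_vrad(v_prime, lam_deg, kappa, d_pc, K=0.0):
    """Predicted radial velocity (km/s) including the expansion term kappa*d."""
    return v_prime * math.cos(math.radians(lam_deg)) + kappa * d_pc + K

def expansion_age(kappa):
    """Expansion age in Myr from kappa (km s^-1 pc^-1)."""
    return 1.0 / (GAMMA * kappa)

# With the kappa value found later in the text (~0.049 km/s/pc),
# the implied expansion age is ~20 Myr, as quoted there.
print(round(expansion_age(0.049)))  # -> 20
```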
First, I compare model convergent points for varying expansion ages to the observed convergent point. Second, I use the available radial velocities and cluster parallax distances to directly measure the expansion rate. \subsubsection{Expanding versus Non-expanding Association Convergent Point \label{cvp_compare}} In Fig. \ref{fig:cvp}, I plot the variation in the convergent point (long-dashed line) if one takes the TWA ``centroid'' space motion vector (using the mean velocity vector for TWA 1, 4, 9, and 11), and adds linear expansion with characteristic expansion timescales. In \S\ref{distance}, I determined that the best convergent point for the final TWA membership using the proper motion data alone was ($\alpha\,=\,100^{\circ}.5\,\pm\,4^{\circ}.2$, $\delta\,=\,-27^{\circ}.9\,\pm\,1^{\circ}.9$). Predicted expansion model convergent points for ages 0-100 Myr were statistically compared to the observed convergent point error ellipse. From this analysis alone, {\it one can reject expansion ages of $<$7\,Myr at 5\% significance, and $<$6\,Myr at 1\% significance}. The close agreement between the TWA vertex found by the convergent point method and the vertices for the four individual TWA members with known UVW vectors (see Fig. \ref{fig:cvp}), suggests that any kinematic expansion must be very subtle, and perhaps not even demonstrable with existing data. It is worth exploring whether the radial velocity data can help either determine a significant expansion age, or at least place a more interesting lower limit. \subsubsection{A ``Hubble Diagram'' for TWA?\label{Hubble}} \citet{Blaauw56,Blaauw64} suggested that linear expansion or contraction may be detectable if deviations are present between the observed spectroscopic radial velocities, and those predicted from the moving group method for parallel motion. If a significant linear expansion term $\kappa$ is present, then the Blaauw expansion model equations (Eqns. 
1 and 2) predict that one should see a correlation between distance $d$ and the difference ($v_{rad}\,-\,v\,\cos\,\lambda$) between the observed and predicted spectroscopic radial velocities. As the radius of the TWA is $\sim$10\,pc and the isochronal age is $\sim$10$^7$\,yr, one expects that $\kappa$ should be of order $\sim$0.1\,km\,s$^{-1}$\,pc$^{-1}$, if the stars are linearly expanding from a point. The effects of expansion on cluster parallax distances are usually negligible \citep[e.g. as shown by comparing trigonometric parallaxes to cluster parallaxes; e.g.][]{deBruijne99b,Mamajek02,Madsen02}. For the case of TWA, the change in cluster parallax distances, between assuming parallel motion and linear expansion, is $<$6\% rms for expansion ages of $>$5\,Myr, and $<$3\% rms for $>$10 Myr. Note that expansion ages of $<$6\,Myr were effectively ruled out in \S\ref{cvp_compare}, and the typical distance errors from other sources (e.g. proper motions) are $\sim$11\%. One can then conclude that the effects of association expansion (if any) on the distances and distance errors quoted in this study are negligible. In order to detect any possible expansion by fitting the Blaauw model to the observations, I adopt the convergent point defined solely using the proper motion data. I estimate $v^{\prime}$ for the four TWA members (TWA 1, 4, 9, 11) with trigonometric parallaxes through the equation: \begin{equation} v^{\prime}\,=\,\frac{\mu_{\upsilon}\,A\,}{\,\varpi\,\sin\,\lambda^{\prime}} \end{equation} The mean value for the four TWA members is $v^{\prime}$ = 20.4\,$\pm$\,2.2\,km\,s$^{-1}$. Already, one notices that $v$ (21.3\,$\pm$\,1.3\,km\,s$^{-1}$) is indistinguishable from $v^{\prime}$, which is consistent with no expansion. In order to see whether a non-zero $\kappa$ coefficient is detectable, I plot in Fig. 
\ref{fig:rv_kappa} the data in the format $d$ versus ($v_{rad} - v^{\prime}\,\cos\,\lambda^{\prime}$) so that one can solve for the slope $\kappa$ and intercept $K$: \begin{equation} v_{rad}\,-\,v^{\prime}\,\cos\,\lambda^\prime\,=\,\kappa\,d\,+\,K \end{equation} Plotted in this form, any expansion will manifest itself as a significantly positive slope. The individual distance estimates for the expansion model are calculated as: \begin{equation} d_{pc}\,=\,\frac{1000\,v^{\prime}\,\sin\,\lambda^{\prime}}{A\,\mu_{\upsilon}} \end{equation} As seen in Fig. \ref{fig:rv_kappa}, it is a success of the kinematic model that the ($v_{rad} - v^{\prime}\,\cos\,\lambda^{\prime}$) values are crowded near zero at all. Recall that the {\it predicted} radial velocity component ($v^{\prime}\,\cos\,\lambda^{\prime}$) is totally independent of {\it any} measured radial velocity data, i.e. it is solely dependent on the convergent point position (via $\lambda^{\prime}$), and the trigonometric parallax distances and proper motions for TWA 1, 4, 9, and 11 (via $v^{\prime}$). This agreement further strengthens the interpretation that the TWA constitutes a bona fide kinematic group. The errors in distance and velocity difference have some peculiarities worth mentioning. The distance errors tend to scale with distance, i.e. $\sigma_d$\,$\propto$\,$d$. Secondly, the distances will all be affected systematically if the convergent point is in error. Finally, the linear fit of the data to equation \#5 using weighting in both variables \citep[using {\it fitexy} from Numerical Recipes; ][]{Press92} gives an uncomfortably good fit ($\chi^2$/$\nu$ = 7.8/19), presumably due to overestimated errors in either the observed radial velocities, group speed, or convergent point. This weighted fit finds $\kappa$\,=\,+0.036\,$\pm$\,0.039\,km\,s$^{-1}$\,pc$^{-1}$ and $K$\,=\,+1.07\,$\pm$\,0.51\,km\,s$^{-1}$, but again, due to the very low $\chi^2$, it is unclear how much to believe the errors. 
To avoid overinterpreting a derived slope $\kappa$ whose error bars may not be believable, I fit an unweighted, ordinary least-squares line to the data, with the distance $d$ as the independent variable, and the velocity difference ($v_{rad} - v^{\prime}\,\cos\,\lambda^{\prime}$) as the dependent variable. I do this for the 19 TWA ``final'' members whose radial velocity errors are $<$2.5\,km\,s$^{-1}$. Since the sample is small, I use bootstrap and jackknife resampling to help determine the error in the derived slope \citep{Feigelson92}, although the agreement with the errors derived from the asymptotic formulae is excellent. The least-squares fit finds $\kappa$\,=\,+0.049\,$\pm$\,0.027\,km\,s$^{-1}$\,pc$^{-1}$ and $K$\,=\,+1.20\,$\pm$\,0.36\,km\,s$^{-1}$\, (evaluated at the mean distance). Although the sign of the slope is consistent with expansion, the correlation is very weak (Pearson $r$ = 0.42\,$\pm$\,0.19). The basic result is unchanged whether all of the TWA members are retained, independent of radial velocity error, or if only the 8 TWA stars with radial velocity errors of $<$2\,km\,s$^{-1}$\, are retained\footnote{A non-zero velocity offset $K$ should not cause too much alarm. Part of the offset may be due to gravitational redshift, which for the typical $\sim$10\,Myr-old TWA member with mass $\sim$0.5\,M$_{\odot}$ should be of order $\sim$0.4\,km\,s$^{-1}$ \citep[][using radii from the D'Antona \& Mazzitelli 1997 tracks]{Greenstein67}, compared to $\sim$0.6\,km\,s$^{-1}$\, for the Sun. An unexplained radial velocity offset of 0.4\,km\,s$^{-1}$\, appears to be present among low-mass Hyades members \citep{Gunn88}, even after accounting for gravitational redshift. The offsets between measured ``spectroscopic'' radial velocities and ``astrometric'' radial velocities are difficult to quantify, but should be more easily measurable for larger samples of stars with future astrometric missions \citep{Dravins99,Madsen03}.}. 
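The unweighted fit with bootstrap errors described above can be sketched as follows. The data here are synthetic stand-ins (the actual TWA distances and velocity differences are not reproduced), so only the procedure, not the numbers, mirrors the analysis.

```python
# Sketch of an unweighted least-squares fit for the expansion term kappa
# (slope of velocity difference vs. distance), with a bootstrap error on
# the slope. All inputs below are synthetic, not the actual TWA values.
import random
import statistics

def ols_slope_intercept(x, y):
    """Ordinary least-squares slope and intercept, x as the independent variable."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

def bootstrap_slope_err(x, y, n_boot=2000, seed=1):
    """Bootstrap standard error of the OLS slope."""
    rng = random.Random(seed)
    idx = list(range(len(x)))
    slopes = []
    for _ in range(n_boot):
        sample = [rng.choice(idx) for _ in idx]
        xs = [x[i] for i in sample]
        ys = [y[i] for i in sample]
        if len(set(xs)) > 1:                 # skip degenerate resamples
            slopes.append(ols_slope_intercept(xs, ys)[0])
    return statistics.stdev(slopes)

# Synthetic example: true kappa = 0.05 km/s/pc plus 1 km/s noise, d = 35-70 pc.
rng = random.Random(0)
d = [35.0 + 35.0 * i / 18.0 for i in range(19)]
dv = [0.05 * di + 1.0 + rng.gauss(0.0, 1.0) for di in d]
kappa, K = ols_slope_intercept(d, dv)
kappa_err = bootstrap_slope_err(d, dv)
```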
Although the slope $\kappa$ is small, one can state that it is positive at 95\% confidence, i.e. that the data are consistent with some expansion. Unfortunately, the derived expansion age has very large errors, and is of limited utility: $\tau\,=\,\gamma^{-1}\,\kappa^{-1}$\,$\simeq$\,20$^{+25}_{-7}$\,Myr. The probability distribution function of $\kappa$ {\it excludes} expansion ages of $<$8.7\,Myr at 99\% confidence, and $<$10.4\,Myr at 95\% confidence. The confidence intervals on the expansion age are very wide: 13-43 Myr (68\% CL) and 9.5-262 Myr (90\% CL), with $\sim$4\% of the probability distribution corresponding to contraction. The expansion age advocated by MF01 (8.3\,Myr) can, however, be statistically rejected. {\it It does not seem appropriate at this time to quote an unambiguous ``expansion age'' for the TW Hya association, but rather to quote the lower limit ($\gtrsim$10.4\,Myr)}. \section{DISCUSSION \label{discussion}} With an improved distance estimate, one can revise the absolute magnitude, luminosity, and inferred mass estimates for 2M1207A and B. The properties of 2M1207 A and B from the literature, and derived here, are listed in Table \ref{tab:phot}. Using the photometry from \citet{Chauvin04} and the revised distance estimate from the moving cluster method, the absolute magnitudes of 2M1207A and B are M$_{K}$(A)\,=\,8.32\,$\pm$\,0.27 and M$_{K}$(B)\,=\,13.30\,$\pm$\,0.29 mag, respectively. These are 0.6 mag fainter than one would derive using $d$ = 70\,pc (i.e. a factor of two intrinsically dimmer). I calculate luminosities using these absolute magnitudes, and the bolometric correction estimates of \citet{Golimowski04}. Using the constraints on luminosity and age \citep{Chauvin05}, I interpolate masses from the non-gray evolutionary tracks of \citet{Burrows97}, the DUSTY tracks of \citet{Chabrier00}, and the COND tracks of \citet{Baraffe03}. 
For all three sets of evolutionary tracks, the masses of A and B cluster near $\sim$21\,M$_{Jup}$\, and $\sim$3-4\,M$_{Jup}$. Table \ref{tab:phot} also lists the mass extrema from the 1$\sigma$ extrema in both luminosity and age (i.e. the low mass end is for the -1$\sigma$ luminosity {\it and} age, and the high mass end is the +1$\sigma$ luminosity {\it and} age). With the previous distance estimates ($\sim$70\,pc), \citet{Chauvin04} estimated masses of 25\,M$_{Jup}$\, and 5\,M$_{Jup}$\, for A and B, respectively. For all three models, the inferred upper mass limit of 2M1207 B ($\sim$5-7\,M$_{Jup}$) is less than half of the deuterium-burning mass limit \citep[$\sim$13\,M$_{Jup}$;][]{Burrows97}, and less than half of the maximum mass of Doppler velocity planets \citep[$\sim$15\,M$_{Jup}$;][]{Marcy05}. Hence, 2M1207 B could be considered a ``planet'' on the merits of its inferred mass. The angular separation of AB (778 mas) measured by \citet{Chauvin04} translates into a projected physical separation of 41\,$\pm$\,5 AU at the revised distance (similar to the semi-major axis of Pluto). If the observed separation is assumed to be equal to the semi-major axis, and one adopts masses of 21 and 3.5\,M$_{Jup}$\, for A and B, then one naively predicts an orbital period of $\sim$1700\,yr. The pair has a high mass ratio ($q$ $\sim$ 0.2), and B is massive enough to force the primary to be $\sim$6\,AU from the system barycenter. A solid detection of orbital evolution (or any hint of the dynamical masses of the components) will probably not be reported anytime soon. One cannot rule out the possibility that the TWA is expanding on a timescale longer than its isochronal age ($>$10\,Myr). The slow, or negligible, expansion may also be a clue that the proto-TWA molecular cloud complex was perhaps not a small pc-sized core with tens of stars, similar to those seen in Taurus (e.g. LDN 1551). 
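The separation and period arithmetic of this paragraph can be checked directly. The sketch below assumes the $\sim$53\,pc distance implied by the quoted 41\,AU projected separation at 778\,mas, and the adopted component masses of $\sim$21 and $\sim$3.5\,M$_{Jup}$; the Jupiter-to-solar mass ratio used is an approximate standard value, not a number from the paper.

```python
# Check of the projected-separation and orbital-period arithmetic,
# assuming (as in the text) that the observed separation equals the
# semi-major axis. The 53 pc distance is the value implied by the quoted
# 41 AU separation at 778 mas; MJUP_IN_MSUN is an approximate constant.
import math

MJUP_IN_MSUN = 1.0 / 1047.6          # Jupiter mass in solar masses (approx.)

sep_arcsec = 0.778                   # measured angular separation
d_pc = 53.0                          # revised cluster-parallax distance (assumed)
a_au = sep_arcsec * d_pc             # projected separation, ~41 AU

m_total = (21.0 + 3.5) * MJUP_IN_MSUN        # adopted masses of A + B
period_yr = math.sqrt(a_au ** 3 / m_total)   # Kepler's third law, ~1700 yr

# Barycenter offset of the primary: a * M_B / M_total, ~6 AU.
r_primary_au = a_au * 3.5 / (21.0 + 3.5)
```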
The TWA members may have formed in a series of small-N systems (N $\sim$ few stars) distributed along filaments, separated by a few pc, and with similar bulk motions. The TWA appears to be moving away from the LCC subgroup \citep{Mamajek00}, so it is conceivable that the proto-TWA cloudlets were simply fragments of the proto-LCC cloud, which owed their velocities to molecular cloud turbulence \citep{Feigelson96}. An alternative scenario is that the proto-TWA cloudlets were bright-rim clouds or cometary globules on the periphery of LCC $\sim$10-15 Myr ago, when presumably the LCC subgroup still had a few late-O stars \citep{deGeus92}. Such cloudlets could have been accelerated away from the LCC O-star population through the rocket effect \citep{Oort55}, and compressed to form stars due to radiation-driven implosion \citep{Bertoldi90}. The energy input from deceased LCC members (via UV light, winds, and supernovae) has probably dominated the energy input of the local interstellar medium over the past 10 Myr, and within 100\,pc, in the general direction of LCC and TWA \citep{Maiz-Apellaniz01}. Small-scale star-formation in cometary globules on the edge of OB associations has strong observational support \citep{Reipurth83,Ogura98}, and there is strong evidence for triggering by the massive stellar population \citep[e.g.][]{Kim05,Lee05}. A cometary globule formation scenario for TWA might explain a few observational quirks of the group, namely its location ($\sim$70\,pc away from the nearby LCC OB subgroup), age ($\sim$7 Myr younger than LCC), space motion vector \citep[directed $\sim$5\,km\,s$^{-1}$\,away from the LCC;][]{Mamajek00}, and low stellar density. The small, young stellar groups associated with $\eta$ Cha, $\epsilon$ Cha, and $\beta$ Pic show many of these same symptoms \citep{Mamajek99,Mamajek00,Ortega02,Jilinski05}, although the $\eta$ and $\epsilon$ Cha clusters appear to be more strongly bound than the TWA and $\beta$ Pic groups. 
Cloudlets analogous to those on the periphery of Vel OB2 \citep{Kim05} and Ori OB1 \citep{Lee05} may be the evolutionary predecessors of small, unbound, $\sim$10 Myr-old associations like TWA. That 2M1207 and the TWA formed in a region of rather low stellar density could explain how such a wide, low-mass binary system as 2M1207 could survive its birth environment intact. \acknowledgments EM is supported by a Clay Postdoctoral Fellowship from the Smithsonian Astrophysical Observatory. The author thanks the referee, Ronnie Hoogerwerf, for useful comments and criticisms which improved the paper. The author also thanks Glenn Schneider, Ralf-Dieter Scholz, Willie Torres, Jay Farihi, and Subu Mohanty for useful discussions, and Lissa Miller and Kevin Luhman for critiquing early drafts. This publication makes use of data products from the Two Micron All Sky Survey (2MASS), which is a joint project of the Univ. of Massachusetts and the IPAC/California Institute of Technology, funded by NASA and the NSF. SuperCOSMOS Sky Survey material is based on photographic data originating from the UK, Palomar and ESO Schmidt telescopes and is provided by the Wide-Field Astronomy Unit, Institute for Astronomy, University of Edinburgh. The GSC 2.2 is a joint project of STSci and the Osservatorio Astronomico di Torino. STSci is operated by AURA, for NASA under contract NAS5-26555. The participation of the Osservatorio Astronomico di Torino is supported by the Italian Council for Research in Astronomy. Additional support for GSC 2.2 was provided by ESO, ST-ECF, the International GEMINI project and the ESA Astrophysics Division. This work made extensive use of the SIMBAD database, operated at the CDS, Strasbourg, France.
\section{Introduction } The study of extensions is a theory that has developed from multiplicative groups \cite{Schreirer1926,Holder1893}, with applications ranging from representations of central simple algebras \cite{Brauer1928,Hasse1932} to topology \cite{Eilenberg1942}. In this article we will focus on extensions in an abelian category $\mathcal{C}$. In this context, an extension of an object $A$ by an object $C$ is a short exact sequence \[ \suc[A][][C] \] up to equivalence, where two exact sequences are equivalent if there is a morphism from one to another with identity morphisms at the ends. This kind of approach was first taken by R. Baer in 1934. In his work \cite{Baer1934,Baer1935}, Baer defined an addition on the class $\Ext[C][1][\,][A]$ of extensions of an abelian group $A$ by an abelian group $C$. His construction can be easily extended to abelian categories, where it is used to show that the class $\Ext[C][1][][A]$ has a natural structure of abelian group. For this reason, $\Ext[C][1][][A]$ is usually called the group of extensions of $A$ by $C$. Later on, H. Cartan and S. Eilenberg \cite{Cartan-Eilemberg}, using methods of homological algebra, showed that the first derived functor of the functor $\Hom[\mathcal{C}][C][-]$, respectively $\Hom[][-][A]$, is isomorphic to $\Ext[C][1][][-]$, respectively $\Ext[-][1][\mathcal{C}][A]$. This result marked the beginning of a series of research works looking for ways of constructing the derived functors of the Hom functor without using projective or injective objects, with the spirit that resolutions should be only a calculation tool for derived functors. One of these attempts, recorded in the work of D. Buchsbaum, B. Mitchell, S. Schanuel, S. Mac Lane, M.C.R. Butler, and G. Horrocks \cite{maclanehomology,buchbaumext,Extrel,mitchell}, was based on the ideas of N. Yoneda \cite{Yoneda1954,Yoneda1960}, defining what is known today as the theory of $n$-extensions and the functor now known as the Yoneda Ext. 
An $n$-extension of an object $A$ by an object $C$ is an exact sequence of length $n$ \[ \suc[A][M_{1}\rightarrow\cdots\rightarrow M_{n}][C] \] up to equivalence, where the equivalence of exact sequences of length $n>1$ is defined in a similar way as for length $1$. In this theory, the Baer sum can be extended to $n$-extensions, proving that the class $\Ext[C][n][\mathcal{C}][A]$ of $n$-extensions of $A$ by $C$ is an abelian group. Recently, the generalization of homological techniques such as Gorenstein or tilting objects to abstract contexts \cite{tilcotiltcorresp,relgor,colpi2007tilting,vcoupek2017cotilting}, such as abelian categories that do not necessarily have projectives or injectives, calls for the introduction of an Ext functor that can be used without restrictions. The only problem is that it is not clear if the rich properties of the homological Ext are also valid for the Yoneda Ext. The goal of this work is to take a further step by exploring some properties that the Yoneda Ext shares with the homological Ext. Namely, we will explore the following property that is well known for module categories: \begin{thm} \cite[Proposition 7.21]{rotman} Let $R$ be a ring, $M\in\Mod$, and $\{N_{i}\}_{i\in I}$ be a set of $R$-modules. Then, there exists an isomorphism \[ \Ext[\bigoplus_{i\in I}N_{i}][n][R][M]\cong\prod_{i\in I}\Ext[N_{i}][n][R][M]\mbox{.} \] \end{thm} The proof of this theorem can be extended to Ab4 abelian categories with enough projectives. Our goal will be to prove an analogous result for the Yoneda Ext without assuming the existence of enough projectives. Let us now describe the contents of this paper. Section 2 is devoted to reviewing the basic results of the theory of extensions by following the steps of B. Mitchell in \cite{mitchell}. In section 3 we prove the desired theorem. More precisely, we show that in an Ab4 abelian category we can build the desired bijections explicitly by using colimits. 
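For orientation, the theorem can be checked by hand in the category of abelian groups (which has enough projectives, so the homological and Yoneda Ext agree there). From the extension $0\rightarrow\mathbb{Z}\stackrel{n}{\rightarrow}\mathbb{Z}\rightarrow\mathbb{Z}/n\rightarrow0$ one computes $\Ext[\mathbb{Z}/n][1][\mathbb{Z}][\mathbb{Z}]\cong\mathbb{Z}/n$, and the theorem then specializes (this illustration is ours, written in the paper's $\Ext$ notation) to:

```latex
% Standard computation in Mod(Z): Ext^1_Z(Z/n, Z) = Z/n, so for any
% family {n_i}_{i \in I} of positive integers the theorem reads
\[
\Ext[\bigoplus_{i\in I}\mathbb{Z}/n_{i}][1][\mathbb{Z}][\mathbb{Z}]\cong\prod_{i\in I}\Ext[\mathbb{Z}/n_{i}][1][\mathbb{Z}][\mathbb{Z}]\cong\prod_{i\in I}\mathbb{Z}/n_{i}\mbox{.}
\]
% For a finite family, e.g. n_1 = 2 and n_2 = 3, this gives
% Z/2 x Z/3 = Z/6; for an infinite family the right-hand side is a
% product, not a direct sum.
```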
Finally, in section 4 we use the bijections constructed in section 3 to characterize Ab4 categories. \section{Extensions} In this section we will recall the basic theory of extensions. As was mentioned before, the theory of $n$-extensions was created by Nobuo Yoneda in \cite{Yoneda1954}. In that paper he worked in a category of modules, and most of the results are related to the homological tools provided by projective and injective modules. Since our goal is to work in an abelian category without depending on the existence of projective or injective objects, we refer the reader to the work of Barry Mitchell \cite{mitchell} for an approach in abelian categories without further assumptions. Throughout this paper, $\mathcal{C}$ will denote an abelian category. \begin{defn} \cite[Section 1]{mitchell} Let $C\in\mathcal{C}$, and $\alpha:A\rightarrow B$, $\alpha':A'\rightarrow B'$ be morphisms in $\mathcal{C}$. We set the following notation: \begin{enumerate} \item $\nabla_{C}:=\left(\begin{smallmatrix}1_{C} & 1_{C}\end{smallmatrix}\right):C\oplus C\rightarrow C\mbox{;}$ \item $\Delta_{C}:=\left(\begin{smallmatrix}1_{C}\\ 1_{C} \end{smallmatrix}\right):C\rightarrow C\oplus C\mbox{;}$ \item $\alpha\oplus\alpha':=\left(\begin{smallmatrix}\alpha & 0\\ 0 & \alpha' \end{smallmatrix}\right):A\oplus A'\rightarrow B\oplus B'\mbox{.}$ \end{enumerate} \end{defn} \subsection{1-Extensions} Let us begin by recalling some basic facts and notation about 1-extensions. 
\begin{defn} \cite[Section 1]{mitchell} Let $\alpha:N\rightarrow N'$, $\beta:M\rightarrow M'$, and $\gamma:K\rightarrow K'$ be morphisms in $\mathcal{C}$, and consider the following short exact sequences in $\mathcal{C}$ \[ \eta:\;\suc[][][][f][g]\mbox{ and }\eta':\;\suc[N'][M'][K'][f'][g']\mbox{.} \] \begin{enumerate} \item We say that $(\alpha,\beta,\gamma):\eta\rightarrow\eta'$ is a morphism of short exact sequences if \[ \beta f=f'\alpha\;\mbox{and}\;\gamma g=g'\beta\mbox{.} \] \item We denote by $\eta\oplus\eta'$ the short exact sequence \[ \suc[N\oplus N'][M\oplus M'][K\oplus K'][f\oplus f'][g\oplus g']\mbox{.} \] \end{enumerate} \end{defn} \begin{defn} \cite[Section 1]{mitchell} For $N,K\in\mathcal{C}$, let $\mathcal{E}_{\mathcal{C}}(K,N)$ denote the class of short exact sequences of the form $\suc\mbox{.}$\end{defn} \begin{rem} Let $A,C\in\mathcal{C}$ and $\eta,\eta'\in\mathcal{E}_{\mathcal{C}}(C,A)$. Consider the relation $\eta\equiv\eta'$ given by the existence of a short exact sequence morphism $(1_{A},\beta,1_{C}):\eta\rightarrow\eta'\mbox{.}$ By the snake lemma, we know that $\beta$ is an isomorphism, and hence $\equiv$ is an equivalence relation on $\mathcal{E}_{\mathcal{C}}(C,A)$.\end{rem} \begin{defn} \cite[Section 1]{mitchell} Consider $A,C\in\mathcal{C}$. \begin{enumerate} \item Let $\Ext[C][1][][A]:=\mathcal{E}_{\mathcal{C}}(C,A)/\equiv\mbox{;}$ \item Each object of $\Ext[C][1][][A]$ is referred to as an extension from $A$ to $C$. \item Every extension from $A$ to $C$ will be denoted with a capital letter $E$, or by $\overline{\eta}$, in case $\eta$ is a representative of the class $E$. \item Given $\overline{\eta}\in\Ext[C][1][][A]$ and $\overline{\eta'}\in\Ext[C'][1][][A']$, we will call every short exact sequence morphism $\eta\rightarrow\eta'$ an extension morphism from $\overline{\eta}$ to $\overline{\eta'}$. 
\item If $(\alpha,\beta,\gamma):E\rightarrow E'$ and $(\alpha',\beta',\gamma'):E'\rightarrow E''$ are extension morphisms, we define the composition morphism as \[ (\alpha',\beta',\gamma')(\alpha,\beta,\gamma):=(\alpha'\alpha,\beta'\beta,\gamma'\gamma)\mbox{.} \] \end{enumerate} \end{defn} \begin{rem} An essential comment made by B. Mitchell in \cite{mitchell} is that the class $\Ext[C][1][][A]$ may not be a set (see \cite[Chapter 6, Exercise A]{freyd1964abelian} for an example). Considering this fact, we should be cautious when we talk about correspondences between extension classes. Nevertheless, for simplicity we will say that a correspondence \[ \Phi:\Ext[C'][1][][A']\rightarrow\Ext[C][1][][A] \] is a function if it associates to each $\overline{\eta}\in\Ext[C'][1][][A']$ a single element $\Phi(\overline{\eta})$ in $\Ext[C][1][][A]$. \end{rem} Recall the following result.\\ \begin{minipage}[t]{0.58\columnwidth} \begin{prop} \cite[Lemma 1.2]{maclanehomology}\label{prop:pb:operar a izquierda} Consider a morphism $\alpha:X\rightarrow K$ and an exact sequence $\suc[][][][f][g]$ in $\mathcal{C}$. 
If $(E,\alpha',g')$ is the pullback of the morphisms $g$ and $\alpha$, then there is a short exact sequence $\eta_{\alpha}$ and a morphism $(1,\alpha',\alpha):\eta_{\alpha}\rightarrow\eta$.\end{prop} \end{minipage}\hfill{} \fbox{\begin{minipage}[t]{0.37\columnwidth} \[ \begin{tikzpicture}[-,>=to,shorten >=1pt,auto,node distance=1cm,main node/.style=,x=1cm,y=1cm, font=\scriptsize] \node[main node] (C) at (0,0) {$K$}; \node[main node] (X0) [left of=C] {$M$}; \node[main node] (X1) [left of=X0] {$N$}; \node[main node] (X) [above of=C] {$X$}; \node[main node] (E) [left of=X] {$E$}; \node[main node] (X2) [left of=E] {$N$}; \node[main node] (03) [right of=C] {$0$}; \node[main node] (04) [left of=X1] {$0$}; \node[main node] (05) [right of=X] {$0$}; \node[main node] (06) [left of=X2] {$0$}; \draw[->, thick] (X0) to node {$g$} (C); \draw[->, thick] (X1) to node {$$} (X0); \draw[->, thick] (C) to node {$$} (03); \draw[->, thick] (04) to node {$$} (X1); \draw[->, thick] (X) to node {$\alpha$} (C); \draw[-, double] (X1) to node {$$} (X2); \draw[->, thin] (E) to node {$\alpha '$} (X0); \draw[->, thin] (X) to node {$$} (05); \draw[->, thin] (E) to node {$g'$} (X); \draw[->, thin] (X2) to node {$$} (E); \draw[->, thin] (06) to node {$$} (X2); \end{tikzpicture} \ \end{minipage}} Of course, the construction described above defines a correspondence between the extension classes. \begin{prop} \cite[Corollary 1.2.]{mitchell}\label{cor:caracterizacion accion a izquierda}\label{cor:caracterizacion accion a derecha} Let $\eta\in\mathcal{E}_{\mathcal{C}}(C,A)$ and $\gamma\in\Hom[\mathcal{C}][C'][C]$. Then, the correspondence $\Phi_{\gamma}:\Ext[C][1][][A]\rightarrow\Ext[C'][1][][A]$, $\overline{\eta}\mapsto\overline{\eta_{\gamma}}$, is a function. 
\end{prop} By duality, given a morphism $\alpha:N\rightarrow X$ and an exact sequence \[ \eta:\:\suc[][][][f]\mbox{,} \] the pushout of the morphisms $f$ and $\alpha$ gives us an exact sequence $\eta^{\alpha}$ together with a morphism $(\alpha,\alpha',1):\eta\rightarrow\eta^{\alpha}$. Moreover, we also have that the correspondence $\Phi^{\alpha}:\Ext[K][1][][N]\rightarrow\Ext[K][1][][X]$, $\overline{\eta}\mapsto\overline{\eta^{\alpha}}$, is a function. \begin{defn} \cite[Section 1]{mitchell} For $\alpha:A\rightarrow A'$ and $\gamma:C'\rightarrow C$ morphisms in $\mathcal{C}$, and $E\in\Ext[C][1][][A]$, we set $E\gamma:=\Phi_{\gamma}(E)$, and $\alpha E:=\Phi^{\alpha}(E)$. \end{defn} As we have described, there exists a natural action of the morphisms on the extension classes. These actions are associative and respect identities. \begin{lem} \cite[Lemma 1.3]{mitchell}\label{lem:propiedades1} Let $E\in\Ext[C][1][][A]$, $\alpha:A\rightarrow A'$, $\alpha':A'\rightarrow A''$, $\gamma:C'\rightarrow C$, and $\gamma':C''\rightarrow C'$ be morphisms in $\mathcal{C}$. Then, \begin{enumerate} \item $1_{A}E=E$ and $E1_{C}=E$; \item $(\alpha'\alpha)E=\alpha'(\alpha E)$ and $E(\gamma\gamma')=(E\gamma)\gamma'$; \item $(\alpha E)\gamma=\alpha(E\gamma)$. \end{enumerate} \end{lem} Next, we recall the definition of the Baer sum. \begin{defn} \cite[Section 1]{mitchell} For $E,E'\in\Ext[C][1][][A]$, the sum extension of $E$ and $E'$ is $E+E':=\nabla_{A}\left(E\oplus E'\right)\Delta_{C}\mbox{.}$ \end{defn} This sum operation is well behaved with respect to the actions described above and endows the extension classes with the structure of an abelian group. \begin{thm} \label{thm:ext1 es grupo}\label{cor:0E=00003D0}\label{lem:propiedades2}\cite[Lemma 1.4 and Theorem 1.5.]{mitchell} For any $A,C\in\mathcal{C}$, we have that the pair $\left(\Ext[C][1][][A],+\right)$ is an abelian group, where the identity element is the extension given by the class of exact sequences that split. 
Furthermore, let $E\in\Ext[C][1][][A]$, $E'\in\Ext[C'][1][][A']$, $\alpha\in\Hom[][A][X]$, $\alpha'\in\Hom[][A'][X']$, $\gamma\in\Hom[][Y][C]$ and $\gamma'\in\Hom[][Y'][C']$. Then, the following equalities hold true: \begin{enumerate} \item $\left(\alpha\oplus\alpha'\right)\left(E\oplus E'\right)=\alpha E\oplus\alpha'E'$; \item $\left(\alpha+\alpha'\right)E=\alpha E+\alpha'E$; \item $\alpha\left(E+E'\right)=\alpha E+\alpha E'$; \item $\left(E\oplus E'\right)\left(\gamma\oplus\gamma'\right)=E\gamma\oplus E'\gamma'$; \item $E\left(\gamma+\gamma'\right)=E\gamma+E\gamma'$; \item $\left(E+E'\right)\gamma=E\gamma+E'\gamma$; \item $0E=E0=E_{0}$ for every $E\in\Ext[C][1][][A]$. \end{enumerate} \end{thm} \subsection{$n$-Extensions} We are now ready to recall the definition of $n$-extensions. It is a well-known fact that short exact sequences can be spliced together in order to construct a long exact sequence. Following this idea, the spirit of $n$-extensions is to define a well-behaved composition of $1$-extensions that produces long extensions. \begin{defn} \cite[Section 3]{mitchell} We will make use of the following considerations. \begin{enumerate} \item For an exact sequence $\eta:\;\suc[A][B_{n-1}\rightarrow\cdots\rightarrow B_{0}][C]$ in $\mathcal{C}$ we say that $\eta$ is an exact sequence of length $n$, and $A$ and $C$ are the left and right ends of $\eta$, respectively. \item Let $\mathcal{E}_{\mathcal{C}}^{n}(L,N)$ denote the class of exact sequences of length $n$ with $L$ and $N$ as right and left ends.
\item Consider the following exact sequences in $\mathcal{C}$ \begin{alignat*}{1} \eta:\;\suc[N][B_{n-1}\stackrel{f_{n-1}}{\rightarrow}\cdots\stackrel{f_{1}}{\rightarrow}B_{0}][K][\mu][\pi] & \mbox{,}\\ \eta':\;\suc[N'][B'_{n-1}\stackrel{f'_{n-1}}{\rightarrow}\cdots\stackrel{f'_{1}}{\rightarrow}B'_{0}][K'][\mu'][\pi'] & \mbox{.} \end{alignat*} A morphism $\eta\rightarrow\eta'$ is a collection of $n+2$ morphisms $(\alpha,\beta_{n-1},\cdots,\beta_{0},\gamma)$ in $\mathcal{C}$, where $\alpha:N\rightarrow N'$, $\gamma:K\rightarrow K'$, and $\beta_{i}:B_{i}\rightarrow B'_{i}\:\forall i\in[0,n-1]$ are such that \[ \beta_{n-1}\mu=\mu'\alpha\mbox{, }\gamma\pi=\pi'\beta_{0}\mbox{ and }\beta_{i-1}f_{i}=f_{i}'\beta_{i}\:\forall i\in[1,n-1]\mbox{.} \] Equivalently, we can say that a morphism of exact sequences of length $n$ is a commutative diagram \end{enumerate} \begin{minipage}[t]{1\columnwidth \[ \begin{tikzpicture}[-,>=to,shorten >=1pt,auto,node distance=1cm,main node/.style=,framed, font=\scriptsize] \node[main node] (01) at (0,0) {$0$}; \node[main node] (N) [right of=01] {$N$}; \node[main node] (B1) [right of=N] {$B_{n-1}$}; \node[main node] (C) [right of=B1] {$\cdots$}; \node[main node] (B2) [right of=C] {$B_0$}; \node[main node] (K) [right of=B2] {$K$}; \node[main node] (02) [right of=K] {$0$}; \node[main node] (03) [below of=01] {$0$}; \node[main node] (N') [right of=03] {$N'$}; \node[main node] (B'1) [right of=N'] {$B'_{n-1}$}; \node[main node] (C') [right of=B'1] {$\cdots$}; \node[main node] (B'2) [right of=C'] {$B'_0$}; \node[main node] (K') [right of=B'2] {$K'$}; \node[main node] (04) [right of=K'] {$0$}; \draw[->, thin] (01) to node {$$} (N); \draw[->, thin] (N) to node {$$} (B1); \draw[->, thin] (B1) to node {$$} (C); \draw[->, thin] (C) to node {$$} (B2); \draw[->, thin] (B2) to node {$$} (K); \draw[->, thin] (K) to node {$$} (02); \draw[->, thin] (03) to node {$$} (N'); \draw[->, thin] (N') to node {$$} (B'1); \draw[->, thin] (B'1) to node {$$} (C');
\draw[->, thin] (C') to node {$$} (B'2); \draw[->, thin] (B'2) to node {$$} (K'); \draw[->, thin] (K') to node {$$} (04); \draw[->, thin] (N) to node {$$} (N'); \draw[->, thin] (B1) to node {$$} (B'1); \draw[->, thin] (B2) to node {$$} (B'2); \draw[->, thin] (K) to node {$$} (K'); \end{tikzpicture} \ \end{minipage} \end{defn} In the following lines, we define an equivalence relation for studying the classes of exact sequences of length $n$. As we did for the case with $n=1$, we start by saying that two exact sequences $\eta,\eta'\in\mathcal{E}_{\mathcal{C}}^{n}(C,A)$ are related, denoted by $\eta\preceq\eta'$, if there is a morphism $(1_{A},\beta_{n-1},\cdots,\beta_{0},1_{C}):\eta\rightarrow\eta'\mbox{.}$ In this case, we say also that this morphism has fixed ends. Observe that, in contrast with the case $n=1$, this relation need not be symmetric. Thus, to achieve our goal, we must consider the equivalence relation $\equiv$ induced by $\preceq$. Namely, we write $\eta\equiv\eta'$ if there are exact sequences $\eta_{1},\cdots,\eta_{k}$ such that \[ \eta=\eta_{1}\mbox{,}\qquad\eta_{i}\preceq\eta_{i+1}\mbox{ or }\eta_{i+1}\preceq\eta_{i}\:\forall i\in[1,k-1]\mbox{,}\qquad\mbox{and }\eta'=\eta_{k}\mbox{.} \] \begin{defn} \cite[Section 9]{Hilton-Stammbach} For $n\geq1$ and $A,C\in\mathcal{C}$, we consider the class $\Ext[C][n][][A]:=\mathcal{E}_{\mathcal{C}}^{n}(C,A)/\equiv\mbox{,}$ whose elements will be called extensions of length $n$ with $C$ and $A$ as right and left ends. Let $\overline{\eta}$ denote the equivalence class of $\eta\in\mathcal{E}_{\mathcal{C}}^{n}(C,A)$. An extension morphism from $\overline{\eta}$ to $\overline{\eta'}$ is just a morphism from $\eta$ to $\eta'$. \end{defn} \begin{rem} The definition of the equivalence relation above might seem naive.
In fact, the relation is built precisely so that the composition of extensions associates properly when a morphism acts on the extensions involved \cite[Section 3]{mitchell}\cite[Section 5]{maclanehomology}. In the following lines, we briefly discuss this matter. Observe that, in general, for $\eta\in\mathcal{E}_{\mathcal{C}}^{1}(C,A)$, $\eta'\in\mathcal{E}_{\mathcal{C}}^{1}(D,C')$ and $\beta:C'\rightarrow C$ in $\mathcal{C}$, it is false that $\left(\eta\beta\right)\eta'=\eta\left(\beta\eta'\right)$. All that can be asserted is that there is an extension morphism $\left(\eta\beta\right)\eta'\rightarrow\eta\left(\beta\eta'\right)$. To exhibit such a morphism, recall that $\beta$ induces morphisms $\eta\beta\rightarrow\eta\qquad$ and $\qquad\eta'\rightarrow\beta\eta'\mbox{,}$ with fixed left end and fixed right end, respectively. Splicing these two morphisms at the common component $\beta$ yields the desired morphism \[ \left(\eta\beta\right)\eta'\rightarrow\eta\left(\beta\eta'\right)\mbox{.} \] Therefore, even if we have the inequality $\left(\eta\beta\right)\eta'\neq\eta\left(\beta\eta'\right)$, we can conclude that $\overline{\left(\eta\beta\right)\eta'}=\overline{\eta\left(\beta\eta'\right)}$.\end{rem} \begin{defn} \cite[Section 3]{mitchell} Consider the following exact sequences of length $n$ and $m$, respectively \begin{alignat*}{1} \eta:\;\suc[N][B_{n}\rightarrow\cdots\rightarrow B_{1}][K][\mu][\pi] & \mbox{,}\\ \eta':\;\suc[K][B'_{m}\rightarrow\cdots\rightarrow B'_{1}][L][\mu'][\pi'] & .
\end{alignat*} The composition sequence $\eta\eta'$, of $\eta$ with $\eta'$, is the exact sequence \[ \suc[N][B_{n}\rightarrow\cdots\rightarrow B_{1}\stackrel{\mu'\pi}{\rightarrow}B'_{m}\rightarrow\cdots\rightarrow B'_{1}][L][\mu][\pi']\mbox{.} \] \end{defn} \begin{rem} Note that each exact sequence in $\mathcal{C}$ \[ \kappa:\:\suc[A][B_{n}\rightarrow\cdots\rightarrow B_{1}][C] \] can be written as a composition of $n$ short exact sequences $\kappa=\eta_{n}\cdots\eta_{1}$, where \[ \eta_{i}:=\;\suc[K_{i+1}][B_{i}][K_{i}]\mbox{,} \] with $K_{n+1}:=A$, $K_{1}:=C$ and $K_{i}=\im[(B_{i}\rightarrow B_{i-1})]\:\forall i\in[2,n]$. We will refer to this factorization of $\kappa$ as its natural decomposition. \end{rem} Of course, the composition of exact sequences induces a composition of extensions. \begin{lem} \cite[Proposition 3.1]{mitchell}\label{lem:composicion esta bien definido} Let $m,n>0$, and $A,C,D\in\mathcal{C}$. Then, the correspondence $\Phi:\Ext[C][n][][A]\times\Ext[D][m][][C]\rightarrow\Ext[D][n+m][][A]$, $(\overline{\eta},\overline{\eta'})\mapsto\overline{\eta\eta'}$, is a function. \end{lem} We can now define without ambiguity the composition of extensions. \begin{defn} Let $E\in\Ext[C][n][][A]$ and $E'\in\Ext[D][m][][C]$. For $E=\overline{\eta}$ and $E'=\overline{\eta'}$, we define the composition extension $EE'$ of $E$ with $E'$ as the extension $EE':=\overline{\eta\eta'}$. If $\eta=\eta_{n}\cdots\eta_{1}$ is the natural decomposition of $\eta$, the induced extension factorization $E=\overline{\eta_{n}}\cdots\overline{\eta_{1}}$ is known as a natural decomposition of $E$. \end{defn} In the same way that an $n$-extension can be factored into simpler extensions, a morphism of $n$-extensions can be factored into a composition of simpler morphisms.
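To make the natural decomposition concrete, we include a standard illustrative example, taking $\mathcal{C}$ to be the category of abelian groups. Consider the exact sequence of length $2$ \[ \kappa:\quad0\rightarrow\mathbb{Z}/2\stackrel{\iota}{\rightarrow}\mathbb{Z}/4\stackrel{m}{\rightarrow}\mathbb{Z}/4\stackrel{\pi}{\rightarrow}\mathbb{Z}/2\rightarrow0\mbox{,} \] where $\iota(x)=2x$, $m(x)=2x$ and $\pi$ is the canonical projection. Here $K_{2}=\mbox{im}(m)=\{0,2\}\cong\mathbb{Z}/2$, and hence the natural decomposition $\kappa=\eta_{2}\eta_{1}$ expresses $\kappa$ as the splice of two copies of the non-split short exact sequence $0\rightarrow\mathbb{Z}/2\stackrel{\iota}{\rightarrow}\mathbb{Z}/4\stackrel{\pi}{\rightarrow}\mathbb{Z}/2\rightarrow0\mbox{.}$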
The next lemma establishes the basic fact in this direction.\\ \begin{minipage}[t]{0.6\columnwidth \begin{lem} \cite[Lemma 1.1]{mitchell}\label{lem:pb/po}\label{cor:aE=00003DE'b} Consider a morphism of exact sequences $(\alpha,\beta,\gamma):\eta'\rightarrow\eta$, with \begin{alignat*}{1} \eta: & \quad\suc[A][B][C][f][g]\mbox{ and }\\ \eta': & \quad\suc[A'][B'][C'][f'][g']\mbox{.} \end{alignat*} Then, $\eta\gamma=\alpha\eta'$ and $(\alpha,\beta,\gamma)$ factors through $\eta\gamma$ as \[ (\alpha,\beta,\gamma)=(1,\beta',\gamma)(\alpha,\beta'',1)\mbox{.} \] \end{lem} \end{minipage}\hfill{ \fbox{\begin{minipage}[t]{0.35\columnwidth \[ \begin{tikzpicture}[-,>=to,shorten >=1pt,auto,node distance=1cm,main node/.style=, font=\scriptsize] \node[main node] (I1) at (0,0) {$0$}; \node[main node] (A1) [right of=I1] {$A'$}; \node[main node] (B1) [right of=A1] {$B'$}; \node[main node] (C1) [right of=B1] {$C'$}; \node[main node] (D1) [right of=C1] {$0$}; \node[main node] (I2) [below of=I1] {$0$}; \node[main node] (A2) [below of=A1] {$A$}; \node[main node] (B2) [below of=B1] {$E$}; \node[main node] (C2) [below of=C1] {$C'$}; \node[main node] (D2) [below of=D1] {$0$}; \node[main node] (I3) [below of=I2] {$0$}; \node[main node] (A3) [below of=A2] {$A$}; \node[main node] (B3) [below of=B2] {$B$}; \node[main node] (C3) [below of=C2] {$C$}; \node[main node] (D3) [below of=D2] {$0$}; \draw[->, thin] (I1) to node {$$} (A1); \draw[->, thin] (A1) to node {$f'$} (B1); \draw[->, thin] (B1) to node {$g'$} (C1); \draw[->, thin] (C1) to node {$$} (D1); \draw[->, thin] (I2) to node {$$} (A2); \draw[->, thin] (A2) to node {$$} (B2); \draw[->, thin] (B2) to node {$$} (C2); \draw[->, thin] (C2) to node {$$} (D2); \draw[->, thin] (I3) to node {$$} (A3); \draw[->, thin] (A3) to node {$f$} (B3); \draw[->, thin] (B3) to node {$g$} (C3); \draw[->, thin] (C3) to node {$$} (D3); \draw[->, thin] (A1) to node {$\alpha$} (A2); \draw[->, thin] (B1) to node {$\beta ''$} (B2); \draw[-, double] (C1) to node {$$} (C2);
\draw[-, double] (A2) to node {$$} (A3); \draw[->, thin] (B2) to node {$\beta '$} (B3); \draw[->, thin] (C2) to node {$\gamma $} (C3); \end{tikzpicture} \ \end{minipage}} In general, we have the following statement. \begin{cor} \label{cor:asociatividad con morfismos}Let $\eta,\eta'\in\mathcal{E}_{\mathcal{C}}^{n}(C,A)$ be exact sequences with natural decompositions $\eta=\eta_{n}\cdots\eta_{1}$ and $\eta'=\eta'_{n}\cdots\eta'_{1}$. Then, the following statements hold true. \begin{enumerate} \item There is an exact sequence morphism $(\alpha,\beta_{n-1},\cdots,\beta_{0},\gamma):\eta\rightarrow\eta'$ if, and only if, there is a collection of extension morphisms \[ (\alpha_{i},\beta_{i-1},\alpha_{i-1}):\overline{\eta{}_{i}}\rightarrow\overline{\eta'_{i}}\:\forall i\in[1,n] \] where $\alpha_{n}=\alpha$ and $\alpha_{0}=\gamma$. \item If there is an exact sequence morphism $(\alpha,\beta_{n-1},\cdots,\beta_{0},\gamma):\eta\rightarrow\eta'$, then there is a collection of morphisms $\alpha_{n-1},\cdots,\alpha_{1}$ in $\mathcal{C}$ satisfying the following equalities: \begin{enumerate} \item $\overline{\eta'_{n}}\cdots\overline{\eta'_{1}}\gamma=\alpha\overline{\eta_{n}}\cdots\overline{\eta_{1}}$, \item $\overline{\eta'_{i}}\cdots\overline{\eta'_{1}}\gamma=\alpha_{i}\overline{\eta_{i}}\cdots\overline{\eta_{1}}\:\forall i\in[1,n-1]$, and \item $\overline{\eta'_{n}}\cdots\overline{\eta'_{i+1}}\alpha_{i}=\alpha\overline{\eta_{n}}\cdots\overline{\eta_{i+1}}\:\forall i\in[1,n-1]$. \end{enumerate} \end{enumerate} \end{cor} \begin{proof} It follows from \ref{lem:pb/po}. \end{proof} By Lemma \ref{lem:composicion esta bien definido}, the following actions are well defined. \begin{defn} \cite[Section 3]{mitchell} Consider $\eta,\eta'\in\mathcal{E}_{\mathcal{C}}^{n}(C,A)$, $E:=\overline{\eta}\in\Ext[C][n][][A]$, $E':=\overline{\eta'}\in\Ext[C][n][][A]$, and let $\eta=\eta_{n}\cdots\eta_{1}$ and $\eta'=\eta'_{n}\cdots\eta'_{1}$ be the natural decompositions of $\eta$ and $\eta'$.
\begin{enumerate} \item Given $\alpha\in\Hom[][A][A']$, we define $\alpha E:=\alpha\overline{\eta_{n}}\cdots\overline{\eta_{1}}\mbox{.}$ \item Given $\gamma\in\Hom[][C'][C]$, we define $E\gamma:=\overline{\eta_{n}}\cdots\overline{\eta_{1}}\gamma\mbox{.}$ \item We define the sum of extensions of length $n$ in the following way \[ E+E':=\nabla_{A}\left(E\oplus E'\right)\Delta_{C}\mbox{.} \] \end{enumerate} \end{defn} Most of the properties proved earlier for extensions of length $1$ extend naturally, as can be seen in the following lines. \begin{cor} \cite[Lemma 3.2 and Theorem 3.3]{mitchell}\label{cor:props comps extn}\label{thm:extn es grupo} Let $n>0$. \begin{enumerate} \item Let $E\in\Ext[C][n][][A]$, $E'\in\Ext[D][m][][C']$, $\beta\in\Hom[][C'][C]$, $\beta'\in\Hom[][C''][C']$, $\alpha\in\Hom[][A][A']$, and $\alpha'\in\Hom[][A'][A'']$. Then the following equalities hold true: \begin{enumerate} \item $\left(E\beta\right)E'=E\left(\beta E'\right)$; \item $1_{A}E=E=E1_{C}$; \item $E\left(\beta\beta'\right)=\left(E\beta\right)\beta'$; \item $\left(\alpha'\alpha\right)E=\alpha'\left(\alpha E\right)$. \end{enumerate} \item Let $E\in\Ext[C][n][][A]$, $E'\in\Ext[C'][n][][A']$, $F\in\Ext[D][m][][C]$, $F'\in\Ext[D'][m][][C']$, $\alpha\in\Hom[][A][X]$, $\alpha'\in\Hom[][A'][X']$, $\gamma\in\Hom[][Y][C]$, and $\gamma'\in\Hom[][Y'][C']$. Then the following equalities hold true: \begin{enumerate} \item $(\alpha\oplus\alpha')\left(E\oplus E'\right)=\alpha E\oplus\alpha'E'$ and $\left(E\oplus E'\right)\left(\gamma\oplus\gamma'\right)=E\gamma\oplus E'\gamma'$; \item $\left(E\oplus E'\right)\left(F\oplus F'\right)=EF\oplus E'F'$; \item $\left(E+E'\right)F=EF+E'F$ and $E\left(F+F'\right)=EF+EF'$; \item $(\alpha+\alpha')E=\alpha E+\alpha'E$ and $E\left(\gamma+\gamma'\right)=E\gamma+E\gamma'$; and \item $\alpha\left(E+E'\right)=\alpha E+\alpha E'$ and $\left(E+E'\right)\gamma=E\gamma+E'\gamma$.
\end{enumerate} \item The pair $\left(\Ext[C][n][][A],+\right)$ is an abelian group, where the identity element is the extension $E_{0}$ given by the exact sequence, in case $n\geq2$, \[ \suc[A][A\stackrel{0}{\rightarrow} \cdots \stackrel{0}{\rightarrow} C][C][1][1] \mbox{ .} \] \end{enumerate} \end{cor} We conclude this section with the following theorem, which characterizes the trivial extensions. \begin{thm} \cite[Theorem 4.2]{mitchell}\label{thm:E=00003D0} Let $n>1$ and $\eta\in\mathcal{E}_{\mathcal{C}}^{n}(C,A)$ with a natural decomposition $\eta=\eta_{n}\cdots\eta_{1}$. Then, the following statements are equivalent: \begin{enumerate} \item $\overline{\eta}=0$; \item there is an exact sequence $\kappa\in\mathcal{E}_{\mathcal{C}}^{n}(C,A)$ and a pair of morphisms with fixed ends $0\leftarrow\kappa\rightarrow\eta\mbox{;}$ \item there is an exact sequence $\kappa'\in\mathcal{E}_{\mathcal{C}}^{n}(C,A)$ and a pair of morphisms with fixed ends $0\rightarrow\kappa'\leftarrow\eta\mbox{.}$ \end{enumerate} \end{thm} \section{Additional structure in Abelian Categories} In this section we approach our problem in the presence of arbitrary products and coproducts. Of course, an abelian category does not necessarily have arbitrary products and coproducts. Hence, we will review briefly the theory of abelian categories with additional structure introduced by A. Grothendieck in \cite{Ab}. For further reading we suggest \cite[Section 2.8]{Popescu}. \subsection{Limits and colimits} $ $ \begin{defn} \cite[Section 1.4.]{Popescu} Let $\mathcal{C}$ and $I$ be categories, where $I$ is small (that is, the class of objects of $I$ is a set). Let $F:I\rightarrow\mathcal{C}$ be a functor and $X\in\mathcal{C}$.
A family of morphisms $\left\{ \alpha_{i}:F(i)\rightarrow X\right\} _{i\in I}$ in $\mathcal{C}$ is co-compatible with $F$ if $\alpha_{i}=\alpha_{j}F(\lambda)$ for every $\lambda:i\rightarrow j$ in $I$.\\ \begin{minipage}[t]{0.7\columnwidth The colimit (or inductive limit) of $F$ is an object $\colim$ in $\mathcal{C}$ with a co-compatible family of morphisms \[ \left\{ \mu_{i}:F(i)\rightarrow\colim\right\} _{i\in I}\mbox{,} \] such that for every co-compatible family of morphisms $\left\{ \gamma_{i}:F(i)\rightarrow X\right\} _{i\in I}$, there is a unique morphism $\gamma:\colim\rightarrow X$ such that $\gamma_{i}=\gamma\mu_{i}$ for every $i\in I$. \end{minipage}\hfill{ \fbox{\begin{minipage}[t]{0.25\columnwidth \[ \begin{tikzpicture}[-,>=to,shorten >=1pt,auto,node distance=3cm,main node/.style=, font=\scriptsize] \coordinate (A) at (150:1.4cm); \coordinate (B) at (30:1.4cm); \coordinate (C) at (270:1.4cm); \node (Fi) at (barycentric cs:A=1,C=0,B=0 ) {$F(i) $}; \node (Fj) at (barycentric cs:A=0,C=0,B=1 ) {$F(j) $}; \node (X) at (barycentric cs:A=0,C=1,B=0 ) {$X $}; \node (L) at (barycentric cs:A=1,C=1,B=1 ) {$\operatorname{colim}F $}; \draw[->, thin] (Fi) to node {$F( \lambda )$} (Fj); \draw[->, thin] (Fi) to node [below left] {$\gamma _i$} (X); \draw[->, thin] (Fj) to node [below right] {$\gamma _j$} (X); \draw[->, thin] (Fi) to node {$\mu _i$} (L); \draw[->, thin] (Fj) to [above left] node {$\mu _j$} (L); \draw[->, dashed] (L) to [above left] node {$\gamma$} (X); \end{tikzpicture} \ \end{minipage}} \end{defn} Let $I$ be a small category and $\lambda:i\rightarrow j$ be a morphism in $I$. The following notation will be useful: $s(\lambda):=i$ and $t(\lambda):=j$.
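To fix ideas, we recall a standard example of a colimit, taking $\mathcal{C}$ to be the category of abelian groups. Let $I=(\mathbb{N},\leq)$ and let $F$ be the direct system \[ \mathbb{Z}\stackrel{2}{\rightarrow}\mathbb{Z}\stackrel{2}{\rightarrow}\mathbb{Z}\stackrel{2}{\rightarrow}\cdots\mbox{,} \] where every morphism is multiplication by $2$. Its colimit is the group $\mathbb{Z}[1/2]$ of dyadic rationals, with structural morphisms $\mu_{n}:\mathbb{Z}\rightarrow\mathbb{Z}[1/2]$ given by $\mu_{n}(1)=1/2^{n}$. This family is co-compatible, since $\mu_{n+1}(2x)=2x/2^{n+1}=x/2^{n}=\mu_{n}(x)$, and every co-compatible family $\left\{ \gamma_{n}:\mathbb{Z}\rightarrow X\right\} _{n\in\mathbb{N}}$ factors uniquely through it via the morphism $\gamma$ determined by $\gamma(1/2^{n}):=\gamma_{n}(1)$.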
\begin{prop} \label{prop:construccion colimites}\cite[Proposition 8.4]{ringsofQuotients} Let $\mathcal{C}$ be a preadditive category with coproducts and cokernels, $I$ be a small category, $F:I\rightarrow\mathcal{C}$ be a functor, and \[ u_{k}:F(k)\rightarrow\bigoplus_{i\in I}F(i)\quad\forall k\in I\mbox{,}\quad v_{\lambda}:F(s(\lambda))\rightarrow\bigoplus_{\gamma\in H}F(s(\gamma))\quad\forall\lambda\in H:=\mbox{Hom}_{I} \] be the respective canonical inclusions into the coproducts. Then,\\ \begin{minipage}[t]{0.48\columnwidth {\footnotesize \[ \colim=\Cok[\left(\bigoplus_{\gamma\in H}F(s(\gamma))\overset{\varphi}{\rightarrow}\bigoplus_{i\in I}F(i)\right)]\mbox{,} \]} where $\varphi$ is the morphism induced by the universal property of coproducts applied to the family of morphisms \[ \left\{ \varphi_{\lambda}:=u_{s(\lambda)}-u_{t(\lambda)}F(\lambda)\right\} _{\lambda\in H}\mbox{.} \] \end{minipage}\hfill{ \begin{minipage}[t]{0.49\columnwidth \[ \begin{tikzpicture}[-,>=to,shorten >=1pt,auto,node distance=1cm,main node/.style=,framed, font=\scriptsize] \node[main node] (1) at (0,0) {$\bigoplus_{\gamma\in H}F(s(\gamma))$}; \node[main node] (2) at (3,0) {$\bigoplus_{i\in I}F(i)$}; \node[main node] (3) [below of=1] {$F(s( \lambda))$}; \node[main node] (4) [below of=2] {$F(s( \lambda)) \oplus F(t( \lambda))$}; \draw[->, dashed] (1) to node {$\varphi$} (2); \draw[->, thin] (3) to [below] node {$\left(\begin{smallmatrix}1\\-F(\lambda)\end{smallmatrix}\right)$} (4); \draw[->, thin] (3) to node {$v_{\lambda}$} (1); \draw[->, thin] (4) to [left] node {$\left(\begin{smallmatrix}u_{s(\lambda)} & u_{t(\lambda)}\end{smallmatrix}\right)$} (2); \end{tikzpicture} \ \end{minipage} \end{prop} The dual notion of colimit is the limit. \begin{defn} \cite[Section 1.4.]{Popescu} Let $\mathcal{C}$ and $I$ be categories, with $I$ small. Let $F:I\rightarrow\mathcal{C}$ be a functor and $X\in\mathcal{C}$. 
A family of morphisms $\left\{ \alpha_{i}:X\rightarrow F(i)\right\} _{i\in I}$ in $\mathcal{C}$ is compatible with $F$ if $\alpha_{j}=F(\lambda)\alpha_{i}$ for every $\lambda:i\rightarrow j$ in $I$.\\ \begin{minipage}[t]{0.7\columnwidth The limit (or projective limit) of $F$ is an object $\lim F$ in $\mathcal{C}$ together with a compatible family of morphisms \[ \left\{ \mu_{i}:\lim F\rightarrow F(i)\right\} _{i\in I} \] such that for any compatible family of morphisms $\left\{ \gamma_{i}:X\rightarrow F(i)\right\} _{i\in I}$ there is a unique $\gamma \in \operatorname{Hom}_{\mathcal{C}}(X, \lim F )$ such that $\gamma_{i}=\mu_{i}\gamma$ for every $i\in I$. \end{minipage}\hfill{ \fbox{\begin{minipage}[t]{0.25\columnwidth \[ \begin{tikzpicture}[-,>=to,shorten >=1pt,auto,node distance=3cm,main node/.style=, font=\scriptsize] \coordinate (A) at (150:1.4cm); \coordinate (B) at (30:1.4cm); \coordinate (C) at (270:1.4cm); \node (Fi) at (barycentric cs:A=1,C=0,B=0 ) {$F(i) $}; \node (Fj) at (barycentric cs:A=0,C=0,B=1 ) {$F(j) $}; \node (X) at (barycentric cs:A=0,C=1,B=0 ) {$X $}; \node (L) at (barycentric cs:A=1,C=1,B=1 ) {$\operatorname{lim}F $}; \draw[->, thin] (Fi) to node {$F( \lambda )$} (Fj); \draw[->, thin] (X) to node [below left] {$\gamma _i$} (Fi); \draw[->, thin] (X) to node [below right] {$\gamma _j$} (Fj); \draw[->, thin] (L) to node [above right] {$\mu _i$} (Fi); \draw[->, thin] (L) to [above left] node {$\mu _j$} (Fj); \draw[->, dashed] (X) to [above left] node {$\gamma$} (L); \end{tikzpicture} \ \end{minipage}} \end{defn} \begin{prop} \label{prop:construccion limites}\cite[Proposition 8.2]{ringsofQuotients} Let $\mathcal{C}$ be a preadditive category with products and kernels, $I$ be a small category, $F:I\rightarrow\mathcal{C}$ be a functor, and \[ u_{k}:\prod_{i\in I}F(i)\rightarrow F(k)\quad\forall k\in I,\quad v_{\lambda}:\prod_{\gamma\in H}F(t(\gamma))\rightarrow F(t(\lambda))\quad\forall\lambda\in H:=\mbox{Hom}_{I} \] be the respective canonical
projections out of the products. Then,\\ \begin{minipage}[t]{0.46\columnwidth {\small \[ \lim F=\Ker[\left(\prod_{i\in I}F(i)\overset{\varphi}{\rightarrow}\prod_{\gamma\in H}F(t(\gamma))\right)]\mbox{,} \]} where $\varphi$ is the morphism induced by the universal property of products applied to the family of morphisms \[ \left\{ \varphi_{\lambda}:=F(\lambda)u_{s(\lambda)}-u_{t(\lambda)}\right\} _{\lambda\in H}\mbox{.} \] \end{minipage}\hfill{ \begin{minipage}[t]{0.49\columnwidth \[ \begin{tikzpicture}[-,>=to,shorten >=1pt,auto,node distance=1.5cm,main node/.style=,framed] \node[main node] (1) at (0,0) {$\prod_{i\in I}F(i)$}; \node[main node] (2) at (3,0) {$\prod_{\gamma\in H}F(t(\gamma))$}; \node[main node] (3) [below of=1] {$F(s( \lambda)) \oplus F(t( \lambda))$}; \node[main node] (4) [below of=2] {$F(t( \lambda))$}; \draw[->, dashed] (1) to node {$\varphi$} (2); \draw[->, thin] (1) to [left] node {$\left(\begin{smallmatrix}u_{s(\lambda)}\\u_{t(\lambda)}\end{smallmatrix}\right)$} (3); \draw[->, thin] (2) to node {$v_{\lambda}$} (4); \draw[->, thin] (3) to [below] node {$\left(\begin{smallmatrix} -F(\lambda) & 1\end{smallmatrix}\right)$} (4); \end{tikzpicture} \ \end{minipage}\end{prop} \begin{defn} Let $I$ be a small category and $\mathcal{C}$ be an abelian category. A family of objects and morphisms $\left(M_{i},f_{\alpha}\right)_{i\in I,\alpha\in\mbox{Hom}_{I}}$ is said to be a direct system if there is a functor $F:I\rightarrow\mathcal{C}$ such that $F(i)=M_{i}\,\forall i\in I$ and $F(\alpha)=f_{\alpha}$ for every $\alpha\in\mbox{Hom}_{I}$. \end{defn} \subsection{Ab3 and Ab4 Categories} \begin{defn} \cite[Section 2.8.]{Popescu} An Ab3 category is an abelian category satisfying the following condition: \begin{description} \item [{(Ab3)}] For every set of objects $\left\{ A_{i}\right\} _{i\in I}$ in $\mathcal{C}$, the coproduct $\bigoplus_{i\in I}A_{i}$ exists. \end{description} \end{defn} We recall the following well-known fact.
\begin{prop} \cite[Section 2.8.]{Popescu} Let $\mathcal{C}$ be an Ab3 category and \[ \left\{ X'_{i}\overset{f_{i}}{\rightarrow}X_{i}\overset{g_{i}}{\rightarrow}X''_{i}\rightarrow0\right\} _{i\in I} \] be a set of exact sequences in $\mathcal{C}$. Then, \[ \bigoplus_{i\in I}X'_{i}\overset{\bigoplus_{i\in I}f_{i}}{\rightarrow}\bigoplus_{i\in I}X_{i}\overset{\bigoplus_{i\in I}g_{i}}{\rightarrow}\bigoplus_{i\in I}X''_{i}\rightarrow0 \] is an exact sequence in $\mathcal{C}$. \end{prop} In general, it is not possible to prove that $\bigoplus_{i\in I}f_{i}$ is a monomorphism if each $f_{i}$ is a monomorphism. For this reason, Grothendieck introduced the following condition. \begin{defn} \cite[Proposition 8.3.]{Popescu} An Ab4 category is an Ab3 category $\mathcal{C}$ satisfying the following condition: \begin{description} \item [{(Ab4)}] for every set of monomorphisms $\left\{ f_{i}:X_{i}\rightarrow Y_{i}\right\} _{i\in I}$ in $\mathcal{C}$, the morphism $\bigoplus_{i\in I}f_{i}$ is a monomorphism. \end{description} \end{defn} We will refer to the dual condition as Ab4{*}. \begin{rem} Let $\mathcal{C}$ be an Ab4 category. Then, for all sets of objects $\{A_{i}\}_{i\in I}$ and $\{B_{i}\}_{i\in I}$ in $\mathcal{C}$, the correspondence \begin{align*} C:\prod_{i\in I}\Ext[A_{i}][n][][B_{i}] & \rightarrow\Ext[\bigoplus_{i\in I}A_{i}][n][][\bigoplus_{i\in I}B_{i}]\mbox{, }(\overline{\eta_{i}})\mapsto\overline{\bigoplus_{i\in I}\eta_{i}}\mbox{,} \end{align*} is a well defined morphism of abelian groups. \end{rem} \subsection{Ext groups and arbitrary products and coproducts} We are finally ready to proceed towards our goal. \begin{lem} \label{lem:n-pushouy de monos} Let $\mathcal{C}$ be an Ab4 category, and \[ \left\{ \eta_{i}:\quad\suc[B][A_{i}][C_{i}][f_{i}][g_{i}]\right\} _{i\in I} \] be a set of short exact sequences in $\mathcal{C}$.
Then, there is a short exact sequence \[ \eta:\quad\suc[B][\mbox{colim}(f_{i})][\bigoplus_{i\in I}C_{i}][f][g] \] such that $\eta\mu_{i}=\eta_{i}\:\forall i\in I$, where $\left\{ \mu_{i}:C_{i}\rightarrow\bigoplus_{i\in I}C_{i}\right\} _{i\in I}$ is the family of canonical injections into the coproduct. \end{lem} \begin{proof} Consider the set $\left\{ f_{i}:B\rightarrow A_{i}\right\} _{i\in I}$ as a direct system. Observe that the set of morphisms of exact sequences \[ \left\{ (1_{B},f_{i},0):\beta\rightarrow\eta_{i}\right\} _{i\in I}\quad\mbox{with }\quad\beta:=\quad\suc[B][B][0][1][0]\mbox{,} \] is a direct system of exact sequences. We will consider the colimit of this system and prove that, as a result, we get a short exact sequence. To this end, we observe that $(B,1_{i}:B\rightarrow B)_{i\in I}$ is the colimit of the system $\left\{ 1_{i}:B\rightarrow B\right\} _{i\in I}$ and that $(\bigoplus_{i\in I}C_{i},\mu_{i}:C_{i}\rightarrow\bigoplus_{i\in I}C_{i})$ is the colimit of the system $\left\{ 0:0\rightarrow C_{i}\right\} _{i\in I}$.\\ \begin{minipage}[t]{0.33\columnwidth Hence, by \ref{prop:construccion colimites} we build the diagram beside, where the columns are the morphisms mentioned in \ref{prop:construccion colimites}, the upper and central rows are coproducts of the sequences $\beta$ and $\eta_{i}$ respectively, and the bottom row is the result of the colimits.
Thus, by the snake lemma we get the exact sequence \end{minipage}\hfill{ \fbox{\begin{minipage}[t]{0.62\columnwidth \[ \begin{tikzpicture}[-,>=to,shorten >=1pt,auto,node distance=2.5cm,main node/.style=,x=2cm,y=.9cm, font=\scriptsize] \node[main node] (01) at (3,1) {$\scriptstyle 0$}; \node[main node] (02) at (0.3,0) {$\scriptstyle 0$}; \node[main node] (03) at (3.6,0) {$\scriptstyle 0$}; \node[main node] (04) at (0.3,-1) {$\scriptstyle 0$}; \node[main node] (05) at (3.6,-1) {$\scriptstyle 0$}; \node[main node] (06) at (1,-3) {$\scriptstyle 0$}; \node[main node] (07) at (2,-3) {$\scriptstyle 0$}; \node[main node] (08) at (3,-3) {$\scriptstyle 0$}; \node[main node] (09) at (3.6,-2) {$\scriptstyle 0$}; \node[main node] (X) at (2,-.5) {$\scriptstyle $}; \node[main node] (1) at (1,0) {$\scriptstyle \bigoplus _{i\in I} B_i$}; \node[main node] (2) at (2,0) {$\scriptstyle \bigoplus _{i\in I} B_i$}; \node[main node] (3) at (3,0) {$\scriptstyle 0$}; \node[main node] (4) at (1,-1) {$\scriptstyle B \oplus ( \bigoplus _{i\in I} B_i )$}; \node[main node] (5) at (2,-1) {$\scriptstyle B \oplus ( \bigoplus _{i\in I} A_i )$}; \node[main node] (6) at (3,-1) {$\scriptstyle \bigoplus _{i\in I} C_i$}; \node[main node] (7) at (1,-2) {$\scriptstyle B$}; \node[main node] (8) at (2,-2) {$\scriptstyle \operatorname{colim}(f_i)$}; \node[main node] (9) at (3,-2) {$\scriptstyle \bigoplus _{i\in I} C_i$}; \draw[->, thin] (01) to node {$$} (3); \draw[->, thin] (02) to node {$$} (1); \draw[->, thin] (1) to node {$$} (2); \draw[->, thin] (2) to node {$$} (3); \draw[->, thin] (3) to node {$$} (03); \draw[->, thin] (04) to node {$$} (4); \draw[->, thin] (4) to node {$$} (5); \draw[->, thin] (5) to node {$$} (6); \draw[->, thin] (6) to node {$$} (05); \draw[->, thin] (7) to node {$$} (8); \draw[->, thin] (8) to node {$$} (9); \draw[->, thin] (9) to node {$$} (09); \draw[->, thin] (1) to node {$$} (4); \draw[->, thin] (4) to node {$$} (7); \draw[->, thin] (7) to node {$$} (06); \draw[->, thin] (2) to 
node {$$} (5); \draw[->, thin] (5) to node {$$} (8); \draw[->, thin] (8) to node {$$} (07); \draw[->, thin] (01) to node {$$} (3); \draw[->, thin] (3) to node {$$} (6); \draw[->, thin] (6) to node {$$} (9); \draw[->, thin] (9) to node {$$} (08); \draw[-, thin] (01) ..controls (4,-.3).. (X); \draw[->, thin] (X) ..controls (0,-.7).. (7); \draw[-, thin] (01) ..controls (4,-.3).. (2,-.5); \draw[->, thin] (2,-.5) ..controls (0,-.7).. (7); \end{tikzpicture} \ \end{minipage}}\\ \[ \eta:\quad\suc[B][\mbox{colim}(f_{i})][\bigoplus_{i\in I}C_{i}]\mbox{.} \] Furthermore, the families of morphisms associated with these colimits give us the exact sequence morphisms $(1,\mu'_{i},\mu_{i}):\eta_{i}\rightarrow\eta\;\forall i\in I\mbox{,}$ which proves the statement. \end{proof} \begin{prop} \label{prop:ext1 y coprods arb}Let $\mathcal{C}$ be an Ab4 category and $\{A_{i}\}_{i\in I}$ a set of objects in $\mathcal{C}$. Consider the coproduct with the canonical inclusions $\left(\mu_{i}:A_{i}\rightarrow\bigoplus_{i\in I}A_{i}\right)_{i\in I}$. Then, the correspondence $\Psi:\Ext[\bigoplus A_{i}][1][][B]\rightarrow\prod_{i\in I}\Ext[A_{i}][1][][B]$, defined by $E\mapsto\left(E\mu_{i}\right)_{i\in I}$, is an isomorphism for every $B\in \mathcal{C}$. \end{prop} \begin{proof} We will proceed by proving the following steps: \begin{enumerate} \item The correspondence $\Psi$ is a morphism of abelian groups. \item $\Psi$ is injective. \item Given $(\overline{\eta_{i}})\in\prod_{i\in I}\Ext[A_{i}][1][][B]$, there is $E\in\Ext[\bigoplus A_{i}][1][][B]$ such that $\Psi(E)=(\overline{\eta_{i}})$. \end{enumerate} Clearly, proving these statements is enough to conclude the desired proposition. \begin{enumerate} \item It follows from \ref{cor:asociatividad con morfismos}. \item Suppose that $E$ is an extension with representative \[ \eta:\;\suc[B][C][\bigoplus_{i\in I}A_{i}][f][g] \] such that $E\mu_{i}=0$ $\forall i\in I$.
Suppose that $(1,p_{i},\mu_{i}):E\mu_{i}\rightarrow E$ is the morphism induced by $\mu_{i}$, and that each extension $E\mu_{i}$ has as representative the exact \\ \begin{minipage}[t]{0.45\columnwidth sequence $\eta_{i}:\;\suc[B][C_{i}][A_{i}][f_{i}][g_{i}]\mbox{.}$ Since $E\mu_{i}=0$, there is a morphism $h_{i}:A_{i}\rightarrow C_{i}$ such that $g_{i}h_{i}=1_{A_{i}}$. Thus, by the coproduct universal property, there is a unique morphism $h:\bigoplus_{i\in I}A_{i}\rightarrow C$ such that $h\mu_{i}=p_{i}h_{i}$ $\forall i\in I$. Therefore, since \[ gh\mu_{i}=gp_{i}h_{i}=\mu_{i}g_{i}h_{i}=\mu_{i}\:\forall i\in I, \] \end{minipage}\hfill{ \fbox{\begin{minipage}[t]{0.45\columnwidth \[ \begin{tikzpicture}[-,>=to,shorten >=1pt,auto,node distance=1.3cm,main node/.style=,x=1.8cm,y=1.5cm, font=\scriptsize] \node[main node] (1) at (0,0) {$0$}; \node[main node] (2) at (0.5,0) {$B$}; \node[main node] (3) [right of=2] {$C_i$}; \node[main node] (4) [right of=3] {$A_i$}; \node[main node] (5) at (2.6,0) {$0$}; \node[main node] (6) [below of=1] {$0$}; \node[main node] (7) [below of=2] {$B$}; \node[main node] (8) [right of=7] {$C$}; \node[main node] (9) [right of=8] {$\bigoplus _{i\in I} A_i$}; \node[main node] (0) [below of=5] {$0$}; \draw[->, thin] (1) to node [above]{$$} (2); \draw[->, thin] (2) to node [below]{$f_i$} (3); \draw[->, thin] (3) to node [below]{$g_i$} (4); \draw[->, thin] (4) to node [above]{$$} (5); \draw[->, thin] (6) to node [below]{$$} (7); \draw[->, thin] (7) to node [above]{$f$} (8); \draw[->, thin] (8) to node [above]{$g$} (9); \draw[->, thin] (9) to node [below]{$$} (0); \draw[-, double] (2) to node [above]{$$} (7); \draw[->, thin] (3) to node {$p_i$} (8); \draw[->, thin] (4) to node {$\mu _i$} (9); \draw[->, thin] (4) to [bend right=30] node [above]{$h_i$} (3); \draw[->, dashed] (9) to [bend left=30] node [below]{$h$} (8); \end{tikzpicture} \ \end{minipage}}\\ we have that $gh=1_{\bigoplus_{i\in I}A_{i}}$ by the coproduct universal property; and thus, $E=0$.
\item It follows by \ref{lem:n-pushouy de monos}. \end{enumerate} \end{proof} \begin{thm} \label{prop:extn coprods arb} Let $\mathcal{C}$ be an Ab4 category, $n\geq1$, and $\{A_{i}\}_{i\in I}$ be a set of objects in $\mathcal{C}$. Consider the coproduct $\bigoplus _{i \in I} A_i$ and the canonical inclusions $\left(\mu_{i}:A_{i}\rightarrow\bigoplus_{i\in I}A_{i}\right)_{i\in I}$. Then, the correspondence $\Psi_{n}:\Ext[\bigoplus A_{i}][n][][B]\rightarrow\prod_{i\in I}\Ext[A_{i}][n][][B]$, $E\mapsto\left(E\mu_{i}\right)_{i\in I}$, is an isomorphism of abelian groups for every $B\in \mathcal{C}$.\end{thm} \begin{proof} We will proceed by proving the following statements: \begin{enumerate} \item The correspondence $\Psi_{n}$ is a morphism of abelian groups; \item $\Psi_{n}$ is injective; \item For every $(\overline{\eta_{i}})\in\prod_{i\in I}\Ext[A_{i}][n][][B]$, there is $E\in \Ext[\bigoplus _{i \in I} A_{i}][n][][B]$ such that $\Psi_{n}(E)=(\overline{\eta_{i}})$. \end{enumerate} It is worth mentioning that the result was already proved in \ref{prop:ext1 y coprods arb} for $n=1$. Furthermore, in the proof of \ref{prop:ext1 y coprods arb}(c) the inverse function of $\Psi_{1}$ was constructed explicitly. We will denote this correspondence by $\Psi_{1}^{-1}$. \begin{enumerate} \item It follows by \ref{cor:asociatividad con morfismos}. \item Let $\overline{\eta}$ be an extension with a natural decomposition $\overline{\eta}=\overline{\eta_{n}}\cdots\overline{\eta_{1}}$ such that $\overline{\eta}\mu_{i}=0\:\forall i\in I$. By \ref{thm:E=00003D0} this means that for every $i\in I$ there is a pair of exact sequence morphisms with fixed ends $\eta\mu_{i}\leftarrow\kappa_{i}\rightarrow0$. Suppose that each exact sequence $\kappa_{i}$ has the natural decomposition $\kappa_{i}=\kappa(i)_{n}\cdots\kappa(i)_{1}\mbox{.}$ It follows from the morphism $\kappa_{i}\rightarrow0$ that \[ \kappa(i)_{n}:=\kappa'_{i}:\quad\suc[B][Y_{i}][X_{i}][f_{i}][g_{i}] \] is a splitting exact sequence.
Let $(\overline{\kappa'_{i}}):=(\overline{\kappa'_{i}})_{i\in I}\in\prod_{i\in I}\Ext[X_{i}][1][][B]\mbox{.}$ By \ref{prop:ext1 y coprods arb}(c), we know that $\Psi_{1}^{-1}(\overline{\kappa'_{i}})\in\Ext[\bigoplus_{i\in I}X_{i}][1][][B]$ is an extension such that $\Psi_{1}^{-1}(\overline{\kappa'_{i}})\mu'_{i}=\kappa'_{i}\;\forall i\in I$, where each $\mu'_{i}:X_{i}\rightarrow\bigoplus_{i\in I}X_{i}$ is the canonical inclusion. Let $\kappa:=\Psi_{1}^{-1}(\overline{\kappa'_{i}})\left(\overline{\bigoplus_{i\in I}\kappa(i)_{n-1}}\right)\cdots\left(\overline{\bigoplus_{i\in I}\kappa(i)_{1}}\right)\mbox{.}$ We will show that there is a pair of exact sequence morphisms with fixed ends $\eta\leftarrow\kappa\rightarrow0$, which will prove (b) by \ref{thm:E=00003D0}. Indeed, by the fact that for every $i\in I$ there is a morphism with fixed ends $\eta\mu_{i}\leftarrow\kappa_{i}$, it follows that there is a morphism with fixed right end $\eta_{n-1}\cdots\eta_{1}\mu_{i}\leftarrow\kappa(i)_{n-1}\cdots\kappa(i)_{1}\mbox{,}$ inducing by the coproduct universal property a morphism with fixed right end \[ \eta_{n-1}\cdots\eta_{1}\leftarrow\left(\bigoplus_{i\in I}\kappa(i)_{n-1}\right)\cdots\left(\bigoplus_{i\in I}\kappa(i)_{1}\right)\mbox{.} \] Furthermore, by the proof of \ref{prop:ext1 y coprods arb} we know that $\Psi_{1}^{-1}(\overline{\kappa'_{i}})$ has as representative the exact sequence $\suc[B][\mbox{colim}(f_{i})][\bigoplus_{i\in I}X_{i}][f][g]\mbox{.}$ Hence, using the colimit universal property, it is easy to see that there is a morphism with fixed left end $\eta_{n}\leftarrow\Psi_{1}^{-1}(\overline{\kappa'_{i}})\mbox{.}$ Therefore, with the last morphisms we can build a morphism with fixed ends $\eta\leftarrow\kappa\mbox{.}$ To show the existence of a morphism with fixed ends $\kappa\rightarrow0$, it is enough to show that $f$ is a splitting monomorphism, which follows directly from the colimit universal property together with the fact that every $f_{i}$ is a
splitting monomorphism. \item Let $(\overline{\eta_{i}})\in\prod_{i\in I}\Ext[A_{i}][n][][B]$. We observe the following facts for every $i\in I$. Suppose $\overline{\eta_{i}}=\overline{\kappa_{n}^{i}}\cdots\overline{\kappa_{1}^{i}}$ is a natural decomposition, where \[ \kappa_{k}^{i}:\quad\suc[B_{k+1}^{i}][C_{k}^{i}][B_{k}^{i}]\:\forall k\in\{n,\cdots,1\}\mbox{.} \] Consider the coproduct canonical inclusions $u_{k}^{i}:B_{k}^{i}\rightarrow\bigoplus_{i\in I}B_{k}^{i}$. Observe that $u_{1}^{i}=\mu_{i}\,\forall i\in I$. By \ref{cor:aE=00003DE'b} we can see that $\overline{\left(\bigoplus_{i\in I}\kappa_{k}^{i}\right)}u_{k}^{i}=u_{k+1}^{i}\overline{\kappa_{k}^{i}}$ for all $ k\in\left\{ 1,\cdots,n+1\right\} \mbox{.}$ Hence, by \ref{prop:ext1 y coprods arb}(c), for every $i \in I$ the extension defined as $\overline{\eta}:=\Psi_{1}^{-1}(\kappa_{n}^{i})_{i\in I}\overline{\left(\bigoplus_{i\in I}\kappa^{i}{}_{n-1}\right)}\cdots\overline{\left(\bigoplus_{i\in I}\kappa^{i}{}_{1}\right)}\mbox{,}$ satisfies by recursion the following equalities \begin{alignat*}{1} \overline{\eta}\mu_{i} =\overline{\eta}u_{1}^{i} & =\Psi_{1}^{-1}(\kappa_{n}^{i})_{i\in I}\overline{\left(\bigoplus_{i\in I}\kappa^{i}{}_{n-1}\right)}\cdots\overline{\left(\bigoplus_{i\in I}\kappa^{i}{}_{1}\right)}u_{1}^{i}\\ & =\Psi_{1}^{-1}(\kappa_{n}^{i})_{i\in I}\overline{\left(\bigoplus_{i\in I}\kappa^{i}{}_{n-1}\right)}\cdots\overline{\left(\bigoplus_{i\in I}\kappa^{i}{}_{2}\right)}u_{2}\overline{\kappa^{i}{}_{1}}\\ & =\Psi_{1}^{-1}(\kappa_{n}^{i})_{i\in I}\overline{\left(\bigoplus_{i\in I}\kappa^{i}{}_{n-1}\right)}\cdots\overline{\left(\bigoplus_{i\in I}\kappa^{i}{}_{3}\right)}u_{3}\overline{\kappa{}_{2}^{i}}\overline{\kappa^{i}{}_{1}}\\ & \vdots\\ & =\Psi_{1}^{-1}(\kappa_{n}^{i})_{i\in I}\overline{\left(\bigoplus_{i\in I}\kappa^{i}{}_{n-1}\right)}u_{n-1}\overline{\kappa_{n-2}^{i}}\cdots\overline{\kappa_{1}^{i}}\\ & =\Psi_{1}^{-1}(\kappa_{n}^{i})_{i\in 
I}u_{n}\overline{\kappa_{n-1}^{i}}\cdots\overline{\kappa_{1}^{i}}\\ & =\overline{\kappa_{n}^{i}}\overline{\kappa_{n-1}^{i}}\cdots\overline{\kappa_{1}^{i}}\\ & =\overline{\eta_{i}}\mbox{.} \end{alignat*} \end{enumerate} \end{proof} By duality we have the following result. \begin{thm} \label{thm:ext vs prod arb} Let $\mathcal{C}$ be an Ab4{*} category, $n\geq1$, and $\{A_{i}\}_{i\in I}$ be a set of objects in $\mathcal{C}$. Consider the product $\left(\pi_{i}:\prod_{i\in I}A_{i}\rightarrow A_{i}\right)_{i\in I}$. Then, the correspondence $\Phi_{n}:\Ext[B][n][][\prod_{i\in I}A_{i}]\rightarrow\prod_{i\in I}\Ext[B][n][][A_{i}]$, $E\mapsto\left(\pi_{i}E\right)_{i\in I}$, is an isomorphism of abelian groups for every $B \in \mathcal{C}$. \end{thm} We will end this section by introducing an application related to the tilting theory developed in recent years. Namely, R. Colpi and K. R. Fuller developed a theory of tilting objects of projective dimension $\leq1$ for abelian categories in \cite{colpi2007tilting}, and P. \v{C}oupek and J. {\v{S}}t'ov{\'\i}{\v{c}}ek developed a theory of cotilting objects of injective dimension $\leq1$ for Grothendieck categories in \cite{vcoupek2017cotilting}. A fundamental result needed in these theories is that \[ {}\mbox{Ext}_{\mathcal{A}}^{1}\left(\bigoplus_{i\in I}A_{i},X\right)=0{}\mbox{ if and only if }{}\mbox{Ext}_{\mathcal{A}}^{1}(A_{i},X)=0\,\forall i\in I\mbox{.} \] Such a result is proved by showing that, in any Ab3 abelian category $\mathcal{A}$, there is an injective correspondence $\mbox{Ext}_{\mathcal{A}}^{1}\left(\bigoplus_{i\in I}A_{i},X\right)\rightarrow\prod_{i\in I}\mbox{Ext}_{\mathcal{A}}^{1}(A_{i},X)$ (see \cite[Proposition 8.1, Proposition 8.2]{colpi2007tilting} and \cite[Proposition A.1]{vcoupek2017cotilting} or the proof of \ref{prop:ext1 y coprods arb}). Now, in order to extend the theory to tilting objects of projective dimension $\leq n$, a similar result is needed for $\mbox{Ext}^{n}$.
However, it is not known in general if there is an injective correspondence $\mbox{Ext}_{\mathcal{A}}^{n}\left(\bigoplus_{i\in I}A_{i},X\right)\rightarrow\prod_{i\in I}\mbox{Ext}_{\mathcal{A}}^{n}(A_{i},X)$. The following result follows from \ref{thm:ext vs prod arb} and \ref{prop:extn coprods arb}. It is worth mentioning that it extends \cite[Corollary 8.3]{colpi2007tilting} and the dual of \cite[Corollary A.2]{vcoupek2017cotilting} when the category is Ab4. \begin{cor} Let $\mathcal{C}$ be an abelian category, $n\geq1$, $\{A_{i}\}_{i\in I}$ be a set of objects in $\mathcal{C}$, and $B\in\mathcal{C}$. Then, the following statements hold true: \begin{enumerate} \item If $\mathcal{C}$ is Ab4, then $\Ext[\bigoplus_{i\in I}A_{i}][n][][B]=0$ if and only if $\Ext[A_{i}][n][][B]=0\,\forall i\in I$. \item If $\mathcal{C}$ is Ab4{*}, then $\Ext[B][n][][\prod_{i\in I}A_{i}]=0$ if and only if $\Ext[B][n][][A_{i}]=0\,\forall i\in I$. \end{enumerate} \end{cor} \section{A characterization of Ab4} This section is inspired by the comments made by Sergio Estrada during the Coloquio Latinoamericano de \'Algebra XXIII. The goal is to prove that if the correspondence $\Psi:\Ext[\bigoplus A_{i}][1][][B]\rightarrow\prod_{i\in I}\Ext[A_{i}][1][][B]$ defined above is always bijective for an Ab3 category $\mathcal{C}$, then $\mathcal{C}$ is Ab4. Throughout this section, for every natural number $n>0$ we will consider the correspondence $\Psi_{n}:\Ext[\bigoplus X_{i}][n][][Y]\rightarrow\prod_{i\in I}\Ext[X_{i}][n][][Y]$ defined above. In \ref{lem:n-pushouy de monos}, it was proved that, if $\mathcal{C}$ is Ab4, then given a set of exact sequences \[ \left\{ \eta_{i}:\:\suc[B][A_{i}][C_{i}][f_{i}]\right\} _{i\in I}\mbox{,} \] one can build an exact sequence $\suc[B][\mbox{colim}(f_{i})][\bigoplus_{i\in I}C_{i}][f]$, where $f$ is part of the co-compatible family of morphisms associated to $\colim[f_{i}]$.
In case $\mathcal{C}$ is only an Ab3 category, a similar construction yields an exact sequence $B\rightarrow\colim[f_{i}]\rightarrow\bigoplus_{i\in I}C_{i}\rightarrow0$. Indeed, consider the direct system of exact sequences \\ \begin{minipage}[t]{1\columnwidth} \[ \begin{tikzpicture}[-,>=to,shorten >=1pt,auto,node distance=1cm,main node/.style=,x=1.5cm,y=1.5cm] \node[main node] (1) at (0,0) {$0$}; \node[main node] (2) [right of=1] {$B$}; \node[main node] (3) [right of=2] {$B$}; \node[main node] (4) [right of=3] {$0$}; \node[main node] (5) [right of=4] {$0$}; \node[main node] (1') [below of=1] {$0$}; \node[main node] (2') [right of=1'] {$B$}; \node[main node] (3') [right of=2'] {$A_i$}; \node[main node] (4') [right of=3'] {$C_i$}; \node[main node] (5') [right of=4'] {$0$}; \draw[->, thin] (1) to node {$$} (2); \draw[->, thin] (2) to node {$1$} (3); \draw[->, thin] (3) to node {$$} (4); \draw[->, thin] (4) to node {$$} (5); \draw[->, thin] (1') to node {$$} (2'); \draw[->, thin] (2') to node {$f_i$} (3'); \draw[->, thin] (3') to node {$$} (4'); \draw[->, thin] (4') to node {$$} (5'); \draw[->, thin] (2) to node {$1$} (2'); \draw[->, thin] (3) to node {$f_i$} (3'); \draw[->, thin] (4) to node {$$} (4'); \end{tikzpicture} \ \end{minipage}\\ Then, we have an exact sequence $B\stackrel{f}{\rightarrow}\colim[f_{i}]\stackrel{g}{\rightarrow}\bigoplus_{i\in I}C_{i}\rightarrow0$, where $f$ and $g$ are induced by the colimit universal property (see \cite[page 55]{Popescu}). We shall call this exact sequence $\Theta(\eta_{i})$. As a first step we will show that, even if the category is not Ab4, if the correspondence $\Psi$ is bijective, then the inverse correspondence is given by $\Theta$.
That is, if $\Psi:\Ext[\bigoplus A_{i}][1][][B]\rightarrow\prod_{i\in I}\Ext[A_{i}][1][][B]$ is bijective, then for every set of exact sequences $\left\{ \eta_{i}:\:\suc[B][A_{i}][C_{i}][f_{i}]\right\} _{i\in I}$, the morphism $f$ in $\Theta(\eta_{i})$ is monic, and $\Psi\overline{\Theta(\eta_{i})}=1$. \begin{lem} Let $\mathcal{C}$ be an Ab3 category, $\left\{ A_{i}\right\} _{i\in I}$ be a set of objects in $\mathcal{C}$, and $B\in\mathcal{C}$. Consider the coproduct $\bigoplus_{i\in I}A_{i}$ with the canonical inclusions $\left\{ \mu_{i}:A_{i}\rightarrow\bigoplus_{i\in I}A_{i}\right\} _{i\in I}$, and a set of exact sequences $\left\{ \eta_{i}:\;\suc[B][E_{i}][A_{i}][f_{i}][g_{i}]\right\} _{i\in I}$. If there is an exact sequence $\eta:\:\suc[B][E][\bigoplus_{i\in I}A_{i}][f'][g']$ such that $\overline{\eta}\mu_{i}=\overline{\eta_{i}}\,\forall i\in I$, then the morphism $f$ in the exact sequence \[ \Theta(\eta_{i}):\:B\stackrel{f}{\rightarrow}\colim[f_{i}]\stackrel{g}{\rightarrow}\bigoplus_{i\in I}A_{i}\rightarrow0 \] is a monomorphism and $\overline{\eta}=\overline{\Theta(\eta_{i})}$.\end{lem} \begin{proof} Consider the direct system $\left\{ f_{i}:B\rightarrow E_{i}\right\} _{i\in I}$. We know that $\overline{\eta}\mu_{i}=\overline{\eta_{i}}\,\forall i\in I$. Hence, for every $i\in I$ there is a morphism of exact sequences\\ \begin{minipage}[t]{0.5\columnwidth} \[ (1,\nu_{i},\mu_{i}):\eta_{i}\rightarrow\eta\mbox{.} \] Observe that the set of morphisms $\left\{ \nu_{i}:E_{i}\rightarrow E\right\} _{i\in I}$, together with the morphism $f':B\rightarrow E$, is a co-compatible family of morphisms.
\end{minipage}\hfill{ \fbox{\begin{minipage}[t]{0.45\columnwidth \[ \begin{tikzpicture}[-,>=to,shorten >=1pt,auto,node distance=1.3cm,main node/.style=,x=1.5cm,y=1.5cm] \node[main node] (1) at (0,0) {$0$}; \node[main node] (2) [right of=1] {$B$}; \node[main node] (3) [right of=2] {$E_i $}; \node[main node] (4) [right of=3] {$A_i $}; \node[main node] (5) [right of=4] {$0$}; \node[main node] (1') [below of=1] {$0$}; \node[main node] (2') [right of=1'] {$B$}; \node[main node] (3') [right of=2'] {$E$}; \node[main node] (4') [right of=3'] {$\bigoplus A_i$}; \node[main node] (5') [right of=4'] {$0$}; \draw[->, thin] (1) to node {$$} (2); \draw[->, thin] (2) to [below] node {$f_i$} (3); \draw[->, thin] (3) to [below] node {$g_i$} (4); \draw[->, thin] (4) to node {$$} (5); \draw[->, thin] (1') to node {$$} (2'); \draw[->, thin] (2') to [above] node {$f'$} (3'); \draw[->, thin] (3') to [above] node {$g'$} (4'); \draw[->, thin] (4') to node {$$} (5'); \draw[-, double] (2) to node {$$} (2'); \draw[->, thin] (3) to node {$\nu_i$} (3'); \draw[->, thin] (4) to node {$\mu _i$} (4'); \end{tikzpicture} \] \end{minipage}}\\ Therefore, there is a unique morphism $\omega:\colim[f_{i}]\rightarrow E$ such that $\omega\sigma_{i}=\nu_{i}$ for every $i$ in $ I$ and $\omega f=f'$, where $\left\{ \sigma_{i}:E_{i}\rightarrow\colim[f_{i}]\right\} _{i\in I}\cup\{f:B\rightarrow\colim[f_{i}]\}$ is\\ \fbox{\begin{minipage}[t]{0.45\columnwidth \[ \begin{tikzpicture}[-,>=to,shorten >=1pt,auto,node distance=1.5cm,main node/.style=,x=1.5cm,y=1.5cm] \node[main node] (1) at (-.5,0) {$0$}; \node[main node] (2) at (0,0) {$B$}; \node[main node] (3) [right of=2] {$E_i $}; \node[main node] (4) [right of=3] {$A_i $}; \node[main node] (5) at (2.5,0) {$0$}; \node[main node] (1') [below of=1] {$$}; \node[main node] (2') [below of=2] {$B$}; \node[main node] (3') [right of=2'] {$\operatorname{colim}(f_i)$}; \node[main node] (4') [right of=3'] {$\bigoplus A_i$}; \node[main node] (5') [below of=5] {$0$}; \draw[->, 
thin] (1) to node {$$} (2); \draw[->, thin] (2) to [below] node {$f_i$} (3); \draw[->, thin] (3) to [below] node {$g_i$} (4); \draw[->, thin] (4) to node {$$} (5); \draw[->, thin] (2') to node {$f$} (3'); \draw[->, thin] (3') to node {$g$} (4'); \draw[->, thin] (4') to node {$$} (5'); \draw[-, double] (2) to node {$$} (2'); \draw[->, thin] (3) to node {$\sigma_i$} (3'); \draw[->, thin] (4) to node {$\mu _i$} (4'); \end{tikzpicture} \] \end{minipage}}\hfill{ \begin{minipage}[t]{0.5\columnwidth the co-compatible family associated to the colimit. Notice that $\omega f=f'$ is a monomorphism, so $f$ is also a monomorphism. It remains to prove that $\overline{\eta}=\overline{\Theta(\eta_{i})}$. Observe that, by the cokernel universal property, we can build a morphism of exact sequence \end{minipage \\ \begin{minipage}[t]{0.5\columnwidth \[ (1,\omega,\omega'):\Theta(\eta_{i})\rightarrow\eta\mbox{.} \] It is enough to show that $\omega'=1$. With that goal, we see that \[ \omega'\mu_{i}g_{i}=\omega'g\sigma_{i}=g'\omega\sigma_{i}=g'\nu_{i}=\mu_{i}g_{i}\mbox{.} \] \end{minipage}\hfill{ \fbox{\begin{minipage}[t]{0.45\columnwidth \[ \begin{tikzpicture}[-,>=to,shorten >=1pt,auto,node distance=1.5cm,main node/.style=,x=1.5cm,y=1.3cm] \node[main node] (1) at (-.5,0) {$0$}; \node[main node] (2) at (0,0) {$B$}; \node[main node] (3) [right of=2] {$\operatorname{colim}(f_i)$}; \node[main node] (4) [right of=3] {$\bigoplus A_i$}; \node[main node] (5) at (2.7,0) {$0$}; \node[main node] (1') at (-.5,-1) {$0$}; \node[main node] (2') at (0,-1) {$B$}; \node[main node] (3') [right of=2'] {$E$}; \node[main node] (4') [right of=3'] {$\bigoplus A_i$}; \node[main node] (5') at (2.7,-1) {$0$}; \draw[->, thin] (1) to node {$$} (2); \draw[->, thin] (2) to node [below] {$f$} (3); \draw[->, thin] (3) to node [below] {$g$} (4); \draw[->, thin] (4) to node {$$} (5); \draw[->, thin] (1') to node {$$} (2'); \draw[->, thin] (2') to node {$f'$} (3'); \draw[->, thin] (3') to node {$g'$} (4'); \draw[->, 
thin] (4') to node {$$} (5'); \draw[-, double] (2) to node {$$} (2'); \draw[->, thin] (3) to node {$\omega$} (3'); \draw[->, thin] (4) to node {$\omega '$} (4'); \end{tikzpicture} \] \end{minipage}} Hence, by the fact that $g_{i}$ is an epimorphism, $\omega'\mu_{i}=\mu_{i}\,\forall i\in I$. Then, by the coproduct universal property we can conclude that $\omega'=1$. \end{proof} \begin{cor} \label{cor:the inverse correspondence} Let $\mathcal{C}$ be an Ab3 abelian category, $\left\{ A_{i}\right\} _{i\in I}$ be a set of objects in $\mathcal{C}$, and $B\in\mathcal{C}$. Consider the coproduct $\left\{ \mu_{i}:A_{i}\rightarrow\bigoplus_{i\in I}A_{i}\right\} _{i\in I}$, and the correspondence $\Psi_{1}:\Ext[\bigoplus A_{i}][1][][B]\rightarrow\prod_{i\in I}\Ext[A_{i}][1][][B]$. If $\Psi_{1}$ is bijective, then the inverse correspondence maps each $(\overline{\eta_{i}})\in\prod_{i\in I}\Ext[A_{i}][1][][B]$, with representatives \[ \eta_{i}:\;\suc[B][E_{i}][A_{i}][f_{i}]\:\forall i\in I\mbox{,} \] to the extension given by the exact sequence \[ \suc[B][\mbox{colim}(f_{i})][\bigoplus_{i\in I}A_{i}]\mbox{.} \] \end{cor} \begin{thm} \label{thm:Ab4 vs ext} Let $\mathcal{C}$ be an Ab3 category. Then, $\mathcal{C}$ is an Ab4 category if, and only if, the correspondence $\Psi_{1}:\Ext[\bigoplus X_{i}][1][][Y]\rightarrow\prod_{i\in I}\Ext[X_{i}][1][][Y]$ is bijective for every $Y\in\mathcal{C}$ and every set of objects $\left\{ X_{i}\right\} _{i\in I}$.\end{thm} \begin{proof} By \ref{prop:ext1 y coprods arb}, it is enough to prove that if $\Psi$ is bijective for every $Y\in\mathcal{C}$ and every set of objects $\left\{ X_{i}\right\} _{i\in I}$, then $\mathcal{C}$ is Ab4. For this purpose, we will consider a set of exact sequences $\left\{ \eta_{i}:\:\suc[A_{i}][B_{i}][C_{i}][\alpha_{i}][\beta_{i}]\right\} _{i\in I}$ and prove that the morphism $\bigoplus_{i\in I}\alpha_{i}:\bigoplus _{i\in I}A_i \rightarrow \bigoplus _{i\in I}B_i$ is a monomorphism.
Consider the coproduct $\bigoplus_{i\in I}A_{i}$ and the canonic inclusions $\mu_{i}:A_{i}\rightarrow\bigoplus_{i\in I}A_{i}$. \\ \begin{minipage}[t]{0.5\columnwidth By the dual result of \ref{prop:pb:operar a izquierda}, for every $i$ in $ I$ we have an exact sequence morphism $(\mu_{i},\mu_{i}',1):\eta_{i}\rightarrow\mu_{i}\eta_{i}$, where \[ \mu_{i}\eta_{i}:\:\suc[\bigoplus_{i\in I}A_{i}][E_{i}][C_{i}][f_{i}]\mbox{.} \] Consider the correspondence \end{minipage}\hfill{ \fbox{\begin{minipage}[t]{0.45\columnwidth \[ \begin{tikzpicture}[-,>=to,shorten >=1pt,auto,node distance=1.3cm,main node/.style=,x=1.5cm,y=1.5cm] \node[main node] (1) at (0,0) {$0$}; \node[main node] (2) [right of=1] {$A_i$}; \node[main node] (3) [right of=2] {$B_i$}; \node[main node] (4) [right of=3] {$C_i$}; \node[main node] (5) [right of=4] {$0$}; \node[main node] (1') [below of=1] {$0$}; \node[main node] (2') [right of=1'] {$\bigoplus _{i\in I} A_i$}; \node[main node] (3') [right of=2'] {$E_i $}; \node[main node] (4') [right of=3'] {$C_i$}; \node[main node] (5') [right of=4'] {$0$}; \draw[->, thin] (1) to node {$$} (2); \draw[->, thin] (2) to node {$\scriptstyle \alpha _i$} (3); \draw[->, thin] (3) to node {$\scriptstyle \beta _i $} (4); \draw[->, thin] (4) to node {$$} (5); \draw[->, thin] (1') to node {$$} (2'); \draw[->, thin] (2') to node {$\scriptstyle f_i$} (3'); \draw[->, thin] (3') to node {$\scriptstyle $} (4'); \draw[->, thin] (4') to node {$$} (5'); \draw[->, thin] (2) to node {$\scriptstyle \mu _i$} (2'); \draw[->, thin] (3) to node {$\scriptstyle \mu _i '$} (3'); \draw[-, double] (4) to node {$\scriptstyle $} (4'); \end{tikzpicture} \] \end{minipage}} \[ \Psi:\Ext[\bigoplus_{i\in I}C_{i}][1][][\bigoplus_{i\in I}A_{i}]\rightarrow\prod_{i\in I}\Ext[C_{i}][1][][\bigoplus_{i\in I}A_{i}]\mbox{.} \] By \ref{cor:the inverse correspondence}, we know that $\Psi^{-1}$ maps $(\mu_{i}\overline{\eta_{i}})\in\prod_{i\in I}\Ext[C_{i}][1][][\bigoplus_{i\in I}A_{i}]$ to the extension given by the 
exact sequence \[ \suc[\bigoplus_{i\in I}A_{i}][\mbox{colim}(f_{i})][\bigoplus_{i\in I}C_{i}][f]\mbox{.} \] \fbox{\begin{minipage}[t]{0.3\columnwidth \[ \begin{tikzpicture}[-,>=to,shorten >=1pt,auto,node distance=1.5cm,main node/.style=,x=1.3cm,y=1.3cm] \node (1) at (0,1) {$A_i$}; \node (2) at (1,1) {$B_i$}; \node (3) at (0,0) {$\bigoplus A_i$}; \node (4) at (1,0) {$E_i$}; \begin{scope}[xshift=1.5cm,yshift=0cm] \node (6) at (315:1.3cm) {$X$} ; \end{scope} \draw[->, thin] (1) to node {$\alpha _i$} (2); \draw[->, thin] (2) to node {$\mu ' _i$} (4); \draw[->, thin] (1) to node {$\mu _i$} (3); \draw[->, thin] (3) to node [below] {$f _i$} (4); \draw[->, dashed] (3) to [out=300,in=180] node {$\alpha$} (6); \draw[->, thin] (2) to [out=330,in=90] node [right] {$g_i$} (6); \draw[->, dashed] (4) to node [left] {$\gamma _i$} (6); \end{tikzpicture} \] \end{minipage}}\hfill{ \begin{minipage}[t]{0.65\columnwidth We will show that $\colim[f_{i}]=\bigoplus_{i\in I}B_{i}$ to conclude that $\bigoplus_{i\in I}\alpha_{i}$ is a monomorphism. Indeed, consider a family of morphisms $\left\{ g_{i}:B_{i}\rightarrow X\right\} _{i\in I}$. By the universal property of the coproduct $\bigoplus_{i\in I}A_{i}$, there is a unique morphism $\alpha:\bigoplus_{i\in I}A_{i}\rightarrow X$ such that $g_{i}\alpha_{i}=\alpha\mu_{i}\forall i\in I$. Now, by the universal property of the pushout on the last equality, for every $i\in I$ there is a unique morphism $\gamma_{i}:E_{i}\rightarrow X$ such that $g_{i}=\gamma_{i}\mu_{i}'$ and $\alpha=\gamma_{i}f_{i}$ \end{minipage}\\ Before going further, consider the co-compatible family of morphisms associated to the colimit $\left\{ u_{k}:E_{k}\rightarrow\colim[f_{i}]\right\} _{k\in I}$. Observe that, by the universal property of the colimit on the last equalities, there is a unique morphism $\Lambda:\colim[f_{i}]\rightarrow X$ such that $\Lambda u_{i}=\gamma_{i}\,\forall i\in I$ and $\Lambda f=\alpha$. 
\\ \begin{minipage}[t]{0.55\columnwidth} In particular, if $X=\bigoplus_{i\in I}B_{i}$ and $\left\{ g_{i}:B_{i}\rightarrow\bigoplus_{i\in I}B_{i}\right\} _{i\in I}$ is the set of canonical inclusions, there is a unique morphism $\Lambda:\colim[f_{i}]\rightarrow\bigoplus_{i\in I}B_{i}$ such that $\Lambda u_{i}=\gamma_{i}\,\forall i\in I$ and $\Lambda f=\alpha$. Furthermore, by the universal property of the coproduct $\bigoplus_{i\in I}B_{i}$, there is a unique morphism $\Lambda':\bigoplus_{i\in I}B_{i}\rightarrow\colim[f_{i}]$ such that $\Lambda'g_{i}=u_{i}\mu'_{i}\,\forall i\in I$. We shall now prove that $\Lambda$ is an isomorphism and $\Lambda'=\Lambda^{-1}$. Observe that \[ \Lambda\Lambda'g_{i}=\Lambda u_{i}\mu'_{i}=\gamma_{i}\mu'_{i}=g_{i}\,\forall i\in I\mbox{.} \] Hence, by the universal property of the coproduct $\bigoplus_{i\in I}B_{i}$, we can conclude that $\Lambda\Lambda'=1$. Next, we prove that $\Lambda'\Lambda=1$. Observe that \end{minipage}\hfill{ \fbox{\begin{minipage}[t]{0.4\columnwidth} \[ \begin{tikzpicture}[-,>=to,shorten >=1pt,auto,node distance=1.5cm,main node/.style=,x=1.5cm,y=1.5cm] \node (1) at (0,1) {$A_i$}; \node (2) at (1,1) {$B_i$}; \node (3) at (0,0) {$\bigoplus A_i$}; \node (4) at (1,0) {$E_i$}; \node (5) at (300:1.5cm) {$\operatorname{colim}f_i$}; \node (6) [below of=5] {$X$}; \node (7) [below of=6] {$\operatorname{colim}f_i$}; \draw[->, thin] (1) to node {$\alpha _i$} (2); \draw[->, thin] (2) to node {$\mu ' _i$} (4); \draw[->, thin] (1) to node {$\mu _i$} (3); \draw[->, thin] (3) to node [below] {$f _i$} (4); \draw[->, thin] (3) to node [below left] {$f $} (5); \draw[->, thin] (4) to node {$u_i $} (5); \draw[->, dashed] (5) to node {$\Lambda$} (6); \draw[->, dashed] (6) to node {$\Lambda'$} (7); \draw[->, thin] (1) to [out=210,in=150] node [left] {$$} (6); \draw[->, thin] (3) to [out=240,in=150] node [right] {$\alpha$} (6); \draw[->, thin] (2) to [out=330,in=30] node [above left] {$g_i$} (6); \draw[->, thin] (2) to [out=330,in=30] node
[below left] {$u_i \mu ' _i$} (7); \draw[->, thin] (4) to [out=300,in=30] node [left] {$\gamma _i$} (6); \end{tikzpicture} \] \end{minipage}} \[ u_{i}\mu'_{i}\alpha_{i}=u_{i}f_{i}\mu_{i}=f\mu_{i}\,\forall i\in I, \] and also that \[ u_{i}\mu'_{i}\alpha_{i}=\Lambda'g_{i}\alpha_{i}=\Lambda'\alpha\mu_{i}=\left(\Lambda'\Lambda f\right)\mu_{i}\,\forall i\in I. \] Hence, by the last equalities and the universal property of the coproduct $\bigoplus_{i\in I}A_{i}$, we can conclude that $f=\Lambda'\Lambda f$. Furthermore, observe that \[ (u_{i})\mu'_{i}=u_{i}\mu'_{i}\mbox{ and }\left(u_{i}\right)f_{i}=f\,\forall i\in I\mbox{,} \] and also that \[ (\Lambda'\Lambda u_{i})\mu'_{i}=\Lambda'\gamma_{i}\mu'_{i}=\Lambda'g{}_{i}=u_{i}\mu'_{i}\mbox{ and }\left(\Lambda'\Lambda u_{i}\right)f_{i}=\Lambda'\Lambda f=f\,\forall i\in I\mbox{.} \] Hence, by the last equalities and the universal property of the pushout $(E_{i},f_{i},\mu'_{i})$, we can conclude that $\Lambda'\Lambda u_{i}=u_{i}$. Now, it follows from the universal property of the colimit that $\Lambda'\Lambda=1$. Therefore, $\Lambda$ is an isomorphism and $\Lambda'=\Lambda^{-1}$. By the last remark, without loss of generality $\colim[f_{i}]=\bigoplus_{i\in I}B_{i}$, $\Lambda=1=\Lambda'$, and $g_{i}=u_{i}\mu'_{i}\,\forall i\in I$. Now, observe that \[ f\mu_{i}=g_{i}\alpha_{i}\,\forall i\in I\mbox{.} \] Hence, by the universal property of the coproduct $\bigoplus_{i\in I}A_{i}$, we can conclude that $f=\bigoplus_{i\in I}\alpha_{i}$. Therefore, $\bigoplus_{i\in I}\alpha_{i}$ is a monomorphism. \end{proof} We have the following equivalences. \begin{thm} Let $\mathcal{C}$ be an Ab3 category. Then, the following statements are equivalent: \begin{enumerate} \item $\mathcal{C}$ is an Ab4 category. \item The correspondence $\Psi:\Ext[\bigoplus_{i\in I}X_{i}][1][][Y]\rightarrow\prod_{i\in I}\Ext[X_{i}][1][][Y]$ is bijective for every $Y\in\mathcal{C}$ and every set of objects $\left\{ X_{i}\right\} _{i\in I}$.
\item The correspondence $\Psi_{n}:\Ext[\bigoplus_{i\in I}X_{i}][n][][Y]\rightarrow\prod_{i\in I}\Ext[X_{i}][n][][Y]$ is bijective for every $Y\in\mathcal{C}$, every set of objects $\left\{ X_{i}\right\} _{i\in I}$, and every $n>0$. \end{enumerate} \end{thm} \begin{proof} It follows from \ref{prop:extn coprods arb} and \ref{thm:Ab4 vs ext}. \end{proof} \begin{rem} For an example of an Ab3 category that is not Ab4, see the dual category of \cite[Example A.4]{vcoupek2017cotilting}. \end{rem} By duality we have the following result. \begin{thm} Let $\mathcal{C}$ be an Ab3{*} category. Then, the following statements are equivalent: \begin{enumerate} \item $\mathcal{C}$ is an Ab4{*} category. \item The correspondence $\Psi:\Ext[Y][1][][\prod_{i\in I}X_{i}]\rightarrow\prod_{i\in I}\Ext[Y][1][][X_{i}]$ is bijective for every $Y\in\mathcal{C}$ and every set of objects $\left\{ X_{i}\right\} _{i\in I}$. \item The correspondence $\Psi_{n}:\Ext[Y][n][][\prod_{i\in I}X_{i}]\rightarrow\prod_{i\in I}\Ext[Y][n][][X_{i}]$ is bijective for every $Y\in\mathcal{C}$, every set of objects $\left\{ X_{i}\right\} _{i\in I}$, and every $n>0$. \end{enumerate} \end{thm} \section*{Acknowledgements} I wish to thank my advisor Octavio Mendoza for encouraging me to publish this work and for proofreading the article. I am also grateful to Sergio Estrada whose comments greatly improved the quality of this paper. \bibliographystyle{plain}
\section{Introduction} \setcounter{equation}{0} In this paper we consider reconstruction of signals of the following a priori known form: \begin{equation}\label{equation_decoupling_model} F(x)=\sum_{j=1}^k \sum_{q=1}^{q_j} a_{jq} f_j(x-x_{jq}), \end{equation} with $a_{jq}\in \mathbb{R}, \ x_{jq}=(x^1_{jq},\ldots,x^n_{jq}) \in {\mathbb R}^n.$ We assume that the signals $f_1,\dots, f_k: \mathbb{R}^n\to\mathbb{R}$ are known (in particular, their Fourier transforms ${\cal F}(f_j)$ are known), while $a_{jq}, \ x_{jq}$ are the unknown signal parameters, which we want to find from Fourier samples of $F$. We explicitly assume here that $k\geq 2$. So the usual methods which allow one to solve this problem ``in closed form'' in the case of shifts of a single function (see \cite{Vet,Bat.Sar.Yom,Sar}) are not directly applicable. Still, we shall show that in many cases an explicit reconstruction from a relatively small collection of Fourier samples of $F$ is possible. Practical importance of signals as above is well recognized in the literature: for some discussions and similar settings see, e.g. \cite{Vet,gedalyahu2011multichannel, peter2011nonlinear}. We follow a general line of the ``Algebraic Sampling'' approach (see \cite{Vet,sig_ack,Bat.Yom1} and references therein), i.e. we reconstruct the values of the unknown parameters, solving a system of non-linear equations, imposed by the measurements (system (\ref{equation_uncoupled_system}) below). The equations in this system appear as we equate the ``symbolic'' expressions of the Fourier samples, obtained from (\ref{equation_decoupling_model}), to their actual measured values. Our specific strategy is as follows: we choose a sampling set $S_r \subset {\mathbb R}^n, \ r=1,\ldots,k,$ in a special way, in order to ``decouple'' (\ref{equation_uncoupled_system}), and to reduce it to $k$ separate systems, each including only one of the signals $f_r$. 
To achieve this goal we take $S_r$ to be a subset of the common set of zeroes of the Fourier transforms ${\cal F}(f_\ell), \ \ell\ne r$. The decoupled systems turn out to be of a ``generalized Prony'' type: \begin{equation}\label{equation_decoupled1} \sum_{j=1}^{N} a_j y_j^{s_\ell} = m_\ell, \quad \ell=1,2,\dots, \ s_\ell \in S\subset {\mathbb R}^n. \end{equation} The standard Prony system, where the sample set $S$ is the set of integer points in a cube of a prescribed size, allows for a solution ``in closed form'' (see, for example, \cite{Bat.Sar.Yom,rao1992mbp,Sar,stoica2005spectral} and references therein). We are not aware of any method for an explicit solution of generalized Prony systems. However, ``generic'' solution methods can be applied. Their robustness can be estimated via Tur\'an-Nazarov inequality for exponential polynomials and its discrete version (\cite{Fri.Yom,Naz}). Some initial results in this direction have been presented in \cite{Sar,Bat.Sar.Yom}. Below we further extend these results, restricting ourselves to the uniqueness problem only. \section{Reconstruction System and its Decoupling}\label{subsection_decoupling_system} \setcounter{equation}{0} For $F$ of the form (\ref{equation_decoupling_model}) and for any $s=(s^1,\ldots,s^n)\in {\mathbb R}^n$ we have for the sample of the Fourier transform ${\cal F}(F)$ at $s$ \begin{eqnarray*} &{\cal F}(F)(s)&= \int_{{\mathbb R}^n} e^{-2\pi isx} F(x)dx\\ &&=\sum_{j=1}^k \sum_{q=1}^{q_j} a_{jq} e^{-2\pi isx_{jq}}{\cal F}(f_j)(s). 
\end{eqnarray*} So taking samples at the points $s_\ell=(s^1_\ell,\ldots,s^n_\ell)$ of the sample set $S=\{s_1,\dots,s_m\}$, and denoting the vector $e^{-2\pi ix_{jq}}=(e^{-2\pi ix^1_{jq}},\dots,e^{-2\pi ix^n_{jq}})$ by $y_{jq}=(y^1_{jq},\dots, y^n_{jq})$ we get our reconstruction system in the form \begin{equation}\label{equation_uncoupled_system} \sum_{j=1}^k \sum_{q=1}^{q_j} a_{jq} {\cal F}(f_j)(s_\ell)y_{jq}^{s_\ell} = {\cal F}(F)(s_\ell), \ \ell=1,\dots,m, \end{equation} in the standard multi-index notations. In system (\ref{equation_uncoupled_system}) the right hand sides ${\cal F}(F)(s_\ell)$ are the known measurements, while the Fourier samples ${\cal F}(f_j)(s_\ell)$ are known by our assumptions. The unknowns in (\ref{equation_uncoupled_system}) are the amplitudes $a_{jq}$ and the shifts $x_{jq}$, encoded in the vectors $y_{jq}$. In the case $k=1$ we could divide the equations in (\ref{equation_uncoupled_system}) by ${\cal F}(f_1)(s_\ell)$ and obtain directly a Prony-like system. However, for $k\geq 2$ this transformation usually is not applicable. Instead we ``decouple" system (\ref{equation_uncoupled_system}) with respect to the signals $f_1,\ldots,f_k$ using the freedom in the choice of the sample set $S$. Let \[Z_\ell=\bigl\{x\in {\mathbb R}^n, \ {\cal F}(f_\ell)(x)=0\bigr\}\] denote the set of zeroes of the Fourier transform ${\cal F}(f_\ell)$. For each $r=1,\dots,k$ we take the sampling set $S_r$ to be a subset of the set \[W_r=(\bigcap_{\ell\ne r} Z_\ell)\setminus Z_r\] of common zeroes of the Fourier transforms ${\cal F}(f_\ell), \ \ell\ne r$, but not of ${\cal F}(f_r)$. For such $S_r$ all the equations in (\ref{equation_uncoupled_system}) vanish, besides those with $j=r$. 
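To make the decoupling concrete, consider a hypothetical one-dimensional instance (our own illustration, not an example from the text): take $f_1$ and $f_2$ to be the indicator functions of $[0,1/2]$ and $[0,1/3]$, so that ${\cal F}(f_1)$ vanishes exactly at the nonzero even integers and ${\cal F}(f_2)$ exactly at the nonzero multiples of $3$; hence the odd multiples of $3$ lie in $W_1$. The NumPy sketch below samples ${\cal F}(F)$ on the arithmetic subset $\{3,9,15,21\}\subset W_1$, divides by ${\cal F}(f_1)$, and solves the resulting system by the classical Prony (annihilating-filter) method; the signal widths, the sample set, the shift range $[0,1/6)$, and all function names are our choices for this sketch.

```python
import numpy as np

def ft_box(T, s):
    """F(1_{[0,T]})(s) = (1 - e^{-2 pi i s T}) / (2 pi i s); zeros at s = k/T, k != 0."""
    s = np.asarray(s, dtype=float)
    return (1 - np.exp(-2j * np.pi * s * T)) / (2j * np.pi * s)  # assumes s != 0

def prony(c, N):
    """Solve c_l = sum_{q=1}^N b_q z_q^l, l = 0..2N-1 (classical Prony method)."""
    c = np.asarray(c, dtype=complex)
    hankel = np.array([[c[i + j] for j in range(N)] for i in range(N)])
    p = np.linalg.solve(hankel, -c[N:2 * N])        # annihilating-filter coefficients
    z = np.roots(np.concatenate(([1.0], p[::-1])))  # nodes z_q = roots of the filter
    V = np.vander(z, 2 * N, increasing=True).T      # V[l, q] = z_q^l
    b = np.linalg.lstsq(V, c, rcond=None)[0]        # amplitudes from Vandermonde system
    return b, z

# Synthetic measurements for F = shifts of f1 (width 1/2) plus shifts of f2 (width 1/3).
a1, x1 = np.array([1.0, 2.0]), np.array([0.02, 0.10])  # unknowns attached to f1
a2, x2 = np.array([3.0]), np.array([0.05])             # nuisance terms attached to f2
N = 2
S = 3.0 + 6.0 * np.arange(2 * N)     # {3, 9, 15, 21}: zeros of F(f2) but not of F(f1)

FF = (ft_box(0.5, S) * (a1 * np.exp(-2j * np.pi * np.outer(S, x1))).sum(axis=1)
      + ft_box(1 / 3, S) * (a2 * np.exp(-2j * np.pi * np.outer(S, x2))).sum(axis=1))

# Decoupling: on S the f2 terms vanish, and dividing by F(f1) leaves a Prony system
# c_l = sum_q a_q y_q^(3 + 6 l) with y_q = e^{-2 pi i x_q}.
c = FF / ft_box(0.5, S)

b, z = prony(c, N)                                   # z_q = y_q^6, b_q = a_q y_q^3
x_rec = np.mod(-np.angle(z) / (12 * np.pi), 1 / 6)   # shifts, unique in [0, 1/6)
a_rec = (b * np.exp(6j * np.pi * x_rec)).real        # a_q = b_q / y_q^3
```

Since the samples form an arithmetic progression of step $6$, the nodes are recovered as $z_q=y_q^{6}$, which determines the shifts only modulo $1/6$; this is why the shifts are restricted to $[0,1/6)$ in this sketch.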
Hence we obtain: \begin{proposition}\label{proposition_coupling} Let for each $r=1,\dots,k$ the sampling set $S_r$ satisfy \[S_r=\{s_{r1},\dots,s_{rm_r}\}\subset W_r.\] Then for each $r$ the corresponding system (\ref{equation_uncoupled_system}) on the sample set $S_r$ takes the form \begin{equation}\label{equation_decoupled} \sum_{q=1}^{q_r} a_{rq} y_{rq}^{s_{r\ell}} = c_{r\ell}(F), \ \ell=1,\dots,m_r, \end{equation} where $c_{r\ell}(F)= {{{\cal F}(F)(s_{r\ell})}/{{\cal F}(f_r)(s_{r\ell})}}$. $\square$ \end{proposition} \smallskip So (\ref{equation_uncoupled_system}) is decoupled into $k$ generalized Prony systems (\ref{equation_decoupled}), each involving the shifts of the single signal $f_r$ only. The problem is that some (or all) of the sets $W_r$ may be too small, and the resulting systems (\ref{equation_decoupled}) will not allow us to reconstruct the unknowns $a_{rq}$ and $y_{rq}$. Another problem is the instability of zero finding, which may lead to only approximate zeroes of the Fourier transforms. At present we have only initial results outlining the applicability of the Fourier decoupling method (\cite{Sar}). Even in a ``good'' case, where the zero sets $Z_\ell$ of the Fourier transforms ${\cal F}(f_\ell), \ \ell=1,\ldots,k,$ are nonempty $(n-1)$-dimensional hypersurfaces meeting one another transversally, for $k>n+1$ the intersection of $Z_\ell, \ \ell\ne r,$ is empty. So the resulting systems (\ref{equation_decoupled}) contain no equations. Hence we can apply the above decoupling only for $k \leq n+1$. \medskip Some specific examples, as well as an investigation of the conditions on $f_1,\ldots,f_k$ which provide solvability of the systems (\ref{equation_decoupled}), were presented in \cite{Sar}. In the one-dimensional case $(n=1, k=2)$ these conditions can be given explicitly.
In this case $W_1=W_1(f_1,f_2)$ consists of zeroes of ${\cal F}(f_2)$ which are not zeroes of ${\cal F}(f_1)$, and $W_2=W_2(f_1,f_2)$ consists of zeroes of ${\cal F}(f_1)$ which are not zeroes of ${\cal F}(f_2)$. The following result has been proved (for real Prony systems) in \cite{Sar}. Here we extend it to the case of system (\ref{equation_decoupled}), which has purely imaginary exponents. The constant $2N$ below is sharp, in contrast with the constant $C(d,n)$ in the (multidimensional) Theorem \ref{span} below. \smallskip Let $n=1$, $k=2$, and $q_1=q_2=N$ in (\ref{equation_decoupling_model}). Assume that for the signals $f_1,f_2$ in (\ref{equation_decoupling_model}) each of the sets $W_1$ and $W_2$ contains at least $2N$ elements. Let $D_j, \ j=1,2,$ be the length of the shortest interval $\Delta_j$ such that $S_j=\Delta_j\cap W_j$ contains exactly $2N$ elements, and let $\rho_j={1\over {D_j}}$. \begin{theo} \label{solv.cond} For shifts $x_{jq}$ in the interval $[0,\rho_j), \ j=1,2,$ systems (\ref{equation_decoupled}) with the sampling sets $S_1,S_2$ are uniquely solvable. \end{theo} \noindent{\bf Proof: } Let us fix $j=1$. The proof for $j=2$ is the same. The substitution $y_{1q}=e^{-2\pi i x_{1q}}$ associates to a solution $(a_{1q},y_{1q}), \ q=1,\ldots,N,$ of (\ref{equation_decoupled}) an exponential polynomial $H(s)=\sum_{q=1}^N a_{1q}e^{-2\pi i x_{1q}s}$ with purely imaginary exponents. If (\ref{equation_decoupled}) has two different solutions, the corresponding exponential polynomials $H_1(s)$ and $H_2(s)$ are equal for each $s \in S_1.$ Hence $S_1$ is a set of zeroes of $H_2(s)-H_1(s)$, which is an exponential polynomial of order at most $2N$. On the other hand, by Langer's lemma (Lemma 1.3 in \cite{Naz}) such a polynomial can have in each interval of length $D$ at most $2N-1+{{\rho D}\over {2\pi}}$ zeroes, where $\rho$ is the maximum of the absolute values of the exponents. In our case $D=D_1$ and $\rho < 2\pi\rho_1={{2\pi}\over {D_1}}$.
Hence ${{\rho D}\over {2\pi}}$ is strictly less than $1$, and so the number of zeroes of $H_2-H_1$ is at most $2N-1$, in contradiction with the assumptions. $\square$ \section{Examples}\label{examples} \setcounter{equation}{0} Some examples of Fourier decoupling have been presented in \cite{Sar}. In these examples the sets $W_r$ are ``large enough'' to reduce the problem (with the number of allowed shifts fixed but arbitrarily large) to a set of decoupled standard Prony systems. \smallskip In dimension one we can take, for example, $f_1$ to be the characteristic function of the interval $[-1,1],$ while $f_2(x)=\delta(x-1)+\delta(x+1).$ So we consider signals of the form \begin{equation}\label{equation_decoupling_model1} F(x)= \sum_{q=1}^{N} [a_{1q} f_1(x-x_{1q})+a_{2q} f_2(x-x_{2q})]. \end{equation} Easy computations show that \[ {\cal F}(f_1)(s)=\sqrt{\frac{2}{\pi}}\frac{\sin s}{s} \] and \[ {\cal F}(f_2)(s)=\sqrt\frac{2}{\pi}\cos s. \] So the zeros of the Fourier transform of $f_1$ are the points $\pi n,\ n\in {\mathbb Z}\setminus \{0\}$ and those of $f_2$ are the points $({1\over 2}+n)\pi,\ n\in {\mathbb Z}$. These sets do not intersect, so $W_1=\{({1\over 2}+n)\pi,\ n\in {\mathbb Z}\}$ and $W_2=\{\pi n,\ n\in {\mathbb Z}\setminus \{0\}\}$. Since $W_1$ and $W_2$ are just scaled and shifted copies of the integers ${\mathbb Z}$, the generalized Prony systems in (\ref{equation_decoupled}) are actually the standard ones. For $f_2$ the system (\ref{equation_decoupled}) takes the form \[ \frac{{\cal F}(F)(\pi n)}{\sqrt\frac{2}{\pi}(-1)^n}=\sum_{q=1}^Na_{2q}({y_{2q}})^{ \pi n}, \ n\in {\mathbb Z}\setminus \{0\}. \] If we denote $M_n=\frac{{\cal F}(F)(\pi n)}{\sqrt\frac{2}{\pi}(-1)^n}$ , $A_q=a_{2q}$ and $\eta_q=(y_{2q})^\pi$ we get the usual Prony system \[ M_n=\sum_{q=1}^NA_q\eta_q^n\ , \ n\in {\mathbb Z}\setminus \{0\}. \] For $f_1$ we get \[ \frac{{\cal F}(F)(({1\over 2} +n)\pi)}{\sqrt\frac{2}{\pi}\frac{(-1)^{n}} {({1\over 2} +n)\pi}}=\sum_{q=1}^Na_{1q}({y_{1q}})^{({1\over 2} +n)\pi} \ , \ n\in {\mathbb Z}.
\] In this case we denote $\mu_n=\frac{{\cal F}(F)(({1\over 2} +n)\pi)}{\sqrt\frac{2}{\pi} \frac{(-1)^{n}}{({1\over 2} +n)\pi}},\ \alpha_q=a_{1q}(y_{1q})^{\frac\pi2}$ and $\xi_q=(y_{1q})^\pi$ and we get again the usual Prony system \[ \mu_n=\sum_{q=1}^N\alpha_q\xi_q^n,\ n\in {\mathbb Z}. \] Solving these two systems by any standard method will give us the translations and amplitudes of the functions $f_1, f_2$. Notice that a possible non-uniqueness of the solutions is introduced here by the substitutions $\eta_q=(y_{2q})^\pi$ and $\xi_q=(y_{1q})^\pi$. \smallskip In dimension two we can take, in particular, $f_1,f_2,f_3$ to be the characteristic functions of the three squares: $Q_1=[-3,3]^2, \ Q_2=[-5,5]^2,$ and $Q_3$ which is the rotation of the square $[-\sqrt 2,\sqrt 2]^2$ by ${\pi \over 4}$. So we put \begin{equation} \chi_j(x) =\left\{\begin{array}{lr}1&x\in Q_j \\0&x\not\in Q_j \end{array}\right. \end{equation} and consider signals of the form \begin{equation}\label{equation_decoupling_model2} F(x)=\sum_{j=1}^3 \sum_{q=1}^{q_j} a_{jq} \chi_j(x-x_{jq}), \quad \text{ with } a_{jq}\in \mathbb{R},\; x_{jq} \in {\mathbb R}^2. \end{equation} The following result is proved in \cite{Sar}: \smallskip \begin{proposition} The zero sets $Z_1,Z_2$ and $Z_3$ of the Fourier transforms of the three functions $\chi_1,\chi_2$ and $\chi_3$ intersect each other in such a way that the decoupling procedure based on the sets $W_1=(Z_2\cap Z_3)\setminus Z_1, W_2=(Z_3\cap Z_1)\setminus Z_2$ and $W_3=(Z_1\cap Z_2)\setminus Z_3$ provides three standard Prony systems for the shifts of each of the functions.
\end{proposition} \smallskip \noindent{\bf Sketch of the proof:} A simple calculation gives \begin{equation}\begin{array}{c} {\cal F}(\chi_1)(\omega,\rho)=4\frac{\sin 3\omega}{\omega}\cdot\frac{\sin 3\rho}{\rho}\\ {\cal F}(\chi_2)(\omega,\rho)=4\frac{\sin 5\omega}{\omega}\cdot\frac{\sin 5\rho}{\rho}\\ {\cal F}(\chi_3)(\omega,\rho)=8\frac{\sin \frac{\omega+\rho}2}{\frac{\omega+\rho}2}\cdot\frac{\sin \frac{\omega-\rho}2}{\frac{\omega-\rho}2}. \end{array} \end{equation} So $Z_1$ is the union of the horizontal or vertical lines crossing the Fourier plane's axes at $(0,\frac{n\pi}{3})$ or $(\frac{n\pi}{3},0)$ respectively, for all nonzero integers $n$. Similarly for $Z_2$, with the only difference that the lines cross the axes at $(0,\frac{n\pi}5)$ or $(\frac{n\pi}5,0)$. \\$Z_3$ is the union of lines with slopes $1$ or $-1$ crossing the $\omega$ axis at $2\pi n$ for some nonzero integer $n$. Hence for any two integers $n$ and $m$ we have $(\frac{(1+5n)\pi}5,\frac{(1+5n)\pi}5)\in S_1, (\frac{(1+3m)\pi}3,\frac{(1+3m)\pi}3)\in S_2$ and since $\frac{1+3m}3\pm\frac{1+5n}5$ is not an integer, $(\frac{(1+3m)\pi}3,\frac{(1+5n)\pi}5)\in S_3$. These three points form a triangle which repeats itself as a periodic pattern. Appropriate transformations now bring the decoupled systems (\ref{equation_decoupled}) to the form of the standard two-dimensional Prony system. See \cite{Sar,Bat.Sar.Yom} for a new approach to solving such systems and for the results of numerical simulations. $\square$ \section{Uniqueness of Reconstruction}\label{Non-uniform} \setcounter{equation}{0} Application of Proposition \ref{proposition_coupling} prescribes the choice of sample points from the common zeroes of the Fourier transforms ${\cal F}(f_j)$. So the geometry of the sample sets $S_r$ may be complicated, and the known results on unique solvability of the standard Prony system (\cite{Bat.Sar.Yom,Bat.Yom,rao1992mbp,stoica2005spectral}) are not directly applicable.
Non-uniform sampling in Prony-type systems is also essential in other problems of algebraic signal reconstruction. In particular, it recently appeared as a key point in a proof of the Eckhoff conjecture, related to the accuracy of reconstruction of piecewise-smooth functions from their Fourier samples (\cite{Bat}). There are results on the behavior of exponential polynomials on arbitrary sets, which can provide important information on unique solvability and robustness of the generalized Prony system. In particular, this concerns the Tur\'an-Nazarov inequality (\cite{Naz}), and its extension to discrete sets obtained in \cite{Fri.Yom}. In this last paper, for each set $S$ a quantity $\omega_D(S)$ has been introduced, measuring, essentially, the robustness of solvability of a generalized Prony system with the sample points $s_\ell\in S$. Here $D$ comprises the ``discrete'' parameters of the Prony system to be solved. $\omega_D(S)$ can be explicitly estimated in terms of the metric entropy of $S$ (see below), and we expect that in many important cases the quantity $\omega_D(W_r)$ for the sets $W_r$ of common zeroes of the Fourier transforms ${\cal F}(f_j)$ can be effectively bounded from below. Some initial results and discussions in this direction, mainly in dimension one, are presented in \cite{Sar,Bat.Yom1}. In the present paper we do not consider robustness of the Prony system, but provide a new multi-dimensional result on the uniqueness of solutions, along the lines of \cite{Sar,Fri.Yom} and Theorem \ref{solv.cond} above. \smallskip Let us recall that for $Z$ a bounded subset of ${\mathbb R}^n,$ and for $\epsilon>0$ the covering number $M(\epsilon,Z)$ is the minimal number of $\epsilon$-balls in ${\mathbb R}^n,$ covering $Z$. The $\epsilon$-entropy $H(\epsilon,Z)$ is the binary logarithm of $M(\epsilon,Z)$.
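Both quantities are directly computable for finite sets. The sketch below (illustrative code only; the function names are ours) estimates the covering number by the standard greedy net construction, whose output $G$ satisfies $M(\epsilon,Z)\le G\le M(\epsilon/2,Z)$, which suffices for order-of-magnitude entropy estimates:

```python
import numpy as np

def greedy_net_size(Z, eps):
    # Size of a greedy eps-net of a finite point set Z (rows = points in
    # R^n): repeatedly take the first uncovered point as a center and
    # discard everything within distance eps of it.  The centers are
    # pairwise > eps apart and their eps-balls cover Z, so the count G
    # sandwiches the covering number:  M(eps, Z) <= G <= M(eps/2, Z).
    Z = np.asarray(Z, dtype=float)
    uncovered = np.ones(len(Z), dtype=bool)
    count = 0
    while uncovered.any():
        i = np.flatnonzero(uncovered)[0]
        uncovered &= np.linalg.norm(Z - Z[i], axis=1) > eps
        count += 1
    return count

def entropy_estimate(Z, eps):
    # eps-entropy H(eps, Z): binary logarithm of the covering number.
    return np.log2(greedy_net_size(Z, eps))

# Sanity check on a one-dimensional set in the plane (a circle of
# radius 1): the count grows like R/eps, matching the exponent
# n - 1 = 1 appearing below for zero curves in R^2.
t = np.linspace(0, 2 * np.pi, 2000, endpoint=False)
circle = np.column_stack([np.cos(t), np.sin(t)])
print(greedy_net_size(circle, 0.1), entropy_estimate(circle, 0.1))
```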
\smallskip Let $H(s)=\sum_{j=1}^d a_je^{\lambda_j\cdot s},$ with $a_j\in {\mathbb R}, \ \lambda_j=(\lambda_{j1},\ldots,\lambda_{jn})\in {\mathbb R}^n,$ be a real exponential polynomial in $s\in {\mathbb R}^n.$ Denote by $Z(H)$ the set of zeroes of $H$ in ${\mathbb R}^n$, and let $Q^n_R$ be the cube in ${\mathbb R}^n$ with edge $R$. The following result is a special case of Lemma 3.3 proved in \cite{Fri.Yom}: \begin{proposition} \label{entropy} For each $R>0,$ and $\epsilon$ with $R>\epsilon>0$ we have $M(\epsilon,Z(H)\cap Q^n_R)\leq C(d,n)({R\over \epsilon})^{n-1}$. $\square$ \end{proposition} \smallskip The explicit expression for $C(d,n)$ is given in \cite{Fri.Yom}, via Khovanskii's bound (\cite{Kho}) for ``fewnomial'' systems. Consider now a generalized Prony system (\ref{equation_decoupled1}) with a finite set $S$ of allowed samples: \begin{equation} \label{Pron} \sum_{j=1}^{N} a_j y_j^{s_\ell} = m_\ell, \ s_\ell \in S=\{s_1,\ldots,s_m \}\subset {\mathbb R}^n. \end{equation} We shall consider only real solutions of (\ref{Pron}) with each $y_j$ having all coordinates positive. \begin{theo} \label{span} Let $S=\{s_1,\ldots,s_m \}\subset Q_R^n$ be given, such that for a certain $\epsilon>0$ we have $M(\epsilon,S)> C(2N,n)({R\over \epsilon})^{n-1}.$ Then system (\ref{Pron}) has at most one solution. \end{theo} \noindent{\bf Proof: } Associate to a solution $(a_j,y_j), \ j=1,\ldots,N,$ of (\ref{Pron}) an exponential polynomial $H(s)=\sum_{j=1}^N a_je^{\lambda_j\cdot s},$ where $y_j=e^{\lambda_j}, \ \lambda_j\in {\mathbb R}^n.$ If (\ref{Pron}) has two different solutions, the corresponding exponential polynomials $H_1(s)$ and $H_2(s)$ are equal for each $s=s_\ell\in S.$ Hence $S$ is a set of zeroes of $H_2(s)-H_1(s)$, which is an exponential polynomial of order at most $2N$. By Proposition \ref{entropy} we have $M(\epsilon,S) \leq C(2N,n)({R\over \epsilon})^{n-1}$ for each $\epsilon>0$, in contradiction with the assumptions of the theorem.
$\square$ \smallskip Informally, Theorem \ref{span} claims that finite sets $S$ which cover (in ``resolution $\epsilon$'', for some $\epsilon>0$) a significant part of the cube $Q_R^n$ are uniqueness sets of the Prony system. The condition of Theorem \ref{span} on the sampling set $S$ is quite robust with respect to the geometry of $S$, so we can explicitly verify it in many cases. In particular, for non-regular lattices we get the following result: \begin{definition} For fixed positive $\alpha < {1\over 2}$ and $h>0,$ a set $Z' \subset {\mathbb R}^n$ is called an $(\alpha,h)$-net if it possesses the following property: there exists a regular grid $Z$ with the step $h$ in ${\mathbb R}^n$ such that for each $z'\in Z'$ there is $z\in Z$ with $||z'-z||\leq \alpha h,$ and for each $z\in Z$ there is $z'\in Z'$ with $||z'-z||\leq \alpha h.$ \end{definition} \begin{corollary} \label{Nonreg} Let $Z' \subset {\mathbb R}^n$ be an $(\alpha,h)$-net. Then for $R> C(2N,n)h(1-2\alpha)^{1-n}$ the set $S=Z'\cap Q_R^n$ is a uniqueness set of the Prony system \eqref{Pron}. \end{corollary} \noindent{\bf Proof: } By definition, for each $z\in Z$ we can find $z'\in Z'$ inside the $\alpha h$-ball around $z$. Clearly, any two such points are $h'=(1-2\alpha)h$-separated. So for each $\epsilon < h'$ we have $M(\epsilon,S)\geq |Z\cap Q_R^n|=({R\over h})^n.$ We conclude that the inequality $({R\over h})^n > C(2N,n)({R\over h'})^{n-1},$ or $R > C(2N,n)h(1-2\alpha)^{1-n}$ implies the condition of Theorem \ref{span}. $\square$ \smallskip The condition of Theorem \ref{span} can be verified in many other situations, under natural assumptions on the sample set $S$. In particular, using the integral-geometric methods developed in \cite{Com.Yom}, it can be checked for the zero sets of Fourier transforms of various types of signals. We plan to present these results separately.
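\smallskip A small numerical check of the separation and counting argument (in dimension $n=2$, with arbitrary illustrative parameters) may be helpful. The sketch below builds a randomly perturbed grid, which is an $(\alpha,h)$-net by construction, and verifies the $h'=(1-2\alpha)h$-separation of the net points and the count $({R\over h})^n$ used in the proof:

```python
import numpy as np

rng = np.random.default_rng(0)

# A perturbed grid Z' inside the square Q_R^2: each net point stays
# within alpha*h of its grid point, so Z' is an (alpha, h)-net.
h, alpha, R = 1.0, 0.2, 10.0
xs = np.arange(0.0, R, h)
grid = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)    # grid Z
jitter = rng.uniform(-alpha * h / np.sqrt(2), alpha * h / np.sqrt(2),
                     size=grid.shape)
net = grid + jitter                                             # net Z'
assert np.all(np.linalg.norm(jitter, axis=1) <= alpha * h)

# Any two net points are h' = (1 - 2*alpha)*h separated, so for
# eps < h' each eps-ball contains at most one of them, and
# M(eps, S) >= |Z cap Q_R^2| = (R/h)^2 -- the count used above.
dist = np.linalg.norm(net[:, None, :] - net[None, :, :], axis=-1)
np.fill_diagonal(dist, np.inf)
h_sep = (1 - 2 * alpha) * h
print(len(net), dist.min(), h_sep)
```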
\smallskip \noindent{\bf Remark} The restriction to only positive solutions of the Prony system is essential for the result of Theorem \ref{span}. Indeed, consider the Prony system \begin{equation} \label {Skolem} a_1x_1^k+a_2x_2^k=m_k, \ k=0,1,\ldots. \end{equation} If we put $a_1=1, \ x_1=1, \ a_2=-1, \ x_2=-1$, then $m_k=1^k-(-1)^k=0$ for each even $k$. So the regular grid of even integers is not a uniqueness set for system (\ref{Skolem}). This fact is closely related to the classical Skolem-Mahler-Lech Theorem (see \cite{Lec,Mey.vdP,Tao} and references therein) which says that the integer zeros of an exponential polynomial are the union of complete arithmetic progressions and a finite number of exceptional zeros. So such sets may be non-uniqueness sample sets for complex Prony systems. \smallskip The proof of the Skolem-Mahler-Lech Theorem relies on non-effective arithmetic considerations. Recently, the problem of obtaining an effective version of this theorem was discussed in \cite{Tao}. This problem may turn out to be important for understanding complex solutions of Prony systems. One can wonder whether the methods of Khovanskii (\cite{Kho}) and Nazarov (\cite{Naz}), as well as their combination in \cite{Fri.Yom}, can be applied here. \vskip1cm \bibliographystyle{plain}
\section{Introduction} The Reidemeister trace is a fundamental invariant in topological fixed point theory, generalizing both the Lefschetz and Nielsen numbers. It was originally defined by Reidemeister in \cite{reid36}. A more modern treatment, under the name ``generalized Lefschetz number,'' was given by Husseini in \cite{huss82}. If $X$ is a finite connected CW-complex with universal covering space $\widetilde X$ and fundamental group $\pi$, then the cellular chain complex $C_q(\widetilde X)$ is a free $\mathbb{Z}\pi$-module. If $f:X \to X$ is a cellular map and $\widetilde f:\widetilde X \to \widetilde X$ is a lift of $f$, then the induced map $\widetilde f_q: C_q(\widetilde X) \to C_q(\widetilde X)$ can be viewed as a matrix with entries in $\mathbb{Z}\pi$ (with respect to some chosen $\mathbb{Z}\pi$ basis for $C_q(\widetilde X)$). We then define \[ \text{\textit{RT}}(f,\widetilde f) = \sum_{q=0}^{\infty} (-1)^q \rho(\tr(\widetilde f_q)), \] where $\tr$ is the sum of the diagonal entries of the matrix, and $\rho$ is the projection into the ``Reidemeister classes'' of $\pi$. The Reidemeister trace, then, is an element of $\mathbb{Z} R$, where $R$ is the set of Reidemeister classes. Wecken, in \cite{weck41}, proved what we will refer to as the \emph{Wecken Trace Theorem}, that \[ \text{\textit{RT}}(f,\widetilde f) = \sum_{[\alpha] \in R} \ind([\alpha])\, [\alpha], \] where $\ind([\alpha])$ is the index of the Nielsen fixed point class associated to $[\alpha]$ (see e.g. \cite{jian83}). Thus the number of terms appearing in the Reidemeister trace with nonzero coefficient is equal to the Nielsen number of $f$, and by the Lefschetz-Hopf Theorem, the sum of the coefficients is equal to the Lefschetz number of $f$. Recent work of Furi, Pera, and Spadini in \cite{fps04} has given a new proof of the uniqueness of the fixed point index on orientable manifolds with respect to three natural axioms. In \cite{stae07} their approach was extended to the coincidence index. 
The result is the following theorem: \begin{thm}\label{indexuniqueness} Let $X$ and $Y$ be oriented differentiable manifolds of the same dimension. The coincidence index $\ind(f,g,U)$ of two mappings $f,g:X \to Y$ over some open set $U \subset X$ is the unique integer-valued function satisfying the following axioms: \begin{itemize} \item (Additivity) If $U_1$ and $U_2$ are disjoint open subsets of $U$ whose union contains all coincidence points of $f$ and $g$ on $U$, then \[ \ind(f,g,U) = \ind(f,g,U_1) + \ind(f,g,U_2). \] \item (Homotopy) If $f$ and $g$ are ``admissably homotopic'' to $f'$ and $g'$, then \[ \ind(f,g,U) = \ind(f',g',U) \] \item (Normalization) If $L(f,g)$ denotes the coincidence Lefschetz number of $f$ and $g$, then \[ \ind(f,g,X) = L(f,g). \] \end{itemize} \end{thm} In the spirit of the above theorem, we demonstrate the existence and uniqueness of a local Reidemeister trace in coincidence theory subject to five axioms. A local Reidemeister trace for fixed point theory was given by Fares and Hart in \cite{fh94}, but no Reidemeister trace (local or otherwise) has appeared in the literature for coincidence theory. We note that recent work by Gon\c calves and Weber in \cite{gw06} gives axioms for the Reidemeister trace in fixed point theory using entirely different methods. Their work uses no locality properties, and is based on axioms for the Lefschetz number by Arkowitz and Brown in \cite{ab04}. In Section \ref{prelim} we present our axiom set, and we prove the uniqueness in coincidence theory in Section \ref{Suniqueness}. In the special case of local fixed point theory, we can obtain a slightly stronger uniqueness result which we discuss in Section \ref{fixpt}. Section \ref{existence} is a demonstration of the existence in the setting of coincidence theory. This paper contains pieces of the author's doctoral dissertation. The author would like to thank his dissertation advisor Robert F. 
Brown for assistance with both the dissertation work and with this paper. The author would also like to thank Peter Wong, who guided the early dissertation work and interested him in the coincidence Reidemeister trace. \section{The Axioms} \label{prelim} Throughout the paper, unless otherwise stated, let $X$ and $Y$ denote connected orientable differentiable manifolds of the same dimension. All maps $f,g:X \to Y$ will be assumed to be continuous. The universal covering spaces of $X$ and $Y$ will be denoted $\widetilde X$ and $\widetilde Y$ with projection maps $p_X: \widetilde X \to X$ and $p_Y:\widetilde Y \to Y$. A \emph{lift} of some map $f:X \to Y$ is a map $\widetilde f:\widetilde X \to \widetilde Y$ with $p_Y \circ \widetilde f = f \circ p_X$. Let $f,g:X \to Y$ be maps, with induced homomorphisms $\phi,\psi:\pi_1(X) \to \pi_1(Y)$ respectively. We will view elements of $\pi_1(X)$ and $\pi_1(Y)$ as covering transformations, so that for any $\widetilde x \in \widetilde X$ and $\sigma \in \pi_1(X)$, we have $\widetilde f(\sigma \widetilde x) = \phi(\sigma) \widetilde f(\widetilde x)$ and $\widetilde g(\sigma\widetilde x) = \psi(\sigma)\widetilde g(\widetilde x)$. We will partition the elements of $\pi_1(Y)$ into equivalence classes defined by the ``doubly twisted conjugacy'' relation: \[ \alpha \sim \beta \iff \alpha = \psi(\sigma)^{-1}\beta \phi(\sigma). \] The equivalence classes with respect to this relation (denoted e.g. $[\alpha]$) are called \emph{Reidemeister classes}. The set of Reidemeister classes is denoted $\mathcal{R}[f,g]$. For any set $S$, let $\mathbb{Z} S$ denote the free abelian group generated by $S$, whose elements we write as sums of elements of $S$ with integer coefficients. For any such abelian group, there is a homomorphism $c:\mathbb{Z} S \to \mathbb{Z}$ defined as the sum of the coefficients: \[ c\left( \sum_i k_i s_i \right) = \sum_i k_i, \] for $s_i \in S$ and $k_i \in \mathbb{Z}$, and $i$ ranging over a finite set. 
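\smallskip When the groups involved are finite, the partition into Reidemeister classes can be computed by brute force directly from the definition. The sketch below (illustrative code; the choice of a finite cyclic group and of the endomorphisms is a hypothetical example, not a situation arising from the manifolds considered here) enumerates the classes of the doubly twisted conjugacy relation on $\mathbb{Z}_n$, written additively:

```python
# Reidemeister classes of a pair of endomorphisms of the finite cyclic
# group Z_n (written additively), computed by brute force from the
# doubly twisted conjugacy relation:
#     alpha ~ beta  iff  alpha = -psi(sigma) + beta + phi(sigma).
def reidemeister_classes(n, phi, psi):
    unassigned = set(range(n))
    classes = []
    while unassigned:
        beta = min(unassigned)
        orbit = {(-psi(s) + beta + phi(s)) % n for s in range(n)}
        classes.append(sorted(orbit))
        unassigned -= orbit
    return classes

# Hypothetical example: phi = multiplication by d, psi = identity.
n, d = 12, 4
cls = reidemeister_classes(n, lambda s: (d * s) % n, lambda s: s)
print(len(cls))   # gcd(3, 12) = 3 classes
```

For $\phi$ equal to multiplication by $d$ and $\psi$ the identity the relation reduces to $\alpha-\beta\in (d-1)\mathbb{Z}_n$, so the classes are exactly the cosets of $(d-1)\mathbb{Z}_n$ and there are $\gcd(d-1,n)$ of them.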
For some maps $f,g:X \to Y$ and an open subset $U\subset X$, let \[ \Coin(f,g,U) = \{ x\in U \mid f(x) = g(x) \}. \] We say that the triple $(f,g,U)$ is \emph{admissable} if $\Coin(f,g,U)$ is compact. Two triples $(f,g,U)$ and $(f',g',U)$ are \emph{admissably homotopic} if there is some pair of homotopies $F_t,G_t:X \times [0,1] \to Y$ of $f,g$ to $f',g'$ with $\{ (x,t) \in U \times [0,1] \mid F_t(x) = G_t(x) \}$ compact. Let $\mathcal C(X,Y)$ be the set of \emph{admissable tuples}, all tuples of the form $(f, \widetilde f, g, \widetilde g, U)$ where $f,g:X \to Y$ are maps, $(f,g,U)$ is an admissable triple, and $\widetilde f$ and $\widetilde g$ are lifts of $f$ and $g$. Let $(f, \widetilde f, g, \widetilde g, U), (f', \widetilde f', g', \widetilde g', U) \in \mathcal C(X,Y)$ with $(f,g,U)$ admissably homotopic to $(f',g',U)$ by homotopies $F_t, G_t$. By the homotopy lifting property, there are unique lifted homotopies $\widetilde F_t, \widetilde G_t: \widetilde X \times [0,1] \to \widetilde Y$ with $\widetilde F_0 = \widetilde f$ and $\widetilde G_0 = \widetilde g$. If we additionally have $\widetilde F_1 = \widetilde f'$ and $\widetilde G_1 = \widetilde g'$, then we say that the tuples $(f, \widetilde f, g, \widetilde g, U)$ and $(f',\widetilde f', g', \widetilde g', U)$ are \emph{admissably homotopic}. Throughout the following, let $\text{\textit{RT}}$ be any function which to an admissable tuple $(f,\widetilde f, g, \widetilde g, U) \in \mathcal C(X,Y)$ associates an element of $\mathbb{Z}\mathcal{R}[f,g]$. Our first three axioms for the local Reidemeister trace are modeled after the axioms of Theorem \ref{indexuniqueness}.
\begin{axiom}[Additivity] Given $(f, \widetilde f, g, \widetilde g, U) \in \mathcal C(X,Y)$, if $U_1$ and $U_2$ are disjoint open subsets of $U$ with $\Coin(f,g,U) \subset U_1 \cup U_2$, then \[ \text{\textit{RT}}(f,\widetilde f, g, \widetilde g, U) = \text{\textit{RT}}(f, \widetilde f, g, \widetilde g, U_1) + \text{\textit{RT}}(f, \widetilde f, g, \widetilde g, U_2). \] \end{axiom} \begin{axiom}[Homotopy] If $(f, \widetilde f, g, \widetilde g, U)$ and $(f',\widetilde f', g',\widetilde g', U)$ are admissably homotopic admissable tuples, then \[ \text{\textit{RT}}(f, \widetilde f, g, \widetilde g, U) = \text{\textit{RT}}(f', \widetilde f', g', \widetilde g', U). \] \end{axiom} \begin{axiom}[Normalization] If $(f,\widetilde f,g, \widetilde g, X) \in \mathcal C(X,Y)$, then \[ c(\text{\textit{RT}}(f,\widetilde f, g, \widetilde g, X)) = L(f,g), \] where $L(f,g)$ is the Lefschetz number of $f$ and $g$. \end{axiom} We will require one additional axiom to make some connections with Nielsen theory, based on a well-known property of the Reidemeister trace: \begin{axiom}[Lift invariance] For any $(f, \widetilde f, g, \widetilde g, U) \in \mathcal C(X,Y)$, and any $\alpha, \beta \in \pi_1(Y)$ we have \[ c(\text{\textit{RT}}(f, \widetilde f, g, \widetilde g, U)) = c(\text{\textit{RT}}(f, \alpha \widetilde f, g, \beta \widetilde g, U)). \] \end{axiom} The four axioms above are enough to demonstrate some relationships between $\text{\textit{RT}}$ and the coincidence index. \begin{prop} \label{coeffs} If $\text{\textit{RT}}$ satisfies the homotopy, additivity, normalization, and lift invariance axioms, then \[ c(\text{\textit{RT}}(f, \widetilde f, g, \widetilde g, U)) = \ind(f,g,U) \] for any $(f, \widetilde f, g, \widetilde g, U) \in \mathcal C(X,Y)$, where $\ind$ denotes the coincidence index (see \cite{gonc05}). \end{prop} \begin{proof} Let $\omega = c\circ \text{\textit{RT}}: \mathcal C(X,Y) \to \mathbb{Z}$. 
By the lift invariance axiom, $\omega$ is independent of the choice of lifts. Thus $\omega$ can be viewed as a function from the set of all admissable \emph{triples} to $\mathbb{Z}$. It is clear that $\omega$ satisfies the three axioms of Theorem \ref{indexuniqueness}, since they are implied by our additivity, homotopy, and normalization axioms for $\text{\textit{RT}}$ (disregarding the lift parameters). Thus $\omega$ is the coincidence index. \end{proof} \begin{prop}\label{coinprop} If $\text{\textit{RT}}$ satisfies the additivity, homotopy, normalization, and lift invariance axioms and $c(\text{\textit{RT}}(f,\widetilde f, g, \widetilde g, U)) \neq 0$, then there is some $\sigma \in \pi_1(Y)$ such that $\sigma \widetilde f$ and $\widetilde g$ have a coincidence on $p_X^{-1}(U)$. \end{prop} \begin{proof} By Proposition \ref{coeffs}, if $c(\text{\textit{RT}}(f,\widetilde f, g, \widetilde g, U)) \neq 0$ then $\ind(f,g,U) \neq 0$, and so $f$ and $g$ have a coincidence on $U$. Let $x \in U$ be this coincidence point, and choose $\widetilde x \in p_X^{-1}(x)$. Then since $\widetilde f$ and $\widetilde g$ are lifts, the points $\widetilde f(\widetilde x)$ and $\widetilde g(\widetilde x)$ will project to the same point of $Y$ by $p_Y$. Thus there is some covering transformation $\sigma$ with $\sigma \widetilde f(\widetilde x) = \widetilde g(\widetilde x)$. \end{proof} The four axioms given above are not sufficient to uniquely characterize the Reidemeister trace in fixed point or coincidence theory. For instance, the function defined by \[ T(f,\widetilde f, g, \widetilde g, U) = \ind(f,g,U)[1], \] where [1] is the Reidemeister class of the trivial element $1 \in \pi_1(Y)$, satisfies all of the axioms above, but provides none of the expected data concerning $\mathcal{R}[f,g]$, and so that function cannot be the Reidemeister trace. An additional axiom is needed, one which somehow indicates the elements of $\mathcal{R}[f,g]$ which are to appear in the Reidemeister trace. 
Our final axiom is a sort of strengthening of Proposition \ref{coinprop}, which specifies the Reidemeister data associated to the coincidence points. \begin{axiom}[Coincidence of lifts] If $[\alpha]$ appears with nonzero coefficient in $\text{\textit{RT}}(f,\widetilde f, g, \widetilde g, U)$, then $\alpha \widetilde f$ and $\widetilde g$ have a coincidence on $p_X^{-1}(U)$. \end{axiom} Any function $\text{\textit{RT}}$ which to a tuple $(f,\widetilde f, g, \widetilde g, U) \in \mathcal C(X,Y)$ associates an element of $\mathbb{Z}\mathcal{R}[f,g]$, and satisfies the additivity, homotopy, normalization, lift invariance, and coincidence of lifts\ axioms we will call a \emph{local Reidemeister trace}. Our main result (Theorem \ref{uniqueness}) states that there is a unique such function. \section{Uniqueness} \label{Suniqueness} Let $(f, \widetilde f, g, \widetilde g, U) \in \mathcal C(X,Y)$, let $\widetilde U = p_X^{-1}(U)$, and let \[ C(\widetilde f, \widetilde g, \widetilde U, [\alpha]) = p_X(\Coin(\alpha \widetilde f, \widetilde g, \widetilde U)).\] For each $\alpha$ we have $C(\widetilde f, \widetilde g, \widetilde U, [\alpha]) \subset \Coin(f,g,U)$, and such coincidence sets are called \emph{coincidence classes}. That these classes are well defined is a consequence of the following lemma, which appears in slightly different language as Lemma 2.3 of \cite{dj93}. \begin{lem}\label{coinlift} Let $\alpha, \beta \in \pi_1(Y)$, maps $f,g:X \to Y$, and an open subset $U \subset X$ be given. Then: \begin{itemize} \item $[\alpha] = [\beta]$ if and only if \[ p_X \Coin(\alpha\widetilde f, \widetilde g, \widetilde U) = p_X \Coin(\beta\widetilde f, \widetilde g, \widetilde U) \] for any lifts $\widetilde f, \widetilde g$. \item If $[\alpha] \neq [\beta]$, then $p_X \Coin(\alpha \widetilde f, \widetilde g, \widetilde U)$ and $p_X \Coin(\beta \widetilde f, \widetilde g, \widetilde U)$ are disjoint for any lifts $\widetilde f,\widetilde g$.
\end{itemize} \end{lem} Given the above notation, the coincidence of lifts\ axiom could be restated as follows: If $[\alpha]$ appears with nonzero coefficient in $\text{\textit{RT}}(f,\widetilde f, g, \widetilde g, U)$, then $C(\widetilde f, \widetilde g, \widetilde U, [\alpha])$ is nonempty. For each coincidence point $x$ in $U$, define $[x_{\widetilde f,\widetilde g}] \in \mathcal{R}[f,g]$ as that class $[\alpha]$ for which $x \in C(\widetilde f, \widetilde g, \widetilde U, [\alpha])$. \begin{thm} \label{weckentrace} If $\text{\textit{RT}}$ is a local Reidemeister trace and $\Coin(f,g,U)$ is a set of isolated points, then \[ \text{\textit{RT}}(f,\widetilde f, g, \widetilde g, U) = \sum_{x \in \Coin(f,g,U)} \ind(f,g,U_x) [x_{\widetilde f,\widetilde g}], \] where $U_x$ is an isolating neighborhood for the coincidence point $x$. \end{thm} \begin{proof} By the additivity property, we need only show that \[ \text{\textit{RT}}(f,\widetilde f, g, \widetilde g, U_x) = \ind(f, g, U_x) [x_{\widetilde f,\widetilde g}]. \] First, we observe that no element of $\mathcal{R}[f,g]$ other than $[x_{\widetilde f, \widetilde g}]$ appears as a term with nonzero coefficient in $\text{\textit{RT}}(f,\widetilde f, g, \widetilde g, U_x)$: If some $[\beta]$ does appear with nonzero coefficient, then we know by the coincidence of lifts axiom that $\beta \widetilde f$ and $\widetilde g$ have a coincidence on $\widetilde U_x = p_X^{-1}(U_x)$. Projection of this coincidence point gives a coincidence point in $U_x$ which necessarily must be $x$, since $x$ is the only coincidence point in $U_x$. Thus $x \in p_X \Coin(\beta \widetilde f, \widetilde g, \widetilde U_x)$, which means that $[\beta] = [x_{\widetilde f, \widetilde g}]$. 
Since $[x_{\widetilde f, \widetilde g}]$ is the only element of $\mathcal{R}[f,g]$ appearing in $\text{\textit{RT}}(f,\widetilde f, g, \widetilde g, U_x)$, we have \[ \text{\textit{RT}}(f,\widetilde f, g, \widetilde g, U_x) = k [x_{\widetilde f, \widetilde g}] \] for some $k\in \mathbb{Z}$ (possibly $k=0$). Proposition \ref{coeffs} says that the coefficient sum must equal the index, and so $k = \ind(f, g, U_x)$ as desired. \end{proof} The above is a strong result for maps whose coincidence sets are isolated. In order to leverage this result for arbitrary maps, we will make use of a technical lemma, a combination of Lemmas 13 and 15 from \cite{stae07}. \begin{lem} \label{mfldisolation} Let $(f,g,U)$ be an admissable triple, and let $V \subset U$ be an open subset containing $\Coin(f,g,U)$ with compact closure $\bar V \subset U$. Then $(f,g,V)$ is admissably homotopic to an admissable triple $(f',g',V)$, where $f'$ and $g'$ have isolated coincidence points in $V$. \end{lem} The above lemma is used to approximate any maps by maps having isolated coincidence points, and we obtain our uniqueness theorem: \begin{thm}\label{uniqueness} There is at most one local Reidemeister trace defined on $\mathcal C(X,Y)$. \end{thm} \begin{proof} Let $\text{\textit{RT}}$ be a local Reidemeister trace, and take $(f, \widetilde f, g, \widetilde g, U) \in \mathcal C(X,Y)$. Then by Lemma \ref{mfldisolation} there is an open subset $V \subset U$ with $\Coin(f,g,U) \subset V$ and maps $f', g'$ with isolated coincidence points such that $(f,g,V)$ is admissably homotopic to $(f',g',V)$. Then by the additivity and homotopy axioms there are lifts $\widetilde f', \widetilde g'$ of $f'$ and $g'$ with \[ \text{\textit{RT}}(f,\widetilde f, g, \widetilde g, U) = \text{\textit{RT}}(f', \widetilde f', g', \widetilde g', V).
\] The coincidence points of $f'$ and $g'$ in $V$ are isolated, so we have \[ \text{\textit{RT}}(f, \widetilde f, g, \widetilde g, U) = \sum_{x\in \Coin(f',g',V)} \ind(f',g', V_x) [x_{\widetilde f', \widetilde g'}], \] where $V_x$ is an isolating neighborhood of the coincidence point $x$. This gives an explicit formula for the computation of $\text{\textit{RT}}(f, \widetilde f, g, \widetilde g, U)$. The only choice made in the computation is of the admissable homotopy to $(f', g', V)$, but any alternative choice must give the same local Reidemeister trace by the homotopy axiom. Thus all local Reidemeister traces must be computed in the same way, giving the same result, which means that there can be only one. \end{proof} \section{Uniqueness in fixed point theory} \label{fixpt} In the special case where $Y=X$ and $g$ is taken to be the identity map $\id:X \to X$, the above method can be used with slight modifications to prove a uniqueness result for the local Reidemeister trace in the fixed point theory of possibly nonorientable manifolds. We have not in this paper made explicit use of the orientability hypothesis, but it is a necessary hypothesis for the uniqueness of the coincidence index in Theorem \ref{indexuniqueness}, which was used in Proposition \ref{coeffs}. An accounting of orientations is needed in coincidence theory to distinguish between points of index $+1$ and index $-1$ (though see \cite{dj93} for an approach to an index for nonorientable manifolds, which does not always give an integer). Orientability is not needed in local fixed point theory, since the notion of an orientation preserving selfmap is well-defined locally, even on a manifold with no global orientation. Thus the uniqueness of the fixed point index in \cite{fps04} does not require orientability, and we will not require it here. 
Let $\mathcal C(X)$ be the set of all tuples of the form $(f,\widetilde f, \widetilde{\imath}, U)$, where $f:X \to X$ is a selfmap, $\widetilde f: \widetilde X \to \widetilde X$ is a lift of $f$, the map $\widetilde{\imath}: \widetilde X \to \widetilde X$ is a lift of the identity map, and $U$ is an open subset of $X$ with compact fixed point set $\Fix(f,U) = \Coin(f,\id,U)$. Let $\mathcal{R}[f] = \mathcal{R}[f,\id]$. Two tuples $(f,\widetilde f, \widetilde{\imath}, U)$ and $(f',\widetilde f',\widetilde{\imath}, U)$ are said to be admissably homotopic if there is some homotopy $F_t$ of $f$ to $f'$ with $\{ (x,t) \mid F_t(x) = x \}$ compact, and $F_t$ lifts to a homotopy of $\widetilde f$ to $\widetilde f'$. Our uniqueness theorem is then: \begin{thm}\label{fixptuniqueness} If $X$ is a (possibly nonorientable) differentiable manifold, then there is a unique function taking an admissable tuple $(f,\widetilde f,\widetilde{\imath}, U)$ to an element of $\mathbb{Z}\mathcal{R}[f]$ satisfying the following axioms: \begin{itemize} \item (Additivity) If $U_1$ and $U_2$ are disjoint open subsets of $U$ with $\Fix(f,U) \subset U_1 \cup U_2$, then \[ \text{\textit{RT}}(f,\widetilde f,\widetilde{\imath}, U) = \text{\textit{RT}}(f, \widetilde f,\widetilde{\imath}, U_1) + \text{\textit{RT}}(f, \widetilde f,\widetilde{\imath}, U_2) \] \item (Homotopy) If $(f,\widetilde f,\widetilde{\imath}, U)$ is admissably homotopic to $(f',\widetilde f',\widetilde{\imath}, U)$, then \[ \text{\textit{RT}}(f,\widetilde f,\widetilde{\imath}, U) = \text{\textit{RT}}(f',\widetilde f',\widetilde{\imath}, U) \] \item (Weak normalization) If $f$ is a constant map, then \[ c (\text{\textit{RT}}(f,\widetilde f,\widetilde{\imath}, U)) = 1\] \item (Lift invariance) For any $\alpha, \beta \in \pi_1(X)$, we have \[ c(\text{\textit{RT}}(f,\widetilde f, \widetilde{\imath}, U)) = c(\text{\textit{RT}}(f, \alpha\widetilde f, \beta \widetilde{\imath}, U)) \] \item (Coincidence of lifts) If $[\alpha]$ appears with 
nonzero coefficient in $\text{\textit{RT}}(f,\widetilde f,\widetilde{\imath}, U)$, then $\alpha \widetilde f$ and $\widetilde{\imath}$ have a coincidence point on $p_X^{-1}(U)$. \end{itemize} \end{thm} \begin{proof} First we note that a result analogous to Proposition \ref{coeffs} can be obtained in the fixed point setting using only the weak normalization axiom: Using the three axioms of \cite{fps04}, which make use of an appropriately weakened normalization axiom, we see that $c\circ \text{\textit{RT}}$ is the fixed point index. Then letting $g = \id$ in the proof of Theorem \ref{weckentrace}, we have that, if $f$ has isolated fixed points, \[ \text{\textit{RT}}(f,\widetilde f, \widetilde{\imath}, U) = \sum_{x \in \Fix(f,U)} \ind(f,U_x) [x_{\widetilde f,\widetilde{\imath}}], \] where $\ind$ denotes the fixed point index, and $U_x$ is an isolating neighborhood for the fixed point $x$. A fixed point version of Lemma \ref{mfldisolation} can be found in Lemmas 4.1 and 3.3 of \cite{fps04}, and the proof of Theorem \ref{uniqueness} can be mimicked to obtain our uniqueness result. \end{proof} Note that the uniqueness in fixed point theory requires only a weakened version of the normalization axiom. A uniqueness result for coincidence theory using only the weak normalization axiom can be obtained if we restrict ourselves to self-maps of a particular (not necessarily orientable) manifold. This would use a proof similar to the above, using results from Section 5 of \cite{stae07}. \section{Existence} \label{existence} The existence of a local Reidemeister trace in fixed point theory for connected finite dimensional locally compact polyhedra is established by Fares and Hart in \cite{fh94}. There, the slightly more general local $H$-Reidemeister trace is defined, called ``the local generalized $H$-Lefschetz number''. An extension of this paper to the mod $H$ theory would not be difficult. 
The fact that the mod $H$ Reidemeister classes are unions of ordinary Reidemeister classes allows the same results to be obtained without substantial modifications. In \cite{fh94}, the additivity and homotopy axioms are proved in Proposition 3.2.9 and Proposition 3.2.8, respectively. A strong version of the lift invariance axiom (see our Theorem \ref{liftcoeffs}) is proved in Proposition 3.2.4. The coincidence of lifts\ axiom is not stated explicitly by Fares and Hart, but is a straightforward consequence of their trace-like definition (if some $[\alpha]$ has nonzero coefficient in the Reidemeister trace, it necessarily comes from some simplex in the covering space containing a fixed point of $\alpha\widetilde f$). A result analogous to the Wecken Trace Theorem (which trivially implies the normalization and weak normalization axioms) is given in Theorem 3.3.1. No Reidemeister trace for coincidence theory, either local or global, has appeared previously in the literature. The proof of Theorem \ref{uniqueness} furnishes the appropriate definition, as follows: Given an admissable tuple $(f,\widetilde f, g, \widetilde g, U)$, we find (by Lemma \ref{mfldisolation}) an admissably homotopic tuple $(f',\widetilde f', g', \widetilde g', V)$ with isolated coincidence points, and we define \[ \text{\textit{RT}}(f,\widetilde f,g, \widetilde g, U) = \sum_{x \in \Coin(f',g',V)} \ind(f',g',V_x) [x_{\widetilde f',\widetilde g'}], \] where $V_x$ is an isolating neighborhood for the coincidence point $x$. The above is well defined provided that it is independent of the choice of the admissably homotopic tuple. 
This is ensured by the following lemma: \begin{lem} If $(f,\widetilde f, g, \widetilde g, U)$ and $(f', \widetilde f', g', \widetilde g', U)$ are admissably homotopic tuples with isolated coincidence points, then \[ \sum_{x \in \Coin(f,g,U)} \ind(f,g,U_x)[x_{\widetilde f, \widetilde g}] = \sum_{x' \in \Coin(f',g',U)} \ind(f',g',U_{x'}) [x'_{\widetilde f',\widetilde g'}], \] where $U_x$ is an isolating neighborhood for the coincidence point $x \in \Coin(f,g,U)$, and $U_{x'}$ is an isolating neighborhood of the coincidence point $x' \in \Coin(f',g',U)$. \end{lem} \begin{proof} We define the index of a coincidence class $C$ of $f$ and $g$ as follows: \[ \ind C = \sum_{x \in C} \ind(f,g,U_x). \] A class is called \emph{essential} if its index is nonzero. Since $f$ and $g$ are homotopic to $f'$ and $g'$, we have $\mathcal{R}[f,g] = \mathcal{R}[f',g']$. Call this common set of Reidemeister classes $R$. Letting $\widetilde U = p_X^{-1}(U)$, the statement of the Lemma is equivalent to \[ \sum_{[\alpha] \in R} \ind C(\widetilde f,\widetilde g, \widetilde U, [\alpha]) [\alpha] = \sum_{[\alpha] \in R} \ind C(\widetilde f',\widetilde g', \widetilde U, [\alpha]) [\alpha], \] and we need only show that $\ind C(\widetilde f,\widetilde g,\widetilde U,[\alpha]) = \ind C(\widetilde f',\widetilde g', \widetilde U, [\alpha])$ for any $[\alpha]$. We will prove this using Brooks's notion of homotopy-relatedness of coincidence classes, exposited in detail in \cite{broo67} and briefly in \cite{bb69}. Let $F_t, \widetilde F_t, G_t, \widetilde G_t$ be homotopies realizing the admissable homotopy of $(f,\widetilde f, g, \widetilde g, U)$ and $(f',\widetilde f', g', \widetilde g', U)$. Two coincidence points $x \in \Coin(f,g,U)$ and $x' \in \Coin(f',g',U)$ are \emph{$(F_t,G_t)$--related} if there is some path $\gamma(t)$ in $X$ connecting $x$ to $x'$ such that the paths $F_t(\gamma(t))$ and $G_t(\gamma(t))$ are homotopic in $Y$ as paths with fixed endpoints. 
Two coincidence classes are related if at least one point of one is related to at least one point of the other. Theorem II.22 of \cite{broo67} shows that the notion of $(F_t,G_t)$-relatedness gives a bijective correspondence between the essential coincidence classes of $(f,g)$ and those of $(f',g')$. Theorem IV.24 of \cite{broo67} further shows that any two such related classes will have the same index. What remains is an elementary argument using covering-space theory. Let $C = C(\widetilde f, \widetilde g, \widetilde U, [\alpha])$, and let $C'$ be the unique coincidence class of $(f',g')$ which is $(F_t,G_t)$-related to $C$. We need only show that $C' = C(\widetilde f', \widetilde g', \widetilde U, [\alpha])$, and thus (since homotopy-relatedness preserves the index) that $\ind C(\widetilde f, \widetilde g, \widetilde U, [\alpha]) = \ind C(\widetilde f', \widetilde g', \widetilde U, [\alpha])$. Choose a point $x \in C$, and let $x'$ be a point in $C'$ which is $(F_t,G_t)$ related to $x$. Then there is some path $\gamma$ in $X$ from $x$ to $x'$ with $F_t(\gamma(t))$ homotopic to $G_t(\gamma(t))$. Let $\widetilde x$ be some point with $p_X(\widetilde x) = x$ and $\alpha \widetilde f (\widetilde x) = \widetilde g(\widetilde x)$. We can lift $\gamma$ to a path $\widetilde \gamma$ in $\widetilde X$ starting at $\widetilde x$. Since $F_t(\gamma(t))$ is homotopic to $G_t(\gamma(t))$, we will have $\widetilde F_t(\widetilde \gamma(t))$ homotopic to $\widetilde G_t(\widetilde \gamma(t))$, which in particular means that they will have the same endpoint. This common endpoint is $\alpha \widetilde f'(\widetilde \gamma(1)) = \widetilde g'(\widetilde \gamma(1))$, which must project by $p_X$ to the point $x'$. Thus $x' \in p_X(\Coin(\alpha \widetilde f', \widetilde g', \widetilde U))$, and so $C' = C(\widetilde f', \widetilde g', \widetilde U, [\alpha])$, as desired. 
\end{proof} We have thus produced a meaningful definition of a local coincidence Reidemeister trace on orientable differentiable manifolds of the same dimension, and the proof above suffices to give: \begin{thm}[Wecken Coincidence Trace Theorem]\label{wtt} Let $\text{\textit{RT}}$ be the unique local coincidence Reidemeister trace satisfying our five axioms. Then for any $(f, \widetilde f, g, \widetilde g, U) \in \mathcal C(X,Y)$ with $\widetilde U = p_X^{-1}(U)$, we have \[ \text{\textit{RT}}(f,\widetilde f, g, \widetilde g, U) = \sum_{[\alpha] \in \mathcal{R}[f,g]} \ind C(\widetilde f, \widetilde g, \widetilde U, [\alpha]) [\alpha]. \] \end{thm} In conclusion we prove a stronger form of the lift invariance axiom, a coincidence version of a well-known property of the Reidemeister trace. \begin{thm} \label{liftcoeffs} Let $\text{\textit{RT}}$ be the unique local coincidence Reidemeister trace satisfying our five axioms. If \[ \text{\textit{RT}}(f,\widetilde f, g, \widetilde g, U) = \sum_{[\sigma] \in \mathcal{R}[f,g]} k_{[\sigma]} [\sigma] \] for $k_{[\sigma]} \in \mathbb{Z}$, then for any $\alpha, \beta \in \pi_1(Y)$, we have \[ \text{\textit{RT}}(f, \alpha \widetilde f, g, \beta \widetilde g, U) = \sum_{[\sigma] \in \mathcal{R}[f,g]} k_{[\sigma]} [\beta \sigma \alpha^{-1}]. \] \end{thm} \begin{proof} Letting $\widetilde U = p_X^{-1}(U)$, by Theorem \ref{wtt} we know that $k_{[\sigma]} = \ind C(\widetilde f, \widetilde g, \widetilde U, [\sigma])$. Then we have \[ C(\alpha \widetilde f, \beta \widetilde g, \widetilde U, [\sigma]) = p_X \Coin(\sigma \alpha \widetilde f, \beta \widetilde g, \widetilde U) = p_X \Coin(\beta^{-1} \sigma \alpha \widetilde f, \widetilde g, \widetilde U) = C(\widetilde f, \widetilde g, \widetilde U, [\beta^{-1} \sigma \alpha]), \] and thus $\ind C(\alpha \widetilde f, \beta \widetilde g, \widetilde U, [\sigma]) = k_{[\beta^{-1} \sigma \alpha]}$. 
Now by Theorem \ref{wtt} again, we have \begin{align*} \text{\textit{RT}}(f,\alpha \widetilde f, g, \beta \widetilde g, U) &= \sum_{[\sigma] \in \mathcal{R}[f,g]} \ind C(\alpha \widetilde f, \beta\widetilde g, \widetilde U, [\sigma]) [\sigma] = \sum_{[\sigma] \in \mathcal{R}[f,g]} k_{[\beta^{-1} \sigma \alpha]} [\sigma] \\ &= \sum_{[\gamma] \in \mathcal{R}[f,g]} k_{[\gamma]} [\beta \gamma \alpha^{-1}], \end{align*} as desired. \end{proof}
\section{Introduction} Let $\{X_i\}$ be a sequence of independent and identically distributed (i.i.d.) real-valued random variables. We denote by $S_n$ the partial sums of $X_i$, i.e. $S_0=0$, $S_n = X_1+\cdots+ X_n$. In this paper we are interested in the situation when $X_1$ has negative drift, but simultaneously ${\mathbb{P}}[X_1 >0]>0$. Our primary objective is to describe the precise large deviations of the linearly normalized first passage time $$\tau_u = \inf \{n:S_n > u \},$$ as $u$ tends to infinity. The stopping time $\tau_u$ arises in various contexts in probability, e.g. in risk theory, sequential statistical analysis, and queueing theory. We refer to Siegmund \cite{S} and Lalley \cite{Lalley} for a comprehensive bibliography. A celebrated result concerning $\tau_u$, which plays a major role in ruin theory, is due to Cram\'er, who established the asymptotic estimate of the ruin probability \begin{equation} \label{eq: cramer} {\mathbb{P}}[\tau_u<\infty] \sim C e^{-\alpha_0 u}, \qquad \mbox{as } u\to\infty, \end{equation} for some parameter $\alpha_0$ that will be described below (see Cram\'er \cite{C} and Feller \cite{F}). Our aim is to describe the probability that at a given time the partial sums $S_n$ first cross a linear boundary $\rho n$. This problem was studied e.g. by Siegmund \cite{S} and continued by Lalley \cite{Lalley}. To the best of our knowledge, all the known results concern probabilities of the form ${\mathbb{P}}[\tau_u< u/\rho ]$ or ${\mathbb{P}}[u/\rho < \tau_u <\infty]$, see Lalley \cite{Lalley} (see also Arfwedson \cite{Arf} and Asmussen \cite{Asmussen} for similar results related to the compound Poisson risk model). In this paper we describe the pointwise behavior of $\tau_u$, i.e. the asymptotic behavior of ${\mathbb{P}}\big[\tau_u = \lfloor u/\rho \rfloor \big]$ as $u$ tends to infinity. \section{Statement of the results} Our main result will be expressed in terms of the moment and cumulant generating functions of $X_1$, i.e. 
\[ \lambda(s) = \mathbb{E}[e^{sX_1}] \quad \text{and} \quad \Lambda(s) = \log \lambda(s), \] respectively. We assume that $\lambda(s)$ exists for $s$ in the interval $D = [0, s_0)$ for some $s_0 > 0$. It is well known that both $\lambda$ and $\Lambda$ are smooth and convex on $D$. Throughout the paper we assume that there are $\alpha \in D$ and $\xi > 0$ such that \begin{equation}\label{eq:1.1} \rho= \Lambda'(\alpha) > 0 \end{equation} and \begin{equation*} \lambda(\alpha + \xi) < \infty. \end{equation*} Observe that \eqref{eq:1.1} implies that $\mathbb{P}\left[X_1 > 0 \right] > 0$. Recall the convex conjugate (or the Fenchel-Legendre transform) of $\Lambda$ defined by $$ \Lambda^*(x) = \sup_{s\in {\mathbb{R}}}\{sx - \Lambda(s)\}, \quad x\in{\mathbb{R}}. $$ This rate function appears in the study of large deviation problems for random walks. Its various properties can be found in Dembo, Zeitouni \cite{DZ}. Given $\a < s_0$ and $\rho$ as in \eqref{eq:1.1}, we consider $$ \overline \a = \frac 1{\rho}\; \Lambda^*(\rho). $$ An easy calculation shows \[ \overline{\alpha} = \alpha - \frac{\Lambda(\alpha)}{\Lambda'(\alpha)}. \] The parameter $\overline \a$ arises in the classical large deviations theory for random walks. Petrov's theorem and the Bahadur-Rao theorem state that \begin{equation}\label{eq: petrov} {\mathbb{P}}[S_n > n\rho] \sim C \; \frac{e^{-\overline \a n \rho}}{\sqrt n} \qquad \mbox{as } n \to \infty, \end{equation} (see Petrov \cite{Petrov} and Dembo, Zeitouni \cite{DZ}). As we will see below, $\overline \a$ will also play a crucial role in our result. This parameter has a geometric interpretation: the tangent line to $\Lambda$ at the point $\alpha$ intersects the $x$-axis at $\overline{\alpha}$. See Figure \ref{fig} below. 
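The easy calculation behind the identity $\overline{\alpha} = \alpha - \Lambda(\alpha)/\Lambda'(\alpha)$ can be recorded explicitly; the following verification uses only the definitions of $\Lambda^*$ and $\overline \a$ given above.

```latex
% The supremum defining Lambda^*(rho) is attained at s = alpha,
% since Lambda is convex and differentiable with Lambda'(alpha) = rho.
\[
  \Lambda^*(\rho) = \alpha\rho - \Lambda(\alpha),
  \qquad\text{so that}\qquad
  \overline{\alpha} = \frac{\Lambda^*(\rho)}{\rho}
    = \alpha - \frac{\Lambda(\alpha)}{\rho}
    = \alpha - \frac{\Lambda(\alpha)}{\Lambda'(\alpha)} .
\]
```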
\begin{figure}[!h] \centering \caption{$\Lambda(s) = \log\mathbb{E}e^{sX_1}$}\label{fig} \includegraphics{img.pdf} \end{figure} We also introduce parameters $k_u$ and $\alpha_{min}$ defined by \[ \alpha_{min} = \argmin_{s} \Lambda(s) \quad \text{and } \quad k_u = \frac{u}{\rho} . \] \medskip Now we are ready to state our main result. \begin{thm}\label{th1} Assume that $\{X_i\}$ is an i.i.d. sequence such that the law of $X_1$ is nonlattice, ${\mathbb{E}} X_1<0$ and $\rho = \Lambda'(\a)>0$ for some $\a<s_0$. Then \begin{equation*} \begin{split} \mathbb{P}\left[\tau_u =\left \lfloor{k_u }\right \rfloor \right] = C(\alpha) \lambda(\alpha)^{-\Theta (u)} \; \frac{e^{-u\overline{\alpha}}}{\sqrt{u}} \; (1+o(1)) \quad \text{as} \quad u \to \infty \end{split} \end{equation*} for some constant $C(\a)>0$ and $\Theta (u) = k_u - \left \lfloor{k_u}\right \rfloor$. \end{thm} Notice that the above formula gives the largest asymptotics when $\a=\a_0$ for $\a_0$ such that $\Lambda(\a_0)=0$. Then $\overline \a_0 = \a_0$. For all the other parameters $\a$ we have $\overline \a > \overline \a_0$. The parameter $\a_0$ arises in Cram\'er's formula \eqref{eq: cramer}. Similar results were obtained by Lalley, who proved that for $\alpha$ such that $\Lambda(\alpha) > 0$ we have \begin{equation*} \mathbb{P} \left[\tau_u \leq k_u \right] = {C_1(\alpha) \lambda(\alpha)^{-\Theta (u)}}\; \frac{e^{-u \overline{\alpha}}}{\sqrt{u}} \; (1+o(1)) \quad \text{as} \quad u \to \infty \end{equation*} and for $\alpha$ such that $\Lambda(\alpha) < 0$ \begin{equation*} \mathbb{P} \left[\tau_u > k_u \right] = {C_2(\alpha) \lambda(\alpha)^{1-\Theta (u)}} \; \frac{ e^{-u \overline{\alpha}}}{\sqrt{u}}\; (1+o(1)) \quad \text{as} \quad u \to \infty, \end{equation*} for some known constants $C_1(\alpha)$, $C_2(\alpha)$ depending only on $\alpha$ (see Lalley \cite{Lalley}, Theorem 5). Notice that the function $\Theta(u)$ appears in all the formulas above only for purely technical reasons. 
It reflects the fact that $\tau_u$ attains only integer values, whereas $k_u$ is continuous. Thus the function $\Theta$ is needed only to adjust both expressions for noninteger values of $k_u$. Below we will omit this point and, without further mention, we assume that $k_u$ is an integer. \section{Auxiliary results} The proof of Theorem \ref{th1} is based on Petrov's theorem and the Bahadur-Rao theorem describing precise large deviations for random walks \eqref{eq: petrov}. We apply here techniques which were recently used by Buraczewski et al. \cite{BCDZ, BDZ} to study the problem of the first passage time in the more general context of perpetuities. They obtained results similar to those described above, but in our context the proof is essentially simpler and the final results are stronger. Here we need a reinforced version of \eqref{eq: petrov}, which is both uniform and allows one to slightly perturb the parameters. As a direct consequence of Petrov's theorem \cite{Petrov} the following result was proved in \cite{BCDZ}: \begin{lem}\label{lem: petrov} Assume that the law of $X_1$ is nonlattice and that $\rho$ satisfies $\mathbb{E} X_1 < \rho< A_0$. Choose $\alpha$ such that $\Lambda'(\alpha) = \rho $. If $\{\delta_n\}$, $\{j_n\}$ are two sequences satisfying \begin{equation}\label{eq10} \max \{\sqrt{n} \left| \delta_n \right|, j_n/\sqrt{n} \} \leq \overline{\delta}_n \to 0, \end{equation} then \begin{equation*} \mathbb{P}\left[S_{n-j_n} > n\left(\rho + \delta_n\right) \right] = C(\alpha)\frac{e^{-\overline{\alpha} n\rho}}{\sqrt{n}} e^{ - \alpha n \delta_n} \lambda(\alpha)^{-j_n }(1+o(1)) \quad \quad \text{as}\ n \to \infty, \end{equation*} uniformly with respect to $\rho$ in the range \begin{equation*} \mathbb{E}X_1 + \epsilon \leq \rho \leq A_0 - \epsilon, \end{equation*} and for all $\delta_n$, $j_n$ as in \eqref{eq10}. \end{lem} Let us define $M_n = \max_{1 \leq k \leq n}S_k$ and $S_{i}^n = S_n - S_{n-i} = X_{n - i +1}+\cdots+X_{n}$ for $0 \leq i \leq n$. 
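With $\delta_n = 0$ and $j_n = 0$, Lemma \ref{lem: petrov} reduces to the Bahadur-Rao asymptotics \eqref{eq: petrov}, whose constant is classically $C(\alpha) = 1/(\alpha\sqrt{2\pi\Lambda''(\alpha)})$ in the nonlattice case (see \cite{DZ}). As an illustration, not part of the argument, the approximation can be checked numerically against the exact tail for Gaussian increments, where $\Lambda(s) = \mu s + \sigma^2 s^2/2$ is explicit; the parameters $\mu = -1$, $\sigma = 1$, $\rho = 1/2$ below are an arbitrary choice made for this sketch.

```python
import math

# Sanity check of the Bahadur-Rao asymptotics for Gaussian increments
# X_1 ~ N(mu, sigma^2) (illustrative parameters, not from the paper):
# Lambda(s) = mu*s + sigma^2*s^2/2, so alpha, Lambda*(rho), Lambda''(alpha)
# are all available in closed form.
mu, sigma, rho = -1.0, 1.0, 0.5

alpha = (rho - mu) / sigma**2          # solves Lambda'(alpha) = mu + sigma^2*alpha = rho
rate = (rho - mu)**2 / (2 * sigma**2)  # Lambda*(rho)
second = sigma**2                      # Lambda''(alpha)

def exact_tail(n):
    # P[S_n > n*rho] exactly, since S_n ~ N(n*mu, n*sigma^2)
    x = math.sqrt(n) * (rho - mu) / sigma
    return 0.5 * math.erfc(x / math.sqrt(2))

def bahadur_rao(n):
    # C(alpha) * exp(-n*Lambda*(rho)) / sqrt(n), C = 1/(alpha*sqrt(2*pi*Lambda''))
    return math.exp(-n * rate) / (alpha * math.sqrt(2 * math.pi * n * second))

for n in (25, 100, 400):
    print(n, exact_tail(n) / bahadur_rao(n))
```

For these parameters the ratio of the exact tail to the approximation behaves like $1 - O(1/n)$, approaching $1$ as $n$ grows.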
The following Lemma will play a crucial role in the proof. \begin{lem}\label{l1} Let $L$ and $M$ be two integers such that $L \geq 1$ and $-1 \leq M \leq L$. For any $ \gamma \geq 0$, $\alpha_{min} < \beta < \alpha$ and sufficiently large $u$, the following holds \begin{equation*} \begin{split} \mathbb{P}\left[M_{k_u - L} > u, S_{k_u-M} > u - \gamma \right] & \leq C(\alpha, \beta) e^{\gamma \beta} \lambda(\alpha)^{-L} \lambda(\beta)^{L - M} \frac{e^{-u\overline{\alpha}}}{\sqrt{ u}}, \end{split} \end{equation*} where $C(\a,\beta)$ is some constant depending on $\a$ and $\beta$. \end{lem} \begin{proof} We have \begin{equation*} \begin{split} \mathbb{P}\left[M_{k_u - L} > u, S_{k_u-M} > u - \gamma \right] & \leq \sum_{i=0}^{k_u -1 -L} \mathbb{P}\left[ S_{k_u-M} > u - \gamma , S_{k_u - i - L} > u\right]. \end{split} \end{equation*} Denote $\delta = \frac{\lambda(\beta)}{\lambda(\alpha)} < 1$. To estimate the above series, we divide the set of indices into two sets. \noindent {\sc Case 1.} First we consider $i$ satisfying $i > K \log k_u $ for some constant $K$ such that $\delta^{K \log k_u} < 1/u$. Notice that for any $u$ we have \begin{equation*} e^{-u \overline{\alpha}} = e^{-u\alpha} \lambda({\alpha})^{k_u}. \end{equation*} Then, for any such $i$ we write \begin{equation*} \begin{split} \mathbb{P}\!\left[ S_{k_u-M} \!>\! u \!- \!\gamma , S_{k_u - i - L} \!>\! u\right] & \leq \sum_{m=0}^{\infty} \mathbb{P}\!\left[S_{k_u-M} \!>\! u \!-\! \gamma ,u\!+\!m \!<\! S_{k_u - i - L} \!\leq\! u \!+\! m\!+\!1\right]\\ & = \sum_{m=0}^{\infty} \mathbb{P}\!\left[S_{k_u-i-L} \!+\! S_{L+i-M}^{k_u-M} \!>\! u \!-\! \gamma, u \!+\! m \!<\! S_{k_u - i - L} \!\leq\! u \!+\! m\!+\!1\right] \\ & \leq \sum_{m=0}^{\infty} \mathbb{P}\!\left[ S_{L+i-M}^{k_u-M} \!>\! -\gamma\!-\!(m\!+\!1)\right] \mathbb{P}\!\left[S_{k_u - i - L} \!>\! 
u\!+\!m\right] \\ & \leq \sum_{m=0}^{\infty} e^{\beta \gamma} e^{\beta(m+1)} \lambda(\beta)^{L+i-M} e^{-u\alpha} e^{-\alpha m} \lambda(\alpha)^{k_u - i - L}\\ & \leq C(\alpha, \beta) e^{\beta \gamma} \delta^{i} e^{-u\overline{\alpha}} \lambda(\alpha)^{-L} \lambda(\beta)^{L-M}, \end{split} \end{equation*} where in the third line we used Markov's inequality with functions $e^{\beta x}$ and $e^{\alpha x}$. Summing over $i$ we obtain \begin{equation*} \begin{split} \sum_{K \log k_u < i \leq k_u -1 - L} \!\!\!\mathbb{P}\left[ S_{k_u-M} > u-\gamma , S_{k_u - i - L} > u\right] & \leq C(\alpha, \beta) \sum_{ i > K \log k_u}\!\! e^{\beta \gamma} \delta^{i} e^{-u\overline{\alpha}} \lambda(\alpha)^{-L} \lambda(\beta)^{L-M} \\ & \leq C(\alpha, \beta) e^{\beta \gamma} \delta^{K \log k_u} e^{-u\overline{\alpha}} \lambda(\alpha)^{-L} \lambda(\beta)^{L-M} \\ & \leq C(\alpha, \beta) e^{\beta \gamma} \frac{e^{-u\overline{\alpha}}}{ u} \lambda(\alpha)^{-L} \lambda(\beta)^{L-M}. \end{split} \end{equation*} \noindent {\sc Case 2.} Now consider $ i \leq K \log k_u$. Let $N$ be a constant such that $-\alpha N +1 < 0$ for $\Lambda(\alpha) \geq 0$, and $-\alpha N +1 - \Lambda(\alpha)K < 0$ for $\Lambda(\alpha) < 0$. We have \begin{equation*} \begin{split} \mathbb{P}\left[ S_{k_u-M} > u-\gamma , S_{k_u - i - L} > u\right] & \leq \mathbb{P}\left[ S_{k_u - i -L} \geq u + N \log k_u \right] \\ &\quad + \mathbb{P}\left[S_{k_u-M} > u-\gamma, u< S_{k_u - i -L} < u +N \log k_u\right]\\ & = P_1 + P_2 \end{split} \end{equation*} We estimate the first term $P_1$ using Markov's inequality with the function $e^{\alpha x}$ and we obtain \begin{equation*} \begin{split} P_1 & \leq e^{-u\alpha} k_u^{-\alpha N} \lambda(\alpha)^{k_u-i-L} = e^{-u\overline{\alpha}} k_u^{-\alpha N} \lambda(\alpha)^{-i} \lambda(\alpha)^{-L} \leq C(\alpha) e^{-u\overline{\alpha}} \frac{1}{u} k_u^{-\alpha N + 1} e^{-i \Lambda(\alpha)} \lambda(\alpha)^{-L} \\ & \leq C(\alpha) e^{-u\overline{\alpha}} \frac{1}{ u} \lambda(\alpha)^{-L}. 
\end{split} \end{equation*} To estimate $P_2$ we apply Lemma \ref{lem: petrov} and again Markov's inequality with function $e^{\beta x}$. \begin{equation*} \begin{split} P_2 & =\mathbb{P}\left[S_{k_u-i-L} + S_{L+i-M}^{k_u-M} > u-\gamma, u< S_{k_u - i -L} < u + N \log k_u \right] \\ & \leq \sum_{m=0}^{\left \lceil{N \log k_u - 1}\right \rceil } \mathbb{P}\left[S_{k_u-i-L} + S_{L+i-M}^{k_u-M} > u-\gamma, u + m < S_{k_u - i -L} < u +m+1\right]\\ & \leq \sum_{m=0}^{\left \lceil{N \log k_u - 1}\right \rceil } \mathbb{P}\left[S_{k_u-i-L} > u+m \right] \mathbb{P}\left[S_{L+i-M}^{k_u-M} > -\gamma -(m+1) \right]\\ & \leq \sum_{m=0}^{\left \lceil{N \log k_u - 1}\right \rceil } C(\alpha) \frac{e^{-u\overline{\alpha}}}{\sqrt{k_u}} \lambda(\alpha)^{-i-L} e^{-\alpha m} e^{\beta \gamma} e^{\beta (m+1)} \lambda(\beta)^{i+L-M}\\ & \leq \sum_{m=0}^{\left \lceil{N \log k_u - 1}\right \rceil } C(\alpha, \beta) \frac{e^{-u\overline{\alpha}}}{\sqrt{k_u}} e^{(\beta-\alpha) m} \delta^i e^{\beta \gamma} \lambda(\alpha)^{-L} \lambda(\beta)^{L-M} \\ & \leq C(\alpha, \beta) \frac{e^{-u\overline{\alpha}}}{\sqrt{u}} \delta^i e^{\beta \gamma} \lambda(\alpha)^{-L} \lambda(\beta)^{L-M}. \end{split} \end{equation*} Now we sum over $i$ \begin{equation*} \begin{split} \sum_{ i \leq K \log k_u} \mathbb{P}[ S_{k_u-M} > u - \gamma &, S_{k_u - i - L} > u] \\ &\leq \sum_{ i \leq K \log k_u} \!\!\! \left( P_1 + P_2 \right) \\ & \leq \sum_{ i \leq K \log k_u} \!\!\! \left( C(\alpha) \frac{e^{-u\overline{\alpha}}}{u} \lambda(\alpha)^{-L} + C(\alpha, \beta) \frac{e^{-u\overline{\alpha}}}{\sqrt{u}} \delta^i e^{\beta \gamma} \lambda(\alpha)^{-L} \lambda(\beta)^{L-M} \right) \\ & \leq C(\alpha) e^{-u\overline{\alpha}} \frac{ \log k_u}{u} \lambda(\alpha)^{-L} + C(\alpha, \beta) \frac{e^{-u\overline{\alpha}}}{\sqrt{ u}} e^{\beta \gamma} \lambda(\alpha)^{-L} \lambda(\beta)^{L-M} . 
\end{split} \end{equation*} Combining both cases we end up with \begin{equation*} \begin{split} \mathbb{P}\left[M_{k_u - L} > u, S_{k_u-M} > u - \gamma \right] & \leq C(\alpha, \beta) e^{\beta \gamma} \frac{e^{-u\overline{\alpha}}}{u} \lambda(\alpha)^{-L} \lambda(\beta)^{L-M} + C(\alpha) e^{-u\overline{\alpha}} \frac{ \log k_u}{u} \lambda(\alpha)^{-L}\\ &\quad + C(\alpha, \beta) \frac{e^{-u\overline{\alpha}}}{\sqrt{u}} e^{\beta \gamma} \lambda(\alpha)^{-L} \lambda(\beta)^{L-M} \\ & \leq C(\alpha, \beta) \frac{e^{-u\overline{\alpha}}}{\sqrt{u}} e^{\beta \gamma} \lambda(\alpha)^{-L} \lambda(\beta)^{L-M}. \end{split} \end{equation*} \end{proof} \section{Lower and upper estimates} The goal of this section is to prove the following \begin{prop}\label{prop1} There is a constant $C > 0$ such that for large $u$ \begin{equation}\label{pr1eq1} \begin{split} \frac {1} C \; \frac{e^{-u\overline{\alpha}} }{\sqrt{u}} \le \mathbb{P}\left[\tau_u = k_u + 1 \right] \le C \; \frac{e^{-u\overline{\alpha}} }{\sqrt{u}}. \end{split} \end{equation} \end{prop} \begin{proof} First, observe that the upper estimate is an immediate consequence of Petrov's theorem (Lemma \ref{lem: petrov}) used with $\delta_n = 0$ and $j_n = -1$. Indeed, we have \begin{equation*} \mathbb{P}\left[\tau_u = k_u + 1\right] = \mathbb{P}\left[M_{k_u} \leq u, S_{k_u+1} > u\right] \leq \mathbb{P}\left[ S_{k_u+1} > u\right] \leq C(\alpha) \frac{e^{-u\overline{\alpha}}}{\sqrt{u}}. \end{equation*} For the lower estimate we write, for any positive $\gamma$ and any positive integer $L$, \begin{equation*} \begin{split} \mathbb{P}\left[\tau_u = k_u + 1\right] = \mathbb{P}\left[M_{k_u} \leq u, S_{k_u+1} > u\right] \geq \mathbb{P}\left[M_{k_u} \leq u, S_{k_u+1} > u, S_{L+1}^{k_u+1} > \gamma \right] . 
\end{split} \end{equation*} For any $0 < r < \gamma$ one has \begin{equation*} \begin{split} \mathbb{P}\left[M_{k_u} \leq u, S_{k_u+1} > u, S_{L+1}^{k_u+1} > \gamma \right] \geq \mathbb{P}\left[M_{k_u} \leq u, u - \gamma < S_{k_u-L} < r+u-\gamma, S_{L+1}^{k_u+1} > \gamma \right]. \end{split} \end{equation*} Let \!$M_i^n \!\!=\!\! \max(0,S_{1}^{n-i+1},S_{2}^{n-i+2},S_{3}^{n-i+3},...,S_{i-1}^{n-1},S_{i}^{n})$.\! Note that $M_{k_u} \!\!\!\!=\!\max(\!M_{k_u - L},S_{k_u - L}\!+\! M_L^{k_u})$. Hence we have \begin{equation*} \begin{split} \mathbb{P}[M_{k_u} \leq u, u - \gamma & < S_{k_u-L} < r+u-\gamma, S_{L+1}^{k_u +1} > \gamma] \\ & = \mathbb{P}\left[M_{k_u-L} \leq u,S_{k_u - L} + M_{L}^{k_u} \leq u, u - \gamma < S_{k_u-L} < r+u-\gamma, S_{L+1}^{k_u +1} > \gamma \right]\\ & \geq \mathbb{P}\left[M_{k_u-L} \leq u,M_{L}^{k_u} \leq -r+\gamma, u -\gamma < S_{k_u-L} < r+u-\gamma, S_{L+1}^{k_u +1} > \gamma \right]. \end{split} \end{equation*} Finally, we combine the above, use the independence of $(M_{L}^{k_u}, S_{L+1}^{k_u +1})$ and $(M_{k_u-L}, S_{k_u-L})$ and the identity ${\mathbb{P}[A \cap B] = \mathbb{P}[A] -\mathbb{P}[A \cap B^c]}$ to obtain \begin{equation}\label{p3} \begin{split} \mathbb{P}\!\left[\tau_u \!=\! k_u \!+\! 1\right] & \geq \mathbb{P}\!\left[M_{k_u-L} \leq u,M_{L}^{k_u} \leq -r\!+\!\gamma, u \!-\!\gamma < S_{k_u-L} < r\!+\!u\!-\!\gamma, S_{L+1}^{k_u +1} > \gamma \right]\\ & = \mathbb{P}\!\left[M_{k_u-L} \leq u, u\!-\!\gamma < S_{k_u-L} < r\!+\!u\!-\!\gamma \right] \mathbb{P}\!\left[ M_{L}^{k_u} \leq -r\!+\!\gamma, S_{L+1}^{k_u +1} > \gamma \right] \\ & = \mathbb{P}\!\left[M_{L}^{k_u} \leq -r\!+\!\gamma , S_{L+1}^{k_u +1} > \gamma \right]\\ & \quad \times \left( \mathbb{P}\!\left[u\!-\!\gamma < S_{k_u-L} < r\!+\!u\!-\!\gamma\right] - \mathbb{P}\!\left[M_{k_u-L} > u, u\!-\!\gamma < S_{k_u-L} < r\!+\!u\!-\!\gamma\right] \right). 
\end{split} \end{equation} Lemma \ref{lem: petrov} gives the asymptotics \begin{equation}\label{p1} \begin{split} \mathbb{P}\left[u-\gamma < S_{k_u-L} < r+u-\gamma\right] & \sim C(\alpha, r) e^{\alpha \gamma} \lambda(\alpha)^{-L} \frac{e^{-u\overline{\alpha}}}{\sqrt{u}} \quad \quad \text{as } u \to \infty. \end{split} \end{equation} Using Lemma \ref{l1} with $M = L$ we obtain \begin{equation}\label{p2} \begin{split} \mathbb{P}\left[M_{k_u - L} > u, u-\gamma < S_{k_u-L} < r+u-\gamma \right] & \leq \mathbb{P}\left[M_{k_u - L} > u, S_{k_u-L} > u-\gamma \right] \\ & \leq C(\alpha, \beta) \frac{e^{-u\overline{\alpha}}}{\sqrt{u}} e^{\beta \gamma} \lambda(\alpha)^{-L}, \end{split} \end{equation} where $\alpha_{min} < \beta < \alpha$. From \eqref{p3}, \eqref{p1} and \eqref{p2} we have \begin{equation*} \begin{split} \mathbb{P}\!\left[\tau_u \!=\! k_u \!+\! 1\right] & \geq \mathbb{P}\!\left[ S_{L+1}^{k_u +1} \!>\! \gamma, M_{L}^{k_u} \leq -r \!+\! \gamma\right]\\ & \quad \times \left( \mathbb{P}\!\left[u\!-\!\gamma < S_{k_u-L} < r\!+\!u\!-\!\gamma \right] - \mathbb{P}\!\left[M_{k_u-L} > u, u\!-\!\gamma < S_{k_u-L} < r\!+\!u\!-\!\gamma\right] \right) \\ &\geq \mathbb{P}\!\left[ S_{L+1}^{k_u +1} \!>\! \gamma, M_{L}^{k_u} \leq -r\!+\!\gamma \right]\! \left(\! C(\alpha, r) \lambda(\alpha)^{-L} \frac{e^{-u\overline{\alpha}}}{\sqrt{u}} e^{\alpha \gamma} \!- C(\alpha, \beta) \frac{e^{-u\overline{\alpha}}}{\sqrt{u}} e^{\beta \gamma} \lambda(\alpha)^{-L} \!\right)\\ & = \mathbb{P}\left[ S_{L+1}^{k_u +1} \!>\! \gamma, M_{L}^{k_u} \leq -r+\gamma\right]\lambda(\alpha)^{-L} \frac{e^{-u\overline{\alpha}}}{\sqrt{u}} \left( C(\alpha, r) e^{\alpha \gamma} - C(\alpha, \beta) e^{\beta \gamma} \right). \end{split} \end{equation*} Notice that $\left(M_{i}^{n}, S_{i+1}^{n+1} \right) \,{\buildrel d \over =}\, \left(M_i, S_{i+1} \right)$. To make the constant in the last term strictly positive, first pick $r > 0$ such that $\mathbb{P}\left[X_1 > 2r\right] > 0$. 
Next, take $\gamma > 0$ big enough to ensure that $C(\alpha, r) e^{\alpha \gamma} - C(\alpha, \beta) e^{\beta \gamma} > 0$ and $\gamma - 2r > 0$. Now we choose $L$ large enough to have $\mathbb{P}\left[L X_1 >-2r +\gamma\right] > 0$. Since $\gamma$ is a continuous parameter, if necessary, we can increase it to get ${\mathbb{P}\left[-2r +\gamma < L X_1 < - r +\gamma \right] > 0}$. For such constants we have \begin{equation*} \begin{split} 0 & < \mathbb{P}\left[X_{L+1} > 2r \right] \prod_{i=1}^{L} \mathbb{P}\left[-2r +\gamma < L X_i < - r +\gamma \right]\\ & \leq \mathbb{P}\left[X_{L+1} > 2r, S_L \geq S_{L-1} \geq ... \geq S_1,-2r + \gamma < S_L < -r+\gamma \right]\\ & \leq \mathbb{P}\left[X_{L+1} > 2r, S_L = M_L,-2r + \gamma < S_L < -r+\gamma \right]\\ & \leq \mathbb{P}\left[M_L < -r+\gamma, S_{L+1} > \gamma \right], \end{split} \end{equation*} and \eqref{pr1eq1} follows. \end{proof} \section{Asymptotics} \begin{proof}[Proof of Theorem \ref{th1}] We will show that the limit \begin{equation}\label{eq5} \lim_{u \to \infty} e^{u\overline{\alpha}}\sqrt{u}\, \mathbb{P}\left[\tau_{u} = k_u +1 \right] \end{equation} exists, which, combined with Proposition \ref{prop1}, gives Theorem \ref{th1}. \newline Fix an arbitrary $L$. Since $M_{k_u} = \max(M_{k_u - L},S_{k_u - L}+ M_{L}^{k_u})$ we have \begin{equation}\label{eq6} \begin{split} \mathbb{P}\left[ S_{k_u -L } + M_{L}^{k_u} \leq u, S_{k_u +1} >u \right] & = \mathbb{P}\left[ S_{k_u -L } +M_{L}^{k_u} \leq u, S_{k_u +1} >u, M_{k_u - L} > u \right]\\& \quad + \mathbb{P}\left[ S_{k_u -L }+ M_{L}^{k_u} \leq u, S_{k_u +1} >u, M_{k_u - L} \leq u \right] \\ & = \mathbb{P}\left[ S_{k_u -L } +M_{L}^{k_u} \leq u, S_{k_u +1} >u, M_{k_u - L} > u \right] \\& \quad + \mathbb{P}\left[M_{k_u} \leq u, S_{k_u +1} >u \right] \\ & = \mathbb{P}\left[ S_{k_u -L }+ M_{L}^{k_u} \leq u, S_{k_u +1} >u, M_{k_u - L} > u \right]\\& \quad + \mathbb{P}\left[\tau_{u} = k_u +1 \right]. 
\end{split} \end{equation} From Lemma \ref{l1} with $M = -1$ and $\gamma = 0$ we obtain \begin{equation*} \mathbb{P}\left[M_{k_u - L} > u, S_{k_u+1} > u \right] \leq C(\alpha, \beta) \lambda(\alpha)^{-L} \lambda(\beta)^{L+1} \frac{e^{-u\overline{\alpha}}}{\sqrt{u}} = C(\alpha, \beta) \delta^{L} \frac{e^{-u\overline{\alpha}}}{\sqrt{u}}, \end{equation*} where $\delta = \frac{\lambda(\beta)}{\lambda(\alpha)} < 1$ provided $\beta < \alpha$. Thus to get \eqref{eq5} it is sufficient to show that for some large fixed $L$ \begin{equation*} \lim_{u \to \infty} e^{u\overline{\alpha}}\sqrt{u}\, \mathbb{P}\left[ S_{k_u -L }+ M_{L}^{k_u} \leq u, S_{k_u +1} >u \right] \end{equation*} exists. Indeed, multiply both sides of \eqref{eq6} by $e^{u\overline{\alpha}}\sqrt{u}$, first let $u \to \infty$ and then $L \to \infty$. We write \begin{equation*} \begin{split} \mathbb{P}\left[ S_{k_u -L }+ M_{L}^{k_u} \leq u, S_{k_u +1} >u \right] = \quad& \mathbb{P}\left[u -u^{\frac{1}{4}} < S_{k_u - L} < u, S_{k_u -L } +M_{L}^{k_u} \leq u, S_{k_u +1} >u \right] \\ & + \mathbb{P}\left[u -u^{\frac{1}{4}} \geq S_{k_u - L}, S_{k_u -L }+ M_{L}^{k_u} \leq u, S_{k_u +1} >u \right]. 
\end{split} \end{equation*} To estimate the second summand, fix $\beta > \alpha$ and observe that by Markov's inequality with functions $e^{\alpha x}$ and $e^{\beta x}$ we have \begin{equation*} \begin{split} \mathbb{P}[S_{k_u - L} \leq u -u^{\frac{1}{4}}&, S_{k_u +1} >u ] \\ &\leq \sum_{m \geq 0} \mathbb{P}\left[u -u^{\frac{1}{4}} -(m+1) < S_{k_u - L} \leq u -u^{\frac{1}{4}} -m, S_{k_u - L}+S_{L+1}^{k_u +1} >u \right]\\ & \leq \sum_{m \geq 0} \mathbb{P}\left[S_{k_u - L} > u -u^{\frac{1}{4}} -(m+1)\right] \mathbb{P}\left[S_{L+1} > u^{\frac{1}{4}}+ m \right] \\ & \leq \sum_{m \geq 0} \lambda(\alpha)^{k_u - L} e^{-u\alpha} e^{\alpha u^{\frac{1}{4}}} e^{\alpha(m+1)} \lambda(\beta)^{L+1} e^{-\beta u^{\frac{1}{4}}} e^{-\beta m } \\ & = \lambda(\alpha)^{k_u - L} e^{-u\alpha} e^{(\alpha - \beta)u^{\frac{1}{4}}} \lambda(\beta)^{L+1} \sum_{m \geq 0} e^{\alpha(m+1)} e^{-\beta m } = o\left( \frac{e^{-u\overline{\alpha}}}{\sqrt{u}} \right). \end{split} \end{equation*} The same argument proves $$ {\mathbb{P}}\big[ S_{k_u-L} > u-u^{\frac 14}, S^{k_u+1}_{L+1} > u^{\frac 14}\big] = o\left( \frac{e^{-u\overline{\alpha}}}{\sqrt{u}} \right). $$ Now we see that \begin{equation*} \begin{split} \mathbb{P}[ S_{k_u -L }+& M_{L}^{k_u} \leq u, S_{k_u +1} >u ] \\ & = \mathbb{P}\left[u -u^{\frac{1}{4}} < S_{k_u - L} < u, S_{k_u -L } +M_{L}^{k_u} \leq u, S_{k_u +1} >u \right] + o\left( \frac{e^{-u\overline{\alpha}}}{\sqrt{u}} \right)\\ & = \mathbb{P}\Big[u -u^{\frac{1}{4}} < S_{k_u - L} < u , S_{k_u -L } +M_{L}^{k_u} \leq u, S_{k_u +1} >u, S_{L+1}^{k_u+1} < u^{\frac 14} \Big] + o\left( \frac{e^{-u\overline{\alpha}}}{\sqrt{u}} \right) \end{split} \end{equation*} and hence we have reduced the problem to finding \begin{equation*} \lim_{u \to \infty} e^{u\overline{\alpha}}\sqrt{u}\, \mathbb{P}\left[u -u^{\frac{1}{4}} < S_{k_u - L} < u, S_{k_u -L }+ M_{L}^{k_u} \leq u, S_{k_u +1} >u, S_{L+1}^{k_u+1} < u^{\frac 14} \right]. 
\end{equation*} For this purpose we write \begin{equation}\label{eq8} \begin{split} &\mathbb{P}\left[u -u^{\frac{1}{4}} < S_{k_u - L} < u, S_{k_u -L }+ M_{L}^{k_u}\leq u, S_{k_u - L}+ S_{L+1}^{k_u +1} >u, S_{L+1}^{k_u+1} < u^{\frac 14} \right] \\ &= \int_{0\le y\le x < u^{\frac 14}} \mathbb{P}\left[u - x < S_{k_u - L} < u - y \right] \mathbb{P} \left[M_{L}^{k_u} \in dy, S_{L+1}^{k_u +1} \in dx \right]. \end{split} \end{equation} Now we apply Lemma \ref{lem: petrov} with $n=k_u$, $j_n = L$, $\overline{\delta}_n = C n^{-\frac{1}{4}}$ and $\delta_n = -\frac{y}{n}$. We have $$ \mathbb{P}\left[S_{k_u - L} \geq u - y \right] = C(\alpha) \frac{e^{-u\overline{\alpha}}}{\sqrt{u}} e^{y \alpha} e^{-L \Lambda(\alpha)} (1+o(1)),$$ provided $\max \Big\{ y/\sqrt{u},\, L/\sqrt{u} \Big\} \leq C u^{-\frac{1}{4}}$. Since $y < u^{\frac{1}{4}}$, all the assumptions of the lemma are satisfied. Analogously $$ \mathbb{P}\left[S_{k_u - L} \geq u - x \right] = C(\alpha) \frac{e^{-u\overline{\alpha}}}{\sqrt{u}} e^{x \alpha} e^{-L \Lambda(\alpha)} (1+o(1)).$$ Returning to \eqref{eq8} we end up with \begin{equation*} \begin{split} &\mathbb{P}\left[u -u^{\frac{1}{4}} < S_{k_u - L} < u, S_{k_u -L }+ M_{L}^{k_u}\leq u, S_{k_u - L}+ S_{L+1}^{k_u +1} >u \right] \\ &= C(\alpha) \frac{e^{-u\overline{\alpha}}}{\sqrt{u}} e^{-L \Lambda(\alpha)} \mathbb{E}\left[\left( e^{\alpha S_{L+1}} - e^{\alpha M_{L}} \right)_{+} \right] (1+o(1)) \quad \text{as } u \to \infty. \end{split} \end{equation*} Note that by the moment assumptions the expectation above is finite, hence we conclude \eqref{eq5}. \end{proof}
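As an informal numerical illustration (not part of the proof), the first-passage event $\{\tau_u = k+1\} = \{M_k \leq u,\, S_{k+1} > u\}$ can be estimated by Monte Carlo simulation for a concrete light-tailed walk. The Gaussian increment law, the drift $-0.5$, and the values of $u$ and $k$ below are assumptions chosen purely for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def first_passage_prob(u, k, n_sim=200_000, mu=-0.5):
    """Monte Carlo estimate of P[M_k <= u, S_{k+1} > u] for a random walk
    with i.i.d. N(mu, 1) increments, where M_k = max(0, S_1, ..., S_k).
    Illustration only: the asymptotics above concern u -> infinity with
    k = k_u growing proportionally to u."""
    X = rng.normal(mu, 1.0, size=(n_sim, k + 1))
    S = np.cumsum(X, axis=1)                       # S_1, ..., S_{k+1}
    M_k = np.maximum(S[:, :k].max(axis=1), 0.0)    # running maximum up to step k
    return float(np.mean((M_k <= u) & (S[:, k] > u)))
```

Raising the barrier $u$ at fixed $k$ makes the event rarer, consistent with the exponential factor $e^{-u\overline{\alpha}}$ in the asymptotics.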
\section*{\fontsize{16}{16}\selectfont Appendix} \renewcommand{\thetable}{\Alph{table}} \renewcommand{\thesection}{\Alph{section}} \renewcommand{\theequation}{S\arabic{equation}} \renewcommand{\thefigure}{\Alph{figure}} \setcounter{figure}{0} \setcounter{table}{0} \section{Notations} \bgroup \renewcommand{\arraystretch}{1.5} \begin{tabular}{p{1.5in}p{3.25in}} $\displaystyle p$ & data distribution\\ $\dst \mathbb{P}(A)$ & probability of event $\dst A$\\ $\displaystyle \mathcal{C}^k$ & set of functions with continuous $k$-th derivatives \\ $\displaystyle{\bm{w}}(t)$ & standard Wiener Process\\ $\displaystyle\overline{{\bm{w}}}(t)$ & reverse-time standard Wiener Process\\ $\displaystyle h({\bm{x}},t)$ & drift coefficient in SDE\\ $\displaystyle g(t)$ & diffusion coefficient in SDE\\ $\displaystyle \alpha_t $ & scaling coefficient at time $\dst t$\\ $\displaystyle \sigma_t^2 $ & variance of added Gaussian noise at time $\dst t$\\ $\displaystyle \{{\mathbf{x}}_t\}_{t\in [0,1]}$ & diffusion process generated by SDE\\ $\displaystyle \{\hat {\mathbf{x}}_t\}_{t\in [0,1]}$ & reverse process generated by reverse-SDE\\ $\displaystyle p_t$ & distribution of ${\mathbf{x}}_t$ and $\hat {\mathbf{x}}_t$\\ $\displaystyle \{{\mathbf{x}}_1, {\mathbf{x}}_2,\ldots, {\mathbf{x}}_N\}$ & diffusion process generated by DDPM\\ $\dst \{\beta_i\}_{i=1}^N$ & pre-defined noise scales in DDPM \\ $\displaystyle \boldsymbol{\epsilon}_a$ & adversarial attack\\ $\displaystyle {\bm{x}}_a$ & adversarial sample\\ $\displaystyle {\bm{x}}_{a,t}$ & scaled adversarial sample\\ $\displaystyle f(\cdot)$ & classifier\\ $\displaystyle g(\cdot)$ & smoothed classifier\\ $\displaystyle \mathbb{P}\left(\hat{{\mathbf{x}}}_0 ={\bm{x}}| {\hat {{\mathbf{x}}}_t = {\bm{x}}_{a,t}}\right)$ & density of conditional distribution generated by reverse-SDE based on $ {\bm{x}}_{a,t}$\\ $\dst \mathcal{P}({\bm{x}}_a; t)$ & purification model with highest density point\\ $\dst \mathcal{G}({\bm{x}}_0)$ & data 
region with the same label as ${\bm{x}}_0$\\ $\dst \mathcal{D}^f_{\mathcal{P}}(\mathcal{G}({\bm{x}}_0);t)$ & robust region for $\dst \mathcal{G}({\bm{x}}_0)$ associated with base classifier $f$ and purification model $\dst \mathcal{P}$\\ $\dst r^f_{\mathcal{P}}({\bm{x}}_0;t)$ & robust radius for the point associated with base classifier $f$ and purification model $\dst \mathcal{P}$\\ $\dst \mathcal{D}_{sub}({\bm{x}}_0;t)$ & convex robust sub-region\\ $\displaystyle {\bm{s}}_\theta({\bm{x}},t)$ & score function\\ $\displaystyle \{{\mathbf{x}}^{\theta}_t\}_{t\in [0,1]}$ & reverse process generated by score-based diffusion model\\ $\displaystyle \mathbb{P}\left({{\mathbf{x}}}^\theta_0 ={\bm{x}}| {{{\mathbf{x}}}^\theta_t = {\bm{x}}_{a,t}}\right)$ & density of conditional distribution generated by score-based diffusion model based on $ {\bm{x}}_{a,t}$\\ $\lambda(\tau)$ & weighting scheme of training loss for score-based diffusion model\\ $\dst \mathcal{J}_{\mathrm{SM}}(\theta, t ; \lambda(\cdot))$ & truncated training loss for score-based diffusion model\\ $\boldsymbol{\mu}_{t}, \boldsymbol{\nu}_{t}$ & path measure for $\dst \{\hat {\mathbf{x}}_\tau\}_{\tau\in [0,t]}$ and $\dst \{{\mathbf{x}}^\theta_\tau\}_{\tau\in [0,t]}$ respectively\\ \end{tabular} \egroup \vspace{0.25cm} \section{More details about Theoretical analysis } \subsection{Assumptions}\label{appendassump} \begin{itemize} \item[(i)] The data distribution $\dst p \in \mathcal{C}^2$ and $\mathbb{E}_{{\bm{x}}\sim p} [||{\bm{x}}||_2^2]< \infty$. \item[(ii)] $\forall t \in[0, T]: h(\cdot, t) \in \mathcal{C}^1, \exists C>0, \forall {\bm{x}} \in \mathbb{R}^n, t \in[0, T]:||h({\bm{x}}, t)||_2 \leqslant C\left(1+||{\bm{x}}||_2\right)$. \item[(iii)] $\exists C>0, \forall {\bm{x}}, {\bm{y}} \in \mathbb{R}^n:||h({\bm{x}}, t)-h({\bm{y}}, t)||_2 \leqslant C\|{\bm{x}}-{\bm{y}}\|_2$. \item[(iv)] $g \in \mathcal{C} \text { and } \forall t \in[0, T],|g(t)|>0$. 
\item[(v)] $\forall t \in[0, T]: {\bm{s}}_\theta(\cdot, t) \in \mathcal{C}^1, \exists C>0, \forall {\bm{x}} \in \mathbb{R}^n, t \in[0, T]:||{\bm{s}}_\theta({\bm{x}}, t)||_2 \leqslant C\left(1+||{\bm{x}}||_2\right)$. \item[(vi)] $\exists C>0, \forall {\bm{x}}, {\bm{y}} \in \mathbb{R}^n:||{\bm{s}}_\theta({\bm{x}}, t)-{\bm{s}}_\theta({\bm{y}}, t)||_2 \leqslant C\|{\bm{x}}-{\bm{y}}\|_2$. \end{itemize} \subsection{Theorems and Proofs} \label{app:proofs} \normalfont \textbf{Theorem 3.1.} \textit{Under conditions \ref{appendassump}, solving \eqref{reverseSDE} starting from time $t$ and point $\dst {\bm{x}}_{a,t}= \sqrt{\alpha_t} {\bm{x}}_a$ will generate a reversed random variable $\dst \hat{\mathbf{x}}_0 $ with conditional distribution \begin{align*} \dst \mathbb{P}\left(\hat{{\mathbf{x}}}_0 ={\bm{x}}| {\hat {{\mathbf{x}}}_t = {\bm{x}}_{a,t}}\right) \propto p({\bm{x}}) \cdot \frac{1}{\sqrt{\left(2\pi\sigma^2_t\right)^n}} e^{\frac{-|| {\bm{x}} -{\bm{x}}_a||^2_2}{2\sigma^2_t}} \end{align*} where $\dst \sigma_t^2 = \frac{1-\alpha_t}{\alpha_t}$ is the variance of the Gaussian noise added at timestamp $\dst t$ in the diffusion process \ref{SDE}.} \begin{proof} Under the assumption, we know $\{{\mathbf{x}}_t\}_{t\in [0,1]}$ and $\{\hat {\mathbf{x}}_t\}_{t\in [0,1]}$ follow the same distribution, which means \begin{align*} \dst \mathbb{P}\left(\hat{{\mathbf{x}}}_0 = {\bm{x}}| {\hat{{\mathbf{x}}}_t = {\bm{x}}_{a,t}}\right) ~=&~ \frac{\mathbb{P}(\hat{{\mathbf{x}}}_0 = {\bm{x}}, \hat{{\mathbf{x}}}_t = {\bm{x}}_{a,t})}{\mathbb{P}(\hat{{\mathbf{x}}}_t = {\bm{x}}_{a,t})} \\ = &~ \frac{\mathbb{P}({\mathbf{x}}_0 = {\bm{x}}, {\mathbf{x}}_t = {\bm{x}}_{a,t})}{\mathbb{P}({\mathbf{x}}_t = {\bm{x}}_{a,t})} \\ = &~ \mathbb{P}\left({\mathbf{x}}_0={\bm{x}}\right) \frac{ \mathbb{P}({\mathbf{x}}_t = {\bm{x}}_{a,t} | {\mathbf{x}}_{0} = {\bm{x}})}{\mathbb{P}({\mathbf{x}}_t = {\bm{x}}_{a,t})} \\ \propto&~ \mathbb{P}\left({\mathbf{x}}_0={\bm{x}}\right) \frac{1}{\sqrt{\left(2\pi\sigma^2_t\right)^n}} 
e^{\frac{-|| {\bm{x}} -{\bm{x}}_a||^2_2}{2\sigma^2_t}}\\ =& ~ p({\bm{x}}) \cdot \frac{1}{\sqrt{\left(2\pi\sigma^2_t\right)^n}} e^{\frac{-|| {\bm{x}} -{\bm{x}}_a||^2_2}{2\sigma^2_t}} \end{align*} where the third equation is due to the chain rule of probability and the last equation is a result of the diffusion process. \end{proof} \textbf{Theorem 3.3.} \textit{ Under conditions \ref{appendassump} and classifier $f$, let $\dst {\bm{x}}_0$ be the sample with ground-truth label and $\dst {\bm{x}}_a$ be the adversarial sample, then (i) the purified sample $\dst \mathcal{P}({\bm{x}}_a; t)$ will have the ground-truth label if $\dst {\bm{x}}_a $ falls into the following convex set, \begin{align*} \dst \mathcal{D}_{{\tiny\mbox{sub}}}\left({\bm{x}}_0;t\right):=\bigcap_{\left\{{\bm{x}}'_0:f({{\bm{x}}'_0})\neq f({\bm{x}}_0)\right\}} \left\{{\bm{x}}_a : ({{\bm{x}}}_a -{{\bm{x}}_0})^\top ({{\bm{x}}}'_0-{{\bm{x}}}_0) < \sigma_t^2 \log\left(\frac{p({{\bm{x}}}_0)}{p({{\bm{x}}}'_0)}\right)+\frac{||{\bm{x}}'_0 -{{\bm{x}}}_0||^2_2 }{2} \right\}, \end{align*} and further, (ii) the purified sample $\dst \mathcal{P}({\bm{x}}_a; t)$ will have the ground-truth label if and only if $\dst {\bm{x}}_a $ falls into the following set, $\dst \mathcal{D}\left(\mathcal{G}({{\bm{x}}}_0);t\right) := \bigcup_{\tilde{{{\bm{x}}}}_0: f\left(\tilde{{{\bm{x}}}}_0\right) = f\left({{\bm{x}}}_0\right)} \mathcal{D}_{{\tiny\mbox{sub}}}\left(\tilde{{{\bm{x}}}}_0;t\right)$. In other words, $\dst \mathcal{D}\left(\mathcal{G}({{\bm{x}}}_0);t\right)$ is the robust region for data region $\mathcal{G}({{\bm{x}}}_0)$ under $\dst \mathcal{P}(\cdot ; t)$ and $f$. } \begin{proof} We start with part (i). The main idea is to prove that a point $\dst {\bm{x}}_0'$ such that $\dst f({\bm{x}}'_0)\neq f({\bm{x}}_0)$ should have lower density than $\dst {\bm{x}}_0$ in the conditional distribution in Theorem \ref{distribution:reverse} so that $\mathcal{P}({\bm{x}}_a; t)$ cannot be $\dst {\bm{x}}_0'$. 
In other words, we should have \begin{align*} \mathbb{P}\left(\hat{{\mathbf{x}}}_0 = {{\bm{x}}}_0| {\hat{{\mathbf{x}}}_t = {\bm{x}}_{a,t}}\right) > \mathbb{P}\left(\hat{{\mathbf{x}}}_0 = {\bm{x}}'_0\mid{\hat{{\mathbf{x}}}_t = {\bm{x}}_{a,t}}\right). \end{align*} By Theorem \ref{distribution:reverse}, this is equivalent to \begin{align*}\dst &~p({{\bm{x}}}_0) \cdot \frac{1}{\sqrt{\left(2\pi\sigma^2_t\right)^n}} e^{\frac{-|| {\bm{x}}_0 -{\bm{x}}_a||^2_2}{2\sigma^2_t}} > p({{\bm{x}}}'_0) \cdot \frac{1}{\sqrt{\left(2\pi\sigma^2_t\right)^n}} e^{\frac{-|| {{\bm{x}}}'_0 -{\bm{x}}_a||^2_2}{2\sigma^2_t}}\\ \Leftrightarrow&~ \log\left(\frac{p({\bm{x}}_0)}{p({{\bm{x}}}'_0)}\right) > \frac{1}{2\sigma^2_t} \left( || {\bm{x}}_0 -{{\bm{x}}}_a||^2_2 -|| {{\bm{x}}}'_0 -{{\bm{x}}}_a||^2_2\right)\\ \Leftrightarrow&~ \log\left(\frac{p({\bm{x}}_0)}{p({{\bm{x}}}'_0)}\right) > \frac{1}{2\sigma^2_t} \left( || {\bm{x}}_0 -{{\bm{x}}}_a||^2_2 -|| {{\bm{x}}}'_0 -{\bm{x}}_0+{\bm{x}}_0 -{{\bm{x}}}_a||^2_2\right)\\ \Leftrightarrow&~ \log\left(\frac{p({\bm{x}}_0)}{p({{\bm{x}}}'_0)}\right) > \frac{1}{2\sigma^2_t} \left( 2({{\bm{x}}}_a-{{\bm{x}}}_0)^\top ({{\bm{x}}}'_0-{{\bm{x}}}_0)-\|{{\bm{x}}}'_0-{{\bm{x}}}_0\|_2^2\right). \end{align*} Re-organizing the above inequality, we obtain \begin{align*}\label{robust:halfspace} \dst ({{\bm{x}}}_a -{{\bm{x}}}_0)^\top ({{\bm{x}}}'_0-{{\bm{x}}}_0) < \sigma_t^2 \log\left(\frac{p({{\bm{x}}}_0)}{p({{\bm{x}}}'_0)}\right)+\frac{1}{2} || {{\bm{x}}}'_0 -{{\bm{x}}}_0||^2_2. \end{align*} Note that every term of the above inequality is at most linear in $\dst{\bm{x}}_a$, so the inequality defines a half-space in $ \dst \mathbb{R}^n$ for every $\dst({\bm{x}}_0, {\bm{x}}'_0)$ pair. Further, we have to satisfy the inequality for every $\dst{\bm{x}}'_0$ such that $\dst f({\bm{x}}'_0)\neq f({\bm{x}}_0)$; therefore, by intersecting over all such half-spaces, we obtain the convex set $\dst\mathcal{D}_{{\tiny\mbox{sub}}}\left({{\bm{x}}}_0;t\right)$. 
Then we prove part (ii). On the one hand, if $\dst {\bm{x}}_a\in \mathcal{D}\left(\mathcal{G}({{\bm{x}}}_0);t\right)$, then there exists some $\dst \tilde{{{\bm{x}}}}_0$ such that $f(\tilde{{\bm{x}}}_0) = f({\bm{x}}_0)$ and $\dst {\bm{x}}_a\in \mathcal{D}_{{\tiny\mbox{sub}}}\left(\tilde{{{\bm{x}}}}_0;t\right)$. By part (i), $\dst \tilde{{{\bm{x}}}}_0$ has higher probability than all points with labels different from that of $\dst {{\bm{x}}}_0$ in the conditional distribution $\dst \mathbb{P}\left(\hat{{\mathbf{x}}}_0 ={\bm{x}}| {\hat {{\mathbf{x}}}_t = {\bm{x}}_{a,t}}\right)$ characterized by Theorem \ref{distribution:reverse}. Therefore, $\mathcal{P}({\bm{x}}_a ; t)$ should have the same label as $\dst {{\bm{x}}}_0$. On the other hand, if $\dst {\bm{x}}_a\notin \mathcal{D}\left(\mathcal{G}({{\bm{x}}}_0);t\right)$, then there is a point $\dst \tilde{{{\bm{x}}}}_1$ with a label different from that of $\dst {{\bm{x}}}_0$ such that for any $\dst \tilde{{{\bm{x}}}}_0$ with the same label as $\dst {{\bm{x}}}_0$, $\dst \mathbb{P}\left(\hat{{\mathbf{x}}}_0 =\tilde{{\bm{x}}}_1| {\hat {{\mathbf{x}}}_t = {\bm{x}}_{a,t}}\right) > \mathbb{P}\left(\hat{{\mathbf{x}}}_0 =\tilde{{\bm{x}}}_0| {\hat {{\mathbf{x}}}_t = {\bm{x}}_{a,t}}\right)$. In other words, $\mathcal{P}({\bm{x}}_a ; t)$ would have a label different from that of $\dst {{\bm{x}}}_0$. 
\end{proof} \textbf{Theorem 3.4.} \textit{ Under the score-based diffusion model \cite{Song2021ICLR} and conditions \ref{appendassump}, we have \begin{align*} \dst D_{\text{KL}}(\mathbb{P}(\hat {\mathbf{x}}_0 ={\bm{x}} \mid \hat {\mathbf{x}}_{t} = {\bm{x}}_{a,t}) \| \mathbb{P}({\mathbf{x}}^{\theta}_0 ={\bm{x}} \mid {\mathbf{x}}^{\theta}_{t} = {\bm{x}}_{a,t})) = \mathcal{J}_{\mathrm{SM}}(\theta, t ; \lambda(\cdot)) \end{align*} where $\{\hat {\bm{x}}_\tau\}_{\tau\in [0,t]}$ and $\{{\bm{x}}^\theta_\tau\}_{\tau\in [0,t]}$ are stochastic processes generated by \eqref{reverseSDE} and the score-based diffusion model, respectively, $$\dst \mathcal{J}_{\mathrm{SM}}(\theta, t ; \lambda(\cdot)):=\frac{1}{2} \int_0^{t} \mathbb{E}_{p_\tau(\mathbf{x})}\left[\lambda(\tau)\left\|\nabla_{\mathbf{x}} \log p_\tau(\mathbf{x})-\boldsymbol{s}_{\theta}(\mathbf{x}, \tau)\right\|_2^2\right] \mathrm{d} \tau,$$ $\boldsymbol{s}_{\theta}(\mathbf{x}, \tau)$ is the score function to approximate $\nabla_{\mathbf{x}} \log p_\tau(\mathbf{x})$, and $\lambda: \mathbb{R}\rightarrow \mathbb{R}$ is any weighting scheme used in training score-based diffusion models.} \begin{proof} Similar to the proof of \cite[Theorem 1]{song2021maximum}, let $\dst \boldsymbol{\mu}_{t}$ and $\dst \boldsymbol{\nu}_{t}$ be the path measures for the reverse processes $\dst \{\hat {\mathbf{x}}_\tau\}_{\tau\in [0,t]}$ and $\dst \{{\mathbf{x}}^\theta_\tau\}_{\tau\in [0,t]}$ respectively, based on the scaled adversarial sample ${\bm{x}}_{a, t}$. 
Under conditions \ref{appendassump}, the KL-divergence can be computed via the Girsanov theorem \cite{oksendal2013stochastic}: \begin{align*} & ~\dst D_{\text{KL}}\left(\mathbb{P}(\hat {\mathbf{x}}_0 ={\bm{x}} \mid \hat {\mathbf{x}}_{t} = {\bm{x}}_{a,t}) \| \mathbb{P}({\mathbf{x}}^{\theta}_0 ={\bm{x}} \mid {\mathbf{x}}^{\theta}_{t} = {\bm{x}}_{a,t})\right) \\ =&~-\mathbb{E}_{\boldsymbol{\mu}_{t}}\left[\log \frac{d \boldsymbol{\nu}_{t}}{d \boldsymbol{\mu}_{t}}\right] \\ \stackrel{(i)}{=}&~ \mathbb{E}_{\boldsymbol{\mu}_{t}}\left[\int_0^{t} g(\tau)\left(\nabla_{\mathbf{x}} \log p_\tau(\mathbf{x})-\boldsymbol{s}_{\theta}(\mathbf{x}, \tau)\right) \mathrm{d} \overline{\mathbf{w}}_\tau+\frac{1}{2} \int_0^{t} g(\tau)^2\left\|\nabla_{\mathbf{x}} \log p_\tau(\mathbf{x})-\boldsymbol{s}_{\theta}(\mathbf{x}, \tau)\right\|_2^2 \mathrm{~d} \tau\right] \\ \stackrel{(ii)}{=}&~ \mathbb{E}_{\boldsymbol{\mu}_{t}}\left[\frac{1}{2} \int_0^{t} g(\tau)^2\left\|\nabla_{\mathbf{x}} \log p_\tau(\mathbf{x})-\boldsymbol{s}_{\theta}(\mathbf{x}, \tau)\right\|_2^2 \mathrm{~d} \tau\right] \\ =&~ \frac{1}{2} \int_0^{t} \mathbb{E}_{p_\tau(\mathbf{x})}\left[g(\tau)^2\left\|\nabla_{\mathbf{x}} \log p_\tau(\mathbf{x})-\boldsymbol{s}_{\theta}(\mathbf{x}, \tau)\right\|_2^2\right] \mathrm{d} \tau \\ =& ~\mathcal{J}_{\mathrm{SM}}\left(\theta, t ; g(\cdot)^2\right) \end{align*} where (i) is due to the Girsanov theorem and (ii) is due to the martingale property of Itô integrals. 
\end{proof} \section{More details about \textsf{DensePure}~} \subsection{Pseudo-Code}\label{appendpseudocode} We provide the pseudo-code of \textsf{DensePure}~ in Algorithm~\ref{alg1} and Algorithm~\ref{alg2}. \begin{algorithm} \caption{\textsf{DensePure}~ pseudo-code with the highest density point}\label{alg1} \begin{algorithmic}[1] \State Initialization: choose off-the-shelf diffusion model and classifier $f$, choose $\dst \psi = t$ \State Input sample $\dst {\bm{x}}_a = {\bm{x}}_0 + \boldsymbol{\epsilon}_a$ \State Compute $\dst \hat{{\bm{x}}}_0 = \mathcal{P}({\bm{x}}_{a}; \psi)$ \State $\hat{y} = f(\hat{{\bm{x}}}_0)$ \end{algorithmic} \end{algorithm} \begin{algorithm} \caption{\textsf{DensePure}~ pseudo-code with majority vote}\label{alg2} \begin{algorithmic}[1] \State Initialization: choose off-the-shelf diffusion model and classifier $f$, choose $\sigma$ \State Compute $\overline{\alpha}_n = \frac{1}{1+\sigma^2}$, $ n = \argmin_{s} \left\{ \left|\overline{\alpha}_s- \frac{1}{1+\sigma^2} \right| \ |~ s\in \{1, 2, \cdots, N\} \right\}$ \State Generate input sample $\dst {\bm{x}}_{\text{rs}} = {\bm{x}}_0 + \boldsymbol{\epsilon}, \boldsymbol{\epsilon} \sim \mathcal{N}(\boldsymbol{0}, \sigma^2 {\bm{I}})$ \State Choose schedule $S^b$, get $\dst \hat{{\bm{x}}}_0^i \leftarrow \mathbf{rev}(\sqrt{\overline{\alpha}_n} {\bm{x}}_{\text{rs}})_i, i = 1, 2, \dots, K$ with Fast Sampling \State $\hat{y} = \textbf{MV}(\{f(\hat{{\bm{x}}}_0^1), \dots, f(\hat{{\bm{x}}}_0^{K})\}) = \argmax_c \sum_{i=1}^{K} \pmb{1} \{f(\hat{{\bm{x}}}_0^i) = c\}$ \end{algorithmic} \end{algorithm} \subsection{Details about Fast Sampling} \label{sec:fast} Applying the single-step operation $n$ times is time-consuming. 
In order to reduce the time complexity, we follow the method used in ~\citep{nichol2021improved} and sample a subsequence $S^b$ with $b$ values (i.e., $S^b= \underbrace{ \{n, \floor{n-\frac{n}{b}}, \cdots, 1 \}}_{b}$ , where $S_j^b$ is the $j$-th element in $S^b$ and $S_j^b= \floor{n - \frac{jn}{b}}, \forall j < b \text{ and } S_b^b = 1$) from the original schedule $S$ (i.e., $S = \underbrace{ \{n, n-1, \cdots, 1\}}_{n}$, where $S_j= j $ is the $j$-th element in $S$). Within this context, we adapt the original $\overline{\alpha}$ schedule $\overline{\alpha}^S$ = $\{\overline{\alpha}_1, \cdots, \overline{\alpha}_i, \cdots, \overline{\alpha}_n\}$ used for single-step to the new schedule $\overline{\alpha}^{S^b}$ = $\{\overline{\alpha}_{S_1^b}, \cdots, \overline{\alpha}_{S_j^b}, \cdots, \overline{\alpha}_{S_b^b}\}$ (i.e., $\overline{\alpha}^{S^{b}}_i = \overline{\alpha}_{S_i^b} = \overline{\alpha}_{S_{\floor{n - \frac{i n}{b}} }}$ is the $i$-th element in $\overline{\alpha}^{S^b}$). We calculate the corresponding $\beta^{S^b} = \{\beta^{S^b}_1, \beta^{S^b}_2, \cdots, \beta^{S^b}_i, \cdots,\beta^{S^b}_b \}$ and $\widetilde{\beta}^{S^b} = \{ \widetilde{\beta}^{S^b}_1, \widetilde{\beta}^{S^b}_2, \cdots, \widetilde{\beta}^{S^b}_i, \cdots, \widetilde{\beta}^{S^b}_b \}$ schedules, where $ \beta_{S^b_i}=\beta^{S^b}_i = 1 - \frac{\overline{\alpha}^{S^{b}}_i }{\overline{\alpha}^{S^{b}}_{i-1}}, \quad \widetilde{\beta}_{S^b_i}=\widetilde{\beta}^{S^{b}}_i = \frac{1-\overline{\alpha}^{S^{b}}_{i-1}}{1-\overline{\alpha}^{S^b}_{i}}\beta_{S^b_i}$. With these new schedules, we can use $b$ times reverse steps to calculate $\hat{{\bm{x}}}_{0} = \underbrace{\textbf{Reverse}( \cdots \textbf{Reverse}( \textbf{Reverse}({\bm{x}}_n; S^b_b); S^b_{b-1}); \cdots ; 1)}_{b}$. Since $\boldsymbol{\Sigma}_{\boldsymbol{\theta}} ({\bm{x}}_{S^b_{i}}, S^b_{i})$ is parameterized as a range between $\beta^{S^b}$ and $\widetilde{\beta}^{S^b}$, it will automatically be rescaled. 
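As a concrete sketch of the schedule adaptation above, the subsequence $S^b$ and the recomputed $\beta^{S^b}$, $\widetilde{\beta}^{S^b}$ might be produced as follows. This is an illustration rather than the authors' implementation: the function name is hypothetical, and we take the ratio of $\overline{\alpha}$ at consecutive retained steps in the direction that keeps every $\beta^{S^b}_i$ in $[0,1)$, with the convention $\overline{\alpha}_0 = 1$ at the final step:

```python
import numpy as np

def fast_schedule(alpha_bar, b):
    """Subsample b timesteps S^b = {n, floor(n - n/b), ..., 1} from a full
    n-step DDPM schedule and recompute beta / beta-tilde along S^b.
    alpha_bar[t - 1] holds bar-alpha_t for t = 1, ..., n (decreasing in t).
    A sketch under stated assumptions, not the authors' code."""
    n = len(alpha_bar)
    steps = [max(1, int(np.floor(n - j * n / b))) for j in range(b)]
    steps[-1] = 1                                    # enforce S^b_b = 1
    ab = np.asarray(alpha_bar)[np.array(steps) - 1]  # bar-alpha along S^b
    nxt = np.append(ab[1:], 1.0)                     # bar-alpha at the next retained (smaller) step
    beta = 1.0 - ab / nxt                            # ratio of consecutive bar-alpha values
    beta_tilde = (1.0 - nxt) / (1.0 - ab) * beta     # posterior variance schedule
    return steps, beta, beta_tilde
```

With a linear 1000-step $\beta$ schedule and $b = 10$, this retains the steps $1000, 900, \dots, 200, 1$ and yields valid $\beta^{S^b}_i \in [0, 1)$.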
Thus, $\hat{{\bm{x}}}_{S^b_{i-1}} = \textbf{Reverse}(\hat{{\bm{x}}}_{S^b_i}; S^b_i) $ is equivalent to sampling ${\bm{x}}_{S^b_{i-1}}$ from $\mathcal{N}({\bm{x}}_{S^b_{i-1}}; \boldsymbol{\mu}_{\boldsymbol{\theta}} ({\bm{x}}_{S^b_{i}}, S^b_{i}), \boldsymbol{\Sigma}_{\boldsymbol{\theta}} ({\bm{x}}_{S^b_{i}}, S^b_{i}))$. \section{More Experimental details and Results} \subsection{Implementation details} We select three different noise levels $\sigma \in \left\{ 0.25, 0.5, 1.0 \right\}$ for certification. For the parameters of \textsf{DensePure}~{}, the sampling numbers when computing the certified radius are $n = 100000$ for CIFAR-10 and $n = 10000$ for ImageNet. We evaluate the certified robustness on a 500-sample subset of the CIFAR-10 test set and a 500-sample subset of the ImageNet validation set. We set $K = 40$ and $b = 10$ except for the results in the ablation study. The details about the baselines are provided below. \subsection{Baselines.} We select randomized-smoothing-based methods including PixelDP~\citep{lecuyer2019certified}, RS~\citep{Cohen2019ICML}, SmoothAdv~\citep{salman2019provably}, Consistency~\citep{jeong2020consistency}, MACER~\citep{zhai2020macer}, Boosting~\citep{horvath2021boosting}, SmoothMix~\citep{jeong2021smoothmix}, Denoised~\citep{salman2020denoised}, Lee~\citep{lee2021provable}, and Carlini~\citep{carlini2022certified} as our baselines. Among them, PixelDP, RS, SmoothAdv, Consistency, MACER, and SmoothMix require training a smooth classifier for better certification performance while the others do not. \citeauthor{salman2020denoised} and \citeauthor{lee2021provable} use an off-the-shelf classifier but no diffusion model. The work most similar to ours is \citeauthor{carlini2022certified}, which also uses both an off-the-shelf diffusion model and classifier. The above two settings mainly follow \cite{carlini2022certified}, which makes it easier to compare with their results. 
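For concreteness, steps 2 and 5 of Algorithm~\ref{alg2} (mapping the smoothing level $\sigma$ to a diffusion timestep, and the majority vote over the $K$ purified samples) can be sketched as below; the function names are hypothetical and `alpha_bar` is assumed to hold the schedule $\{\overline{\alpha}_s\}_{s=1}^{N}$:

```python
import numpy as np
from collections import Counter

def sigma_to_step(alpha_bar, sigma):
    # Step 2 of Algorithm 2: n = argmin_s |bar-alpha_s - 1/(1 + sigma^2)|.
    target = 1.0 / (1.0 + sigma ** 2)
    return int(np.argmin(np.abs(np.asarray(alpha_bar) - target))) + 1  # 1-indexed timestep

def majority_vote(labels):
    # Step 5 of Algorithm 2: y-hat = argmax_c sum_i 1{f(x_0^i) = c}.
    return Counter(labels).most_common(1)[0][0]
```

A larger smoothing level $\sigma$ gives a smaller target $\overline{\alpha}$ and hence a later timestep $n$, matching the intuition that more smoothing noise requires more reverse steps.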
\subsection{Main Results for Certified Accuracy}\label{main} We compare with \citet{carlini2022certified} in a more fine-grained manner. We provide results of certified accuracy at different ${\epsilon}$ in Table~\ref{tbl:cifar10ab} for CIFAR-10 and Table~\ref{tbl:imagenetab} for ImageNet. We include the accuracy difference between ours and \citet{carlini2022certified} in brackets in the tables. We can observe from the tables that the certified accuracy of our method outperforms \citet{carlini2022certified} except at ${\epsilon} = 0$ for $\sigma = 0.25, 0.5$ on CIFAR-10. \begin{table}[t] \resizebox{\linewidth}{!}{% \begin{tabular}{llrrrrr} \toprule & & \multicolumn{5}{c}{Certified Accuracy at $\boldsymbol{{\epsilon}}(\%)$} \\ Methods & Noise & 0.0 & 0.25 & 0.5 & 0.75 & 1.0 \\ \midrule & $\sigma = 0.25$ & \textbf{88.0} & 73.8 & 56.2 & 41.6 & 0.0 \\ Carlini~\citep{carlini2022certified} & $\sigma = 0.5$ & 74.2 & 62.0 & 50.4 & 40.2 & 31.0 \\ & $\sigma = 1.0$ & 49.4 & 41.4 & 34.2 & 27.8 & 21.8 \\ \midrule & $\sigma = 0.25$ & 87.6(-0.4) & \textbf{76.6(+2.8)} & \textbf{64.6(+8.4)} & \textbf{50.4(+8.8)} & 0.0(+0.0) \\ \textbf{Ours} & $\sigma = 0.5$ & 73.6(-0.6) & 65.4(+3.4) & 55.6(+5.2) & 46.0(+5.8) & \textbf{37.4(+6.4)} \\ & $\sigma = 1.0$ & 55.0(+5.6) & 47.8(+6.4) & 40.8(+6.6) & 33.0(+5.2) & 28.2(+6.4) \\ \bottomrule \end{tabular}} \caption{Certified accuracy compared with \cite{carlini2022certified} for CIFAR-10 at all $\sigma$. The numbers in brackets are the differences in certified accuracy between the two methods. 
Our diffusion model and classifier are the same as \cite{carlini2022certified}.} \label{tbl:cifar10ab} \end{table} \begin{table}[t] \resizebox{\linewidth}{!}{% \begin{tabular}{llrrrrrr} \toprule & & \multicolumn{6}{c}{Certified Accuracy at $\boldsymbol{{\epsilon}}(\%)$} \\ Methods & Noise & 0.0 & 0.5 & 1.0 & 1.5 & 2.0 &3.0 \\ \midrule & $\sigma = 0.25$ & 82.0 & 74.0 & 0.0 & 0.0 & 0.0 &0.0 \\ Carlini~\citep{carlini2022certified} & $\sigma = 0.5$ & 77.2 & 71.8 & 59.8 & 47.0 & 0.0 & 0.0 \\ & $\sigma = 1.0$ & 64.6 & 57.8 & 49.2 & 40.6 & 31.0 & 19.0 \\ \midrule & $\sigma = 0.25$ & \textbf{84.0(+2.0)} & \textbf{77.8(+3.8)} & 0.0(+0.0) & 0.0(+0.0) & 0.0(+0.0) &0.0(+0.0) \\ \textbf{Ours} & $\sigma = 0.5$ & 80.2(+3.0) & 75.6(+3.8) & \textbf{67.0(+7.2)} & \textbf{54.6(+7.6)} & 0.0(+0.0) & 0.0(+0.0)\\ & $\sigma = 1.0$ & 67.8(+3.2) & 61.4(+3.6) & 55.6(+6.4) & 50.0(+9.4) & \textbf{42.2(+11.2)} & \textbf{25.8(+6.8)} \\ \bottomrule \end{tabular}} \caption{Certified accuracy compared with \cite{carlini2022certified} for ImageNet at all $\sigma$. The numbers in brackets are the differences in certified accuracy between the two methods. Our diffusion model and classifier are the same as \cite{carlini2022certified}.} \label{tbl:imagenetab} \end{table} \subsection{Experiments for Voting Samples}\label{exp:vote} Here we provide more experiments with $\sigma \in \{ 0.5, 1.0\}$ and $b=10$ for different numbers of voting samples $K$ in Figure~\ref{fig:mv_0.5} and Figure~\ref{fig:mv_1.0}. The results for CIFAR-10 are in Figure~\ref{fig:mv-cifar}. We can draw the same conclusions as in the main text. 
\begin{figure*}[t] \small \centering \begin{minipage}{0.45\linewidth} \centering \includegraphics[width=\textwidth]{figures/cifar10_mv_0.5.png}\\ CIFAR-10 \end{minipage}\hfill \begin{minipage}{0.45\linewidth} \centering \includegraphics[width=\textwidth]{figures/imagenet_mv_0.5.png}\\ ImageNet \end{minipage}\hfill \vspace{-2mm} \caption{Certified accuracy among different vote numbers with different radii. Each line in the figure represents the certified accuracy among different vote numbers $K$ with Gaussian noise $\sigma=0.50$.} \label{fig:mv_0.5} \end{figure*} \begin{figure*}[t] \small \centering \begin{minipage}{0.45\linewidth} \centering \includegraphics[width=\textwidth]{figures/cifar10_mv_1.0.png}\\ CIFAR-10 \end{minipage}\hfill \begin{minipage}{0.45\linewidth} \centering \includegraphics[width=\textwidth]{figures/imagenet_mv_1.0.png}\\ ImageNet \end{minipage}\hfill \vspace{-2mm} \caption{Certified accuracy among different vote numbers with different radii. Each line in the figure represents the certified accuracy among different vote numbers $K$ with Gaussian noise $\sigma=1.00$.} \label{fig:mv_1.0} \end{figure*} \subsection{Experiments for Fast Sampling Steps}\label{exp:steps} We also implement additional experiments with $b \in \{1, 2, 10\}$ at $\sigma = 0.5, 1.0$. The results are shown in Figure~\ref{fig:steps_0.5} and Figure~\ref{fig:steps_1.0}. The results for CIFAR-10 are in Figure~\ref{fig:mv-cifar}. We draw the same conclusions as in the main text. \begin{figure*}[t] \small \centering \begin{minipage}{0.45\linewidth} \centering \includegraphics[width=\textwidth]{figures/cifar10_steps_0.50.png}\\ CIFAR-10 \end{minipage}\hfill \begin{minipage}{0.45\linewidth} \centering \includegraphics[width=\textwidth]{figures/imagenet_steps_0.50.png}\\ ImageNet \end{minipage}\hfill \vspace{-2mm} \caption{Certified accuracy with different fast sampling steps $b$. 
Each line in the figure shows the certified accuracy among different $L_2$ adversarial perturbation bounds with Gaussian noise $\sigma=0.50$.} \label{fig:steps_0.5} \end{figure*} \begin{figure*}[t] \small \centering \begin{minipage}{0.45\linewidth} \centering \includegraphics[width=\textwidth]{figures/cifar10_steps_1.00.png}\\ CIFAR-10 \end{minipage}\hfill \begin{minipage}{0.45\linewidth} \centering \includegraphics[width=\textwidth]{figures/imagenet_steps_1.00.png}\\ ImageNet \end{minipage}\hfill \vspace{-2mm} \caption{Certified accuracy with different fast sampling steps $b$. Each line in the figure shows the certified accuracy among different $L_2$ adversarial perturbation bounds with Gaussian noise $\sigma=1.00$.} \label{fig:steps_1.0} \end{figure*} \subsection{Experiments for Different Architectures}\label{exp:models} We try different model architectures on ImageNet, including Wide ResNet-50-2 and ResNet-152, with $b=2$ and $K=10$. The results are shown in Figure~\ref{fig:wrn}. We find that our method outperforms \citet{carlini2022certified} for all $\sigma$ across different classifiers. \begin{figure*}[t] \small \centering \begin{minipage}{0.45\linewidth} \centering \includegraphics[width=\textwidth]{figures/cifar10_wrn_0.25.png}\\ CIFAR-10 \end{minipage}\hfill \begin{minipage}{0.45\linewidth} \centering \includegraphics[width=\textwidth]{figures/imagenet_wrn_0.25.png}\\ ImageNet \end{minipage}\hfill \vspace{-2mm} \caption{Certified accuracy with different architectures. 
Each line shows the certified accuracy across different $L_2$ adversarial perturbation bounds with Gaussian noise $\sigma=0.25$.} \label{fig:modelarch} \end{figure*} \begin{figure*}[t] \small \centering \begin{minipage}{0.45\linewidth} \centering \includegraphics[width=\textwidth]{figures/imagenet_wrn.png}\\ Wide ResNet-50-2 \end{minipage}\hfill \begin{minipage}{0.45\linewidth} \centering \includegraphics[width=\textwidth]{figures/imagenet_resnet.png}\\ ResNet-152 \end{minipage}\hfill \vspace{-2mm} \caption{Certified accuracy of ImageNet for different architectures. The lines represent the certified accuracy at different $L_2$ perturbation bounds under different Gaussian noise $\sigma \in \{0.25, 0.50, 1.00\}$.} \label{fig:wrn} \end{figure*} \begin{figure*}[t] \small \centering \begin{minipage}{0.45\linewidth} \centering \includegraphics[width=\textwidth]{figures/imagenet_mv.png}\\ ImageNet \end{minipage} \begin{minipage}{0.45\linewidth} \centering \includegraphics[width=\textwidth]{figures/imagenet_steps.png}\\ ImageNet \end{minipage} \vspace{-2mm} \caption{Ablation study. The left image shows the certified accuracy for different vote numbers at different radii $\epsilon \in \{0.0, 0.25, 0.5, 0.75\}$. Each line represents the certified accuracy of our method across vote numbers $K$ with Gaussian noise $\sigma=0.25$. The right image shows the certified accuracy with different fast sampling steps $b$. Each line shows the certified accuracy across different $L_2$ adversarial perturbation bounds.} \label{fig:mv-cifar} \end{figure*} \section{Conclusion} In this work, we theoretically prove that the diffusion model can purify adversarial examples back to the corresponding clean sample with high probability, as long as the data density of the corresponding clean sample is high enough.
Our theoretical analysis characterizes the conditional distribution of the reversed samples given the adversarial input, generated by the reverse process of the diffusion model. Using the highest density point in the conditional distribution as the deterministic reversed sample, we identify the robust region of a given instance under the diffusion model reverse process, which is potentially much larger than that of previous methods. Our analysis inspires us to propose an effective pipeline, \textsf{DensePure}~, for adversarial robustness. We conduct comprehensive experiments to show the effectiveness of \textsf{DensePure}~ by evaluating the certified robustness via the randomized smoothing algorithm. Note that \textsf{DensePure}~ is an off-the-shelf pipeline that does not require training a smoothed classifier. Our results show that \textsf{DensePure}~ achieves new state-of-the-art certified robustness against $\mathcal{L}_2$-norm bounded perturbations. We hope that our work sheds light on an in-depth understanding of the diffusion model for adversarial robustness. \textbf{Limitations.} The time complexity of \textsf{DensePure}~ is high since it requires repeating the reverse process multiple times. In this paper, we use fast sampling to reduce the time complexity and show that the setting ($b=2$ and $K=10$) can achieve non-trivial certified accuracy. We leave more advanced fast sampling strategies as a future direction. \section*{Ethics Statement} Our work can positively impact society by improving the robustness and security of AI systems. We do not involve human subjects or dataset releases; instead, we carefully follow the provided licenses of existing data and models for developing and evaluating our method. \section*{Reproducibility Statement} For the theoretical analysis, all necessary assumptions are listed in Appendix~\ref{appendassump} and the complete proofs are included in Appendix~\ref{app:proofs}. The experimental settings and datasets are described in Section~\ref{experiments}.
The pseudo-code for \textsf{DensePure}~ is in Appendix~\ref{appendpseudocode} and the fast sampling procedures are provided in Appendix~\ref{sec:fast}. \section{Experiments}\label{experiments} In this section, we use \textsf{DensePure}~ to evaluate certified robustness on two standard datasets, CIFAR-10~\citep{krizhevsky2009learning} and ImageNet~\citep{deng2009imagenet}. \textbf{Experimental settings} We follow the experimental setting from~\citet{carlini2022certified}. Specifically, for CIFAR-10, we use the 50-M unconditional improved diffusion model from \citet{nichol2021improved} as the diffusion model. We select the ViT-B/16 model~\citep{dosovitskiy2020image} pretrained on ImageNet-21k and finetuned on CIFAR-10 as the classifier, which achieves 97.9\% accuracy on CIFAR-10. For ImageNet, we use the unconditional 256$\times$256 guided diffusion model from \citet{dhariwal2021diffusion} as the diffusion model and the pretrained BEiT large model~\citep{bao2021beit} trained on ImageNet-21k as the classifier, which achieves 88.6\% top-1 accuracy on the validation set of ImageNet-1k. We select three different noise levels $\sigma \in \left\{ 0.25, 0.5, 1.0 \right\}$ for certification. For the parameters of \textsf{DensePure}~, we set $K = 40$ and $b = 10$ except for the ablation study. The details about the baselines are in the appendix.
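To make the certification setup concrete, the following is a minimal sketch (not the paper's exact implementation) of how the majority-vote classifier can be certified with randomized smoothing in the style of \citet{Cohen2019ICML}. Here `reverse_fn` and `classifier` are placeholder names standing in for the pre-trained diffusion model's stochastic reverse process and the off-the-shelf classifier; the Clopper-Pearson lower bound follows the standard CERTIFY procedure.

```python
import numpy as np
from scipy.stats import beta, norm

def vote(x, sigma, k, reverse_fn, classifier, rng):
    """Noise the input, purify it k times via the (stochastic) reverse
    process, and count the labels predicted by the base classifier."""
    counts = {}
    for _ in range(k):
        purified = reverse_fn(x + sigma * rng.standard_normal(x.shape))
        label = classifier(purified)
        counts[label] = counts.get(label, 0) + 1
    return counts

def certify(x, sigma, n0, n, alpha, reverse_fn, classifier, rng):
    """Cohen-et-al.-style certification of the majority-vote classifier:
    guess the top class with n0 samples, then lower-bound its probability
    with n samples via a Clopper-Pearson bound."""
    guess = max(vote(x, sigma, n0, reverse_fn, classifier, rng).items(),
                key=lambda kv: kv[1])[0]
    k = vote(x, sigma, n, reverse_fn, classifier, rng).get(guess, 0)
    # One-sided (1 - alpha) lower confidence bound on P(label = guess)
    p_lo = beta.ppf(alpha, k, n - k + 1) if k > 0 else 0.0
    if p_lo <= 0.5:
        return None, 0.0                       # abstain
    return guess, sigma * norm.ppf(p_lo)       # certified L2 radius
```

With an identity `reverse_fn` this degenerates to vanilla randomized smoothing; the gain in our setting comes from the reverse process concentrating purified samples in the high density region of the correct class.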
\begin{table}[t] \begin{center} \resizebox{\linewidth}{!}{% \begin{tabular}{lrrrrr|rrrrr} \toprule \multicolumn{1}{r}{} & &\multicolumn{9}{c}{Certified Accuracy at ${\epsilon}$(\%)} \\ \multicolumn{1}{r}{} & &\multicolumn{4}{c}{CIFAR-10} &\multicolumn{5}{c}{ImageNet}\\ Method & Off-the-shelf & 0.25 & 0.5 & 0.75 & \multicolumn{1}{r|}{1.0} &0.5 &1.0 &1.5 &2.0 &3.0 \\ \midrule PixelDP~\citep{lecuyer2019certified} & \xmark & $^{(71.0)}22.0$ & $^{(44.0)}2.0$ &- &- & $^{(33.0)}16.0$ & - &- &- &- \\ RS~\citep{Cohen2019ICML} & \xmark & $^{(75.0)}61.0$ &$^{(75.0)}43.0$ & $^{(65.0)}32.0$ & $^{(65.0)}23.0$ & $^{(67.0)}49.0$ &$^{(57.0)}37.0$ & $^{(57.0)}29.0$ & $^{(44.0)}19.0$ &$^{(44.0)}12.0$ \\ SmoothAdv~\citep{salman2019provably} & \xmark & $^{(82.0)}68.0$ & $^{(76.0)}54.0$ & $^{(68.0)}41.0$ &$^{(64.0)}32.0$ & $^{(63.0)}54.0$ & $^{(56.0)}42.0$ & $^{(56.0)}34.0$ & $^{(41.0)}26.0$ &$^{(41.0)}18.0$ \\ Consistency~\citep{jeong2020consistency} & \xmark & $^{(77.8)}68.8$ & $^{(75.8)}58.1$ & $^{(72.9)}48.5$ & $^{(52.3)}37.8$ & $^{(55.0)}50.0$ & $^{(55.0)}44.0$ & $^{(55.0)}34.0$ &$^{(41.0)}24.0$ &$^{(41.0)}17.0$ \\ MACER~\citep{zhai2020macer} & \xmark & $^{(81.0)}71.0$ &$^{(81.0)}59.0$ & $^{(66.0)}46.0$ & $^{(66.0)}38.0$ & $^{(68.0)}57.0$ &$^{(64.0)}43.0$ & $^{(64.0)}31.0$ & $^{(48.0)}25.0$ &$^{(48.0)}14.0$ \\ Boosting~\citep{horvath2021boosting} & \xmark & $^{(83.4)}70.6$ & $^{(76.8)}60.4$ & $^{(71.6)}\textbf{52.4}$ & $^{(73.0)}\textbf{38.8}$ & $^{(65.6)}57.0$ & $^{(57.0)}44.6$ & $^{(57.0)}38.4$ & $^{(44.6)}28.6$ & $^{(38.6)}21.2$ \\ SmoothMix~\citep{jeong2021smoothmix} & \xmark & $^{(77.1)}67.9$ & $^{(77.1)}57.9$ & $^{(74.2)}47.7$ & $^{(61.8)}37.2$ & $^{(55.0)}50.0$ & $^{(55.0)}43.0$ & $^{(55.0)}38.0$ & $^{(40.0)}26.0$ & $^{(40.0)}17.0$ \\ \midrule Denoised~\citep{salman2020denoised} & \cmark &$^{(72.0)}56.0$ &$^{(62.0)}41.0$ &$^{(62.0)}28.0$ &$^{(44.0)}19.0$ & $^{(60.0)}33.0$ &$^{(38.0)}14.0$ &$^{(38.0)}6.0$ &- &- \\ Lee~\citep{lee2021provable} & \cmark &60.0&
42.0 &28.0& 19.0 &41.0& 24.0 &11.0& - &- \\ Carlini~\citep{carlini2022certified} & \cmark &$^{(88.0)}73.8$ &$^{(88.0)}56.2$ &$^{(88.0)}41.6$ &$^{(74.2)}31.0$ &$^{(82.0)}74.0$ &$^{(77.2)}59.8$ &$^{(77.2)}47.0$ &$^{(64.6)}31.0$ & $^{(64.6)}19.0$ \\ \textbf{Ours} & \cmark &$^{(87.6)}$\textbf{76.6} &$^{(87.6)}$\textbf{64.6} &$^{(87.6)}{50.4}$ &$^{(73.6)}{37.4}$ &$^{(84.0)}$\textbf{77.8} &$^{(80.2)}$\textbf{67.0} &$^{(80.2)}$\textbf{54.6} &$^{(67.8)}$\textbf{42.2} & $^{(67.8)}$\textbf{25.8} \\ \bottomrule \end{tabular}} \end{center} \caption{Certified accuracy compared with existing works. The certified accuracy at $\epsilon=0$ for each model is in parentheses. The certified accuracy in each cell is taken from the respective paper, except for \citet{carlini2022certified}. Our diffusion model and classifier are the same as \citet{carlini2022certified}, where the off-the-shelf classifier uses ViT-based architectures trained on a large dataset (ImageNet-21k).} \vspace{-3mm} \label{tbl:cifar} \end{table} \begin{figure*}[t] \small \centering \begin{minipage}{0.45\linewidth} \centering \includegraphics[width=\textwidth]{figures/cifar10_500_comparation.png}\\ CIFAR-10 \end{minipage}\hfill \begin{minipage}{0.45\linewidth} \centering \includegraphics[width=\textwidth]{figures/imagenet_100_comparation.png}\\ ImageNet \end{minipage}\hfill \vspace{-2mm} \caption{Comparison of our method with \citet{carlini2022certified} on CIFAR-10 and ImageNet. The lines represent the certified accuracy at different $L_2$ perturbation bounds under different Gaussian noise $\sigma \in \{0.25, 0.50, 1.00\}$.} \vspace{-5mm} \label{fig:cifarimg} \end{figure*} \vspace{-1mm} \subsection{Main Results} We compare our results with other baselines in Table~\ref{tbl:cifar}.
For CIFAR-10, compared with the models that are {\em carefully} trained with randomized smoothing techniques in an end-to-end manner (i.e., without an off-the-shelf classifier), our method with a standard off-the-shelf classifier outperforms them at smaller $\epsilon \in \{0.25, 0.5\}$ and achieves comparable performance at larger $\epsilon \in \{0.75, 1.0\}$. Compared with the non-diffusion based methods with an off-the-shelf classifier (i.e., Denoised~\citep{salman2020denoised} and Lee~\citep{lee2021provable}), both our method and \citet{carlini2022certified} are significantly better. These results verify the non-trivial adversarial robustness improvement introduced by the diffusion model. For ImageNet, our method is consistently better than all prior methods by a large margin. Since both \citet{carlini2022certified} and \textsf{DensePure}~ use the diffusion model, to better understand the importance of our design, which approximates the label of the high density region in the conditional distribution, we compare \textsf{DensePure}~ with \citet{carlini2022certified} in a more fine-grained manner. We show the detailed certified robustness of the model for different $\sigma$ at different radii, for CIFAR-10 in Figure~\ref{fig:cifarimg}-left and for ImageNet in Figure~\ref{fig:cifarimg}-right. We also present our certified accuracy at different ${\epsilon}$ in Appendix~\ref{main}. From these results, we find that our method is consistently better at most $\epsilon$ (except $\epsilon=0$) across different $\sigma$. The performance margin between ours and \citet{carlini2022certified} becomes even larger at large $\epsilon$.
These results further indicate that although the diffusion model improves model robustness, leveraging the posterior data distribution conditioned on the input instance (as \textsf{DensePure}~ does) via the reverse process, instead of using a single sample~\citep{carlini2022certified}, is the key to better robustness. Additionally, we use off-the-shelf classifiers with ViT-based architectures trained on a larger dataset. In the later ablation study, we select the CNN-based architecture Wide-ResNet trained on the standard dataset from scratch; our method still achieves non-trivial robustness. \begin{figure*}[t] \small \centering \begin{minipage}{0.43\linewidth} \centering \includegraphics[width=\textwidth]{figures/imagenet_mv.png}\\ \end{minipage} \begin{minipage}{0.43\linewidth} \centering \includegraphics[width=\textwidth]{figures/imagenet_steps.png}\\ \end{minipage} \vspace{-2mm} \caption{Ablation study on ImageNet. The left image shows the certified accuracy for different vote numbers at different radii $\epsilon \in \{0.0, 0.25, 0.5, 0.75\}$. Each line represents the certified accuracy of our method across vote numbers $K$ with Gaussian noise $\sigma=0.25$. The right image shows the certified accuracy with different fast sampling steps $b$. Each line shows the certified accuracy across different $L_2$ adversarial perturbation bounds.} \vspace{-5mm} \label{fig:mv-image} \end{figure*} \subsection{Ablation study} \textbf{Voting samples ($K$)} We first show how $K$ affects the certified accuracy. For efficiency, we select $b=10$. We conduct experiments on both datasets. We show the certified accuracy at different radii $r$ with $\sigma=0.25$ in Figure~\ref{fig:mv-image}. The results for $\sigma=0.5, 1.0$ and CIFAR-10 are shown in Appendix~\ref{exp:vote}. Compared with the baseline~\citep{carlini2022certified}, we find that a larger majority vote number leads to better certified accuracy.
It verifies that \textsf{DensePure}~ indeed benefits adversarial robustness, and that a good approximation of the label of the high density region requires a large number of voting samples. We find that the certified accuracy almost converges at $K=40$; thus, we set $K=40$ for our experiments. The results with other $\sigma$ show a similar tendency. \textbf{Fast sampling steps ($b$)} To investigate the role of $b$, we conduct additional experiments with $b\in \{2,5\}$ at $\sigma=0.25$. The results on ImageNet are shown in Figure~\ref{fig:mv-image} and the results for $\sigma=0.5, 1.0$ and CIFAR-10 are shown in Appendix~\ref{exp:steps}. Observing the results {\em with} majority vote, we find that a larger $b$ leads to better certified accuracy, since a larger $b$ generates images of higher quality. Observing the results {\em without} majority vote, the conclusion is the opposite: a larger $b$ leads to lower certified accuracy, which contradicts our intuition. We conjecture that although more sampling steps normally lead to better image recovery quality, they also bring more randomness, increasing the probability that the reversed image falls into a data region with the wrong label. These results further verify that the majority vote is necessary for better performance. \textbf{Different architectures} One advantage of \textsf{DensePure}~ is that it uses an off-the-shelf classifier, so it can plug in any classifier. We choose convolutional neural network (CNN)-based architectures, Wide-ResNet28-10~\citep{zagoruyko2016wide} for CIFAR-10 with $95.1\%$ accuracy and Wide-ResNet50-2 for ImageNet with $81.5\%$ top-1 accuracy, at $\sigma=0.25$. The results are shown in Table~\ref{tbl:wrn_0.25} and Figure~\ref{fig:modelarch} in Appendix~\ref{exp:models}. Results for more model architectures and $\sigma$ on ImageNet are also shown in Appendix~\ref{exp:models}.
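The fast sampling discussed in the $b$ ablation above shortens the reverse process by running only $b$ denoising steps. As a minimal sketch of one plausible schedule (evenly spaced timesteps; this is an illustrative assumption, the exact procedure is given in \ref{sec:fast}):

```python
def fast_sampling_schedule(t_start, b):
    """Pick b (roughly) evenly spaced timesteps from t_start down toward 0,
    so the reverse process takes b denoising steps instead of t_start."""
    if b >= t_start:
        return list(range(t_start, 0, -1))
    stride = t_start / b
    # Deduplicate in case rounding collides, then run from high t to low t.
    return sorted({round(stride * i) for i in range(1, b + 1)}, reverse=True)
```

For example, `fast_sampling_schedule(100, 10)` yields the ten steps `[100, 90, ..., 10]`; a smaller $b$ trades recovery quality for speed, which is consistent with the observation above that the majority vote matters most at small $b$.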
We show that our method can enhance the certified robustness of any given classifier trained on the original data distribution. Noticeably, although the performance of the CNN-based classifier is lower than that of the Transformer-based classifier, \textsf{DensePure}~{} with a CNN-based model as the classifier can outperform \citet{carlini2022certified} with a ViT-based model as the classifier (except at $\epsilon=0$ for CIFAR-10). \begin{table}[t] \centering \resizebox{.9\linewidth}{!}{% \begin{tabular}{lllrrrrlrrrr} \toprule & & \multicolumn{10}{c}{Certified Accuracy at $\boldsymbol{{\epsilon}}(\%)$} \\ Datasets & Methods & Model & 0.0 & 0.25 & 0.5 & 0.75 & Model &0.0 &0.25 &0.5 &0.75 \\ \midrule CIFAR-10 &Carlini~\citep{carlini2022certified} &ViT-B/16 & \textbf{93.0} & 76.0 & 57.0 & \multicolumn{1}{r|}{47.0} &WRN28-10 & 86.0 & 66.0 & 55.0 & 37.0 \\ & \textbf{Ours} &ViT-B/16 & 92.0 & \textbf{82.0} & \textbf{69.0} & \multicolumn{1}{r|}{\textbf{56.0}} &WRN28-10 & \textbf{90.0} & \textbf{77.0} & \textbf{63.0} & \textbf{50.0} \\ \midrule ImageNet& Carlini~\citep{carlini2022certified} &BEiT & 77.0 & 76.0 & 71.0 & \multicolumn{1}{r|}{60.0} &WRN50-2 & 73.0 & 67.0 & 57.0 & 48.0 \\ & \textbf{Ours} &BEiT & \textbf{80.0} & \textbf{78.0} & \textbf{76.0} & \multicolumn{1}{r|}{\textbf{71.0}} &WRN50-2 & \textbf{81.0} & \textbf{72.0} & \textbf{66.0} & \textbf{61.0}\\ \bottomrule \end{tabular}} \caption{Certified accuracy of our method with different classifiers. BEiT and ViT are pre-trained on the larger dataset ImageNet-21k and fine-tuned on ImageNet-1k and CIFAR-10, respectively. Wide-ResNet is trained on ImageNet-1k for ImageNet and trained from scratch on CIFAR-10.} \vspace{-5mm} \label{tbl:wrn_0.25} \end{table}
\section{Introduction} Diffusion models have demonstrated powerful image generation ability recently \citep{Ho2020DDPM, Song2021ICLR} due to the iterative diffusion and denoising processes.
Such models have achieved state-of-the-art performance on sample quality \citep{dhariwal2021diffusion, vahdat2021score}, as well as effective mode coverage \citep{song2021maximum}. In particular, diffusion models usually consist of two processes: (i) a forward diffusion process that converts data to noise by gradually adding noise to the input, and (ii) a reverse generative process that starts from noise and generates data by denoising one step at a time~\citep{Song2021ICLR}. Given the natural denoising property of diffusion models, \emph{empirical} studies have shown that it is possible to leverage diffusion models to perform adversarial purification~\citep{nie2022diffusion,wu2022guided,carlini2022certified}. For instance, \citeauthor{nie2022diffusion} introduced a diffusion model based purification method, \textit{DiffPure}, which first adds Gaussian noise to the adversarial sample to obtain a diffused sample (via the diffusion process) and then solves a reverse stochastic differential equation (SDE) (via the reverse process) to remove the adversarial perturbation and recover a ``clean'' sample. They showed that by carefully choosing the amount of Gaussian noise added during the diffusion process, the adversarial perturbations can be removed while preserving the true label semantics. \citet{carlini2022certified} instantiated the randomized smoothing approach with a diffusion model to remove the added Gaussian noise, so as to provide certified robustness for a clean model. Despite the success of diffusion models in removing adversarial perturbations, the theoretical understanding is still lacking.
In this work, we aim to ask: \textit{What are the sufficient conditions under which a diffusion model is able to remove adversarial perturbation?} \textit{Is it possible to further improve the purification performance based on the fundamental properties of diffusion models?} In this work, we proved that with the diffusion model based adversarial purification pipelines~\cite{xxxx}, an adversarial example will always be recovered back to the corresponding clean sample with high probability, as long as the data density of the clean samples is high enough. \bo{do we want to use clean sample or sample with the ground-truth label here?} In particular, we analyze the properties of the highest density point in the conditional distribution generated by the reverse process of the diffusion model given an adversarial sample. In addition, we theoretically characterized the robust region for each true prototype,\bo{prototype? define it since never mentioned before} within which if the adversarial sample locates, the highest density point will always have the same label as the true prototype.\bo{this sentene is not clear, what's ``locates"?} This robustness region is actually the union of all convex sub-regions for prototypes with high density and the same label as the true prototype. We believe that this characterization has the potential to provide larger robustness radius for inputs, since it is able to connect different convex robust sub-regions together\zc{refrase: larger to what?}. In addition, the characterization also implies that the size of robustness regions is affected by the relative density and the distance of true prototype to prototypes with different labels. The influence of these two factors is controlled by the timestamp we choose in the reverse process. 
In practice, it is challenging to exactly sample the point of the highest density in practice as it needs an enormous amount of samplings to make a confident estimation of the densities for all points in the sample space. Thus, we propose to leverage the \textit{majority vote} on the classifier's prediction among different reversed samples to approximate the true prediction. We proved that reverse process of the diffusion model will generate samples in the surrounding neighbourhood of the true prototype where it is more likely that the prototypes with the same label as the true prototype have the highest probability and one such prototype takes the highest density. In this case, \textit{majority vote} will output the label of the highest density point. The theoretical analysis enables us to design a certifiably robust adversarial purification pipeline \textsf{DensePure}~ (Figure~\ref{pipeline}), which incorporates two steps: (i) we first solve the reverse stochastic differential equation (SDE) to obtain a posterior data distribution conditioned on the input adversarial example, and (ii) then we take the point with the highest density in the obtained distribution as the clean sample, approximated by taking the majority vote for the prediction on different reversed samples. We conduct extensive experiments on ImageNet and CIFAR-10 datasets under different settings. We show that \textsf{DensePure}~ achieves the new state-of-the-art \emph{certified} robustness on clean model without tuning any model parameters (off-the-shelf).\chaowei{give numbers later} \begin{figure}[h] \begin{center} \includegraphics[width=0.5\linewidth]{figures/} \end{center} \caption{Pipeline of \textsf{DensePure}~.}\label{pipeline} \label{robustfigure} \end{figure} \textbf{\underline{Technical Contributions.}} In this paper, we take the first step towards understanding the sufficient conditions of \textit{adversarial purification} with diffusion models. 
We make contributions on both theoretical and empirical fronts. \begin{itemize}[leftmargin=*] \item We prove that under constrained data density property, an adversarial example can be recovered back to the original clean sample with high probability via the reverse process of a diffusion model. \item Based on our analysis, we proposed \textsf{DensePure}~, which is a state-of-art adversarial purification pipeline directly leveraging the reverse process of a pre-trained diffusion model and a highest density point locating step. \item We characterized the robust region for each point under \textsf{DensePure}~. \item We demonstrated state-of-art performance of \textsf{DensePure}~ on xxxx \end{itemize} \section{Introduction} Diffusion models have demonstrated powerful image generation ability recently \citep{Ho2020DDPM, Song2021ICLR} due to the iterative diffusion and denoising processes. Such models so far have achieved state-of-art performance on sample quality \citep{dhariwal2021diffusion, vahdat2021score}, as well as effective mode coverage \citep{song2021maximum}. In particular, diffusion models usually consist of two processes: (i) a forward diffusion process that converts data to noise by gradually adding noise to the input, and (ii) a reverse generative process that starts from noise and generates data by denoising one step at a time~\citep{Song2021ICLR}. Given the natural denoising property of diffusion models, {empirical} studies have been conducted to show that it is possible to leverage diffusion models to perform adversarial purification~\citep{nie2022diffusion,wu2022guided,carlini2022certified}. 
For instance, \citeauthor{nie2022diffusion} introduced a diffusion model based purification model \textit{DiffPure}, which first added Gaussian noises to the adversarial sample to obtain a diffused sample (via diffusion process) and then solved a reverse stochastic differential equation (SDE) (via reverse process) to remove the adversarial perturbation to recover a ``clean" sample. They showed that by carefully choosing the amount of Gaussian noises added during the diffusion process, the adversarial perturbations can be removed while preserving the true label semantics. \citet{carlini2022certified} instantiated the randomized smoothing approach with the diffusion model to remove added Gaussian noise, so as to provide certified robustness for a clean model. Despite the success of diffusion models in removing adversarial perturbations, theoretical understanding is still lacking. Thus, natural questions emerge: \textit{What are the sufficient conditions under which a diffusion model is able to remove adversarial perturbation?} \textit{Is it possible to further improve the purification performance based on the fundamental properties of diffusion models?} In this work, we first prove that given a diffusion model, an adversarial example will always be recovered back to the sample with the ground-truth label with high probability, as long as the data density of the sample with the ground-truth label is high enough. Concretely, we show that the reverse process of the diffusion model generates a conditional distribution conditioned on a given adversarial sample. We then explicitly characterized the conditional density function whose formulation implies that if the ground-truth samples are of high density, the recovered instance will always be mapped to . For further theoretical analysis, we use the highest density point in the conditional distribution as the clean sample, who will be sent to the classifier to predict the ground-truth label. 
The clean sample will locate in the data manifold and thus the classification outcome will be much more reliable than directly classifying the adversarial sample. In addition to the robustness radius studied in previous works, we characterize the robustness region of any true prototype such that all adversarial samples will be classified correctly. We show the robustness region is a union of convex sets, each surrounding a true prototype, and this characterization (the connection of convex sets) has the potential to provide a larger robustness radius than other certification methods \zc{cite} only focusing on the neighborhood of one true prototype. Moreover, the characterization also implies that the size of robustness regions is affected by the relative density and the distance of true prototypes to other prototypes. The influence of these two factors is controlled by the timestamp we choose in the reverse process. Based on the above analysis, we design \textsf{DensePure}~ as shown in Figure~\ref{}, consisting of two steps: (i) we first use the reverse process of diffusion model to obtain a posterior data distribution conditioned on the input adversarial example; and (ii) we perform the reverse process multiple times to approximate the high density instances via majority vote. We conduct extensive experiments on ImageNet and CIFAR-10 datasets under different settings to evaluate the certifiable robustness of \textsf{DensePure}~. In particular, we follow the setting from \citet{carlini2022certified} and rely on randomized smoothing to certify robustness to adversarial perturbations bounded in the $\mathcal{L}_2$-norm. We show that \textsf{DensePure}~~ achieves the new state-of-the-art \emph{certified} robustness on the clean model without tuning any model parameters (off-the-shelf). 
\textsf{DensePure}~ obtains significantly better certified robustness results xxxx \begin{figure}[h] \begin{center} \includegraphics[width=0.5\linewidth]{figures/} \end{center} \caption{Pipeline of \textsf{DensePure}~.}\label{pipeline} \label{robustfigure} \end{figure} \textbf{\underline{Technical Contributions.}} In this paper, we take the first step towards understanding the sufficient conditions of \textit{adversarial purification} with diffusion models. We make contributions on both theoretical and empirical fronts. \begin{itemize}[leftmargin=*] \item We prove that under constrained data density property, an adversarial example can be recovered back to the original clean sample with high probability via the reverse process of a diffusion model. \item Based on our analysis, we proposed \textsf{DensePure}~, which is a state-of-art adversarial purification pipeline directly leveraging the reverse process of a pre-trained diffusion model and a highest density point locating step. \item We characterized the robust region for each point under \textsf{DensePure}~. \item We demonstrated state-of-art performance of \textsf{DensePure}~~ on xxxx \end{itemize} \section{Introduction} Diffusion models have demonstrated powerful image generation ability recently \citep{Ho2020DDPM, Song2021ICLR} due to the iterative diffusion and denoising processes. Such models so far have achieved state-of-the-art performance on sample quality \citep{dhariwal2021diffusion, vahdat2021score}, as well as effective mode coverage \citep{song2021maximum}. Diffusion models usually consist of two processes: (i) a forward diffusion process that converts data to noise by gradually adding noise to the input, and (ii) a reverse generative process that starts from noise and generates data by denoising one step at a time~\citep{Song2021ICLR}. 
Given the natural denoising property of diffusion models, \emph{empirical} studies have leveraged them to perform adversarial purification~\citep{nie2022diffusion,wu2022guided,carlini2022certified}. For instance, \citeauthor{nie2022diffusion} introduced a diffusion model based purification model \textit{DiffPure}. They showed that by carefully choosing the amount of Gaussian noises added during the diffusion process, the adversarial perturbations can be removed while preserving the true label semantics {\em empirically}. \citet{carlini2022certified} instantiated the randomized smoothing approach with the diffusion model to remove added Gaussian noise. They viewed the diffusion model as a {\em blackbox} and showed that using a diffusion model achieves nontrivial certified robustness, which offers a {\em provable guarantee} that a clean model's prediction is robust to $L_2$-norm bounded adversarial example. Despite the success of diffusion models in removing adversarial perturbations, theoretical understanding is still lacking. Thus, natural questions emerge: \textit{What are the sufficient conditions under which a diffusion model is able to remove adversarial perturbation?} \textit{Is it possible to further improve the purification performance (e.g., certified robustness) based on the fundamental properties of diffusion models?} In this work, we prove that given a diffusion model, an adversarial example will be recovered back to the sample with the ground-truth label with high probability, as long as the \textit{data} density of the samples with the ground-truth label is high enough. Concretely, when we directly feed an adversarial example into the reverse process of a diffusion model, the reverse process generates a distribution over the training data manifold conditioned on the given adversarial sample. 
We explicitly characterize the obtained conditional density function, and show that as long as the data manifold with the ground-truth label has a higher conditional density than the manifolds with other labels, the reversed sample will have a high probability to be mapped to the manifold with ground-truth prediction. For further theoretical analysis, we take the highest density point in the conditional distribution as the reversed sample for the classifier prediction. We identify that the robust region for a given instance under the diffusion model reverse process is the union of multiple convex sets, each surrounding a manifold with the ground-truth label. Compared with the robustness region of previous work~\cite{Cohen2019ICML}, which only focuses on the neighborhood of {\em one} manifold with the ground-truth label, such union of multiple convex sets has the potential to provide a much larger robust region. Moreover, the characterization implies that the size of robust regions is affected by the relative density and the distance between data manifolds with the ground-truth label and those with other labels. In practice, inspired by our theoretical analysis, we propose a pipeline \textsf{DensePure}, which incorporates two steps: (i) using the reverse process of the diffusion model to obtain a sample of the posterior data distribution conditioned on the adversarial input; and (ii) repeating the reverse process multiple times to approximate the label of high density manifold via majority vote. In particular, given an adversarial input, we repeatedly feed it into the reverse process of the diffusion model to get multiple reversed examples and feed them into the classifier to get the final prediction. We then apply the \textit{majority vote} on such predictions to get the final predicted label. We conduct extensive experiments on ImageNet and CIFAR-10 datasets under different settings to evaluate the certifiable robustness of \textsf{DensePure}. 
In particular, we follow the setting from \citet{carlini2022certified} and rely on randomized smoothing to certify robustness to adversarial perturbations bounded in the $\mathcal{L}_2$-norm. We show that \textsf{DensePure}~ achieves the new state-of-the-art \emph{certified} robustness on the clean model without tuning any model parameters (off-the-shelf). On ImageNet, it achieves a consistently higher certified accuracy than the existing methods among every $\sigma$ at every radius $\epsilon$ , 7\% improvement on average. \begin{figure}[h] \begin{center} \includegraphics[width=0.75\linewidth]{figures/densepure_flowchart.png} \end{center} \caption{Pipeline of \textsf{DensePure}.}\label{pipeline} \label{robustfigure} \end{figure} \textbf{\underline{Technical Contributions.}} In this paper, we take the first step towards understanding the sufficient conditions of \textit{adversarial purification} with diffusion models. We make contributions on both theoretical and empirical fronts: (1) We prove that under constrained data density property, an adversarial example can be recovered back to the original clean sample with high probability via the reverse process of a diffusion model. (2) In theory, we characterized the robust region for each point by further taking the highest density point in the conditional distribution generated by the reverse process as the reversed sample. (3) In practice, we proposed \textsf{DensePure}, which is a state-of-art adversarial purification pipeline directly leveraging the reverse process of a pre-trained diffusion model and label \textit{majority vote}. (4) We demonstrated state-of-art performance of \textsf{DensePure}~ on ImageNet and CIFAR-10. \section{Introduction} Diffusion models have been shown to be a powerful image generation tool \citep{Ho2020DDPM, Song2021ICLR} owing to their iterative diffusion and denoising processes. 
These models have achieved state-of-the-art performance on sample quality \citep{dhariwal2021diffusion, vahdat2021score} as well as effective mode coverage \citep{song2021maximum}. A diffusion model usually consists of two processes: (i) a forward diffusion process that converts data to noise by gradually adding noise to the input, and (ii) a reverse generative process that starts from noise and generates data by denoising one step at a time~\citep{Song2021ICLR}. Given the natural denoising property of diffusion models, \emph{empirical} studies have leveraged them to perform adversarial purification~\citep{nie2022diffusion,wu2022guided,carlini2022certified}. For instance, \citeauthor{nie2022diffusion} introduce a diffusion model based purification model \textit{DiffPure}. They {\em empirically} show that by carefully choosing the amount of Gaussian noises added during the diffusion process, adversarial perturbations can be removed while preserving the true label semantics. Despite the significant empirical results, there is no provable guarantee on the robustness. \citet{carlini2022certified} instantiate the randomized smoothing approach with the diffusion model to offer a {\em provable guarantee} of model robustness against $L_2$-norm bounded adversarial example. However, they do not provide a theoretical understanding of why and how the diffusion models contribute to such nontrivial certified robustness. \textbf{Our Approach.} We theoretically analyze the fundamental properties of diffusion models to understand why and how it enhances certified robustness, which then allows us to propose \textsf{DensePure}, which improves the certified robustness of a given model by more effectively using the diffusion model. 
We theoretically prove that given a diffusion model, an adversarial example can be corrected back to the sample with the ground-truth label with high probability, as long as the \textit{data} density of the samples with the ground-truth label is sufficiently high. Concretely, when we directly feed an adversarial example into the reverse process of a diffusion model, the reverse process generates a distribution over the training data region conditioned on the given adversarial sample. We explicitly characterize the obtained conditional density function, and show that as long as the data region with the ground-truth label has a higher data density than the regions with other labels, the reversed sample will have a high probability to be mapped to the region with ground-truth prediction. We then use the highest density point in the conditional distribution as the denoised sample for the classifier prediction. We show that the robust region for a given sample under the diffusion model's reverse process is the union of multiple convex sets, each surrounding a region around the ground-truth label. Compared with the robust region identified in previous work~\citep{Cohen2019ICML}, which only focuses on the neighborhood of {\em one} region with the ground-truth label, such union of multiple convex sets can provide a much larger robust region. Moreover, the characterization implies that the size of robust regions is affected by the relative density and the distance between data regions with the ground-truth label and those with other labels. 
In practice, \textsf{DensePure}~ is designed to approximate the label of the high density region in the conditional distribution by incorporating two steps: (i) using the reverse process of the diffusion model to obtain a sample of the posterior data distribution conditioned on the adversarial input; and (ii) repeating the reverse process multiple times to approximate the label of high density region in the conditional distribution via a majority vote. In particular, given an adversarial input, we repeatedly feed it into the reverse process of the diffusion model to get multiple reversed examples and feed them into the classifier to get the final prediction. We then apply the \textit{majority vote} on such predictions to get the final predicted label. We conduct extensive experiments on ImageNet and CIFAR-10 datasets under different settings to evaluate the certifiable robustness of \textsf{DensePure}. In particular, we follow the setting from \citet{carlini2022certified} and rely on randomized smoothing to certify robustness to adversarial perturbations bounded in the $\mathcal{L}_2$-norm. We show that \textsf{DensePure}~ achieves the new state-of-the-art \emph{certified} robustness on the clean model without tuning any model parameters (off-the-shelf). On ImageNet, it achieves a consistently higher certified accuracy than the existing methods among every $\sigma$ at every radius $\epsilon$ , 7\% improvement on average. \begin{figure}[h] \begin{center} \includegraphics[width=0.75\linewidth]{figures/densepure_flowchart.png} \end{center} \caption{Pipeline of \textsf{DensePure}.}\label{pipeline} \label{robustfigure} \end{figure} \textbf{\underline{Technical Contributions.}} In this paper, we take the first step towards understanding the sufficient conditions of \textit{adversarial purification} with diffusion models. 
\section{Introduction} Diffusion models have been shown to be a powerful image generation tool \citep{Ho2020DDPM, Song2021ICLR} owing to their iterative diffusion and denoising processes. These models have achieved state-of-the-art performance on sample quality \citep{dhariwal2021diffusion, vahdat2021score} as well as effective mode coverage \citep{song2021maximum}. A diffusion model usually consists of two processes: (i) a forward diffusion process that converts data to noise by gradually adding noise to the input, and (ii) a reverse generative process that starts from noise and generates data by denoising one step at a time~\citep{Song2021ICLR}. Given the natural denoising property of diffusion models, \emph{empirical} studies have leveraged them to perform adversarial purification~\citep{nie2022diffusion,wu2022guided,carlini2022certified}. For instance, \citet{nie2022diffusion} introduce a diffusion model-based purification method, \textit{DiffPure}. They {\em empirically} show that by carefully choosing the amount of Gaussian noise added during the diffusion process, adversarial perturbations can be removed while preserving the true label semantics.
Despite the significant empirical results, there is no provable guarantee of the achieved robustness. \citet{carlini2022certified} instantiate the randomized smoothing approach with the diffusion model to offer a {\em provable guarantee} of model robustness against $L_2$-norm bounded adversarial examples. However, they do not provide a theoretical understanding of why and how diffusion models contribute to such nontrivial certified robustness. \textbf{Our Approach.} We theoretically analyze the fundamental properties of diffusion models to understand why and how they enhance certified robustness. This deeper understanding allows us to propose a new method, \textsf{DensePure}, to improve the certified robustness of any given classifier by more effectively using the diffusion model. An illustration of the \textsf{DensePure}~ framework is provided in Figure~\ref{pipeline}; it consists of a pretrained diffusion model and a pretrained classifier. \textsf{DensePure}~ incorporates two steps: (i) using the reverse process of the diffusion model to obtain a sample of the posterior data distribution conditioned on the adversarial input; and (ii) repeating the reverse process multiple times with different random seeds to approximate the label of the high-density region in the conditional distribution via a majority vote. In particular, given an adversarial input, we repeatedly feed it into the reverse process of the diffusion model to get multiple reversed examples and feed them into the classifier to get their labels. We then apply a \textit{majority vote} on the set of labels to get the final predicted label. \textsf{DensePure}~ is inspired by our theoretical analysis, where we show that the diffusion model reverse process provides a conditional distribution of the reversed sample given an adversarial input, and that sampling from this conditional distribution enhances the certified robustness.
Specifically, we prove that high data density of the clean samples is a sufficient condition for the conditional density of the reversed samples to also be high. Therefore, in \textsf{DensePure}, samples from the conditional distribution recover the ground-truth labels with high probability. For ease of understanding and rigorous analysis, we use the highest density point in the conditional distribution as the deterministic reversed sample for the classifier prediction. We show that the robust region for a given sample under the diffusion model's reverse process is the union of multiple convex sets, each surrounding a data region with the ground-truth label. Compared with the robust region of previous work~\citep{Cohen2019ICML}, which only focuses on the neighborhood of {\em one} region with the ground-truth label, such a union of multiple convex sets has the potential to provide a much larger robust region. Moreover, the characterization implies that the size of the robust regions is affected by the relative density and the distance between data regions with the ground-truth label and those with other labels. We conduct extensive experiments on ImageNet and CIFAR-10 datasets under different settings to evaluate the certifiable robustness of \textsf{DensePure}. In particular, we follow the setting from \citet{carlini2022certified} and rely on randomized smoothing to certify robustness to adversarial perturbations bounded in the $\mathcal{L}_2$-norm. We show that \textsf{DensePure}~ achieves new state-of-the-art \emph{certified} robustness on the clean model without tuning any model parameters (off-the-shelf). On ImageNet, it achieves consistently higher certified accuracy than existing methods for every $\sigma$ at every radius $\epsilon$, a 7\% improvement on average.
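The claim that the robust region is a union of multiple convex sets can be illustrated with a one-dimensional toy sketch (hypothetical prototypes, densities, and noise scale chosen for illustration only; not the paper's actual data or model): classify a perturbed input by the prototype with the highest posterior density, where two prototypes share the ground-truth label.

```python
import numpy as np

# Hypothetical 1-D setup: three point prototypes with prior densities and labels.
protos = np.array([-2.0, 0.0, 2.0])
priors = np.array([0.4, 0.2, 0.4])   # assumed data density at each prototype
labels = np.array(["A", "B", "A"])   # ground-truth label is "A"
sigma = 1.0                           # assumed noise scale of the conditional distribution

def predict(x):
    # Posterior score of each prototype given input x (up to a constant):
    # prior density times the Gaussian likelihood N(x; proto, sigma^2).
    scores = priors * np.exp(-(x - protos) ** 2 / (2 * sigma ** 2))
    return labels[np.argmax(scores)]

# Scan a grid and mark where the prediction matches the ground-truth label "A".
grid = np.linspace(-4, 4, 2001)
correct = np.array([predict(x) == "A" for x in grid])

# Count connected components of the correct region (in 1-D: maximal intervals).
components = int(np.sum(np.diff(correct.astype(int)) == 1) + correct[0])
print(components)  # 2: the robust region is two disjoint intervals (convex sets)
```

In this toy setting the correctly-classified region splits into two disjoint intervals around $x=-2$ and $x=2$, i.e., a union of convex sets rather than a single ball around one clean sample.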
\vspace{-5mm} \begin{figure}[h] \begin{center} \includegraphics[width=0.75\linewidth]{figures/densepure_flowchart.png} \end{center} \vspace{-0.12in} \caption{Pipeline of \textsf{DensePure}.}\label{pipeline} \vspace{-5mm} \label{robustfigure} \end{figure} \textbf{\underline{Technical Contributions.}} In this paper, we take the first step towards understanding the sufficient conditions for \textit{adversarial purification} with diffusion models. We make contributions on both theoretical and empirical fronts: (1) We prove that under a constrained data density property, an adversarial example can be recovered back to the original clean sample with high probability via the reverse process of a diffusion model. (2) In theory, we characterize the robust region for each point by further taking the highest density point in the conditional distribution generated by the reverse process as the reversed sample. (3) In practice, we propose \textsf{DensePure}, a state-of-the-art adversarial purification pipeline that directly leverages the reverse process of a pre-trained diffusion model and a label \textit{majority vote}. (4) We demonstrate comparable performance of \textsf{DensePure}~ on CIFAR-10 and state-of-the-art performance on ImageNet. \section{\textsf{DensePure}} Inspired by the theoretical analysis, we introduce \textsf{DensePure}~ and show how to calculate its certified robustness radius via the randomized smoothing algorithm. \textbf{Framework.} Our framework, \textsf{DensePure}, consists of two components: (1) an off-the-shelf diffusion model with reverse process $\mathbf{rev}$ and (2) an off-the-shelf base classifier $f$. The pipeline of \textsf{DensePure}~ is shown in Figure~\ref{pipeline}.
Given an input ${\bm{x}}$, we feed it into the reverse process $\mathbf{rev}$ of the diffusion model to get the reversed sample $\mathbf{rev}({\bm{x}})$ and then repeat the above process $K$ times to get $K$ reversed samples $\{\mathbf{rev}({\bm{x}})_1, \cdots, \mathbf{rev}({\bm{x}})_{K}\}$. We feed these $K$ reversed samples into the classifier to get the corresponding predictions $\{f(\mathbf{rev}({\bm{x}})_1), \cdots, f(\mathbf{rev}({\bm{x}})_{K})\}$ and then apply the \textit{majority vote}, termed $\textbf{MV}$, on these predictions to get the final predicted label $\hat{y} = \textbf{MV}(\{f(\mathbf{rev}({\bm{x}})_1), \cdots, f(\mathbf{rev}({\bm{x}})_{K})\}) = \argmax_c \sum_{i=1}^{K} \pmb{1} \{f(\mathbf{rev}({\bm{x}})_i) = c\}$. \textbf{Certified Robustness of \textsf{DensePure}~ with Randomized Smoothing.} Here we describe the algorithm for computing the certified robustness of \textsf{DensePure}~ via randomized smoothing (RS), which offers robustness guarantees for a model within an $L_2$-norm ball. In particular, we follow a setting similar to that of \citet{carlini2022certified}, which uses a DDPM-based diffusion model. The overall algorithm contains three steps: (1) Our framework estimates $n$, the number of steps used for the reverse process of the DDPM-based diffusion model. Since randomized smoothing~\citep{Cohen2019ICML} adds Gaussian noise $\boldsymbol{\epsilon}$, where $\dst \boldsymbol{\epsilon} \sim \mathcal{N}(\boldsymbol{0}, \sigma^2 {\bm{I}})$, to the data input ${\bm{x}}$ to get the randomized data input ${\bm{x}}_\text{rs} = {\bm{x}}+\boldsymbol{\epsilon}$, we map between the noise required by the randomized example ${\bm{x}}_\text{rs}$ and the noise required by the diffused data ${\bm{x}}_n$ (i.e., $\dst {\bm{x}}_n \sim \mathcal{N}({\bm{x}}_n; \sqrt{\overline{\alpha}_n} {\bm{x}}_0, (1-\overline{\alpha}_n) {\bm{I}})$) under an $n$-step diffusion process, so that $\overline{\alpha}_n = \frac{1}{1+\sigma^2}$.
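The mapping from the smoothing noise level $\sigma$ to the diffusion timestep $n$ can be sketched as follows (a minimal sketch assuming a standard linear DDPM beta schedule; the schedule values are illustrative, not necessarily the paper's exact configuration):

```python
import numpy as np

# Illustrative linear beta schedule (DDPM-style) with N = 1000 steps.
N = 1000
betas = np.linspace(1e-4, 0.02, N)
alpha_bar = np.cumprod(1.0 - betas)  # \bar{alpha}_s for s = 1..N

def sigma_to_timestep(sigma):
    # n = argmin_s | alpha_bar_s - 1/(1 + sigma^2) |, matching the smoothing
    # noise x + eps to the distribution of the diffused sample x_n.
    target = 1.0 / (1.0 + sigma ** 2)
    return int(np.argmin(np.abs(alpha_bar - target))) + 1  # 1-indexed step

n = sigma_to_timestep(0.5)
# The matched alpha_bar_n should be close to the target 1/(1 + 0.25) = 0.8.
print(n, alpha_bar[n - 1])
```

The same lookup works for any monotone schedule, since $\overline{\alpha}_s$ decreases in $s$ and the target $1/(1+\sigma^2)$ decreases in $\sigma$.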
In this way, we can compute the corresponding timestep $n$, where $ n = \argmin_{s} \{ |\overline{\alpha}_s - \frac{1}{1+\sigma^2} | \ |~ s\in \{1, 2, \cdots, N\} \}$. (2) Given the calculated timestep $n$, we scale ${\bm{x}}_{rs}$ by $\sqrt{\overline{\alpha}_n}$ to obtain the scaled randomized smoothing sample $\sqrt{\overline{\alpha}_n} {\bm{x}}_{rs}$. We then feed $\sqrt{\overline{\alpha}_n} {\bm{x}}_{rs}$ into the reverse process of the diffusion model $K$ times to get the reversed sample set $\{ \hat{{\bm{x}}}_{0}^1 , \hat{{\bm{x}}}_{0}^2, \cdots, \hat{{\bm{x}}}_{0}^i,\cdots ,\hat{{\bm{x}}}_{0}^K\}$. (3) We feed the obtained reversed sample set into a standard \emph{off-the-shelf} classifier $f$ to get the corresponding predicted labels $\{ f(\hat{{\bm{x}}}_{0}^1), f(\hat{{\bm{x}}}_{0}^2), \dots, f(\hat{{\bm{x}}}_{0}^i), \dots ,f(\hat{{\bm{x}}}_{0}^K)\}$, and apply the \textit{majority vote}, denoted $\textbf{MV}(\cdots)$, on these predicted labels to get the final label for ${\bm{x}}_{rs}$. \textbf{Fast Sampling.} To calculate the reversed sample, the standard reverse process of DDPM-based models requires repeatedly applying a ``single-step'' operation $n$ times to get the reversed sample $\hat{{\bm{x}}}_{0}$ (i.e., $\hat{{\bm{x}}}_{0} = \underbrace{ \textbf{Reverse}( \cdots \textbf{Reverse}(\cdots \textbf{Reverse}( \textbf{Reverse}(\sqrt{\overline{\alpha}_n} {\bm{x}}_{rs}; n); n-1); \cdots; i);\cdots 1) }_{n \text{ steps}} $).
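The $K$-sample majority-vote step can be sketched as below (a hedged sketch: `reverse` and `classifier` are stand-in stubs for the stochastic diffusion reverse process and the off-the-shelf classifier, not the actual models):

```python
import random
from collections import Counter

def majority_vote(x_rs, reverse, classifier, K=10, seed=0):
    """Approximate the label of the highest-density region:
    run the stochastic reverse process K times and take a majority vote."""
    random.seed(seed)
    preds = [classifier(reverse(x_rs)) for _ in range(K)]
    # argmax_c sum_i 1{ f(rev(x)_i) = c }
    return Counter(preds).most_common(1)[0][0]

# Stub reverse process: adds small random noise (stand-in for the SDE solve).
reverse = lambda x: x + random.gauss(0.0, 0.1)
# Stub classifier: thresholds the 1-D "sample" into two labels.
classifier = lambda x: "cat" if x < 0.5 else "dog"

print(majority_vote(0.2, reverse, classifier, K=15))
```

Because each reversal is stochastic, individual predictions can differ; the vote aggregates them toward the label of the region where most reversed samples land.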
Here $\hat{{\bm{x}}}_{i-1} = \textbf{Reverse}(\hat{{\bm{x}}}_i; i)$ is equivalent to sampling $\hat{{\bm{x}}}_{i-1}$ from $ \mathcal{N}(\hat{{\bm{x}}}_{i-1}; \boldsymbol{\mu}_{\boldsymbol{\theta}} (\hat{{\bm{x}}}_i, i), \boldsymbol{\Sigma}_{\boldsymbol{\theta}} (\hat{{\bm{x}}}_i, i))$, where $\dst \boldsymbol{\mu}_{\boldsymbol{\theta}}(\hat{{\bm{x}}}_i, i) = \frac{1}{\sqrt{1-\beta_i}} \left(\hat{{\bm{x}}}_i - \frac{\beta_i}{\sqrt{1-\overline{\alpha}_i}} \boldsymbol{\epsilon}_{\boldsymbol{\theta}}(\hat{{\bm{x}}}_i,i)\right)$ and $\boldsymbol{\Sigma}_{\boldsymbol{\theta}} := \exp(v\log\beta_i+(1-v) \log\widetilde{\beta}_i)$. Here $v$ is a parameter learned by DDPM and $\widetilde{\beta}_i=\frac{1-\overline{\alpha}_{i-1}}{1-\overline{\alpha}_i}\beta_i$. To reduce the time complexity, we use the uniform sub-sampling strategy from~\citet{nichol2021improved}: we uniformly sample a subsequence of size $b$ from the original $N$-step reverse process. Note that \citet{carlini2022certified} set $b=1$ for ``one-shot'' sampling; in this case, $\hat{{\bm{x}}}_0 = \frac{1}{\sqrt{\overline{\alpha}_n}}({\bm{x}}_n-\sqrt{1-\overline{\alpha}_n}\boldsymbol{\epsilon}_{\boldsymbol{\theta}}(\sqrt{\overline{\alpha}_n} {\bm{x}}_{rs},n))$ is deterministic, so the reverse process does not produce a posterior data distribution conditioned on the input. Instead, we tune the number of sub-sampled DDPM steps to be larger than one ($b>1$) to sample from a posterior data distribution conditioned on the input. The details of the fast sampling are given in Appendix~\ref{sec:fast}.
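The uniform sub-sampling strategy for the reverse process can be sketched as follows, assuming a DDPM-style schedule where $\overline{\alpha}_i$ is the cumulative product of $1-\beta_j$. The noise predictor \texttt{predict\_eps} and the schedule array are hypothetical stand-ins for the trained $\boldsymbol{\epsilon}_{\boldsymbol{\theta}}$ and the deployed schedule:

```python
import numpy as np

def subsample_schedule(n, b):
    """Uniformly pick b timesteps out of {1, ..., n}, always ending at 1."""
    steps = np.linspace(n, 1, num=b)
    return [int(round(s)) for s in steps]

def fast_reverse(x_n, n, b, alpha_bar, predict_eps,
                 rng=np.random.default_rng(0)):
    """b-step reverse process (b < n) over the sub-sampled schedule.

    alpha_bar[i] is the cumulative product of (1 - beta_j) up to step i;
    predict_eps(x, i) stands in for the learned noise predictor.
    """
    schedule = subsample_schedule(n, b)
    x = x_n
    for idx, i in enumerate(schedule):
        prev = schedule[idx + 1] if idx + 1 < len(schedule) else 0
        a_i = alpha_bar[i]
        a_prev = alpha_bar[prev] if prev > 0 else 1.0
        beta_i = 1.0 - a_i / a_prev          # effective beta for the coarse step
        eps = predict_eps(x, i)
        mean = (x - beta_i / np.sqrt(1.0 - a_i) * eps) / np.sqrt(1.0 - beta_i)
        if prev > 0:                         # add noise except at the last step
            tilde_beta = (1.0 - a_prev) / (1.0 - a_i) * beta_i
            x = mean + np.sqrt(tilde_beta) * rng.standard_normal(x.shape)
        else:
            x = mean
    return x
```

With $b=1$ the loop collapses to the deterministic one-shot estimate; $b>1$ yields stochastic samples from the conditional distribution, matching the discussion above.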
\section{Preliminaries and Backgrounds} \paragraph{Continuous-Time Diffusion Model.} The diffusion model has two components: the \textit{diffusion process} followed by the \textit{reverse process}. Given an input random variable ${\mathbf{x}}_0 \sim p$, the diffusion process adds isotropic Gaussian noises to the data so that the diffused random variable at time $t$ is $\dst {\mathbf{x}}_t = \sqrt{\alpha_t} ({\mathbf{x}}_0 + \boldsymbol{\epsilon}_t)$, s.t., $\dst \boldsymbol{\epsilon}_t \sim \mathcal{N}(\boldsymbol{0}, \sigma_t^2 {\bm{I}})$, and $\dst \sigma_t^2 = (1-\alpha_t)/\alpha_t$, and we denote $\dst {\mathbf{x}}_t \sim p_t$. The forward diffusion process can also be defined by the stochastic differential equation \begin{equation*} \tag{SDE}\label{SDE} \dst d {\bm{x}} = h({\bm{x}}, t) dt + g(t) d {\bm{w}}, \end{equation*} where $\dst {\bm{x}}_0 \sim p$, $\dst h: \mathbb{R}^d \times \mathbb{R} \mapsto \mathbb{R}^d$ is the drift coefficient, $\dst g: \mathbb{R} \mapsto \mathbb{R}$ is the diffusion coefficient, and $\dst {\bm{w}}(t) \in \mathbb{R}^d$ is the standard Wiener process.
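The forward diffusion above can be simulated in closed form. The sketch below assumes a linear $\gamma(t)$ schedule, whose endpoint values \texttt{gamma\_min}/\texttt{gamma\_max} are illustrative rather than taken from the paper:

```python
import numpy as np

def alpha_t(t, gamma_min=0.1, gamma_max=20.0):
    """alpha_t = exp(-int_0^t gamma(s) ds) for the linear schedule
    gamma(s) = gamma_min + s * (gamma_max - gamma_min)."""
    integral = gamma_min * t + 0.5 * (gamma_max - gamma_min) * t ** 2
    return np.exp(-integral)

def diffuse(x0, t, rng=np.random.default_rng(0)):
    """Sample x_t = sqrt(alpha_t) * x_0 + sqrt(1 - alpha_t) * eps (VP-SDE)."""
    a = alpha_t(t)
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(a) * x0 + np.sqrt(1.0 - a) * eps
```

As $t$ grows, $\alpha_t$ decays toward zero and the sample approaches pure Gaussian noise, which is the behavior the reverse process later inverts.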
Under mild conditions \ref{appendassump}, the reverse process exists and removes the added noise by solving the reverse-time SDE \citep{anderson1982reverse} \begin{equation}\tag{reverse-SDE}\label{reverseSDE} d \hat{{\bm{x}}} = [h(\hat{{\bm{x}}}, t) - g(t)^2 \nabla_{\hat{{\bm{x}}}} \log p_t(\hat{{\bm{x}}})] dt + g(t) d \overline{{\bm{w}}}, \end{equation} where $\dst dt$ is an infinitesimal reverse time step, and $\overline{{\bm{w}}}(t)$ is a reverse-time standard Wiener process. In our context, we use the conventions of the VP-SDE \citep{Song2021ICLR}, where $h({\bm{x}}, t) := -\frac{1}{2} \gamma(t){\bm{x}}$ and $g(t):= \sqrt{\gamma(t)}$ with $\gamma(t)$ positive and continuous over $[0,1]$, such that ${\bm{x}}(t) = \sqrt{{\alpha}_t} {\bm{x}}(0)+ \sqrt{1-{\alpha}_t} \boldsymbol{\epsilon}$, where ${\alpha}_t = e^{-\int_0^t \gamma(s)ds}$ and $\boldsymbol{\epsilon}\sim \mathcal{N}(\boldsymbol{0}, {\bm{I}})$. We use $\{{\mathbf{x}}_t\}_{t\in [0,1]}$ and $\{\hat {\mathbf{x}}_t\}_{t\in [0,1]}$ to denote the diffusion process and the reverse process generated by \ref{SDE} and \ref{reverseSDE} respectively, which follow the same distribution. \paragraph{Discrete-Time Diffusion Model (or DDPM \citep{Ho2020DDPM}). } DDPM constructs a discrete Markov chain $\dst \{{\mathbf{x}}_0, {\mathbf{x}}_1, \cdots, {\mathbf{x}}_i, \cdots, {\mathbf{x}}_N\}$ as the forward process for the training data ${\mathbf{x}}_0 \sim p$, such that $\dst \mathbb{P}({\mathbf{x}}_i | {\mathbf{x}}_{i-1}) = \mathcal{N}({\mathbf{x}}_i; \sqrt{1-\beta_i} {\mathbf{x}}_{i-1}, \beta_i {\bm{I}})$, where $\dst 0 < \beta_1 < \beta_2 < \cdots < \beta_N < 1$ are predefined noise scales such that ${\mathbf{x}}_N$ approximates Gaussian white noise.
Denoting $\dst \overline{\alpha}_i = \prod_{j=1}^{i} (1-\beta_j)$, we have $\dst \mathbb{P}({\mathbf{x}}_i | {\mathbf{x}}_0) = \mathcal{N}({\mathbf{x}}_i; \sqrt{\overline{\alpha}_i} {\mathbf{x}}_{0}, (1-\overline{\alpha}_i) {\bm{I}})$, i.e., $\dst {\mathbf{x}}_i({\mathbf{x}}_0, \boldsymbol{\epsilon}) = \sqrt{\overline{\alpha}_i} {\mathbf{x}}_{0} + \sqrt{1-\overline{\alpha}_i}\, \boldsymbol{\epsilon}, \boldsymbol{\epsilon} \sim \mathcal{N}(\boldsymbol{0},{\bm{I}})$. The reverse process of DDPM learns a reverse-direction variational Markov chain $\dst p_{\boldsymbol{\theta}} ({\mathbf{x}}_{i-1} | {\mathbf{x}}_i) = \mathcal{N}({\mathbf{x}}_{i-1}; \boldsymbol{\mu}_{\boldsymbol{\theta}} ({\mathbf{x}}_i, i), \Sigma_{\boldsymbol{\theta}} ({\mathbf{x}}_i, i))$. \citet{Ho2020DDPM} defines $\dst \boldsymbol{\epsilon}_{\boldsymbol{\theta}}$ as a function approximator that predicts $\boldsymbol{\epsilon}$ from ${\mathbf{x}}_i$ such that $\dst \boldsymbol{\mu}_{\boldsymbol{\theta}}({\mathbf{x}}_i, i) = \frac{1}{\sqrt{1-\beta_i}} \left( {\mathbf{x}}_i - \frac{\beta_i}{\sqrt{1-\overline{\alpha}_i}} \boldsymbol{\epsilon}_{\boldsymbol{\theta}}({\mathbf{x}}_i,i)\right)$. Then the reverse-time samples are generated by $\dst \hat{{\mathbf{x}}}_{i-1} = \frac{1}{\sqrt{1-\beta_i}}\left( \hat {\mathbf{x}}_i - \frac{\beta_i}{\sqrt{1-\overline{\alpha}_i}} \boldsymbol{\epsilon}_{\boldsymbol{\theta}^*}(\hat {\mathbf{x}}_i,i) \right) + \sqrt{\beta_i} \boldsymbol{\epsilon}, \boldsymbol{\epsilon} \sim \mathcal{N}(\boldsymbol{0},{\bm{I}})$, and the optimal parameters $\dst \boldsymbol{\theta}^*$ are obtained by solving $\boldsymbol{\theta}^* := \argmin_{\boldsymbol{\theta}} \mathbb{E}_{{\mathbf{x}}_0,\boldsymbol{\epsilon}}\left[ || \boldsymbol{\epsilon} - \boldsymbol{\epsilon}_{\boldsymbol{\theta}}(\sqrt{\overline{\alpha}_i} {\mathbf{x}}_{0} + \sqrt{1-\overline{\alpha}_i}\, \boldsymbol{\epsilon},i) ||_2^2 \right]$. \paragraph{Randomized Smoothing.} Randomized smoothing is used to certify the robustness of a given classifier against $L_2$-norm based perturbations.
It transforms the classifier $f$ into a smoothed version $ g({\bm{x}}) = \argmax_c \mathbb{P}_{\boldsymbol{\epsilon} \sim \mathcal{N}(\boldsymbol{0}, \sigma^2 {\bm{I}}) }(f({\bm{x}}+\boldsymbol{\epsilon}) = c)$, where $g$ is the smoothed classifier and $\sigma$ is a hyperparameter of $g$ that controls the trade-off between robustness and accuracy. \citet{Cohen2019ICML} shows that $g({\bm{x}})$ induces certifiable robustness for ${\bm{x}}$ under the $L_2$-norm with radius $R$, where $\dst R = \frac{\sigma}{2} \left( \Phi^{-1}(p_A) - \Phi^{-1}(p_B) \right)$; $p_A$ and $p_B$ are the probabilities of the most probable class and the ``runner-up'' class, respectively, and $\Phi^{-1}$ is the inverse of the standard Gaussian CDF. Both $p_A$ and $p_B$ can be estimated with arbitrarily high confidence via the Monte Carlo method \citep{Cohen2019ICML}. \section{Related Work} Using an off-the-shelf generative model to purify adversarial perturbations has become an important direction in adversarial defense. Previous works have developed various purification methods based on different generative models, such as GANs~\citep{samangouei2018defense}, autoregressive generative models~\citep{song2018pixeldefend}, and energy-based models~\citep{du2019implicit, grathwohl2020your, hill2021stochastic}. More recently, as diffusion models (or score-based models) achieve better generation quality than other generative models~\citep{Ho2020DDPM,dhariwal2021diffusion}, many works consider using diffusion models for adversarial purification~\citep{nie2022diffusion,wu2022guided,sun2022pointdp}. Although they have achieved good empirical results in defending against existing adversarial attacks~\citep{nie2022diffusion}, there is no provable guarantee on the robustness of such methods.
On the other hand, certified defenses provide guarantees of robustness~\citep{mirman2018differentiable,Cohen2019ICML,lecuyer2019certified,salman2020denoised,horvath2021boosting, zhang2018efficient,raghunathan2018certified, raghunathan2018semidefinite,salman2019convex,wang2021beta}. They provide a lower bound on model accuracy under constrained perturbations. Among them, approaches~\citep{lecuyer2019certified, Cohen2019ICML,salman2019provably,jeong2020consistency,zhai2020macer,horvath2021boosting,jeong2021smoothmix,salman2020denoised,lee2021provable,carlini2022certified} based on randomized smoothing~\citep{Cohen2019ICML} show great scalability and achieve promising performance on large networks and datasets. The work most similar to ours is \citet{carlini2022certified}, which uses diffusion models combined with standard classifiers for certified defense. They view the diffusion model as a black box, without a theoretical understanding of why and how the diffusion models contribute to such nontrivial certified robustness. \section{Theoretical Analysis} In this section, we theoretically analyze why and how the diffusion model can enhance the robustness of a given classifier. We analyze \ref{SDE} and \ref{reverseSDE} directly, as they generate the same stochastic processes $\{{\mathbf{x}}_t\}_{t\in [0,T]}$ and prior works establish approximations of \ref{reverseSDE}~\citep{Song2021ICLR,Ho2020DDPM}. We first show in Theorem \ref{distribution:reverse} that, given a diffusion model, solving \ref{reverseSDE} generates a conditional distribution based on the scaled adversarial sample, which has high density on data regions that have high \textit{data} density and are close to the adversarial sample. See detailed conditions in \ref{appendassump}.
\begin{theorem}\label{distribution:reverse} Under conditions \ref{appendassump}, solving \eqref{reverseSDE} starting from time $t$ and sample $\dst {\bm{x}}_{a,t}= \sqrt{\alpha_t} {\bm{x}}_a$ will generate a reversed random variable $\dst \hat{\mathbf{x}}_0 $ with density $ \dst \mathbb{P}\left(\hat{{\mathbf{x}}}_0 ={\bm{x}}| {\hat {{\mathbf{x}}}_t = {\bm{x}}_{a,t}}\right) \propto p({\bm{x}}) \cdot \frac{1}{\sqrt{\left(2\pi\sigma^2_t\right)^n}} \exp\left({\frac{-|| {\bm{x}} -{\bm{x}}_a||^2_2}{2\sigma^2_t}}\right)$, where $p$ is the data distribution, $\dst \sigma_t^2 = \frac{1-\alpha_t}{\alpha_t}$ is the variance of Gaussian noise added at time $\dst t$ in the diffusion process. \end{theorem} \vspace{-0.15in} \begin{proof} (sketch) Under conditions \ref{appendassump}, we know $\{{\mathbf{x}}_t\}_{t\in [0,1]}$ and $\{\hat {\mathbf{x}}_t\}_{t\in [0,1]}$ follow the same distribution, and then the rest proof follows Bayes' Rule. \end{proof} Please see the full proofs of this and the following theorems in Appendix \ref{app:proofs}. \begin{remark} Note that $\dst \mathbb{P}\left(\hat{{\mathbf{x}}}_0 ={\bm{x}}| {\hat {{\mathbf{x}}}_t = {\bm{x}}_{a,t}}\right)>0$ if and only if $p({\bm{x}})>0$, thus the generated reverse sample will be on the data region where we train classifiers. \end{remark} In Theorem \ref{distribution:reverse}, the conditional density $\dst \mathbb{P}\left(\hat{{\mathbf{x}}}_0 = {\bm{x}}| {\hat{{\mathbf{x}}}_t = {\bm{x}}_{a,t}}\right)$ is high only if both $\dst p({\bm{x}})$ and the Gaussian term have high values, i.e., $\dst {\bm{x}}$ has high \textit{data} density and is close to the adversarial sample ${\bm{x}}_a$. The latter condition is reasonable since adversarial perturbations are typically bounded due to budget constraints. Then, the above argument implies that a reversed sample will have the ground-truth label with a high probability if data region with the ground-truth label has high enough \textit{data} density. 
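As a brief sketch of the Bayes'-rule computation behind Theorem \ref{distribution:reverse}: since $\{{\mathbf{x}}_t\}$ and $\{\hat{{\mathbf{x}}}_t\}$ follow the same distribution and ${\mathbf{x}}_t \,|\, {\mathbf{x}}_0 = {\bm{x}} \sim \mathcal{N}(\sqrt{\alpha_t}{\bm{x}}, (1-\alpha_t){\bm{I}})$ in the diffusion process, we have
\begin{align*}
\mathbb{P}\left(\hat{{\mathbf{x}}}_0 = {\bm{x}} \,\middle|\, \hat{{\mathbf{x}}}_t = \sqrt{\alpha_t}{\bm{x}}_a\right)
&\propto p({\bm{x}}) \cdot \mathbb{P}\left({\mathbf{x}}_t = \sqrt{\alpha_t}{\bm{x}}_a \,\middle|\, {\mathbf{x}}_0 = {\bm{x}}\right) \\
&\propto p({\bm{x}}) \cdot \exp\left(-\frac{\| \sqrt{\alpha_t}{\bm{x}}_a - \sqrt{\alpha_t}{\bm{x}} \|_2^2}{2(1-\alpha_t)}\right)
= p({\bm{x}}) \cdot \exp\left(-\frac{\| {\bm{x}} - {\bm{x}}_a \|_2^2}{2\sigma_t^2}\right),
\end{align*}
where the last equality uses $\alpha_t/(1-\alpha_t) = 1/\sigma_t^2$, recovering the density in Theorem \ref{distribution:reverse} up to normalization.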
For the convenience of theoretical analysis and understanding, we take the point with the highest conditional density $\dst \mathbb{P}\left(\hat{{\mathbf{x}}}_0 = {\bm{x}}| {\hat{{\mathbf{x}}}_t = {\bm{x}}_{a,t}}\right)$ as the reversed sample, defined as $\mathcal{P}({\bm{x}}_{a};t):=\argmax_{{\bm{x}}} \mathbb{P}\left(\hat{{\mathbf{x}}}_0 = {\bm{x}}| {\hat{{\mathbf{x}}}_t = {\bm{x}}_{a,t}}\right)$. $\mathcal{P}({\bm{x}}_{a};t)$ is a representative of the high-density data region in the conditional distribution, and $\mathcal{P}(\cdot; t)$ is a deterministic purification model. In the following, we characterize the robust region for the data region with the ground-truth label under $\dst \mathcal{P}\left(\cdot~; t\right)$. The robust region and the robust radius for a general deterministic purification model given a classifier are defined below. \begin{definition}[Robust Region and Robust Radius]\label{def:robust} Given a classifier $\dst f$ and a point $\dst {\bm{x}}_0$, let $\mathcal{G}({\bm{x}}_0):=\{{\bm{x}}: f({\bm{x}})=f({\bm{x}}_0)\}$ be the data region where samples have the same label as ${\bm{x}}_0$. Then given a deterministic purification model $\dst \mathcal{P}(\cdot~; \psi)$ with parameter $\dst \psi$, we define the robust region of $\dst \mathcal{G}({\bm{x}}_0)$ under $\dst \mathcal{P}$ and $f$ as $ \dst \mathcal{D}_{\mathcal{P}}^{f}\left(\mathcal{G}({\bm{x}}_0); \psi\right):=\left\{{\bm{x}} : f\left(\mathcal{P}({\bm{x}}; \psi)\right) = f({\bm{x}}_0) \right\}$, i.e., the set of ${\bm{x}}$ such that the purified sample $\dst \mathcal{P}({\bm{x}};\psi)$ has the same label as $\dst {\bm{x}}_0$ under $f$.
Further, we define the robust radius of $\dst {\bm{x}}_0$ as $ \dst r_{\mathcal{P}}^{f}({\bm{x}}_0;\psi):= \max\left\{r: {\bm{x}}_0+ ru\in \dst \mathcal{D}_{\mathcal{P}}^{f}\left({\bm{x}}_0; \psi\right)~, ~ \forall ||u||_2 \le 1 \right\}$, i.e., the radius of the maximum inscribed ball of $\dst \mathcal{D}_{\mathcal{P}}^{f}\left({\bm{x}}_0; \psi\right)$ centered at $\dst {\bm{x}}_0$. We will omit $\dst \mathcal{P}$ and $\dst f$ when they are clear from the context and write $\dst \mathcal{D}\left(\mathcal{G}({\bm{x}}_0); \psi\right)$ and $\dst r({\bm{x}}_0;\psi)$ instead. \end{definition} \begin{remark} In Definition \ref{def:robust}, the robust region (resp. radius) is defined for each class (resp. point). When using the point with the highest $\dst \mathbb{P}\left(\hat{{\mathbf{x}}}_0 = {\bm{x}}| {\hat{{\mathbf{x}}}_t = {\bm{x}}_{a,t}}\right)$ as the reversed sample, $\psi:=t$. \end{remark} Now given a sample ${\bm{x}}_0$ with the ground-truth label, we are ready to characterize the robust region $\dst \mathcal{D}\left(\mathcal{G}({\bm{x}}_0); \psi\right)$ under the purification model $\mathcal{P}(\cdot ;t)$ and classifier $f$. Intuitively, if the adversarial sample $\dst {\bm{x}}_a$ is close to $\dst {\bm{x}}_0$ (in Euclidean distance), $\dst {\bm{x}}_a$ keeps the same label semantics as $\dst {\bm{x}}_0$, and so does the purified sample $\dst \mathcal{P}({\bm{x}}_a; t)$, which implies that $\dst f\left(\mathcal{P}({\bm{x}}_a; \psi)\right) = f({\bm{x}}_0)$. However, the condition that $\dst {\bm{x}}_a$ is close to $\dst {\bm{x}}_0$ is sufficient but not necessary, since we can still achieve $\dst f\left(\mathcal{P}({\bm{x}}_a; \psi)\right) = f({\bm{x}}_0)$ if $\dst {\bm{x}}_a$ is close to any sample $\dst \tilde{{\bm{x}}}_0$ with $\dst f(\tilde{{\bm{x}}}_0) = f({\bm{x}}_0)$.
In the following, we will show that the robust region $\dst \mathcal{D}\left(\mathcal{G}({\bm{x}}_0); \psi\right)$ is the union of the convex robust sub-regions surrounding every $\dst \tilde{{\bm{x}}}_0$ with the same label as $\dst {\bm{x}}_0$. The following theorem characterizes the convex robust sub-region and robust region respectively. \begin{theorem}\label{robustregion} Under conditions \ref{appendassump} and classifier $f$, let $\dst {\bm{x}}_0$ be the sample with ground-truth label and $\dst {\bm{x}}_a$ be the adversarial sample, then (i) the purified sample $\dst \mathcal{P}({\bm{x}}_a; t)$ will have the ground-truth label if $\dst {\bm{x}}_a $ falls into the following convex set, \begin{align*} \dst \mathcal{D}_{{\tiny\mbox{sub}}}\left({\bm{x}}_0;t\right):=\bigcap_{\left\{{\bm{x}}'_0:f({{\bm{x}}'_0})\neq f({\bm{x}}_0)\right\}} \left\{{\bm{x}}_a : ({{\bm{x}}}_a -{{\bm{x}}_0})^\top ({{\bm{x}}}'_0-{{\bm{x}}}_0) < \sigma_t^2 \log\left(\frac{p({{\bm{x}}}_0)}{p({{\bm{x}}}'_0)}\right)+\frac{||{\bm{x}}'_0 -{{\bm{x}}}_0||^2_2 }{2} \right\}, \end{align*} and further, (ii) the purified sample $\dst \mathcal{P}({\bm{x}}_a; t)$ will have the ground-truth label if and only if $\dst {\bm{x}}_a $ falls into the following set, $\dst \mathcal{D}\left(\mathcal{G}({{\bm{x}}}_0);t\right) := \bigcup_{\tilde{{{\bm{x}}}}_0: f\left(\tilde{{{\bm{x}}}}_0\right) = f\left({{\bm{x}}}_0\right)} \mathcal{D}_{{\tiny\mbox{sub}}}\left(\tilde{{{\bm{x}}}}_0;t\right)$. In other words, $\dst \mathcal{D}\left(\mathcal{G}({{\bm{x}}}_0);t\right)$ is the robust region for data region $\mathcal{G}({{\bm{x}}}_0)$ under $\dst \mathcal{P}(\cdot ; t)$ and $f$. \end{theorem} \begin{proof} (sketch) (i). 
Each convex half-space defined by the inequality corresponds to a ${\bm{x}}_0'$ such that $\dst f({\bm{x}}'_0)\neq f({\bm{x}}_0)$, where ${\bm{x}}_a$ within satisfies $\mathbb{P}\left(\hat{{\mathbf{x}}}_0 = {{\bm{x}}}_0| {\hat{{\mathbf{x}}}_t = {\bm{x}}_{a,t}}\right) > \mathbb{P}\left(\hat{{\mathbf{x}}}_0 = {\bm{x}}'_0\mid{\hat{{\mathbf{x}}}_t = \pmb{x}_{a,t}}\right)$. This implies that $\dst \mathcal{P}({\bm{x}}_a; t) \neq {\bm{x}}_0'$ and $\dst f\left(\mathcal{P}({\bm{x}}_a; \psi)\right) = f({\bm{x}}_0)$. The convexity follows from the fact that the intersection of convex sets is convex. (ii). The ``if" follows directly from (i). The ``only if" holds because if $\dst {\bm{x}}_a \notin \mathcal{D}\left(\mathcal{G}({{\bm{x}}}_0);t\right)$, then there exists $\dst \tilde{{{\bm{x}}}}_1$ such that $\dst f(\tilde{{{\bm{x}}}}_1) \neq f({{\bm{x}}}_0)$ and $\mathbb{P}\left(\hat{{\mathbf{x}}}_0 =\tilde{{\bm{x}}}_1| {\hat {{\mathbf{x}}}_t = {\bm{x}}_{a,t}}\right) > \mathbb{P}\left(\hat{{\mathbf{x}}}_0 =\tilde{{\bm{x}}}_0| {\hat {{\mathbf{x}}}_t = {\bm{x}}_{a,t}}\right), \forall \tilde{{\bm{x}}}_0$ s.t. $\dst f(\tilde{{{\bm{x}}}}_0) = f({{\bm{x}}}_0)$, and thus $\dst f\left(\mathcal{P}({\bm{x}}_a; \psi)\right) \neq f({\bm{x}}_0)$. \end{proof} \begin{remark} Theorem \ref{robustregion} implies that when the data region $\mathcal{G}({\bm{x}}_0)$ has higher \textit{data} density and larger distances to data regions with other labels, it tends to have a larger robust region, and points in the data region tend to have a larger robust radius. \end{remark} In the literature, people focus more on the robust radius (lower bound) $\dst r\left(\mathcal{G}({{\bm{x}}}_0);t\right)$ \citep{Cohen2019ICML, carlini2022certified}, which can be obtained by finding the maximum inscribed ball inside $\dst \mathcal{D}\left(\mathcal{G}({{\bm{x}}}_0);t\right)$ centered at $\dst {{\bm{x}}}_0$.
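The half-space inequality in Theorem \ref{robustregion}(i) is, for each ${\bm{x}}'_0$, equivalent to comparing the two conditional log-densities directly, and this can be checked numerically. The sketch below verifies this equivalence on a toy example; the points, densities, and $\sigma_t$ are hypothetical, chosen only for illustration.

```python
import math

def dot(u, v): return sum(a * b for a, b in zip(u, v))
def sub(u, v): return tuple(a - b for a, b in zip(u, v))

def in_D_sub(x_a, x0, wrong, p, sigma_t):
    """Half-space condition of the theorem, checked for every wrong-label x0':
    (x_a - x0)^T (x0' - x0) < sigma_t^2 log(p(x0)/p(x0')) + ||x0' - x0||^2 / 2."""
    for xw in wrong:
        d = sub(xw, x0)
        lhs = dot(sub(x_a, x0), d)
        rhs = sigma_t ** 2 * math.log(p[x0] / p[xw]) + dot(d, d) / 2.0
        if not lhs < rhs:
            return False
    return True

def log_posterior(x, x_a, p, sigma_t):
    """log of p(x) * Gaussian(x; x_a, sigma_t^2 I), up to a constant."""
    return math.log(p[x]) - dot(sub(x, x_a), sub(x, x_a)) / (2.0 * sigma_t ** 2)

# Hypothetical 2-D example: x0 carries the ground-truth label, xw another label.
p = {(0.0, 0.0): 0.6, (3.0, 0.0): 0.4}
x0, xw = (0.0, 0.0), (3.0, 0.0)
for x_a in [(0.5, 0.2), (1.4, -0.3), (2.8, 0.1)]:
    member = in_D_sub(x_a, x0, [xw], p, sigma_t=1.0)
    assert member == (log_posterior(x0, x_a, p, 1.0) > log_posterior(xw, x_a, p, 1.0))
```

On this example the half-space test and the direct density comparison agree for adversarial samples both inside and outside the sub-region, as the proof of (i) asserts.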
Note that although $\dst \mathcal{D}_{{\tiny\mbox{sub}}}\left({{\bm{x}}}_0;t\right)$ is convex, $\dst \mathcal{D}\left(\mathcal{G}({{\bm{x}}}_0);t\right)$ is generally not. Therefore, finding $\dst r\left(\mathcal{G}({{\bm{x}}}_0);t\right)$ is a non-convex optimization problem. In particular, it can be formulated as a disjunctive optimization problem with integer indicator variables, which is typically NP-hard to solve. One alternative could be finding the maximum inscribed ball in $\dst \mathcal{D}_{{\tiny\mbox{sub}}}\left({{\bm{x}}}_0;t\right)$, which can be formulated as a convex optimization problem whose optimal value provides a lower bound for $\dst r\left(\mathcal{G}({{\bm{x}}}_0);t\right)$. However, $\dst \mathcal{D}\left(\mathcal{G}({{\bm{x}}}_0);t\right)$ has the potential to provide a much larger robust radius because it might connect different convex robust sub-regions into one, as shown in Figure \ref{robustfigure}. \vspace{-0.05in} \begin{figure}[h] \begin{center} \includegraphics[width=0.55\linewidth]{figures/D_sub_figure.png} \end{center} \vspace{-0.1in} \caption{An illustration of the robust region $\mathcal{D}({\bm{x}}_0; t) = \bigcup_{i=1}^3 \mathcal{D}_{sub}({\bm{x}}_i; t)$, where ${\bm{x}}_0, {\bm{x}}_1, {\bm{x}}_2$ are samples with ground-truth label and ${\bm{x}}_3$ is a sample with another label. ${\bm{x}}_a = {\bm{x}}_0+\boldsymbol{\epsilon}_a$ is an adversarial sample such that $\mathcal{P}({\bm{x}}_a; t) = {\bm{x}}_1 \neq {\bm{x}}_0$ and thus the classification is correct but ${\bm{x}}_a$ is not reversed back to ${\bm{x}}_0$. $r_{sub}({\bm{x}}_0) < r({\bm{x}}_0)$ shows our claim that the union leads to a larger robust radius.} \vspace{-3mm} \label{robustfigure} \end{figure} \vspace{-0.2in} In practice, we cannot guarantee to establish the exact reverse process \ref{reverseSDE}, but instead try to establish an approximate reverse process that mimics the exact one.
As long as the approximate reverse process is close enough to the exact reverse process, they will generate close conditional distributions based on the adversarial sample. Then the density and locations of the data regions in the two conditional distributions will not differ much, and neither will the robust region for each data region. We take the score-based diffusion model in \cite{Song2021ICLR} as an example and prove Theorem \ref{closeconddist}, which bounds the KL-divergence between the conditional distributions generated by \ref{reverseSDE} and by the score-based diffusion model. \cite{Ho2020DDPM} showed that using variational inference to fit DDPM is equivalent to optimizing an objective resembling the score-based diffusion model with a specific weighting scheme, so the results can be extended to DDPM. \begin{theorem}\label{closeconddist} Under the score-based diffusion model \cite{Song2021ICLR} and conditions \ref{appendassump}, we have $\dst D_{\text{KL}}(\mathbb{P}(\hat {\mathbf{x}}_0 ={\bm{x}} \mid \hat {\mathbf{x}}_{t} = {\bm{x}}_{a,t}) \| \mathbb{P}({\mathbf{x}}^{\theta}_0 ={\bm{x}} \mid {\mathbf{x}}^{\theta}_{t} = {\bm{x}}_{a,t})) = \mathcal{J}_{\mathrm{SM}}(\theta, t ; \lambda(\cdot))$, where $\{\hat {\bm{x}}_\tau\}_{\tau\in [0,t]}$ and $\{{\bm{x}}^\theta_\tau\}_{\tau\in [0,t]}$ are stochastic processes generated by \ref{reverseSDE} and the score-based diffusion model respectively, $\dst \mathcal{J}_{\mathrm{SM}}(\theta, t ; \lambda(\cdot)):=\frac{1}{2} \int_0^{t} \mathbb{E}_{p_\tau(\mathbf{x})}\left[\lambda(\tau)\left\|\nabla_{\mathbf{x}} \log p_\tau(\mathbf{x})-\boldsymbol{s}_{\theta}(\mathbf{x}, \tau)\right\|_2^2\right] \mathrm{d} \tau,$ $\boldsymbol{s}_{\theta}(\mathbf{x}, \tau)$ is the score function to approximate $\nabla_{\mathbf{x}} \log p_\tau(\mathbf{x})$, and $\lambda: \mathbb{R}\rightarrow \mathbb{R}$ is any weighting scheme used in training score-based diffusion models.
\end{theorem} \vspace{-0.15in} \begin{proof}(sketch) Let $\dst \boldsymbol{\mu}_{t}$ and $\dst \boldsymbol{\nu}_{t}$ be the path measures for the reverse processes $\dst \{\hat {\mathbf{x}}_\tau\}_{\tau\in [0,t]}$ and $\dst \{{\mathbf{x}}^\theta_\tau\}_{\tau\in [0,t]}$ respectively, conditioned on ${\bm{x}}_{a, t}$. Under conditions \ref{appendassump}, $\dst \boldsymbol{\mu}_{t}$ and $\dst \boldsymbol{\nu}_{t}$ are uniquely defined and the KL-divergence can be computed via the Girsanov theorem \cite{oksendal2013stochastic}. \end{proof} \begin{remark} Theorem \ref{closeconddist} shows that if the training loss is smaller, the conditional distributions generated by \ref{reverseSDE} and the score-based diffusion model are closer, and they are the same if the training loss is zero. \end{remark} \subsection{Details about Fast Sampling} \label{sec:fast} Applying the single-step operation $n$ times is a time-consuming process. In order to reduce the time complexity, we follow the method used in ~\citep{nichol2021improved} and sample a subsequence $S^b$ with $b$ values (i.e., $S^b= \underbrace{ \{n, \floor{n-\frac{n}{b}}, \cdots, 1 \}}_{b}$ , where $S_j^b$ is the $j$-th element in $S^b$ and $S_j^b= \floor{n - \frac{jn}{b}}, \forall j < b \text{ and } S_b^b = 1$) from the original schedule $S$ (i.e., $S = \underbrace{ \{n, n-1, \cdots, 1\}}_{n}$, where $S_j= j $ is the $j$-th element in $S$). Within this context, we adapt the original $\overline{\alpha}$ schedule $\overline{\alpha}^S$ = $\{\overline{\alpha}_1, \cdots, \overline{\alpha}_i, \cdots, \overline{\alpha}_n\}$ used for single-step to the new schedule $\overline{\alpha}^{S^b}$ = $\{\overline{\alpha}_{S_1^b}, \cdots, \overline{\alpha}_{S_j^b}, \cdots, \overline{\alpha}_{S_b^b}\}$ (i.e., $\overline{\alpha}^{S^{b}}_j = \overline{\alpha}_{S_j^b} = \overline{\alpha}_{S_{\floor{n - \frac{jn}{b}} }}$ is the $j$-th element in $\overline{\alpha}^{S^b}$).
We calculate the corresponding $\beta^{S^b} = \{\beta^{S^b}_1, \beta^{S^b}_2, \cdots, \beta^{S^b}_j, \cdots,\beta^{S^b}_b \}$ and $\widetilde{\beta}^{S^b} = \{ \widetilde{\beta}^{S^b}_1, \widetilde{\beta}^{S^b}_2, \cdots, \widetilde{\beta}^{S^b}_j, \cdots, \widetilde{\beta}^{S^b}_b \}$ schedules, where $ \beta_{S^b_j}=\beta^{S^b}_j = 1 - \frac{\overline{\alpha}^{S^{b}}_j }{\overline{\alpha}^{S^{b}}_{j-1}}, \quad \widetilde{\beta}_{S^b_j}=\widetilde{\beta}^{S^{b}}_j = \frac{1-\overline{\alpha}^{S^{b}}_{j-1}}{1-\overline{\alpha}^{S^b}_{j}}\beta_{S^b_j}$. With these new schedules, we can use $b$ times reverse steps to calculate $\hat{{\bm{x}}}_{0} = \underbrace{\textbf{Reverse}( \cdots \textbf{Reverse}( \textbf{Reverse}({\bm{x}}_n; S^b_b); S^b_{b-1}); \cdots ; 1)}_{b}$. Since $\boldsymbol{\Sigma}_{\boldsymbol{\theta}} ({\bm{x}}_{S^b_{j}}, S^b_{j})$ is parameterized as a range between $\beta^{S^b}$ and $\widetilde{\beta}^{S^b}$, it will automatically be rescaled. Thus, $\hat{{\bm{x}}}_{S^b_{j-1}} = \textbf{Reverse}(\hat{{\bm{x}}}_{S^b_j}; S^b_j) $ is equivalent to sampling ${\bm{x}}_{S^b_{j-1}}$ from $\mathcal{N}({\bm{x}}_{S^b_{j-1}}; \boldsymbol{\mu}_{\boldsymbol{\theta}} ({\bm{x}}_{S^b_{j}}, S^b_{j}), \boldsymbol{\Sigma}_{\boldsymbol{\theta}} ({\bm{x}}_{S^b_{j}}, S^b_{j}))$.
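The respacing above can be sketched in code. The snippet below assumes a toy linear $\beta$ schedule and one consistent reading of the subsequence definition (descending timesteps with the last element forced to 1, then processed in increasing time order with $\overline{\alpha}_0:=1$); $n$, $b$, and the schedule values are illustrative.

```python
import math

def make_alpha_bar(n, beta_start=1e-4, beta_end=0.02):
    """Cumulative products alpha_bar_1..alpha_bar_n for a toy linear beta schedule."""
    betas = [beta_start + (beta_end - beta_start) * i / (n - 1) for i in range(n)]
    abar, out = 1.0, []
    for b in betas:
        abar *= 1.0 - b
        out.append(abar)          # out[t-1] = alpha_bar_t
    return out

def respace(n, b):
    """Subsequence S^b = {n, floor(n - n/b), ...} with the last element set to 1."""
    S = [max(1, math.floor(n - j * n / b)) for j in range(b)]
    S[-1] = 1
    return S

def respaced_betas(abar, S):
    """beta_j = 1 - abar_j / abar_{j-1} and the matching tilde-beta_j over the
    kept timesteps, taken in increasing time order (alpha_bar_0 := 1)."""
    beta, beta_tilde, prev = [], [], 1.0
    for s in sorted(S):
        a = abar[s - 1]
        bj = 1.0 - a / prev
        beta.append(bj)
        beta_tilde.append((1.0 - prev) / (1.0 - a) * bj)
        prev = a
    return beta, beta_tilde

abar = make_alpha_bar(n=1000)
S = respace(n=1000, b=10)          # 10 reverse steps instead of 1000
beta, beta_tilde = respaced_betas(abar, S)
```

Each $\beta^{S^b}_j$ lies in $(0,1)$ and $\widetilde{\beta}^{S^b}_j \le \beta^{S^b}_j$, so the parameterized $\boldsymbol{\Sigma}_{\boldsymbol{\theta}}$ range stays well defined under the coarser schedule.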
\section*{Appendix: An analytic estimation of pad response and spatial resolution} One way to estimate the spatial resolution of a TPC is to write a realistic Monte-Carlo simulation code. This technique is applicable to any situation, and has been developed by several groups. On the other hand, an analytic approach is applicable only to a restricted case where incident particles are normal to the pad row. However, the resultant formula is rather simple and is sometimes enlightening as shown below. Though a numerical calculation is needed to evaluate the formula, the demanded CPU time is much less than for a Monte-Carlo simulation. In addition, the analytic calculation can be used to check the reliability of a Monte-Carlo simulation program, which is usually long and complicated. This appendix briefly summarizes our analytic approach, based on the following assumptions: \begin{enumerate} \item Particle tracks are normal to the pad row; \item The track coordinate is determined by the charge centroid method; \item The contribution of ambient electronic noise is negligible; \item Displacement of arriving drift electrons due to the $E \times B$ effect near the entrance to the detection device is negligible; \item Displacement of arriving electrons due to the finite granularity of amplification elements of the detection device (line intervals in MicroMEGAS or a hole pitch in GEM) is negligible. \end{enumerate} \renewcommand{\thesection}{\Alph{subsection}} \subsection{Pad response} Let us calculate here the width of the pad response with respect to the true coordinate assuming that the ``pad response function (PRF)~\footnote { In the case of conventional MWPC readout, the PRF is defined as the charge distribution on the pad plane caused by a single drift electron arriving at a sense wire. Therefore it is {\it static} and is determined electrostatically.
On the other hand, in the case of MicroMEGAS or GEMs the charge distribution for a single drift electron is caused mainly by avalanche spread due to diffusion or by diffusion in the transfer and induction gaps. Therefore it is essentially {\it stochastic\/}. In the analytic approach discussed here, however, the PRF is treated as if it were {\it static\/}, assuming a large avalanche multiplication factor. }'' is a $\delta$ function. \input{Eq1.tex} The interpretation of the result is quite simple. The squared pad-response width is the quadratic sum of two widths, one due to diffusion and the other originating from the finite pad pitch. This can be readily generalized to the case where the width of the PRF ($\sigma_{PRF}$) is finite: \begin{displaymath} \left< (x^\# - \tilde{x})^2 \right> = \sigma_d^2 + \sigma_{PRF}^2 + \frac{w^2}{12} = \frac{w^2}{12} + \sigma_{PRF}^2 + D^2 \cdot z \; , ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~(A.2) \end{displaymath} where $D$ is the diffusion constant and $z$ is the drift distance. Therefore, if the square of the width of the pad response is plotted against $z$, one gets a straight line with a slope of $D^2$ and an intercept of $w^2/12 + \sigma_{PRF}^2$. In fact, we use the width of the pad response with respect to the charge centroid ($\equiv \bar{x}$), instead of the unknown (precise) true coordinate ($\tilde{x}$), in the present paper. Therefore Eq. (A.1) needs a corresponding slight modification, as briefly shown below, for the case where the PRF is a $\delta$ function ($\sigma_{PRF} = 0$). In the calculation, the signal charge fluctuation represented by $P_q(q)$ is not included explicitly since it does not affect the final result. From now on we avoid explicitly showing the integrals weighted by PDFs and instead use average symbols denoted by $\left< \cdots \right>$ in order to save space. \input{Eq1x.tex} The first term is what we have calculated above (Eq.
(A.1)) while the second term is nothing but the spatial resolution (squared) obtained with the charge centroid method, which is to be evaluated in the next section. The contribution of the second term is small except at small drift distances. \renewcommand{\thesection}{\Alph{section}} \addtocounter{section}{1} \subsection{Spatial resolution} Let us first consider the spatial resolution to be obtained with an infinitesimal pad pitch and $\sigma_{PRF}$ (PRF: $\delta$ function), since the calculation is very simple in this case~\cite{Kobayashi}. In the following, the measured track coordinate is assumed to be determined by the centroid of the charges collected by the readout pads: \begin{displaymath} X \equiv \frac{\sum_{i=1}^{N} q_i \cdot x_i}{\sum_{i=1}^{N} q_i} \; , \end{displaymath} where $q_i$, $x_i$ are the signal charge and the arrival position, respectively, of the $i$-th electron. In the calculation below and in the rest of this appendix, the symbol $<.....>_{x~(q)}$ stands for the average taken over the variables $x~(q)$ with the corresponding PDFs. The subscript $x$ or $q$ may be omitted when the meaning of the average is clear from the context. Then \input{Eq2.tex} Next, let us assume a finite pad pitch ($w$) but still an infinitesimal PRF width ($\sigma_{PRF}$). In this case, the charge centroid is given by \begin{displaymath} X = \frac{\sum_{i=1}^N q_i \cdot x_i^\#}{\sum_{i=1}^{N} q_i} \; , \end{displaymath} where $x_i^\#~(= j \cdot w$) is the central coordinate of the pad on which electron $i$ arrives, and \input{Eq3.tex} The first term in the final expression originates from the bias due to the charge centroid method combined with the finite pad pitch. This term is independent of $N$ and rapidly decreases with increasing $z$ because of diffusion~\cite{LCWSweb}. On the other hand, the second term is the square of the {\it observed} charge spread relative to the charge centroid (Eq. (A.3): $\sim \sigma_d^2 + w^2/12$) divided by $N_{eff}$.
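As a numerical illustration of the diffusion-dominant behavior of this second term, the short sketch below evaluates $\sigma_X(z) \approx \sqrt{(w^2/12 + D^2 \cdot z)/N_{eff}}$ with triple-GEM-like values quoted elsewhere in this paper ($w = 1.27$ mm, $D = 166~\mu$m$/\sqrt{\rm cm}$, $N_{eff} = 21$); the bias term is neglected, so this is only an asymptotic sketch, not the full evaluation of the formula.

```python
import math

# Asymptotic (diffusion-dominant) spatial resolution for a delta-function PRF:
#   sigma_X^2(z) ~ (w^2/12 + D^2 * z) / N_eff
# Default values are the triple-GEM-like numbers quoted in this paper.
def sigma_x(z_cm, w_um=1270.0, d_um_sqrtcm=166.0, n_eff=21):
    return math.sqrt((w_um ** 2 / 12.0 + d_um_sqrtcm ** 2 * z_cm) / n_eff)

print(round(sigma_x(0.0), 1))   # -> 80.0 (um): pad-pitch term only, at zero drift
print(round(sigma_x(26.0), 1))  # resolution (um) near the ~260 mm full drift length
```

The zero-drift value reproduces the $w/\sqrt{12 \cdot N_{eff}}$ offset, and the growth with $z$ follows the $D^2 \cdot z / N_{eff}$ slope discussed above.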
Finally let us assume a finite PRF width ($\sigma_{PRF}$). In this case, the charge centroid is given by \begin{displaymath} X = \frac{\sum_{i=1}^N \sum_j q_{ji} \cdot x_j^*} {\sum_{i=1}^{N} \sum_j q_{ji}} \equiv \frac{\sum_{i=1}^N Q_i \sum_j F_j(x_i) \cdot x_j^*}{\sum_{i=1}^N Q_i}\;, \end{displaymath} where (see Fig.~\ref{figA2}) \begin{eqnarray*} q_{ji} &\equiv& Q_i \cdot F_j (x_i) {\rm~:~signal~charge~on~pad~} j, {\rm ~created~by~electron~} i \;, \\ x_i &:& {\rm arrival~position~of~electron~} i {\rm ~at~the~entrance~to ~the~detection~device} \;, \\ x_j^* &\equiv& j \cdot w {\rm ~~:~central~coordinate~of~pad~} j \;\;\; (j = \cdot\cdot\cdot, -2,-1,0,+1,+2, \cdot\cdot\cdot) \;, \\ Q_i &\equiv& \sum_j q_{ji} {\rm ~:~total~signal~charge~created~by~electron~} i \;, \\ F_j(x_i) &\equiv& \frac{q_{ji}}{Q_i} \equiv \int_{jw-w/2}^{jw+w/2} f(\xi-x_i) d\xi \;, \\ f(\xi) &:& {\rm (normalized)~PRF} \; . \end{eqnarray*} And \input{Eq4.tex} It should be pointed out here that $\sigma_X^2$ depends on the position of $\tilde{x}$ relative to the corresponding pad center, and that the beam spot size is usually much larger than the pad pitch. Therefore unless the incident positions of incoming particles are measured precisely by an external tracker (e.g. by a set of silicon strip detectors) on an event-by-event basis, $\sigma_X^2$ obtained above (Eq. (A.5) or (A.6)) has to be averaged over $\tilde{x}$ in a range, say, [-$w$/2, +$w$/2]. It is easy to show that Eq. (A.6) is a generalization of Eqs. (A.4) and (A.5). Eq. (A.5) is expected to be a good approximation when $\sigma_{PRF}$ is much smaller than the pad pitch $w$, i.e. in the case of MicroMEGAS. On the other hand, Eq. (A.6) has to be used for GEM readout since $\sigma_{PRF}$ is several hundred microns and is not negligible as compared to $w$. Evaluation of Eq. 
(A.5) or (A.6), including the average over $\tilde{x}$, can be done numerically using a short and simple program, demanding much less CPU time than Monte-Carlo simulations. The results of the analytic calculation and a Monte-Carlo simulation are compared in Fig.~\ref{figA3} for the triple GEM readout. \begin{figure}[htbp] \centering \includegraphics[width=14cm,clip]{figA3.pdf} \caption[figA1]{\label{figA3} \footnotesize Comparison between the analytic calculation and the Monte-Carlo simulation. In the calculation $N_{eff}$ is assumed to be 21 and PRF is assumed to be a Gaussian with $\sigma$ = 363 $\mu$m. The diffusion constant ($D$) is set to 166 $\mu$m/$\sqrt{{\rm cm}}$ in both cases. } \end{figure} The Monte-Carlo simulation takes into account the primary ionization statistics, diffusion in the drift space, avalanche multiplication and its fluctuation in the GEM holes, and the diffusion in the transfer and induction gaps. The figure shows that they are almost identical, indicating the reliability of both the analytic approach and the Monte-Carlo simulation. A major advantage of the Monte-Carlo simulation is that it can easily be generalized to inclined tracks. To summarize, the analytic calculation gives a reliable evaluation of the spatial resolution of a TPC for tracks perpendicular to the pad row once the effective number of electrons ($N_{eff}$), the diffusion constant ($D$), and the pad response function (PRF) are known. $N_{eff}$ is determined from the primary ionization statistics (the average density of primary ionizations and their cluster size distribution) and the relative variance of the avalanche fluctuation for a single drift electron~\cite{Kobayashi}. They are experimentally measurable or can be found in the literature. The diffusion constant in the drift region is determined from the slope of the pad-response width squared as a function of drift distance (Eq. (A.2)). It may be estimated using the MAGBOLTZ simulation.
Finally, the width of the pad response function is estimated from the intercept of the squared pad-response width plotted against drift distance (Eq. (A.2)). This can also be estimated by using the simulated value(s) of the diffusion constant in the detection gap(s). The most reliable PRF would, however, be provided by a dedicated experiment using a single-electron source and finer readout pads. \section{Introduction} One of the major physics goals of the future linear collider experiment is to study the properties of the Higgs boson, which is expected to be well within the reach of the center-of-mass energy of the machine~\cite{ILC1}~\cite{ILC2}. This goal demands unprecedented high performance of each detector component. For example, the central tracker is required to have a high momentum resolution and a high two-track resolving power, for precise reconstruction of hard muons and of individual charged-particle tracks in dense jets. A time projection chamber (TPC) is a strong candidate for the central tracker of the experiment since it can cover a large volume with a small material budget while maintaining a high tracking density (granularity). If micro-pattern gas detectors (MPGDs: micro-mesh gaseous structure (MicroMEGAS)~\cite{Giomataris}, gas electron multiplier (GEM)~\cite{Sauli}, etc.) are employed for the detection devices of the TPC, instead of conventional multi-wire proportional chambers (MWPCs), one can expect a better spatial resolution at a lower gas gain, a higher granularity, and a smaller or negligible $E \times B$ effect at the entrance to the detection plane. Furthermore, the MPGDs have an inherently smaller positive-ion backflow rate than MWPCs. We therefore constructed a small prototype TPC with a replaceable readout device (MWPC, MicroMEGAS or triple GEM) and have conducted a series of beam tests at KEK in order to study its performance, especially its spatial resolution under an axial magnetic field.
We begin with brief descriptions of the prototype TPC and the experimental setup. Next, some preliminary results are presented along with our interpretation, in which special emphasis is placed on an analytic expression for the spatial resolution. Finally, the spatial resolution of the ILC-TPC is estimated from that measured with the prototype. \\ ------------------------------------------------------\\ {\footnotesize \boldmath $Contributed~paper~to~the~Linear~Collider~Workshop, ~March~2006,~I.I.Sc~Bangalore,~India$} \\ \section{Experimental setup} A photograph of the prototype is shown in Fig.~1. \begin{figure}[htbp] \begin{center} \hspace{10mm} \includegraphics[width=14.0cm,clip]{fig1cr.pdf} \caption[fig1]{\label{fig1} \footnotesize Photograph of the prototype just before installation into the gas vessel. } \end{center} \end{figure} It consists of a field cage and an easily replaceable gas amplification device attached to one end of the field cage. Gas-amplified electrons are detected by a pad plane at ground potential placed right behind the amplification device. A drift electrode is attached to the other end of the field cage. The maximum drift length is about 260 mm. The pad plane, with an effective area of $\sim 75 \times 75$ mm$^2$, has 12 pad rows at a pitch of 6.3 mm, each consisting of $2 \times 6$ ($1.27 \times 6$) mm$^2$ rectangular pads arranged at a pitch of 2.3 (1.27) mm when combined with MicroMEGAS (GEMs). Pad signals are fed to charge-sensitive preamplifiers located on the outer surface of the bulkhead of the gas vessel behind the pad plane. The amplified signals are sent to shaper amplifiers with a shaping time of 500 ns in the counting room via coaxial cables, and then processed by 12.5 MHz digitizers. The mesh of MicroMEGAS, made of 5-$\mu$m thick copper, has 35 $\mu$m$^\phi$ holes spaced at intervals of 61 $\mu$m. The distance between the mesh and the pad plane is maintained at 50 $\mu$m by kapton pillars arranged in-between.
The typical gain is about 3650 at a mesh potential of -320 V. The triple GEM, CERN standard, has two 1.5-mm transfer gaps and a 1-mm induction gap. The transfer and induction fields are 2 kV/cm and 3 kV/cm, respectively. The typical total effective gain in a P5 (TDR) gas is about 3000 with 335 (340) V applied across each GEM foil. The chamber gases are Ar-isobutane (5\%) for MicroMEGAS, and a TDR gas (Ar-methane (5\%)-carbon dioxide (2\%)) or Ar-methane (5\%) for GEMs, at atmospheric pressure and room temperature. The gas pressure and the ambient temperature are continuously monitored since they are not actively controlled. The drift-field strengths are 200, 220 and 100 V/cm, respectively, for Ar-isobutane, the TDR gas and Ar-methane. The prototype TPC is placed in the uniform-field region of a superconducting solenoid without a return yoke, having a bore diameter of 850 mm, an effective length of 1000 mm, and a maximum field strength of 1.2 T. The prototype was then subjected to the beam, mostly 4 GeV/c pions, at the $\pi$2 test beam facility of the KEK proton synchrotron. \section{Preliminary results} In this section we show some preliminary results of the analysis up to now, only for the data taken with an axial magnetic field of 1 T and with tracks normal to the pad rows. The results of analytic evaluations are used or presented here without comment. Readers are therefore advised to read the Appendix and the slides available on-line~\cite{LCWSweb} as well, where the analytic method is briefly summarized and illustrated. The observed pad responses for different drift distances ($z$) are shown in Fig.~2~(a) while the widths of the distributions are plotted as a function of drift distance in Fig.~2~(b).
\begin{figure}[htbp] \hspace{15mm} \begin{minipage}{0.8\hsize} \begin{tabular}{cc} \begin{minipage}[t]{0.48\hsize} \centering \includegraphics*[scale=0.37]{fig2acr.pdf} \label{fig2a} \end{minipage} & \begin{minipage}[t]{0.48\hsize} \centering \includegraphics*[scale=0.37]{fig2bcr.pdf} \label{fig2b} \end{minipage} \end{tabular} \caption[fig2]{\label{fig2}\footnotesize (a) Pad responses for different drift distances. (b) Pad-response width squared ($\sigma_{PR}^2$) vs. drift distance ($z$). The width of pad response is parametrized as $\sigma_{PR}^2 = \sigma_{PR0}^2 + D^2 \cdot z$, with $D$ being the diffusion constant. } \end{minipage} \end{figure} The measured spatial resolution against drift distance is shown in Fig.~3 (a) and (b), respectively for the MicroMEGAS and triple GEM readout, along with the result of the analytic calculation. In the calculation the pad response function (PRF) was assumed to be a $\delta$ function for the MicroMEGAS and a Gaussian for the GEMs \footnote { The PRF is the avalanche charge spread on the pad plane for a {\it single\/} drift electron and should not be confused with the pad response. In the case of MicroMEGAS it is much smaller than the pad pitch (2.3 mm) and is, therefore, neglected. The width (standard deviation) of the Gaussian PRF for the triple GEM has been determined from the intercept of the pad-response width squared vs. $z$ (Fig. 2 (b)): $\sigma_{PR}^2 = \sigma_{PR0}^2 + D^2 \cdot z$ with $\sigma_{PR0}^2 = w^2 / 12 + \sigma_{PRF}^2$, where the pad pitch $w$ = 1.27 mm and $\sigma_{PR0} \sim$ 511 $\mu$m, yielding $\sim$ 356 $\mu$m for $\sigma_{PRF}$. The value of $\sigma_{PR0}$ thus obtained is consistent with a simple estimation taking into account only the diffusion in the transfer and induction gaps. }.
\begin{figure}[htbp] \begin{tabular}{cc} \vspace*{0.2cm} \begin{minipage}[t]{0.48\hsize} \centering \includegraphics*[scale=0.48]{fig3acr.pdf} \label{fig3a} \end{minipage} & \vspace*{-0.2cm} \begin{minipage}[t]{0.48\hsize} \centering \includegraphics*[scale=0.48]{fig3bcr.pdf} \label{fig3b} \end{minipage} \end{tabular} \caption[fig2]{\label{fig3}\footnotesize (a) Spatial resolution vs. z obtained with MicroMEGAS. Gas: Ar-isobutane (5\%). (b) Spatial resolution vs. z obtained with GEMs. Gas: Ar-methane (5\%). } \end{figure} The obtained behavior of the pad response, and the spatial resolution at long drift distances~\footnote { When the PRF is a $\delta$ function the asymptotic behavior of the spatial resolution at long distances (diffusion dominant asymptotic region) is described by $\sigma_X^2 \equiv \sigma_{X0}^2 + D_X^2 \cdot z \sim (1/N_{eff}) \cdot (w^2/12 + D^2 \cdot z)$, where $N_{eff}$ is the effective number of electrons and $D$ is the diffusion constant (see Appendix). } are compared with expectations in Table 1. The comparisons show \begin{enumerate} \item $\sigma_{PR0}$ is in reasonable agreement with the expectation ($\sqrt{w^2/12 + \sigma_{PRF}^2}$) if the contribution of $\sigma_{PRF}$ is taken into account (in the case of GEMs); \item $\sigma_{X0}$ is in good agreement with the expectation ($w/\sqrt{12 \cdot N_{eff}}$) for the MicroMEGAS, and better than this for the GEMs because of the significant charge spread in the transfer and induction gaps; \item The values of the diffusion constant ($D$) are comparable to those given by the simulation (MAGBOLTZ~\cite{Biagi}); \item $N_{eff}$ (16 $\sim$ 22) is significantly smaller than the average number of drift electrons per pad row ($\sim$71)~\cite{Kobayashi}. \end{enumerate} \input{table1_new.tex} \section{Expected spatial resolution of the ILC-TPC} Calculated spatial resolutions of the ILC-TPC at $B$ = 4 T are shown in Fig.~4 for tracks perpendicular to the pad row.
\begin{figure}[htbp] \centering \includegraphics[width=12.0cm,clip]{fig4ixc.pdf} \caption[fig4]{\label{fig4}\footnotesize Expected spatial resolutions of the ILC-TPC obtained with MicroMEGAS or GEMs. Gas: Ar-methane (5\%), $B$ = 4 T ($D$ = 50 $\mu$m/$\sqrt{\rm cm}$), and $N_{eff}$ = 22. } \end{figure} In the calculations the values of the diffusion constant ($D$) given by MAGBOLTZ were used. The figure tells us that under a strong magnetic field it is important to reduce the pad-pitch dominant region (at small drift distances) in the ILC-TPC by enhancing the charge sharing among the readout pads, in order to maintain a good resolution over the entire sensitive volume. There are several possibilities to realize effective charge sharing: \begin{itemize} \item zigzag (chevron) pads. \item a smaller pad pitch with a larger number of readout channels. \item defocussing of electrons after gas amplification (natural dispersion in the transfer and induction gaps of GEMs, {\it stochastic} PRF). \item use of the resistive anode technique with a moderate number of readout channels (applicable to both GEMs and MicroMEGAS, {\it static} PRF)~\cite{Dixit}. \item pixel readout (Digital TPC)~\cite{Colas}. \end{itemize} \section{Summary} To summarize, the prototype TPC equipped with a MicroMEGAS or GEMs operated stably during the beam tests. The tests provided us with valuable information on its performance under axial magnetic fields of up to 1 T: \begin{itemize} \item The obtained spatial resolution is understood in terms of the pad pitch, the diffusion constant, the PRF, and the effective number of electrons. \item The expected resolution can be estimated by a numerical calculation (NOT a Monte-Carlo) for a given geometry, gas mixture and PRF if the relevant parameters are known. \item The calculation is based on a simple formula, easy to code and fast, though it is applicable only to tracks perpendicular to the pad row.
\item In the case of MicroMEGAS, the spatial resolution as a function of drift distance is well described by the analytic formula, assuming a $\delta$ function for the PRF. \item In the case of GEMs, the spatial resolution as a function of drift distance is satisfactorily described by the analytic formula, assuming a Gaussian for the PRF with the width determined from the intercept of the pad-response width squared as a function of drift distance. \item It is important to make the pad pitch small, {\it physically or effectively}, in order to reduce both the overall offset term ($\sigma_{X0}$) and the resolution degradation due to finite pad pitch. \item The spatial resolution required for the ILC-TPC (100 $\sim$ 200 $\mu$m for the maximum drift distance of $\sim$ 2.5 m) is now within reach for tracks normal to the pad row. \end{itemize} \input{appendix_input_latest.tex} \section*{Acknowledgements} The author would like to thank the people at the Indian Institute of Science for their support and hospitality. He is also grateful to many colleagues in the ILC-TPC collaboration for their continuous encouragement and support.
\section{Introduction} In the analysis of numerical programs, a recurrent difficulty when we want to assess the influence of finite precision on an implementation is the possibility for a test to be unstable: when, for a given input, the finite precision control flow can differ from the control flow that would be taken by the same execution in real numbers. Not taking this possibility into account may be unsound if the difference of paths leads to a discontinuity in the computation, while taking it into account without special care soon leads to large over-approximations. And when considering programs that compute with approximations of real numbers, potentially unstable tests lie everywhere: we want to automatically characterize conditional blocks that perform a continuous treatment of inputs, and are thus robust, and those that do not. This unstable test problem is thus closely related to the notion of continuity/discontinuity in programs, first introduced in \cite{DBLP:conf/issta/Hamlet02}. Basically, a program is continuous if, when its inputs are slightly perturbed, its output is also only slightly perturbed, very much like a continuous function. Discontinuity in itself can be a symptom of a major bug in some critical systems, such as the one reported in \cite{Bushnell12}, where an F22 Raptor military aircraft almost crashed after crossing the international date line in 2007, due to a discontinuity in the treatment of dates. Consider the toy program presented on the left-hand side of Figure \ref{lst::ex1}, where input $x$ takes its real value in $[1,3]$, with an initial error $0 < u \ll 1$ that can come either from previous finite precision computations or from any uncertainty on the input, such as sensor imperfection. The test is potentially unstable: for instance, if the real value of $x$ at control point [1] is $r^x_{[1]}=2$, then its floating-point value is $f^x_{[1]}=2+u$.
Thus the execution in real numbers would take the \texttt{then} branch and lead at control point [2] to $r^y_{[2]} = r^x_{[1]}+2=4$, whereas the floating-point execution would take the \texttt{else} branch and lead to $f^y_{[4]}=f^x_{[1]}=2+u$. The test is not only unstable, but also introduces a discontinuity around the test condition $(x == 2)$. Indeed, for $r^x_{[1]}=2$, there is an error due to discontinuity: $f^y_{[4]}-r^y_{[2]}=-2+u$. Of course, the computation of $z$ around the test condition is continuous. \\ In the rest of the paper, we propose a new analysis that enhances earlier work by the authors~\cite{vmcai11} by computing and propagating bounds on those discontinuity errors. This previous work characterized the computation error due to the implementation in finite precision, by comparing the computations in real numbers with the same computations in the floating-point semantics, relying on the stable test assumption: the floating-point control flow does not diverge from the real number control flow. In its implementation in FLUCTUAT~\cite{fmics2009}, when the analysis determined that a test could be unstable, it issued a warning, and the comparison between the two semantics could be unsound. This issue, and the stable test assumption, appear in all other (static or dynamic) existing analyses of numerical error propagation; the expression ``unstable test'' is actually taken from CADNA \cite{CADNA}, a stochastic arithmetic instrumentation of programs, to assert their numerical quality. In Hoare provers dealing with both real number and floating-point number semantics, e.g.~\cite{BoldoFilliatre07}, this issue has to be sorted out by the user, through suitable assertions and lemmas. Here as in previous work, we rely on the relational abstractions of real number and floating-point number semantics using affine sets (concretized as zonotopes)~\cite{arxiv08,arxiv09,cav09,cav10,vmcai11}.
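Before detailing the analysis, the discontinuity of the toy program can be checked numerically. The fragment below is an illustrative Python transcription of the conditional of Figure \ref{lst::ex1}, with an assumed value $u=0.2$ for the input error:

```python
def prog(x):
    # Toy conditional of the running example: y jumps by -2 across x == 2.
    if x <= 2:
        return x + 2   # then branch, control point [2]
    else:
        return x       # else branch, control point [4]

u = 0.2           # assumed input error, 0 < u << 1
r_x = 2.0         # real value of x at control point [1]
f_x = r_x + u     # its finite precision value

r_y = prog(r_x)   # real flow takes the then branch:  r_y = 4
f_y = prog(f_x)   # float flow takes the else branch: f_y = 2 + u
print(f_y - r_y)  # discontinuity error, close to -2 + u
```

The two executions of `prog` follow different branches for the same underlying input, which is exactly the unstable test scenario analyzed in the paper.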
But now, using these abstractions, we also compute and solve constraints on inputs such that the execution potentially leads to unstable tests, and thus accurately bound the discontinuity errors, computed as the difference of the floating-point value in one branch and the real value in another, when the test distinguishing these two branches can be unstable. Let us exemplify and illustrate this analysis on the program from Figure \ref{lst::ex1}. \begin{figure} \centering \begin{tikzpicture}[xscale=2.8,yscale=0.4] \begin{scope}[yscale=0.9] \draw (-0.75 cm, 6 cm) node[right] {x := [1,3] + u; // [1]}; \draw (-0.75 cm, 5 cm) node[right,brown] {\small /* $\hat r^x_{[1]} = 2 + \varepsilon_1^r; \; \hat{e}^x_{[1]} = u$ */}; \draw (-0.75 cm, 4 cm) node[right] {if (x $\leq$ 2) \{ }; \draw (-0.75 cm, 3 cm) node[right] { y = x+2; // [2]}; \draw (-0.75 cm, 2 cm) node[right] { z = x*x; // [3]}; \draw (-0.75 cm, 1 cm) node[right,brown] {\small /* $\hat r^y_{[2]} = 4 + \varepsilon_1^r; \; \hat{e}^y_{[2]} = u + \delta \varepsilon_2^e$ */}; \draw (-0.75 cm, 0 cm) node[right] { \} else \{ }; \draw (-0.75 cm, -1 cm) node[right] { y = x; // [4]}; \draw (-0.75 cm, -2 cm) node[right] { z = x*x; // [5]}; \draw (-0.75 cm, -3 cm) node[right,brown] {\small /* $\hat r^y_{[4]} = 2 + \varepsilon_1^r; \; \hat{e}^y_{[4]} = u $ */}; \draw (-0.75 cm, -4 cm) node[right] { \} // [6]}; \draw (-0.75 cm, -4 cm) node[right,brown] {\hspace*{1.cm} \small /* $\hat r^y_{[6]} = \hat{r}^y_{[2]} \sqcup \hat{r}^y_{[4]}$ }; \draw (-0.75 cm, -5 cm) node[right,brown] {\small $\hat e^y_{[6]} = \hat{e}^y_{[2]} \sqcup \hat{e}^y_{[4]} + d^y_{[6]}$ */ }; \draw (-0.75,-5.5) -- (1.,-5.5) -- (1.,6.5) -- (-0.75,6.5) --cycle; \end{scope} \begin{scope}[xshift=0.4cm] \draw[xstep=1cm,ystep=1cm,gray,very thin] (1,0) grid (3,5); \draw[->] (1,0) -- (3.2,0) node[anchor=south] {$\varepsilon_1^r$}; \draw (1 cm,0 cm) node[below] {-1}; \draw (2 cm,0 cm) node[below] {0}; \draw (1.8 cm,0 cm) node[below] {$-u$}; \draw (3 cm,0 cm)
node[below] {1}; \draw[->] (1,0) -- (1,6) node[anchor=east] {$y$}; \foreach \y/\ytext in {0,1,2,3,4,5} \draw (1 cm,\y cm) node[left] {$\ytext$}; \draw [domain=2:3,purple,thick] plot (\x,{\x}); \draw [domain=1:2,purple,thick] plot (\x,{\x+2}); \draw [domain=1.8:2,purple,thick,dotted] plot (\x,{\x}); \draw [domain=1:1.8,purple,thick,dashed] plot (\x,{\x+2.2}); \draw [domain=1.8:3,purple,thick,dashed] plot (\x,{\x+0.2}); \draw (0.9 cm,-2 cm) node[anchor=south] {$\Phi^r$:}; \draw[thick] (1,-2) -- (3,-2); \draw (0.9 cm,-3.5 cm) node[anchor=south] {$\Phi^f$:}; \draw[thick] (1,-3.5) -- (3,-3.5); \draw[thick,green] (1,-2) -- (2,-2); \draw[green] (1.5 cm,-2 cm) node[anchor=south] {[then]: $\varepsilon_1^r \leq 0$}; \draw[thick,blue] (2,-2) -- (3,-2); \draw[blue] (2.5 cm,-2 cm) node[anchor=south] {[else]: $\varepsilon_1^r > 0$}; \draw[thick,green] (1,-3.5) -- (1.8,-3.5); \draw[green] (1.5 cm,-3.5 cm) node[anchor=south] {[then]: $\varepsilon_1^r \leq -u$}; \draw[thick,blue] (1.8,-3.5) -- (3,-3.5); \draw[blue] (2.5 cm,-3.5 cm) node[anchor=south] {[else]: $\varepsilon_1^r > -u$}; \draw[red,dashed] (1.8,-5) -- (1.8,5); \draw[red,dashed] (2,-5) -- (2,5); \draw (1 cm,-5 cm) node[anchor=south] {$\Phi^r \cap \Phi^f$:}; \draw[thick,red] (1.8,-5) -- (2,-5); \draw[red] (1.85 cm, -5 cm) node[anchor=south] {[unstable]: $ -u < \varepsilon_1^r \leq 0$}; \draw[purple] (1.5 cm, 2.2 cm) node[anchor=south] {$\hat r^y_{[2]}$}; \draw[purple] (1.3 cm, 3.4 cm) node[anchor=south] {$\hat f^y_{[2]}$}; \draw[purple] (2.5 cm, 2.8 cm) node[anchor=south] {$\hat f^y_{[4]}$}; \draw[purple] (2.7 cm, 1.4 cm) node[anchor=south] {$\hat r^y_{[4]}$}; \end{scope} \end{tikzpicture} \caption{Running example} \label{lst::ex1} \end{figure} The real value of input \texttt{x} will be abstracted by the affine form $\hat r^x_{[1]}=2 + \varepsilon_1^r$, where $\varepsilon_1^r$ is a symbolic variable with values in $[-1,1]$.
Its error is $\hat{e}^x_{[1]}=u$ and its finite precision value is $\hat{f}^x_{[1]}=\hat{r}^x_{[1]} + \hat{e}^x_{[1]}= 2 + \varepsilon_1^r +u$. Note the functional abstraction: affine forms represent a function from inputs to variable values. We will use this to interpret tests, and in particular to compute unstable test conditions. For instance, the condition for the execution in real numbers to take the \texttt{then} branch is here $2 + \varepsilon_1^r \leq 2$, that is $\varepsilon_1^r \leq 0$. Now, the condition for the execution in finite precision to take the \texttt{else} branch is $\hat f^x_{[1]}>2$, that is $2 + \varepsilon_1^r +u > 2$, which is equivalent to $\varepsilon_1^r > -u$. The unstable test condition is that, for the same input (or equivalently here, the same value of $\varepsilon_1^r$), the real and float control flows differ; this amounts to intersecting these two conditions on $\varepsilon_1^r$, and yields $-u < \varepsilon_1^r \leq 0$. These constraints are illustrated on Figure \ref{lst::ex1}, with $u=0.2$: $\Phi^r$ denotes the constraints on the real value, $\Phi^f$ the constraints on the finite precision value, and $\Phi^r \cap \Phi^f$ the unstable test condition. For the other possibility for an unstable test, that is the execution in real numbers takes the \texttt{else} branch while the float execution takes the \texttt{then} branch, the constraints are $\varepsilon_1^r > 0$ and $\varepsilon_1^r \leq -u$, which are incompatible. This possibility is thus excluded. We will see later that these constraints allow us in general to refine the bounds on the discontinuity error, but they are also useful to characterize the set of inputs that can lead to an unstable test: $-u < \varepsilon_1^r \leq 0$ corresponds to $2-u < r^x \leq 2$. Consider now variable \texttt{y}.
In the \texttt{then} branch, its real value is $\hat r^y_{[2]}= \hat{r}^x_{[1]} + 2 = 4 + \varepsilon_1^r$ and its error is $\hat e^y_{[2]}= \hat{e}^x_{[1]} + \delta \varepsilon_2^e$, where $\delta$ is the bound on the elementary rounding error on \texttt{y} due to the addition; we deduce $\hat{f}^y_{[2]}= \hat{r}^y_{[2]} + \hat{e}^y_{[2]}$. In the \texttt{else} branch, the real value is $\hat r^y_{[4]}= \hat{r}^x_{[1]} = 2 + \varepsilon_1^r$ and the error is $\hat e^y_{[4]}= \hat{e}^x_{[1]}$, and we deduce $\hat f^y_{[4]}= \hat{r}^y_{[4]} + \hat{e}^y_{[4]}$. In Figure \ref{lst::ex1}, we represent in solid lines the real value of $y$ and in dashed lines its finite precision value. With the previous analysis~\cite{vmcai11} that makes the stable test assumption, we compute when joining branches at control point [6], $\hat r^y_{[6]}=\hat{r}^y_{[2]} \sqcup \hat{r}^y_{[4]} = 3 + \varepsilon_6^r \in [2,4]$ with new noise symbol $\varepsilon_6^r$ (note that we will not detail here the upper bound operator on affine forms, discussed in e.g. \cite{arxiv09,vmcai11,modular}), $\hat e^y_{[6]} = \hat{e}^y_{[2]} \sqcup \hat{e}^y_{[4]} = u + \delta \varepsilon_2^e \in [u-\delta,u+\delta]$, and $\hat f^y_{[6]}=\hat{r}^y_{[6]}+\hat{e}^y_{[6]} = 3 + u + \varepsilon_6^r + \delta \varepsilon_2^e$. This is sound for the real and float values $\hat r^y_{[6]}$ and $\hat f^y_{[6]}$, but unsound for the error because of the possibility of an unstable test. Our new analysis, when joining branches, also computes bounds for $\hat r^y_{[4]}-\hat{r}^y_{[2]}=2 + \varepsilon_1^r-(4 + \varepsilon_1^r)=-2$ under the unstable test condition $-u < \varepsilon_1^r \leq 0$ (or $2-u < \hat r^x \leq 2$): a new discontinuity term is added and the error is now $\hat e^y_{[6]} + d^y_{[6]}$, where $d^y_{[6]} = -2 \chi_{[-u,0]}(\varepsilon_1)$ and $\chi_{[a,b]}(x)$ equals 1 if $x$ is in $[a,b]$ and 0 otherwise. \paragraph{Related work} In \cite{CGL10}, the authors introduce a continuity analysis of programs.
This approach is pursued in particular in \cite{DBLP:conf/sigsoft/ChaudhuriGLN11,CGL12}, where several refinements of the notion of continuity or robustness of programs are proposed, another one being introduced in \cite{DBLP:conf/rtss/MajumdarS09}. These notions are discussed in \cite{Gazeau}, which presents an interactive proof scheme for proving a general form of robustness. In \cite{DBLP:conf/rtss/MajumdarS09}, the algorithm proposed by the authors symbolically traverses program paths and collects constraints on input and output variables. Then, for each pair of program paths, the algorithm determines values of input variables that cause the program to follow these two paths and for which the difference in values of the output variable is maximized. We use one of their examples (transmission shift, Section \ref{experiments}), and show that we reach similar conclusions. One difference between the approaches is that we give extra information concerning the divergence of the finite precision control flow with respect to the real number control flow, potentially exhibiting flawed behaviors. Also, their path-sensitive analysis can exhibit witnesses for worst discontinuity errors, but at the expense of a much higher combinatorial complexity. Actually, we will show that our unstable test constraints also allow us to provide indications on the inputs leading to discontinuity errors. Robustness has also been discussed in the context of synthesis and validation of control systems, in \cite{DBLP:journals/corr/abs-1108-3540,DBLP:conf/emsoft/TabuadaBCSM12}. The formalization is based on automata theoretic methods, providing a convenient definition of a metric between B\"uchi automata. Indeed, robustness has long been central in numerical mathematics, in particular in control theory. The field of robust control is actually concerned with proving stability of controlled systems whose parameters are only known in range.
A notion similar to the one of \cite{DBLP:conf/emsoft/TabuadaBCSM12}, but in the realm of real numbers and control of ordinary differential equations, is the input-output stability/continuity in control systems as discussed in \cite{Sontag}. This problem is also of primary importance in computational geometry, see for instance \cite{Shewchuk} for a survey on the use of ``robust geometric predicates''. Nevertheless, the aim pursued is different from ours: we are mostly interested in critical embedded software, where the limited resources generally prevent the use of complicated, refined arithmetic algorithms. \paragraph{Contents} Our main contribution is a tractable analysis that generalizes both the abstract domain of \cite{vmcai11} and the continuity or robustness analyses: it ensures that the finite precision error analysis is now sound even in the presence of unstable tests, by computing and propagating discontinuity error bounds for these tests. We first review in Section \ref{zonotopicreal} the basics of the relational analysis based on affine forms for the abstraction of real number semantics, which are necessary to understand the robustness analysis presented here. We then introduce in Section \ref{abstraction} our new abstract domain, based on an abstraction similar to that of \cite{vmcai11}, but refined to take care of unstable tests properly. We present in Section \ref{sec::technical} some refinements that are useful for reaching more accurate results, but are not central to understanding the principles of the analysis. We conclude with some experiments using our implementation of this abstraction in our static analyzer FLUCTUAT. \section{Preliminaries: affine sets for real valued analysis} We recall here the key notions of the abstract domains based on affine sets for the analysis of the real values of program variables that will be needed in Sections \ref{abstraction} and \ref{sec::technical} for our robustness analysis.
We refer to \cite{sas06,arxiv08,arxiv09,cav09,cav10} for more details. \vspace*{-0.2cm} \subsubsection{From affine arithmetic to affine sets} \label{zonotopicreal} Affine arithmetic is a more accurate extension of interval arithmetic, which takes into account affine correlations between variables. An {\em affine form} is a formal sum over a set of {\em noise symbols} $\varepsilon_i$ \[ \hat{x} \,\overset{\textup{def}}{=}\, \alpha^x_0 + \sum_{i=1}^n \alpha^x_i \varepsilon_i,\] with $\alpha^x_i \in \bfm{R}$ for all $i$. Each noise symbol $\varepsilon_i$ stands for an independent component of the total uncertainty on the quantity $\hat x$; its value is unknown but bounded in $[-1,1]$, and the corresponding coefficient $\alpha^x_i$ is a known real value, which gives the magnitude of that component. The same noise symbol can be shared by several quantities, indicating correlations among them. These noise symbols can not only model uncertainty in data or parameters, but also uncertainty coming from computation. The values that a variable $x$ defined by an affine form $\hat x$ can take lie in the range $ \gamma(\hat x) = \left[ \alpha^x_0 - \sum_{i=1}^n |\alpha^x_i| , \alpha^x_0 + \sum_{i=1}^n |\alpha^x_i| \right].$ The assignment of a variable $x$ whose value is given in a range $[a,b]$ is defined as a centered form using a fresh noise symbol $\varepsilon_{n+1} \in [-1,1]$, which indicates unknown dependency to other variables: $\hat x = \frac{(a+b)}{2} + \frac{(b-a)}{2} \, \varepsilon_{n+1}$. The result of linear operations on affine forms is an affine form, and is thus interpreted exactly.
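A minimal Python sketch of these (unconstrained) affine-form operations may help fix ideas; the class and method names below are our own, purely illustrative choices:

```python
class AffineForm:
    """Affine form a0 + sum_i ai * eps_i, with each eps_i in [-1, 1]."""

    def __init__(self, center, coeffs=None):
        self.center = center
        self.coeffs = dict(coeffs or {})  # noise symbol index -> coefficient

    @staticmethod
    def from_range(a, b, fresh):
        # Assignment from a range [a, b]: centered form with a fresh symbol.
        return AffineForm((a + b) / 2.0, {fresh: (b - a) / 2.0})

    def range(self):
        # gamma(x_hat) = [a0 - sum |ai|, a0 + sum |ai|]
        rad = sum(abs(c) for c in self.coeffs.values())
        return (self.center - rad, self.center + rad)

    def scale_add(self, lam, other):
        # Linear operation lam * self + other, interpreted exactly.
        coeffs = {i: lam * c for i, c in self.coeffs.items()}
        for i, c in other.coeffs.items():
            coeffs[i] = coeffs.get(i, 0.0) + c
        return AffineForm(lam * self.center + other.center, coeffs)

x = AffineForm.from_range(1.0, 3.0, fresh=1)  # x_hat = 2 + eps_1
print(x.range())                              # (1.0, 3.0)
```

Since `scale_add` keeps the noise symbols shared, correlations such as $\hat x - \hat x = 0$ are captured exactly, unlike in plain interval arithmetic.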
For two affine forms $\hat x$ and $\hat y$, and a real number $\lambda$, we have $ \lambda \hat x + \hat{y} = (\lambda \alpha^x_0 + \alpha^y_0) + \sum_{i=1}^n(\lambda \alpha^x_i + \alpha^y_i) \varepsilon_i.$ For non affine operations, we select an approximate linear resulting form, and bounds for the error committed using this approximate form are computed and used to add a new noise term to the linear form. As a matter of fact, the new noise symbols introduced in these linearization processes were given different names in \cite{arxiv09,vmcai11}: the $\eta_j$ symbols. Although they play a slightly different role than the $\varepsilon_i$ symbols, for the sake of notational simplicity, we will only give formulas in what follows using the same $\varepsilon_i$ symbols for both types of symbols. The values of the variables at a given control point are thus expressed as a linearized function of the values of the inputs of the program, which we generally identify with a prefix of the $\varepsilon_i$ vector. The uncertainties due to the abstraction of non-linear features, such as the join and the multiplication, will be abstracted on a suffix of the $\varepsilon_i$ vector (previously the $\eta_j$ symbols). In what follows, we use the matrix notations of \cite{arxiv09} to handle affine sets, that is, tuples of affine forms. We note ${\cal M}(n,p)$ the space of matrices with $n$ rows and $p$ columns of real coefficients. A tuple of affine forms expressing the set of values taken by $p$ variables over $n$ noise symbols $\varepsilon_i, \; 1 \leq i \leq n$, can be represented by a matrix $A \in {\cal M}(n+1,p)$. \vspace*{-0.2cm} \subsubsection{Constrained affine sets} \label{constrainedzonotopes} As described in \cite{cav10}, we interpret tests by adding some constraints on the $\varepsilon_i$ noise symbols, instead of having them vary freely in $[-1,1]$: we restrain ourselves to executions (or inputs) that can take the considered branch.
We can then abstract these constraints in any abstract domain, the simplest being intervals, but we will see that we actually need (sub-)polyhedric abstractions to accurately handle unstable tests. We note $\cal A$ for this abstract domain, and use $\gamma: {\cal A} \rightarrow \wp(\bfm{R}^n)$ for the concretisation operator, and $\alpha: \wp(\bfm{R}^n) \rightarrow {\cal A}$ for some ``abstraction'' operator, not necessarily the best one (which, as in polyhedra, need not exist): we only need to be able to get an abstract value from a set of concrete values, such that $X \subseteq \gamma \circ \alpha(X)$. This means that abstract values $X$ are now composed of a zonotope identified with its matrix $R^X \in {\cal M}(n+1,p)$, together with an abstraction $\Phi^X$ of the constraints on the noise symbols, $X =(R^X,\Phi^X)$. The concretisation of such constrained zonotopes or affine sets is $\gamma(X)=\left\{\transpose{R^X} \epsilon \mid \epsilon \in \gamma(\Phi^X) \right\}.$ For $\Phi \in {\cal A}$, and $\hat x$ an affine form, we note $\Phi(\hat x)$ the interval $[J^{-},J^{+}]$ with $J^{-}$ and $J^{+}$ given by the linear programs $J^{-}=\inf_{\varepsilon \in \gamma(\Phi)} \hat{x}(\varepsilon)$ and $J^{+}=\sup_{\varepsilon \in \gamma(\Phi)} \hat{x}(\varepsilon)$. \begin{example} For instance on the running example, starting with program variable $x$ in $[1,3]$, we associate the abstract value $X$ with $R^X=(2\mbox{ } 1)$, i.e. $\hat x = 2 + \varepsilon_1$, and $\gamma(\Phi^X)=\gamma(\varepsilon_1)=[-1,1]$. The interpretation of the test \texttt{if (x<=2)} in the \texttt{then} branch is translated into the constraint $\varepsilon_1 \leq 0$, thus $\gamma({\Phi^X})=[-1,0]$. Then, the interval concretisation of $\hat x$ is $\gamma(\hat x)=[2-1,2]=[1,2]$.
\end{example} \subsubsection{Transfer functions for arithmetic expressions} Naturally, the transfer functions described in the unconstrained case are still correct when we have additional constraints on the noise symbols; but for the non linear operations such as the multiplication, the constraints can be used to refine the result by computing more accurate bounds on the non affine part which is over-approximated by a new noise term, solving with a guaranteed linear solver\footnote{For an interval domain for the constraints on noise symbols, a much more straightforward computation can be made, of course.} the linear programming problems $\sup_{\varepsilon \in \gamma(\Phi^X)} \varepsilon$ (resp. $\inf$). Transfer functions are described, respectively in the unconstrained and constrained cases, in \cite{arxiv09} and \cite{cav10}, and will not be detailed here, except in the example below. \begin{example} \label{runex2} Consider the computation \texttt{z=x*x} at control point $3$ in the \texttt{then} branch of the running example (Figure \ref{lst::ex1}). If computed as in the unconstrained case, we write $\hat z_{[3]} = (2 + \varepsilon_1) (2 + \varepsilon_1) = 4 + 4 \varepsilon_1 + (\varepsilon_1)^2$, which, using the fact that $(\varepsilon_1)^2$ is in $[0,1]$, can be linearized using a new noise symbol by $\hat z_{[3]} = 4.5 + 4 \varepsilon_1 + 0.5 \varepsilon_3$ (the new noise symbol is called $\varepsilon_3$ because it is introduced at control point 3). The concretisation of $\hat z_{[3]}$, using $\varepsilon_1 \in [-1,0]$, is then $\gamma (\hat z_{[3]}) = [0,5]$. But it is better to use the constraint on $\varepsilon_1$ to linearize \texttt{z=x*x} at the center of the interval $\varepsilon_1 \in [-1,0]$: we then write $\hat z_{[3]} = (1.5 + (\varepsilon_1+0.5)) (1.5 + (\varepsilon_1+0.5)) = 2.25 + 3(\varepsilon_1+0.5) + (\varepsilon_1+0.5)^2$, which, using $(\varepsilon_1+0.5)^2 \in [0,0.25]$, can be linearized as $\hat z_{[3]} = 3.875 + 3 \varepsilon_1 + 0.125 \varepsilon_3$.
Its concretisation is $\gamma (\hat z_{[3]}) = [0.75,4]$. In the \texttt{else} branch, \texttt{z=x*x} interpreted at control point $5$ with $\varepsilon_1 \in [0,1]$ is linearized by $\hat z_{[5]} = (2.5 + (\varepsilon_1-0.5)) (2.5 + (\varepsilon_1-0.5)) = 3.875 + 5 \varepsilon_1 + 0.125 \varepsilon_5$. And $\gamma (\hat z_{[5]}) = [3.75,9]$. \end{example} \vspace*{-0.2cm} \subsubsection{Join} We need an upper bound operator to combine abstract values coming from different branches. The computation of upper bounds (and if possible minimal ones) on constrained affine sets is a difficult task, already discussed in several papers~\cite{arxiv08,arxiv09,cav10,nsad12}, and orthogonal to the robustness analysis presented here. We will thus consider that we have an upper bound operator on constrained affine sets, noted $\sqcup$, and focus on the additional term due to discontinuity in tests. \section{Robustness analysis of finite precision computations} \label{abstraction} We introduce here an abstraction which is not only sound in the presence of unstable tests, but also exhibits the potential discontinuity errors due to these tests. For concision, we focus here on what is directly linked to an accurate treatment of these discontinuities, and rely on previous work~\cite{vmcai11} for the rest. \subsection{Abstract values} \label{abstractvalue} As in the abstract domain for the analysis of finite precision computations of~\cite{vmcai11}, we will see the floating-point computation as a perturbation of a computation in real numbers, and use zonotopic abstractions of real computations and errors (introducing respectively noise symbols $\varepsilon_i^r$ and $\varepsilon_j^e$), from which we get an abstraction of floating-point computations. But we make here no assumptions on the control flows in tests and will interpret tests independently on the real value and the floating-point value. For each branch, we compute conditions for the real and floating-point executions to take this branch.
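The constrained linearization of \texttt{z=x*x} used in Example \ref{runex2} can be sketched in Python; this is an illustrative fragment with our own naming, not FLUCTUAT code:

```python
def square_linearized(c0, c1, lo, hi):
    """Linearize (c0 + c1*eps)^2 for eps constrained to [lo, hi].

    Returns (a0, a1, a_new) for the form a0 + a1*eps + a_new*eps_new,
    where the fresh symbol eps_new in [-1, 1] absorbs the quadratic term.
    """
    m = (lo + hi) / 2.0     # linearize at the center of the constraint
    half = (hi - lo) / 2.0
    # With t = eps - m in [-half, half]:
    #   (c0 + c1*(m + t))^2 = base^2 + 2*c1*base*t + c1^2*t^2
    # where base = c0 + c1*m and c1^2*t^2 lies in [0, (c1*half)^2].
    base = c0 + c1 * m
    quad_mid = (c1 * half) ** 2 / 2.0      # midpoint of the quadratic range
    a1 = 2.0 * c1 * base                   # coefficient of eps
    a0 = base * base + quad_mid - a1 * m   # constant after expanding t = eps - m
    return a0, a1, quad_mid

# then branch: x_hat = 2 + eps1 with eps1 constrained to [-1, 0]
print(square_linearized(2.0, 1.0, -1.0, 0.0))  # (3.875, 3.0, 0.125)
# else branch: eps1 constrained to [0, 1]
print(square_linearized(2.0, 1.0, 0.0, 1.0))   # (3.875, 5.0, 0.125)
```

The two calls reproduce the forms $\hat z_{[3]} = 3.875 + 3 \varepsilon_1 + 0.125 \varepsilon_3$ and $\hat z_{[5]} = 3.875 + 5 \varepsilon_1 + 0.125 \varepsilon_5$ computed above.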
The test interpretation on a zonotopic value~\cite{cav10} leaves the affine sets unchanged, but yields constraints on noise symbols. For each branch, we thus get two sets of constraints: $\varepsilon^r=(\varepsilon^r_1,\ldots,\varepsilon^r_n) \in \Phi^{X}_r$ for the real control flow (test computed on real values $R^X$), and $(\varepsilon^r,\varepsilon^e)=(\varepsilon^r_1,\ldots,\varepsilon^r_n,\varepsilon^e_1,\ldots,\varepsilon^e_m) \in \Phi^{X}_f$ for the finite precision control flow (test computed on float values $R^X+E^X$). \begin{definition} An abstract value $X$, defined at a given control point, for a program with $p$ variables $x_1,\ldots,x_p$, is thus a tuple $X=(R^X,E^X,D^X,\Phi_r^{X},\Phi_f^{X})$ composed of the following affine sets and constraints, for all $k=1,\ldots,p$: \[ \left\lbrace \begin{array}{rllll} R^X\ : \ \hat r_k^X &=& r_{0,k}^X + \sum_{i=1}^n r_{i,k}^X \, \varepsilon_i^r && \mbox{ where } \varepsilon^r \in \Phi_r^{X}\\ E^X\ : \ \hat e_k^X &=& e_{0,k}^X + \sum_{i=1}^n e_{i,k}^X \, \varepsilon_i^r + \sum_{j=1}^{m} e_{n+j,k}^X \, \varepsilon_j^e && \mbox{ where } (\varepsilon^r,\varepsilon^e) \in \Phi_f^{X} \\ D^X\ : \ \hat d_k^X &=& d_{0,k}^X + \sum_{i=1}^o d_{i,k}^X \, \varepsilon_i^d && \\ \hat f_k^X &=& \hat r_k^X + \hat{e}_k^X && \mbox{ where } (\varepsilon^r,\varepsilon^e) \in \Phi_f^{X} \end{array} \right.
\] where \begin{itemize} \item $R^X \in {\cal M}(n+1,p)$ is the affine set defining the real values of variables, and the affine form $\hat r_k^X$, giving the real value of $x_k$, is defined on the $\varepsilon_i^r$, \item $E^X \in {\cal M}(n+m+1,p)$ is the affine set defining the rounding errors (or initial uncertainties) and their propagation through computations as defined in~\cite{vmcai11}, and the affine form $\hat e_k^X$ is defined on the $\varepsilon_i^r$ that model the uncertainty on the real value, and the $\varepsilon_j^e$ that model the uncertainty on the rounding errors, \item $D^X \in {\cal M}(o+1,p)$ is the affine set defining the discontinuity errors, and $\hat d_k^X$ is defined on noise symbols $\varepsilon_i^d$, \item the floating-point value is seen as the perturbation of the real value by the rounding error, $\hat{f}_k^X = \hat{r}_k^X + \hat{e}_k^X$, \item $\Phi_r^X$ is the abstraction of the set of constraints on the noise symbols such that the real control flow reaches the control point, $\varepsilon^r \in \Phi_r^{X}$, and $\Phi_f^{X}$ is the abstraction of the set of constraints on the noise symbols such that the finite precision control flow reaches the control point, $(\varepsilon^r,\varepsilon^e) \in \Phi_f^{X}$. \end{itemize} \end{definition} A subtlety is that the same affine set $R^X$ is used to define the real value and the floating-point value as a perturbation of the real value, but with different constraints: the floating-point value is indeed a perturbation by rounding errors of an idealized computation that would occur with the constraints $\Phi_f^{X}$.
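Concretely, an abstract variable can be pictured as a pair of affine forms, the float form being the sum of the real form and the error form. A minimal Python sketch, with our own naming and an assumed $u = 0.2$:

```python
# An affine form is encoded as (center, {noise symbol name: coefficient}).
def add_forms(f, g):
    """Sum of two affine forms, used here for f_hat = r_hat + e_hat."""
    center = f[0] + g[0]
    coeffs = dict(f[1])
    for i, c in g[1].items():
        coeffs[i] = coeffs.get(i, 0.0) + c
    return (center, coeffs)

u = 0.2                        # assumed input error bound
r_x = (2.0, {"eps1_r": 1.0})   # real value of x:  2 + eps1^r
e_x = (u, {})                  # its error:        u
f_x = add_forms(r_x, e_x)      # float value:      2 + eps1^r + u
print(f_x)
```

The real form is later evaluated under $\Phi_r^X$, while the float form `f_x` is evaluated under $\Phi_f^X$, matching the definition above.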
\subsection{Test interpretation} \label{test} Consider a test \texttt{e1 op e2}, where \texttt{e1} and \texttt{e2} are two arithmetic expressions, and \texttt{op} is an operator among $\leq,<,\geq,>,=,\neq$. The interpretation of this test in our abstract model reduces to the interpretation of \texttt{z op 0}, where \texttt{z} is the abstraction of the expression \texttt{e1 - e2} with affine sets: \begin{definition} \label{def-meet1} Let $X$ be a constrained affine set over $p$ variables. We define $Z = [ \! [ e1 \mbox{ op } e2 ] \! ] X$ by letting $Y= [ \! [ x_{p+1} := e1 - e2 ] \! ] X$ and $Z = drop_{p+1}([ \! [ x_{p+1} \mbox{ op } 0 ] \! ] Y)$, where function $drop_{p+1}$ returns the affine sets from which component $p+1$ (the intermediary variable) has been eliminated. \end{definition} As already said, tests are interpreted independently on the affine sets for the real and floating-point values. We use in Definition \ref{def-meet2} the test interpretation on constrained affine sets introduced in \cite{cav10}: \begin{definition} \label{def-meet2} Let $X=(R^X,E^X,D^X,\Phi_r^{X},\Phi_f^{X})$ be a constrained affine set. We define $Z = [ \! [ x_k \mbox{ op } 0 ] \! ] X$ by $$ \left\lbrace \begin{array}{l} (R^Z,E^Z,D^Z) = (R^X,E^X,D^X)\\ \Phi^Z_r = \Phi^X_r \bigcap \alpha\left(\varepsilon^r \mid r_{0,k}^X + \sum_{i=1}^{n} r_{i,k}^X \varepsilon_i^r \mbox{ op } 0 \right)\\ \Phi^Z_f = \Phi^X_f \bigcap \alpha\left((\varepsilon^r,\varepsilon^e) \mid r_{0,k}^X + e_{0,k}^X + \sum_{i=1}^{n} ( r_{i,k}^X + e_{i,k}^X) \varepsilon_i^r + \sum_{j=1}^{m} e_{n+j,k}^X \varepsilon_j^e \mbox{ op } 0 \right) \end{array} \right. $$ \end{definition} \begin{example} \label{ex::tests} Consider the running example. We start with $\hat r^x_{[1]} = 2 + \varepsilon_1^r$, $\hat e^x_{[1]} = u$. The condition for the real control flow to take the \texttt{then} branch is $\hat r^x_{[1]} = 2 + \varepsilon_1^r \leq 2$, thus $\Phi^r$ is $\varepsilon_1^r \in [-1,0]$.
The condition for the finite precision control flow to take the \texttt{then} branch is $ \hat f^x_{[1]} = \hat{r}^x_{[1]} + \hat{e}^x_{[1]} = 2 + \varepsilon_1^r + u \leq 2$, thus $\Phi^f$ is $\varepsilon_1^r \in [-1,-u]$. \end{example} \subsection{Interval concretisation} \label{sec::int_conc} The interval concretisation of the value of program variable $x_k$ defined by the abstract value $X=(R^X,E^X,D^X,\Phi_r^{X},\Phi_f^{X})$ is, with the notations of Section \ref{constrainedzonotopes}: \[ \left\lbrace \begin{array}{lll} \gamma_r(\hat r_k^X) &=& \Phi^X_r(r_{0,k}^X + \sum_{i=1}^n r_{i,k}^X \, \varepsilon_i^r) \\ \gamma_e(\hat e_k^X) &=& \Phi^X_f(e_{0,k}^X + \sum_{i=1}^n e_{i,k}^X \, \varepsilon_i^r + \sum_{j=1}^m e_{n+j,k}^X \, \varepsilon_j^e)\\ \gamma_d(\hat d_k^X) &=& \Phi^X_f(d_{0,k}^X + \sum_{l=1}^o d_{l,k}^X \, \varepsilon_l^d) \\ \gamma_f(\hat f_k^X) &=& \Phi^X_f(r_{0,k}^X + e_{0,k}^X + \sum_{i=1}^n (r_{i,k}^X + e_{i,k}^X)\, \varepsilon_i^r + \sum_{j=1}^m e_{n+j,k}^X \, \varepsilon_j^e) \end{array} \right. \] \begin{example} Consider variable \texttt{y} in the \texttt{else} branch of our running example. The interval concretisation of its real value on $\Phi^X_r$ is $\gamma_r(\hat r^y_{[4]}) = \Phi^X_r (2 + \varepsilon_1^r) = 2 + [0,1] = [2,3]$. The interval concretisation of its floating-point value on $\Phi^X_f$ is $\gamma_f(\hat f^y_{[4]}) = \Phi^X_f ( \hat{r}^y_{[4]} + u) = 2 + [-u,1] + u = [2,3+u]$. Actually, $\hat r^y_{[4]}$ is defined on $\Phi^X_r \cup \Phi^X_f$, as illustrated on Figure \ref{lst::ex1}, because it is used both to abstract the real value and, perturbed by an error term, to abstract the finite precision value. \end{example} In other words, the concretisation of the real value is not the same when it actually represents the real value at the control point considered ($\gamma_r(\hat r_k^X)$), or when it represents a quantity which will be perturbed to abstract the floating-point value (in the computation of $\gamma_f(\hat f_k^X)$).
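When the abstract domain $\cal A$ is instantiated with intervals, $\Phi(\hat x)$ is just an interval evaluation of the affine form under box constraints on the noise symbols. A minimal sketch (our naming; $u$ set to an assumed 0.2), reproducing the concretisations of the example above:

```python
def concretise(center, coeffs, box):
    """Interval Phi(x_hat) of an affine form under box constraints on
    the noise symbols (the interval instance of the abstract domain A)."""
    lo = hi = center
    for i, c in coeffs.items():
        a, b = box[i]
        lo += min(c * a, c * b)
        hi += max(c * a, c * b)
    return lo, hi

u = 0.2   # assumed input error bound

# Variable y in the else branch: r_hat = 2 + eps1^r, f_hat = 2 + eps1^r + u.
phi_r = {"eps1_r": (0.0, 1.0)}   # real control flow reaches [4]
phi_f = {"eps1_r": (-u, 1.0)}    # float control flow reaches [4]
print(concretise(2.0, {"eps1_r": 1.0}, phi_r))      # (2.0, 3.0)
print(concretise(2.0 + u, {"eps1_r": 1.0}, phi_f))  # about (2, 3 + u)
```

For a relational constraint domain the two bounds would instead be obtained by the linear programs of Section \ref{constrainedzonotopes}.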
\subsection{Transfer functions: arithmetic expressions} \label{arithmetic} We rely here on the transfer functions of \cite{vmcai11} for the full model of values and propagation of errors, except that some additional care is required due to the constraints on noise symbols. As quickly described in Section \ref{constrainedzonotopes}, constraints on noise symbols can be used to refine the abstraction of non affine operations. Thus, in order to soundly use the same affine set $R^X$ both for the real value and for the floating-point value, seen as a perturbation of a computation in real numbers, we use the constraints $\Phi_r^{X} \cup \Phi_f^{X}$ to abstract transfer functions for the real value $R^X$ in arithmetic expressions. Of course, we will then concretize them either for $\Phi_f^{X}$ or $\Phi_r^{X}$, as described in Section \ref{sec::int_conc}. \begin{example} \label{runex3} Consider the running example again. In Example \ref{runex2}, we computed the real form $\hat r^z$ in both branches, interpreting instruction \texttt{z=x*x}, for both sets of constraints $\Phi_r$. In order to have an abstraction of $\hat r^z$ that can be soundly used both for the floating-point and real values, we now need to compute this abstraction and linearization for $\Phi_r \cup \Phi_f$. In the \texttt{then} branch, $\varepsilon_1^r$ is now taken in $[-1,0] \cup [-1,-u] = [-1,0]$, so that $\hat r^z_{[3]} = 3.875+3 \varepsilon_1^r + 0.125 \varepsilon_3^r$ remains unchanged.
But in the \texttt{else} branch, $\varepsilon_1^r$ is now taken in $[0,1] \cup [-u,1] = [-u,1]$, so that \texttt{z=x*x} can still be linearized at $\varepsilon_1^r=0.5$, but we now have $\hat r^z_{[5]}$ linearized from $(2.5 + (\varepsilon_1^r-0.5)) (2.5 + (\varepsilon_1^r-0.5)) = 6.25 + 5 (\varepsilon_1^r-0.5) + (\varepsilon_1^r-0.5)^2$ where $-0.5-u \leq \varepsilon_1^r-0.5 \leq 0.5$, so that $\hat r^z_{[5]} = (3.75+\frac{(0.5+u)^2}{2}) + 5 \varepsilon_1^r + \frac{(0.5+u)^2}{2} \varepsilon_5^r = 3.875 + \frac{u+u^2}{2} + 5 \varepsilon_1^r + (0.125+\frac{u+u^2}{2})\varepsilon^r_5$. \end{example} \subsection{Join} \label{join} In this section, we assume given an upper bound operator $\sqcup$ on constrained affine sets, and focus on the additional term due to discontinuity in tests. As for the meet operator, we join component-wise the real and floating-point parts. But, in the same way as for the transfer functions, the join operator depends on the constraints on the noise symbols: to compute the affine set abstracting the real value, we must consider the join of constraints for real and float control flow, in order to soundly use a perturbation of the real affine set as an abstraction of the finite precision value. Let us consider the possibility of an unstable test: for a given input, the control flows of the real and of the finite precision executions differ. Then, when we join abstract values $X$ and $Y$ coming from the two branches, the difference between the floating-point value of $X$ and the real value of $Y$, $(R^X+E^X)-R^Y$, and the difference between the floating-point value of $Y$ and the real value of $X$, $(R^Y+E^Y)-R^X$, are also errors due to finite precision. The join of errors $E^X$, $E^Y$, $(R^X+E^X)-R^Y$ and $(R^Y+E^Y)-R^X$ can be expressed as $E^Z + D^Z$, where $E^Z = E^X \sqcup E^Y$ is the propagation of classical rounding errors, and $D^Z = D^X \sqcup D^Y \sqcup (R^X-R^Y) \sqcup (R^Y-R^X)$ expresses the discontinuity errors.
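As a toy illustration of the discontinuity term (a sketch of ours, not the analyzer's join algorithm), one can bound the difference of two affine forms over the intersection of the noise-symbol constraints under which an unstable test may occur; an empty intersection means the unstable test is impossible and the term vanishes. The numbers below are those of the running example for variable \texttt{y}, and the value of $u$ is an arbitrary assumption.

```python
# Sketch (ours): bound the discontinuity term between two affine forms
# c = (c0, c1) standing for c0 + c1*eps1, over the intersection of the
# noise-symbol constraints under which an unstable test may occur.
def interval_intersect(i1, i2):
    lo, hi = max(i1[0], i2[0]), min(i1[1], i2[1])
    return (lo, hi) if lo <= hi else None

def discontinuity_bound(c_float, c_real, phi_f, phi_r):
    """Interval of (c_float - c_real)(eps1) over phi_f /\\ phi_r,
    or None when the unstable test cannot occur."""
    dom = interval_intersect(phi_f, phi_r)
    if dom is None:
        return None
    d0, d1 = c_float[0] - c_real[0], c_float[1] - c_real[1]
    v0, v1 = d0 + d1 * dom[0], d0 + d1 * dom[1]
    return (min(v0, v1), max(v0, v1))

u = 2.0 ** -23  # arbitrary unit roundoff, for illustration
# float flow in the else branch (2 + eps1 on [-u,1]),
# real flow in the then branch (4 + eps1 on [-1,0]): constant gap -2
print(discontinuity_bound((2.0, 1.0), (4.0, 1.0), (-u, 1.0), (-1.0, 0.0)))
# symmetric unstable test: empty constraint intersection, no error term
print(discontinuity_bound((4.0, 1.0), (2.0, 1.0), (-1.0, -u), (0.0, 1.0)))
```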
The rest of this section will be devoted to an accurate computation of these discontinuity terms. A key point is to use the fact that we compute these terms only in the case of unstable tests, which can be expressed as an intersection of constraints on the $\varepsilon_i^r$ noise symbols. Indeed, this intersection of constraints expresses the unstable test condition as a restriction of the set of inputs (or equivalently of the $\varepsilon_i^r$) such that an unstable test is possible. The fact that the same affine set $R^X$ is used both to abstract the real value, and the floating-point value when perturbed, is also essential to get accurate bounds. \begin{definition} We join two abstract values $X$ and $Y$ by $Z = X \sqcup Y$ defined as $Z=(R^Z,E^Z,D^Z,\Phi^X_r \cup \Phi^Y_r,\Phi^X_f \cup \Phi^Y_f)$ where $$ \left\lbrace \begin{array}{l} (R^Z,\Phi^Z_r \cup \Phi^Z_f) = (R^X,\Phi^X_r \cup \Phi^X_f) \sqcup (R^Y,\Phi^Y_r \cup \Phi^Y_f) \\ (E^Z,\Phi^Z_f) = (E^X,\Phi^X_f) \sqcup (E^Y,\Phi^Y_f) \\ D^Z = D^X \sqcup D^Y \sqcup (R^X-R^Y,\Phi^X_f \sqcap \Phi^Y_r) \sqcup (R^Y-R^X,\Phi^Y_f \sqcap \Phi^X_r) \\ \end{array} \right. $$ \end{definition} \begin{example} Consider again the running example, and let us restrict ourselves for the time being to variable \texttt{y}. We join $X=(\hat{r}^y_{[2]}=4+\varepsilon_1^r,\hat{e}^y_{[2]}=u+\delta \varepsilon_2^e,0,\varepsilon_1^r \in [-1,0],(\varepsilon_1^r,\varepsilon_2^e) \in [-1,-u]\times[-1,1])$ coming from the \texttt{then} branch with $Y=(\hat{r}^y_{[4]}=2+\varepsilon_1^r,\hat{e}^y_{[4]}=u,0,\varepsilon_1^r \in [0,1],\varepsilon_1^r \in [-u,1])$ coming from the \texttt{else} branch.
Then we can compute the discontinuity error due to the first possible unstable test, when the real execution takes the \texttt{then} branch and the float execution takes the \texttt{else} branch: $\hat{r}^y_{[4]}-\hat{r}^y_{[2]} = (2+\varepsilon_1^r) - (4+\varepsilon_1^r) = -2$, for $\varepsilon_1^r \in \Phi^Y_f \cap \Phi^X_r = [-u,1] \cap [-1,0] = [-u,0]$ (note that the restriction on $\varepsilon_1^r$ is not used here but will be in more general cases). The other possibility of an unstable test, when the real execution takes the \texttt{else} branch and the float execution takes the \texttt{then} branch, occurs for $\varepsilon_1^r \in \Phi^X_f \cap \Phi^Y_r = [-1,-u] \cap [0,1] = \emptyset$: the set of inputs for which this unstable test can occur is empty, so it never occurs. We get $Z=(3+\varepsilon_6^r,u+\delta \varepsilon_2^e,-2\chi_{[-u,0]}(\varepsilon_1^r),(\varepsilon_1^r,\varepsilon_6^r) \in [-1,1]^2,(\varepsilon_1^r,\varepsilon_6^r,\varepsilon_2^e) \in [-1,1]^3)$. \end{example} \section{Technical matters} \label{sec::technical} We have given the big picture so far. Still, there are some technical matters to consider in order to efficiently compute accurate bounds for the discontinuity error in the general case. We tackle some of them in this section. \subsection{Constraint solving using slack variables} Consider the following program, where the real values of inputs $x$ and $y$ are in the range $[-1,1]$, and both have an error bounded in absolute value by some small value $u$: \begin{lstlisting}[frame=single,language=C,escapechar=@,basicstyle=\scriptsize] x := [@-@1,1] + [@-@u,u]; // [1] ; 0 < u << 1 y := [@-@1,1] + [@-@u,u]; // [2] if (x < y) t = y @-@ x; // [3] else t = x @-@ y; // [4] \end{lstlisting} The test can be unstable; we want to prove that the treatment is nevertheless continuous. Before the test, $\hat{r}^x_{[1]} = \varepsilon^r_1$, $\hat{e}^x_{[1]} = u \varepsilon^e_1$, $\hat{r}^y_{[2]} = \varepsilon^r_2$, $\hat{e}^y_{[2]} = u \varepsilon^e_2$.
The conditions for the control flow to take the \texttt{then} branch are $\varepsilon^r_1<\varepsilon^r_2$ for the real execution, and $\varepsilon^r_1 + u \varepsilon^e_1 <\varepsilon^r_2 + u \varepsilon^e_2$ for the float execution. The real value of $t$ in this branch is $\hat{r}^t_{[3]} = \varepsilon^r_2 - \varepsilon^r_1$. In the \texttt{else} branch, the conditions are the reverse and $\hat{r}^t_{[4]} = \varepsilon^r_1 - \varepsilon^r_2$. Let us consider the possibility of unstable tests. The conditions for the floating-point execution to take the \texttt{else} branch while the real execution takes the \texttt{then} branch are $\varepsilon^r_1 + u \varepsilon^e_1 \geq \varepsilon^r_2 + u \varepsilon^e_2$ and $\varepsilon^r_1<\varepsilon^r_2$, from which we can deduce $-2 u < \varepsilon^r_1-\varepsilon^r_2 < 0$. Under these conditions, we can bound $\hat{r}^t_{[4]} - \hat{r}^t_{[3]} = 2 (\varepsilon^r_1 - \varepsilon^r_2) \in [-4u,0]$. The other unstable test case is symmetric; we have thus proven that the discontinuity error is of the order of the error on the inputs, that is, the conditional block is robust. Note that on this example we needed more than interval constraints on noise symbols, and would in general have to solve linear programs. However, we can remark that the constraints on the real and floating-point parts share the same subexpressions on the $\varepsilon^r$ noise symbols. Thus, by introducing slack symbols so that the test conditions are expressed on these slack variables, we can keep full precision when solving the constraints in intervals. Here, introducing $\varepsilon_3^r = \varepsilon^r_1-\varepsilon^r_2$, the unstable test condition is expressed as $\varepsilon_3^r < 0$ and $\varepsilon_3^r > -2u$. This is akin to using the first step of the simplex method for linear programs, where slack variables are introduced to put the problem in standard form.
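The bound deduced above can be sanity-checked by random sampling (a sketch of ours, not the analyzer's constraint solver; the value of $u$ is arbitrary): every sampled input realizing the unstable test satisfies the deduced interval for the slack variable $\varepsilon_3^r = \varepsilon^r_1 - \varepsilon^r_2$ and the resulting bound on the discontinuity of \texttt{t}.

```python
import random

# Sketch (ours): sample inputs realizing the unstable test where the real
# execution takes the 'then' branch (eps1 < eps2) while the float execution
# takes the 'else' branch (eps1 + u*eta1 >= eps2 + u*eta2), and check the
# deduced bounds on the slack variable eps3 = eps1 - eps2.
u = 1e-6  # arbitrary small input error bound, for illustration
random.seed(0)
checked = 0
for _ in range(100000):
    e1 = random.uniform(-1.0, 1.0 - 2.0 * u)
    e2 = e1 + random.uniform(0.0, 2.0 * u)  # real flow: e1 < e2
    n1, n2 = random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0)
    if e1 < e2 and e1 + u * n1 >= e2 + u * n2:  # float flow: 'else'
        slack = e1 - e2
        assert -2.0 * u <= slack < 0.0          # deduced interval for eps3
        assert -4.0 * u <= 2.0 * slack <= 0.0   # discontinuity bound on t
        checked += 1
assert checked > 0  # the unstable test is actually realizable
```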
\subsection{Linearization of non affine computations near the test condition} There can be a need for more accuracy near the test conditions: one situation is when we have successive joins, where several tests may be unstable, as in the interpolator example presented in the experiments. In this case, it is necessary to keep some information on the states at the extremities when joining values (and to get rid of this extra information as soon as we exit the conditional block). More interestingly, there is a need for more accuracy near the test condition when the conditional block contains some non linear computations. \begin{example} \label{ex3} Consider again the running example. We are now interested in variable $z$. There is obviously no discontinuity around the test condition; still, our present abstraction is not accurate enough to prove so. Remember from Examples \ref{runex2} and \ref{runex3} that we linearize \texttt{x*x} in each branch for $\Phi_r \cup \Phi_f$, introducing new noise symbols $\varepsilon_3^r$ and $\varepsilon_5^r$. Let us consider the unstable test when the real execution takes the \texttt{then} branch and the floating-point execution the other branch; the corresponding discontinuity error $\hat r^z_{[5]} - \hat{r}^z_{[3]}$, under the unstable test constraint $ -u < \varepsilon_1^r < 0$, is: \begin{equation} \label{eq1} \hat{r}^z_{[5]} - \hat{r}^z_{[3]} = \frac{u+u^2}{2} + 2 \varepsilon_1^r + (0.125+\frac{u+u^2}{2})\varepsilon^r_5 - 0.125 \varepsilon_3^r. \end{equation} In this expression, from the constraint $ -u < \varepsilon_1^r < 0$ we can prove that $\frac{u+u^2}{2} + 2 \varepsilon_1^r + \frac{u+u^2}{2}\varepsilon^r_5$ is of the order of the input error $u$. But the new noise term $0.125(\varepsilon^r_5-\varepsilon_3^r)$ is only bounded by $[-0.25,0.25]$. We thus cannot prove continuity here.
This is illustrated on the left-hand side of Figure \ref{fig::linearization}, on which we represented the zonotopic abstractions $\hat r^z_{[3]}$ and $\hat r^z_{[5]}$: it clearly appears that the zonotopic abstraction is not sufficient to accurately bound the discontinuity error (in the ellipse), that will locally involve some interval-like computation. Indeed, in the linearization of $\hat r^z_{[3]}$ (resp $\hat r^z_{[5]}$), we lost the correlation between the new symbol $\varepsilon_3^r$ (resp $\varepsilon_5^r$), and symbol $\varepsilon_1^r$ on which the unstable test constraint is expressed. As a matter of fact, we can locally derive in a systematic way some affine bounds for the new noise symbols used for linearization in terms of the existing noise symbols, using the interval affine forms of \cite{SAS07}, centered at the extremities of the constraints $(\Phi_r^X \cup \Phi_f^X)(\varepsilon_i^r)$ of interest. In the \texttt{then} branch, we have $\varepsilon_1^r \in [-1,0]$, and \texttt{z=x*x} is linearized from $3.75 + (\varepsilon_1^r+0.5)+ (\varepsilon_1^r+0.5)^2$, using $(\varepsilon_1+0.5)^2 \in [0,0.25]$, into $\hat r^z_{[3]} = 3.875+3 \varepsilon_1^r + 0.125 \varepsilon_3^r$. We thus know at linearization time that $\varepsilon_3^r = f(\varepsilon_1^r) = 8(\varepsilon_1^r+0.5)^2 - 1$. Using the mean value theorem around $\varepsilon_1^r=0$ and restricting $\varepsilon_1^r \in [-0.25,0]$, we write $$\varepsilon_3^r(\varepsilon_1^r) = f(0) + \Delta \varepsilon_1^r,$$ where interval $\Delta$ bounds the derivative $f'(\varepsilon_1^r)$ in the range $[-0.25,0]$. We get $\varepsilon_3^r = 1 + 16 ([-0.25,0]+0.5) \varepsilon_1^r = 1+[4,8]\varepsilon_1^r,$ which we can also write $1+8\varepsilon_1^r \leq \varepsilon_3^r \leq 1+4\varepsilon_1^r$ for $\varepsilon_1^r \in [-0.25,0]$. 
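The affine bounds just derived can be checked directly (a sketch of ours, independent of the analyzer): on $[-0.25,0]$, the function defining the linearization symbol, $f(\varepsilon_1^r) = 8(\varepsilon_1^r+0.5)^2-1$, indeed lies between $1+8\varepsilon_1^r$ and $1+4\varepsilon_1^r$.

```python
# Sketch (ours): check the local affine bounds obtained by the mean value
# theorem for the linearization symbol eps3 = f(eps1) = 8*(eps1+0.5)**2 - 1
# on the restricted range eps1 in [-0.25, 0].
def f(eps1):
    return 8.0 * (eps1 + 0.5) ** 2 - 1.0

def bounds_hold(n=1000):
    for k in range(n + 1):
        e = -0.25 + 0.25 * k / n          # sample eps1 in [-0.25, 0]
        if not (1.0 + 8.0 * e <= f(e) <= 1.0 + 4.0 * e):
            return False
    return True

print(bounds_hold())
```

Indeed $f(\varepsilon) - (1+8\varepsilon) = 8\varepsilon^2 \geq 0$ and $(1+4\varepsilon) - f(\varepsilon) = -4\varepsilon(1+2\varepsilon) \geq 0$ on this range, so the check succeeds.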
Variable \texttt{z} can thus locally (for $\varepsilon_1^r \in [-0.25,0]$) be expressed more accurately as a function of $\varepsilon_1^r$, this is what is represented by the darker triangular region inside the zonotopic abstraction, on the right-hand side of Figure \ref{fig::linearization}. \\ \begin{figure} \centering \begin{tabular}{ll} \begin{tikzpicture}[xscale=5,yscale=0.5] \draw[xstep=0.5cm,ystep=1cm,gray,very thin] (1.5,2) grid (2.5,7); \draw[->] (1.5,2) -- (2.7,2) node[anchor=south] {$\varepsilon_1^r$}; \draw (1.5 cm,2 cm) node[below] {-0.5}; \draw (2 cm,2 cm) node[below] {0}; \draw (2.5 cm,2 cm) node[below] {0.5}; \draw[->] (1.5,2) -- (1.5,8) node[anchor=east] {$z$}; \foreach \bfm{y}/\ytext in {2,3,4,5,6,7} \draw (1.5 cm,\bfm{y} cm) node[left] {$\ytext$}; \filldraw[opacity=0.5, fill=green!30!white, draw=green!50!black] (1.5,2.25) -- (1.5,2.5) --(2,4) -- (2,3.750) -- cycle; \filldraw[opacity=0.5, fill=blue!30!white, draw=blue!50!black] (1.95,3.55) -- (1.95,3.8) -- (2.5,6.4) -- (2.5,6.15) -- cycle; \draw [domain=1.5:2.5,purple,thick,dashed] plot (\bfm{x},{\bfm{x}*\bfm{x}}); \draw[red,dashed] (1.95,2) -- (1.95,7); \draw[red,dashed] (2,2) -- (2,7); \draw[red] (1.975,3.75) ellipse (0.2 and 1); \draw[green] (1.6 cm, 2.8 cm) node[anchor=south] {$r^z_{[3]}$}; \draw[blue] (2.1 cm, 4.8 cm) node[anchor=south] {$r^z_{[5]}$}; \end{tikzpicture} & \begin{tikzpicture}[xscale=15,yscale=2] \draw[xstep=0.125cm,ystep=0.5cm,gray,very thin] (1.75,3) grid (2.,4); \draw[->] (1.75,3) -- (2.05,3) node[anchor=south] {$\varepsilon_1^r$}; \draw (1.75 cm,3 cm) node[below] {-0.25}; \draw (2. 
cm,3 cm) node[below] {0}; \draw[->] (1.75,3) -- (1.75,4.2) node[anchor=east] {$z$}; \foreach \bfm{y}/\ytext in {3,4} \draw (1.75 cm,\bfm{y} cm) node[left] {$\ytext$}; \filldraw[opacity=0.5, fill=green!30!white, draw=green!50!black] (1.75,3.0) -- (1.75,3.25) --(2,4) -- (2,3.750) -- cycle; \draw [domain=1.75:2.,purple,thick,dashed] plot (\bfm{x},{\bfm{x}*\bfm{x}}); \filldraw[opacity=0.5, fill=brown!50!white, draw=black] (1.75,3) -- (1.75,3.125) --(2,4) -- cycle; \end{tikzpicture} \end{tabular} \caption{Improvement by local linearization for non affine computations} \label{fig::linearization} \end{figure} In the same way, $\varepsilon_5^r$ can be expressed in the \texttt{else} branch as an affine form $1+\Delta' \varepsilon_1^r$ with interval coefficient $\Delta'$, so that with the unstable test constraint $ -u < \varepsilon_1^r < 0$, we can deduce from Equation (\ref{eq1}) that there exists some constant $K$ such that $|\hat{r}^z_{[5]} -\hat{r}^z_{[3]}| \leq K u$, that is the test is robust. Of course, we could refine even more the bounds for the discontinuity error by considering linearization on smaller intervals around the boundary condition. \end{example} \section{Experiments} \label{experiments} In what follows, we analyze some examples inspired by industrial codes and literature, with our implementation in our static analyzer FLUCTUAT. \paragraph{A simple interpolator} The following example implements an interpolator, affine by sub-intervals, as classically found in critical embedded software. It is a robust implementation indeed. In the code below, we used the FLUCTUAT assertion \texttt{FREAL\_WITH\_ERROR(a,b,c,d)} to denote an abstract value (of resulting type \texttt{float}), whose corresponding real values are $x \in [a,b]$, and whose corresponding floating-point values are of the form $x+e$, with $e \in [c,d]$. 
\begin{lstlisting}[language=C,frame=single,escapechar=@,basicstyle=\tiny] float R1[3], E, res; R1[0] = 0; R1[1] = 5 * 2.25; R1[2] = R1[1] + 20 * 1.1; E = FREAL_WITH_ERROR(0.0,100.0,@[email protected],0.00001); if (E < 5) res = E*2.25 + R1[0]; else if (E < 25) res = (E@-@5)*1.1 + R1[1]; else res = R1[2]; return res; \end{lstlisting} The analysis finds that the interpolated \texttt{res} is within [-2.25e-5,33.25], with an error within [-3.55e-5,2.4e-5], that is, of the order of magnitude of the input error despite unstable tests. \paragraph{A simple square root function} This example is a rewrite, in a particular case, of an actual implementation of a square root function in an industrial context: \begin{lstlisting}[language=C,frame=single,escapechar=@,basicstyle=\tiny] double sqrt2 = 1.414213538169860839843750; double S, I; I = DREAL_WITH_ERROR(1,2,0,0.001); if (I>=2) S = sqrt2*(1+(I/2@-@1)*(.5@[email protected]*(I/2@-@1))); else S = 1+(I@-@1)*(.5+(I@-@1)*(@[email protected]+(I@-@1)*.0625)); \end{lstlisting} With the former type of analysis within FLUCTUAT, we get the unsound result (but an unstable test is signalled) that \texttt{S} is proven in the real number semantics to be in [1,1.4531] with a global error in [-0.0005312,0.00008592]. As a matter of fact, the function does not exhibit a large discontinuity, but it is still larger than the error bound computed above. At value 2, the function in the \texttt{then} branch computes \texttt{sqrt2}, which is approximately 1.4142, whereas the \texttt{else} branch computes 1+0.5-0.125+0.0625=1.4375. Therefore, for instance, for a real number input of 2, and a floating-point number input of 2+$ulp(2)$, we get a computation error on $S$ of the order of 0.0233. FLUCTUAT, using the domain described in this paper, finds that \texttt{S} is in the real number semantics within [1,1.4531] with a global error within [-0.03941,0.03895], the discontinuity at the test accounting for most of it, i.e.
an error within [-0.03898,0.03898] (which is consistent with the rough estimate of 0.0233 we made). \paragraph{Transmission shift from \cite{DBLP:conf/rtss/MajumdarS09}} We consider here the program from \cite{DBLP:conf/rtss/MajumdarS09} that implements a simple model of a transmission shift: according to a measured variable \texttt{angle} and the \texttt{speed}, lookup tables are used to compute \texttt{pressure1} and \texttt{pressure2}, and to deduce the current \texttt{gear} (3 or 4 here). As noted in \cite{DBLP:conf/rtss/MajumdarS09}, \texttt{pressure1} is robust. But a small deviation in \texttt{angle} or \texttt{speed} can cause a large deviation in the output \texttt{pressure2}. As an example, when \texttt{angle} is 34 and \texttt{speed} is 14, \texttt{pressure2} is 1000. But if there is an error of 1 in the measurement of \texttt{angle}, so that its value is 35 instead of 34, then \texttt{pressure2} is found to be 0. Similarly with an error of 1 on \texttt{speed}: if it is wrongly measured to be 13 instead of 14, \texttt{pressure2} is again found equal to 0 instead of 1000. This is witnessed by our discontinuity analysis. For \texttt{angle} in [0,90], with an error in [-1,1] and \texttt{speed} in [0,40], with an error in [-1,1], we find \texttt{pressure1} equal to 1000 without error and \texttt{pressure2} in [0,1000] with an error in [-1000,1000], mostly due to the test \texttt{if (oval <= 3)} in function \texttt{lookup2\_2d}. The treatment of \texttt{gear} is found discontinuous, because of the test \texttt{if (3*speed <= val1)}. \paragraph{Householder} Let us consider the C code printed on the left-hand side of Figure \ref{fig::householder}, which presents the results of the analysis of this program by FLUCTUAT. This program computes in variable \texttt{Output} an approximation of the square root of variable \texttt{Input}, which is given here in a small interval [16.0,16.002].
The program iterates a polynomial approximation until the difference between two successive iterates \texttt{xn} and \texttt{xnp1} is smaller than some stopping criterion. At the end, it checks that something indeed close to the mathematical square root is computed, by adding the instruction \texttt{should\_be\_zero = Output-sqrt(Input);} Figure \ref{fig::householder} presents the result of the analysis for the selected variable \texttt{should\_be\_zero}, at the end of the program. The analyzer issues an unstable test warning; the corresponding line in the program is highlighted in red. On the right-hand side, bounds for the floating-point value, the real value and the error of \texttt{should\_be\_zero} are printed. The graph with the error bars represents the decomposition of the error according to its provenance, over the lines of the analyzed program: in green are standard rounding errors, in purple the discontinuity error due to unstable tests. When an error bar is selected (here, the purple one), the bounds for this error are printed in the boxes denoted ``At current point''. \begin{figure}[htbp] \begin{center} \epsfig{file=Householder_sqrt.png,width=12.cm,height=6cm,clip=} \caption{Fluctuat analysis of the Householder scheme: error due to unstable test is purple} \label{fig::householder} \end{center} \end{figure} The analyzer here proves that when the program terminates, the difference in real numbers between the output and the mathematical square root of the input is bounded by $[-1.03 \times 10^{-8},1.03 \times 10^{-8}]$: the algorithm in real numbers indeed computes something close to a square root, and the method error is of the order of the stopping criterion \texttt{eps}. The floating-point value of the difference is only bounded in $[-1.19 \times 10^{-6},1.19 \times 10^{-6}]$, and the error mainly comes from the instability of the loop condition: this signals a difficulty of this scheme when executed in single precision.
And indeed, this scheme converges very quickly in real numbers (FLUCTUAT proves that it always converges in 6 iterations for the given range of inputs), but there exist input values in [16.0,16.002] for which the floating-point program never converges. \section{Conclusion} We have proposed an abstract interpretation based static analysis of the robustness of finite precision implementations, as a generalization of both software robustness (continuity) analysis and finite precision error analysis, by abstracting the impact of finite precision in numerical computations and control flow divergences. We have demonstrated its accuracy, although it could still be improved. We could also possibly use this abstraction to automatically generate inputs and parameters leading to instabilities. In all cases, this probably involves resorting to more sophisticated constraint solving: indeed, our analysis can generate constraints on noise symbols, which we only partially use for the time being. We would thus like to follow the lines of \cite{rueher2012}, which refined the results of a previous version of FLUCTUAT using constraint solving, but with more refined interactions in the context of the present abstractions. \bibliographystyle{plain}
\section*{Introduction} The geometric mean of positive operators was introduced by Pusz and Woronowicz in terms of associated positive sesquilinear forms, and was later generalized in various directions (see \cite{B,KA,LL} for example). When this is applied to normalized positive functionals on a *-algebra (so-called states), it leads us to an approach to the theory of positive cones in the modular theory of W*-algebras, which turns out to be closely related to A.~Uhlmann's transition probability (or fidelity) between states (see \cite{AU,U1,R}). We first clarify here how inner products between square roots of states (referred to as ``transition amplitudes'' in this paper merely from their superficial appearance, without any serious physical justification) are relevant in the representation theory of C*-algebras; this is then combined with Pusz-Woronowicz' geometric mean to get a variational expression for our transition amplitudes. The result is especially useful in establishing an approximation formula, which says that, if states $\varphi$ and $\psi$ of a C*-algebra $A$ are restricted to an increasing sequence of C*-subalgebras $A_n$ with the induced states of $A_n$ denoted by $\varphi_n$ and $\psi_n$ respectively, then we have \[ (\varphi^{1/2}|\psi^{1/2}) = \lim_{n \to \infty} (\varphi_n^{1/2}|\psi_n^{1/2}) \] under a mild assumption on the density of $\cup_n A_n$ in $A$. A decomposition theory of transition amplitudes is also described in the framework of W*-algebras for further applications. \section{Geometric Mean of Positive Forms} We begin by reviewing Pusz-Woronowicz' geometric mean of positive forms, in a fashion slightly modified from the original account. Let $\alpha, \beta$ be positive (sesquilinear) forms on a complex vector space $H$.
By a \textbf{representation} of an unordered pair $\{ \alpha, \beta\}$, we shall mean a linear map $j: H \to K$ of $H$ into a Hilbert space $K$ together with (possibly unbounded) positive self-adjoint operators $A$, $B$ on $K$ such that $A$ commutes with $B$ in the strong sense, $j(H)$ is a core for the self-adjoint operator $A + B$ and \[ \alpha(x,y) = (j(x)|Aj(y)), \quad \beta(x,y) = (j(x)|Bj(y)) \] for $x, y \in H$. Note that, from the core condition, $j(H)$ is included in the domains of $A = \frac{A}{A+B+I} (A+B+I)$, $B = \frac{B}{A+B+I} (A+B+I)$ ($I$ being the identity operator) and therefore in the domains of $A^{1/2}$ and $B^{1/2}$. When $A$ and $B$ are bounded, we say that the representation is \textbf{bounded}. Note that the core condition reduces to the density of $j(H)$ in $K$ for a bounded representation. A hermitian form $\gamma$ on $H$ is said to be \textbf{dominated} by $\{ \alpha, \beta\}$ if $|\gamma(x,y)|^2 \leq \alpha(x,x)\,\beta(y,y)$ for $x, y \in H$. Note that the order of $\alpha$ and $\beta$ is irrelevant in the domination. \begin{Theorem}[Pusz-Woronowicz] Let $(j:H \to K, A,B)$ be a representation of positive forms $\alpha, \beta$ on $H$. Then, for $x \in H$, we have the following variational expression: \begin{align*} (A^{1/2}j(x)|B^{1/2}j(x)) &= \sup \{ \gamma(x,x); \text{$\gamma$ is positive and dominated by $\{ \alpha, \beta\}$} \}\\ &= \sup \{ \gamma(x,x); \text{$\gamma$ is dominated by $\{ \alpha, \beta\}$} \}. \end{align*} \end{Theorem} The positive form defined by the right hand side of the theorem is called the \textbf{geometric mean} of $\{ \alpha, \beta \}$ and denoted by $\sqrt{\alpha\beta} = \sqrt{\beta\alpha}$. \section{$L^2$-Analysis on Quasi-equivalence of States} Associated to a W*-algebra $M$, we have the standard Hilbert space $L^2(M)$, whose positive cone consists of symbols $\varphi^{1/2}$ where $\varphi$ varies in the set $M_*^+$ of normal positive linear functionals of $M$.
On the Hilbert space $L^2(M)$, $M$ is represented by compatible left and right actions in such a way that \[ (\varphi^{1/2}|x\varphi^{1/2}) = \varphi(x) = (\varphi^{1/2}|\varphi^{1/2}x) \quad \text{for $x \in M$} \] (inner products being linear in the second variable by our convention). Vectors of this type are known to satisfy the following inequalities (Powers-St\o rmer-Araki): \[ \|\varphi^{1/2} -\psi^{1/2}\|^2 \leq \| \varphi - \psi\| \leq \| \varphi^{1/2} - \psi^{1/2}\|\, \| \varphi^{1/2} + \psi^{1/2}\|. \] Note that, given a central projection $q$ in $M$, we have the following natural identifications for the reduced W*-algebra $qM = Mq$: \[ (qM)_* = qM_* = M_*q, \quad L^2(qM) = qL^2(M) = L^2(M)q. \] Also note that there is a natural bilinear map $L^2(M)\times L^2(M) \to M_* = L^1(M)$ such that $\varphi^{1/2}\times \varphi^{1/2}$ is mapped to $\varphi$. The evaluation map $M_* \ni \varphi \mapsto \varphi(1) \in \text{\ym C}$ is also denoted by $\langle \varphi\rangle = \varphi(1)$ in this paper, and it satisfies the trace property $\langle \varphi^{1/2}\psi^{1/2} \rangle = \langle \psi^{1/2}\varphi^{1/2} \rangle$. If $\varphi$ is faithful, we denote by $\Delta_\varphi$ and $J_\varphi$ the associated modular operator and modular conjugation respectively. The positive (self-adjoint) operator $\Delta_\varphi^{1/2}$ has the linear subspace $M\varphi^{1/2}$ as a core and we see \[ \Delta_\varphi^{1/2}(x\varphi^{1/2}) = \varphi^{1/2}x \quad \text{and} \quad J_\varphi(x\varphi^{1/2}) = \varphi^{1/2}x^*. \] More generally, if $\psi$ is another positive normal functional of $M$, then the half-powered relative modular operator $\Delta_{\psi,\varphi}^{1/2}$ has $M\varphi^{1/2}$ as a core and we have $\Delta_{\psi,\varphi}^{1/2}(x\varphi^{1/2}) = \psi^{1/2}x$ for $x \in M$. Consult \cite{AA, MTB} for systematic accounts of all these operations, in addition to the standard texts on modular theory such as \cite{BR1,T1,T2}.
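For commuting (diagonal) density matrices, the Powers-St\o rmer-Araki inequalities above can be verified by an elementary computation; the following finite-dimensional sketch (ours, purely illustrative and not part of the paper) checks them numerically, with $\|\varphi - \psi\|$ realized as the trace norm and the $L^2$ norm as the Hilbert-Schmidt norm.

```python
import random, math

# Sketch (ours): verify, for commuting (diagonal) density matrices,
#   ||phi^1/2 - psi^1/2||^2 <= ||phi - psi||
#                           <= ||phi^1/2 - psi^1/2|| * ||phi^1/2 + psi^1/2||
# where ||phi - psi|| is the trace norm and the L^2 norm is Hilbert-Schmidt.
random.seed(1)

def rand_diag_state(n):
    w = [random.random() for _ in range(n)]
    s = sum(w)
    return [x / s for x in w]   # eigenvalues of a density matrix

def check(n=8, trials=200):
    for _ in range(trials):
        p, q = rand_diag_state(n), rand_diag_state(n)
        trace_norm = sum(abs(a - b) for a, b in zip(p, q))
        hs_minus = math.sqrt(sum((math.sqrt(a) - math.sqrt(b)) ** 2
                                 for a, b in zip(p, q)))
        hs_plus = math.sqrt(sum((math.sqrt(a) + math.sqrt(b)) ** 2
                                for a, b in zip(p, q)))
        if not (hs_minus ** 2 <= trace_norm + 1e-12
                and trace_norm <= hs_minus * hs_plus + 1e-12):
            return False
    return True

print(check())
```

In this commuting case the two inequalities reduce to $(\sqrt{a}-\sqrt{b})^2 \leq |a-b|$ termwise and to the Cauchy-Schwarz inequality, respectively.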
\begin{Lemma} For a positive normal functional $\omega$ of a W*-algebra, let $e$ and $z$ be its support and central support respectively. Then we have the equalities \begin{gather*} L^2(zM) = zL^2(M) = L^2(M)z = \overline{M\omega^{1/2}M},\\ eL^2(M) = \overline{\omega^{1/2}M}, \quad L^2(M)e = \overline{M\omega^{1/2}}, \quad eL^2(M)e = L^2(eMe). \end{gather*} Here bar denotes the closure in $L^2(M)$. Furthermore, $\overline{M\omega^{1/2}} = \overline{\omega^{1/2}M}$ if and only if $\omega$ is faithful on $zM = Mz$. \end{Lemma} \begin{proof} For $x \in M$, \[ (\omega^{1/2}x|\omega^{1/2}e) = \omega(ex^*) = \omega(x^*) = (\omega^{1/2}x|\omega^{1/2}) \] shows that $\omega^{1/2}(1-e) = 0$; $\overline{M\omega^{1/2}} \subset L^2(M)e$. Let $\pi$ be a normal representation of $M$ on $\overline{M\omega^{1/2}}$ given by left multiplication. Since the projection to the subspace $\overline{M\omega^{1/2}}$ commutes with the left action of $M$, we can find a projection $p$ in $M$ such that $\overline{M\omega^{1/2}} = L^2(M)p$. Particularly we have $\omega(1-p) = \omega^{1/2}\omega^{1/2}(1-p) = 0$ and therefore $e \leq p$. Let $Q$ be the projection to the subspace $\overline{M\omega^{1/2}M} \subset L^2(M)$. Then $Q$ is realized by multiplication of a central projection $q$ of $M$. From $(1-q)\omega = (1-q)\omega^{1/2}\omega^{1/2} = 0$, we see that $z \leq q$; $L^2(M)z \subset \overline{M\omega^{1/2}M}$. On the other hand, $x\omega^{1/2}yz = x\omega^{1/2}zy = x\omega^{1/2}y$ shows the reverse inclusion. Assume that $\overline{M\omega^{1/2}} = \overline{\omega^{1/2}M}$. If $x \in zM$ satisfies $\omega(x^*x) = 0$, i.e., $x\omega^{1/2} = 0$, then \[ xL^2(zM) = \overline{xM\omega^{1/2}M} = \overline{x\omega^{1/2}MM} = 0 \] and hence $x = 0$. Conversely, if $\omega$ is faithful on $zM$, the associated vector $\omega^{1/2}$ is cyclic and separating for $zM$; \[ \overline{M\omega^{1/2}} = L^2(zM) = \overline{\omega^{1/2}M}. 
\] Let $\omega_e$ be the restriction of $\omega$ to $eMe$, which is faithful. Since $e$ commutes with $\omega$, the relation $a\omega_e^{1/2} = \omega_e^{1/2}b$ with $a, b \in eMe$ implies $a\omega^{1/2} = \omega^{1/2}b$ by the reduction relation for modular operators (a consequence of Connes' $2\times 2$ matrix analysis and more results can be found in \cite{PT}), which gives the unitarity of \[ L^2(eMe) \ni x \omega_e^{1/2}y \mapsto x\omega^{1/2}y \in eL^2(M)e. \] \end{proof} \begin{Remark}~ The support projection $e$ is characterized as the minimal one among projections $p$ in $M$ satisfying $\overline{M\omega^{1/2}} = L^2(M)p$. \end{Remark} \begin{Corollary} Let $\varphi$ and $\psi$ be states of a C*-algebra $A$. \begin{enumerate} \item $\varphi$ and $\psi$ are disjoint if and only if $A\varphi^{1/2}A$ and $A\psi^{1/2}A$ are orthogonal. \item $\varphi$ and $\psi$ are quasi-equivalent if and only if $\overline{A\varphi^{1/2}A} = \overline{A\psi^{1/2}A}$. \item The state $\varphi$ is pure if and only if $\overline{A\varphi^{1/2}} \cap \overline{\varphi^{1/2}A} = \text{\ym C} \varphi^{1/2}$. \end{enumerate} \end{Corollary} \begin{proof} Given a state $\varphi$ of a C*-algebra $A$, let $z(\varphi)$ be the central support of $\varphi$ in the universal envelope $A^{**}$. Then it is well-known (see \cite[Chapter~3]{P} for example) that $\varphi$ and $\psi$ are disjoint (resp.~quasi-equivalent) if and only if $z(\varphi)z(\psi) = 0$ (resp.~$z(\varphi) = z(\psi)$). Since $\overline{A\varphi^{1/2} A} = \overline{A^{**}\varphi^{1/2} A^{**}}$ in $L^2(A^{**})$, (i) and (ii) are consequences of the lemma. Let $e$ be the support of $\varphi$ in $A^{**}$. Then the identity \[ \overline{A\varphi^{1/2}} \cap \overline{\varphi^{1/2}A} = L^2(A^{**})e \cap eL^2(A^{**}) = L^2(eA^{**}e) \] shows that the condition in (iii) is equivalent to $eA^{**}e = \text{\ym C} e$, i.e., the purity of $\varphi$. 
\end{proof} Let $\omega$ be a state of a C*-algebra $A$ and $\{ \tau_t \in {\rm Aut}(A) \}_{t \in \text{\ym R}}$ be a one-parameter group of *-isomorphisms. Recall that $\omega$ and $\{ \tau_t \}$ satisfy the \textbf{KMS-condition} if the following requirements are satisfied: Given $x, y \in A$, the function $\text{\ym R} \ni t \mapsto \omega(x\tau_t(y))$ is analytically extended to a continuous function on the strip $\{ \zeta \in \text{\ym C}; -1 \leq \Im \zeta \leq 0 \}$ so that $\omega(x\tau_t(y))|_{t=-i} = \omega(yx)$. If one replaces $y$ with $\tau_s(y)$ and $x$ with $1$, then the condition takes the form $\omega(\tau_{s-i}(y)) = \omega(\tau_s(y))$ for $s \in \text{\ym R}$ and we see that the analytic function $\omega(\tau_z(y))$ is periodically extended to an entire analytic function. Thus $\omega(\tau_t(y))$ is a constant function of $t$; the automorphisms $\tau_t$ make $\omega$ invariant. \begin{Lemma} If $\omega$ satisfies the KMS-condition, then $\overline{A\omega^{1/2}} = \overline{\omega^{1/2}A}$. \end{Lemma} \begin{proof} We argue as in \cite{BR2}: By the invariance of $\omega$, a unitary operator $u(t)$ in $\overline{A\omega^{1/2}}$ is defined by $u(t)(x\omega^{1/2}) = \tau_t(x)\omega^{1/2}$, which is continuous in $t$ from the continuity assumption on the function $\omega(x\tau_t(y))$. Moreover, the function $\text{\ym R} \ni t \mapsto u(t)x\omega^{1/2}$ is analytically continued to the strip $\{ -1 \leq \Im\zeta \leq 0 \}$. By Kaplansky's density theorem and analyticity preservation for local uniform convergence, the same property holds for $x \in A^{**}$ and the KMS-condition takes the form \[ (x\omega^{1/2}|u(t)y\omega^{1/2})|_{t=-i} = (\omega^{1/2}x|\omega^{1/2}y) \quad \text{for $x, y \in A^{**}$.} \] Let $z$ be the central support of $\omega$ in $A^{**}$ and assume that $a \in zA^{**}$ satisfies $a\omega^{1/2} = 0$. 
Then, $xa\omega^{1/2} = 0$ for $x \in A^{**}$ and therefore $(\omega^{1/2}(xa)|\omega^{1/2}y) = 0$ for any $y \in A^{**}$ by analytic continuation, whence $\omega^{1/2}xa = 0$ for $x \in A^{**}$. Thus $zL^2(A^{**})a = 0$ and we have $a = 0$. \end{proof} As a simple application of our analysis, we record here a formula which describes the transition amplitude between purified states. First recall the notion of purification of states introduced by S.L.~Woronowicz (\cite{W}): Given a state $\varphi$ of a C*-algebra $A$, its purification $\Phi$ is a state on $A\otimes A^\circ$ defined by \[ \Phi(a\otimes b^\circ) = \langle \varphi^{1/2} a\varphi^{1/2}b \rangle. \] Here $A^\circ$ denotes the opposite algebra of $A$ with $a \mapsto a^\circ$ denoting the natural antimultiplicative isomorphism. From the above definition, $(a\otimes b^\circ) \Phi^{1/2} \mapsto a\varphi^{1/2} b$ gives rise to a unitary isomorphism $\overline{(A\otimes A^\circ)\Phi^{1/2}} \cong \overline{A\varphi^{1/2} A}$ and the GNS-representation of $A\otimes A^\circ$ with respect to $\Phi$ generates the von Neumann algebra $M\vee M'$ with $M = A^{**} z(\varphi)$ represented on $\overline{A \varphi^{1/2} A}$ by left multiplication. Thus $\varphi$ is a factor state if and only if $\Phi$ is a pure state. Moreover, two factor states $\varphi$ and $\psi$ of $A$ are quasi-equivalent if and only if their purifications are equivalent. \begin{Proposition} Let $\varphi$ and $\psi$ be factor states of a C*-algebra $A$ with their purifications denoted by $\Phi$ and $\Psi$ respectively. Then we have \[ (\Phi^{1/2}|\Psi^{1/2}) = (\varphi^{1/2}|\psi^{1/2})^2. \] \end{Proposition} \begin{proof} In view of the equalities \[ (\varphi^{1/2}|\psi^{1/2}) = 0 = (\Phi^{1/2}|\Psi^{1/2}) \] for disjoint $\varphi$ and $\psi$, we need to consider the case that $\varphi$ and $\psi$ are quasi-equivalent, i.e., $z(\varphi) = z(\psi)$. 
Since $\varphi$ and $\psi$ are assumed to be factor states, their purifications $\Phi$ and $\Psi$ are pure, and the associated GNS-representations of $A\otimes A^\circ$ generate the full operator algebra ${\hbox{\sy L}}(L^2(M))$. Thus, through the obvious identification $L^2({\hbox{\sy L}}(L^2(M))) = L^2(M)\otimes L^2(M)$, $\Phi^{1/2}$ and $\Psi^{1/2}$ correspond to $\varphi^{1/2}\otimes \varphi^{1/2}$ and $\psi^{1/2}\otimes \psi^{1/2}$ respectively, whence we have \[ (\Phi^{1/2}|\Psi^{1/2}) = (\varphi^{1/2}\otimes \varphi^{1/2} | \psi^{1/2}\otimes \psi^{1/2}) = (\varphi^{1/2}|\psi^{1/2})^2. \] \end{proof} \begin{Remark} For purifications of states on a commutative C*-algebra, we have $(\Phi^{1/2}|\Psi^{1/2}) = (\varphi^{1/2} | \psi^{1/2})$. The general case is a mixture of these two formulas. \end{Remark} \begin{Lemma} Let $\pi: A \to M$ be a homomorphism from a C*-algebra $A$ into a W*-algebra $M$ and assume that $\pi(A)$ is *-weakly dense in $M$. Then we have an isometry $T: L^2(M) \to L^2(A^{**})$ such that $T(\pi(a)\varphi^{1/2}\pi(b)) = a(\varphi\circ \pi)^{1/2}b$ if $\varphi \in M_*^+$ and $a, b \in A$. \end{Lemma} \begin{proof} Since the map $M_* \ni \varphi \mapsto \varphi\circ\pi \in A^*$ is norm-continuous (in fact it is contractive), we have $A^{**} \to M$ as its transposed map, which is $\pi$ when restricted to $A \subset A^{**}$. In other words, we see that $\pi$ is extended to a normal homomorphism $\widetilde\pi: A^{**} \to M$ of W*-algebras in such a way that, if $\varphi\circ\pi$ is regarded as a normal functional on $A^{**}$, it is equal to $\varphi\circ\widetilde\pi$. By our weak*-density assumption, $\widetilde\pi$ is surjective and we can find a central projection $z \in A^{**}$ so that $\ker\widetilde\pi = zA^{**}$ and $(1 - z)A^{**} \cong M$ by $\widetilde\pi$. 
From the relation \[ z(\varphi\circ\pi) = z(\varphi\circ \widetilde\pi) = \widetilde\pi(z)(\varphi\circ \widetilde\pi) = 0, \] one sees that the isomorphism $M_* \cong (1-z)A^*$ takes the form $\varphi \mapsto (1-z)(\varphi\circ\pi) = \varphi\circ\pi$, which yields the formula in question by taking square roots. \end{proof} \begin{Corollary} Let $\pi:A \to B$ be a *-homomorphism between C*-algebras and $\varphi$, $\psi$ be positive functionals of $B$. Assume that, given $a \in A$ and $b \in B$, we can find a norm-bounded sequence $\{ a_n\}_{n \geq 1}$ in $A$ such that \[ \lim_{n \to \infty} \pi(a_n)\pi(a)\varphi^{1/2} = b\pi(a)\varphi^{1/2}, \quad \lim_{n \to \infty} \pi(a_n)\pi(a)\psi^{1/2} = b\pi(a)\psi^{1/2}. \] Then we have \[ (\varphi^{1/2}|\psi^{1/2}) = ((\varphi\circ\pi)^{1/2}|(\psi\circ\pi)^{1/2}). \] \end{Corollary} \begin{proof} Let $z(\varphi)$ and $z(\psi)$ be the central projections in $B^{**}$ specified by \[ \overline{B\varphi^{1/2}B} = z(\varphi)L^2(B^{**}), \quad \overline{B\psi^{1/2}B} = z(\psi) L^2(B^{**}). \] Let $M = (z(\varphi)\vee z(\psi)) B^{**}$ and $\pi_M: A \to M$ be a homomorphism defined by $\pi_M(a) = (z(\varphi)\vee z(\psi))\pi(a)$. Let $\rho$ be the direct sum of GNS-representations associated to $\varphi$ and $\psi$. Then $\rho$ is supported by $z(\varphi)\vee z(\psi)$: $\rho$ is extended to an isomorphism of $(z(\varphi)\vee z(\psi))B^{**}$ onto $\rho(B)''$. On the other hand, by the approximation assumption, $\rho(\pi(A))$ is dense in $\rho(B)$ with respect to the strong operator topology. Thus $\pi_M(A)$ is *-weakly dense in $M$ and the lemma can be applied if one notices that $(z(\varphi)\vee z(\psi))\varphi = \varphi$ and $(z(\varphi)\vee z(\psi))\psi = \psi$ as identities in the predual of $B^{**}$. \end{proof} \begin{Example} Consider quasifree states $\varphi_S$ and $\varphi_T$ of a CCR C*-algebra $C^*(V,\sigma)$. Let $(\ |\ )$ be a positive inner product in $V$ majorizing both $S+\overline S$ and $T+\overline T$. 
For example, one may take $(x|y) = (S+ \overline{S} + T + \overline{T})(x,y)$ as before. Then the presymplectic form $\sigma$ is continuous relative to $(\ |\ )$ and, if we let $V'$ be the associated Hilbert space (i.e., the completion of $V/\ker(\ |\ )$ with respect to $(\ |\ )$), $\sigma$ induces a presymplectic form $\sigma'$ on $V'$. Moreover, $S$ and $T$ also give rise to polarizations $S'$ and $T'$ on the presymplectic vector space $(V', \sigma')$ respectively. Let $\pi: C^*(V,\sigma) \to C^*(V',\sigma')$ be the *-homomorphism induced from the canonical map $V \to V'$ ($\pi(e^{iv}) = e^{iv'}$ if $v'$ represents the quotient of $v$). Then $\pi$ satisfies the approximation condition with respect to quasifree states associated to $S'$ and $T'$ (see the proof of Proposition~4.3). Since $\varphi_S = \varphi_{S'}\circ \pi$ and similarly for $T$, we obtain \[ (\varphi_S^{1/2}| \varphi_T^{1/2}) = (\varphi_{S'}^{1/2}| \varphi_{T'}^{1/2}). \] Thus local positions of square roots of quasifree states are described under the assumption that $V$ is complete and $\sigma$ is continuous with respect to a non-degenerate inner product. \end{Example} \section{Transition Amplitude between States} Let $\omega$ be a positive functional of a C*-algebra $A$. According to \cite{PW}, we introduce two positive sesquilinear forms $\omega_L$ and $\omega_R$ on $A$ defined by \[ \omega_L(x,y) = \omega(x^*y), \qquad \omega_R(x,y) = \omega(yx^*), \quad x, y \in A. \] \begin{Lemma} Let $M$ be a W*-algebra and let $\varphi$, $\psi$ be positive normal functionals of $M$. 
Then \[ \sqrt{\varphi_L\psi_R}(x,y) = \langle \varphi^{1/2} x^*\psi^{1/2}y \rangle \quad \text{for $x, y \in M$.} \] \end{Lemma} \begin{proof} By the positivity $\langle \varphi^{1/2} x^*\psi^{1/2}x \rangle = (x\varphi^{1/2}x^*|\psi^{1/2}) \geq 0$ and the Schwarz inequality $|\langle \varphi^{1/2}x^*\psi^{1/2}y\rangle|^2 \leq \varphi(x^*x) \psi(yy^*)$, the positive form $(x,y) \mapsto \langle \varphi^{1/2} x^*\psi^{1/2}y \rangle$ is dominated by $\{ \varphi_L, \psi_R \}$. Assume for the moment that $\varphi$ and $\psi$ are faithful and consider the embedding $j: M \ni x \mapsto x\varphi^{1/2} \in L^2(M)$. Then $\varphi_L$ is represented by the identity operator, whereas $\psi(xx^*) = \| \psi^{1/2}x\|^2$ shows that $\psi_R$ is represented by the relative modular operator $\Delta$ with $\Delta^{1/2}(x\varphi^{1/2}) = \psi^{1/2} x$. Recall that $M\varphi^{1/2}$ is a core for $\Delta^{1/2}$. Thus Theorem~1.1 gives \[ \sqrt{\varphi_L\psi_R}(x,y) = (x\varphi^{1/2}| \Delta^{1/2}(y\varphi^{1/2})) = (x\varphi^{1/2}|\psi^{1/2}y) = \langle \varphi^{1/2}x^*\psi^{1/2}y \rangle. \] We now drop the assumption that $\varphi$ and $\psi$ are faithful. Let $e$ be the support projection of $\varphi+\psi$. Then it is the support of $\varphi_n = \varphi + \frac{1}{n}\psi$ and $\psi_n = \frac{1}{n}\varphi + \psi$ as well. In particular, $\varphi_n$ and $\psi_n$ are faithful on the reduced algebra $eMe$. Let $\gamma$ be a positive form on $M$ dominated by $\{ (\varphi_n)_L, (\psi_n)_R \}$. Then $\varphi_n(1-e) = 0 = \psi_n(1-e)$ shows that \[ |\gamma(x(1-e), (1-e)y)|^2 \leq \varphi_n((1-e)x^*x(1-e)) \psi_n((1-e)yy^*(1-e)) = 0, \] i.e., $\gamma(x,y) = \gamma(xe,ey)$ for $x, y \in M$, whence we have \[ \gamma(x,y) = \gamma(xe,ey) = \overline{\gamma(ey,xe)} = \overline{\gamma(eye,exe)} = \gamma(exe,eye). 
\] Since the restriction $\gamma|_{eMe}$ is dominated by $(\varphi_n|_{eMe})_L$ and $(\psi_n|_{eMe})_R$ with $\varphi_n$ and $\psi_n$ faithful on $eMe$, we have \[ \gamma(x,x) = \gamma(exe,exe) \leq \langle e\varphi_n^{1/2}ex^*e \psi_n^{1/2} ex\rangle = \langle \varphi_n^{1/2}x^*\psi_n^{1/2}x \rangle. \] Taking the limit $n \to \infty$, we obtain $\gamma(x,x) \leq \langle \varphi^{1/2}x^*\psi^{1/2}x \rangle$ in view of the Powers-St\o rmer inequality. \end{proof} \begin{Remark}~ \begin{enumerate} \item The case $\varphi = \psi$ was dealt with in the proof of \cite[Theorem~3.1]{PW} under the separability assumption on $M_*$. \item In the notation of \cite{U2}, we have $QF_t(\varphi_L,\psi_R)(x,y) = \langle \varphi^{1-t}x^*\psi^ty \rangle$ for $0 \leq t \leq 1$ and $x, y \in M$. \end{enumerate} \end{Remark} Given a positive functional $\varphi$ of a C*-algebra $A$, let $\widetilde\varphi$ be the associated normal functional on the W*-envelope $A^{**}$ through the canonical duality pairing. \begin{Lemma} Let $\varphi$ and $\psi$ be positive functionals on a C*-algebra $A$ with $\widetilde\varphi$ and $\widetilde\psi$ the corresponding normal functionals on $A^{**}$. Then \[ \sqrt{\varphi_L\psi_R}(x,y) = \langle {\widetilde \varphi}^{1/2} x^* {\widetilde\psi}^{1/2}y \rangle \quad \text{for $x, y \in A \subset A^{**}$.} \] \end{Lemma} \begin{proof} The positive form $A\times A \ni (x,y) \mapsto \langle {\widetilde \varphi}^{1/2} x^* {\widetilde\psi}^{1/2}y \rangle$ (recall that $x^*{\widetilde\psi}^{1/2}x$ is in the positive cone to see the positivity) is dominated by ${\widetilde\varphi}_L$ and ${\widetilde\psi}_R$ because of \[ |\langle {\widetilde \varphi}^{1/2} x^* {\widetilde\psi}^{1/2}y \rangle|^2 \leq {\widetilde\varphi}(x^*x) {\widetilde\psi}(yy^*) = \varphi(x^*x) \psi(yy^*). 
\] Consequently, \[ \langle {\widetilde \varphi}^{1/2} x^* {\widetilde\psi}^{1/2}x \rangle \leq \sqrt{\varphi_L\psi_R}(x,x) \quad \text{for $x \in A$.} \] To get the reverse inequality, let $\gamma$ be a positive form on $A\times A$ dominated by $\varphi_L$ and $\psi_R$. Then we have the domination inequality \[ |\gamma(x,y)|^2 \leq \varphi(x^*x) \psi(yy^*) = \| x{\widetilde\varphi}^{1/2}\|^2\, \| {\widetilde\psi}^{1/2}y\|^2. \] Since $A$ is dense in $A^{**}$ relative to the $\sigma^*$-topology, we see that $\gamma$ is extended to a positive form $\widetilde\gamma$ on $A^{**}\times A^{**}$ so that \[ |\widetilde\gamma(x,y)|^2 \leq \| x{\widetilde\varphi}^{1/2}\|^2\, \| {\widetilde\psi}^{1/2} y\|^2 \quad \text{for $x, y \in A^{**}$,} \] whence \[ \gamma(x,x) = \widetilde\gamma(x,x) \leq \sqrt{{\widetilde\varphi}_L {\widetilde\psi}_R}(x,x) = \langle {\widetilde \varphi}^{1/2} x^* {\widetilde\psi}^{1/2}x \rangle \quad \text{for $x \in A$.} \] Maximization on $\gamma$ then yields the inequality \[ \sqrt{\varphi_L\psi_R}(x,x) \leq \langle {\widetilde \varphi}^{1/2} x^* {\widetilde\psi}^{1/2}x \rangle \quad \text{for $x \in A$} \] and we are done. \end{proof} \begin{Corollary} Given a normal state $\varphi$ of a W*-algebra $M$, let $\widetilde\varphi$ be the associated normal state of the second dual W*-algebra $M^{**}$. Then \[ L^2(M) \ni \varphi^{1/2} \mapsto {\widetilde\varphi}^{1/2} \in L^2(M^{**}) \] defines an isometry of $M$-$M$ bimodules. \end{Corollary} \begin{proof} Combining two lemmas just proved, we have \[ \langle \varphi^{1/2} x^*\psi^{1/2}y \rangle = \sqrt{\varphi_L\psi_R}(x,y) = \langle {\widetilde \varphi}^{1/2} x^* {\widetilde\psi}^{1/2}y \rangle \] for $x, y \in M$. 
\end{proof} In what follows, $\varphi^{1/2}$ is identified with ${\widetilde\varphi}^{1/2}$ via the isometry just established: Given a positive normal functional $\varphi$ of a W*-algebra $M$, $\varphi^{1/2}$ is used to stand for a vector commonly contained in the increasing sequence of Hilbert spaces \[ L^2(M) \subset L^2(M^{**}) \subset L^2(M^{****}) \subset \dots. \] In accordance with this convention, the formula in the previous lemma is simply expressed by \[ (x\varphi^{1/2}|\psi^{1/2}y) = \sqrt{\varphi_L\psi_R}(x,y) \quad \text{for $x, y \in A$.} \] Here the left hand side is the inner product in $L^2(A^{**})$, whereas the right hand side is the geometric mean of positive forms on the C*-algebra $A$. Note that the formula is compatible with the invariance of geometric means: $\sqrt{\varphi_L\psi_R}(x,y) = \sqrt{\psi_L\varphi_R}(y^*,x^*) = \sqrt{\varphi_R\psi_L}(y^*,x^*)$. \begin{Remark}~ \begin{enumerate} \item When $\varphi$ and $\psi$ are vector states of a full operator algebra ${\hbox{\sy L}}(\mathscr H)$ associated to normalized vectors $\xi, \eta$ in $\mathscr H$, the inner product $(\varphi^{1/2}|\psi^{1/2})$ is reduced to the transition \textit{probability} $|(\xi|\eta)|^2$. Moreover, in view of the inequality $t \varphi^{1/2} + (1-t)\psi^{1/2} \leq (t\varphi + (1-t)\psi)^{1/2}$ for $0 \leq t \leq 1$ (which follows from $(\varphi^{1/2} - \psi^{1/2})^2 \geq 0$), our transition amplitude meets the requirements for transition probability listed in \cite{S}. \item Let $P(\varphi,\psi)$ be the transition probability between states in the sense of A.~Uhlmann (\cite{U1}). Then we have $P(\varphi,\psi) = \langle |\varphi^{1/2}\psi^{1/2}| \rangle^2$ (cf.~\cite{R}) and \[ (\varphi^{1/2}|\psi^{1/2})^2 \leq P(\varphi,\psi) \leq (\varphi^{1/2}|\psi^{1/2}). \] \end{enumerate} \end{Remark} \section{Approximation on Transition Amplitudes} In this section, we shall see how transition amplitudes are approximated by states obtained by restriction to subalgebras. 
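Before doing so, we note that the finite-dimensional content of the preceding remark can be checked numerically: for states of a matrix algebra $M_n$ given by density matrices $\rho$, $\sigma$, the transition amplitude is $(\varphi^{1/2}|\psi^{1/2}) = \mathrm{Tr}(\rho^{1/2}\sigma^{1/2})$ in the Hilbert--Schmidt realization of $L^2(M_n)$, while Uhlmann's transition probability is $P(\varphi,\psi) = (\mathrm{Tr}\,|\rho^{1/2}\sigma^{1/2}|)^2$. The following sketch (the helper functions are ours, not part of the text) verifies the sandwich inequality of the remark on random states:

```python
import numpy as np

def psd_sqrt(a):
    """Square root of a positive semidefinite Hermitian matrix via eigh."""
    w, v = np.linalg.eigh(a)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.conj().T

def density(m):
    """Normalize m m* into a density matrix (positive, trace one)."""
    p = m @ m.conj().T
    return p / np.trace(p).real

rng = np.random.default_rng(7)
rho = density(rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3)))
sig = density(rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3)))

# transition amplitude (phi^{1/2}|psi^{1/2}) = Tr(rho^{1/2} sigma^{1/2})
amp = np.trace(psd_sqrt(rho) @ psd_sqrt(sig)).real

# Uhlmann transition probability P = (Tr|rho^{1/2} sigma^{1/2}|)^2,
# computed from the singular values of rho^{1/2} sigma^{1/2}
P = np.linalg.svd(psd_sqrt(rho) @ psd_sqrt(sig), compute_uv=False).sum() ** 2

print(f"amplitude = {amp:.4f}, Uhlmann P = {P:.4f}")
```

For commuting $\rho$ and $\sigma$ the two quantities $P$ and $\mathrm{amp}^2$ coincide; in general only the inequalities $\mathrm{amp}^2 \leq P \leq \mathrm{amp}$ of the remark hold.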
\begin{Lemma}[cf.~{\cite[Proposition~17]{U2}}] Let $\Phi: A \to B$ be a unital Schwarz map between unital C*-algebras. Then, for positive linear functionals $\varphi, \psi$ of $B$, \[ (\varphi^{1/2}|\psi^{1/2}) \leq ((\varphi\circ\Phi)^{1/2}| (\psi\circ \Phi)^{1/2}). \] \end{Lemma} \begin{proof} Let $\gamma: B \times B \to \text{\ym C}$ be a positive form dominated by $\{ \varphi_L,\psi_R\}$. Then \[ |\gamma(\Phi(x),\Phi(y))|^2 \leq \varphi(\Phi(x)^*\Phi(x)) \psi(\Phi(y)\Phi(y)^*) \leq \varphi(\Phi(x^*x)) \psi(\Phi(yy^*)) \] shows that the positive form $A \times A \ni (x,y) \mapsto \gamma(\Phi(x),\Phi(y))$ is dominated by $\{ (\varphi\circ \Phi)_L, (\psi\circ \Phi)_R \}$. Thus \[ \gamma(1,1) = \gamma(\Phi(1),\Phi(1)) \leq \sqrt{(\varphi\circ\Phi)_L(\psi\circ\Phi)_R}(1,1) = ((\varphi\circ\Phi)^{1/2} | (\psi\circ\Phi)^{1/2}). \] Maximizing $\gamma(1,1)$ with respect to $\gamma$, we obtain the inequality. \end{proof} \begin{Theorem} Let $\varphi$ and $\psi$ be positive functionals on a C*-algebra $A$ with unit $1_A$. Let $\{ A_n \}_{n \geq 1}$ be an increasing sequence of C*-subalgebras of $A$ containing $1_A$ in common and assume that, given any $a \in A$, we can find a sequence $\{ a_n \in A_n \}_{n \geq 1}$ satisfying \[ \lim_{n \to \infty} a_n\varphi^{1/2} = a\varphi^{1/2}, \quad \lim_{n \to \infty} \psi^{1/2}a_n = \psi^{1/2}a \] in norm topology. Set $\varphi_n = \varphi|_{A_n}, \psi_n = \psi|_{A_n} \in A_n^*$. Then the sequence $\{ (\varphi_n^{1/2}|\psi_n^{1/2}) \}_{n \geq 1}$ is decreasing and converges to $(\varphi^{1/2}|\psi^{1/2})$. \end{Theorem} \begin{proof} The sequence $\{ (\varphi_n^{1/2}|\psi_n^{1/2}) \}$ is decreasing with $(\varphi^{1/2}|\psi^{1/2})$ a lower bound by the previous lemma. Let $e_n$ and $f_n$ be projections on $L^2(A^{**})$ defined by \[ e_n L^2(A^{**}) = \overline{A_n \varphi^{1/2}}, \quad f_nL^2(A^{**}) = \overline{\psi^{1/2}A_n}. 
\] Choose positive forms $\gamma_n: A_n \times A_n \to \text{\ym C}$ for $n \geq 1$ so that $\gamma_n$ is dominated by $\{ (\varphi_n)_L, (\psi_n)_R \}$ and satisfies \[\gamma_n(1,1) \geq (\varphi_n^{1/2}|\psi_n^{1/2}) - 1/n. \] From the domination estimate on $\gamma_n$, we can find a linear map $C_n': \overline{\psi^{1/2} A_n} \to \overline{A_n \varphi^{1/2}}$ such that \[ \gamma_n(x,y) = (x\varphi^{1/2}|C_n'(\psi^{1/2}y)) \quad \text{for $x, y \in A_n$,} \] which satisfies $\| C_n'\| \leq 1$. Let $C_n = e_nC_n'f_n: \overline{\psi^{1/2}A} \to \overline{A\varphi^{1/2}}$. Since $\| C_n \| \leq 1$, we may assume that $C_n \to C$ in weak operator topology by passing to a subsequence if necessary. Now set \[ \gamma(x,y) = (x\varphi^{1/2}|C(\psi^{1/2}y)), \] which is a sesquilinear form on $A$ satisfying $|\gamma(x,y)| \leq \| x\varphi^{1/2}\|\, \| \psi^{1/2}y\|$. Moreover, if $x \in A_m$ for some $m \geq 1$, \[ \gamma(x,x) = \lim_{n \to \infty} (x\varphi^{1/2}| C_n(\psi^{1/2}x)) = \lim_{n \to \infty} \gamma_n(x,x) \geq 0 \] shows that $\gamma$ is positive on $\displaystyle \bigcup_{m \geq 1} A_m$ and hence on $A$ by the approximation assumption. Thus, $\gamma$ is a positive form dominated by $\{ \varphi_L, \psi_R \}$ and we have \begin{align*} (\varphi^{1/2}|\psi^{1/2}) &\geq \gamma(1,1) = \lim_{n \to \infty} (\varphi^{1/2}|C_n\psi^{1/2}) = \lim_{n \to \infty} \gamma_n(1,1)\\ &\geq \lim_{n \to \infty} \left( (\varphi_n^{1/2}|\psi_n^{1/2}) - \frac{1}{n} \right) = \lim_{n \to \infty} (\varphi_n^{1/2}|\psi_n^{1/2}). \end{align*} \end{proof} As a concrete example, we have the following situation in mind: Let $(V,\sigma)$ be a real presymplectic vector space and $C^*(V,\sigma)$ be the associated C*-algebra. 
Let $\varphi$ and $\psi$ be quasifree states of $C^*(V,\sigma)$ associated to covariance forms $S$ and $T$ respectively: \[ \varphi(e^{ix}) = e^{-S(x,x)/2}, \quad \psi(e^{ix}) = e^{-T(x,x)/2} \quad \text{for $x \in V$.} \] Note that $S$ is a positive form on the complexification $V^\text{\ym C}$ satisfying $S(x,y) - \overline{S(x,y)} = i\sigma(x,y)$ for $x, y \in V$ and similarly for $T$. Let $\{ V_n \}_{n \geq 1}$ be an increasing sequence of subspaces of $V$ and assume that $\displaystyle \bigcup_{n \geq 1}V_n$ is dense in $V$ with respect to the inner product \[ V\times V \ni (x,y) \mapsto (x|y) \equiv S(x,y) + \overline{S(x,y)} + T(x,y) + \overline{T(x,y)} \in \text{\ym R}. \] Note that $(x|x)$ may vanish on non-zero $x \in V$. Let $A_n$ be the C*-subalgebra of $A = C^*(V,\sigma)$ generated by $\{ e^{ix}; x \in V_n \}$. \begin{Proposition} In the setting described above, the increasing sequence $\{ A_n \}_{n \geq 1}$ satisfies the approximation property with respect to $\varphi$, $\psi$. \end{Proposition} \begin{proof} For $x \in V$, choose a sequence $x_n \in V_n$ so that $(x_n - x| x_n - x) \to 0$. Then, for any $y \in V$, \[ \| (e^{ix_n} - e^{ix}) e^{iy}\varphi^{1/2}\|^2 = 2 - 2e^{-S(x_n-x,x_n-x)/2} \Re e^{i\sigma(x_n-x,y+x/2)} \to 0 \] because of the continuity of $\sigma$ with respect to $(\ |\ )$. Similarly, we see $\| \psi^{1/2}e^{iy}(e^{ix_n} - e^{ix}) \|^2 \to 0$. Since any $a \in C^*(V,\sigma)$ is approximated in norm by a finite linear combination of $e^{ix}$'s, we are done. \end{proof} \section{Central Decomposition} In this final section, we describe a decomposition theory for transition amplitudes between normal states, which will be effectively used in \cite{GQFS}. Let $M$ be a W*-algebra with a separable predual and $Z$ be a central W*-subalgebra of $M$. Since $Z$ is a commutative W*-algebra, we have an expression $Z = L^\infty(\Omega)$ with $\Omega$ a measurable space furnished with a measure class $d\omega$. 
If we choose a measure $\mu$ representing the measure class $d\omega$, then it further induces a decomposition of the form \[ M = \int_{L^\infty(\Omega)}^\oplus M_\omega\, d\omega \quad \text{on} \quad L^2(M) = \int_{L^2(\Omega)}^\oplus L^2(M_\omega)\, \mu(d\omega). \] Here $\{ M_\omega \}$ is a measurable family of W*-algebras and each normal functional $\varphi$ of $M$ is expressed by a measurable family $\{ \varphi_\omega \}_{\omega \in \Omega}$ of normal functionals in such a way that \[ \varphi(x) = \int_\Omega \varphi_\omega(x_\omega)\, \mu(d\omega) \quad \text{if} \ x = \int_{L^\infty(\Omega)}^\oplus x_\omega\, d\omega \] and the $L^2$-identification is given by \[ (\varphi^{1/2}|\psi^{1/2}) = \int_\Omega (\varphi_\omega^{1/2}|\psi_\omega^{1/2})\, \mu(d\omega). \] This can be seen as follows: \begin{Lemma} Let $\{ M_\omega, {\hbox{\sy H}}_\omega\}$ be a measurable family of von Neumann algebras and set \[ M = \int_{L^\infty}^\oplus M_\omega\,d\omega, \quad {\hbox{\sy H}} = \int_{L^2}^\oplus {\hbox{\sy H}}_\omega\,\mu(d\omega). \] Let $\xi = \int^\oplus \xi_\omega \mu(d\omega)$ be a vector in ${\hbox{\sy H}}$. Then $\xi$ is cyclic for $M$ if and only if $\xi_\omega$ is a cyclic vector of $M_\omega$ for a.e.~$\omega$. \end{Lemma} \begin{proof} The `only if' part follows from the fact that \[ \int_{L^2}^\oplus (M_\omega\xi_\omega)^\perp\, \mu(d\omega) \] is a subspace orthogonal to $M\xi$. For the `if' part, we use the commutant formula \[ M' = \int_{L^\infty}^\oplus M_\omega'\,d\omega \] and check that, if $\xi_\omega$ is a separating vector of $M_\omega'$ for a.e.~$\omega$, then \[ x' \xi = \int_{L^2}^\oplus x'_\omega \xi_\omega\, \mu(d\omega) = 0 \] for $x' = \int^\oplus x'_\omega\,d\omega \in M'$ implies $x'_\omega \xi_\omega = 0$ for a.e.~$\omega$ and hence $x'_\omega = 0$ for a.e.~$\omega$, i.e., $x' = 0$. 
\end{proof} By replacing $(M_\omega,\mathscr H_\omega)$ with $(M_\omega\otimes 1_\mathscr K,\mathscr H_\omega\otimes \mathscr K)$ ($\mathscr K$ being a Hilbert space) and then restricting to cyclic subspaces, we may assume that we can find a cyclic and separating $\xi$ for the von Neumann algebra $M$. Then $\xi_\omega \in \mathscr H_\omega$ is a cyclic and separating vector of $M_\omega$ for a.e.~$\omega$. Let $J_\omega$ be the modular conjugation associated to $\xi_\omega$ (these are defined up to null sets). Then, from the relevant definitions in modular theory, we see that $\{J_\omega\}$ is a measurable family of operators and \[ J = \int_{L^\infty}^\oplus J_\omega\,d\omega \] gives the modular conjugation associated to $\xi$. Let \[ \varphi = \int_{L^1(\Omega)}^\oplus \varphi_\omega\,\mu(d\omega), \quad \psi = \int_{L^1(\Omega)}^\oplus \psi_\omega\,\mu(d\omega) \] and choose $\xi$ so that both $\varphi$ and $\psi$ are majorized by the functional $(\xi|\cdot\xi)$. By a Radon-Nikodym type theorem (cf.~\cite{A}), we can find $a, b \in M$ such that $\varphi$, $\psi$ are associated to the vectors $a\xi a^* = aJaJ\xi$, $b\xi b^* = bJbJ\xi$ respectively. (In the notation of \cite{AA}, we can choose $a = \varphi^{1/4}\xi^{-1/2}$, $b = \psi^{1/4}\xi^{-1/2}$.) Then $\varphi_\omega$ and $\psi_\omega$ are represented by vectors $a_\omega J_\omega a_\omega J_\omega \xi_\omega$ and $b_\omega J_\omega b_\omega J_\omega \xi_\omega$ for a.e.~$\omega$. Thus, for $x, y \in M$, \[ \langle x_\omega \varphi_\omega^{1/2} y_\omega \psi_\omega^{1/2}\rangle = (a_\omega \xi_\omega a_\omega^*x_\omega^*| y_\omega b_\omega \xi_\omega b_\omega^*) \] is a measurable function of $\omega$ and we have \[ \langle x\varphi^{1/2}y\psi^{1/2}\rangle = \int_\Omega \ \langle x_\omega \varphi_\omega^{1/2} y_\omega \psi_\omega^{1/2}\rangle\, \mu(d\omega). 
\] Consequently, $\{ L^2(M_\omega) \}$ is a measurable family of Hilbert spaces in such a way that for any $\varphi \in M_*^+$, the decomposed family $\{ \varphi_\omega^{1/2} \}$ is measurable. Moreover, we have a decomposable unitary \[ L^2(M) \ni x\varphi^{1/2}y \mapsto \int_{L^2(\Omega)}^\oplus x_\omega \varphi_\omega^{1/2} y_\omega\, \mu(d\omega). \] We shall now rewrite the results so far to fit into representation theory of C*-algebras. Let $\{ A(\omega) \}$ be a family of quotient C*-algebras of a C*-algebra $A$ indexed by elements in a standard Borel space $\Omega$. Denote by $a(\omega) \in A(\omega)$ the quotient element of $a \in A$. A positive functional $\varphi$ of a C*-algebra $A$ is said to be \textbf{separable} if the Hilbert space $\overline{A\varphi^{1/2}A}$ is separable. A family $\{ \varphi_\omega \in A(\omega)^*_+ \}_{\omega \in \Omega}$ of positive functionals is said to be \textbf{measurable} if $\varphi_\omega$ is separable for each $\omega \in \Omega$ and \[ \omega \mapsto \langle a(\omega)\varphi_\omega^{1/2} b(\omega) \varphi_\omega^{1/2}\rangle \] is measurable for $a, b \in A$. Given a measurable family of states $\{ \varphi_\omega \}$, $\{ \overline{A(\omega)\varphi_\omega^{1/2} A(\omega)} \}_{\omega \in \Omega}$ is a measurable family of Hilbert spaces in an obvious way. If we are further given a probability measure $\mu$, \[ \varphi(x) = \int_\Omega \varphi_\omega(x(\omega))\, \mu(d\omega) \] defines a state of $A$. A measurable family of positive functionals $\{ \varphi_\omega \}$ is said to be \textbf{disjoint} with respect to a probability measure $\mu$ of $\Omega$ if $\int_{\Omega'} \varphi_\omega \mu(d\omega)$ and $\int_{\Omega''} \varphi_{\omega} \mu(d\omega)$ are disjoint whenever $\Omega' \cap \Omega'' = \emptyset$. 
\begin{Proposition}~ \begin{enumerate} \item Given a probability measure $\mu$ and a $\mu$-disjoint family of separable states $\{ \varphi_\omega \}$, the integrated state $\varphi = \int \varphi_\omega \mu(d\omega)$ is separable and we have a unitary map \[ \overline{A\varphi^{1/2}A} \ni a \varphi^{1/2} b \mapsto \int_{L^2(\Omega)}^\oplus a(\omega)\varphi_\omega^{1/2} b(\omega) \mu(d\omega) \in \int_{L^2(\Omega)}^\oplus \overline{A(\omega) \varphi_\omega^{1/2} A(\omega)}\, \mu(d\omega). \] \item Let $\{ \psi_\omega\}$ be another $\mu$-disjoint family of separable states with $\psi = \int \psi_\omega \mu(d\omega)$ the integrated state of $A$. Then, for $a, b \in A$, $\langle a(\omega)\varphi_\omega^{1/2} b(\omega) \psi_\omega^{1/2} \rangle$ is measurable and \[ \langle a \varphi^{1/2} b \psi^{1/2}\rangle = \int_\Omega \langle a(\omega) \varphi_\omega^{1/2} b(\omega) \psi_\omega^{1/2}\rangle \mu(d\omega). \] \end{enumerate} \end{Proposition} \begin{proof} Let $M$ be the von Neumann algebra generated by the left multiplication of $A$ on $\overline{A\varphi^{1/2}A}$. Then, by the disjointness of $\{ \varphi_\omega\}$, $L^\infty(\Omega,\mu) \subset M$ and we can apply the results on W*-algebras. \end{proof}
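As a finite toy instance of the decomposition formula (a sketch of ours, not from the text): take $M = M_2 \oplus M_2$ with $Z = L^\infty(\Omega)$ for $\Omega = \{1,2\}$ and $\mu$ the counting measure. Normal functionals are block-diagonal density matrices $\rho = \rho_1 \oplus \rho_2$ with fibres $\varphi_\omega(x) = \mathrm{Tr}(\rho_\omega x_\omega)$, and the transition amplitude $\mathrm{Tr}(\rho^{1/2}\sigma^{1/2})$ decomposes as the sum of the fibrewise amplitudes:

```python
import numpy as np

def psd_sqrt(a):
    """Square root of a positive semidefinite Hermitian matrix via eigh."""
    w, v = np.linalg.eigh(a)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.conj().T

def rand_psd(rng, n):
    """A random positive semidefinite n x n matrix."""
    m = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return m @ m.conj().T

rng = np.random.default_rng(1)
# fibres of rho and sigma over Omega = {1, 2} (counting measure)
rho_blocks = [rand_psd(rng, 2), rand_psd(rng, 2)]
sig_blocks = [rand_psd(rng, 2), rand_psd(rng, 2)]

# normalize the integrated functionals to states
zr = sum(np.trace(b).real for b in rho_blocks)
zs = sum(np.trace(b).real for b in sig_blocks)
rho_blocks = [b / zr for b in rho_blocks]
sig_blocks = [b / zs for b in sig_blocks]

# global amplitude on M = M_2 (+) M_2, via block-diagonal density matrices
zero = np.zeros((2, 2))
rho = np.block([[rho_blocks[0], zero], [zero, rho_blocks[1]]])
sig = np.block([[sig_blocks[0], zero], [zero, sig_blocks[1]]])
total = np.trace(psd_sqrt(rho) @ psd_sqrt(sig)).real

# fibrewise amplitudes, integrated (here: summed) over Omega
fibre_sum = sum(np.trace(psd_sqrt(r) @ psd_sqrt(s)).real
                for r, s in zip(rho_blocks, sig_blocks))

print(total, fibre_sum)
```

The two numbers agree because the square root of a block-diagonal matrix is the block-diagonal matrix of the square roots, which is the finite-dimensional shadow of the unitarity of the decomposition map in the Proposition.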
\section{Introduction} In the quest for exact solutions of the Einstein-Maxwell (EM) equations considerable research has been devoted to the study of aligned EM fields, in which at least one of the principal null directions (PNDs) of the electromagnetic field $\mathbf {F}$ is parallel to a PND of the Weyl tensor, a so called Debever-Penrose (DP) direction. One of the main triumphs of this effort, spread out\footnote{see for example the reviews in \cite{GrifPod, Kramer}} between 1960 and 1980, has been the complete integration of the field equations (with a possible nonzero cosmological constant $\Lambda $), for the Petrov type D doubly aligned non-null EM fields, in which \emph{both} real PNDs of $\mathbf {F}$ are parallel to a corresponding double DP vector and are geodesic as well as shear-free, the so called\cite{DebeverMcLen1981} class $\mathcal{D}$ metrics\footnote{this class contains famous examples such as the Reissner-Nordstr{\"o}m and Kerr-Newman solutions and, together with the Pleba\'{n}ski-Hacyan space-times\cite{PlebHacyan79} and Garc{\'\i}a-Pleba\'{n}ski space-times\cite{GarciaPleban}, represents the general solution for the doubly aligned Petrov type D EM fields}. In a recent study\cite{NVdB2017} of non-aligned algebraically special EM fields it was noted that, at least for nonzero cosmological constant $\Lambda $, the double alignment condition of the class $\mathcal{D}$ metrics is actually a consequence of their multiple DP vectors being geodesic and shear-free. Therefore this is also a necessary condition for the existence of a 2-index Killing spinor, with the consequence of enabling\cite{WalkerPenrose70} to completely integrate the null geodesic equation for the whole class $\mathcal{D}$. 
A natural question therefore arises as to whether EM solutions exist which are of Petrov type D, have $\Lambda =0$ and in which the two real DP vectors $\mathbf {k} ,\mathbf {l }$ are geodesic and shear-free, but are \emph{both non-aligned}\footnote{a related question for Petrov type III was dealt with recently in \cite{NVdB2018}} with the PNDs of a non-null electromagnetic field $\mathbf {F}$. While the ``Kundt'' case of vanishing divergence of either $\mathbf {k}$ or $\mathbf {l }$ (i.e.~$\rho$ or $\mu=0$) can be dismissed, as it immediately implies at least half-alignment\footnote{one can also prove that ``half-Kundt'' necessarily implies ``double Kundt'' and hence double alignment}, the general case with $\rho \mu \neq 0$ remained elusive, even under the simplifying ``double Robinson-Trautman'' (RT) assumption that $\mathbf {k}$ and $\mathbf {l }$ are both non-twisting. In this paper we give an affirmative answer to the above question. We present all corresponding double RT space-times satisfying the extra condition that the complex null vectors of the Weyl canonical tetrad are hypersurface orthogonal and discuss some of their properties. The structure of the paper is as follows: in \S\ref{Main_eqs} we set up a suitable null tetrad, present the relevant Geroch-Held-Penrose\cite{GHP} (GHP) equations and show that the ``normalised'' Maxwell components $\frac{\Phi_0}{\rho \overline{\pi}}$ and $\frac{\Phi_2}{\mu \pi}$ are opposite complex numbers, allowing us to write $\Phi_0 = \rho \overline{\pi} C_0 f$, $\Phi_2 = - \mu \pi C_0 f$ with $f$ and $C_0$ $(0,0)$-weighted GHP variables.\footnote{$f$ positive and $| C_0 | = 1$} A completely integrable system is then constructed for the GHP variables describing the situation at hand. In \S\ref{MainRTeqs} we translate this into the corresponding Newman-Penrose (NP) variables. We then obtain a final system of partial differential equations and construct its general solution. 
In \S\ref{Discussion} some properties of the resulting metrics are discussed.\\ Throughout we assume that the reader is familiar with the GHP and NP formalisms, but for convenience a short overview of GHP is presented in the Appendix.\\ For notations and sign conventions we refer to \cite{Kramer}. \section{Main equations}\label{Main_eqs} Investigating non-aligned Einstein-Maxwell fields first requires choosing an appropriate null tetrad, either adapting it to the Weyl tensor or to the electromagnetic field. Both approaches can have their advantages, but here, as we aim to study non-null Einstein-Maxwell fields of Petrov type D, with an additional assumption on the DP vectors (namely their being geodesic and shear-free), it appears preferable to use a canonical Weyl tetrad. The relevant equations are then obtained by substituting $\Psi_0=\Psi_1=\Psi_3=\Psi_4=0$, together with $\kappa=\nu=\sigma=\lambda=\overline{\rho} -\rho = \overline{\mu} -\mu = 0$ ($\mathbf {k} ,\mathbf {l }$ are assumed to be geodesic, shear-free and non-twisting) into equations (\ref{ghp1}-\ref{bi4}) of the Appendix. 
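As a small illustration of the effect of these substitutions (a sketch, with the $\kappa$- and $\tau$-dependent terms indicated only schematically), the GHP Ricci identity governing $\textrm{\TH}\rho$ collapses as follows, using $\Phi_{00}=\Phi_0\overline{\Phi_0}$ for an Einstein-Maxwell field:

```latex
\textrm{\TH}\rho - \eth'\kappa
  = \rho^2 + \sigma\overline{\sigma} + (\kappa,\tau\textrm{-terms}) + \Phi_0\overline{\Phi_0}
\ \longrightarrow\
\textrm{\TH}\rho = \rho^2 + \Phi_0\overline{\Phi_0},
```

the result being the second relation of (\ref{srho}); all the equations listed below arise from (\ref{ghp1}-\ref{bi4}) in the same fashion.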
Note that we also impose the assumption $\tau+\overline{\pi}=0$, which guarantees that the complex null vectors $\w{m}$ and $\overline{\w{m}}$ of the Weyl canonical tetrad are hypersurface orthogonal ($\w{m}\wedge \textrm{d} \w{m}=0$).\footnote{some preliminary work shows that large classes of solutions may exist when $\w{m}\wedge \textrm{d} \w{m}\neq 0$} Next we define extension variables $\Eu= \textrm{\TH} \Phi_2,\mathcal{S}=\eth \Phi_2,\EJ=-\textrm{\TH}'\rho$ ($\Eu, \mathcal{S}$ complex and $\EJ$ real), after which the Ricci, Bianchi and Maxwell equations (\ref{ghp1}-\ref{bi4}) are solved\footnote{to solve the Bianchi identities for the variables $\textrm{\TH}'\Phi_2,\eth ' \Phi_2$ it is essential that the electromagnetic field is non-null: $\Phi_0 \Phi_2 -\Phi_1^2 \neq 0$} to yield the following system: \begin{eqnarray} \eth\rho &= \Phi_0 \overline{\Phi_1} , \ \textrm{\TH}\rho = \rho^2+\Phi_0 \overline{\Phi_0}, \ \textrm{\TH}'\rho = -\EJ, \label{srho}\\ \eth \pi &= -\pi \overline{\pi} -\rho \mu +\EJ -\Psi_2 , \ \eth ' \pi = -\Phi_2 \overline{\Phi_0} -\pi^2 , \ \textrm{\TH}' \pi = -\Phi_2 \overline{\Phi_1} , \label{spi} \\ \eth \Phi_1 &= \mu \Phi_0 -2 \overline{\pi} \Phi_1 - \Eu', \ \textrm{\TH}'\Phi_1 = -2 \mu \Phi_1 +\overline{\pi} \Phi_2 + \mathcal{S}, \label{sone} \\ \eth \Phi_2 &= \mathcal{S} , \eth ' \Phi_2 = 0, \ \textrm{\TH} \Phi_2 = \Eu,\ \textrm{\TH}'\Phi_2 = 0, \label{stwo}\\ \eth \Psi_2 &= 2 \rho \Phi_1 \overline{\Phi_2} +2 \overline{\pi} \Phi_1 \overline{\Phi_1} -\overline{\Phi_2} \mathcal{S}'+\overline{\Phi_1} \Eu'-3 \overline{\pi} \Psi_2 ,\nonumber \\ \textrm{\TH} \Psi_2 &= 2 \rho \Phi_1 \overline{\Phi_1} +2 \overline{\pi} \Phi_1 \overline{\Phi_0} -\overline{\Phi_1} \mathcal{S}'+\overline{\Phi_0} \Eu'+3 \rho \Psi_2,\ \nonumber \\ \textrm{\TH}' \Psi_2 &= -2 \mu \Phi_1 \overline{\Phi_1} +2\pi \Phi_1 \overline{\Phi_2} + \overline{\Phi_1} \mathcal{S} -\overline{\Phi_2} \Eu -3 \mu \Psi_2, \label{spsi} \end{eqnarray} with, by 
$(\ref{ghp5d})'-\overline{(\ref{ghp5d})}$, $\EJ-\EJ' = \overline{\Psi_2}-\Psi_2$, so that ($\EJ$ and $\EJ'$ being real) $\EJ=\EJ'$ and $\overline{\Psi_2}=\Psi_2$. The equations for $\mu$ and $\Phi_0$ have been omitted, as they can be obtained by ``priming'' equations (\ref{srho}) and (\ref{stwo}). Similarly $\eth ' \rho$, $\textrm{\TH} \pi$, $\eth ' \Phi_1$, $\textrm{\TH} \Phi_1$ and $\eth ' \Psi_2$ can be obtained by the prime and complex conjugation of (\ref{srho}-\ref{spsi}). This will hold throughout this section and results in a significant reduction of the computational effort.\\ Note that if $\rho$, $\mu$ or $\pi$ vanishes, then (\ref{srho},\ref{spi}) immediately imply half-alignment\footnote{one can show that also double-alignment follows, i.e.~half-aligned Petrov type D Einstein-Maxwell-Kundt solutions with geodesic and shear-free DP vectors do not exist}.\\ We now apply the $\left[\eth ',\eth \right], \left[\eth,\textrm{\TH}'\right],\left[\eth ',\textrm{\TH} \right]$ commutators to $\Phi_2$ and the $\left[\eth ',\textrm{\TH}'\right]$ commutator to $\Phi_1$ to obtain the following derivatives of $\Eu$ and $\mathcal{S}$, \begin{eqnarray} \eth ' \mathcal{S} &= 2(\Psi_2 - \Phi_1 \overline{\Phi_1} - \rho \mu) \Phi_2,\label{d2v} \\ \textrm{\TH}' \mathcal{S} &= 2( \mu \overline{\pi} - \Phi_1 \overline{\Phi_2} ) \Phi_2-\mu \mathcal{S}, \label{d3v} \\ \eth ' \Eu &= -2( \Phi_1 \overline{\Phi_0} + \pi \rho ) \Phi_2-\pi \Eu,\label{d2u} \\ \textrm{\TH}' \Eu &= ( 2 \pi \overline{\pi} -2 \Phi_1 \overline{\Phi_1} + \Psi_2 ) \Phi_2-3 (\mu \Eu- \pi \mathcal{S}), \label{d3u} \end{eqnarray} after which the $\left[ \textrm{\TH}', \textrm{\TH}\right]$ commutator applied to $\Phi_2$ results in an algebraic relation \begin{equation} \Phi_2 \Psi_2-\mu \Eu +\pi \mathcal{S} = 0 .\label{eq3} \end{equation} We will use (\ref{eq3}) to express $\Eu$ and $\mathcal{S}$ in terms of the $(0,0)$-weighted quantity $w = \mathcal{S} / \mu$.
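As a concrete illustration of this: conjugation interchanges $\eth$ and $\eth'$, while in the conventions used here the prime acts as $\rho'=-\mu$, $\Phi_0'=\Phi_2$, $\Phi_1'=\Phi_1$. Applied to the first relation of (\ref{srho}) this gives

```latex
\eth\rho = \Phi_0\overline{\Phi_1}
\quad\Longrightarrow\quad
\eth'\rho = \Phi_1\overline{\Phi_0} \ \textrm{(conjugation, $\rho$ being real)},
\qquad
\eth'\mu = -\Phi_2\overline{\Phi_1} \ \textrm{(prime)}.
```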
Evaluating the combination of commutators, $[\eth ',\eth]\Phi_1-[\eth ',\textrm{\TH}'] \Phi_0+[\eth,\textrm{\TH}] \Phi_2$, we obtain the relation \begin{eqnarray} \fl \pi \eth w+ \overline{\pi} \eth ' w' &= ( w'+w )(2 \rho \mu-\pi \overline{\pi} -\EJ)+2 \Phi_0 \pi \mu +2 \Phi_2 \overline{\pi} \rho\nonumber \\ \fl &+\frac{\Phi_0}{\rho} (2 \Phi_1-w)( \overline{\Phi_0} \mu- \overline{\Phi_1} \pi)-\frac{\Phi_2}{\mu} ( 2 \Phi_1+w' )( \overline{\Phi_2} \rho+ \overline{\Phi_1}\overline{\pi}) \nonumber \\ \fl & +\Psi_2 (\frac {\Phi_0 \pi}{\rho} +{\frac {\Phi_2 \overline{\pi}}{\mu}} +\frac { \Phi_1 \Phi_0 \overline{\Phi_0}}{\rho^2} -\frac {\Phi_1 \Phi_2 \overline{\Phi_2} }{\mu^2}), \label{eq5} \end{eqnarray} which can be used to simplify the expression $$ \rho [\eth,\textrm{\TH}'] \mathcal{S}' +\tau [\eth ',\eth] \mathcal{S}' +\mathcal{S}' [\eth ', \eth]\tau -\frac{\Phi_0 \Psi_2 +\tau \mathcal{S}'}{\rho} [\eth ',\eth]\rho $$ to yield \begin{eqnarray} \fl \mu \overline{\Phi_0} \eth w -\rho \overline{\Phi_2} \eth ' w' &= ( 2 \rho \mu+\Psi_2 ) ( \rho \mu-\Psi_2 )(\frac {\Phi_0 \overline{\Phi_0} }{\rho^2}- \frac{\Phi_2 \overline{\Phi_2} }{\mu^2})+4 \Phi_1 (\pi \rho \overline{\Phi_2}+\overline{\pi}\mu \overline{\Phi_0}) \nonumber \\ & + ( w'+w )( \overline{\Phi_1} \mu \rho +\overline{\Phi_1} \pi\overline{\pi}+\Phi_1 \overline{\Phi_0} \overline{\Phi_2})+2\Phi_1 (\mu\rho-\pi\overline{\pi})(\overline{\Ew}+\overline{\EW}) \nonumber \\ & +\pi\rho\overline{\Phi_2} (3w'+w)-\overline{\pi} \mu \overline{\Phi_0} (3 w+w') \nonumber \\ & +\pi\overline{\pi} (w\overline{\Ew}-w'\overline{\EW}) +\rho \mu(w'\overline{\Ew}-w \overline{\EW}). \label{eq8} \end{eqnarray} One can show that, when (\ref{eq5},\ref{eq8}), considered as a system for $\eth w$ and $\eth ' w'$, is non-singular, solutions are necessarily doubly-aligned or conformally flat. 
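The singularity condition involved here is easily made explicit: regarded as a linear system for $\eth w$ and $\eth ' w'$, (\ref{eq5},\ref{eq8}) has coefficient determinant

```latex
\det \left( \begin{array}{cc} \pi & \overline{\pi} \\ \mu \overline{\Phi_0} & -\rho \overline{\Phi_2} \end{array} \right)
= -(\pi \rho \overline{\Phi_2} + \overline{\pi} \mu \overline{\Phi_0})
= -\,\overline{ \mu \pi \Phi_0 + \rho \overline{\pi} \Phi_2 },
```

so that the system degenerates precisely when the condition (\ref{spec}) below holds.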
We omit the tedious and lengthy proof of this property (in which the earlier derived reality of $\Psi_2$ plays an essential role); it is available from the authors, either by email to the second author for a semi-automated version using the algebraic computing package STEM, or, via a more manual approach, from \cite{proof_Norbert}. When this system is singular, i.e.~when \begin{equation}\mu \pi \Phi_0 + \rho \overline{\pi} \Phi_2=0, \label{spec} \end{equation} we can write \begin{equation} \Phi_0 = \rho \overline{\pi} C_0 f,\ \Phi_2 = - \mu \pi C_0 f , \label{phi0and2} \end{equation} with $(0,0)$-weighted quantities $f$ and $C_0$, such that $f$ is real positive and $|C_0|=1$. Acting now on (\ref{spec}) with the operators $\pi \eth+ \overline{\pi} \eth '$ and $\pi \eth + \rho \textrm{\TH} '$ results in \begin{equation} w+w' = 0 \label{Wwrelation} \end{equation} and \begin{equation} (\rho \mu+\pi\overline{\pi})(\pi \Phi_0 \overline{\Phi_1} -\overline{\pi} \Phi_1 \overline{\Phi_0})=0. \end{equation} Rejecting the case $\rho \mu+\pi\overline{\pi}=0$ (acting on this with the $\eth$ and $\textrm{\TH} '$ operators immediately leads to conformal flatness), we have \begin{equation} \pi \Phi_0 \overline{\Phi_1} -\overline{\pi} \Phi_1 \overline{\Phi_0} = 0.\label{sbs} \end{equation} This allows us to define a (real positive) $(0,0)$-weighted function $g$ by \begin{equation} \Phi_1 = C_0 g, \label{sbsg} \end{equation} which, combined with the $\eth '$ derivative of (\ref{spec}) and (\ref{sbs}), yields \begin{equation} \EJ = \Psi_2+\rho \mu (1+f^2 \pi \overline{\pi})-C_0 \frac{\overline{\Ew}}{f}. \label{Jexpr} \end{equation} Finally, applying $\eth$ to (\ref{Wwrelation}) or (\ref{sbs}) leads to \begin{equation} \eth w = 2 C_0 f \overline{\pi} (g^2 +\rho \mu -2\Psi_2) - w \overline{\pi} f g . \end{equation} We also note that (\ref{Jexpr}) shows that $C_0 \overline{\Ew}$ is real, so that we can write $w= C_0 w_0$ with $w_0$ real and $(0,0)$-weighted.
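The weight assignments in (\ref{phi0and2}) and (\ref{sbsg}) can be checked directly from the standard GHP types: $\Phi_0,\Phi_1,\Phi_2$ are of type $(2,0)$, $(0,0)$, $(-2,0)$, while $\rho$, $\mu$ and $\pi$ are of type $(1,1)$, $(-1,-1)$ and $(-1,1)$. Hence

```latex
% conjugation maps type (p,q) to (q,p):
\rho\overline{\pi}:\ (1,1)+(1,-1)=(2,0), \qquad \mu\pi:\ (-1,-1)+(-1,1)=(-2,0),
```

which are exactly the types of $\Phi_0$ and $\Phi_2$, so that $f$, $C_0$ and $g$ are indeed $(0,0)$-weighted; similarly $\eth\Phi_2$ is of type $(-1,-1)$, confirming that $w=\mathcal{S}/\mu$ is $(0,0)$-weighted.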
Applying $\eth, \eth ',\textrm{\TH}$ and $\textrm{\TH}'$ to the $(0,0)$-weighted quantity $C_0$ returns $0$ in all four cases, and hence $C_0$ is a constant. The only remaining variables are then $f,g,w_0,\Psi_2,\rho$ and $\pi$ (with $f'=f,g'=g,w_0'=w_0,\Psi_2'=\Psi_2$, $\rho'=-\mu$, $\pi'=\overline{\pi}$), which satisfy the completely integrable system \begin{eqnarray} \eth f &=-\overline{\pi} f (f^2 \mu \rho+f g-1), \ \textrm{\TH} f =-f \rho (f^2 \pi\overline{\pi} -f g+1),\nonumber\\ \eth g &=\overline{\pi} (f \rho \mu-f \Psi_2-2 g+w_0), \ \textrm{\TH} g =\rho (f \pi \overline{\pi} +2 g-w_0),\nonumber \\ \eth \pi &= \pi\overline{\pi}(\rho \mu f^2 -1) -\frac{w_0}{f},\ \textrm{\TH} \pi = -\rho\pi f g ,\nonumber \\ \eth \rho &= \rho \overline{\pi} f g ,\ \eth ' \rho = \rho \pi f g,\nonumber\\ \textrm{\TH} \rho &= \rho^2(1+f^2\pi\overline{\pi}), \ \textrm{\TH}' \rho = -\rho\mu (f^2\pi\overline{\pi} +1) + \frac{w_0}{f} -\Psi_2,\nonumber \\ \eth w_0 &= \overline{\pi} f (2 g^2-g w_0+2 \rho \mu-2 \Psi_2),\ \textrm{\TH} w_0 = f \pi \overline{\pi} \rho (2 f g-f w_0+2),\nonumber \\ \eth \Psi_2 &=-\pi (2g f \rho \mu-f \rho \mu w_0-f g \Psi_2-2 g^2+g w_0+3 \Psi_2),\nonumber \\ \textrm{\TH} \Psi_2 &= \rho (f^2 \pi\overline{\pi} \Psi_2+2f g \pi \overline{\pi} -f \pi \overline{\pi} w_0+2 g^2-g w_0+3 \Psi_2). \label{sysfinal} \end{eqnarray} \section{General solution of the double RT-case}\label{MainRTeqs} We now use the previous results to set up an NP null tetrad $(\w{e}_a)=(\w{m},\overline{\w{m}}, \w{l},\w{k})$ with dual basis $(\w{\omega}^a)$, construct an appropriate coordinate system and solve the field equations. All results from the previous sections can be translated to the NP formalism by means of the relations (\ref{GHP_NP}). In particular all relations involving only derivatives of the $(0,0)$-weighted GHP quantities carry over without modification.
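Recall that for a quantity $\eta$ of type $(p,q)$ the GHP derivatives differ from the NP ones only by terms proportional to $p$ and $q$, e.g.~$\textrm{\TH}\eta = (D - p \epsilon - q \overline{\epsilon})\eta$, so that

```latex
p=q=0:\qquad \textrm{\TH}=D,\quad \textrm{\TH}'=\Delta,\quad \eth=\delta,\quad \eth'=\overline{\delta},
```

which is why all relations involving only $(0,0)$-weighted quantities, such as $f,g,w_0$ and $\Psi_2$, carry over verbatim.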
In order to fix the null tetrad we use the fact that $\rho$ and $\mu$ are real and $\tau+\overline{\pi}=0$, allowing one to specify a boost and spatial rotation such that $\pi$ and $\tau$ are real as well and \begin{eqnarray} \mu &= e \rho, \quad (e=\pm 1),\\ \pi &= -\tau. \end{eqnarray} The reality of $D \pi$ and $\Delta \pi$ implies that $\epsilon$ and $\gamma$ are real, while $\delta (\frac{\mu}{\rho})=0$ implies $\beta+\overline{\alpha}=0$. From $D (\frac{\mu}{\rho})=\Delta (\frac{\mu}{\rho})=0$ and $\overline{\delta \pi}=\overline{\delta} \pi$ it follows then that the spin coefficients $\alpha,\beta,\epsilon,\gamma$ are given by \begin{eqnarray} \alpha &= -\beta = w_0 /(4 \pi f),\\ \gamma &= e \epsilon = (f \Psi_2 - w_0)/(4 \rho f ). \end{eqnarray} Consequently the Cartan equations become \begin{eqnarray} \textrm{d} \w{\omega}^1 &= \w{\omega}^1 \wedge ( -e \rho \w{\omega}^3 + \rho \w{\omega}^4 + \frac{w_0}{2\pi f} \w{\omega}^2), \label{Cartan1} \\ \textrm{d} \w{\omega}^2 &= \w{\omega}^2 \wedge ( -e \rho \w{\omega}^3 + \rho \w{\omega}^4 + \frac{w_0}{2\pi f} \w{\omega}^1), \\ \textrm{d} \w{\omega}^3 &= \w{\omega}^3 \wedge ( -\pi \w{\omega}^1 -\pi \w{\omega}^2 + e \frac{w_0-f \Psi_2 }{2\rho f}\w{\omega}^4),\\ \textrm{d} \w{\omega}^4 &= \w{\omega}^4 \wedge ( -\pi \w{\omega}^1 -\pi \w{\omega}^2 - \frac{w_0-f \Psi_2 }{2\rho f}\w{\omega}^3), \label{Cartan4} \end{eqnarray} showing that the basis vectors are all hypersurface-orthogonal. From (\ref{Cartan1}-\ref{Cartan4}) it is clear that this also holds for the basis dual to the one-forms $e \w{\omega}^3+\w{\omega}^4, \w{\omega}^1-\w{\omega}^2, \w{\Omega}^1=e \w{\omega}^3-\w{\omega}^4, \w{\Omega}^2=\w{\omega}^1+\w{\omega}^2$, the latter two of which satisfy \begin{equation} \textrm{d} \w{\Omega}^1 = - \pi \w{\Omega}^1 \wedge \w{\Omega}^2, \ \textrm{d} \w{\Omega}^2 = \rho \w{\Omega}^1 \wedge \w{\Omega}^2.
\label{O1O2eq} \end{equation} Next we introduce new variables $h = w_0-2g$ and $j = f\Psi_2 + 2g - w_0$, which simplify\footnote{using, for example, $\textrm{d} f = \w{\omega}^1 \delta f+\w{\omega}^2 \overline{\delta} f +\w{\omega}^3 \Delta f + \w{\omega}^4 D f$ etc.} the system (\ref{sysfinal}) to \begin{eqnarray} \textrm{d} f &= f [(f^2\pi^2\rho - fg\rho + \rho)\w{\Omega}^1 + (-e f^2\pi\rho^2 - fg\pi + \pi) \w{\Omega}^2],\label{n_eqf}\\ \textrm{d} g &= -\w{\Omega}^1 (f \pi^2 - h) \rho + \pi (e f \rho^2 - j) \w{\Omega}^2, \label{n_eqg}\\ \textrm{d} h &= h [ (f^2 \pi^2 \rho - 2 \rho) \w{\Omega}^1 - (f g + 2) \pi \w{\Omega}^2], \label{n_eqh}\\ \textrm{d} j &= j[ - (f g + 2) \rho \w{\Omega}^1 + (-e f^2 \pi \rho^2 - 2 \pi) \w{\Omega}^2], \label{n_eqj}\\ \textrm{d} \rho &= -\frac{1}{2 e f}(2 e f^3 \pi^2 \rho^2 + 2 e f \rho^2 - 2 g + j) \w{\Omega}^1 + \w{\Omega}^2 f g \pi \rho, \label{n_eqrho}\\ \textrm{d} \pi &= \frac{1}{2 f} (2 e f^3 \pi^2 \rho^2 - 2 f \pi^2 - 2 g - h) \w{\Omega}^2 + \w{\Omega}^1 f g \pi \rho. \label{n_eqpi} \end{eqnarray} The null tetrad being fixed and $f,g,h,j,\rho,\pi$ being the remaining (non-constant) spin coefficients and (suitably transformed) Maxwell and curvature components, (\ref{n_eqf}-\ref{n_eqpi}) show that this set contains at most two functionally independent functions and hence the corresponding space-times will admit at least two Killing vectors. One can actually show that this is the maximally allowed number, as the vanishing of all double wedge products would lead to an inconsistency (this will be obvious from the explicit solutions as well). 
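The counting argument can be made explicit: every differential in (\ref{n_eqf}-\ref{n_eqpi}) lies in the two-dimensional span of $\w{\Omega}^1$ and $\w{\Omega}^2$, so that for any three variables $q_1,q_2,q_3$ taken from $\{f,g,h,j,\rho,\pi\}$

```latex
\textrm{d} q_1 \wedge \textrm{d} q_2 \wedge \textrm{d} q_3 = 0 ,
```

whence at most two of them are functionally independent and two ignorable coordinates, with associated Killing vectors, survive.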
Introducing coordinates $t,z,\u,\v$ such that \begin{eqnarray} & \w{\omega}^1-\w{\omega}^2 = i \mathcal{P} \textrm{d} z,\ e \w{\omega}^3+\w{\omega}^4 =\mathcal{Q} \textrm{d} t,\label{PQdef} \\ & \w{\Omega}^1 = \mathcal{B} \textrm{d} \u, \ \w{\Omega}^2 = \mathcal{C} \textrm{d} \v, \label{BCdef} \end{eqnarray} ($t$ and $z$ clearly then being the ignorable coordinates corresponding to the two Killing vectors) it follows from (\ref{O1O2eq}) that \begin{equation} \mathcal{B}_{,\u} = \pi \mathcal{B} \mathcal{C}, \ \mathcal{C}_{,\v} = \rho \mathcal{B} \mathcal{C}, \label{rhopisol} \end{equation} after which a linear combination of (\ref{n_eqrho}) and (\ref{n_eqpi}) shows that $(\log(\mathcal{B} / \mathcal{C}))_{,\u \v}=0$. Hence $\mathcal{B} / \mathcal{C}$ is separable in $\u$ and $\v$ and a coordinate transformation exists such that (re-defining $\mathcal{B}$ and $\mathcal{C}$) $\mathcal{B} = \mathcal{C}$. The exterior derivatives of (\ref{PQdef}) lead then to two partial differential equations for $\mathcal{P}$ and $\mathcal{Q}$, \begin{eqnarray} \textrm{d} \log (\mathcal{P} {\mathcal{P}}_0) &= -\frac{\mathcal{B}}{2\pi f} (h + 2g)\textrm{d} \u + \mathcal{B} \rho \textrm{d} \v, \label{PQdefbis1} \\ \textrm{d} \log (\mathcal{Q} {\mathcal{Q}}_0) &= \frac{e \mathcal{B}}{2\rho f} (2g - j)\textrm{d} \v + \mathcal{B} \pi \textrm{d} \u , \label{PQdefbis2} \end{eqnarray} with ${\mathcal{P}}_0= {\mathcal{P}}_0 (z)$ and ${\mathcal{Q}}_0 = {\mathcal{Q}}_0 (t)$. Without loss of generality one can put ${\mathcal{P}}_0 = {\mathcal{Q}}_0 =1$, such that, by (\ref{n_eqrho},\ref{n_eqpi}), the previous relations can be rewritten as \begin{eqnarray} \textrm{d} \log (\frac{\mathcal{Q} }{\mathcal{B} \rho} ) &= \mathcal{B} f \pi (\rho \pi f \textrm{d} \v - g \textrm{d} \u), \label{PQdefbis3} \\ \textrm{d} \log (\frac{\mathcal{P}}{\mathcal{B} \pi} ) &= -\mathcal{B} f \rho (e \rho \pi f \textrm{d} \u + g \textrm{d} \v).
\label{PQdefbis4} \end{eqnarray} At this point we will make a distinction between the cases $h j \neq 0$ (i.e. $(\delta \Phi_2 - \mu \Phi_1)( \delta \Phi_2 - \mu \Phi_1 -\frac{\mu}{2 \rho \pi}\Psi_2 \Phi_0) \neq 0$) and $h=0$ ($\delta \Phi_2 - \mu \Phi_1=0$) or $j=0$ ($\delta \Phi_2 - \mu \Phi_1 -\frac{\mu}{2 \rho \pi}\Psi_2 \Phi_0 = 0$). Only the first two will be treated in detail below, as the analysis of the case $j=0$ is essentially identical to that of $h=0$, since the transformation \begin{eqnarray} (f,g,h,j,\pi,\rho) &\rightarrow (f, g, -j, - h, i \rho / \sqrt{e} , i \pi \sqrt{e}) , \nonumber \\ (\w{\Omega}^1,\w{\Omega}^2) &\rightarrow ( i \w{\Omega}^2 / \sqrt{e}, -i \w{\Omega}^1 \sqrt{e}) , \label{invariance} \end{eqnarray} leaves the system (\ref{n_eqf}-\ref{n_eqpi}) invariant. Note that $h=j=0$ is excluded, as it would imply either conformal flatness ($\Psi_2=0$) or double alignment ($f=0$). \subsection{The case $h j \neq0$}\label{hjnonzero} When $h j \neq 0$ one can use (\ref{n_eqh},\ref{n_eqj}) for integrating (\ref{n_eqf}) and (\ref{PQdefbis3},\ref{PQdefbis4}) to obtain \begin{equation} \mathcal{P} = j \pi \mathcal{B}^3,\ \mathcal{Q} = h \rho \mathcal{B}^3\label{PQdef3} \end{equation} and \begin{equation} f = f_0 h j \mathcal{B}^5.\label{fzerodef} \end{equation} Here $f_0$ is an integration constant, which we will put $= 1$ by a global re-scaling of the metric\footnote{we note that the system (\ref{n_eqf}-\ref{n_eqpi}) is invariant under the transformation $\textrm{d} s^2 \to a^2 \textrm{d} s^2$ (and hence $\w{\omega}^b \to a\, \w{\omega}^b$ for the dual basis vectors of the NP null tetrad), as this implies $f \to a f$, $\mathcal{B} \to a \mathcal{B}$ and $(\pi,\rho,j,h,g) \to a^{-1} (\pi,\rho,j,h,g)$}.\\ Now (\ref{n_eqh},\ref{n_eqj}) imply \begin{eqnarray} \textrm{d} \log (h \rho \mathcal{B}^2) &= \frac{2 g -j -2 e hj \mathcal{B}^5 \rho^2}{2 e hj \mathcal{B}^4 \rho} \textrm{d} \v ,\\ \textrm{d} \log (j \pi \mathcal{B}^2) &= -\frac{2 g +h +2 hj 
\mathcal{B}^5 \pi^2}{2 hj \mathcal{B}^4 \pi}\textrm{d} \u , \end{eqnarray} showing that functions $\varsigma=\varsigma(\u), \xi=\xi(\v)$ exist such that \begin{equation} \pi = \frac{\varsigma}{j \mathcal{B}^2}, \ \rho = \frac{\xi}{h \mathcal{B}^2} \label{rhopiexpr2} \end{equation} and \begin{eqnarray} g &= e j \mathcal{B}^2 \xi' + e \frac{j \mathcal{B} \xi^2}{h} + \smfrac{1}{2} j \label{gsola} \\ &= - h \mathcal{B}^2 \varsigma' - \frac{h \mathcal{B} \varsigma^2}{j} - \smfrac{1}{2} h, \label{gsolb} \end{eqnarray} where we have written $\xi', \varsigma',\xi'', \ldots $ for the derivatives of $\xi$ and $\varsigma$ w.r.t.~$\v$ and $\u$ respectively. Subtracting (\ref{gsolb}) from (\ref{gsola}) reveals the following key algebraic relation between $j$ and $h$, \begin{equation} \mathcal{B} (h^2\varsigma^2 + e j^2\xi^2) + (\mathcal{B}^2\varsigma' + \smfrac{1}{2})j h^2 + (e \mathcal{B}^2 \xi' + \smfrac{1}{2})hj^2 = 0, \label{key1} \end{equation} while the expressions for (\ref{PQdef3}) reduce to \begin{equation} \mathcal{P} = \mathcal{B}\varsigma,\ \mathcal{Q}=\mathcal{B}\xi . \label{PQdef4} \end{equation} The metric then becomes \begin{equation} \textrm{d} s^2 = \frac{\mathcal{B}^2}{2} ( \textrm{d} \u^2 + e \textrm{d} \v^2 - e \xi^2 \textrm{d} t^2 + \varsigma^2\textrm{d} z^2 ) .
\label{ds2_1} \end{equation} With the introduction of new variables $N,J,H$ by $$j=J/ \mathcal{B},\ h=H / \mathcal{B},\ \mathcal{B}=N^{-1/2},$$ equations (\ref{key1}) and (\ref{n_eqh},\ref{n_eqj}) simplify to \begin{equation} H J ( H + J) N + 2 e H J^2 \xi' + 2 e J^2 \xi^2 + 2 H^2 J \varsigma' + 2 H^2 \varsigma^2 = 0 , \label{KEY1} \end{equation} \begin{eqnarray} \fl \textrm{d} J &= - \frac{J}{2 N^2 H} \xi (2 e H J^2 \xi' + 2 e J^2 \xi^2 + H J^2 N + 2 N^2) \textrm{d} \v - \frac{\varsigma}{N^2 } ( e J^2 \xi^2 + N^2) \textrm{d} \u ,\label{EQ1}\\ \fl \textrm{d} H &= - \frac{\varsigma}{2 N^2 J} H (2 e H J^2 \xi' + 2 e J^2 \xi^2 + H J^2 N + 2 N^2) \textrm{d} \u + \frac{\xi}{N^2} ( H^2 \varsigma^2 - N^2) \textrm{d} \v ,\label{EQ2} \end{eqnarray} whereas (\ref{rhopisol}) implies \begin{equation} \textrm{d} N = -2 N (\frac{\xi}{H} \textrm{d} \v + \frac{\varsigma}{J} \textrm{d} \u ). \label{EQ0} \end{equation} A second algebraic equation is now obtained from (\ref{n_eqg}) and (\ref{gsola}), \begin{eqnarray} \fl & (4 e H^2 J^2 \xi'' + 16 e H J^2 \xi \xi' + 12 e J^2 \xi^3 - H^2 J^4 \xi + 8 H^2 J \xi \varsigma' + 12 H^2 \xi \varsigma^2) N^2 \nonumber \\ \fl & - 4 H J^4 e \xi ( H \xi' + \xi^2) N - 4 J^2 \xi ( e H^2 \xi^2 \varsigma^2 + H^2 J^2 {\xi'}^2 + 2 H J^2 \xi^2 \xi' + J^2 \xi^4)=0 \label{KEY2} \end{eqnarray} or, using (\ref{gsolb}) instead of (\ref{gsola}), \begin{eqnarray} \fl & ( 4 H^2 J^2 \varsigma'' + 16 H^2 J \varsigma \varsigma' + 12 H^2 \varsigma^3 + H^4 J^2 \varsigma + 8 e H J^2 \xi' \varsigma + 12 e J^2 \xi^2 \varsigma) N^2 \nonumber \\ \fl & + 4 H^4 J \varsigma ( J \varsigma' + \varsigma^2) N + 4 \varsigma H^2 ( e J^2 \xi^2 \varsigma^2 + H^2 J^2 {\varsigma'}^2 + 2 H^2 J \varsigma^2 \varsigma' + H^2 \varsigma^4) = 0, \label{KEY3} \end{eqnarray} an equation which can also be obtained by taking the exterior derivative of (\ref{KEY1}).
Eliminating the first derivatives of $\xi, \varsigma$ from (\ref{KEY2},\ref{KEY3}) yields \begin{equation} e \frac{\xi''}{\xi} + \frac{\varsigma''}{\varsigma} = 3 N ( \frac{1}{J} + \frac{1}{H}). \label{KEY23} \end{equation} Taking the exterior derivative of this equation leads to one more algebraic relation between $J,H$ and $N$, \begin{equation} (\frac{1}{J^2} - \frac{1}{H^2}) N^2 -\frac{1}{3} ( \Xi e + \Sigma) N - e \xi^2 - \varsigma^2 = 0, \label{KEY4} \end{equation} where we have defined \begin{equation} \Xi= \frac{\xi'''}{\xi^2}- \frac{\xi' \xi''}{ \xi^3}, \textrm{ and } \Sigma = \frac{\varsigma'''}{ \varsigma^2}- \frac{\varsigma' \varsigma''}{\varsigma^3}.\label{defCHISigma} \end{equation} The exterior derivative of (\ref{KEY4}) now yields two ODEs, \begin{equation} \Xi' e - 3 \xi = 0 = \Sigma' + 3 \varsigma, \label{CHISigmaODE} \end{equation} first integrals of which are given by \begin{equation} \varsigma'' = -\frac{\varsigma}{6}\Sigma^2 + 3 \Sigma_0 \varsigma \textrm{ and } \xi'' = e\frac{\xi}{6}\Xi^2 + 3 \Xi_0 \xi \label{firstints} \end{equation} ($\Sigma_0,\Xi_0$ constants). Taking successive derivatives of the components of (\ref{EQ0}) and using (\ref{EQ1},\ref{EQ2}), we obtain two linear equations for $N_{,\u}$ and $N_{,\v}$, \begin{eqnarray} N_{,\u\u\u} &= -N_{,\u} (\frac{1}{6}\Sigma^2 -3 \Sigma_0 + 3\frac{{\varsigma'}^2}{\varsigma^2}) + 3N_{,\u\u}\frac{\varsigma'}{\varsigma},\\ N_{,\v\v\v} &= N_{,\v}(\frac{e}{6} \Xi^2 + 3 \Xi_0 - 3 \frac{{\xi'}^2}{\xi^2}) + 3N_{,\v\v}\frac{\xi'}{\xi}, \end{eqnarray} the general solutions of which are given by \begin{eqnarray} N_{,\u} &= F_1(\v) \varsigma + F_2(\v) s \, \varsigma,\label{Nu}\\ N_{,\v} &= F_3(\u)\xi + F_4(\u)x \, \xi,\label{Nv} \end{eqnarray} with $F_1, \ldots, F_4$ being arbitrary functions of $\v$ or $\u$ and where we have defined $s$ and $x$ by \begin{equation} \varsigma = s' , \ \xi= x' .
\label{xi0sigma0def} \end{equation} The integrability conditions for (\ref{Nu},\ref{Nv}) then show that $F_1, \ldots, F_4$ must be quadratic polynomials in $x$ or $s$. Herewith (\ref{Nu},\ref{Nv}) can be integrated to yield \begin{equation} N = c_1 x^2s^2 + c_2x s^2 + c_3 x^2 s + c_4 s^2 + c_5 x s + c_6 x^2 + c_7 s + c_8 x + c_9 .\label{N_expression} \end{equation} Substituting this into (\ref{EQ0}) gives expressions for $J$ and $H$, \begin{eqnarray} J &= -2 N (2 c_1 x^2 s + 2 c_2 x s + c_3 x^2 + 2 c_4 s + c_5 x + c_7)^{-1}, \\ H &= -2 N (2 c_1 x s^2 + c_2 s^2 + 2 c_3 x s + c_5 s + 2 c_6 x + c_8)^{-1}, \end{eqnarray} which, together with (\ref{KEY23}), imply $c_1=0, c_2=1, c_3=-1$, and \begin{equation} \Xi = 3 e (x - c_6 - \frac{c_5}{2}), \textrm{ and } \Sigma = - 3 (s + c_4 + \frac{c_5}{2}), \label{CHISIGMAexpr} \end{equation} together with \begin{eqnarray} & [ 3 x^2 - 3 (c_5 + 2 c_6 ) x + c_{10} - 3 c_7] \xi - 2 e \xi'' = 0, \label{xicond1}\\ & [ 3 s^2 +3 ( c_5 + 2 c_4 ) s + c_{10} + 3 c_8] \varsigma + 2 \varsigma'' = 0 \label{varsigmacond1}. \end{eqnarray} Combining (\ref{CHISIGMAexpr},\ref{xicond1},\ref{varsigmacond1}) with (\ref{EQ1},\ref{EQ2}) leads to \begin{equation} c_{10} = -2 c_4 c_6 + \smfrac{1}{2} c_5^2 + 2c_7 - 2 c_8, \end{equation} and two quadratures determining $\varsigma, \xi$ as functions of $\u$ and $\v$: \begin{eqnarray} 4 e \xi^2 &= x^4 - 2 (c_5 + 2 c_6) x^3 - (4 c_4 c_6 - c_5^2 + 2 c_7 + 4 c_8) x^2 \nonumber \\ & - 2 (2 c_4 c_8 - c_5 c_7 + 2 c_9) x - 4 c_4 c_9 + c_7^2, \label{xisq1} \\ 4 \varsigma^2 &= -s^4 - 2 (c_5+2 c_4 ) s^3 + (4 c_4 c_6 - c_5^2 - 2 c_8- 4 c_7 ) s^2 \nonumber \\ & + 2 (2 c_6 c_7 - c_8 c_5 - 2 c_9) s + 4 c_6 c_9 - c_8^2. \label{varsigmasq1} \end{eqnarray} One can adjust the constants $c_4$ and $c_6$ by means of a translation of $s$ and $x$. Specifically, we can choose $- 2 c_4=- 2 c_6= c_5\equiv p$ and $c_9\equiv q$, so that (\ref{CHISIGMAexpr}) reduces to \begin{equation} \Xi= 3 e x, \textrm{ and } \Sigma = - 3 s .
\label{CHISIGMAexpr2} \end{equation} Replacing $c_7,c_8$ by \begin{equation} 3 \Sigma_0 = -\frac{c_8}{2} - c_7,\ 3 e \Xi_0 = -\frac{c_7}{2} + c_8, \end{equation} the relations (\ref{N_expression},\ref{xisq1},\ref{varsigmasq1}) simplify to \begin{eqnarray} N &= x s^2 - s x^2-\frac{p}{2}(x - s)^2 - 2 e (2 x - s) \Xi_0 + 2 ( x - 2 s) \Sigma_0 + q ,\label{N_expressionfinal}\\ \xi^2 &= \smfrac{e}{4} x^4 + 3 \Xi_0 x^2 - (p(e \Sigma_0 + \Xi_0) + e q ) x + e (2 \Sigma_0 - \Xi_0)^2 + \smfrac{1}{2} e p q , \label{xisimp} \\ \varsigma^2 &= -\smfrac{1}{4}s^4 + 3 \Sigma_0 s^2 + (p(e \Xi_0 + \Sigma_0) - q) s - (2\Xi_0-e \Sigma_0)^2 - \smfrac{1}{2} p q ,\label{sigsimp} \end{eqnarray} $p,q,\Sigma_0,\Xi_0$ being independent constants of integration, where $\Sigma_0,\Xi_0$ are the two conserved quantities introduced in (\ref{firstints}). Using $s$ and $x$ as coordinates instead of $\u$ and $\v$ and re-introducing a global scale-factor $k^2$ (which we used in (\ref{fzerodef}) to put the integration constant $f_0=1$), the metric (\ref{ds2_1}) finally reads \begin{equation} \textrm{d} s^2 = \frac{k^2}{2 N} ( \varsigma^{-2} {\textrm{d} s}^2 + e \xi^{-2} {\textrm{d} x}^2 - e \xi^2 \textrm{d} t^2 + \varsigma^2\textrm{d} z^2 ), \label{ds2_2} \end{equation} with $N$ given by (\ref{N_expressionfinal}) and $\xi,\varsigma$ by (\ref{xisimp},\ref{sigsimp}). \subsection{The case $h=0$}\label{hzero} When $h=0$ the expressions for $\mathcal{P}$ and $\pi$ in (\ref{PQdef3},\ref{rhopiexpr2},\ref{PQdef4}) and for $g$ in (\ref{gsola}) are obtained as in the previous section, but (\ref{n_eqj},\ref{n_eqpi}) now provide the following algebraic restriction on $f,j$ and $\mathcal{B}$, \begin{equation} (2\mathcal{B} j\varsigma' + 2\varsigma^2)f^2 + \mathcal{B}^4j^3(2 e \mathcal{B}^2 \xi' + 1)f + 2e \mathcal{B}^{10} j^4\xi^2 = 0.
\label{h0key1} \end{equation} However, instead of (\ref{rhopiexpr2}b), we use the condition $\frac{1}{j}\times$(\ref{n_eqj})-$\frac{1}{f}\times$(\ref{n_eqf})-$\frac{1}{\rho}\times$(\ref{n_eqrho}), which now implies \begin{equation} \rho = \frac{\xi j}{f} \mathcal{B}^3. \end{equation} In this case (\ref{PQdefbis2}) reduces to $\textrm{d} \log \left[\mathcal{Q} / (\mathcal{B} \xi) \right] =0$, so that again (\ref{PQdef4}) holds.\\ Two more algebraic relations between $f,j$ and $\mathcal{B}$ are obtained by substituting (\ref{gsola}) in (\ref{n_eqg}) and into the exterior derivative of (\ref{n_eqf}), thereby leading to \begin{eqnarray} \fl & [- \mathcal{B}^4 \xi (4 \mathcal{B}^4 \xi'^2 + 4 \mathcal{B}^2 e \xi' + 1) j^4 - 4 \mathcal{B}^2 e ( \mathcal{B}^4 \xi^3 \varsigma^2 - \xi'') j^2 + 4 \varsigma^2 \xi] f^2 \nonumber \\ \fl &+ [-4 \mathcal{B}^{10} \xi^3 (2 \mathcal{B}^2 \xi' + e) j^5 + 4 \mathcal{B}^4 \xi (2 \mathcal{B}^2 e \xi' - 1) j^3] f - 4 \mathcal{B}^{16} j^6 \xi^5 + 4 \mathcal{B}^{10} e j^4 \xi^3 = 0, \label{h0key2} \\ \fl & ( \mathcal{B}^2 j^2 \varsigma \varsigma'^2 + 2 \mathcal{B} j \varsigma^3 \varsigma' + \varsigma^5) f^4 + [ \mathcal{B}^6 ( \mathcal{B}^4 e \xi^2 \varsigma^3 + \varsigma'') j^4 + 2 \mathcal{B}^5 j^3 \varsigma \varsigma' + \mathcal{B}^4 j^2 \varsigma^3] f^2 \nonumber \\ \fl & - \mathcal{B}^8 f j^5 \varsigma + \mathcal{B}^{14} e j^6 \xi^2 \varsigma = 0. 
\label{h0key3} \end{eqnarray} Again we introduce new variables $N, J, F$ by $\mathcal{B}=N^{-1/2}, j=J/\mathcal{B}^3,f=J F/\mathcal{B}$ and combine (\ref{h0key1},\ref{h0key2},\ref{h0key3}) to obtain \begin{eqnarray} & J^2 F N^2 + 2 F J (e J \xi' + F \varsigma') N + 2 e J^2 \xi^2 + 2 F^2 \varsigma^2 = 0, \label{h0KEY1}\\ & J^2 F \frac{ F^3 \xi \varsigma'^2 - e F \xi'' + 2 \xi}{ F^2 \varsigma^2 + 1} N^2 + 2 J \varsigma' F^2 \xi N + \xi ( e J^2 \xi^2 + F^2 \varsigma^2) = 0,\label{h0KEY2}\\ & \frac{e \xi'' \varsigma + \xi \varsigma''}{ \xi \varsigma} - \frac{3}{F} = 0,\label{h0KEY3} \end{eqnarray} while the partial differential equations for $\mathcal{B},f,j$ become \begin{eqnarray} \fl \textrm{d} N &= -2 \frac{\xi}{F} \textrm{d} \v - 2 \frac{\varsigma}{J} \textrm{d} \u \label{h0dN}, \\ \fl \textrm{d} F &= -\frac{F}{2 J N} \varsigma (2 F J^2 N e \xi' + F J^2 N^2 + 2 J^2 e \xi^2 - 2) \textrm{d} \u \nonumber \\ \fl & + \frac{\xi}{2 N} (2 F J^2 N e \xi' + 2 F^2 J N \varsigma' + F J^2 N^2 + 2 J^2 e \xi^2 + 4 F^2 \varsigma^2 + 2) \textrm{d} \v \label{h0dF}\\ \fl \textrm{d} J &= -\frac{\varsigma}{N} (J^2 e \xi^2 - 1) \textrm{d} \u - \frac{\xi J}{2 F N} (2 F J^2 N e \xi' + F J^2 N^2 + 2 J^2 e \xi^2 - 2) \textrm{d} \v \label{h0dJ} . \end{eqnarray} Herewith (and with the quantities $\Xi,\Sigma$ defined by (\ref{defCHISigma})), the exterior derivatives of (\ref{h0KEY2},\ref{h0KEY3}) yield \begin{eqnarray} N &= -\frac{3 e ( F^2 \varsigma^2 + 1)}{ \Xi F^2}, \label{h0KEY2d} \\ J &= \frac{ e F \Xi}{3 F \varsigma' + \Sigma}. \label{h0KEY3d} \end{eqnarray} While the exterior derivative of (\ref{h0KEY2d}) becomes an identity under (\ref{h0KEY1}-\ref{h0KEY3d}), the exterior derivative of (\ref{h0KEY3d}) results in (compare with (\ref{CHISigmaODE})) \begin{equation} \Sigma' + 3 \varsigma = 0 \textrm{ and } \Xi'=0.
\label{CHISigmaODEbis} \end{equation} One therefore again obtains the first integral for $\varsigma$ as in (\ref{firstints}), but now this is complemented by $\Xi=\Xi_0$, where $\Xi_0$ is an integration constant (with $\Xi_0 \neq 0$ by (\ref{h0KEY2d}) ). \\ As in section \ref{hjnonzero} one simplifies (\ref{h0KEY2d}) with (\ref{h0KEY3}) and (\ref{h0KEY3d}), to obtain a partial differential equation for $N$, which can be integrated to yield (with $s, x$ defined as in (\ref{xi0sigma0def})) \begin{eqnarray} 12 e \Xi_0 N &= -(3 s^2 -2 e \Xi_0 x)^2 + (-6 c_1 e + 36 \Sigma_0) s^2 - 4 \Xi_0 (6 e \Sigma_0 - c_1) x \nonumber\\ & - 36 \varsigma^2 + 12 e \Sigma_0 c_1 - 36 \Sigma_0^2 - c_1^2, \label{h0tempN} \end{eqnarray} together with the condition \begin{equation} -\Xi_0 x^2 + c_1 x + c_0 + 2 \xi' = 0. \label{h0_xi_equation} \end{equation} Now (\ref{h0dF},\ref{h0dJ}) determine $F,J$ and by substituting these into the equations (\ref{h0KEY1}-\ref{h0dJ}) one further obtains restrictions on the functions $\xi$ and $\varsigma$. 
A translation of $x$, putting the integration constant $c_1=0$, together with some tedious algebra, eventually leads to the following relations: \begin{eqnarray} \xi^2 &= \frac{\Xi_0}{3} x^3 - p x +\frac{3 e}{4\Xi_0^2}(8\Xi_0\Sigma_0 p - 96\Sigma_0^3 + 3q^2) \label{h0xi_equation_bis},\\ \varsigma^2 &= -\smfrac{1}{4}s^4 + 3\Sigma_0 s^2 - \smfrac{p}{3}\Xi_0 - q s + 3\Sigma_0^2 , \label{h0s_eq3bis} \end{eqnarray} where $p, q$ are new constants of integration.\\ With $s, x$ as coordinates instead of $\u,\v$, the metric remains as given by (\ref{ds2_2}), but now $\xi$ is given by (\ref{h0xi_equation_bis}), while (\ref{h0tempN}) reduces to \begin{equation} N = (x - 6 e\frac{\Sigma_0}{\Xi_0})s^2 + 3 \frac{eq}{\Xi_0} s - \frac{e \Xi_0}{3} x^2 - 2\Sigma_0 x + e \frac{\Xi_0 p - 12\Sigma_0^2}{\Xi_0} .\label{h0Nexp} \end{equation} \subsection{The case $j=0$}\label{jzero} As the analysis of the case $j=0$ is almost identical to that of $h=0$ (cf.~the invariance of the system (\ref{n_eqf}-\ref{n_eqpi}) under the transformation (\ref{invariance})), we limit ourselves to presenting the results.\\ The metric is still given by (\ref{ds2_2}), but now $\xi$ and $\varsigma$ read \begin{eqnarray} \xi^2 &= \smfrac{e}{4} x^4 + 3\Xi_0 x^2 + \smfrac{p}{3} \Sigma_0 + e q x - 3 e \Xi_0^2, \label{j0x_eq3bis} \\ \varsigma^2 &= \frac{\Sigma_0}{3} s^3 - e p s + \frac{3}{4\Sigma_0^2}( 8 \Xi_0\Sigma_0 p - 96 e \Xi_0^3- 3q^2), \label{j0s_eq3bis} \end{eqnarray} while (\ref{h0Nexp}) is replaced by \begin{equation} N = (6 e \frac{\Xi_0}{\Sigma_0} - s) x^2 - \frac{\Sigma_0}{3} s^2 + 3 \frac{eq}{\Sigma_0} x - 2 e \Xi_0 s + \frac{e \Sigma_0 p - 12\Xi_0^2}{\Sigma_0}.
\label{j0Nexp} \end{equation} \section{Discussion}\label{Discussion} We have constructed all Petrov type D Einstein-Maxwell fields of Robinson-Trautman type (i.e.~with expanding but non-twisting DP vectors) in which the Maxwell field is totally non-aligned with the DP vectors and in which the latter are assumed to be geodesic and shear-free, with $\w{m}$ being hypersurface orthogonal. All these solutions necessarily have a vanishing cosmological constant and are given by the metric (\ref{ds2_2}). Three different 5-parameter classes exist: \begin{itemize} \item $h j \neq 0$ with $N,\varsigma,\xi$ given by (\ref{N_expressionfinal},\ref{sigsimp},\ref{xisimp}), \item $h=0, j \neq 0$ with $N,\varsigma,\xi$ given by (\ref{h0Nexp},\ref{h0s_eq3bis},\ref{h0xi_equation_bis}), \item $j=0, h \neq 0$ with $N,\varsigma,\xi$ given by (\ref{j0Nexp},\ref{j0s_eq3bis},\ref{j0x_eq3bis}). \end{itemize} In all cases the electromagnetic field is given by $$\Phi_0=-\Phi_2=C_0\,\xi\varsigma\, N^{-\smfrac{1}{2}}k^{-1}, \ \Phi_1=C_0\, g\, k^{-1}$$ with $g$ a not very illuminating expression obtainable from (\ref{gsola}) or (\ref{gsolb}). We note that all solutions with $e=+1$ are static in the domain where $N, \varsigma,\xi$ are positive, with time-like Killing vector $\partial_{t}$. The Einstein-Maxwell equations for static space-times in which both electrostatic and magnetostatic fields are present have been investigated in \cite{Das1979}, where it was proved that the electric and magnetic field vectors $\w{E}$ and $\w{B}$ (evaluated w.r.t. the time-like Killing vector) must be parallel. This is consistent with our results, as $E_a+ i B_a =\sqrt{2}[ \Phi_2 m_a -\Phi_0 \overline{m}_a+\Phi_1 (k_a-l_a)]$, with $\Phi_0,\Phi_1,\Phi_2$ all having the same (constant) phase factor. 
From the general expression \footnote{Note that this is less obvious when using the simplified form given by (\ref{N_expressionfinal},\ref{xisimp},\ref{sigsimp}), as there the required third-degree terms have been removed from $N$ by translations of $s$ and $x$} of the metric (\ref{ds2_2}), with $N,\varsigma,\xi$ given by (\ref{N_expression},\ref{varsigmasq1},\ref{xisq1}), a limiting procedure, consisting of a coordinate transformation $[t,z,s,x] \rightarrow$ \begin{equation} [\sqrt{2} A^{-1}t a^{-2},\sqrt{2} A^{-1}z a^{-2},(m A s +\smfrac{1}{6}) a^{4},(m A x -\smfrac{1}{6}) a^{4}], \end{equation} together with a redefinition of the constants $c_i$, $[c_4,c_5,c_6,c_7,c_8,c_9] \rightarrow$ \begin{equation} [\frac{1}{m\sqrt{2}} a^{-6},\frac{\sqrt{2}}{m} a^{-6},\frac{1}{m\sqrt{2}} a^{-6},0,-\smfrac{1}{6} a^8,(\smfrac{1}{54}-m^2 A^2) a^{12}], \end{equation} reduces, after performing the limit $a \rightarrow 0$, both cases $e=\pm 1$ of (\ref{ds2_2}) to the vacuum C-metric\cite{EhlersKundt1962,LeviCivita,Weyl1917,GrifPod}, \begin{equation} \textrm{d} s^2 = \frac{1}{A^2 (x+s)^2} ( - F \textrm{d} t^2 + G \textrm{d} z^2 + \frac{1}{F} {\textrm{d} x}^2 +\frac{1}{G} \textrm{d} s^2 ), \end{equation} with $F=-1 +x^2 -2 A m x^3$ and $G=1-s^2-2 A m s^3$.\\ Whether, in addition, a non-trivial sub-case of the charged C-metric can be obtained by a limiting procedure (including a singular coordinate transformation, as discussed in \cite{Paivaetal}) is not clear, since any attempts at removing the $x s^2-s x^2$ term from $N$ tend to switch off the Maxwell field.\\ From the GHP equations obtained for $\rho,\mu,\pi$ and $\tau$ in \S2 it is clear\cite{McLenVdB1993} that a valence 2 Killing spinor\cite{WalkerPenrose70} exists. There is more: the form of (\ref{ds2_2}) suggests that one should have a closer look at the metric when $N=k=1$, with $\varsigma=\varsigma(s)$ and $\xi=\xi(x)$. 
It is easy to verify that for this metric all $(0,0)$-weighted GHP spin coefficients vanish, while the only non-zero curvature components are $R, \Phi_{11}$ and $\Psi_2$, with $\Psi_2=-\frac{R}{12}$ and \begin{eqnarray} e(\xi \xi_{,x x}+\xi^2_{,x}) +\frac{R}{8}-\Phi_{11} &=0,\\ (\varsigma \varsigma_{,s s}+\varsigma^2_{,s}) +\frac{R}{8}+\Phi_{11} &=0, \end{eqnarray} showing that this is one of the Killing-Yano spaces studied in \cite{DietzRudiger1}. \section{Acknowledgment} All calculations were done using the Maple symbolic algebra system. The properties of the Killing-Yano space, obtained by putting $N=1$, were checked with Maple's DifferentialGeometry package\cite{Anderson_Torre}. \section{Appendix: Ricci, Maxwell and Bianchi equations in the GHP formalism}\label{appendix1} Below we list some relevant information from the Geroch-Held-Penrose formalism (weights, commutators, the prime operation, and the Ricci, Maxwell and Bianchi equations) for the special case of vanishing cosmological constant ($\Lambda=0$). Note that $\Phi_{ij}=\Phi_i \overline{\Phi_j}$. \\ \noindent Weights \footnote{Objects $x$ transforming under boosts and rotations as $x \rightarrow A^{\frac{p+q}{2}}e^{i\frac{p-q}{2}\theta} x$ are called {\em well-weighted of type} $\left(p,q\right)$.} of the spin-coefficients, the Maxwell and Weyl spinor components and the GHP operators: \begin{eqnarray} &\kappa : (3, 1), \nu : (-3, -1), \sigma : (3, -1), \lambda : (-3, 1), \nonumber \\ &\rho : (1, 1), \mu : (-1, -1), \tau : (1, -1), \pi : (-1, 1), \nonumber \\ &\Phi_0 : (2, 0), \Phi_1 : (0, 0), \Phi_2 : (-2, 0), \nonumber \\ &\Psi_0 : (4, 0), \Psi_1 : (2, 0), \Psi_2 : (0, 0), \Psi_3: (-2,0), \Psi_4 : (-4, 0) ,\nonumber \\ &\eth : (1, -1), \eth ' : (-1,1), \tho ' : (-1,-1), \textrm{\TH} : (1,1). 
\nonumber \end{eqnarray} \noindent The GHP operators are related to the NP operators by \begin{eqnarray} \textrm{\TH} \eta &= (D - p \epsilon - q \overline{\epsilon}) \eta, \ \tho ' \eta &= (\Delta -p \gamma-q \overline{\gamma}) \eta , \nonumber \\ \eth \eta &= (\delta -p \beta -q \overline{\alpha}) \eta, \ \eth ' \eta &= (\overline{\delta} -p \alpha -q \overline{\beta}) \eta . \label{GHP_NP} \end{eqnarray} for any $(p,q)$-weighted scalar $\eta$.\\ \noindent The prime operation is an involution with \begin{eqnarray} \kappa' &= -\nu,\sigma'=-\lambda,\rho'=-\mu, \tau'=-\pi,\\ {\Psi_0}' &= \Psi_4, {\Psi_1}'=\Psi_3, {\Psi_2}'=\Psi_2,\\ \Phi_0' &= -\Phi_2, \Phi_1'=-\Phi_1. \end{eqnarray} and satisfies $\overline{\eth}=\eth '$.\\ \noindent The GHP commutators acting on $(p,q)$-weighted quantities are given by: \begin{eqnarray} \fl \left[ \textrm{\TH},\textrm{\TH}' \right] &= (\pi+\overline{\tau})\eth +(\overline{\pi}+\tau)\eth ' +(\kappa\nu-\pi\tau -\Phi_{11}-\Psi_2)p \nonumber \\ \fl &+(\overline{\kappa}\overline{\nu}-\overline{\pi}\overline{\tau} -\Phi_{11}-\overline{\Psi}_2)q,\\ \fl \left[ \eth,\eth ' \right] &= (\mu-\overline{\mu})\textrm{\TH} +(\rho-\overline{\rho})\textrm{\TH}' +(\lambda\sigma-\mu\rho-\Phi_{11}+\Psi_2)p \nonumber \\ \fl &-(\overline{\lambda\sigma}-\overline{\mu}\overline{\rho}-\Phi_{11}+\overline{\Psi}_2)q,\\ \fl \left[ \textrm{\TH},\eth \right] &= \overline{\pi}\,\textrm{\TH} -\kappa\textrm{\TH}' +\overline{\rho}\,\eth +\sigma\eth '+(\kappa\mu-\sigma\pi-\Psi_1)p + (\overline{\kappa\lambda}-\overline{\pi}\overline{\rho}-\Phi_{01})q . 
\end{eqnarray} \noindent Ricci equations: \begin{eqnarray} \textrm{\TH}\rho-\eth '\kappa &= \rho^2+\sigma\overline{\sigma}-\overline{\kappa}\tau+\kappa\pi+\Phi_{00}, \label{ghp1}\\ \textrm{\TH}\sigma-\eth\kappa &= (\rho+\overline{\rho})\sigma+(\overline{\pi}-\tau)\kappa+\Psi_0, \label{ghp2}\\ \textrm{\TH}\tau-\textrm{\TH}'\kappa &= (\tau+\overline{\pi})\rho+(\overline{\tau}+\pi)\sigma+\Phi_{01}+\Psi_1, \label{ghp3}\\ \textrm{\TH} \nu-\textrm{\TH}' \pi &= (\pi+\overline{\tau})\mu+(\overline{\pi}+\tau)\lambda+\Psi_3+\overline{\Phi_1}\Phi_2,\label{ghp6}\\ \eth\rho-\eth '\sigma &= (\rho-\overline{\rho})\tau+(\mu-\overline{\mu})\kappa+\Phi_{01}-\Psi_1,\label{ghp8}\\ \textrm{\TH}'\sigma-\eth\tau &= -\sigma\mu-\overline{\lambda}\rho-\tau^2+\kappa\overline{\nu}-\Phi_{02},\label{ghp4d}\\ \textrm{\TH}'\rho-\eth '\tau &= -\overline{\mu}\rho-\lambda\sigma-\tau\overline{\tau}+\kappa\nu-\Psi_2. \label{ghp5d} \end{eqnarray} \noindent Maxwell equations: \begin{eqnarray} \textrm{\TH} \Phi_1-\eth ' \Phi_0 &= \pi \Phi_0+2\rho\Phi_1-\kappa \Phi_2, \label{max1}\\ \textrm{\TH} \Phi_2-\eth ' \Phi_1 &= -\lambda \Phi_0+2 \pi \Phi_1+\rho\Phi_2. 
\label{max2} \end{eqnarray} \noindent Bianchi equations: \begin{eqnarray} \fl {\eth '} \Psi_{{0}} -{\textrm{\TH}} \Psi_{{1}} +{\textrm{\TH}} \Phi_{{01}} -{\eth} \Phi_{ {00}} &= -\pi\,\Psi_{{0}}-4\,\rho\,\Psi_{{1}}+3\,\kappa\,\Psi_{ {2}}+ \overline{\pi} \Phi_{{00}}+2\, \overline{\rho} \Phi_{{01}}+2\,\sigma\,\Phi_{{10}} \nonumber \\ \fl &-2\,\kappa\,\Phi_{{11}}-\overline{\kappa} \Phi_{{02}}, \label{bi1}\\ \fl {\tho '} \Psi_{{0}} -{\eth} \Psi_{{1}} +{\textrm{\TH}} \Phi_{{02}} -{\eth} \Phi_{ {01}} &= -\mu\,\Psi_{{0}}-4\,\tau\,\Psi_{{1}}+3\,\sigma\,\Psi_{ {2}}-\overline{\lambda} \Phi_{{00}}+2\, \overline{\pi} \Phi_{{01}}+2\,\sigma\,\Phi_{{11}}\nonumber \\ \fl &+ \overline{\rho} \Phi_{{02}}-2\,\kappa\,\Phi_{{12}}, \label{bi2} \end{eqnarray} \begin{eqnarray} \fl &3\,{\eth '} \Psi_{{1}} -3\,{\textrm{\TH}} \Psi_{{2}} +2\,{\textrm{\TH}} \Phi_{{11}} -2\,{\eth} \Phi_{{10}} +{\eth '} \Phi_{{01}} -{\tho '} \Phi_{{00}} = 3\,\lambda\,\Psi_{{0}}-9\,\rho\,\Psi_{{2 }}-6\,\pi\,\Psi_{{1}}+6\,\kappa\,\Psi_{{3}}+ (\overline{\mu} -2\,\mu ) \Phi_{{00}}\nonumber \\ \fl & \ \ + 2\,(\pi+ \overline{\tau} ) \Phi_{{01}}+2\, ( \tau+ \overline{\pi} ) \Phi_{{10}}+2\, ( 2\, \overline{\rho} -\rho ) \Phi_{{11}} +2\,\sigma\,\Phi_{{20} }-\overline{\sigma} \Phi_{{02}}-2\,\overline{\kappa} \Phi_{{12}}-2\,\kappa\,\Phi_{{21}}, \label{bi3}\\ \fl &3\,{\tho '} \Psi_{{1}} -3\,{\eth} \Psi_{{2}} +2\,{\textrm{\TH}} \Phi_{{12}} -2\,{\eth} \Phi_{{11}} +{\eth '} \Phi_{{02}} -{\tho '} \Phi_{{01}} = 3\,\nu\,\Psi_{{0}}-6\,\mu\,\Psi_{{1}}-9 \,\tau\,\Psi_{{2}}+6\,\sigma\,\Psi_{{3}}-\overline{\nu} \Phi_{{00}}\nonumber \\ \fl & \ \ \ +2\, (\overline{\mu} -\mu) \Phi_{{01}} -2\,\overline{\lambda} \Phi_{{10} }+2\, ( \tau+2 \overline{\pi}) \Phi_{{11 }}+ (2\,\pi+\overline{\tau}) \Phi_{{02}} + 2\,(\overline{\rho} -\rho) \Phi_{{12 }}+2\,\sigma\,\Phi_{{21}}-2\,\kappa\,\Phi_{{22}}. \label{bi4} \end{eqnarray} \section*{References}
\section{Introduction}\label{sec:introduction} The growing penetration of high-end consumer devices like smartphones and tablets running bandwidth-hungry applications (e.g. mobile multimedia streaming) has led to a commensurate surge in demand for mobile data (pegged to soar up to 77 exabytes by 2022 \cite{cisco2018cisco}). An anticipated second wave will result from the emerging Augmented/Virtual Reality (AR/VR) industry \cite{al2017energy} and, more broadly, the Internet-of-Things that will connect an unprecedented number of intelligent devices to next-generation (5G and beyond) mobile networks as shown in Fig.~\ref{mle}. Existing wireless networks, both cellular and Wi-Fi, must therefore greatly expand their aggregate {\em network} capacity to meet this challenge. This is being achieved by a combination of approaches including use of multiple-input multiple-output (MIMO) techniques \cite{gampala2018massive}, network densification (i.e. deploying small cells \cite{sathya2014placement}) and more efficient traffic management and radio resource allocation. Since licensed spectrum is a limited and expensive resource, its optimal utilization may require spectrum sharing between multiple network operators/providers of different types; increasingly, licensed-unlicensed sharing is being contemplated to enhance network spectral efficiency, beyond the more traditional unlicensed-unlicensed sharing. As the most common unlicensed incumbent, Wi-Fi is now broadly deployed in the unlicensed $5$ GHz band in North America where approximately $500$ MHz of bandwidth is available. However, these $5$ GHz unlicensed bands are also seeing increasing deployment of cellular services such as Long Term Evolution (LTE) Licensed Assisted Access (LTE-LAA). Recently, the Federal Communications Commission (FCC) sought to open up 1.2 GHz of additional spectrum for unlicensed operation in the 6 GHz band through a Notice of Proposed Rule Making (NPRM) \cite{FCC1}. 
This allocation of spectrum for unlicensed operation will thus only accelerate the need for further coexistence solutions among heterogeneous systems. \begin{figure}[htb!] \begin{center} \includegraphics[height=5.3cm,width=9cm]{ML.pdf} \caption{Future Applications on Unlicensed Spectrum Band.} \label{mle} \end{center} \end{figure} However, the benefits of spectrum sharing are not devoid of challenges, the foremost being the search for effective coexistence solutions between cellular (LTE and 5G) and Wi-Fi networks whose medium access control (MAC) protocols are very different. While cellular systems employ a Time Division Multiple Access (TDMA)/Frequency Division Multiple Access (FDMA) scheduling mechanism, Wi-Fi depends on the Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) mechanism. The 5 GHz band, being unlicensed and offering approximately 500 MHz of available bandwidth, has prompted several key players in the cellular industry to develop the LTE-LAA specification within the Third Generation Partnership Project (3GPP). These specification differences between LTE and the incumbent Wi-Fi give rise to coexistence issues. Therefore, to ensure fair coexistence, certain medium access protocols have been developed as an addition to the licensed LTE standard. In addition to LTE-LAA, there also exists LTE-U, which was developed by an industry consortium called the LTE-U Forum and will be the main focus of this paper. LTE-LAA was proposed by 3GPP~\cite{3gpp,TCCN} and its working mechanism is similar to the CSMA/CA protocol used by Wi-Fi. In LTE-LAA, an LAA base station (BS) acts essentially like a Wi-Fi access point (AP) in terms of channel access, \textit{i.e.}, a BS needs to ensure that the channel is free before transmitting any data, otherwise it will perform an exponential back-off procedure similar to CSMA/CA in Wi-Fi. 
Therefore, there is no need to precisely determine the number of coexisting Wi-Fi APs, due to the channel sensing and back-off mechanism, which adapts to varying channel occupancy. However, LTE-U, which was developed by the LTE-U Forum~\cite{forum}, uses a simple duty-cycling technique where the LTE-U BS will periodically switch between ON and OFF states in an interval set according to the number of Wi-Fi APs present in the channel. In the ON state, the BS transmits data as a normal LTE transmission while in the OFF state, the BS does not transmit any data but passively senses the channel for the presence of Wi-Fi. The number of sensed Wi-Fi APs is then used to properly adjust the duty cycle interval, and this process is known as Carrier Sense Adaptive Transmission (CSAT). Therefore, accurately determining the number of coexisting Wi-Fi APs is important for optimum operation of the CSAT procedure. Existing literature addresses LTE-U and Wi-Fi coexistence in terms of optimizing the ON and OFF duty cycle \cite{singh2018wi}, power control \cite{chaves2013lte}, the hidden node problem \cite{atif2019complete}, etc. On the other hand, the LTE-U specification does not specify, and there has been relatively less work on, how an LTE-U operator should detect the number of Wi-Fi APs on the channel to adjust the duty cycle appropriately. There are a number of candidate techniques to determine the number of Wi-Fi APs as follows: \begin{itemize} \item \textbf{Header-Based CSAT (HD):} Wi-Fi APs transmit beacon packets every 102.4 ms, containing important information about the AP, such as the Basic Service Set Identification (BSSID) which is unique to each AP. This is a straightforward way to identify the Wi-Fi AP, but it adds complexity since the LTE-U BS would require a full Wi-Fi decoder to obtain this information from the packet. 
\item \textbf{Energy-Based CSAT (ED):} Rather than a full decoding process, it is hypothesized that sensing the energy level of the channel is enough to detect the number of Wi-Fi APs on the channel. However, it is still a challenging problem since the energy level may not correctly correlate with the number of APs under varying conditions (\textit{e.g.}, different categories of traffic, a large number of Wi-Fi APs, variations in transmission power, multipath, etc.). \item \textbf{Autocorrelation-Based CSAT (AC):} To detect the Wi-Fi signal at the LTE-U BS, one can develop an auto-correlation (AC) based detector where the LTE-U BS performs auto-correlation on the Wi-Fi preamble, without fully decoding the preamble. This is possible since all Wi-Fi preambles~\footnote{All Wi-Fi frames, even those in newer specifications like 802.11ax, begin with the legacy short training field (L-STF) symbol.} contain the legacy short training field (L-STF) and legacy long training field (L-LTF) symbols which contain multiple repeats of a known sequence. However, the AC function can only determine whether a signal is a Wi-Fi signal and cannot derive any distinct information pertaining to each AP. \end{itemize} Table~\ref{table:csat} lists the different types of CSAT approaches with their pros and cons. We studied energy detection (ED) and AC-based detection of \mbox{Wi-Fi} APs in our previous work \cite{sathya2018energy}\cite{sathya2019auto}~\footnote{The latest version can be found here: http://bit.ly/2LDVWWo}, and showed that our algorithms performed reasonably well under various scenarios. \par Of late, Machine Learning (ML) approaches are beginning to be used in wireless networks to solve problems such as agile management of network resources using real-time analytics based on data. The advantage of ML is that it has the ability to learn useful information from input data, which can help improve network performance. 
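The AC detector described above exploits the fact that the 802.11 L-STF is built from repeats of a 16-sample sequence (at 20 MS/s), so the received signal correlates strongly with itself at a lag of 16 samples. The following is a minimal sketch of such a lag-16 correlation metric; the window length and normalization are our own illustrative choices, not the detector used in the cited work.

```python
import numpy as np

def lstf_autocorr_metric(x, lag=16, window=144):
    """Normalized lag-16 autocorrelation metric for Wi-Fi preamble detection.

    The 802.11 L-STF consists of ten repeats of a 16-sample sequence at
    20 MS/s, so this metric approaches 1 on a Wi-Fi preamble and stays
    near 0 on noise.  The window length (144 samples = 9 repeats) is an
    illustrative choice.
    """
    x = np.asarray(x, dtype=complex)
    # Correlate the window with its copy shifted by one L-STF period.
    num = np.abs(np.sum(x[:window] * np.conj(x[lag:lag + window])))
    # Normalize by the energy of the shifted window.
    den = np.sum(np.abs(x[lag:lag + window]) ** 2)
    return float(num / den) if den > 0 else 0.0
```

A threshold such as $N_E = 0.8$ (the value quoted later in the text) can then be applied to this metric to declare that a Wi-Fi transmission is present.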
ML models enable us to replace heuristics with more robust and general alternatives. In this paper, we propose observing the \mbox{Wi-Fi} AP energy values during LTE-U OFF duration and using the data to train different ML models~\cite{zhang2019deep}. We also apply the models in an online experiment to detect the number of \mbox{Wi-Fi} APs. Finally, we demonstrate significant improvement in the performance of the ML approach as compared to the ED and AC detectors. \begin{table} \caption{Different Types of LTE-U CSAT. } \centering \begin{tabular}{|p{1.5cm}| p{2cm}| p{1.8cm}| p{1.8cm} |} \hline \bfseries \cellcolor{Gray} CSAT Types &\bfseries \cellcolor{Gray} Method &\bfseries \cellcolor{Gray} Pros &\bfseries \cellcolor{Gray} Cons \\ \hline Header Decoding (HD) & Decodes the \mbox{Wi-Fi} MAC header at the \mbox{LTE-U} BS & 100\% accurate & Additional Complexity~\cite{chai2016lte}, high cost\\ \hline Energy Detection (ED) & Based on the change in the \textit{energy level} of the air medium & Low-cost, low-complexity & Low-accuracy \cite{sathya2018energy}\\ \hline Auto-correlation (AC) & LTE-U BS performs correlation on the \mbox{Wi-Fi} L-STF symbol in the preamble & Low-cost, low-complexity & Medium accuracy (more accurate than ED)~\cite{sathya2019auto} \\ \hline Machine Learning (ML) & Train the model based on energy values on the channel & Much more accurate than ED and AC methods & Requires gathering data and training models\\ \hline \end{tabular} \label{table:csat} \end{table} \begin{figure}[htb!] \begin{center} \includegraphics[height=6.5cm,width=9cm]{DenseLTECoexistenceDeploymentSetup.png} \caption{Dense LTE \mbox{Wi-Fi} Co-existence Deployment Setup.} \label{expp} \end{center} \end{figure} \par Fig.~\ref{expp} illustrates an example of a dense LTE-U/\mbox{Wi-Fi} coexistence, where a number of \mbox{Wi-Fi} APs and one \mbox{LTE-U} BS are operating on the same channel, with multiple clients associated with each AP and BS. 
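The ML-based CSAT idea above amounts to supervised classification: label energy traces collected during the LTE-U OFF period with the known number of active APs, then train a model to predict that count from new traces. The sketch below is a deliberately minimal stand-in (a nearest-centroid classifier on summary features) running on synthetic traces; the feature set, noise level, and per-AP energy increments are invented for illustration and do not reflect the measured data or the models evaluated later in the paper.

```python
import numpy as np

def energy_features(trace_dbm):
    """Summary statistics of an energy trace captured during the LTE-U
    OFF period: mean level, spread, and peak."""
    t = np.asarray(trace_dbm, dtype=float)
    return np.array([t.mean(), t.std(), t.max()])

class NearestCentroidAPCounter:
    """Minimal classifier: learns one feature centroid per AP-count
    class from labelled energy traces, predicts by nearest centroid."""

    def fit(self, traces, labels):
        feats = np.array([energy_features(t) for t in traces])
        labels = np.asarray(labels)
        self.classes_ = np.unique(labels)
        self.centroids_ = np.array(
            [feats[labels == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, traces):
        feats = np.array([energy_features(t) for t in traces])
        dists = np.linalg.norm(
            feats[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[dists.argmin(axis=1)]
```

In practice any standard supervised learner can replace the nearest-centroid rule; the point is only that the AP count is inferred from labelled OFF-period energy data rather than from a hand-tuned threshold.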
In such a situation, it is crucial that LTE-U reduce its duty-cycle proportional to the number of Wi-Fi APs, else with a duty-cycle of 50\% the Wi-Fi APs will be starved of air-time. As the number of Wi-Fi APs increases on the channel, it becomes increasingly important to detect the number of APs accurately at the LTE-U BS without any coordination, \emph{i.e.,} in a distributed manner. According to the LTE-U forum, it is expected that the \mbox{LTE-U} BS will adjust its duty cycle when one or more Wi-Fi APs turn off, and vice versa. With a large number of Wi-Fi APs, it becomes harder to detect the number accurately using either energy-based or correlation-based approaches. In this work, our goal is to infer the presence of one or more Wi-Fi APs accurately from the collected energy level data using ML algorithms that have been trained on real data. We accomplish this by creating realistic open-lab experimental scenarios using a National Instruments (NI) USRP RIO board with an LTE-U module, five Netgear \mbox{Wi-Fi} APs, and five Wi-Fi clients. The rest of the paper is organized as follows. Section~\ref{sec:related-work} presents a brief overview of existing studies on ML as applied to wireless networks and LTE/Wi-Fi coexistence in the unlicensed spectrum. Section~\ref{ca} explains the channel access procedure in Wi-Fi using CSMA/CA and the LTE-U duty cycle mechanism. Section~\ref{sm} presents the coexistence system model and the impact of LTE-U and Wi-Fi transmissions on each other. Section~\ref{algo} explains the HD, ED and AC-based LTE-U duty cycle adaptation algorithms. Section~\ref{sec:ac-setup} describes the experimental set-up used to measure energy values and gather statistics of the energy level in the presence of one or more \mbox{Wi-Fi} APs. Section~\ref{sec:ML} then evaluates various ML algorithms and chooses the most appropriate one for adjusting the duty cycle based on the collected data. 
Experimental results are presented in Section~\ref{sec:experimental-results}. Section~\ref{comp} compares the performance (in terms of successful detection, delay, and different ML methods) of HD, ED, AC and ML for fixed and varying configurations. Finally, Section~\ref{sec:conclusion} concludes the paper with the main contributions and future work in this area. \section{Related Work}\label{sec:related-work} In this section, we briefly discuss (a) the existing work on LTE Wi-Fi coexistence without ML, (b) the use of ML in general wireless networks and (c) the application of ML to LTE Wi-Fi coexistence. \subsection{Existing work on LTE and Wi-Fi Coexistence} There has been a significant amount of research, from both academia and industry, on the coexistence of LTE and Wi-Fi that discusses several key challenges, such as Wi-Fi client association, interference management, fair coexistence, resource allocation, carrier sensing, etc. Coexistence scenarios are well studied in simulations for both LAA/Wi-Fi and LTE-U/Wi-Fi deployments \cite{chai2016lte,cano2016unlicensed,chen2016optimizing}. These papers examine coexistence fairness in varying combinations of detection threshold and duty-cycle. However, auto-correlation-based and energy-based methods for spectrum sensing in this coexistence context have not been well studied. Recently, we proposed an energy-based CSAT for duty cycle adaptation in LTE-U~\cite{sathya2018energy,sathya2018association,vs}, and studied this approach via rigorous theoretical and experimental analyses. The energy-based CSAT algorithm can infer the number of coexisting Wi-Fi APs by detecting the energy level in the channel, which is then used to adjust the duty cycle accordingly. Using a threshold of -42 dBm, the algorithm is able to differentiate between one and two Wi-Fi APs, with a successful detection probability $P_D$ of greater than 80\% and false positive probability $P_{FA}$ of less than 5\%. 
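The energy-based decision rule can be sketched as follows: average the power sensed during the LTE-U OFF period and compare it against the -42 dBm threshold quoted above. Averaging in the linear (mW) domain rather than directly in dBm is our own assumption here, made for physical consistency; the cited work should be consulted for the exact procedure.

```python
import math

def mean_power_dbm(samples_dbm):
    """Average received power: convert dBm samples to mW, average in
    the linear domain, then convert back to dBm."""
    lin = [10.0 ** (s / 10.0) for s in samples_dbm]
    return 10.0 * math.log10(sum(lin) / len(lin))

def second_ap_present(samples_dbm, threshold_dbm=-42.0):
    """Declare a second Wi-Fi AP when the average energy sensed during
    the LTE-U OFF period exceeds the threshold (default: the -42 dBm
    value reported in the text)."""
    return mean_power_dbm(samples_dbm) > threshold_dbm
```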
Hence, this initial work proved the feasibility of stand-alone energy-based detection, without the need for packet decoding. In subsequent work, we proposed a novel algorithm that utilizes the auto-correlation function (AC)~\cite{sathya2019auto} to infer the number of active Wi-Fi APs operating in the channel. The AC function is performed on the preamble of a signal to determine if the signal is a Wi-Fi signal. This work improved on the performance of the energy-based approach, achieving $P_D$ of 0.9 and $P_{FA}$ of less than 0.02 when using an AC threshold $N_E$ of 0.8. In both \cite{sathya2018energy,sathya2019auto}, the maximum number of Wi-Fi APs considered on the channel was two. In realistic dense deployment scenarios, we can expect more than 2 APs on the same channel. Hence, in this paper we study the performance of ED and AC for more realistic dense deployment scenarios. \subsection{ML as applied to Wireless Networks} In \cite{sun2019application}, several state-of-the-art applications of ML in wireless communication and unresolved problems have been described. Resource management in the MAC layer, networking and mobility management in the network layer, and localization in the application layer are some topics that have been identified as being suitable for ML approaches. Within each of these topics, the authors provide a survey of the diverse ML-based approaches that have been proposed. In \cite{chen2019artificial,zappone2019wireless}, a comprehensive tutorial has been provided on the use of artificial neural networks-based machine learning for enabling a variety of applications in wireless networks. In particular, the authors presented an overview of a number of key types of neural networks such as recurrent, spiking, and deep neural networks. 
For each type, the basic architecture as well as the associated challenges and opportunities have been presented, followed by an overview of the variety of wireless communication problems that can be addressed using artificial neural networks (ANNs). This work further investigated many emerging applications including unmanned aerial vehicles, wireless virtual reality, mobile edge caching and computing, Internet of Things, and multi-Random Access Technology (RAT) wireless networks. For each application, the authors provided the main motivation for using ANNs along with their associated challenges while also providing a detailed example for a use case scenario. \subsection{ML as applied to LTE Wi-Fi Coexistence} A learning-based coexistence mechanism for LTE-unlicensed-based heterogeneous networks (HetNets) was presented in \cite{tan2018learning}. The motivation was to maximize the normalized throughput of the unlicensed band while guaranteeing the Quality of Service (QoS) of users: the authors thus considered the joint resource allocation and network access problem. A two-level framework was developed to decompose the problem into two subproblems, which were then solved using learning-based approaches. The proposed solution achieves near-optimal performance and is more efficient and adaptive due to the distributed and learning-based approach. Authors in \cite{bayhantutorial} provide an overview of learning schemes that enable efficient spectrum sharing using a generic cognitive radio setting as well as LTE and Wi-Fi coexistence scenarios. Most LTE-U duty cycle solutions rely on static coexistence parameter configurations, which may not be suitable for dynamic real-life scenarios. Hence in \cite{de2019dm}, the author uses Markov decision process modeling along with an ML-based CSAT algorithm which adapts the LTE duty-cycle ratio to the transmitted data rate, with the aim of maximizing the Wi-Fi and LTE-U aggregated throughput. 
An ML-based approach was proposed in \cite{rastegardoost2018machine} for a model-free decision-making implementation of opportunistic coexistence of LTE-U with Wi-Fi, which enabled the LTE-U BS to dynamically identify and further exploit white spaces in the Wi-Fi channel, without requiring detailed knowledge of the Wi-Fi system. By adaptively adjusting the LTE-U duty cycle to Wi-Fi activity, the proposed algorithm enabled maximal utilization of idle resources for LTE-U transmissions, while decreasing the latency imposed on Wi-Fi traffic. The proposed approach also provided a means to control the trade-off between LTE-U utilization and Wi-Fi latency in the coexisting networks. In \cite{maglogiannis2018q}, the author analyzes the LTE-U scheme when it coexists with Wi-Fi and introduces an ML technique that can be used by an LTE-U network to learn the wireless environment and autonomously select the transmission opportunity (TXOP) and muting period configurations that can provide fair coexistence with other co-located technologies. Simulation results show how ML can assist LTE-U in finding optimal configurations and adapt to changes in the wireless environment, thus providing the desired fair coexistence. Authors in \cite{maglogiannis2019enhancing} propose a convolutional neural network (CNN) that is trained to perform identification of LTE and Wi-Fi transmissions and can also identify the hidden terminal effect caused by multiple LTE transmissions, multiple Wi-Fi transmissions, or concurrent LTE and Wi-Fi transmissions. The designed CNN has been trained and validated using commercial off-the-shelf LTE and Wi-Fi hardware equipment. The experimental results show that the data representation affects the accuracy of the CNN. The information obtained from the CNN can be exploited by the LTE-U scheme in order to provide fair coexistence between the two wireless technologies. 
The above papers on ML in wireless and unlicensed spectrum do not address the problem of accurately identifying the number of Wi-Fi APs, which is a crucial first step toward fair LTE-U/Wi-Fi coexistence. Hence, in this paper, we modify the classical ML approaches to develop algorithms that can identify the number of Wi-Fi APs on the air faster and more reliably than existing methods. Our approach is based on collecting data in realistic coexistence environments for both training and testing. We also compare the performance of the ML-based approaches with the more conventional ED and AC methods described above. \begin{table}[htb!] \caption{Experimental Set-up Parameters} \centering \begin{tabular}{|p{4cm}| p{4cm}|} \hline\bfseries \cellcolor{Gray} Parameter&\bfseries \cellcolor{Gray} Value \\ [0.4ex] \hline Available Spectrum and Frequency & 20 MHz and 5.825 GHz \\ \hline Maximum Tx power for both LTE and \mbox{Wi-Fi} & 23 dBm \\ \hline Wi-Fi sensing protocol & CSMA/CA \\ \hline Traffic & Full Buffer (Saturation Case) \\ \hline Wi-Fi \& LTE-U Antenna Type & MIMO \& SISO\\ \hline LTE-U data and control channel & PDSCH and PDCCH \\ \hline Type of Wi-Fi Clients & 2 Google Pixel, 1 Samsung, 1 Redmi, and 1 Apple Laptop \\ \hline \end{tabular} \label{sim} \end{table} \section{Channel Access Procedure for Wi-Fi and LTE-U}\label{ca} In this section, we discuss the differences in the channel access procedures for Wi-Fi, which uses CSMA/CA, and LTE-U, which uses the duty cycle mechanism. \subsection{Wi-Fi CSMA/CA} The Wi-Fi MAC distributed coordination function (DCF) employs CSMA/CA as illustrated in Fig.~\ref{wifit}. Each node attempting transmission must first ensure that the medium has been idle for a duration of DCF Interframe Spacing (DIFS) using the ED and Carrier Sensing (CS) mechanism. If either ED or CS is true, the Clear Channel Assessment (CCA) is set to be busy. 
If the channel is idle and the station has not just completed a successful transmission, the station transmits. Otherwise, if the channel is sensed busy during the DIFS sensing period or the station is contending after a successful transmission, the station persists with monitoring the channel until it is measured idle for a DIFS period, then selects a random back-off duration (counted in units of slot time) and counts down. Specifically, a station selects a back-off counter uniformly at random in the range $[0, 2^i W_0 - 1]$, where the value of $i$ (the back-off stage) is initialized to 0 and $W_0$ is the minimum contention window chosen initially. Each failed transmission due to packet collision results in incrementing the back-off stage by 1 (binary exponential back-off or BEB) and the node counts down from the selected back-off value; \emph{i.e.,} the node decrements the counter every back-off slot of duration $\sigma$ $\mu$s as long as no other transmissions are detected. If during the countdown a transmission is detected, the counting is paused (the back-off counter is frozen), and nodes continue to monitor the busy channel until it goes idle; thereafter the medium must remain idle for a further DIFS period before the back-off countdown is resumed for accessing the channel. Once the counter hits zero, the node transmits a packet. When a transmission has been completed successfully, the value of $i$ is reset to 0. The maximum value of the back-off stage $i$ is $m$, with the maximum contention window size of $W_m$, and the node stays in the $m$-th stage for one more unsuccessful transmission with the same contention window size $W_m$, i.e. the retry limit is 1. The values of $W_0$ and $m$ are specified in the standard. If the transmission at this final stage is also unsuccessful, the node drops the packet and resets the back-off stage to $i = 0$. 
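The binary exponential back-off rules above can be sketched in a few lines. The specific values of $W_0$ and $m$ below are typical 802.11 choices assumed for illustration; the standard fixes them per PHY.

```python
import random

W0 = 16   # minimum contention window (typical 802.11 value, assumed here)
M = 6     # maximum back-off stage m (assumed here)

def draw_backoff(stage, rng=random):
    """Uniform draw from [0, 2^i * W0 - 1], with the window capped at
    its maximum once the back-off stage reaches m."""
    window = (2 ** min(stage, M)) * W0
    return rng.randrange(window)

def next_stage(stage, success):
    """BEB stage update: reset to 0 after a successful transmission,
    otherwise increment the stage up to the maximum m."""
    return 0 if success else min(stage + 1, M)
```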
If a unicast transmission is successful, the intended receiver transmits an Acknowledgment frame (ACK) a Short Interframe Spacing (SIFS) duration after successful reception; the ACK frame consists of a preamble and a MAC header. The ACK is transmitted at the highest basic data rate (6 Mbps, 12 Mbps, or 24 Mbps) that does not exceed the rate used for the preceding data transmission. \begin{figure}[htb!] \begin{center} \includegraphics[width=\linewidth]{wifi-csma-ca-transmission.png} \caption{Wi-Fi CSMA/CA Transmission} \label{wifit} \end{center} \end{figure} \subsection{LTE-U Duty Cycle} LTE-U uses a duty-cycling approach (\emph{i.e.,} alternating ON and OFF periods, where the LTE BS is allowed to transmit only during the ON duration) in which the duty cycle (the ratio of the ON duration to one cycle period) is determined by the Wi-Fi usage perceived at the LTE-U BS using carrier sensing. During the ON period, the LTE-U BS schedules DL transmissions to UEs, unlike Wi-Fi, in which transmissions are governed by the CSMA/CA process. Fig.~\ref{lteu12} shows the LTE-U transmission for a duty cycle of 0.5. \begin{figure}[htb!] \begin{center} \includegraphics[height=1.8cm,width=9cm]{DutyCycleTransmission.png} \caption{LTE-U Duty Cycle Transmission} \label{lteu12} \end{center} \end{figure} LTE-U uses the basic LTE subframe structure, \emph{i.e.,} a subframe length of 1 ms; each subframe consists of two 0.5 ms slots. Each subframe carries 14 OFDM symbols, of which 1 to 3 are Physical Downlink Control Channel (PDCCH) symbols and the rest are Physical Downlink Shared Channel (PDSCH) data. LTE-U BSs start downlink transmissions synchronized with slot boundaries, for (at least) one subframe (2 LTE slots) duration. After a transmission, the intended receiver (or receivers) transmits the ACK on the uplink via the licensed band if the decoding is successful.
In LTE, a Resource Block (RB) is the smallest unit of radio resource that can be allocated to a user equipment (UE), equal to 180 kHz of bandwidth over a Transmission Time Interval (TTI) of one subframe (1 ms). Each 180 kHz RB contains 12 subcarriers, each with 14 OFDM symbols, for a total of 168 Resource Elements (REs). Depending upon the modulation and coding scheme (QPSK, 16-QAM, 64-QAM), each resource element in the RB carries 2, 4, or 6 bits per symbol, respectively. In an LTE system with 20 MHz bandwidth, there are 100 RBs available. \section{System Model and Impact of LTE-U and Wi-Fi on Each Other}\label{sm} In this section, we describe the coexistence system model assumed in the paper, followed by the mutual impact of LTE-U and Wi-Fi on each other. \subsection{Coexistence System Model} We assume a deployment where LTE-U and Wi-Fi are operating on the same unlicensed 20 MHz channel in the 5 GHz band. The LTE-U BS transmits only downlink packets on the unlicensed spectrum, while all uplink transmissions are on the licensed spectrum. Control and data packets are transmitted using PDCCH and PDSCH, respectively. The LTE-U BS operates at maximum transmit power using all possible resource blocks and the highest modulation and coding scheme (\textit{i.e.}, 64-QAM). We assume that the Wi-Fi APs also operate at maximum transmission power, transmitting full-buffer video traffic. CSMA/CA and the duty-cycle adaptation mechanism are used for channel access by Wi-Fi and LTE-U, respectively. Both Wi-Fi and LTE-U follow their respective retransmission schemes, such that when a packet transmission is unsuccessful (packet or acknowledgement lost), the packet is re-transmitted. Finally, we assume that the Wi-Fi APs support both active and passive scanning mode, \emph{i.e.,} both beacon and probe response packets are transmitted by the AP during the association process. \begin{figure}[t!]
\begin{center} \includegraphics[width=\linewidth]{WIFI-impact.png} \caption{Wi-Fi Impact on LTE-U ON Transmission.} \label{lteu1} \end{center} \end{figure} \begin{algorithm}[H] \caption{Header-decoding based LTE-U Scale Back}\label{alg:header} \textbf{Initialization:} $(i)$ $Beacon_i$ = 0 \\ $(ii)$ $Count.detect_{i}$ = 0, $Count.falsealarm_{i}$ = 0 \\ $(iii)$ $LastTime = 0$, $TimeSlot = 0.512 s$, $Threshold = 4$ \noindent\rule{8.7cm}{0.4pt} \While {true} { /* A Wi-Fi beacon with BSSID $i$ is detected at time $CurrentTime$ */ \\ $Beacon_i$ ++; \\ \If {$CurrentTime - LastTime \geq TimeSlot$} { $NumberOfAp = 0$; \\ \For {$i$ in $Beacon$} { \If {$Beacon_i \geq Threshold$} { $NumberOfAp$ ++; } $Beacon_i = 0$; \\ } $LastTime = CurrentTime$; \\ \For {$i$ = 1 to 5} { \If {$i$ Wi-Fi is ON} { \If {i == NumberOfAp} { $Count.detect_{i}$ ++; } \Else { $Count.falsealarm_{i}$ ++; } } } }} \end{algorithm} \subsection{Impact of Wi-Fi on LTE-U during the ON period}\label{s1} In order to observe the impact of Wi-Fi on LTE-U during the ON period (\emph{i.e.,} LTE-U is ON without appropriate sensing of a Wi-Fi transmission), we deploy an NI-based LTE-U BS (Section~V describes the experiment set-up in detail) on channel 165, which is a 20 MHz channel, and five Wi-Fi APs on the same channel. Each client is associated with one Wi-Fi AP with full-buffer video transmission. Fig.~\ref{lteu1} (a) shows the constellation of received signals when there is no Wi-Fi AP on the channel; in this case, the LTE-U BS can transmit data with the high modulation and coding scheme of 64-QAM. Similarly, Fig.~\ref{lteu1} (b) shows the energy values observed when there are 5 Wi-Fi APs on the same channel, where the X-axis represents time and the Y-axis represents energy values. Fig.~\ref{lteu1} (c) shows the effect of Wi-Fi transmissions on LTE-U during the ON period, when the Wi-Fi APs are unaware of a sudden LTE-U ON cycle starting in the middle of an ongoing Wi-Fi transmission: clearly the constellation is distorted.
This clearly points to the inefficient use of the spectrum and the need for the LTE-U BS to sense or learn the medium to identify the number of Wi-Fi APs on the air and scale back its duty cycle accordingly. \begin{figure}[t!] \begin{center} \includegraphics[width=\linewidth]{LTE-impact-on-WIFI.png} \caption{LTE-U Impact on Wi-Fi Transmission.} \label{wifi12} \end{center} \end{figure} \subsection{Impact of LTE-U ON transmission on Wi-Fi Data}\label{s2} In the case of Wi-Fi/Wi-Fi coexistence, where 5 Wi-Fi APs are deployed at a distance of 6 feet, we observe successful transmission of packets as shown in Fig.~\ref{wifi12} (a) and (b). We see that the CSMA mechanism works well for Wi-Fi/Wi-Fi coexistence, since the number of packets in error with no LTE-U is similar to that when Wi-Fi coexists with Wi-Fi. Fig.~\ref{wifi12} (c) shows the packet transmission errors when Wi-Fi coexists with a fixed LTE-U duty cycle: the number of Wi-Fi packets in error increases. To solve the above problem, the LTE-U forum proposed the dynamic CSAT approach \cite{forum,sathya2018energy,sathya2019auto} based on the number of Wi-Fi APs on the same channel. Fig.~\ref{lteu} shows the LTE-U duty cycle adaptation process when detecting a varying number of Wi-Fi APs. When no AP is detected on the channel, an LTE-U BS will operate at the maximum 95\% duty cycle~\cite{forum} (\emph{i.e.,} a minimum 1 ms OFF duration). When one AP is detected (assumed using a predetermined sensing technique), the BS will scale back to a 50\% duty cycle (\textit{i.e.}, 20 ms ON time and 20 ms OFF time). If a new Wi-Fi AP starts transmitting, it will contend with the existing AP only during the OFF time, which is 50\% of the available medium. Since this is unfair to the Wi-Fi APs, the LTE-U specification recommends scaling the duty cycle back to 33\% when more than one Wi-Fi AP is using the channel.
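The scale-back rule described above maps the detected number of Wi-Fi APs directly to a duty cycle; the following is a minimal sketch of this policy (not of any detection mechanism):

```python
def lteu_duty_cycle(num_wifi_aps):
    """Duty cycle chosen by the LTE-U BS as a function of the number of
    detected Wi-Fi APs, following the scale-back scheme in the text."""
    if num_wifi_aps == 0:
        return 0.95  # maximum duty cycle: minimum 1 ms OFF duration
    if num_wifi_aps == 1:
        return 0.50  # 20 ms ON time and 20 ms OFF time
    return 0.33      # more than one AP must share the channel
```

The detection of the number of APs that drives this mapping is exactly the problem addressed by the algorithms in the following sections.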
However, there is no specific mechanism proposed to detect the number of coexisting Wi-Fi APs in both sparse and dense deployment scenarios. \begin{figure}[t!] \begin{center} \includegraphics[width=\linewidth]{DutyCycleMechanism.png} \caption{LTE-U Duty Cycle Mechanism.} \label{lteu} \end{center} \end{figure} \section{Experimental Setup for Machine Learning Based Detection}\label{sec:ac-setup} Our experimental set-up consists of one LTE-U BS and a maximum of five Wi-Fi APs. To emulate the LTE-U BS, we use the National Instruments USRP 2953-R software defined radio (SDR), which is equipped with the LTE-U radio framework. There are five Netgear Wi-Fi APs and five Wi-Fi clients deployed in a static configuration. The Wi-Fi clients are a combination of laptops and smartphones supporting Wi-Fi 802.11ac connections. As soon as a client connects to its Wi-Fi AP, it starts a live video streaming application to simulate a full-buffer transmission. The experimental setup is shown in Fig.~\ref{exp} and the complete experimental parameters are described in Table~\ref{sim}. We set the BS and APs to be active in the same 20 MHz channel in the 5 GHz band (\textit{i.e.}, Wi-Fi channel 165 and LTE band 46 EARFCN 53540). We separated the APs and BS into six cells, with five cells (Cells A, C, D, E, and F) as Wi-Fi cells and one cell (Cell B) as the LTE-U cell. Each Wi-Fi cell consists of one AP and one client, while the LTE-U BS and UE are contained within the same USRP board. The BS transmits full-buffer data at maximum power by enabling all of its resource blocks with the highest modulation and coding scheme (\textit{i.e.}, 64-QAM). It operates at a 50\% duty cycle during the experiment, and listens to the configured unlicensed channel during the OFF period for \textit{RF power} and AC measurement. The \textit{RF power} measurement is configured in the LTE block control module of the NI LTE application framework, which outputs the energy value as defined in Section~\ref{s3}.
The AC function is also configured in the LTE block control module of the same framework and outputs the AC events as defined in Section~\ref{s4}. The energy values observed from Algorithm 2 are given as input to the ML algorithm (explained in detail in Section VII) to classify the number of Wi-Fi APs on the channel. Each Wi-Fi AP transmits full-buffer downlink data and beacon frames, with occasional probe responses if it receives probe requests from clients in the vicinity. We also ensure that there is no extra interference in the channel from other Wi-Fi APs. We measure the energy, the AC value, and the ML output (with the same energy values as input to the ML) at the LTE-U BS for the following scenarios: \begin{itemize} \item \textbf{Scenario 0:} No \mbox{Wi-Fi} APs are deployed and only one LTE-U cell (\textit{i.e.}, Cell B) is deployed. \item \textbf{Scenario 1:} One \mbox{Wi-Fi} AP (\emph{i.e.,} Cell A) and one LTE-U (\textit{i.e.}, Cell B) are deployed. \item \textbf{Scenario 2:} Two \mbox{Wi-Fi} APs (\textit{i.e.}, Cell A \& C) and one LTE-U (\textit{i.e.}, Cell B) are deployed. \item \textbf{Scenario 3:} Three \mbox{Wi-Fi} APs (\textit{i.e.}, Cell D, E, \& F) and one LTE-U (\textit{i.e.}, Cell B) are deployed. \item \textbf{Scenario 4:} Four \mbox{Wi-Fi} APs (\textit{i.e.}, Scenario 1: Cell A, Scenario 3: Cell D, E, \& F) and one LTE-U (\textit{i.e.}, Cell B) are deployed. \item \textbf{Scenario 5:} Five \mbox{Wi-Fi} APs (\textit{i.e.}, Cell A, C, D, E, \& F) and one LTE-U (\textit{i.e.}, Cell B) are deployed. \end{itemize} In all scenarios, Cell B measures the energy and AC values during the LTE-U OFF period, while the rest of the Wi-Fi cells are transmitting full-buffer downlink traffic. We also vary the distances and the LOS and NLOS environment of each cell. In the NLOS setup, a wall acts as an obstruction between the LTE-U BS and the Wi-Fi APs.
We measure the received Wi-Fi AP signals at the LTE-U BS at distances of 6, 10, and 15 feet (for example, in Scenario 5 all five Wi-Fi APs are placed at 6 feet from the LTE-U BS). Our previous work focused only on detecting Scenarios 1 and 2 (\textit{i.e.}, 1 and 2 Wi-Fi APs coexisting with LTE-U)~\cite{sathya2018energy, sathya2019auto}. Also, we demonstrated that Scenario 0 can be easily distinguished from the other scenarios~\cite{vs}. \section{LTE-U Duty Cycle Adaptation Algorithms}\label{algo} In order to solve the problems identified in the previous section, we propose header-decoding (HD), energy (ED) and auto-correlation (AC) based detection algorithms for a dense deployment scenario to identify the number of Wi-Fi APs on the channel. Fig.~\ref{dc1} explains how the different sensing algorithms work based on the known Wi-Fi packet structure. \subsection{Header-Decoding based LTE-U duty cycle adaptation algorithm} We assume that there is either a common preamble \cite{Quantenna,att} between the LTE-U and Wi-Fi systems or that the LTE-U BS has a full Wi-Fi decoder that allows it to decode the Wi-Fi MAC header and hence obtain the BSSID. Doing so, one can accurately detect the number of Wi-Fi APs on the channel, and hence header-based decoding is the most accurate method compared to energy, auto-correlation, and ML. However, the decision algorithm to adapt the duty cycle needs to be designed carefully to avoid misclassification. \begin{figure}[htb!] \begin{center} \includegraphics[height=7.6 cm,width=9cm]{LTE-duty-cycle-adaptation.png} \caption{LTE-U Duty Cycle Adaptation Algorithm.} \label{dc1} \end{center} \end{figure} We define a simple algorithm, shown in Algorithm~\ref{alg:header}, to classify the number of active Wi-Fi APs in each time slot. In brief, the algorithm counts the number of beacons of each uniquely identifiable BSSID in a defined time slot.
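The per-slot counting step of Algorithm~\ref{alg:header} can be sketched in Python as follows (the BSSIDs and beacon counts below are hypothetical placeholders):

```python
THRESHOLD = 4  # beacons per 0.512 s slot for an AP to count as active

def count_active_aps(beacons_per_bssid, threshold=THRESHOLD):
    """One pass of the algorithm's inner loop: the number of BSSIDs whose
    beacon count in the current time slot reaches the threshold."""
    return sum(1 for count in beacons_per_bssid.values() if count >= threshold)

# Hypothetical beacon counts observed in one time slot, keyed by BSSID.
slot_counts = {"bssid_a": 5, "bssid_c": 4, "bssid_d": 2}
```

Here only the BSSIDs whose per-slot beacon count reaches the threshold contribute to the estimate, so an AP that hopped away mid-slot is not counted.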
Since we can expect that an AP in a real deployment may hop between channels frequently, it is important to collect beacons over a longer period of time rather than deciding based on just one beacon. We initially set a time slot of 10 beacons (\textit{i.e.}, 1.024 s) and count the number of beacons for each BSSID in the time slot. We set a threshold of 9 beacons for an AP to be considered active; this means that there is 90\% confidence that the AP is actually active. The length of the time slot determines the inference delay, hence one would like this delay to be as small as possible. We reduced the time slot to 5 beacons (0.512 s), but to get the same accuracy we need to set the threshold to 4 beacons, which means that the confidence rate is at a lower 80\%. Thus, with a slightly lower confidence rate, we can reduce the inference time by half without compromising the detection accuracy. \begin{algorithm}[H] \caption{Energy Based LTE-U Scale Back}\label{alg:energy} \textbf{Input:} $\alpha_1, \alpha_2, \alpha_3, \alpha_4, \alpha_5$ \\ \textbf{Initialization:} $(i)$ $\alpha_6 = \infty$ \\ \hspace{2.5cm} $(ii) Count.detect_i$ = 0, $Count.falsealarm_i$ = 0 \\ \noindent\rule{8.7cm}{0.4pt} \While {true} { /* Received $Avg(Energy Level)$ over one second */ \\ \For {$i$ = 1 to 5} { \If {$i$ Wi-Fi is ON} { \eIf {$\alpha_i \leq$ Avg(Energy Level) $\leq \alpha_{i+1}$} { $Count.detect_i$ ++; } { $Count.falsealarm_i$ ++; } } } } \end{algorithm} \subsection{Energy based LTE-U duty cycle adaptation algorithm}\label{s3} The experiment setup is shown in Fig.~\ref{exp}. We measure the received energy at the LTE-U BS for different distances between the LTE-U BS and Wi-Fi APs and obtain histograms of the measured signal when one or more Wi-Fi APs are transmitting at 6, 10, and 15 feet from the LTE-U BS. We then fit the measured histograms to probability distribution functions as described in \cite{sathya2018energy} to develop a classification algorithm.
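The slot-size/confidence trade-off above follows from the standard 102.4 ms beacon interval; a small sketch, assuming confidence is the ratio of the threshold to the number of beacons per slot, as in the text:

```python
BEACON_INTERVAL = 0.1024  # seconds; the standard Wi-Fi beacon interval

def slot_tradeoff(beacons_per_slot, threshold):
    """Inference delay (seconds) and detection confidence for a time slot
    measured in beacons and a per-slot beacon threshold."""
    delay = beacons_per_slot * BEACON_INTERVAL
    confidence = threshold / beacons_per_slot
    return delay, confidence
```

With 10 beacons and a threshold of 9 this gives the 1.024 s / 90\% point from the text, and with 5 beacons and a threshold of 4 the 0.512 s / 80\% point.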
In Algorithm \ref{alg:energy}, the energy-based detector listens to the energy level in the channel and, according to a set of thresholds \cite{sathya2018energy}, decides whether to scale back the duty cycle or not. Since the measured energy depends on the number of detected Wi-Fi APs, the choice of thresholds is important to the algorithm. Finally, we implement the algorithm in the LTE-U BS NI hardware and validate it experimentally. First, we modify the NI LTE application framework to measure \textit{RF power} during the LTE-U OFF period. The collected energy values are then averaged over a one second time duration and used as the algorithm input. If the averaged energy value is greater than the specified threshold $\alpha_1$, \textit{i.e.}, if energy value $\geq \alpha_1$, then there is a possibility of Wi-Fi packets (beacon, probe request, probe response, data, or ACK) being transmitted in the channel. The BS can then declare whether one, two, three, four, or five APs are present, based on the other thresholds $\alpha_2$, $\alpha_3$, $\alpha_4$, $\alpha_5$ (\textit{e.g.}, if $\alpha_3 \leq \text{energy value} \leq \alpha_4$ then there are 3 APs in the channel). By keeping count of the correct and incorrect decisions made by the algorithm, we calculate the probabilities of correct detection and false alarm in predicting the number of Wi-Fi APs in the unlicensed spectrum. These probability values are used as a metric to evaluate the thresholds, such that we pick a set of thresholds with a high probability of correct detection and a low probability of false alarm.
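Algorithm~\ref{alg:energy}'s decision rule can be sketched as follows; the numerical thresholds are hypothetical placeholders, since the actual $\alpha_i$ are fit to the measured energy histograms:

```python
import math

# Hypothetical thresholds (dBm); the real alpha_1..alpha_5 are fit from
# measured energy histograms for 1..5 Wi-Fi APs, and alpha_6 = infinity.
ALPHA = {1: -60.0, 2: -50.0, 3: -44.0, 4: -40.0, 5: -37.0, 6: math.inf}

def num_aps_from_energy(avg_energy, alpha=ALPHA):
    """Classify the number of Wi-Fi APs from the one-second averaged
    energy: i APs are declared when alpha_i <= energy <= alpha_{i+1}."""
    if avg_energy < alpha[1]:
        return 0  # below alpha_1: no Wi-Fi activity detected
    for i in range(1, 6):
        if alpha[i] <= avg_energy <= alpha[i + 1]:
            return i
    return 0
```

In the experiments, a prediction is tallied as a detection when the returned count matches the number of APs actually switched on, and as a false alarm otherwise.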
\begin{algorithm}[H] \caption{Auto-correlation Based LTE-U Scale Back}\label{alg:ac} \textbf{Input:} $th_\rho$, $R$ \\ \textbf{Initialization:} $Count.detect_i$ = 0, $Count.falsealarm_i$ = 0 \\ \noindent\rule{8.7cm}{0.4pt} \While {true} { /* Received $T$ number of $AC$ values over one second */ \\ \For {i = 1 to 5} { \If {i Wi-Fi is ON}{ $Signal = 0$; \\ \For {t = 1 to T} { \If {$AC_t$ $\geq$ $th_{\rho}$} { $Signal$ ++; } } $ratio$ = $\frac{Signal}{T}$; \\ \eIf { $ratio \leq R_i$ } { $Count.detect_i$ ++; } { $Count.falsealarm_{i}$ ++; } } } } \end{algorithm} \subsection{AC based LTE-U duty cycle adaptation algorithm}\label{s4} In the same experiment setup as shown in Fig.~\ref{exp}, we count the total number of AC events that are above a threshold in every one second interval over a duration of 90 seconds. We measure the total number of events above the AC threshold at the LTE-U BS for 6, 10 and 15 feet distances. Then, we observe the PDF of the number of AC events above the threshold \cite{sathya2019auto} for Scenarios 0 to 5 described above. We make use of this key observation to develop a classification algorithm (\emph{i.e.,} Algorithm~\ref{alg:ac}) for both LOS and NLOS scenarios. The algorithm uses AC functions and optimal thresholds to determine the number of Wi-Fi APs in the channel; therefore, the selection of the thresholds is also important and will be shown in this section. We implement the algorithm in the LTE-U BS hardware and validate it experimentally. The AC function is performed at the LTE-U BS to sense the spectrum for Wi-Fi preamble signals (\emph{i.e.,} L-STF). The output of the function is an AC value which determines the likelihood that the captured signal is a Wi-Fi preamble. We observed over many experiments that a threshold $th_\rho$ of 0.25 is sufficient to determine that the captured signal is a Wi-Fi signal (beacon, probe request, probe response, data, or ACK).
Using this threshold, we predict the number of Wi-Fi signals in each one second period. Next, we calculate the ratio \cite{sathya2019auto} and compare it to $R_i$, a threshold determined during a preliminary experiment with $i$ Wi-Fi APs and no LTE-U on the channel. $R_i$ is determined such that the true positive rate is as high as possible and the false positive rate is as low as possible during the preliminary experiment. Since the observed ratio should not exceed $R_i$ when $i$ Wi-Fi APs are present in the channel, we declare a correct prediction that $i$ Wi-Fi APs are present if the ratio is less than or equal to the threshold $R_i$, and a false prediction otherwise. \begin{figure*}[htb!] \begin{center} \includegraphics[totalheight=11cm,width=12.5cm]{exe1.pdf} \caption{LTE \mbox{Wi-Fi} Co-existence Experimental Setup.} \label{exp} \end{center} \end{figure*} \section{ML Algorithms for LTE-U Duty Cycle Adaptation}\label{sec:ML} ML models enable us to replace heuristics with more robust and general alternatives. For the problem of distinguishing between different numbers of \mbox{Wi-Fi} APs, we train a model to detect a pattern in the signals instead of finding a specific energy threshold in a heuristic manner. State-of-the-art ML models leverage the unprecedented performance of neural networks, which are able to surpass human performance on many tasks, for example, image recognition~\cite{he2015delving}, and help us answer complex queries on videos~\cite{krishnan2018deeplens}. This efficiency is a result of large amounts of data that can be collected and labeled, as well as the use of highly parallel hardware such as GPUs or TPUs~\cite{jouppi2017datacenter,chetlur2014cudnn}. In the work described in this paper, we train our neural network models on NVIDIA GPUs and collect enough data samples to enable our models to achieve high accuracy. Our main task is a classification problem: to distinguish between zero, one, two, three, four, or five \mbox{Wi-Fi} BSSs.
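The ratio computation and decision step of Algorithm~\ref{alg:ac} can be sketched in Python as follows (the window values and the $R_i$ used in the test are illustrative, not measured):

```python
TH_RHO = 0.25  # AC value above which a sample is taken as a Wi-Fi preamble

def ac_ratio(ac_values, th_rho=TH_RHO):
    """Fraction of AC values in a one-second window that reach the
    preamble threshold (the 'ratio' computed in Algorithm 3)."""
    signal = sum(1 for v in ac_values if v >= th_rho)
    return signal / len(ac_values)

def declare_i_aps(ac_values, r_i, th_rho=TH_RHO):
    """Declare that i Wi-Fi APs are present when the observed ratio does
    not exceed the per-class threshold R_i determined offline."""
    return ac_ratio(ac_values, th_rho) <= r_i
```

As in the hardware implementation, the per-class thresholds $R_i$ are fixed ahead of time from preliminary experiments and only the ratio is computed online.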
We consider machine learning models that take time-series data of width $w$ as input, giving an example space $\mathcal{X} = \mathcal{R}^{w}$, where $\mathcal{R}$ denotes the real numbers. Our discrete label space of $k$ classes is represented as $\mathcal{Y} = \{0,1\}^k$. For example, $k=3$ classes enable us to distinguish between 0, 1, and 2 \mbox{Wi-Fi} APs. Machine learning models represent parametrized functions (by a weight vector $\theta$) between the example and label spaces, $f(x;\theta): \mathcal{X} \mapsto \mathcal{Y}$. The weight vector $\theta$ is iteratively updated during the training process until the convergence of the training accuracy or training loss (usually determined by very small changes to the values despite further training), and then the final state of $\theta$ is used for testing and real-time inference. \subsection{Data preparation} The training and testing data is collected over an extended period of time, with a single scenario taking about 8 hours. For ease of exposition, we consider the case with one and two \mbox{Wi-Fi} APs. We collect data for each \mbox{Wi-Fi} AP independently and store the two datasets in separate files. Each file contains more than 2.5 million values, and the total raw data size in CSV format is about 60 MB. Each file is treated as time-series data with a sequence of values that are first divided into chunks. We overlap consecutive time-series chunks by three-fourths of their width $w$. For example, for a chunk width $w=128$, the first chunk starts at index 0, the second chunk starts at index 32, the third chunk starts at index 64, and so on. This is part of our data augmentation and a soft guarantee that far fewer patterns are broken on the boundaries of chunks. The width $w$ of the (time-series data) chunk acts as a parameter of our ML model. It denotes the number of samples that have to be provided to the model to perform the classification.
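The chunking scheme above (width $w$ with an overlap of three-fourths of $w$, i.e. a stride of $w/4$) can be sketched as:

```python
def make_chunks(series, w):
    """Split a series into chunks of width w overlapping by 3/4 of w,
    i.e. a stride of w // 4 (starts at 0, 32, 64, ... for w = 128)."""
    stride = w // 4
    return [series[s:s + w] for s in range(0, len(series) - w + 1, stride)]
```

Only full-width chunks are kept, so a trailing remainder shorter than $w$ is dropped.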
The longer the time-series width $w$, the more data samples have to be collected during inference. The result is a higher latency of the system; however, the more samples are gathered, the more accurate the predictions of the model. On the other hand, with a smaller number of samples per chunk, the time to collect the samples is shorter and the inference is faster, but of lower accuracy. We elaborate more on this topic in Section~\ref{sec:experimental-results}. The collection of chunks is shuffled randomly. We divide the input data into training and test sets, each 50\% of the overall data size. The aforementioned shuffling ensures that we evenly distribute different types of patterns through the training and test sets so that the classification accuracy of both sets is comparable. Each of the training and test sets contains roughly the same number of chunks that represent one or two \mbox{Wi-Fi} APs. We enumerate classes from 0. For the case of 2 classes (either one or two \mbox{Wi-Fi}s), we denote by \textit{0} the class that represents a single \mbox{Wi-Fi} AP and by \textit{1} the class that represents 2 \mbox{Wi-Fi} APs. Next, we compute the mean $\mu$ and standard deviation $\sigma$ only on the training set. We check for outliers and replace the values that are more than $4\sigma$ away from $\mu$ with the $\mu$ value (e.g., there are only 4 such values in class \textit{1}). The data for the two classes have different ranges (from about -45.46 to -26.93 dBm for class \textit{0}, and from about -52.02 to about -22.28 dBm for class \textit{1}). Thus, we normalize the data $D$ in the standard way: $ND = \frac{(D - \mu)}{\sigma}$, where $ND$ is the normalized data output, and $\mu$ and $\sigma$ are the mean and standard deviation computed on the training data. We attach the appropriate label to each chunk of the data.
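A minimal sketch of the preparation steps above (outlier replacement followed by standardization with training-set statistics); we interpret ``larger than $4\sigma$'' as deviating from $\mu$ by more than $4\sigma$:

```python
def prepare(train, test):
    """Replace training values more than 4*sigma away from mu with mu,
    then normalize both sets as ND = (D - mu) / sigma, where mu and sigma
    are computed on the training set only."""
    mu = sum(train) / len(train)
    sigma = (sum((v - mu) ** 2 for v in train) / len(train)) ** 0.5
    train = [mu if abs(v - mu) > 4 * sigma else v for v in train]
    normalize = lambda data: [(v - mu) / sigma for v in data]
    return normalize(train), normalize(test)
```

Computing $\mu$ and $\sigma$ on the training set only, and reusing them for the test set, avoids leaking test statistics into the model.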
The overall size of the data after preparation to detect one or two \mbox{Wi-Fi} APs is about 382 MB, where the \mbox{Wi-Fi} APs are on opposite sides of the \mbox{LTE-U} BS and placed at a 6 feet distance from the \mbox{LTE-U} BS. We collect data for many more scenarios and present them in Section~\ref{sec:experimental-results}. The final size of the collected data is 3.4 GB. For training, we do not insert values from different numbers of \mbox{Wi-Fi} APs into a single chunk. The received signal at the \mbox{LTE-U} BS has higher energy on average for more \mbox{Wi-Fi} APs, thus there are differences in the mean values of each dataset. Our data preparation script handles many possible numbers of \mbox{Wi-Fi} APs and generates the data in a format that can be used for model training and inference (we follow the format for datasets from the UCR archive). In the future, we plan on gathering additional data samples for more Wi-Fi APs and making the dataset more challenging for classification. \subsection{Neural network models: FC, VGG and FCN} \label{sec:fcn-neural-nets} Our data is treated as a uni-variate time-series for each chunk. There are many different models proposed for the standard time-series benchmark~\cite{chen2015ucr}. First, we test \textit{fully connected (FC)} neural networks. For simple architectures with two linear layers followed by the ReLU non-linearity, the maximum accuracy achieved is about 90\%. Adding more linear layers, using other non-linearities (e.g., sigmoid), or applying weight decay does not increase the accuracy of the model significantly. Thus, we next extract more patterns from the data using convolutional layers. Second, we adapt the \textit{VGG} network~\cite{simonyan2014very} to the one-dimensional classification task. We changed the number of weight layers to 6 (we also tested 7, 5, and 4 layers, but found that 6 gives the highest test accuracy of about 99.52\%).
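The simple FC baseline (two linear layers with a ReLU in between) can be sketched with NumPy as follows; the weights are random placeholders rather than trained parameters, and the shapes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def fc_forward(x, w1, b1, w2, b2):
    """Two linear layers with a ReLU non-linearity in between: the FC
    baseline architecture (untrained placeholder weights)."""
    return relu(x @ w1 + b1) @ w2 + b2

w, hidden, k = 128, 64, 2  # chunk width, hidden units, number of classes
w1, b1 = rng.standard_normal((w, hidden)), np.zeros(hidden)
w2, b2 = rng.standard_normal((hidden, k)), np.zeros(k)
logits = fc_forward(rng.standard_normal((32, w)), w1, b1, w2, b2)
```

Each row of the input is one normalized chunk of width $w$, and each output row holds one logit per class.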
However, the drawback is that with fewer convolutional layers, the fully connected layers at the end of the \textit{VGG} net become bigger, to the point that it hurts the performance (for 4 weight layers it drops to about 95.75\%). This architecture gives us higher accuracy but is rather difficult to adjust to small data.\footnote{The dimensionality of the data is reduced slowly because of the small filter of size 3.} Finally, one of the strongest and most flexible models, called \textit{FCN}, is based on convolutional neural networks that find general patterns in time-series sequences~\cite{wang2017time}. The advantages of the model are: simplicity (no data-specific hyper-parameters), no additional data pre-processing required, no feature crafting required, and significant academic and industrial effort into improving the accuracy of convolutional neural networks~\cite{dziedzic2019band, lavin2016fast}. The architecture of the FCN network contains three blocks, each of which consists of a convolutional layer, followed by batch normalization $f(x) = \frac{x - \mu}{\sqrt{\sigma^2 + \epsilon}}$ (where $\epsilon$ is a small constant added for numerical stability) and the ReLU activation function $y(x) = \max(0, x)$. There are 128, 256, and 128 filter banks in the three consecutive layer blocks, where the sizes of the filters are 8, 5, and 3, respectively. We follow the standard convention for Convolutional Neural Networks (CNNs) and refer to the discrete cross-correlation operation as convolution. The input $x$ to the first convolution is the time-series data chunk with a single channel $c$. After its convolution with $f$ filters, denoted as $y$, the output feature map $o$ has $f$ channels. For training, we insert $s=32$ time-series data chunks into a mini-batch.
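One FCN block (convolution, then batch normalization, then ReLU, following the formulas above) can be sketched with NumPy; this reduced example uses 8 filters of size 8 instead of the full 128-filter bank:

```python
import numpy as np

def conv1d_same(x, filters):
    """'Same'-padded 1-D cross-correlation (the CNN convention noted in
    the text): x is (length, in_channels), filters is
    (num_filters, kernel_size, in_channels)."""
    f, k, c = filters.shape
    pad_left = k // 2
    xp = np.pad(x, ((pad_left, k - 1 - pad_left), (0, 0)))
    out = np.empty((x.shape[0], f))
    for j in range(f):
        for t in range(x.shape[0]):
            out[t, j] = np.sum(xp[t:t + k] * filters[j])
    return out

def batch_norm(x, eps=1e-5):
    """f(x) = (x - mu) / sqrt(sigma^2 + eps), per channel (learned scale
    and shift omitted for brevity)."""
    return (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)

def fcn_block(x, filters):
    """One FCN block: convolution -> batch normalization -> ReLU."""
    return np.maximum(0.0, batch_norm(conv1d_same(x, filters)))

# Reduced example: a chunk of width 128 with one channel, 8 filters of size 8.
rng = np.random.default_rng(1)
features = fcn_block(rng.standard_normal((128, 1)), rng.standard_normal((8, 8, 1)))
```

Stacking three such blocks with 128, 256, and 128 filters of sizes 8, 5, and 3 reproduces the FCN layout described above.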
We have $j \in f$ and the discrete convolution~\cite{vasilache2014fast} can be expressed as: \begin{align} o &= x * y \end{align} and in the Einstein notation: \begin{align} o_{(s,j)} &= \sum_{i \in c} x_{(s,i)} \cdot y_{(j,i)} \end{align} \subsection{ML models from scikit-learn} \label{sec:sklearn-ml} To diversify the machine learning models used in our comparison, we select the most popular models from the scikit-learn (also denoted as \textit{sklearn}) library\footnote{https://scikit-learn.org/stable/index.html}. The library exposes classical machine learning algorithms implemented in Python and is a common tool used in science and engineering. We run our experiments using \textit{sklearn} version 0.19.1 with Python 3.6. We analyze how the following models perform on our \mbox{Wi-Fi} data and report their test accuracy. The decision tree is a simple classifier that learns decision rules inferred from the data features. The deeper the tree, the more complex the decision rules and the fitter the model. The decision tree classifier achieves an accuracy of 79.46\% for the task of distinguishing between one or two \mbox{Wi-Fi} APs. The AdaBoost~\cite{multiClassAdaBoost} classifier is one of the best out-of-the-box models in the \textit{sklearn} library; it creates an ensemble of classifiers. In our experiments, AdaBoost begins by fitting a decision tree classifier on the original dataset and then fits additional decision tree classifiers on the same dataset, but with the weights of incorrectly classified instances modified such that subsequent classifiers focus more on difficult cases. It is tuned by adjusting the maximum number of decision tree classifiers used. AdaBoost achieves an accuracy of 94.57\%. Random Forest is an averaging algorithm based on randomized decision trees; its test accuracy is 79.87\%. We find that the best tested model from the \textit{sklearn} library is AdaBoost.
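The reweighting behavior described above, where subsequent classifiers focus more on difficult cases, can be sketched for the binary case as follows; this is a simplified AdaBoost update, not \textit{sklearn}'s exact implementation:

```python
import math

def adaboost_reweight(weights, correct):
    """One AdaBoost round: compute the weighted error and the classifier
    weight alpha, then up-weight misclassified samples and renormalize."""
    err = sum(w for w, c in zip(weights, correct) if not c) / sum(weights)
    alpha = 0.5 * math.log((1 - err) / err)
    updated = [w * math.exp(-alpha if c else alpha)
               for w, c in zip(weights, correct)]
    total = sum(updated)
    return [w / total for w in updated], alpha
```

After one round, the total weight of misclassified samples rises to one half, which is exactly what forces the next classifier to concentrate on them.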
The highest test accuracy achieved by AdaBoost for the standard case with two \mbox{Wi-Fi} APs is worse by about 5\% when compared to the overall best FCN model (described in Section~\ref{sec:fcn-neural-nets}), which achieves an accuracy of 99.38\% for the same configuration (with 2 \mbox{Wi-Fi} APs, 512 chunk size, NLOS, and 6 feet distance). For more than 5 classes, the Random Forest model achieves higher accuracy than AdaBoost. \subsection{Time-series specific models} \label{sec:boss-vs} The BOSS in Vector Space (BOSS VS) model~\cite{boss-vs} is a time-series classification algorithm whose properties make it suitable for our task. The algorithm is characterized by fast inference, tolerance to noise (which enables us to achieve high test accuracy), and moderate training time, which allows for periodic model updates. Moreover, BOSS VS achieves its best test accuracy on repetitive and long time-series data. Within the time-series specific models, we also compared against WEASEL~\cite{WEASEL}, which yielded lower test accuracy despite a much longer training time. We run the BOSS VS time-series specific model for the NLOS 6 feet case. Other time-series models train for much longer (on the order of days) on our large (a few GB) time-series data, or do not even fit into the 128 GB of RAM provided. We observe that for 2 and up to 4 \mbox{Wi-Fi} APs, the performance of the BOSS VS model is on par with the performance of the FCN model. However, for the scenario where we have to distinguish between 0 to 5 \mbox{Wi-Fi} APs, the accuracy of the FCN model is higher by about 7\%. One concern with the BOSS VS model is that we have to use a machine with 128 GB of RAM to train it, and for larger data sizes an out-of-memory exception is thrown as well (the model is implemented in Java). For the FCN, we are able to scale to an arbitrary amount of data.
Based on the thorough experimental analysis, we see the FCN model and other neural network based models as the most accurate and scalable models that can be used to predict the number of Wi-Fi APs. \subsection{FFT compression} We use the FFT-based convolution with compression proposed in~\cite{dziedzic2019band} and here describe its essential component. We express input $x$ and filter $y$ as discrete functions that map tensor index positions $n$ to values $x[n]$. Their corresponding Fourier representation re-indexes tensors in the spectral domain: \[ F_x[\omega] = F(x[n]) ~~~~~~~ F_y[\omega] = F(y[n]) \] This mapping is invertible $x = F^{-1}(F(x))$. Convolutions in the spectral domain correspond to element-wise multiplications: \[ x * y = F^{-1}(F_x[\omega] \cdot F_y[\omega]) \] For natural data, such as time-series data, a substantial portion of the high-frequency domain is close to 0. This observation allows us to compress the data. Let $M_c[\omega]$ be a discrete indicator function defined as: \[ M_c[\omega] = \begin{cases} 1, \omega \le c\\ 0,\omega > c \end{cases} \] $M_c[\omega]$ is a mask that limits the input data and filters to a certain \emph{band} of frequencies. The FFT-based convolution with compression is defined as follows: \begin{align} x *_c y & = F^{-1}\{(F_x[\omega] \cdot M_c[\omega]) \cdot (F_y[\omega] \cdot M_c[\omega])\} \label{eq:fft-based-conv} \end{align} The mask $M_c[\omega]$ is applied to both the signal $F_x[\omega]$ and filter $F_y[\omega]$ (in equation~\ref{eq:fft-based-conv}) to indicate the compression of both arguments. \section{Experimental results}\label{sec:experimental-results} In this section we discuss the model training, inference and transition between different classes. The code for our project can be found on github:~\url{http://bit.ly/2Ob5kAr}. \subsection{Training and Inference} Each model is trained for at least 100 epochs. We experiment with different gradient descent optimization algorithms, e.g. 
Stochastic Gradient Descent (SGD) and Adaptive Moment Estimation (Adam)~\footnote{A very good explanation can be found here: http://bit.ly/2Y9XaQ8}. For the SGD algorithm, we grid search for the best initial learning rate and primarily use 0.0001. The learning rate is reduced on plateau by 2X after 50 consecutive iterations (scheduled patience). SGD is used with momentum value 0.9. We use standard parameters for the Adam optimization algorithm. The batch size is set to $s=32$ to provide high statistical efficiency. The weight decay is set to 0.0001. For our neural network models, the dataset is relatively simple. The \mbox{Wi-Fi} data can be compared in its size and complexity to the MNIST dataset~\cite{deng2012mnist} or to the GunPoint series from the UCR archive~\cite{chen2015ucr}. \subsection{Time-series width} \begin{figure}[htb!] \begin{center} \includegraphics[width=8cm, height = 5.2cm]{chunk-size/test-accuracy-chunk-size5.pdf} \caption{The test accuracy (\%) for a model trained and tested for a given chunk size (ranging from 1 to 2048) to distinguish between 2 classes (either 1 or 2 \mbox{Wi-Fi} APs), 3 classes (distinguish between 0, 1, or 2 \mbox{Wi-Fi} APs), 4 classes (distinguish between 0, 1, 2, or 3 \mbox{Wi-Fi} APs), and 5 classes (distinguish between 0, 1, 2, 3 or 4 \mbox{Wi-Fi} APs) } \label{fig:test-accuracy-chunk-size} \end{center} \end{figure} The number of samples collected per second by the LTE-U BS is about 192. The inference of a neural network is executed in milliseconds and can be further optimized by compressing the network. The final width of the time-series chunk imposes a major bottleneck in terms of the system latency. The smaller the time-series chunk width $w$, the lower the latency of the system. However, the neural network has to remain highly accurate despite the small amount of data provided for its inference. 
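The learning-rate schedule described above (halve the rate after 50 consecutive iterations without improvement) can be sketched framework-free; the class below is an illustrative stand-in, not the actual PyTorch scheduler, and the numbers (factor 2X, patience 50, initial rate 0.0001) come from the text:

```python
class PlateauHalving:
    """Halve the learning rate after `patience` consecutive
    iterations without improvement of the monitored loss."""
    def __init__(self, lr=1e-4, factor=0.5, patience=50):
        self.lr, self.factor, self.patience = lr, factor, patience
        self.best = float('inf')
        self.bad = 0

    def step(self, loss):
        if loss < self.best:          # improvement: reset the counter
            self.best, self.bad = loss, 0
        else:                         # plateau: count, then decay
            self.bad += 1
            if self.bad >= self.patience:
                self.lr *= self.factor
                self.bad = 0
        return self.lr

sched = PlateauHalving(lr=1e-4, patience=50)
sched.step(1.0)                       # first loss sets the baseline
for _ in range(50):                   # 50 iterations with no improvement
    sched.step(1.0)
assert abs(sched.lr - 5e-5) < 1e-12   # learning rate halved once
```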
Thus, we train many models and systematically vary the chunk width $w$ from 1 to 2048 (see Fig.~\ref{fig:test-accuracy-chunk-size}). In this case, each model is trained only for the single scenario (placement of the \mbox{Wi-Fi} APs) and with zero, one, two, or three active \mbox{Wi-Fi} APs. When we decrease the chunk size down to a single sample, the test accuracy deteriorates steadily to a random choice out of the 3 or 4 classes (accuracy of about 33\% and 25\%, respectively); for 2 classes, the performance is very close to the ED (Energy-based Detection) method. \begin{figure}[htb!] \begin{center} \includegraphics[width=1.0\linewidth]{wifi-energy/wifi3-graph2.pdf} \caption{\textbf{Number of \mbox{Wi-Fi} APs.} The values of the energy (in dBm) captured over 2048 samples at the \mbox{LTE-U} BS for scenarios with 1, 2, and 3 \mbox{Wi-Fi} APs at 6 feet, NLOS. The more \mbox{Wi-Fi} APs are active, the more energy peaks we observe.} \label{fig:energy_values_number_of_wifis} \end{center} \end{figure} \begin{figure}[htb!] \begin{center} \includegraphics[width=1.0\linewidth]{wifi-energy/wifi_distance6_2_wifis-graph2.pdf} \caption{\textbf{Distances from LTE-U.} The values of the energy (in dBm) captured over 2048 samples at the \mbox{LTE-U} BS for 2 \mbox{Wi-Fi} APs at 6, 10, and 15 feet, NLOS. The closer the \mbox{Wi-Fi} APs are to the LTE-U BS, the higher the captured energy.} \label{fig:energy_values_distances} \end{center} \end{figure} \begin{figure}[htb!] \begin{center} \includegraphics[width=1.0\linewidth]{wifi-energy/wifi_los_nlos-graph2.pdf} \caption{\textbf{NLOS vs LOS.} The values of the energy (in dBm) captured over 2048 samples at the \mbox{LTE-U} BS for 2 \mbox{Wi-Fi} APs at 6 feet, in NLOS and LOS scenarios. 
The fewer the obstructions, the higher the captured energy.} \label{fig:energy_values_nlos_los} \end{center} \end{figure} We present the energy of the signals captured in different configurations: (1) Fig.~\ref{fig:energy_values_number_of_wifis} shows the values of energy captured for different numbers of \mbox{Wi-Fi} APs (one, two and three), (2) Fig.~\ref{fig:energy_values_distances} demonstrates the scenario with different distances of the \mbox{Wi-Fi} APs from the LTE-U BS, and (3) Fig.~\ref{fig:energy_values_nlos_los} gives insight into the energy of the signal in NLOS and LOS scenarios. We consider in detail the signal from about the 1500th sample to the 2000th sample in Fig.~\ref{fig:energy_values_number_of_wifis}. It is challenging to distinguish between two or three Wi-Fis\footnote{The energy values for 4 and 5 Wi-Fi APs are more dense and challenging. For better visualization, we plot only 1, 2 and 3 \mbox{Wi-Fi} APs.}. Visual inspection of the signals suggests that the width of the time-series chunk should be longer than 500 samples. Signals with a width of 384 achieve test accuracy below 99\%, and signals with a width of 512 can be trained to obtain 99.68\% test accuracy. Based on the experiments in Figs. \ref{fig:test-accuracy-chunk-size} and \ref{fig:energy_values_number_of_wifis}, we find that the best trade-off between accuracy and inference time is achieved for a chunk~of~size~512. \subsection{Transitions between classes}\label{sec:classTransition} When we switch to another class (change the state of the system in terms of the number of Wi-Fis), we account for the transition period. If a new \mbox{Wi-Fi} is added (or one of the existing Wi-Fis is removed) within a given 1-second window, the chunks collected during that first second contain values from both $n$ and $n+1$ (or $n-1$) Wi-Fis. 
An easy workaround for the \textit{contaminated} chunk is to change the state of the system to the new number of Wi-Fis only after the same class is returned by the model in two consecutive inferences (classifications). \subsection{Real-time inference} \begin{figure}[htb!] \begin{center} \includegraphics[width=8cm, height = 4cm]{inference/inference-schema3.pdf} \caption{The schema of the inference process, where the input received by the \mbox{LTE-U} BS is the signal from the \mbox{Wi-Fi}s and the output is the predicted number of \mbox{Wi-Fi}s.} \label{fig:inference} \end{center} \end{figure} We deploy the model in real time, in a setup similar to the energy data collection experiment, as shown in Fig.~\ref{fig:inference}. We prepare the model for the inference task only, in the following steps. Python scripts load and deploy the trained PyTorch model. We set up the \mbox{Wi-Fi} devices and generate some network load for each device. The \mbox{LTE-U} BS is connected to a computer with hardware requirements of at least 8 GB RAM (installed memory), a 64-bit operating system, an x64-based processor, Intel(R) Core i7, CPU clock 2.60GHz. The energy of the \mbox{Wi-Fi} transmission signal at a given moment in time is captured using NI LabVIEW. From the program, we generate an output file or write the data to a pipe. The ML model reads the new values from the file until it reaches the time-series chunk length. Next, the chunk is normalized and passed through the model, which gives a categorical output that indicates the predicted number of \mbox{Wi-Fi}s in the real-time environment. \section{Performance comparison between HD, ED, AC and ML methods}\label{comp} We analyze and study the performance differences between the HD, ED, AC and ML methods for different configuration setups and discuss the inference delay. For the ML method, we validate the performance on real-time inference data. 
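The two-consecutive-inference rule of Section~\ref{sec:classTransition} can be sketched as a small filter over the stream of model predictions (a hypothetical helper for illustration, not part of the deployed scripts):

```python
def debounce(predictions, initial=0):
    """Adopt a new class only after it is returned by two
    consecutive inferences; otherwise keep the current state."""
    state = initial
    prev = None
    reported = []
    for p in predictions:
        if p == prev and p != state:
            state = p          # two identical classifications in a row
        reported.append(state)
        prev = p
    return reported

# A single 'contaminated' prediction (3) during a transition is ignored.
assert debounce([2, 2, 3, 2, 2], initial=1) == [1, 2, 2, 2, 2]
```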
For the final evaluation, we train a single Machine Learning model that is based on the FCN network and is used for all the following experiments. The model is trained on the whole dataset of size 3.4 GB, where the train and test sets are of the same size of about 1.7 GB. \begin{figure}[htb!] \begin{center} \includegraphics[width=\linewidth]{ml-methods/data42020-02-01-10-51-20-361714.pdf} \caption{Comparison of test accuracy for different ML methods. A number of \mbox{Wi-Fi} APs equal to 2 denotes the Case D configuration (NLOS, 6 feet). Thus, 2 on the x axis corresponds to distinguishing between 1 and 2 \mbox{Wi-Fi} APs, whereas 3 denotes distinguishing between 0, 1, or 2 \mbox{Wi-Fi} APs. Similarly, the values on the x axis (4,5) denote distinguishing from 0 to (x-1) WiFi APs.} \label{fig:ml-methods-compare} \end{center} \end{figure} \subsection{Comparison between ML methods} We present a comparison between the ML methods in Fig.~\ref{fig:ml-methods-compare}. The time-series specific neural network models, such as FCN (\ref{sec:fcn-neural-nets}) as well as BOSS VS (\ref{sec:boss-vs}), perform much better than the general purpose models from the scikit-learn library (described in section~\ref{sec:sklearn-ml}). The middle ground between the two options is a simple two-layer convolutional network called \textit{LeNet}. The main benefit of using FCN (MEDIUM) or BOSS VS is their greater learning capacity compared with \textit{LeNet} or the scikit-learn models. There is a negligible difference in terms of test accuracy between the FCN and BOSS VS models. However, the FCN models can scale to much bigger data sizes, and we observe that the BOSS VS model often goes out of memory for more than a few GBs of input data. Thus, we select FCN as our main Machine Learning (ML) model for all the remaining experiments. \subsection{Successful Detection at Fixed Distance} \begin{figure}[htb!] 
\begin{center} \includegraphics[width=9cm, height = 5.4cm]{wifi-nlos/wifi-n-los-one-model-ri2.pdf} \caption{Comparison of results for successful detection between ED, AC and ML methods. ML results are presented for the test data (denoted as $ML_t$:) and for the real time inference (denoted as $ML_r$:).} \label{sample1} \end{center} \end{figure} \begin{table*} \centering \caption{Performance of detection for fixed distance configuration setup.} \begin{tabular}{|*{15}{c|}} \hline \multicolumn{1}{|c}{\cellcolor{Gray} Configuration} & \multicolumn{1}{|c}{\cellcolor{Gray} Classes} & \multicolumn{2}{|c}{\cellcolor{Gray} HD (\%)} & \multicolumn{2}{|c}{\cellcolor{Gray} ED (\%)} & \multicolumn{2}{|c|}{\cellcolor{Gray} AC (\%)} & \multicolumn{2}{|c|}{\cellcolor{Gray} ML (\%)} \\ \hline \cellcolor{Gray} Distance & \cellcolor{Gray} \# of Wi-Fis & \cellcolor{Gray} LOS & \cellcolor{Gray} NLOS & \cellcolor{Gray} LOS & \cellcolor{Gray} NLOS & \cellcolor{Gray} LOS & \cellcolor{Gray} NLOS & \cellcolor{Gray} LOS & \cellcolor{Gray} NLOS \\ \hline \multirow{5}{*}{6F} & 2 & 100 & 100 & 96 & 91 & 98 & 96 & 98.60 & 99.10 \\ \cline{2-10} & 3 & 100 & 100 & 88 & 85 & 95 & 90 & 99.10 & 99.50 \\ \cline{2-10} & 4 & 100 & 100 & 80 & 74 & 87 & 81 & 99.40 & 99.00 \\ \cline{2-10} & 5 & 100 & 100 & 74 & 62 & 76 & 65 & 99.20 & 98.70 \\ \cline{2-10} & 6 & 100 & 100 & 62 & 51 & 70 & 59 & 99.30 & 99.0 \\ \hline \multirow{5}{*}{10F} & 2 & 100 & 100 & 94 & 89 & 97 & 94 & 99.80 & 99.98 \\ \cline{2-10} & 3 & 100 & 100 & 86 & 82 & 91 & 88 & 99.80 & 99.98 \\ \cline{2-10} & 4 & 100 & 100 & 78 & 72 & 85 & 79 & 99.80 & 99.90 \\ \cline{2-10} & 5 & 100 & 100 & 72 & 60 & 75 & 63 & 99.50 & 99.85 \\ \cline{2-10} & 6 & 100 & 100 & 64 & 54 & 68 & 57 & 99.80 & 99.84 \\ \hline \multirow{5}{*}{15F} & 2 & 100 & 100 & 92 & 87 & 95 & 90 & 99.80 & 99.80 \\ \cline{2-10} & 3 & 100 & 100 & 84 & 80 & 85 & 81 & 99.90 & 99.60 \\ \cline{2-10} & 4 & 100 & 100 & 75 & 70 & 79 & 71 & 99.90 & 99.60 \\ \cline{2-10} & 5 & 100 & 100 & 70 & 
58 & 71 & 64 & 99.60 & 99.50 \\ \cline{2-10} & 6 & 100 & 100 & 63 & 53 & 66 & 55 & 99.50 & 99.40 \\ \hline \end{tabular} \label{mll1} \end{table*} We compare the ML performance with the HD\footnote{The successful Wi-Fi detection in HD for the LOS and NLOS scenarios is 100\%. Hence we have not included it in Fig.~\ref{exp}.}, ED and AC approaches using the NI USRP platform as shown in Fig.~\ref{exp}. Similarly, we evaluate the performance of HD by analyzing the Wi-Fi BSSID through a Wireshark capture. In the experiment, \mbox{Wi-Fi} APs are transmitting full buffer data, along with beacon and probe response frames, following the 802.11 CSMA specification. We performed different experiments at 6ft, 10ft and 15ft for the LOS and NLOS scenarios. Fig.~\ref{sample1} shows the performance of detection for the LOS and NLOS scenarios. With the ED and AC based approaches, the proposed detection algorithm achieves successful detection on average at 93\% and 95\%, respectively, for the LOS scenario; similarly, it achieves 80\% and 90\% for the NLOS scenario. In this work, we show that the ML approach can achieve a close to 100\% successful detection rate for both LOS and NLOS, and for the different distance scenarios (6ft, 10ft \& 15ft). We observe that the ML approach works close to the performance of HD. Table~\ref{mll1} shows the performance of detection for the fixed distance configuration setup. In this table, the number-of-Wi-Fis column represents the number of Wi-Fi APs deployed in the coexistence setup: the value 2 corresponds to distinguishing between 1 and 2 Wi-Fi APs, whereas 3 denotes distinguishing between 0, 1, or 2 Wi-Fi APs, and so on. In all cases the performance of ML is close to 100\%. \subsection{Successful Detection at Different Configurations} We verify how the detection works in different configurations. We placed more than two \mbox{Wi-Fi} APs on the same side of the LTE-U BS, unlike the above configuration (i.e., 6ft, 10ft and 15ft) where they were on opposite sides. 
\mbox{Wi-Fi} AP 1, \mbox{Wi-Fi} AP 2, \mbox{Wi-Fi} AP 3, \mbox{Wi-Fi} AP 4 and \mbox{Wi-Fi} AP 5 are placed at distances of 6, 10 and 15 feet from the LTE-U BS. We measured the performance of detection in the LOS and NLOS configurations. The goal in this section is to compare the detection performance of ML with that of HD, ED, and AC. Some of the possible cases are listed below. \begin{itemize} \item \textbf{Case A:} Only \mbox{Wi-Fi} AP 1 at 6 feet is ON. \item \textbf{Case B:} Only \mbox{Wi-Fi} AP 2 at 10 feet is ON. \item \textbf{Case C:} Only \mbox{Wi-Fi} AP 3 at 15 feet is ON. \item \textbf{Case D:} \mbox{Wi-Fi} AP 1 at 6 feet is ON and \mbox{Wi-Fi} AP 2 at 6 feet is ON. \item \textbf{Case E:} \mbox{Wi-Fi} AP 1 at 6 feet and \mbox{Wi-Fi} AP 3 at 15 feet are ON. \item \textbf{Case F:} \mbox{Wi-Fi} AP 1 at 10 feet and \mbox{Wi-Fi} AP 3 at 15 feet are ON. \item \textbf{Case G:} \mbox{Wi-Fi} AP 1 and \mbox{Wi-Fi} AP 2 at 6 feet are ON and \mbox{Wi-Fi} AP 3 at 15 feet is ON. \item \textbf{Case H:} \mbox{Wi-Fi} AP 1 at 6 feet is ON, \mbox{Wi-Fi} AP 2 at 10 feet is ON, and \mbox{Wi-Fi} AP 3 at 15 feet is ON. \item \textbf{Case I:} \mbox{Wi-Fi} AP 1 and \mbox{Wi-Fi} AP 2 at 6 feet are ON, \mbox{Wi-Fi} AP 3 at 10 feet is ON and \mbox{Wi-Fi} AP 4 at 15 feet is ON. \item \textbf{Case J:} \mbox{Wi-Fi} AP 1 at 6 feet is ON, \mbox{Wi-Fi} AP 2 and \mbox{Wi-Fi} AP 3 at 10 feet are ON and \mbox{Wi-Fi} AP 4 at 15 feet is ON. \item \textbf{Case K:} \mbox{Wi-Fi} AP 1 and \mbox{Wi-Fi} AP 2 at 6 feet are ON, \mbox{Wi-Fi} AP 3 and \mbox{Wi-Fi} AP 4 at 10 feet are ON and \mbox{Wi-Fi} AP 5 at 15 feet is ON. \item \textbf{Case L:} \mbox{Wi-Fi} AP 1 at 6 feet is ON, \mbox{Wi-Fi} AP 2, \mbox{Wi-Fi} AP 3 and \mbox{Wi-Fi} AP 4 at 10 feet are ON and \mbox{Wi-Fi} AP 5 at 15 feet is ON. 
\item \textbf{Case M:} \mbox{Wi-Fi} AP 1 at 6 feet is ON, \mbox{Wi-Fi} AP 2 at 10 feet is ON, and \mbox{Wi-Fi} AP 3, \mbox{Wi-Fi} AP 4 and \mbox{Wi-Fi} AP 5 at 15 feet are ON. \item \textbf{Case N:} \mbox{Wi-Fi} AP 1 and \mbox{Wi-Fi} AP 2 at 6 feet are ON, \mbox{Wi-Fi} AP 3 at 10 feet is ON, and \mbox{Wi-Fi} AP 4 and \mbox{Wi-Fi} AP 5 at 15 feet are ON. \end{itemize} The different configurations correspond to LTE-U coexisting with different numbers of Wi-Fi APs (from 1 to 5). Table~\ref{t1} shows better performance for ED and AC compared to Table~\ref{t12}. This is due to the smaller number of Wi-Fi APs deployed in Cases A to G compared to Cases H to N. Hence, the ED and AC methods can detect the number of Wi-Fi APs with close to 80\% accuracy for ED and up to 90\% for AC. As the number of Wi-Fi APs increases from 3 to 5 (\emph{i.e.,} Cases H to N), we observe substantial degradation in the ED performance (to 56\%) and the AC performance (to 63\%). Tables~\ref{t1} and~\ref{t12} show that there is no such degradation in the performance of ML as compared to ED and AC. Hence, we believe that the ML approach is the preferred method for an \mbox{LTE-U} BS in a dense environment to detect the number of \mbox{Wi-Fi} APs and scale back the duty cycle efficiently. 
\begin{table*} \centering \caption{Performance of detection for different configuration setup (from case A to G).} \begin{tabular}{|*{18}{c|}} \hline \multicolumn{1}{|c}{\cellcolor{Gray} CSAT Types} & \multicolumn{2}{|c}{\cellcolor{Gray} CASE A (\%)} & \multicolumn{2}{|c|}{\cellcolor{Gray} CASE B (\%)} & \multicolumn{2}{|c|}{\cellcolor{Gray} CASE C (\%)} & \multicolumn{2}{|c|}{\cellcolor{Gray} CASE D (\%)} & \multicolumn{2}{|c|}{\cellcolor{Gray} CASE E (\%)} & \multicolumn{2}{|c|}{\cellcolor{Gray} CASE F (\%)} & \multicolumn{2}{|c|}{\cellcolor{Gray} CASE G (\%)}\\ \hline & \cellcolor{Gray} LOS & \cellcolor{Gray} NLOS & \cellcolor{Gray} LOS & \cellcolor{Gray} NLOS & \cellcolor{Gray} LOS & \cellcolor{Gray} NLOS & \cellcolor{Gray} LOS & \cellcolor{Gray} NLOS & \cellcolor{Gray} LOS & \cellcolor{Gray} NLOS & \cellcolor{Gray} LOS & \cellcolor{Gray} NLOS & \cellcolor{Gray} LOS & \cellcolor{Gray} NLOS \\ \hline HD & 100 & 100 & 100 & 100 & 100 & 100 & 100 & 100 & 100 & 100 & 100 & 100 & 100 & 100\\ \hline ED & 91 & 82 & 90 & 79 & 85 & 78 & 82 & 77 & 80 & 74 & 81 & 72 & 80 & 69\\ \hline AC & 95 & 91 & 94 & 91 & 92 & 90 & 91 & 90 & 88 & 85 & 88 & 83 & 86 & 77\\ \hline ML & 98.80 & 97.96 & 99.94 & 99.37 & 99.96 & 97.74 & 99.46 & 97.80 & 99.21 & 99.14 & 99.32 & 99.10 & 99.56 & 98.44 \\ \hline \end{tabular} \label{t1} \end{table*} \begin{table*} \centering \caption{Performance of detection for different configuration setup (from case H to N).} \begin{tabular}{|*{18}{c|}} \hline \multicolumn{1}{|c}{\cellcolor{Gray} CSAT Types} & \multicolumn{2}{|c|}{\cellcolor{Gray} CASE H (\%)} & \multicolumn{2}{|c}{\cellcolor{Gray} CASE I (\%)} & \multicolumn{2}{|c}{\cellcolor{Gray} CASE J (\%)} & \multicolumn{2}{|c|}{\cellcolor{Gray} CASE K (\%)} & \multicolumn{2}{|c|}{\cellcolor{Gray} CASE L (\%)} & \multicolumn{2}{|c|}{\cellcolor{Gray} CASE M (\%)} & \multicolumn{2}{|c|}{\cellcolor{Gray} CASE N (\%)} \\ \hline & \cellcolor{Gray} LOS & \cellcolor{Gray} NLOS & \cellcolor{Gray} LOS & 
\cellcolor{Gray} NLOS & \cellcolor{Gray} LOS & \cellcolor{Gray} NLOS & \cellcolor{Gray} LOS & \cellcolor{Gray} NLOS & \cellcolor{Gray} LOS & \cellcolor{Gray} NLOS & \cellcolor{Gray} LOS & \cellcolor{Gray} NLOS & \cellcolor{Gray} LOS & \cellcolor{Gray} NLOS \\ \hline HD & 100 & 100 & 100 & 100 & 100 & 100 & 100 & 100 & 100 & 100 & 100 & 100 & 100 & 100 \\ \hline ED & 80 & 69 & 77 & 67 & 76 & 65 & 68 & 61 & 67 & 56 & 68 & 57 & 66 & 52\\ \hline AC & 84 & 74 & 79 & 68 & 77 & 67 & 75 & 65 & 74 & 64 & 72 & 63 & 71 & 59\\ \hline ML & 98.70 & 97.36 & 99.24 & 98.26 & 99.76 & 98.24 & 98.83 & 98.06 & 99.11 & 99.04 & 99.02 & 98.05 & 99.96 & 97.74 \\ \hline \end{tabular} \label{t12} \end{table*} \subsection{Additional Delay to Detect the \mbox{Wi-Fi} AP} To study the additional delay to detect a Wi-Fi AP, we consider a 5 Wi-Fi AP deployment scenario, where Wi-Fi AP 1 and Wi-Fi AP 2 at 6 feet are ON, Wi-Fi AP 3 and Wi-Fi AP 4 at 10 feet are ON, and Wi-Fi AP 5 at 15 feet is ON. We observe a large number of Wi-Fi packets on the air, and moreover the LTE-U ON cycle interference impacts the delay in Wi-Fi transmissions. In HD, the total time for the LTE-U BS to decode the BSSID is 1.4 seconds (i.e., \mbox{Wi-Fi} 1st BSSID beacon packet + LTE-U detects $K$ beacons + additional layer complexity + NI USRP RIO hardware processing time). In ED, the total time for the energy based CSAT algorithm to adapt the duty cycle from 50\% to 33\% is 5.9 seconds (i.e., \mbox{Wi-Fi} 1st beacon transmission time + LTE-U detects $K$ beacon (or) data packets time + NI USRP RIO hardware processing time), as shown in Table~\ref{t23}. In AC, the total time for the AC based CSAT algorithm to change the duty cycle from 50\% to 33\% is 4.8 seconds (i.e., \mbox{Wi-Fi} 1st L-STF packet frame + LTE-U detects L-STF frame time + NI USRP RIO hardware processing time). In ML, the total time for the CSAT algorithm to adapt the duty cycle from 50\% to 33\% is about 3.1 seconds. 
This approach is dependent on the chunk size (in this case set to 512). \begin{table} \caption{Additional delay to detect the \mbox{Wi-Fi} AP due to the NI hardware} \centering \begin{tabular}{|p{3.5cm}| p{2cm}| } \hline \cellcolor{Gray} \textbf{CSAT Types} & \cellcolor{Gray} \textbf{NI HW Delay} \\ \hline Header Decoding (HD) & 1.4 s \\ \hline Energy Detection (ED) & 5.9 s \\ \hline Auto-correlation (AC) & 4.8 s \\ \hline Machine Learning (ML) & 3.1 s \\ \hline \end{tabular} \label{t23} \end{table} \subsection{FFT compression} We test the FCN model using the FFT based convolutional layers with compression~\cite{dziedzic2019band}. The results are presented in Fig.~\ref{fig:wifi-fft}. We observe that for 2 and 3 classes the data is highly compressible and we can allow even up to 60\% compression with the test accuracy preserved above 99\%. As we increase the number of classes, the accuracy of the model gracefully degrades, and the 60\% compression rate allows us to retain a test accuracy of about 90\% for 5 classes. We do not observe a significant difference between the cases with 2 and 3 classes. For 2 classes, we have 1 or 2 \mbox{Wi-Fi} APs and for 3 classes, we distinguish between 0, 1, or 2 \mbox{Wi-Fi} APs. The signal with no \mbox{Wi-Fi} APs is very different, and hence easier to classify, than the signals with active \mbox{Wi-Fi} APs. \begin{figure}[htb!] \begin{center} \includegraphics[width=\linewidth]{wifi-fft/data32020-02-01-12-58-15-800876.pdf} \caption{Effect of FFT compression embedded into the convolutional layers of the FCN model on test accuracy. 
We use the Case D configuration for 2 classes and the same configuration with NLOS and 6 feet for the remaining classes.} \label{fig:wifi-fft} \end{center} \end{figure} \section{Conclusions and Future Work}\label{sec:conclusion} We have presented a comprehensive experimental study of different kinds of ML algorithms that could be used to address the problem of identifying the number of active Wi-Fi APs on the air, to aid in setting the LTE-U duty cycle appropriately. Additionally, we have compared the performance of the optimum ML algorithm to conventional methods using energy detection and auto-correlation detection and demonstrated superior performance in multiple configurations. We believe that this is the first result that demonstrates the feasibility of using ML on energy values in real time, instead of packet decoding \cite{chai2016lte}, to reliably distinguish between the presence of different numbers of \mbox{Wi-Fi} APs. Such a result can have applications beyond LTE-U duty-cycle adaptation, for example in better Wi-Fi frequency management. We aim to extend this work in the future by distinguishing between LTE-LAA BSs and \mbox{Wi-Fi} APs in the coexistence scenario between Wi-Fi, LTE-U and LTE-LAA, thus enabling even finer duty cycle adjustments of an LTE-U BS and improved coexistence with \mbox{Wi-Fi}. Also, we are interested in developing an ML framework that predicts the type of Wi-Fi traffic, \emph{i.e.,} voice, video, or data, which in turn can further ensure fair access to the unlicensed spectrum, since each traffic type requires different transmission opportunity times (TXOPs) and per-traffic fairness is more important than per-node (Wi-Fi AP) fairness. Similar concepts can also be applied to LTE-LAA/Wi-Fi coexistence deployments and future NR-U/Wi-Fi coexistence in the 6 GHz band. \section*{ACKNOWLEDGEMENT}\label{p4} This material is based on work supported by the National Science Foundation (NSF) under Grant No. CNS - 1618920. 
Adam Dziedzic is supported by the Center For Unstoppable Computing (CERES) at the University of Chicago. \bibliographystyle{unsrt}
\section{TESTS OF QCD FACTORIZATION} Weak decays of hadrons provide straight access to the parameters of the CKM matrix and thus to the study of CP violation. Gluon scattering in the final state, related to the confinement of quarks and gluons into hadrons, can modify the decay dynamics and so must be well understood. In the factorization model~\cite{ref:BauerStechWirbel,ref:NeubertPetrov}, the non-factorizable interactions in the final state by soft gluons are neglected. The matrix element in the effective weak Hamiltonian of the $B$ decay is then factorized into a product of independent hadronic currents. \subsection{Measurement of the branching fractions (BF) of the decays {\boldmath $B\rightarrow \chi_{c0}K^{*}$}~\cite{ref:Kchic0}} In the factorization model of the decay $b\rightarrow c\bar{c}s$, the charge conjugation invariance of the current-current operator forbids the hadronization of $c\bar{c}$ into $\chi_{c0}$. The branching fractions of the decays $B^0\rightarrow \chi_{c0}K^{*0}$ and $B^+\rightarrow \chi_{c0}K^{*+}$ are measured from exclusive reconstruction using a data sample of 454$\times 10^6\ B\bar{B}$ pairs, in units of $10^{-4}$: $BF(B^0\rightarrow \chi_{c0}K^{*0}) = 1.7 \pm 0.3 \pm 0.2$ and $BF(B^+\rightarrow \chi_{c0}K^{*+}) = 1.4 \pm 0.5 \pm 0.2$, where the first quoted errors are statistical and the second are systematic. The decay $B^0\rightarrow \chi_{c0}K^{*0}$ is observed with an 8.9 standard deviation (quoted as $\sigma$) significance and evidence is found for $B^+\rightarrow \chi_{c0}K^{*+}$ with a 3.6$\sigma$ significance. An upper limit is set: $BF(B^+\rightarrow \chi_{c0}K^{*+})<2.1$ at 90~$\%$ confidence level (quoted as $CL$). The $B^0\rightarrow \chi_{c0}K^{*0}$ BF does not agree with the zero value expected from factorization and is about half that of the favored mode $B^0\rightarrow \chi_{c1}K^{*0}$ ($(3.2\pm 0.6)\times 10^{-4}$~\cite{ref:PDG}). 
\subsection{Measurement of the BFs of the decays {\boldmath $B\rightarrow\chi_{c1,2}K^{(*)}$}~\cite{ref:Kchic12}} In the factorization model, no operators exist for the hadronization of $c\bar{c}$ into $\chi_{c2}$, while the hadronization to $\chi_{c1}$ is favored. The BFs of the decays $B\rightarrow \chi_{c1}K^{(*)}$ and $B\rightarrow \chi_{c2}K^{(*)} $ are measured from exclusive reconstruction using a data sample of 465$\times 10^6\ B\bar{B}$ pairs in units of $10^{-5}$: $BF(B^+\rightarrow\chi_{c1}K^+)=46 \pm 2 \pm 3$, $BF(B^0\rightarrow\chi_{c1}K^0)= 41 \pm 3 \pm 3$, $BF(B^+\rightarrow\chi_{c1}K^{*+})= 27\pm 5 \pm 4$, $BF(B^0\rightarrow\chi_{c1}K^{*0})= 25\pm 2 \pm 2$, $BF(B^+\rightarrow\chi_{c2}K^+)<1.8~@ 90~\%~CL$, $BF(B^0\rightarrow\chi_{c2}K^0)<2.8~@ 90~\%~CL$, $BF(B^+\rightarrow\chi_{c2}K^{*+})<12~@ 90~\%~CL$, and $BF(B^0\rightarrow\chi_{c2}K^{*0})= 6.4\pm 1.7 \pm 0.5$, where the first quoted errors are statistical and the second are systematic. The measured values of $BF(B^+\rightarrow\chi_{c1}K^+)$, $BF(B^0\rightarrow\chi_{c1}K^0)$, and $BF(B^+\rightarrow\chi_{c1}K^{*+})$ are the most precise to date. The upper limit on $BF(B^+\rightarrow\chi_{c2}K^+)$ is improved and evidence for the decay $B^0\rightarrow\chi_{c2}K^{*0}$ is seen for the first time. \subsection{Measurement of the BFs of the color-suppressed decays {\boldmath $\bar{B}^{0}\rightarrow D^{(*)0}h^{0}$}, \boldmath{$h^0=\pi^0,\ \eta,\ \omega,\ \eta'$}~\cite{ref:D0h0}} Previous measurements of the BFs of the color-suppressed decays $\bar{B}^{0}\rightarrow D^{(*)0}h^{0}$ invalidated the factorization model~\cite{ref:Babar2004,ref:Belle2005,ref:Belle2006}. However more precise measurements are needed to confirm that result and to constrain the different QCD models: SCET (Soft Collinear Effective Theory) and pQCD (perturbative QCD). The BFs are measured from exclusive reconstruction using a data sample of 454$\times 10^6\ B\bar{B}$ pairs, the measured values are given in the Table~\ref{tab:table2}. 
\begin{table*}[htb] {\footnotesize \caption{BFs of the decays $\bar{B}^{0}\rightarrow D^{(*)0}h^{0}$ measured in data. \label{tab:table2}} \begin{center} \begin{tabular}{|l|c|c|} \hline $\bar{B}^0$ mode & $(BF\pm\textrm{stat.}\pm\textrm{syst.})\times 10^{-4}$ & Signif. \\ \hline $D^0\pi^0$ & 2.78 $\pm$ 0.08 $\pm$ 0.20 & 35.5$\sigma$\\ $D^0\eta(\gamma\gamma)$ & 2.34 $\pm$ 0.11 $\pm$ 0.17& 26.1$\sigma$ \\ $D^0\eta(\pi\pi\pi^0)$ & 2.51 $\pm$ 0.16 $\pm$ 0.17& 20.3$\sigma$\\ $D^0\eta$ & 2.41 $\pm$ 0.09 $\pm$ 0.17 & - \\ $D^0\omega$ & 2.77 $\pm$ 0.13 $\pm$ 0.22& 29.4$\sigma$\\ $D^0\eta'(\pi\pi\eta(\gamma\gamma))$ & 1.29 $\pm$ 0.14 $\pm$ 0.09& 14.7$\sigma$\\ $D^0\eta'(\rho^0\gamma)$ & 1.95 $\pm$ 0.29 $\pm$ 0.30 & 7.2$\sigma$\\ $D^0\eta'$ & 1.38 $\pm$ 0.12 $\pm$ 0.22 & - \\ $D^{*0}\pi^0$ & 1.78 $\pm$ 0.13 $\pm$ 0.23 & 15.1$\sigma$\\ $D^{*0}\eta(\gamma\gamma)$ & 2.37 $\pm$ 0.15 $\pm$ 0.24 & 19.4$\sigma$ \\ $D^{*0}\eta(\pi\pi\pi^0)$ & 2.27 $\pm$ 0.23 $\pm$ 0.18& 12.2$\sigma$\\ $D^{*0}\eta$ & 2.32 $\pm$ 0.13 $\pm$ 0.22 & - \\ $D^{*0}\omega$ & 4.44 $\pm$ 0.23 $\pm$ 0.61 & 22.3$\sigma$\\ $D^{*0}\eta'(\pi\pi\eta)$ & 1.12 $\pm$ 0.26 $\pm$ 0.27 & 8.0$\sigma$\\ $D^{*0}(D^0\pi^0)\eta'(\rho^0\gamma)$ & 1.64 $\pm$ 0.53 $\pm$ 0.20 & 3.3$\sigma$\\ $D^{*0}\eta'$ & 1.29 $\pm$ 0.23 $\pm$ 0.23 & - \\ \hline \end{tabular} \end{center} } \end{table*} These results are consistent with the prediction by SCET: $BF(D^{*0}h^0)/BF(D^0h^0)\sim 1$ for $h^0\ne\omega$, but marginally consistent with the predictions by pQCD on the BFs. The measurements are 3 to 7 times higher than the predictions by the naive factorization model. \section{STUDY OF THE {\boldmath$B$}-MESON DECAYS TO {\boldmath $\eta_c K^{(*)},\ \eta_c(2S) K^{(*)},$} AND \boldmath{$h_c\gamma K^{(*)}$}~\cite{ref:etac_hc_etac2S}}\label{charmo} The $B$ decays to charmonium singlet states $h_c$ and $\eta_c$ are still poorly known. 
A better knowledge of the relative abundances of the various charmonium states allows a deeper understanding of the underlying strong processes. In the non-relativistic QCD model, the production rates of $\chi_{cJ}$ ($J=0,1,2$) and $h_c$ are predicted to be comparable in magnitude; however, $BF(B\rightarrow \chi_{c1}K)\sim 3\times 10^{-4}$ while $BF(B^+\rightarrow h_c K^+)<3.8\times 10^{-5}$. Similarly, no exclusive measurements of the BF of $\eta_c(2S)$ production have been performed. The knowledge of the mass parameters of the charmonium state $\eta_c$ is pivotal for the models of the $c\bar{c}$ spectrum, but the measurements available so far are in poor agreement with one another. The large uncertainties on $BF(\eta_c\rightarrow K\bar{K}\pi)$ and on $BF(\eta_c(2S)\rightarrow K\bar{K}\pi)$ are cancelled by measuring the ratios with respect to $BF(B^+\rightarrow \eta_c K^+)$ and $BF(B^+\rightarrow \eta_c(2S) K^+)$. The BFs of $h_c$ and $\eta_c$ production are measured using 384$\times 10^6\ B\bar{B}$ pairs (with $BF(B^+\rightarrow \eta_c K^+)=(9.1\pm 1.3)\times 10^{-4}$): $BF(B^0\rightarrow\eta_c K^{*0}) = ( 5.7 \pm 0.6 (\textrm{stat.}) \pm 0.4 (\textrm{syst.}) \pm 0.8 (\textrm{bf.}) )\times 10^{-4}$, $BF(B^+\rightarrow h_c K^+)\times BF(h_c\rightarrow\eta_c\gamma) < 4.8\times 10^{-5}~@ 90~\%~CL$, and $BF(B^0\rightarrow h_c K^{*0})\times BF(h_c\rightarrow\eta_c\gamma) <2.2\times 10^{-4}~@ 90~\%~CL$. The uncertainty noted bf. is related to the error on $BF(B^+\rightarrow \eta_c K^+)$. These are the first upper limits and confirm the $h_c$ suppression. Using $BF(B^+\rightarrow\eta_c(2S)K^+)=(3.4\pm 1.8)\times 10^{-4}$, the upper limit on the BF for $\eta_c(2S)$ production is $BF(B^0\rightarrow \eta_c(2S)K^{*0}) < 3.9 \times 10^{-4}~@90~\%\ CL$. 
Using $BF(B^+\rightarrow\eta_c K^+)\times BF(\eta_c\rightarrow K\bar{K}\pi)=(6.88\pm 0.77^{+0.55}_{-0.66})\times 10^{-4}$, the first measurement is reported for $BF(\eta_c(2S)\rightarrow K\bar{K}\pi) = (1.9 \pm 0.4\textrm{(stat.)} \pm 0.5\textrm{(syst.)} \pm 1.0\textrm{(bf.)}) $. Both the mean and width of the $\eta_c$ mass distribution are extracted: $m(\eta_c) = (2985.8 \pm 1.5\textrm{(stat.)} \pm 3.1\textrm{(syst.)})~\textrm{MeV}/c^2$, $\Gamma(\eta_c) = (36.3^{+3.7}_{-3.6}\textrm{(stat.)} \pm 4.4\textrm{(syst.)})~\textrm{MeV}$, which are in agreement with the previous $\mbox{\slshape B\kern-0.1em{\small A}\kern-0.1em B\kern-0.1em{\small A\kern-0.2em R}}$ measurements. \section{MEASUREMENT OF THE MASS DIFFERENCE {\boldmath $m(B^0)-m(B^+)$}~\cite{ref:BmassDiff}}\label{Bmassdiff} The measurement of the mass difference $\Delta m_B = m(B^0)-m(B^+)$ probes the Coulomb contributions to the quark structure, which affect the relative production rates of $\Upsilon(4S)\rightarrow B^0\bar{B}^0$ and $\Upsilon(4S)\rightarrow B^+B^-$. The decay modes $B^0\rightarrow J/\psi K^+\pi^-$ and $B^+\rightarrow J/\psi K^+$, with $J/\psi\rightarrow e^+e^-,\ \mu^+\mu^-$, are reconstructed exclusively using 230$\times 10^6\ B\bar{B}$ pairs. The mass difference $\Delta m_B$ is then computed as: \begin{equation} \Delta m_B = - \Delta p^* \times \frac{p^*(B^0)+p^*(B^+)}{(m(B^0)+m(B^+))\cdot c^2}, \end{equation} where $p^*$ is the momentum in the $\Upsilon(4S)$ rest frame. The measured value is $\Delta m_B = (0.33 \pm 0.05\textrm{(stat.)} \pm 0.03\textrm{(syst.)} )~\textrm{MeV}/c^2$, which excludes the null value at the 5$\sigma$ level.
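The formula above follows from the fact that both $B$ mesons carry the same energy $m_{\Upsilon(4S)}c^2/2$ in the $\Upsilon(4S)$ rest frame, so $\Delta(m^2)=-\Delta(p^{*2})$ holds exactly. The following short numerical sketch illustrates this (the mass values are illustrative assumptions, not the inputs of the measurement):

```python
import math

# Illustrative inputs (assumed, not the measured values of this analysis):
m_upsilon = 10579.4   # MeV, Upsilon(4S) mass
m_B0      = 5279.6    # MeV
m_Bp      = 5279.3    # MeV

# In the Upsilon(4S) rest frame each B has E* = m_Upsilon/2,
# hence p*^2 = (m_Upsilon/2)^2 - m_B^2 in c = 1 units.
def p_star(m_B):
    return math.sqrt((m_upsilon / 2.0) ** 2 - m_B ** 2)

p0, pp = p_star(m_B0), p_star(m_Bp)
dp = p0 - pp                                   # Delta p* = p*(B0) - p*(B+)

# Formula from the text: Delta m_B = -Delta p* (p0* + p+*)/(m(B0) + m(B+))
dm_from_formula = -dp * (p0 + pp) / (m_B0 + m_Bp)
dm_direct = m_B0 - m_Bp

print(dm_from_formula, dm_direct)              # the two agree exactly
```

The agreement is exact, not approximate, since $(m_0-m_+)(m_0+m_+)=m_0^2-m_+^2=p_+^{*2}-p_0^{*2}=-(p_0^*-p_+^*)(p_0^*+p_+^*)$.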
\section{MEASUREMENT OF THE BFs OF {\boldmath $B^0\rightarrow D_s^{(*)+}\pi^-$}, {\boldmath $B^0\rightarrow D_s^{(*)+}\rho^-$}, AND {\boldmath $B^0\rightarrow D_s^{(*)-}K^+$}~\cite{ref:sin2betaG}}\label{mesratior} The quantity $\sin(2\beta+\gamma)$, with the CKM parameters $\beta$ and $\gamma$, can be measured from the study of the time evolution of the doubly-Cabibbo- and CKM-suppressed decays $B^0\rightarrow D^{(*)-}\pi^+$ and $B^0\rightarrow D^{(*)-}\rho^+$. That study requires the knowledge of the ratios of the decay amplitudes $r(D^{(*)}\pi)= |A(B^0\rightarrow D^{(*)+}\pi^-)/A(B^0\rightarrow D^{(*)-}\pi^+)|$, which cannot be directly measured. Assuming $SU(3)$ flavor symmetry, $r(D^{(*)}\pi)$ can be related to the decay $B^0\rightarrow D_s^{(*)+}\pi^-$: \begin{equation}\label{eq:SU3} r(D^{(*)}\pi)=\tan(\theta_c)\frac{f_{D^{(*)}}}{f_{D_s^{(*)}}}\sqrt{\frac{BF(B^0\rightarrow D_s^{(*)+}\pi^-)}{BF(B^0\rightarrow D^{(*)-}\pi^+)}}, \end{equation} where $\theta_c$ is the Cabibbo angle, and $f_{D^{(*)}}/f_{D_s^{(*)}}$ is the ratio of the $D^{(*)}$ and $D_s^{(*)}$ meson decay constants. The contribution from W-exchange diagrams is evaluated from the study of $B^0\rightarrow D_s^{(*)-}K^+$, which proceeds through a W-exchange diagram only. Using 381$\times 10^6\ B\bar{B}$ pairs, the measured BFs are (in units of $10^{-5}$): $BF(D_s^+\pi^-) = 2.5 \pm 0.4 \pm 0.2$, $BF(D_s^{*+}\pi^-) = 2.6^{+0.5}_{-0.4} \pm 0.2$, $BF(D_s^+\rho^-) < 2.4~@ 90~\%~CL$, $BF(D_s^{*+}\rho^-) = 4.1^{+1.3}_{-1.2} \pm 0.4$, $BF(D_s^-K^+) = 2.9 \pm 0.4 \pm 0.2$, $BF(D_s^{*-}K^+) = 2.4 \pm 0.4 \pm 0.2$, $BF(D_s^-K^{*+}) = 3.5^{+1.0}_{-0.9} \pm 0.4$, and $BF(D_s^{*-}K^{*+}) = 3.2^{+1.4}_{-1.2} \pm 0.4$. The measured longitudinal fractions are: $f_L(D_s^{*+}\rho^-) = 0.84^{+0.26}_{-0.28} \pm 0.13$ and $f_L(D_s^{*-}K^{*+}) = 0.92^{+0.37}_{-0.31} \pm 0.07$.
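As an illustration of Eq.~(\ref{eq:SU3}), the sketch below evaluates $r(D\pi)$ from the measured $BF(B^0\rightarrow D_s^+\pi^-)$ above; the Cabibbo angle, the decay-constant ratio, and $BF(B^0\rightarrow D^-\pi^+)$ are external inputs that are assumed here purely for illustration:

```python
import math

# External inputs (assumed illustrative values, not from this analysis):
tan_theta_c  = 0.231          # tangent of the Cabibbo angle
f_D_over_fDs = 1.0 / 1.16     # f_D / f_Ds decay-constant ratio
bf_D_pi      = 2.68e-3        # BF(B0 -> D- pi+)

# Measured above:
bf_Ds_pi     = 2.5e-5         # BF(B0 -> Ds+ pi-)

# Eq. (SU3): r = tan(theta_c) * (f_D/f_Ds) * sqrt(BF(Ds pi)/BF(D pi))
r_D_pi = tan_theta_c * f_D_over_fDs * math.sqrt(bf_Ds_pi / bf_D_pi)
print(f"r(D pi) ~ {100 * r_D_pi:.2f} %")
```

With these inputs the result is of the same order as the $r(D^{(*)}\pi)\sim 1$-$2~\%$ values quoted in the text.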
The values of $r(D^{(*)}\pi)$ are computed with Equation~(\ref{eq:SU3}): $r(D\pi) = (1.78^{+0.14}_{-0.13} \pm 0.08 \pm 0.10 (\textrm{th.}))~\%$, $r(D^{*}\pi) = (1.81^{+0.16}_{-0.15} \pm 0.09 \pm 0.10 (\textrm{th.}))~\%$, $r(D\rho) = (0.71^{+0.29}_{-0.27} \pm 0.10 \pm 0.04 (\textrm{th.}))~\%$, and $r(D^{*}\rho) = (1.45^{+0.23}_{-0.22} \pm 0.12 \pm 0.08 (\textrm{th.}))~\%$. The first quoted errors are statistical and the second systematic. The errors denoted th.\ reflect the theoretical uncertainties. \section{MEASUREMENT OF THE BFs OF THE RARE DECAYS {\boldmath $B\rightarrow J/\psi\phi K$}~\cite{ref:JPsiPhiK}} Many charmonium-like resonances have been discovered recently, and more are expected in the $J/\psi\phi$ decay channel. The decays $B^0\rightarrow J/\psi\phi K^0$ and $B^+\rightarrow J/\psi\phi K^+$ were exclusively reconstructed using 433$\times 10^6\ B\bar{B}$ pairs. The measured BFs are: $BF(B^0\rightarrow J/\psi\phi K^0) = ( 5.40 \pm 1.20(\textrm{stat.}) \pm 0.40(\textrm{syst.}) )\times 10^{-5}$ and $BF(B^+\rightarrow J/\psi\phi K^+) = (5.81 \pm 0.73(\textrm{stat.}) \pm 0.29(\textrm{syst.}) )\times 10^{-5}$. The study of the $J/\psi\phi$ mass spectrum is ongoing. \section{STUDY OF BARYONIC \boldmath{$B$} DECAYS} Baryonic decays of $B$ mesons provide a laboratory for searches for excited charm baryon states and for the investigation of the dynamics of 3-body decays.
\subsection{Study of the decays {\boldmath $\bar{B}^0\rightarrow\Lambda_c^+\bar{p}$} and \boldmath{$B^-\rightarrow\Lambda_c^+\pi^-\bar{p}$}~\cite{ref:Lambdacppi}} The BFs of the decay channels $\bar{B}^0\rightarrow\Lambda_c^+\bar{p}$ and $B^-\rightarrow\Lambda_c^+\pi^-\bar{p}$ are measured from exclusive reconstruction using 383$\times 10^6\ B\bar{B}$ pairs: $BF(\bar{B}^0\rightarrow\Lambda_c^+\bar{p}) = (1.89 \pm 0.21(\textrm{stat.}) \pm 0.06(\textrm{syst.}) \pm 0.49(\textrm{bf.}))\times 10^{-5}$ and $BF(B^-\rightarrow\Lambda_c^+\pi^-\bar{p}) = (3.38 \pm 0.12(\textrm{stat.}) \pm 0.12(\textrm{syst.}) \pm 0.88(\textrm{bf.}))\times 10^{-4} $, where the error denoted bf.\ is related to the uncertainty on $BF(\Lambda_c^+\rightarrow p K^-\pi^+)$. One notices an enhancement of the 3-body channel by a factor of 15 with respect to the 2-body channel. An enhancement is seen in the Dalitz plot of $B^-\rightarrow\Lambda_c^+\pi^-\bar{p}$ at the threshold of the phase space in $m^2(\Lambda_c\bar{p})$. Such a threshold enhancement has been seen in other baryon-antibaryon decay modes and is thus expected to be a dynamical effect rather than a resonance. Three resonances are investigated in the $\Lambda_c\pi$ mass spectrum: $\Sigma_c(2455)^0$, $\Sigma_c(2520)^0$ and $\Sigma_c(2800)^0$. The measured relative BFs are: $BF(B^-\rightarrow\Sigma_c(2455)^0\bar{p})/BF(B^-\rightarrow\Lambda_c^+\pi^-\bar{p}) = (12.3 \pm 1.2(\textrm{stat.}) \pm 0.8(\textrm{syst.}))\times 10^{-2}$, $BF(B^-\rightarrow\Sigma_c(2800)^0\bar{p})/BF(B^-\rightarrow\Lambda_c^+\pi^-\bar{p}) = (11.7 \pm 2.3(\textrm{stat.}) \pm 2.4(\textrm{syst.}))\times 10^{-2}$, and $BF(B^-\rightarrow\Sigma_c(2520)^0\bar{p})/BF(B^-\rightarrow\Lambda_c^+\pi^-\bar{p}) < 0.9\times 10^{-2}~@ 90~\%~CL$. No signal is seen for $\Sigma_c(2520)^0$.
The parameters of the mass distributions of these resonances are extracted: $m(\Sigma_c(2455)^0) = 2454.0 \pm 0.2~\textrm{MeV}/c^2$, $\Gamma(\Sigma_c(2455)^0) = 2.6\pm 0.5~\textrm{MeV}$, $m(\Sigma_c(2800)^0) = 2846.0 \pm 8.0~\textrm{MeV}/c^2$, and $\Gamma(\Sigma_c(2800)^0) = 86^{+33}_{-22}~\textrm{MeV}$. The measured mass of the $\Sigma_c(2800)^0$ is 3$\sigma$ higher than that of the resonance seen by Belle~\cite{ref:BelleSigma2800}, which may indicate a new $J=1/2$ state. The angular distribution of $B^-\rightarrow\Sigma_c(2455)^0\bar{p}$ is consistent with spin $J=1/2$ for the $\Sigma_c(2455)^0$, and the hypothesis $J=3/2$ is rejected at the $> 4\sigma$ level. \subsection{Study of the decay {\boldmath$\bar{B}^0\rightarrow\Lambda_c^+\pi^0\bar{p}$}} This channel is the isospin counterpart of $B^-\rightarrow\Lambda_c^+\pi^-\bar{p}$ and has never been observed before. The BF is measured in the restricted phase space $m(\Lambda_c^+\pi^0)>3.0~\textrm{GeV}/c^2$ and therefore does not include contributions from the $\Sigma_c(2455,2520,2800)^0$ resonances. An enhancement, similar to the one seen in $B^-\rightarrow\Lambda_c^+\pi^-\bar{p}$, is observed at the threshold of the phase space in the $\Lambda_c^+\bar{p}$ mass spectrum. Using 467$\times 10^6\ B\bar{B}$ pairs, the measured BF is: $BF(\bar{B}^0\rightarrow\Lambda_c^+\pi^0\bar{p}) = (1.61 \pm 0.26(\textrm{stat.}) \pm 0.13(\textrm{syst.}) \pm 0.42(\textrm{bf.}))\times 10^{-4}$, where the error denoted bf.\ is related to the uncertainty on $BF(\Lambda_c^+\rightarrow p K^-\pi^+)$.
\section{Introduction} Despite the substantial effort toward quantizing gravity in 4 dimensions, this issue is still open. One of the best candidates to date is superstring theory, formulated in 10 dimensions. A way from superstring theory to 4-dimensional quantum gravity or to the standard model of particle physics (or its minimal supersymmetric extension) is, at best, highly non-unique. Many techniques of compactification and flux stabilization, along with specific model-building brane configurations and dualities, have been worked out toward this end over the years. Possibly some important data of a fundamental character are still missing. The point of view advocated in this paper is that we have so far not respected the 4-dimensional phenomenon of different smoothings of Euclidean $\mathbb{R}^{4}$, which presumably is very important for the program of QG. There is strong evidence that exotic 4-smoothness on compact manifolds should be taken into account by any QG theory \cite{Asselmeyer-Maluga2010}. Here we refer to open 4-manifolds and try to consider exotic $\mathbb{R}^{4}$'s as a link between higher-dimensional superstring theory, 4-dimensional ``physical'' theories, and 4-dimensional QG. String theory would then describe directly 4-dimensional structures at the fundamental level. This paper serves as a step toward seeing exotic smooth $\mathbb{R}^{4}$'s as fundamental objects underlying higher-dimensional (super)string theory. Further results regarding compactification, realistic 4-dimensional models of various brane configurations in string theory, and their relation to exotic 4-smoothings will be presented separately. The problem with a successful inclusion of the effects of open 4-exotics into any physical theory is the notorious lack of an explicit coordinate-like description of these smooth manifolds.
In a series of recent papers we addressed this issue and worked out techniques allowing for an analytical treatment of small exotic $\mathbb{R}^{4}$'s \cite{AsselmeyerKrol2009,AsselmeyerKrol2009a,AsselmeyerKrol2010,Krol2010}. In this paper we show that the description of D-branes in some exact string backgrounds is related to the 4-smoothness of $\mathbb{R}^{4}$. Moreover, the deep quantum regime of the D-branes is also sensitive to 4-exotics. However, the connection of abstract, generalized quantum D-branes to the actual superstring-theory D-branes (in the manifold limit) is not directly given. The Witten limit of superstring theory, where D-branes yield their noncommutative world-volumes, is only the midway point and in fact motivates the full $C^{\star}$-algebra approach \cite{Szabo2008a}. The latter serves as a possible partial solution to the problem of describing quantum D-branes in superstring theory. The connection with exotic $\mathbb{R}^{4}$ at this quantum level is unexpected and shows that 4-dimensionality may enter the game in string theory through the back door of the nonperturbative quantum regime. In the last section of the paper we use the $C^{*}$-algebra approach to quantum D-branes to construct a manifold model of a quantum D-brane as a wild embedding. Then we show that the $C^{*}$-algebra of the wild embedding is isomorphic to the $C^{*}$-algebra of the quantum D-brane. Furthermore, we construct a quantum version of an action using cyclic cohomology and obtain the right limit to the classical D-brane described by the Born-Infeld action. The basic technical ingredient of the analysis of small exotic $\mathbb{R}^{4}$'s, enabling many applications also in string theory, is the relation between (small) exotic $\mathbb{R}^{4}$'s and non-cobordant codimension-1 foliations of $S^{3}$, as well as gropes and wild embeddings, as shown in \cite{AsselmeyerKrol2009}.
The foliations are classified by the Godbillon-Vey class, an element of the cohomology group $H^{3}(S^{3},\mathbb{R})$. By using $S^{1}$-gerbes it was possible to interpret the integral elements of $H^{3}(S^{3},\mathbb{Z})$ as characteristic classes of an $S^{1}$-gerbe over $S^{3}$ \cite{AsselmeyerKrol2009a}. The main line of the topological argumentation can be briefly described as follows: \begin{enumerate} \item In Bizaca's exotic $\mathbb{R}^{4}$ one starts with the neighborhood $N(A)$ of the Akbulut cork $A$ in the K3 surface $M$. The exotic $\mathbb{R}^{4}$ is the interior of $N(A)$. \item This neighborhood $N(A)$ decomposes into $A$ and a Casson handle representing the non-trivial involution of the cork. \item From the Casson handle we construct a grope containing Alexander's horned sphere. \item Akbulut's construction gives a non-trivial involution, i.e. the double of that construction is the identity map. \item From the grope we get a polygon in the hyperbolic space $\mathbb{H}^{2}$. \item This polygon defines a codimension-1 foliation of the 3-sphere inside of the exotic $\mathbb{R}^{4}$ with a wildly embedded 2-sphere, Alexander's horned sphere. \item Finally we get a relation between codimension-1 foliations of the 3-sphere and exotic $\mathbb{R}^{4}$. \end{enumerate} This relation is very strict, i.e. if we change the Casson handle then we must change the polygon. But that changes the foliation, and vice versa.
Finally we obtained the result:\\ \emph{The exotic $\mathbb{R}^{4}$ (of Bizaca) is determined by the codimension-1 foliations with non-vanishing Godbillon-Vey class in $H^{3}(S^{3},\mathbb{R})$ of a 3-sphere seen as submanifold $S^{3}\subset\mathbb{R}^{4}$.} \section{Geometry of string backgrounds and exotic $\mathbb{R}^{4}$} In this section we take the point of view that the exotic smoothness of some small exotic $\mathbb{R}^{4}$'s, when localized on $S^{3}\subset\mathbb{R}^{4}$, corresponds to stringy geometry given by so-called $B$-fields on $S^{3}$. The localization is understood as the representation of the exotics by third integral or real cohomologies of $S^{3}$. This correspondence in fact takes place for the classical limit of the geometry of string backgrounds, i.e. a curved Riemannian manifold with $B$-field. One can say that an exotic smooth $\mathbb{R}^{4}$ localized on $S^{3}$ is described by the stringy geometry of $B$-fields on this $S^{3}$. The correspondence can be extended to the string regime of finite volume of the $SU(2)$ WZW model. \subsection{$SU(2)$ WZW model, D-branes and exotic $\mathbb{R}^{4}$} We want to focus on changing the smoothness of $\mathbb{R}^{4}$ and consider the changes as localized on $S^{3}$. As follows from \cite{AsselmeyerKrol2009,AsselmeyerKrol2009a}, this gives rise to stringy effects, since the changes can be described by computations in some 2D CFT, namely the WZW model on $SU(2)$ at finite level. First we are going to discuss the bosonic $SU(2)$ WZW model and the dynamics of branes in it. We deal here with $S^{3}$, hence a nonzero metric of the string background. In general, a non-vanishing curvature $R(g)$, where $g$ is a non-constant metric, of the background manifold $(M,g)$ on which bosonic string theory is formulated enforces that the $H$-field on $M$ cannot vanish. This is because the string field equations give rise to (see e.g.
\cite{Schomerus2002}) \begin{equation} R_{\mu\nu}(g)-\frac{1}{4}H_{\mu\rho\sigma}{H_{\nu}}^{\rho\sigma}={\cal O(\alpha')}\label{eq:R-H}\end{equation} where $H=dB$ is the NSNS 3-form, $B=B_{\mu\nu}(x)dx^{\mu}\wedge dx^{\nu}$ is the $B$-field, and the dilaton is fixed to be constant. Also in the case of superstring theory this equation still holds true provided all RR background fields vanish \cite{Schomerus2002}. D-branes in the group manifold $SU(2)$ (in the semi-classical limit) wrap the conjugacy classes of $SU(2)$, which are 2-spheres $S^{2}$ plus two points (the poles), seen as degenerate 2-spheres. Due to the quantization conditions there are $k+1$ D-branes in the level-$k$ $SU(2)$ WZW model \cite{Schomerus2000,Schomerus2002,Alekseev1999}. To grasp the dynamics of the branes one should deal with the gauge theory on a stack of $N$ D-branes on $S^{3}$, which is quite similar to the flat-space case where a noncommutative gauge theory emerges \cite{Alekseev1999b}. For $N$ branes of type $J$ on top of each other, where $J$ is a representation of $SU(2)_{k}$, i.e. $J=0,\frac{1}{2},1,\,...\,,\frac{k}{2}$, the dynamics of the branes is described by the noncommutative action: \begin{equation} S_{N,J}=S_{YM}+S_{CS}=\frac{\pi^{2}}{k^{2}(2J+1)N}\left(\frac{1}{4}{\rm tr}(F_{\mu\nu}F^{\mu\nu})-\frac{i}{2}{\rm tr}(f^{\mu\nu\rho}{\rm CS}_{\mu\nu\rho})\right)\:.\label{eq:NoncommAction}\end{equation} Here the curvature form is $F_{\mu\nu}(A)=iL_{\mu}A_{\nu}-iL_{\nu}A_{\mu}+i[A_{\mu},A_{\nu}]+f_{\mu\nu\rho}A^{\rho}$ and the noncommutative Chern-Simons action reads ${\rm CS}_{\mu\nu\rho}(A)=L_{\mu}A_{\nu}A_{\rho}+\frac{1}{3}A_{\mu}[A_{\nu},A_{\rho}]$. The fields $A_{\mu},\,\mu=1,2,3$, are defined on the fuzzy 2-sphere $S_{J}^{2}$ and should be considered as $N\times N$ matrix-valued, i.e. $A_{\mu}=\sum_{j,a}{\rm a}_{j,a}^{\mu}Y_{a}^{j}$ where $Y_{a}^{j}$ are fuzzy spherical harmonics and ${\rm a}_{j,a}^{\mu}$ are Chan-Paton matrix-valued coefficients.
$L_{\mu}$ are the generators of rotations on the fuzzy 2-sphere and act only on the fuzzy spherical harmonics \cite{Schomerus2002}. The noncommutative action $S_{YM}$ was derived from Connes' spectral triples of noncommutative geometry and was aimed at describing Maxwell theory on fuzzy spheres \cite{Watamura2000}. One can solve the equations of motion derived from the stationary points of (\ref{eq:NoncommAction}); the solutions describe the dynamics of the branes, i.e. the condensation processes on the brane configuration $(N,J)$ which result in another configuration $(N',J')$. Namely, the equations of motion derived from (\ref{eq:NoncommAction}) read: \begin{equation} L_{\mu}F^{\mu\nu}+[A_{\mu},F^{\mu\nu}]=0\label{eq:eof}\end{equation} A class of solutions of (\ref{eq:eof}), in the semi-classical $k\to\infty$ limit, can be obtained from the $N(2J+1)$-dimensional representations of the algebra ${\rm su}(2)$. For $J=0$ one has $N$ branes of type $J=0$, i.e. $N$ point-like branes in $S^{3}$ at the identity of the group. Given another solution, corresponding to $J_{N}=\frac{N-1}{2}$, one can show that it corresponds to the brane wrapping the $S_{J_{N}}^{2}$ sphere and is obtained as the condensed state of $N$ point-like branes at the identity of $SU(2)$ \cite{Schomerus2002}: \begin{equation} (N,J)=(N,0)\to(1,\frac{N-1}{2})=(N',J')\label{eq:semi-class:N-to-N1}\end{equation} Turning to the finite-$k$ stringy regime of the $SU(2)$ WZW model, one can make use of the techniques of boundary CFT as applied to the analysis of the Kondo effect \cite{Schomerus2002}. It follows that there exists a continuous shift at the level of the partition function, between $N\chi_{j}(q)$ and the interfered sum of characters $\sum_{j}N_{J_{N}j}^{\; l}\chi_{l}(q)$, where $N=2J_{N}+1$ (at vanishing value of the coupling constant) and $N_{J_{N}j}^{\; l}$ are the Verlinde fusion rule coefficients.
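The $N(2J+1)$-dimensional ${\rm su}(2)$ representations entering these solutions can be made concrete. The following sketch (purely illustrative, not taken from the cited works) builds the spin-$J$ generators and checks the commutation relation and the Casimir that characterize a fuzzy 2-sphere:

```python
import numpy as np

def su2_generators(spin):
    """(Lx, Ly, Lz) in the (2*spin+1)-dimensional spin representation."""
    dim = int(round(2 * spin)) + 1
    m = np.array([spin - k for k in range(dim)], dtype=float)  # Jz eigenvalues
    Lz = np.diag(m).astype(complex)
    Lplus = np.zeros((dim, dim), dtype=complex)
    for k in range(1, dim):                    # <m+1| J+ |m> matrix elements
        Lplus[k - 1, k] = np.sqrt(spin * (spin + 1) - m[k] * (m[k] + 1))
    Lx = (Lplus + Lplus.conj().T) / 2
    Ly = (Lplus - Lplus.conj().T) / (2 * 1j)
    return Lx, Ly, Lz

Lx, Ly, Lz = su2_generators(1.0)               # J = 1, a 3x3 block
comm = Lx @ Ly - Ly @ Lx                       # should equal i * Lz
casimir = Lx @ Lx + Ly @ Ly + Lz @ Lz          # should be J(J+1) * identity
print(np.allclose(comm, 1j * Lz), np.allclose(casimir, 2 * np.eye(3)))
# True True
```

The Casimir being proportional to the identity is what fixes the radius of the fuzzy sphere $S_{J}^{2}$ on which the fields $A_{\mu}$ live.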
In the case of $N$ point-like branes one can determine their decay product by considering open strings ending on the branes. The resulting partition function is \[ Z_{(N,0)}(q)=N^{2}\chi_{0}(q)\] which is continuously shifted to $N\chi_{J_{N}}(q)$ and next to $\sum_{j}N_{J_{N}J_{N}}^{\;\; j}\chi_{j}(q)$. As a result we have the decay process \cite{Schomerus2002} \begin{equation}\label{eq:stringN-to-N1} \begin{array}{c} Z_{(N,0)}(q)\to Z_{(1,J_{N})}\\[4pt] (N,0)\to(1,J_{N}) \end{array} \end{equation}which extends the similar process derived in the semi-classical $k\to\infty$ limit in the effective gauge theory (\ref{eq:semi-class:N-to-N1}); however, the representations $2J_{N}$ are now bounded from above by $k$. Given the above dynamics of branes in the $SU(2)$ WZW model in the stringy regime, one can address the question of brane charges in a direct way. This is based on the decay rule (\ref{eq:stringN-to-N1}) in the supersymmetric $SU(2)$ WZW model. In this case we have a shift of the level, namely $k\to k+2$, which measures the units of the NSNS flux through $SU(2)=S^{3}$. One can see the supersymmetric model as strings moving on $SU(2)$ with $k+2$ units of NSNS flux. From the CFT point of view there exist currents $J^{a}$ which satisfy the level-$(k+2)$ Kac-Moody algebra, and free fermionic fields $\psi^{a}$ in the adjoint representation of $su(2)$. However, it is possible to redefine the bosonic currents as \[ J^{a}+\frac{i}{k}f_{\: bc}^{a}\psi^{b}\psi^{c}\] which fulfill the current algebra commutation relations at level $k$. Here $f_{\: bc}^{a}$ are the structure constants of $su(2)$. The fields $\psi^{a}$ commute with these currents; thus we have the splitting of the supersymmetric $SU(2)$ WZW model at level $k+2$ into the $SU(2)$ WZW model at level $k$ times a theory of free fermionic fields. Thus there are $k+1$ stable branes wrapping the conjugacy classes numbered by $J=0,\frac{1}{2},...,\frac{k}{2}$.
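The truncated fusion rules behind these character sums can be stated explicitly: in $SU(2)_{k}$ the fusion $j_{1}\times j_{2}$ contains the spins $|j_{1}-j_{2}|\le j\le\min(j_{1}+j_{2},\,k-j_{1}-j_{2})$. A small illustrative sketch (a hypothetical helper, not code from the cited references), working with doubled spins so all arithmetic stays integer:

```python
def fusion(a, b, k):
    """Doubled spins c = 2j appearing in the SU(2)_k fusion (a/2) x (b/2):
       |a-b| <= c <= min(a+b, 2k-a-b), in steps of 2."""
    return [c for c in range(abs(a - b), min(a + b, 2 * k - a - b) + 1, 2)]

k = 4
# N = 3 point-like branes condense to the brane J_N = 1 (doubled spin 2);
# the open-string spectrum then involves the characters chi_j with
print(fusion(2, 2, k))        # [0, 2, 4]  -> j = 0, 1, 2
# At small level the spectrum truncates: for k = 2 only j = 0 survives.
print(fusion(2, 2, 2))        # [0]
```

The truncation at $k-j_{1}-j_{2}$ is what distinguishes the level-$k$ fusion from ordinary ${\rm su}(2)$ tensor products and ultimately produces the finite charge group discussed next.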
The decay process (\ref{eq:stringN-to-N1}) says that $N$ point-like branes (each carrying unit charge) placed at the pole $e$ can decay to the spherical brane $J_{N}$ wrapping the conjugacy class. Adding more point-like branes to the stack at $e$ gives more distant $S^{2}$ branes, until one reaches the opposite pole $-e$, where we have a single point-like brane with the opposite charge $-1$. Having identified $k+1$ units of charge with $-1$, we arrive at the conclusion that the group of charges is $\mathbb{Z}_{k+2}$. More generally, the charges of branes on a background $X$ with non-vanishing $H\in H^{3}(X,\mathbb{Z})$ are described by the twisted $K$-group $K_{H}^{\star}(X)$ (see e.g. \cite{MathaiMurray2001}). In the case of $SU(2)$ we get the group of RR charges as above for $K=k+2$ \begin{equation} K_{H}^{\star}(S^{3})=\mathbb{Z}_{K}\end{equation} Based on \cite{AsselmeyerKrol2009}, the following important observation is in order: \emph{certain small exotic $\mathbb{R}^{4}$'s generate the group of RR charges of D-branes in the curved background of $S^{3}\subset\mathbb{R}^{4}$.} This observation is based on the integral classes $H\in H^{3}(S^{3},\mathbb{Z})$ from which one can construct the exotic $\mathbb{R}_{H}^{4}$ corresponding to the codimension-1 foliation of $S^{3}$ (determined by the class $H$). In \cite{AsselmeyerKrol2009} we showed that the twisted K-theory of $S^{3}$ by the class $H\in H^{3}(S^{3},\mathbb{Z})$ can be seen as the effect of the exotic smoothness $\mathbb{R}_{H}^{4}$ on the ambient 4-space, when $S^{3}$ is understood as part of the boundary of the Akbulut cork of $\mathbb{R}_{H}^{4}$.
Thus we arrive at the correspondence: \begin{theorem} The classification of RR charges of the branes on the group manifold background $SU(2)$ at level $k$, hence the dynamics of D-branes in $S^{3}$ in the stringy regime, is correlated with the exotic smoothness on $\mathbb{R}^{4}$ containing this $S^{3}=SU(2)$ as part of the boundary of the Akbulut cork. \end{theorem} We can give yet another interpretation of the 4-exoticness which appears on flat $\mathbb{R}^{4}$ in this context. An exotic smooth structure $\mathbb{R}_{H}^{4}$ determines the collection of stable D-branes on $SU(2)$ at level $k$ of the WZW model, where $k=[H]\in H^{3}(S^{3},\mathbb{Z})$. Thus the stringy, finite-$k$ level of the WZW model characterizes exotic 4-smoothness. Recall that in the case of $H=0$ (e.g. $B$ constant in a flat space, i.e. in the $k\to\infty$ limit) the smooth structure on $\mathbb{R}^{4}$ is the standard one \cite{AsselmeyerKrol2009}. Thus the exotic smoothness on $\mathbb{R}^{4}$ translates the 4-curvature into the non-zero $H$-field on an $S^{3}$ of finite volume in string units. This is similar to the effect of the string field equations relating $R$ and $H$ as in (\ref{eq:R-H}), though it now holds between different spaces ($\mathbb{R}^{4}$ and $S^{3}$). \subsection{$SU(2)$ WZW model in the geometry of the stack of NS5-branes} The group manifold $SU(2)=S^{3}$ is the only manifold which has become relevant so far for the description of small exotic $\mathbb{R}^{4}$'s. On the other hand, it is the only one which appears directly as part of a string background (namely one generated by NS5-branes). The reason is the connection of 4-exotics and string theory, as it can be naturally formulated in the geometry of the stack of NS5-branes. Let us briefly describe this string background \cite{Schomerus2000,Schomerus2002,Bachas2000}. We consider a configuration of $k$ coincident supersymmetric NS5-branes in type II theory.
The full fivebrane background is (in the string frame) \begin{equation}\label{eq:NS5-background} \begin{array}{c} ds^{2}=dx^{2}+f(r)dy^{2}\\[4pt] e^{2\phi}=g_{s}^{2}f(r)\\[4pt] f(r)=1+\frac{k\alpha'}{r^{2}} \\[4pt] H_{IJK}=k\alpha'\epsilon_{IJK} \end{array} \end{equation}where $x$ are the $5+1$ longitudinal coordinates along the NS5-branes, referred to by indices $\mu$, $\nu$, etc., $y$ are the 4 transverse coordinates, referred to by indices $I$, $J$, $K$, ..., $r=|y|$, and $1/\alpha'\sim$ string tension. The fields of this background read \begin{equation} \begin{array}{c} e^{2\Phi}=1+\sum_{j=1}^{k}\frac{l_{s}^{2}}{|y-y_{j}|^2}\\[4pt] g_{IJ}=e^{2\Phi}\delta_{IJ}\\[4pt] g_{\mu\nu}=\eta_{\mu\nu} \\[4pt] H_{IJK}=-\epsilon_{IJKL}\partial^{L}\Phi \end{array} \end{equation}where $y_{j},\ j=1,...,k$, are the positions of the NS5-branes. When the branes coincide at $0$, $y_{j}=0$, the near-horizon solution ($y\to0$) is \begin{equation} \begin{array}{c} e^{2\Phi}=\frac{kl_{s}^{2}}{|y|^{2}}\\[4pt] g_{IJ}=e^{2\Phi}\delta_{IJ}\\[4pt] g_{\mu\nu}=\eta_{\mu\nu} \\[4pt] H_{IJK}=-\epsilon_{IJKL}\partial^{L}\Phi \end{array} \end{equation} In the near-horizon limit $r=|y|^{2}\to0$, the background factorizes into a radial component, an $S^{3}$, and flat 6-dimensional Minkowski spacetime. Strings propagating in this limiting background are described by an exact world-sheet CFT with target $\mathbb{R}^{5,1}\times\mathbb{R}_{\phi}\times S_{k}^{3}$. Here $\mathbb{R}_{\phi}$ is the real line with parameter $\phi$, a scalar corresponding to the ``linear dilaton'' \begin{equation} \begin{array}{c} \Phi=-\sqrt{\frac{1}{2k}}\phi\\[4pt] \phi=\sqrt{\frac{k}{2}}\log\frac{r}{kl_{s}^{2}} \end{array} \end{equation} The flat Minkowski space $\mathbb{R}^{5,1}$ is longitudinal to the directions of the NS5-branes; $S_{k}^{3}$ is $SU(2)_{k}$ and carries a level-$k$ supersymmetric WZW CFT (SCFT) on $SU(2)$, as discussed in the previous subsection.
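The near-horizon limit can be illustrated numerically: for coincident branes the harmonic function $e^{2\Phi}=1+kl_{s}^{2}/r^{2}$ is dominated by its $1/r^{2}$ term as $r\to0$. A short sketch, with toy values of $k$ and $l_{s}$ chosen only for illustration:

```python
# Toy parameters (assumed for illustration only):
k, l_s = 5, 1.0

def e2phi_full(r):
    """Full dilaton profile for k branes coincident at the origin."""
    return 1.0 + k * l_s**2 / r**2

def e2phi_near(r):
    """Near-horizon (throat) form: the constant term dropped."""
    return k * l_s**2 / r**2

for r in (1.0, 0.1, 0.01):
    ratio = e2phi_full(r) / e2phi_near(r)
    print(r, ratio)        # ratio -> 1 as r -> 0
```

Far from the branes the constant term dominates instead, reproducing the asymptotically flat region of the background.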
This $S^{3}$ corresponds to the angular coordinates of the transversal $\mathbb{R}^{4}$. We see that an infinite geometrical ``throat'' $\mathbb{R}_{\phi}\times S_{k}^{3}$ emerges. The metric of the background (in the string frame) thus reads \[ ds^{2}=dx_{6}^{2}+d\phi^{2}+kl_{s}d\Omega_{3}^{2}\,,\: g_{s}^{2}(\phi)=e^{-2\phi/\sqrt{k}l_{s}}\:.\] This background is obtained in the near-horizon, $\phi\to-\infty$ ($r\to0$), geometry of the stack of $k-2$ NS5-branes in type II string theory and is in fact a SCFT on the throat. The NS5-branes are placed at $\phi\to-\infty$, and string theory is strongly coupled there, $g_{s}\sim\exp(2\Phi)$. The opposite limit, $\phi\to+\infty$ or $r\to+\infty$, gives an asymptotically flat 10-space, and string theory is weakly coupled in that limit. This is essentially the CHS (Callan, Harvey, Strominger \cite{CHS1991}) exact string theory background, where the $SU(2)$ WZW model appears at a suitable level $k$. Given the CHS limiting geometry of $N$ NS5-branes, we have the 4-dimensional tube $\mathbb{R}_{\phi}\times S^{3}$. The volume of $S^{3}$ in string units is finite and correlated with the number of NS5-branes by $N=k-2$ \cite{Bachas2000}. We take an exotic $\mathbb{R}_{H}^{4}$ for $[H]=k\in H^{3}(S^{3},\mathbb{Z})$. This can be achieved more directly by considering the Akbulut cork $A_{H}$ with boundary $\partial A_{H}=\Sigma_{H}$, a homology 3-sphere. As was shown in \cite{AsselmeyerKrol2009}, $\Sigma_{H}$ contains an $S^{3}$ such that the codimension-1 foliations of it generate the foliations of $\Sigma_{H}$. The foliations in turn are generated by Casson handles attached to $A$. Thus the attached Akbulut cork and Casson handle(s) determine the small exotic smoothness of $\mathbb{R}_{H}^{4}$ \cite{GomSti:1999,AsselmeyerKrol2009}. Moreover, the cobordism classes of codimension-1 foliations of $S^{3}$ are classified by the Godbillon-Vey invariants, which are elements of $H^{3}(S^{3},\mathbb{R})$.
In our case we deal with the integral third cohomologies $[H]\in H^{3}(S^{3},\mathbb{Z})$. Thus a way of embedding the Akbulut cork, for some class of exotic $\mathbb{R}^{4}$'s, in the ambient $\mathbb{R}^{4}$ is determined by the integral classes $k\in H^{3}(S^{3},\mathbb{Z})$. Taking the above $S^{3}$ from the boundary of the Akbulut cork, as $S^{3}=SU(2)$ in the string background of $N$ NS5-branes, we arrive at the following result: \begin{theorem}\label{Th:Ns5Branes} In the geometry of the stack of NS5-branes in type II superstring theories, adding or subtracting an NS5-brane is correlated with a change of the smoothing of the transversal $\mathbb{R}^{4}$. \end{theorem} Now the tube $\mathbb{R}_{\phi}\times S_{k}^{3}$ of the limiting geometry can be embedded in the ambient standard $\mathbb{R}^{4}$. Taking this $S_{k}^{3}$ as lying in the boundary of the Akbulut cork of some exotic smooth $\mathbb{R}_{H}^{4}$, the embedding of the tube in this exotic 4-space is determined by the embedding of the Akbulut cork. But this embedding is determined by the Casson handles attached to the cork and corresponds to the integral class $[H]=k\in H^{3}(S^{3},\mathbb{Z})$. Thus the background $\mathbb{R}^{5,1}\times\mathbb{R}_{\phi}\times SU(2)_{k}$ is geometrically realized as $\mathbb{R}^{5,1}\times\mathbb{R}_{H}^{4}$. We propose here a general heuristic rule: R1. \emph{D-branes probing exotic 4-dimensional Euclidean space $\mathbb{R}_{H}^{4}$ times 6-dim. Minkowski spacetime $M^{5,1}$ are described equivalently by the D-branes of type II string theory probing the transversal 4-space $\mathbb{R}^{4}$ to $k$ NS5-branes in the background of these 5-branes. Here $[H]=k\in H^{3}(S^{3},\mathbb{Z})$. Since $M^{5,1}$ appears on both sides of the correspondence, we say that D-branes explore exotic Euclidean $\mathbb{R}_{H}^{4}$.} Rule R1 is based on the assumption that various nonstandard smoothings of $\mathbb{R}^{4}$ can be grasped by the effects of $H^{3}(S^{3},\mathbb{Z})$.
This follows from the correlation of these classes and 4-exotics as proved in \cite{AsselmeyerKrol2009}. Following this rule we can consider many examples of D-branes in the above background (see e.g. \cite{GiveonAntoniadis2000,GiveonKutasov2000,YunKwon2009,Ribault2003,ChenSun2005}) as referring to 4-exoticness. Furthermore, type II string theory on $\mathbb{R}^{5,1}\times\mathbb{R}_{\phi}\times SU(2)_{k}$ is given by the SCFT on the infinite ``throat'' of the background, i.e. $\mathbb{R}_{\phi}\times S^{3}$. This theory was then proposed to be approachable via holography by using duality. The holographically dual theory appears to be the so-called 6-dimensional \emph{little string theory (LST)} \cite{GiveonKutasov2000,Aharony2002}. This is a very interesting situation for us, since LST has been analyzed as having possible experimental signatures at the TeV scale after compactification on a torus \cite{GiveonAntoniadis2000}. By the rule above this refers to 4-exotics as well. We do not deal here with the details and refer the interested reader to a separate paper devoted to (flux) compactification in string theory and exotic 4-smoothness. But we will present some general remarks here. LSTs are non-local theories without gravity and can be described in the $g_{s}\to0$ limit of the theory on $k$ NS5-branes. In that limit the bulk degrees of freedom, including gravity, decouple. This 6-dim. LST without gravity is holographically dual to type II string theory formulated on the background $\mathbb{R}^{5,1}\times\mathbb{R}_{\phi}\times SU(2)_{k}$ \cite{GiveonAntoniadis2000}. From the rule R1 it follows that LST is related to exotic $\mathbb{R}_{H}^{4}$, and calculations in LST should lead to invariants of the 4-exotics. Perturbative calculations, however, are hard to perform in LST, since the string coupling $g_{s}$ diverges in the dual string background along the tube, and LST is sensitive to that. One usually regulates the geometry by chopping the tube.
But the decomposition of the SCFT $SU(2)_{k}$ into $S_{Y}^{1}\times SU(2)_{k}/U(1)$ can be performed. Here $SU(2)_{k}/U(1)$ is a minimal $N=2$ model at level $k$ and $S_{Y}^{1}$ is the Cartan circle of $SU(2)$ with the parameter $Y$. The dependence on $k$ is crucial in this reformulation, since it refers to 4-exotics by Theorem \ref{Th:Ns5Branes} and the rule R1. Thus we have the SCFT $\mathbb{R}_{\phi}\times S_{Y}^{1}\times\frac{SU(2)_{k}}{U(1)}$ instead of the tube $\mathbb{R}_{\phi}\times SU(2)_{k}$. The chopping of the strong coupling region is now performed by taking the SCFT $\frac{SL(2)_{k}}{U(1)}$ instead of $\mathbb{R}_{\phi}\times S_{Y}^{1}$, which means replacing the background $\mathbb{R}^{5,1}\times\mathbb{R}_{\phi}\times SU(2)_{k}$ by $\mathbb{R}^{5,1}\times\frac{SL(2)_{k}}{U(1)}\times\frac{SU(2)_{k}}{U(1)}$. On the level of the $k$ NS5-branes this means separating the 5-branes along a transverse circle of radius $L$. The double-scaling limit of LST is then obtained by taking both $g_{s}$ and $L$ to zero while keeping $\frac{L}{g_{s}}$ constant. Following \cite{GiveonKutasov2000} we can take systems of D4- and D6-branes between separated NS5-branes. Various expressions, like correlation functions, can now be calculated perturbatively in the holographically dual 6-dimensional LST. Besides, suitable compactifications may yield spectra at the TeV scale relevant to the standard model of particle physics. The dependence of some of these expressions on $k$ can be seen as a signature of the existence of an exotic structure on the 4-space transversal to the branes. \emph{Exoticness of the 4-space transversal to the worldvolume of the NS5-branes is reflected in specific perturbative spectra of D-branes when calculated in the dual 6-dimensional LST. When compactifying this LST on 2 directions longitudinal to the 5-branes one gets spectra which could be sensitive to the transversal exoticness of $\mathbb{R}^{4}$.
From the point of view of physics, the calculations refer to the TeV scale \cite{Aharony2002}. } An important observation can be made: \emph{Some LST calculations refer not only to the holographically dual string theory but also to exotic smoothness on $\mathbb{R}^{4}$}. \emph{This is an indication that one can try, at least in some cases, to replace higher dimensional string theory effects by 4-dimensional phenomena.} This is in fact a reformulation of the rule R1. The NS5-brane backgrounds show that string theory computations ``feel'' the 4-exoticness. \section{Quantum D-branes and 4-exotica} In this section we want to show that D-branes of string theory, as in the previous sections, are related to exotic smooth $\mathbb{R}^{4}$'s also beyond the semi-classical limit, i.e. in the quantum regime of the theory, where one should deal rather with \emph{quantum branes}. What \emph{quantum branes} are in general is still an open and hard problem. One appealing proposition, relevant for this paper, is to consider branes in noncommutative spacetimes rather than on commutative manifolds or orbifolds. This leads to abstract D-branes in general noncommutative separable $C^{\star}$ algebras as counterparts of quantum D-branes. In the next section we will present a definition using wild embeddings. \subsection{D-branes on spaces: K-homology and KK theory \label{sub:D-branes-on-spaces:} } The description of systems of stable Dp-branes of IIA and IIB string theories via K-theory of topological spaces can be extended toward branes in noncommutative $C^{\star}$ algebras. A direct string representation of the algebraic and K-theoretic ideas is best seen in K-matrix string theory where, in particular, tachyons are elements of the spectral triples representing the noncommutative geometry of the world-volumes of the configurations of branes \cite{AsakawaSugimotoTerasima2002}.
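Since spectral triples will play a central role below, it may help to keep in mind the prototype commutative example (our illustration, not taken from the cited works):

```latex
% Prototype commutative spectral triple: smooth functions on the circle,
% square-integrable functions as the Hilbert space, and the Dirac-type
% operator $-i\,d/d\theta$.
\[
\left({\cal A}=C^{\infty}(S^{1}),\qquad
{\cal H}=L^{2}(S^{1}),\qquad
D=-i\,\frac{d}{d\theta}\right)
\]
% Here $D$ has compact resolvent and $[D,a]$ is bounded for every
% $a\in{\cal A}$.
```

In the K-matrix picture discussed below, the tachyon $T$ plays the role of the Dirac-type operator $D$.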
The elements of the formulation of type II strings as K-matrix theory are presented in Appendix \ref{sec:Elements-of-K-matrix}. First let us consider the case of vanishing $H$-field on $X$. The charges of D-branes are classified by suitable $K$-theory groups, i.e. $K^{0}(X)$ in IIB and $K^{1}(X)$ in IIA string theory, where $X$ is the background manifold. On the other hand, world-volumes of Dp-branes correspond to the cycles of the K-homology groups $K_{1}(X)$, $K_{0}(X)$, which are dual to the $K$-theory groups. Let us see how $K$-cycles correspond to configurations of D-branes. A $K$-cycle on $X$ is a triple $(M,E,\phi)$ where $M$ is a compact ${\rm {Spin}^{c}}$ manifold without boundary, $E$ is a complex vector bundle on $M$ and $\phi:M\to X$ is a continuous map. The topological $K$-homology $K_{\star}(X)$ is the set of equivalence classes of the triples $(M,E,\phi)$ respecting the following conditions: \begin{itemize} \item[(i)] $(M_{1},E_{1},\phi_{1})\sim(M_{2},E_{2},\phi_{2})$ when there exists a triple (a bordism of the triples) $(M,E,\phi)$ such that $(\partial M,E_{|\partial M},\phi_{|\partial M})$ is isomorphic to the disjoint union $(M_{1},E_{1},\phi_{1})\cup(-M_{2},E_{2},\phi_{2})$, where $-M_{2}$ carries the reversed ${\rm {Spin}^{c}}$ structure of $M_{2}$ and $M$ is a compact ${\rm {Spin}^{c}}$ manifold with boundary. \item[(ii)] $(M,E_{1}\oplus E_{2},\phi)\sim(M,E_{1},\phi)\cup(M,E_{2},\phi)$, \item[(iii)] Vector bundle modification: $(M,E,\phi)\sim(\widehat{M},\widehat{H}\otimes\rho^{\star}(E),\phi\circ\rho)$, where $\widehat{M}$ is an even-dimensional sphere bundle over $M$, $\rho:\widehat{M}\to M$ is the projection, and $\widehat{H}$ is a vector bundle on $\widehat{M}$ which restricts to the generator of $K(S_{q}^{2p})=\mathbb{Z}$ on every sphere $S_{q}^{2p}$ over each $q\in M$ \cite{Szabo2002a}. \end{itemize} The topological K-homology as above has an abelian group structure, with the disjoint union of cycles as the sum.
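As a minimal illustration of the definition (ours, not part of the cited construction), the simplest K-cycle is a point with a trivial rank-$N$ Chan-Paton bundle:

```latex
% The simplest K-cycle: a one-point manifold, the trivial bundle
% $\mathbb{C}^{N}$, and the inclusion of the point $x$ into $X$.
\[
({\rm pt},\,\mathbb{C}^{N},\,\iota_{x}),\qquad
\iota_{x}:{\rm pt}\to X,\quad \iota_{x}({\rm pt})=x .
\]
```

Physically such an even-dimensional cycle may be read as $N$ coincident pointlike (instanton-type) branes sitting at $x$.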
The triples $(M,E,\phi)$ with $M$ even-dimensional determine $K_{0}(X)$. Similarly, $K_{1}(X)$ corresponds to odd dimensions. Thus $K_{\star}(X)$ decomposes into a direct sum of abelian groups: \[ K_{\star}(X)=K_{0}(X)\oplus K_{1}(X)\,.\] Now the interpretation of the cycles $(M,E,\phi)$ as D-branes \cite{HarveyMoore2000} is the following: $M$ is the world-volume of the brane, $E$ the Chan-Paton bundle on it, and $\phi$ gives the embedding of the brane into the spacetime $X$. Moreover, the brane has to wrap a ${\rm Spin}^{c}$ manifold \cite{FreedWitten1999}, and $K_{0}(X)$ classifies stable D-brane configurations in IIB, and $K_{1}(X)$ in IIA, string theory. The equivalences of K-cycles as formulated in the conditions (i)-(iii) correspond to natural relations for D-branes \cite{AsakawaSugimotoTerasima2002,Szabo2008b}. The topological K-homology theory above can be obtained analytically (analytic K-homology) as a special commutative case of the following construction on general $C^{\star}$ algebras \cite{AsakawaSugimotoTerasima2002}. A Fredholm module over a $C^{\star}$ algebra ${\cal A}$ is a triple $({\cal H},\phi,F)$ such that \begin{enumerate} \item ${\cal H}$ is a separable Hilbert space, \item $\phi$ is a $^{\star}$-homomorphism between the $C^{\star}$ algebras ${\cal A}$ and ${\rm {\bf B}}({\cal H})$ of bounded linear operators on ${\cal H}$, \item $F$ is a self-adjoint operator in ${\rm {\bf B}}({\cal H})$ satisfying \end{enumerate} \[ F^{2}-id\in{\rm K}({\cal H})\,,\quad[F,\phi(a)]\in{\rm K}({\cal H})\:{\rm for}\:{\rm every}\: a\in{\cal A}\] where ${\rm K}({\cal H})$ is the algebra of compact operators on ${\cal H}$. Now let us see how a Fredholm module $({\cal H},\phi,F)$ describes a certain configuration of IIA K-matrix string theory directly related to D-branes.
To this end we consider the operators $\Phi^{0},...,\Phi^{9}$ of the K-matrix theory (infinite matrices) acting on the Hilbert space ${\cal H}$ as generating the $C^{\star}$ algebra ${\cal A}_{M}$ (see Appendix \ref{sec:Elements-of-K-matrix} and \cite{AsakawaSugimotoTerasima2002}). In the case of commuting $\Phi^{\mu}$, hence commutative ${\cal A}_{M}$, we have the following correspondence (explaining the index $M$ in ${\cal A}_{M}$): \begin{itemize} \item Every commutative $C^{\star}$ algebra is isomorphic to the algebra $C(M)$ of continuous complex functions vanishing at infinity on some locally compact Hausdorff space $M$ (Gelfand-Naimark theorem). A point $x\in M$ is determined by a character of ${\cal A}_{M}$, which is a $^{\star}$-homomorphism $\phi_{x}:{\cal A}_{M}\to\mathbb{C}$. \item $M$ serves as a common spectrum for $\Phi^{0},...,\Phi^{9}$, and the choice of a point from $M$ as the eigenvalue of $\Phi^{\mu}$ fixes the position of the non-BPS instanton along $x^{\mu}$. \item In this way $M$ is covered by the positions of infinitely many non-BPS instantons and serves as the world-volume of some higher-dimensional D-brane \cite{AsakawaSugimotoTerasima2002}. \end{itemize} Now let us explain the role of the tachyon $T$. $T$ is a self-adjoint unbounded operator acting on the Chan-Paton Hilbert space ${\cal H}$. ${\cal A}_{M}$ is a unital $C^{\star}$ algebra generated by $\Phi^{0},...,\Phi^{9}$, which can now be noncommutative. The corresponding geometry of the world-volume $M$ would then be noncommutative and given by some spectral triple.
The spectral triple is in fact $({\cal H},{\cal A}_{M},T)$, which means that the following conditions are satisfied \cite{AsakawaSugimotoTerasima2002}: \[ (T-\lambda)^{-1}\in{\rm {\bf K}}({\cal H})\:{\rm for}\:{\rm every}\:\lambda\in\mathbb{C}\setminus\mathbb{R},\;[a,T]\in{\bf B}({\cal H})\:{\rm for}\:{\rm every}\: a\in{\cal A}_{M}\] These conditions indeed hold true in our case of K-matrix string theory for the tachyon field $T$, the Chan-Paton Hilbert space ${\cal H}$ and the $C^{\star}$ algebra ${\cal A}_{M}$ generated by the $\Phi^{\mu}$ (see Appendix \ref{sec:Spectral-triples-and}). The extension of the spacetime manifold toward a noncommutative algebra and noncommutative world-volumes of branes, represented by spectral triples, is thus given by \cite{AsakawaSugimotoTerasima2002}: \begin{enumerate} \item Fixing the spacetime $C^{\star}$ algebra ${\cal A}$; \item A $^{\star}$-homomorphism $\phi:{\cal A}\to{\bf B}({\cal H})$ generates the embedding of the D-brane world-volume $M$ with its noncommutative algebra ${\cal A}_{M}$ given as ${\cal A}_{M}:=\phi({\cal A})$; \item D-branes embedded in a spacetime ${\cal A}$ are represented by the spectral triple $({\cal H},{\cal A}_{M},T)$; \item Equivalently, a D-brane in ${\cal A}$ is given by the unbounded Fredholm module $({\cal H},\phi,T)$. \end{enumerate} In particular, the classification of stable D-branes in ${\cal A}$ is the classification of the Fredholm modules $({\cal H},\phi,T)$, given by analytic K-homology. Given the isomorphism of the topological and analytic K-homology groups, we have the classification of stable D-branes in terms of K-cycles, as discussed at the beginning of this section. In terms of K-matrix string theory we can say that stable configurations of D-instantons determine the stable higher-dimensional D-branes which are K-homologically classified as above. Now let us turn to a more general situation than the K-string theory of D-instantons, i.e.
backgrounds given by non-BPS Dp-branes or non-BPS Dp-${\rm \overline{{\rm Dp}}}$-brane systems in type II string theory. The stable configurations of Dq-branes are then classified by a generalized K-theory, namely Kasparov's KK-theory. As in the above case of D-branes in a $C^{\star}$ algebra ${\cal A}$ corresponding to Fredholm modules, one defines an odd Kasparov module $({\cal H}_{{\cal B}},\phi,T)$, where ${\cal H}_{{\cal B}}$ is a countably generated Hilbert module over the $C^{\star}$ algebra ${\cal B}$, by \begin{itemize} \item a $\star$-homomorphism from ${\cal A}$ to the $C^{\star}$ algebra of bounded linear operators on ${\cal H}_{{\cal B}}$, $\phi:{\cal A}\to{\rm {\bf B}}({\cal H}_{{\cal B}})$; \item a self-adjoint operator $T$ from ${\rm {\bf B}}({\cal H}_{{\cal B}})$ satisfying: \end{itemize} \[ T^{2}-1\in{\rm {\bf K}}({\cal H}_{{\cal B}})\:{\rm and}\:[T\,,\phi(a)]\in{\rm {\bf K}}({\cal H}_{{\cal B}})\:{\rm for}\,{\rm every}\, a\in{\cal A}\,,\] where ${\rm {\bf K}}({\cal H}_{{\cal B}})$ is ${\cal B}\otimes{\bf {\rm K}}$. $({\cal H}_{{\cal B}},\phi,T)$ is in fact a family of Fredholm modules on the algebra ${\cal B}$. When ${\cal B}$ is $\mathbb{C}$ we recover the ordinary Fredholm modules from before. The homotopy equivalence classes of odd Kasparov modules $({\cal H}_{{\cal B}},\phi,T)$ determine the elements of $KK^{1}({\cal A},{\cal B})$. One also defines even Kasparov classes $KK^{0}({\cal A},{\cal B})=KK({\cal A},{\cal B})$ as homotopy equivalence classes of the triples $({\cal H}_{{\cal B}}^{(0)}\oplus{\cal H}_{{\cal B}}^{(1)},\phi^{(0)}\oplus\phi^{(1)},\left(\begin{array}{cc} 0 & T^{\star}\\ T & 0\end{array}\right))$. A natural $\mathbb{Z}_{2}$ grading appears due to the involution ${\cal H}_{{\cal B}}^{(0)}\oplus{\cal H}_{{\cal B}}^{(1)}\to{\cal H}_{{\cal B}}^{(0)}\oplus-{\cal H}_{{\cal B}}^{(1)}$. Now the classification pattern for branes in spaces emerges. There are non-BPS unstable Dp-branes wrapping the $(p+1)$-dimensional world-volume $B$.
Then stable Dq-brane configurations embedded in a space $A$ transverse to $B$ correspond to (are classified by) the classes of $KK^{1}(A,B)$. Similarly, given a non-BPS unstable Dp-${\rm \overline{Dp}}$-brane system, the stable Dq-branes embedded in $A$ transverse to $B$ (the $(p+1)$-dimensional world-volumes) are classified by elements of $KK^{0}(A,B)$. The even case $KK^{0}(A,B)$ contains the $\mathbb{Z}_{2}$ grading corresponding to the Chan-Paton indices of the Dp- and ${\rm \overline{Dp}}$-branes. \subsection{D-branes on separable $C^{\star}$ algebras and KK theory \label{sub:Branes-on-separable}} The classification of D-branes in a spacetime manifold by KK theory, as sketched in the previous subsection, can be extended to noncommutative spacetimes and noncommutative D-branes, both represented by separable $C^{\star}$ algebras. Let us first recapitulate the ``classical'' case of spaces, allowing the extension to $C^{\star}$ algebras \cite{Szabo2008c}. In the case of type II superstring theory, let $X$ be a compact part of the spacetime manifold, i.e. $X$ is a compact ${\rm spin}^{c}$ manifold, again with no background $H$-flux. As we saw, a flat D-brane in $X$ is a Baum-Douglas K-cycle $(W,E,f)$. Here $f:W\hookrightarrow X$ is the embedding of the closed ${\rm spin}^{c}$ submanifold $W$ of $X$ and $E\to W$ is a complex vector bundle with connection (the Chan-Paton gauge bundle). As follows from the Baum-Douglas construction, $E$ determines a stable class in the K-theory group $K^{0}(W)$, and all K-cycles form an additive category under disjoint union. Now, the set of all K-cycle classes, up to a kind of gauge equivalence as in the Baum-Douglas construction, gives the K-homology of $X$. This K-homology is also the set of stable homotopy classes of Fredholm modules, which are taken over the commutative $C^{\star}$ algebra $C(X)$ of continuous functions on $X$.
This defines the correspondence (isomorphism) under which a K-cycle $(W,E,f)$ corresponds to an unbounded Fredholm module $({\cal H},\rho,D_{E}^{W})$. Here ${\cal H}$ is the separable Hilbert space of square integrable spinors on $W$ taking values in the bundle $E$, i.e. $L^{2}(W,S\otimes E)$, and $\rho:C(X)\to{\rm {\bf B}}({\cal H})$ is the representation of the $C^{\star}$ algebra $C(X)$ in ${\cal H}$ such that $C(X)\ni g\to a_{g\circ f}\in{\rm {\bf B}}({\cal H})$, where $a_{g\circ f}$ is the operator of point-wise multiplication of functions in $L^{2}(W,S\otimes E)$ by the function $g\circ f$ on $W$, with $f:W\hookrightarrow X$. $D_{E}^{W}$ is the Dirac operator twisted by $E$ corresponding to the ${\rm spin}^{c}$ structure on $W$. Given the K-theory class of the Chan-Paton bundle $E$, i.e. $[E]\in K^{0}(W)$, the dual K-homology class of the D-brane, $[W,E,f]$, uniquely determines $[E]$. In that way D-branes determine K-homology classes on $X$ which are dual to K-theory classes from $K^{r}(X)$, where $r$ is the transversal dimension for the brane world-volume $W$. This K-theory class is derived from the image of $[E]\in K^{0}(W)$ under the K-theoretic Gysin map $f_{!}$. As we discussed already, the odd and even classes of the K-homology $K_{\star}(X)$ correspond to the parity of the dimension of $W$. The K-cycle $(W,E,f)$ corresponds to a Dp-brane, and its gauge equivalence is given by the Baum-Douglas construction using the conditions (i)-(iii) in Sec. \ref{sub:D-branes-on-spaces:}.
Thus we have \cite{Szabo2008b}: Fact 1: \emph{There is a one-to-one correspondence between flat D-branes in $X$, modulo Baum-Douglas equivalence, and stable homotopy classes of Fredholm modules over the algebra $C(X)$.} In the presence of a non-zero $B$-field on $X$, which is a $U(1)$-gerbe with connection represented by the characteristic class in $H^{3}(X,\mathbb{Z})$ \cite{Szabo2008b,AsselmeyerKrol2009}, one can define a twisted D-brane on $X$ as \cite{Szabo2008b}: \begin{definition} A twisted D-brane in a B-field $(X,H)$ is a triple $(W,E,\phi)$, where $\phi:W\hookrightarrow X$ is a closed, embedded oriented submanifold with $\phi^{\star}H={\rm W}_{3}(W)$, $E$ is the Chan-Paton bundle on $W$, i.e. $E\in K^{0}(W)$, and ${\rm W}_{3}(W)$ is the third integral Stiefel-Whitney class of the normal bundle of $W$, ${\rm W}_{3}(W)\in H^{3}(W,\mathbb{Z})$. \end{definition} The condition in the definition is in fact required by the cancellation of the Freed-Witten anomaly, where $H\in H^{3}(X,\mathbb{Z})$ is the NS-NS $H$-flux. Since ${\rm W}_{3}(W)$ is the obstruction to a ${\rm spin}^{c}$ structure on $W$, in the case ${\rm W}_{3}(W)=0$ one has flat D-branes in $X$. Thus equivalence classes of twisted D-branes on $X$ are represented by the twisted topological K-homology $K_{\star}(X,H)$, which is dual to the twisted K-theory $K^{\star}(X,H)$. As was argued in \cite{AsselmeyerKrol2010}, in the case of $S^{3}$ one has exotic $\mathbb{R}^{4}$'s which give rise to the twist by $H$, leading to the twisted K-theory $K^{\star}(S^{3},H)$. We can represent the $U(1)$ gerbes with connection on $S^{3}$ by bundles ${\cal E}_{H}$ of algebras over $S^{3}$, such that the sections of the bundle ${\cal E}_{H}$ define the noncommutative, twisted algebra \emph{$C_{0}(X,{\cal E}_{H})$} and the Dixmier-Douady class of ${\cal E}_{H}$, $\delta_{H}({\cal E}_{H})$, is $H\in H^{3}(S^{3},\mathbb{Z})$ \cite{AsselmeyerKrol2009a,AtiyahSegal2004,Szabo2002a}.
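For the case relevant here the twisted groups can be written down explicitly. With $[H]=k\in H^{3}(S^{3},\mathbb{Z})$, $k\neq0$, a standard computation (consistent with the cited literature) gives:

```latex
% Twisted K-theory of $S^{3}$ at twist $k$: in the Atiyah-Hirzebruch-type
% spectral sequence the only non-trivial differential is the cup product
% with $[H]$, i.e. multiplication by $k$ on
% $H^{0}(S^{3},\mathbb{Z})\to H^{3}(S^{3},\mathbb{Z})$.
\[
K^{0}(S^{3},H)=0,\qquad K^{1}(S^{3},H)\cong\mathbb{Z}_{k}.
\]
```

In particular the group of twisted D-brane charges is finite, in contrast to the untwisted $K^{1}(S^{3})\cong\mathbb{Z}$.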
The important relation is the following (\cite{Szabo2008b}, Proposition 1.15): Fact 2: \emph{There is a one-to-one correspondence between twisted D-branes in $(X,H)$ and stable homotopy classes of Fredholm modules over the algebra $C_{0}(X,{\cal E}_{H})$.} Since the algebra \emph{$C_{0}(X,{\cal E}_{H})$} certainly determines the stable homotopy classes of Fredholm modules over it, in the case $X=S^{3}$ one has the following observation: A. \emph{Let the exotic smooth $\mathbb{R}^{4}$'s be determined by the integral third classes $H\in H^{3}(S^{3},\mathbb{Z})$. Then these exotic smooth $\mathbb{R}^{4}$'s correspond one-to-one to the sets of twisted D-branes in $(S^{3},H)$.} In principle, given the complete collection of twisted D-branes in $(S^{3},H)$, which take values in $K_{\star}(S^{3},H)$, one can determine the corresponding exotic $\mathbb{R}^{4}$. This is simply the exotic $\mathbb{R}_{H}^{4}$ corresponding to the class $[H]\in H^{3}(S^{3})$, and $H$ provides the twist of the K-homology dual to the twisted K-theory $K^{\star}(S^{3},H)$ \cite{AsselmeyerKrol2009a,AsselmeyerKrol2010,Szabo2002a}. In this paper we collect further evidence that this is also the case more generally, and that the relation between D-branes and 4-exotics is even closer. Remembering that $S^{3}\subset\mathbb{R}^{4}$ is part of the Akbulut cork of the exotic structure, our previous observation can be restated as: B. \emph{The change of the exotic smoothness of $\mathbb{R}^{4}$, $\mathbb{R}_{H_{1}}^{4}\to\mathbb{R}_{H_{2}}^{4}$, $H_{1}$, $H_{2}\in H^{3}(S^{3},\mathbb{Z})$, $H_{1}\neq H_{2}$, corresponds to the change of the curved backgrounds $(S^{3},H_{1})\to(S^{3},H_{2})$, hence of the sets of stable D-branes.} This motivates the formulation: C. \emph{Some small exotic smoothness on $\mathbb{R}^{4}$, $\mathbb{R}_{H_{1}}^{4}$, can destabilize (or stabilize) D-branes in $(S^{3},H_{2})$, where $S^{3}\subset\mathbb{R}^{4}$ lies at the boundary of the Akbulut cork of $\mathbb{R}_{H_{1}}^{4}$.
We say that D-branes in $(S^{3},H_{2})$ are }4-exotic-sensitive\emph{.} Turning to the generalization of spaces to noncommutative $C^{\star}$ algebras, impressive counterparts of many topological, geometric and analytic results have recently been developed, like Poincar\'e duality, characteristic classes and the Riemann-Roch theorem. Also a generalized formula for the charges of quantum D-branes in noncommutative separable $C^{\star}$ algebras was worked out \cite{Szabo2008a,Szabo2008b}. Thus a suitable framework for considering the quantum regime of D-branes has emerged. In the next subsection we will try to find a relation to 4-exotics also in this quantum regime of D-branes. Following \cite{AsakawaSugimotoTerasima2002,Szabo2008a,Szabo2008b,Szabo2008c} one can take, as an initial substitute for the category of quantum D-branes, the category of separable $C^{\star}$ algebras with morphisms being elements of KK theory groups. This means that for a pair $({\cal A},{\cal B})$ of separable $C^{\star}$ algebras a morphism $h:{\cal A}\to{\cal B}$ is lifted to an element of the group $KK({\cal A},{\cal B})$. Thus we can consider a generalized D-brane in a separable $C^{\star}$ algebra ${\cal A}$ as corresponding to the lift $h!$ of $h:{\cal A}\to{\cal B}$, where ${\cal B}$ represents a quantum D-brane. More precisely, following \cite{Szabo2008a}, let us consider a subcategory ${\cal C}$ of the category of separable $C^{\star}$ algebras and their morphisms which consists of strongly K-oriented morphisms. This means that there exists a contravariant functor $!:{\cal C}\to KK$ such that ${\cal C}\ni f:{\cal A}\to{\cal B}$ is mapped to $f!\in KK_{d}({\cal B},{\cal A})$; here $KK$ is the category of separable $C^{\star}$ algebras with KK classes as morphisms.
Strongly K-oriented morphisms and the functor $!$ are subject to the following conditions: \begin{enumerate} \item The identity morphism $id_{{\cal A}}:{\cal A}\to{\cal A}$ is strongly K-oriented (SKKO) for every separable $C^{\star}$ algebra ${\cal A}$ and $(id_{{\cal A}})!=1_{{\cal A}}$. Also, the 0-morphism $0_{{\cal A}}:{\cal A}\to{\cal A}$ is SKKO and $(0_{{\cal A}})!=0\in KK({\cal A},{\cal A})$. \item If $f:{\cal A}\to{\cal B}$ is SKKO then $f^{\circ}:{\cal A}^{\circ}\to{\cal B}^{\circ}$ is SKKO as well, and $(f!)^{\circ}=(f^{\circ})!$. ${\cal A}^{\circ}$ is the opposite $C^{\star}$ algebra to ${\cal A}$, i.e. the one which has the same underlying vector space but the reversed product. \item Any morphism $f:{\cal A}\to{\cal B}$ is SKKO provided ${\cal A}$ and ${\cal B}$ are strong Poincar\'e dual (PD) algebras. Then $f!$ is determined as: \begin{equation} f!=(-1)^{d_{{\cal A}}}\Delta_{{\cal A}}^{\vee}\otimes_{{\cal A}^{\circ}}\left[f^{\circ}\right]\otimes_{{\cal B}^{\circ}}\Delta_{{\cal B}}\label{eq:K-orientation}\end{equation} where $[f]$ is the class of $f:{\cal A}\to{\cal B}$ in $KK({\cal A},{\cal B})$, $\Delta_{{\cal A}}$ is the fundamental class in $KK_{d_{{\cal A}}}({\cal A}\otimes{\cal A}^{\circ},\mathbb{C})=K^{d_{{\cal A}}}({\cal A}\otimes{\cal A}^{\circ})$, and $\Delta_{{\cal A}}^{\vee}$ its dual class in $KK_{-d_{{\cal A}}}(\mathbb{C},{\cal A}\otimes{\cal A}^{\circ})=K_{-d_{{\cal A}}}({\cal A}\otimes{\cal A}^{\circ})$, which exist by strong PD \cite{Szabo2008a}. \end{enumerate} K-orientability was introduced, in its original form, by A. Connes in order to define the analogue of a ${\rm spin}^{c}$ structure for noncommutative $C^{\star}$ algebras (see also \cite{Connes1984} and the next subsections).
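For orientation, a commutative instance of condition 3 may be recalled (our paraphrase of the commutative case treated in the cited works): for a compact ${\rm spin}^{c}$ manifold $M$ the algebra $C(M)$ is a strong PD algebra, with fundamental class given by the class of the Dirac operator $D_{M}$:

```latex
% Fundamental class of the commutative PD algebra $C(M)$, $M$ a compact
% ${\rm spin}^{c}$ manifold; here ${C(M)}^{\circ}=C(M)$ and the degree is
% $d=\dim M\ ({\rm mod}\ 2)$.
\[
\Delta_{C(M)}=[D_{M}]\in
KK_{d}\left(C(M)\otimes C(M),\mathbb{C}\right).
\]
```

With this choice, (\ref{eq:K-orientation}) reproduces the usual K-orientation of maps between ${\rm spin}^{c}$ manifolds.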
The formulation of K-orientability and of strong PD $C^{\star}$ algebras presented here provides crucial ingredients of the noncommutative versions of the Riemann-Roch theorem, Poincar\'e-like dualities and the Gysin K-theory map, and allows one to formulate a very general formula for noncommutative D-brane charges \cite{Szabo2008b,Szabo2008a,Szabo2008c}. Let us notice that if both ${\cal A}$ and ${\cal B}$ are PD algebras then any morphism $f:{\cal A}\to{\cal B}$ is K-oriented, and the K-orientation for $f$ is given in (\ref{eq:K-orientation}). In the particular case of a proper smooth embedding $f:M\to X$ of codimension $d$, where $M$, $X$ are smooth compact manifolds, let the normal bundle $\tau$ of $TM$ with respect to $f^{\star}(TX)$ be ${\rm spin}^{c}$. When $X$ is also ${\rm spin}^{c}$, then the ${\rm spin}^{c}$ condition on $\tau$, in the absence of $H$-flux in type II string theory formulated on $X$, is the Freed-Witten anomaly cancellation condition \cite{Szabo2008a}. In this case any D-brane in $X$, given by the triple $(W,E,f)$, determines the KK-theory element $f!\in KK(C(W),C(X))$. The construction of the K-orientation of $f:M\to X$ between smooth compact manifolds can be extended to smooth proper maps which are not necessarily embeddings. Thus the general condition of K-orientability gives the correct analogue of stable D-branes in $C^{\star}$ algebras. \begin{definition}\label{enu:Def: q-Branes} \emph{A generalized stable quantum D-brane} on a separable $C^{\star}$ algebra ${\cal A}$, represented by a separable $C^{\star}$ algebra ${\cal B}$, is given by a strongly K-oriented homomorphism of $C^{\star}$ algebras, $h_{{\cal B}}:{\cal A}\to{\cal B}$. The K-orientation means that there is the lift $(h_{{\cal B}})!\in KK({\cal B},{\cal A})$, where $!$ fulfills the functoriality condition as in (\ref{eq:K-orientation}).
\end{definition} This kind of approach to quantum D-branes is in fact a conjectural framework which exceeds both the dynamical Seiberg-Witten limit of superstring theory (where noncommutative brane world-volumes emerge) and the geometrical understanding of branes, and places itself rather in the deep quantum regime of the theory \cite{Szabo2008c}. \subsection{Exotic $\mathbb{R}^{4}$ and stable D-brane configurations on foliated manifolds\label{sub:D-branes-on-foliated}} Now we want to approach the problem of the description of stable states of D-branes in a more general geometry than spaces, namely the geometry of foliated manifolds. The case of interest for us is a codimension-1 foliation of $S^{3}$; this is a noncommutative geometry. In general, to every foliation $(V,F)$ one can associate its noncommutative $C^{\star}$ algebra $C^{\star}(V,F)$; on the other hand, a foliation determines its holonomy groupoid $G$ and the topological classifying space $BG$. The two pictures, the topological K-homology of $G$ and the $C^{\star}$-algebraic K-theory, are in fact dual. Analogously to our previous discussion of branes as K-cycles on $X$, let us start with the K-homology of $G$ and define D-branes as K-cycles in $G$: A $K$-cycle on a foliated geometry $X=(V,F)$ is a triple $(M,E,\phi)$ where $M$ is a compact manifold without boundary, $E$ is a complex vector bundle on $M$ and $\phi:M\to BG$ is a smooth K-oriented map. Due to the K-orientability in the presence of the canonical $G$-bundle $\tau$ on $BG$, the condition of a ${\rm Spin}^{c}$ structure on $M$ is lifted to a ${\rm Spin}^{c}$ structure on $TM\oplus\phi^{\star}\tau$ \cite{Connes1984}.
The topological $K$-homology $K_{\star,\tau}(X)=K_{\star,\tau}(BG)$ of the foliation $(V,F)$ is the set of equivalence classes of the above triples, where the equivalence respects the following conditions: \begin{itemize} \item[(i)] $(M_{1},E_{1},\phi_{1})\sim(M_{2},E_{2},\phi_{2})$ when there exists a triple (a bordism of the triples) $(M,E,\phi)$ such that $(\partial M,E_{|\partial M},\phi_{|\partial M})$ is isomorphic to the disjoint union $(M_{1},E_{1},\phi_{1})\cup(-M_{2},E_{2},\phi_{2})$, where $-M_{2}$ carries the reversed ${\rm {Spin}^{c}}$ structure on $TM_{2}\oplus\phi_{2}^{\star}\tau$ and $M$ is a compact manifold with boundary. \item[(ii)] $(M,E_{1}\oplus E_{2},\phi)\sim(M,E_{1},\phi)\cup(M,E_{2},\phi)$, \item[(iii)] Vector bundle modification: $(M,E,\phi)\sim(\widehat{M},\widehat{H}\otimes\rho^{\star}(E),\phi\circ\rho)$, similarly as in the case of manifolds. \end{itemize} As in the case of spaces (manifolds) and the corresponding K-homology groups representing stable D-branes of type II superstring theory (see Sec. \ref{sub:D-branes-on-spaces:}), also here, in the case of the geometry of foliated manifolds, we generalize stable D-branes as being represented by the above triples. \begin{theorem} The class of generalized stable D-branes on the $C^{\star}$ algebra $C^{\star}(S^{3},F)$ (of the codimension-1 foliation of $S^{3}$), corresponding to the K-homology classes $K_{\star,\tau}(S^{3}/F)$, determines an invariant of exotic smooth $\mathbb{R}^{4}$. Such an exotic $\mathbb{R}^{4}$ contains this foliated $S^{3}$ as a generalized (noncommutative) smooth subset \cite{AsselmeyerKrol2009a}. \end{theorem} The result follows from the fact that $K_{\star,\tau}(S^{3}/F)$ is isomorphic to $K_{\star,\tau}(BG)$ \cite{Connes1984} and this determines a class of stable D-branes in $(S^{3},F)$. The foliations $(S^{3},F)$ correspond to different smoothings of $\mathbb{R}^{4}$ \cite{AsselmeyerKrol2009}.
$\square$ Let us note that this approach allows for considering a kind of string theory and branes also beyond the integral levels of the $SU(2)$ WZW model given by $[H]\in H^{3}(S^{3},\mathbb{Z})$. The relation with exotic smooth $\mathbb{R}^{4}$'s extends to this case as well. \subsection{Net of exotic $\mathbb{R}^{4}$'s and quantum D-branes in $C^{\star}(S^{3},F)$\label{sub:Net-of-exotic}} The extension of string theory and D-branes to general noncommutative separable $C^{\star}$ algebras, where also the D-branes are represented by noncommutative separable $C^{\star}$ algebras, can be considered as an approach to quantum D-branes. A category of D-branes in the quantum regime is the category of separable $C^{\star}$ algebras and morphisms which are elements of KK theory groups. For a pair $({\cal A},{\cal B})$ of separable $C^{\star}$ algebras the morphisms $h:{\cal A}\to{\cal B}$ belong to $KK({\cal A},{\cal B})$. Abstract quantum D-branes in a separable $C^{\star}$ algebra ${\cal A}$ correspond to $\phi:{\cal A}\to{\cal B}$ where ${\cal B}$ is the algebra representing a quantum D-brane and $\phi$ is a strongly K-oriented map. For such branes a general formula for RR charges in the noncommutative setting was worked out \cite{Szabo2008a,Szabo2008b}. The D-branes considered in the previous subsection correspond to lifted KK-theory classes, i.e. $f!\in KK(M,V/F)$, where the D-brane corresponds to the triple $(M,E,f)$ and $f:M\hookrightarrow G=V/F$ is a K-oriented map (see \cite{Connes1984}). More generally (still following \cite{Connes1984}), given a K-oriented map $f:X\to Y$, one can define (under certain conditions) a push-forward map $f!$ in K-theory. The very important property of the analytic group $K(V/F)$ of the foliation $(V,F)$ is its ``wrong way'' (Gysin) functoriality, which associates to each K-oriented map $f:V_{1}/F_{1}\to V_{2}/F_{2}$ of leaf spaces an element $f!$ of the Kasparov group $KK(C^{\star}(V_{1};F_{1});C^{\star}(V_{2};F_{2}))$.
Now given a small exotic $\mathbb{R}^{4}$, say $e_{1}$, embedded in some small exotic $\mathbb{R}^{4}$, $e$, both are represented by the $C^{\star}$ algebras of codimension-1 foliations of $S^{3}$, $C^{\star}(V_{1};F_{1})$ and $C^{\star}(V;F)$ respectively. The embedding $i:e_{1}\hookrightarrow e$ determines the corresponding K-oriented map of the leaf spaces $f_{i}:S^{3}/F_{1}\to S^{3}/F$ and the KK-theory lift $f_{i}!\in KK(C^{\star}(V_{1};F_{1});C^{\star}(V;F))$. According to Def. \ref{enu:Def: q-Branes} from Sec. \ref{sub:Branes-on-separable}, we see that \begin{theorem} \label{theo:quantum-exotic-R4} Let $e$ be an exotic $\mathbb{R}^{4}$ corresponding to the codimension-1 foliation of $S^{3}$ which gives rise to the $C^{\star}$ algebra ${\cal A}_{e}$. An exotic smooth $\mathbb{R}^{4}$ embedded in $e$ determines a generalized quantum D-brane in ${\cal A}_{e}$. \end{theorem} Given exotic $\mathbb{R}^{4}$'s, $\{e_{a},\, a\in I\}$, all embedded in $e$, one has the family of $C^{\star}$ algebras, $\{{\cal A}_{a},\, a\in I\}$, of the codimension-1 foliations of $S_{a}^{3},\: a\in I$. Now the embeddings $e_{a}\to e$ determine the corresponding K-oriented maps of the leaf spaces as before, and the $\star$-homomorphisms of algebras $\phi_{a}:{\cal A}_{e}\to{\cal A}_{a}$. The corresponding classes in the KK theory groups $KK({\cal A}_{e},{\cal A}_{a})$ represent quantum D-branes in ${\cal A}_{e}$. $\square$ However, the correspondence in the theorem is many-to-one: an exotic smooth $\mathbb{R}^{4}$ embedded in $e$ can be represented (non-uniquely) by a stable D-brane in ${\cal A}_{e}$, and not all abstract D-branes in the algebra ${\cal A}_{e}$ are represented by some exotic $e'\subset e$. Still, one can consider the D-branes represented by exotic $e_{a}$ in $e$ as carrying 4-dimensional, hence potentially physical, information. This is a kind of special ``superselection'' rule in superstring theory and will be discussed separately.
\subsection{RR charges of D6-Branes in the presence of $B$-field} Now let us comment on some indications of how 4-dimensional structure can refer directly to the dynamics of higher-dimensional branes in flat spacetime. This higher-dimensional brane is the important D6-brane, which is usually involved in building various ``realistic'' 4-dimensional models derived from brane configurations. We will analyze this case separately along with compactifications in string theory. Let us consider the D6-brane of IIA string theory in flat 10-dimensional spacetime and assume that the B-field vanishes. The world-volumes of flat Dp-branes are classified by $K_{1}(\mathbb{R}^{p+1})$, where this K-homology group is understood as $K^{1}(C_{0}(\mathbb{R}^{p+1}))$, i.e. the K-group of the reduced $C^{\star}$ algebra of functions $C_{0}(\mathbb{R}^{p+1})=C(S^{p+1})$. Hence $K_{1}(\mathbb{R}^{p+1})=K_{1}(S^{p+1})$. Their charges, constraining the dynamics of the brane, are dually described by $K^{1}(\mathbb{R}^{9-p})=K^{1}(S^{9-p})$. In the case of D6-branes we have $K^{1}(S^{3})$ as classifying the RR charges of flat D6-branes in flat 10-dimensional spacetime \cite{Witten1998}. For a stable D6-brane in the presence of a non-vanishing B-field, the B-field needs to be non-trivial on the space $\mathbb{R}^{3}$ transverse to the world-volume, hence on $S^{3}$. The classification of D6-brane charges in IIA type superstring theory in flat space is then given by the twisted K-theory $K_{H}(S^{3})$, which is $K^{1}(S^{3},H)=\mathbb{Z}_{k}$, where $0\neq[dB]=[H]=k\in H^{3}(S^{3},\mathbb{Z})$. Hence the dynamics of D6-branes in type IIA superstring theory on flat spacetime is influenced by a non-zero B-field. Now we follow the philosophy, already implicit in our previous work, that the source for the non-trivial B-field on $S^{3}$, hence $H\neq0$, is the exoticness of the ambient $\mathbb{R}^{4}$. 
The motivation is certainly the fact that a given exotic $\mathbb{R}_{H}^{4}$ corresponds to the non-trivial class $[H]\in H^{3}(S^{3},\mathbb{Z})$ and conversely, where $S^{3}$ is taken from the boundary of the Akbulut cork \cite{AsselmeyerKrol2009,AsselmeyerKrol2009a}. Moreover, exotic smoothness of $\mathbb{R}_{H}^{4}$ twists the K-theory groups $K^{\star}(S^{3})$ \cite{AsselmeyerKrol2010} provided $S^{3}$ lies at the boundary of the Akbulut cork. Hence the possible dynamics (the charges) of D6-branes in the spacetime $\mathbb{R}_{H}^{4}\times\mathbb{R}^{5,1}$ can equivalently be referred to the dynamics of a D6-brane in the presence of a non-zero B-field on the transversal $\mathbb{R}^{3}$. \begin{theorem} RR charges of D6-branes in IIA string theory in the presence of a non-trivial B-field ($[dB]\neq0$), classified by $K_{H}(S^{3})$ with $H\neq0$ and $[H]\in H^{3}(S^{3},\mathbb{Z})$, are related to the exotic smoothness of a small $\mathbb{R}_{H}^{4}$. This exotic $\mathbb{R}_{H}^{4}$ corresponds to $[H]$ which twists $K(S^{3})$ \cite{AsselmeyerKrol2010}, where $S^{3}\subset\mathbb{R}^{4}$ lies at the boundary of the Akbulut cork and $S^{3}$ is transverse to the branes. Thus, changing the smoothness of $\mathbb{R}^{4}$ gives rise to a change of the allowed charges for D6-branes; hence the dynamics changes. \end{theorem} We see that the geometric realization of (classical) D-branes in certain backgrounds of string theory is correlated with small exotic $\mathbb{R}^{4}$'s which can all be embedded in the standard smooth $\mathbb{R}^{4}$. We saw in the previous subsection \ref{sub:Net-of-exotic} that quantum D-branes correspond to the net of exotic smooth $\mathbb{R}^{4}$'s embedded in a certain exotic smooth $\mathbb{R}^{4}$. An intriguing interpretation of this correspondence can also be given: \emph{in some limit of IIA superstring theory, small exotic smooth $\mathbb{R}^{4}$'s can be considered as carrying the RR charges of D6-branes}. 
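The twisted group $K^{1}(S^{3},H)=\mathbb{Z}_{k}$ quoted above can be obtained by a standard spectral-sequence argument, which we sketch for completeness. For $S^{3}$, the Atiyah-Hirzebruch spectral sequence for twisted K-theory has only one possibly non-trivial differential, $d_{3}=Sq^{3}+[H]\cup$, which on $H^{0}(S^{3},\mathbb{Z})$ reduces to the cup product with $[H]=k$:\[ d_{3}:H^{0}(S^{3},\mathbb{Z})=\mathbb{Z}\stackrel{\cdot k}{\longrightarrow}H^{3}(S^{3},\mathbb{Z})=\mathbb{Z}\quad.\] Hence $K^{0}_{H}(S^{3})=\ker(\cdot k)=0$ and $K^{1}_{H}(S^{3})={\rm coker}(\cdot k)=\mathbb{Z}_{k}$, reproducing the classification of D6-brane charges used above. 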
We will come back to these interesting points in the next section. \section{From wild embeddings to quantum D-branes} In this section we try to give a geometric approach to quantum D-branes using wild embeddings of trivial complexes into $S^{n}$ or $\mathbb{R}^{n}$. Furthermore we are able to obtain a low-dimensional interpretation of D-brane charges. This point of view is supported by Theorem \ref{theo:quantum-exotic-R4} above. Here we will describe a dimension-independent way: every wild embedding $j$ of a $p-$dimensional complex $K$ into the $n-$dimensional sphere $S^{n}$ is determined by the fundamental group $\pi_{1}(S^{n}\setminus j(K))$ of the complement. This group is perfect and uniquely representable by a 2-dimensional complex, a singular disk or grope (see \cite{Can:79}). As we showed in \cite{AsselmeyerKrol2009}, the exotic $\mathbb{R}^{4}$ is given by the grope. Thus, every quantum D-brane must be determined (as a kind of germ) by some exotic $\mathbb{R}^{4}$. \subsection{Wild and tame embeddings\label{sub:Wild-and-tame-embed}} We call a map $f:N\to M$ between two topological manifolds an embedding if $N$ and $f(N)\subset M$ are homeomorphic to each other. From the differential-topological point of view, an embedding is a map $f:N\to M$ with injective differential at each point (an immersion) such that $N$ is diffeomorphic to $f(N)\subset M$. An embedding $i:N\hookrightarrow M$ is \emph{tame} if $i(N)$ is represented by a finite polyhedron homeomorphic to $N$. Otherwise we call the embedding \emph{wild}. There are famous wild embeddings like Alexander's horned sphere or Antoine's necklace. In physics one uses mostly tame embeddings, but as Cannon mentioned in his overview \cite{Can:78}, one needs wild embeddings to understand the tame ones. As shown by us \cite{AsselmeyerKrol2009}, wild embeddings are needed to understand exotic smoothness. 
As explained in \cite{Can:78} by Cannon, tameness is strongly connected to another topic: decomposition theory (see the book \cite{Daverman1986}). Two embeddings $f,g:N\to M$ are said to be isotopic, if there exists a homeomorphism $F:M\times[0,1]\to M\times[0,1]$ such that \begin{enumerate} \item $F(y,0)=(y,0)$ for each $y\in M$ (i.e. $F(.,0)=id_{M}$) \item $F(f(x),1)=g(x)$ for each $x\in N$, and \item $F(M\times\left\{ t\right\} )=M\times\left\{ t\right\} $ for each $t\in[0,1]$. \end{enumerate} If only the first two conditions can be fulfilled, one calls it a concordance. Embeddings are usually classified by isotopy. An important example is the embedding $S^{1}\to\mathbb{R}^{3}$, known as a knot, where different knots are different isotopy classes. \subsection{Embeddings of $(4k-1)$- into $6k$-manifolds} Now we start with a short discussion of embeddings $S^{3}\to S^{6}$ as the example $k=1$ of a general map $S^{4k-1}\to S^{6k}$. As Haefliger \cite{Haefliger1962} showed, the isotopy classes of such embeddings are determined by the integer classes (the Hopf invariant) in $H^{3}(S^{3},\mathbb{Z})$. Thus the $(4k-1)$-sphere is knotted in the $6k$-sphere. This phenomenon depends strongly on smoothness, i.e. it disappears for continuous or PL embeddings. Usually every $n-$sphere or every homology $n-$sphere unknots (in PL or TOP) in $\mathbb{R}^{m}$ for $m\geq n+3$, i.e. for codimension $m-n=3$ or higher. Of course, one has the usual knotting phenomena in codimension $2$, and the codimension-$1$ case was shown to be unique for embeddings $S^{n}\to S^{n+1}$ (for $n\geq6$) but is hard to solve in other cases. Let $F:\Sigma\to S^{6}$ be an embedding of a homology 3-sphere $\Sigma$ (including the case $S^{3}$). Then the normal bundle of $F$ is trivial (definition of an embedding), and homotopy classes of trivializations of the normal bundle (normal framings) are classified by the homotopy class $[\Sigma,SO(3)]$ with respect to some fixed framing. 
There is an isomorphism $[\Sigma,SO(3)]=[\Sigma,S^{2}]$ (the so-called Pontrjagin-Thom construction) and $[\Sigma,S^{2}]$ can be identified with $H^{3}(\Sigma,\mathbb{Z})=\mathbb{Z}$. That is one possible way to get the classification of isotopy classes of embeddings $\Sigma\to S^{6}$ by elements of $H^{3}(\Sigma,\mathbb{Z})=\mathbb{Z}$. A class $[H]$ in $H^{3}(\Sigma,\mathbb{Z})$ determines via Stokes theorem\[ \intop_{\Sigma=\partial A}H=\intop_{A}dH\] the 4-form $dH$ in the 4-manifold% \footnote{To every 3-manifold $\Sigma$, there is a 4-manifold $A$ with $\partial A=\Sigma$.% } $A$ with $\partial A=\Sigma$. As we know, the (small) exotic $\mathbb{R}^{4}$ is determined by a contractible submanifold $A$, the Akbulut cork, with boundary $\partial A$ a homology 3-sphere. The contractibility of $A$ implies $H^{4}(A,\mathbb{Z})=0$, i.e. every 4-form on $A$ is given by $dH$ for some 3-form $H$. The isomorphism $H^{4}(A,\partial A)=H^{3}(\partial A)$ and Stokes theorem imply\[ \intop_{A}dH=\intop_{\partial A}H=Q\not=0\] the non-vanishing of the 4-form $dH=Q\cdot dvol(A)$ with the volume form $dvol(A)$ of $A$ normalized to one. Combined with our result that $H^{3}(S^{3},\mathbb{Z})$ determines some exotic $\mathbb{R}^{4}$, we have: \begin{theorem}(The topological origins of the allowed D6-brane charges) \label{theo:D6-brane-charge} Let $\mathbb{R}_{H}^{4}$ be some exotic $\mathbb{R}^{4}$ determined by some 3-form $H$, i.e. by a codimension-1 foliation on the boundary $\partial A$ of the Akbulut cork $A$. The codimension-1 foliation on $\partial A$ is determined by $H^{3}(\partial A,\mathbb{R})$. Each integer class in $H^{3}(\partial A,\mathbb{Z})$ determines the isotopy class of an embedding $\partial A\to S^{6}$. Hence, the group of allowed charges of D6-branes in the presence of the B-field in $M^{10}$, i.e. $K_{H}^{3}(S^{3})$ with $dB=H$, is determined equivalently by the isotopy classes of embeddings $\partial A\to S^{6}$. 
The classes of the $H$-field are topologically determined by the isotopy classes of the embeddings; this affects the allowed charges of D6-branes. \end{theorem} But more is true. Given two embeddings $F_{i}:\Sigma_{i}\to S^{6}$ of two homology 3-spheres $\Sigma_{i}$, $i=0,1$, a homology cobordism is a cobordism between $\Sigma_{0}$ and $\Sigma_{1}$ whose inclusions induce isomorphisms on homology. This cobordism can be embedded in $S^{6}\times[0,1]$, determining the homology bordism class of the embedding. Then two embeddings of an oriented homology 3-sphere in $S^{6}$ are isotopic if and only if they are homology bordant. \subsection{Real cohomology classes and wild embeddings} Wild embeddings are important for understanding usual embeddings. Consider a closed curve in the plane. By common sense, this curve divides the plane into an interior and an exterior area. The Jordan curve theorem agrees with that view completely. But what about one dimension higher, i.e. the embedding $S^{2}\to\mathbb{R}^{3}$? Alexander was the first to construct a counterexample, Alexander's horned sphere \cite{Alex:24}, a wild embedding $D^{2}\to\mathbb{R}^{3}$. The main property of this wild object $D_{W}^{2}$ is the non-simply connected complement $\mathbb{R}^{3}\setminus D_{W}^{2}$. In the following we will concentrate on wild embeddings of spheres $S^{n}$ into spheres $S^{m}$, equivalent to embeddings of $\mathbb{R}^{n}$ into $\mathbb{R}^{m}$ relative to the point $\infty$, or to relative embeddings of $D^{n}$ into $D^{m}$ (relative to the boundary). From the physical point of view, D-branes or M-branes are topological objects of a trivial type like $\mathbb{R}^{n},S^{n}$ or $D^{n}$. Let us start with the case of a finite $k-$dimensional polyhedron $K^{k}$ (i.e. a piecewise-linear version of a $k-$disk $D^{k}$). Consider the wild embedding $i:K\to S^{n}$ with $0\leq k\leq n-3$ and $n\geq7$. 
Then, as proved in \cite{FerryPedersenVogel1989}, the complement $S^{n}\setminus i(K)$ is non-simply connected with a countably generated (but not finitely presented) fundamental group $\pi_{1}(S^{n}\setminus i(K))=\pi$. Furthermore, the group $\pi$ is perfect (i.e. it equals its commutator subgroup, $[\pi,\pi]=\pi$, implying $H_{1}(\pi)=0$) and $H_{2}(\pi)=0$ ($\pi$ is called a superperfect group). In other words, $\pi$ is a group in which every element $x\in\pi$ can be written as a commutator $x=[a,b]=aba^{-1}b^{-1}$ of further elements. By using geometric group theory, we can represent $\pi$ by a grope (or generalized disk, see Cannon \cite{Can:79}), i.e. a hierarchical object with the same fundamental group as $\pi$ (see below). In \cite{AsselmeyerKrol2009}, the grope was used to construct a non-trivial involution of the 3-sphere connected with a codimension-1 foliation of the 3-sphere classified by real cohomology classes in $H^{3}(S^{3},\mathbb{R})$. By using the suspension \[ \Sigma X=X\times[0,1]/(X\times\left\{ 0\right\} \cup X\times\left\{ 1\right\} \cup\left\{ x_{0}\right\} \times[0,1])\] of a topological space $(X,x_{0})$ with base point $x_{0}$, we have an isomorphism of cohomology groups $H^{n}(S^{n})=H^{n+1}(\Sigma S^{n})$. Thus the class in $H^{3}(S^{3},\mathbb{R})$ induces classes in $H^{n}(S^{n},\mathbb{R})$ for $n>3$, represented by a wild embedding $i:K\to S^{n}$ of some $k-$dimensional polyhedron $K$. Then every small exotic $\mathbb{R}^{4}$ also determines higher brane charges: \begin{theorem} Let $\mathbb{R}_{H}^{4}$ be some exotic $\mathbb{R}^{4}$ determined by an element in $H^{3}(S^{3},\mathbb{R})$, i.e. by a codimension-1 foliation on the boundary $\partial A$ of the Akbulut cork $A$. 
Each wild embedding $i:K^{3}\to S^{p}$ for $p>6$ of a 3-dimensional polyhedron (as part of $S^{3}$) determines a class in $H^{p}(S^{p},\mathbb{R})$ which can be interpreted as the charge of a $Dp$ brane in the sense of Theorem \ref{theo:D6-brane-charge}. \end{theorem} \subsection{$C^{*}-$algebras associated to wild embeddings} As described above, a wild embedding $j:K\to S^{n}$ of a polyhedron $K$ is characterized by its complement $M(K,j)=S^{n}\setminus j(K)$, which is non-simply connected (i.e. the fundamental group $\pi_{1}(M(K,j))$ is non-trivial). The fundamental group $\pi_{1}(M(K,j))=\pi$ of the complement $M(K,j)$ is a superperfect group, i.e. $\pi$ is identical to its commutator subgroup $\pi=[\pi,\pi]$ (so that $H_{1}(\pi)=0$) and $H_{2}(\pi)=0$. This group is not finite in the case of a wild embedding. Here we use gropes to represent $\pi$ geometrically. The idea behind this approach is very simple: the fundamental group of the 2-dimensional torus $T^{2}$ is the abelian group $\pi_{1}(T^{2})=\left\langle a,b\:|\:[a,b]=aba^{-1}b^{-1}=e\right\rangle =\mathbb{Z}\oplus\mathbb{Z}$ generated by the two standard slopes $a,b$. The capped torus $T^{2}\setminus D^{2}$ has an additional element $c$ in the fundamental group, generated by the boundary $\partial(T^{2}\setminus D^{2})=S^{1}$. This element is represented by the commutator $c=[a,b]$. In our superperfect group we have the same situation: every element $c$ is a commutator $[a,b]$ of two other elements $a,b$, which are in turn represented by commutators, etc. Thus one obtains a hierarchical object, a generalized 2-disk or a grope (see Fig. \ref{fig:grope}).% \begin{figure} \begin{center} \includegraphics[height=8cm]{grope} \end{center}\caption{An example of a grope\label{fig:grope}} \end{figure} Now we describe two ways to associate a $C^{*}-$algebra to this grope. The first approach uses a combination of our previous papers \cite{AsselmeyerKrol2009,AsselmeyerKrol2010}. 
Then every grope determines a codimension-1 foliation of the 3-sphere and vice versa. The leaf space of this foliation is a factor $I\! I\! I_{1}$ von Neumann algebra, and we have a $C^{*}-$algebra for the holonomy groupoid. For later use, we need a more direct way to construct a $C^{*}-$algebra from a wild embedding or grope. The main ingredient is the superperfect group $\pi$, a countably generated but not finitely presented group. To get an impression of this group, we consider a representation $\pi\to G$ into some infinite group. As the obvious example for $G$ we choose the infinite union $GL(\mathbb{C})=\bigcup_{\infty}GL(n,\mathbb{C})$ of complex, linear groups (induced from the embedding $GL(n,\mathbb{C})\to GL(n+1,\mathbb{C})$ by an inductive limit process). Then we have a homomorphism\[ U:\pi\to GL(\mathbb{C})\] mapping a commutator $[a,b]\in\pi$ to $U([a,b])\in[GL(\mathbb{C}),GL(\mathbb{C})]$ in the commutator subgroup of $GL(\mathbb{C})$. But every element in $\pi$ is a commutator, i.e. we have\[ U:\pi\to[GL(\mathbb{C}),GL(\mathbb{C})]\] and we are faced with the problem of determining this commutator subgroup. Actually, one has Whitehead's lemma (see \cite{Ros:94}), which determines this subgroup to be the group of elementary matrices $E(\mathbb{C})$. One defines the elementary matrix $e_{ij}(a)$ in $E(n,\mathbb{C})$ to be the $(n\times n)$ matrix with $1$'s on the diagonal, with the complex number $a\in\mathbb{C}$ in the $(i,j)-$slot, and $0$'s elsewhere. Analogously, $E(\mathbb{C})$ is the infinite union $E(\mathbb{C})=\bigcup_{\infty}E(n,\mathbb{C})$. Thus, the homomorphism descends to a homomorphism\[ U:\pi\to E(\mathbb{C})=[GL(\mathbb{C}),GL(\mathbb{C})]\quad.\] By using the relation\[ [e_{ij}(a),e_{jk}(b)]=e_{ij}(a)e_{jk}(b)e_{ij}(a)^{-1}e_{jk}(b)^{-1}=e_{ik}(ab)\quad i,j,k\:\mbox{distinct}\] one can split every element in $E(\mathbb{C})$ into a (group) commutator of two other elements. 
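The relation above is pure matrix algebra and can be checked directly. Writing $e_{ij}(a)=1+aE_{ij}$ with the matrix units $E_{ij}$ satisfying $E_{ij}E_{kl}=\delta_{jk}E_{il}$ (so that $E_{ij}^{2}=0$ and $e_{ij}(a)^{-1}=e_{ij}(-a)$ for $i\neq j$), one computes for distinct $i,j,k$\[ e_{ij}(a)e_{jk}(b)e_{ij}(a)^{-1}e_{jk}(b)^{-1}=(1+aE_{ij}+bE_{jk}+abE_{ik})(1-aE_{ij})(1-bE_{jk})=1+abE_{ik}=e_{ik}(ab)\quad,\] since all products of matrix units other than $E_{ij}E_{jk}=E_{ik}$ vanish for distinct indices. 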
Given a grope $\mathcal{G}$ representing the (superperfect) group $\pi$ via $\pi_{1}(\mathcal{G})=\pi$, we now define the $C^{*}-$algebra $C^{*}(\mathcal{G},\pi)$ associated to the grope $\mathcal{G}$ with group $\pi$. The basic elements of this algebra are smooth half-densities with compact supports on $\mathcal{G}$, $f\in C_{c}^{\infty}(\mathcal{G},\Omega^{1/2})$, where $\Omega_{\gamma}^{1/2}$ for $\gamma\in\pi$ is the one-dimensional complex vector space of maps $\rho$ from the exterior power $\Lambda^{2}L$ of the union of levels $L$ representing $\gamma$ to $\mathbb{C}$ such that \[ \rho(\lambda\nu)=|\lambda|^{1/2}\rho(\nu)\qquad\forall\nu\in\Lambda^{2}L,\lambda\in\mathbb{R}\:.\] For $f,g\in C_{c}^{\infty}(\mathcal{G},\Omega^{1/2})$, the convolution product $f*g$ is given by the equality\[ (f*g)(\gamma)=\intop_{[\gamma_{1},\gamma_{2}]=\gamma}f(\gamma_{1})g(\gamma_{2})\] Then we define via $f^{*}(\gamma)=\overline{f(\gamma^{-1})}$ a $*$-operation making $C_{c}^{\infty}(\mathcal{G},\Omega^{1/2})$ into a $*$-algebra. For each capped torus $T$ in some level of the grope $\mathcal{G}$ one has a natural representation of $C_{c}^{\infty}(\mathcal{G},\Omega^{1/2})$ on the $L^{2}$ space over $T$. Then one defines the representation\[ (\pi_{x}(f)\xi)(\gamma)=\intop_{[\gamma_{1},\gamma_{2}]=\gamma}f(\gamma_{1})\xi(\gamma_{2})\qquad\forall\xi\in L^{2}(T).\] The completion of $C_{c}^{\infty}(\mathcal{G},\Omega^{1/2})$ with respect to the norm \[ ||f||=\sup_{x\in M}||\pi_{x}(f)||\] makes it into a $C^{*}-$algebra $C^{*}(\mathcal{G},\pi)$. Via the representation $U:\pi\to E(\mathbb{C})$, we get a homomorphism into the usual convolution algebra $C^{*}(E(\mathbb{C}))$ of the group $E(\mathbb{C})$, used later to construct the action of the quantum D-brane. 
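The construction parallels the usual group $C^{*}-$algebra: for a discrete group $\Gamma$ one completes the $*$-algebra $\mathbb{C}[\Gamma]$ of finitely supported functions, with the convolution product and involution\[ (f*g)(\gamma)=\sum_{\gamma_{1}\gamma_{2}=\gamma}f(\gamma_{1})g(\gamma_{2}),\qquad f^{*}(\gamma)=\overline{f(\gamma^{-1})}\quad,\] in the norm coming from the left regular representation on $\ell^{2}(\Gamma)$. The only modification in $C_{c}^{\infty}(\mathcal{G},\Omega^{1/2})$ is that the group product $\gamma_{1}\gamma_{2}$ is replaced by the commutator decomposition $[\gamma_{1},\gamma_{2}]=\gamma$ dictated by the grope structure. 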
Finally we are able to define the $C^{*}-$algebra associated to a wild embedding: \begin{definition} Let $j:K\to S^{n}$ be a wild embedding with $\pi=\pi_{1}(S^{n}\setminus j(K))$ as fundamental group of the complement $M(K,j)=S^{n}\setminus j(K)$. The $C^{*}-$algebra $C^{*}(K,j)$ associated to the wild embedding is defined to be $C^{*}(K,j)=C^{*}(\mathcal{G},\pi)$, the $C^{*}-$algebra of the grope $\mathcal{G}$ with group $\pi$. \end{definition} \subsection{Isotopy classes of wild embeddings and KK theory} In section \ref{sub:Wild-and-tame-embed} we introduced the notion of isotopy classes for embeddings. Given two embeddings $f,g:N\to M$ with a special map $F:M\times[0,1]\to M\times[0,1]$ as deformation of $f$ into $g$, both embeddings are isotopic to each other. The definition is independent of whether the embedding is tame or wild. Now we specialize to our case of wild embeddings $f,g:K\to S^{n}$ with complements $M(K,f)$ and $M(K,g)$. The map $F:S^{n}\times[0,1]\to S^{n}\times[0,1]$ induces a homotopy of the complements $M(K,f)\simeq M(K,g)$, giving an isomorphism of the fundamental groups $\pi_{1}(M(K,g))=\pi_{1}(M(K,f))$. Thus, the isotopy class of the wild embedding $f$ is completely determined by $M(K,f)$ up to homotopy. By Connes' work on operator algebras of foliations, our construction of the $C^{*}-$algebra for a wild embedding is functorial, i.e. an isotopy of the embeddings induces an isomorphism between the corresponding $C^{*}-$algebras. Given two non-isotopic wild embeddings, we only have a homomorphism between the $C^{*}-$algebras. But every homomorphism (which is not an isomorphism) between $C^{*}-$algebras $A,B$ gives an element of $KK(A,B)$ and vice versa. Thus, \begin{theorem} Let $j:K\to S^{n}$ be a wild embedding with $\pi=\pi_{1}(S^{n}\setminus j(K))$ as fundamental group of the complement $M(K,j)=S^{n}\setminus j(K)$ and $C^{*}-$algebra $C^{*}(K,j)$. 
Let $i$ be another wild embedding with $C^{*}-$algebra $C^{*}(K,i)$. Then the elements of $KK(C^{*}(K,j),C^{*}(K,i))$ are the isotopy classes of the wild embedding $j$ relative to $i$. \end{theorem} \subsection{Wild embeddings are quantum D-branes} Consider a wild embedding $f:K\to S^{n}$ with $C^{*}-$algebra $C^{*}(K,f)$ and group $\pi=\pi_{1}(S^{n}\setminus f(K))$. In this section we will derive an action for this embedding and recover the D-brane action in the classical limit. The starting point is our remark above that the group $\pi$ can be geometrically constructed by using a grope $\mathcal{G}$ with $\pi=\pi_{1}(\mathcal{G})$. This grope was used to construct a codimension-1 foliation on the 3-sphere classified by the Godbillon-Vey invariant. This class can be seen as an element of $H^{3}(BG,\mathbb{R})$ with the holonomy groupoid $G$ of the foliation. The strong relation between the grope $\mathcal{G}$ and the foliation gives an isomorphism of the corresponding $C^{*}-$algebras, which can easily be verified by using the definitions of both algebras. As shown by Connes \cite{Connes1984,Connes94}, the Godbillon-Vey class $GV$ can be expressed as a cyclic cohomology class (the so-called flow of weights)\[ GV_{HC}\in HC^{2}(C_{c}^{\infty}(G))\simeq HC^{2}(C_{c}^{\infty}(\mathcal{G},\pi))\] of the $C^{*}-$algebra for the foliation, isomorphic to the $C^{*}-$algebra for the grope $\mathcal{G}$. Then we define an expression\[ S=Tr_{\omega}\left(GV_{HC}\right)\] uniquely associated to the wild embedding ($Tr_{\omega}$ is the Dixmier trace). $S$ is the action of the embedding. Because of the invariance of the class $GV_{HC}$, the variation of $S$ vanishes if the map $f$ is a wild embedding. But this expression is not satisfactory and cannot be used to get the classical limit. For that purpose we consider the representation of the group $\pi$ in the group $E(\mathbb{C})$ of elementary matrices. 
As mentioned above, $\pi$ is countably generated and the generators can be arranged in the embedding space. Then we obtain matrix-valued functions $X^{\mu}\in C_{c}^{\infty}(E(\mathbb{C}))$ as the images of the generators of $\pi$ w.r.t. the representation $\pi\to E(\mathbb{C})$, labeled by the dimensions $\mu=1,\ldots,n$ of the embedding space $S^{n}$. Via the representation $\iota:\pi\to E(\mathbb{C})$, we obtain a cyclic cocycle in $HC^{2}(C_{c}^{\infty}(E(\mathbb{C})))$ generated by a suitable Fredholm operator $F$. Here we use the standard choice $F=D|D|^{-1}$ with the Dirac operator $D$ acting on functions $C_{c}^{\infty}(E(\mathbb{C}))$. Then the cocycle in $HC^{2}(C_{c}^{\infty}(E(\mathbb{C})))$ can be expressed by\[ \iota_{*}GV_{HC}=\eta_{\mu\nu}[F,X^{\mu}][F,X^{\nu}]\] using a metric $\eta_{\mu\nu}$ on $S^{n}$ via pull-back along the representation $\iota:\pi\to E(\mathbb{C})$. Finally we obtain the action\begin{equation} S=Tr_{\omega}([F,X^{\mu}][F,X_{\mu}])=Tr_{\omega}([D,X^{\mu}][D,X_{\mu}]|D|^{-2})\label{eq:quantum-D-brane-action}\end{equation} which can be evaluated by using the heat kernel of the Dirac operator. For the classical limit, we take a tame embedding $f:K\to S^{n}$ of a $p-$dimensional complex $K$. Then the group $\pi$ simplifies to a finite group or is trivial. The Dirac operator $D$ on $K$ acts on usual square-integrable functions and the action simplifies to\[ S=\intop_{K}\left(\eta_{\mu\nu}\partial^{\alpha}X^{\mu}\partial_{\alpha}X^{\nu}+\frac{1}{3}R+\ldots\right)dvol(K)\] for the main contributions, where $R$ is the scalar curvature of $K$ (for $p>2$). It is known that this action agrees with the usual Born-Infeld action for $p-$branes ($p>2$) if $R>0$. Thus we obtain a description of the quantum D-brane action by using wild embeddings. We will further investigate this point in a forthcoming paper. 
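The reduction to this classical action can be made plausible by Connes' trace theorem: for a tame embedding the commutator $[D,X^{\mu}]$ acts by Clifford multiplication with $dX^{\mu}$, and the Dixmier trace of an operator of order $-p$ recovers the integral over $K$,\[ Tr_{\omega}\big(f\,|D|^{-p}\big)=c_{p}\intop_{K}f\; dvol(K)\quad,\] with a dimension-dependent constant $c_{p}$. Applying this to $f=\eta_{\mu\nu}\partial^{\alpha}X^{\mu}\partial_{\alpha}X^{\nu}$, together with the curvature terms of the heat-kernel expansion of $|D|^{-2}$, gives the leading terms of the integrand above. 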
\section{Conclusion} In this paper we have presented a number of results supporting our main conjecture:\\ \emph{The exotic small $\mathbb{R}^{4}$ lies at the heart of quantum gravity. In particular, it is a quantized object.}\\ Here we mainly concentrated on the various relations to branes in superstring theory as a possible candidate for quantum gravity. We found amazing connections between 4-exotics and NS- and D-branes in various string backgrounds. We also studied the case of quantum D-branes using $C^{*}-$algebras. All the results can be summarized simply by:\\ \emph{The exotic small $\mathbb{R}^{4}$, as described by codimension-1 foliations on the 3-sphere, is the germ of a wide range of effects on D-branes. A quantum Dp-brane is given by a wild embedding of a $p-$dimensional complex into an $n-$dimensional space, described by a two-dimensional complex, a grope. }\\ Further evidence supporting this statement, as well as the relation to supersymmetry and realistic QFT, will be presented in a separate paper. As known from our previous work, the grope is the main structure relating the exotic small $\mathbb{R}^{4}$ and the codimension-1 foliation on the 3-sphere. The description of the wild embedding is rather independent of the dimensions ($n>6$, $p>2$). That is the reason why the exotic small $\mathbb{R}^{4}$ appeared in so many different situations above! \section*{Acknowledgment} T.A. wants to thank C.H. Brans and H. Ros\'e for numerous discussions over the years about the relation of exotic smoothness to physics. J.K. benefited much from the explanations given to him by Robert Gompf regarding 4-smoothness several years ago, and discussions with Jan S{\l}adkowski.
\section{Phase diagram in equilibrium at zero temperature} \label{sec:sup1} We here give a detailed analysis of the properties of our model at $T=0$ in equilibrium. We treat both the cases of non-conserved and conserved order parameters. The contents of this section, especially for the case of the non-conserved order parameter, can be found in the literature~\cite{sup:binder1983phase}. For simplicity, we restrict ourselves to the case $N=2$; the extension to $N>2$ can be straightforwardly performed. \subsection{Bulk long-range order} In this section, we study the phase diagram for regions sufficiently far from the $x_2=0$ plane. Because the effects of the enhanced interactions are neglected in this region, the phase diagram is calculated by minimizing the bulk free energy $\Phi_b[\bm{\varphi}]$. We start by calculating the local minima of $\Phi_b[\bm{\varphi}]$, solving the following equation \begin{eqnarray} \frac{\delta \Phi_b[\bm{\varphi}]}{\delta \varphi^a(\bm{x},t)} = - \Delta \varphi^a(\bm{x}) + r_0 \varphi^a(\bm{x}) + \frac{g}{2} |\bm{\varphi}(\bm{x})|^2 \varphi^a(\bm{x}) = 0. \label{eq:sup:the minimum of the total free energy: bulk} \end{eqnarray} We then search for the global-minimum state among the local-minimum solutions. It should be noted that the global-minimum state depends on the existence of the conservation law: \begin{eqnarray} \int_{V_3} d^3\bm{x} \bm{\varphi}(\bm{x}) = \bm{0}. \label{eq:sup:conservation law} \end{eqnarray} First, we consider the non-conserved case. We choose a symmetry-breaking solution by setting $\varphi^2(\bm{x})=0$. Then, Eq.~(\ref{eq:sup:the minimum of the total free energy: bulk}) becomes \begin{eqnarray} - \Delta \varphi^1(\bm{x}) + r_0 \varphi^1(\bm{x}) + \frac{g}{2} \big(\varphi^1(\bm{x})\big)^3 = 0. 
\label{eq:sup:the minimum of the total free energy: bulk: mod 1} \end{eqnarray} The solution of Eq.~(\ref{eq:sup:the minimum of the total free energy: bulk: mod 1}) that satisfies the periodic boundary condition is given by \begin{eqnarray} \varphi^1(\bm{x}) = \left\{ \begin{array}{ll} 0 & {\rm for} \ r_0\geq 0, \\ \pm a_0 & {\rm for} \ r_0< 0, \\ \end{array}\right. \label{eq:sup:solution of bulk equation: model A} \end{eqnarray} where we set \begin{eqnarray} a_0 = \sqrt{-\frac{2r_0}{g}} . \end{eqnarray} This result means that the bulk region exhibits long-range order for $r_0<0$. Next, we consider the conserved case, where the solution Eq.~(\ref{eq:sup:solution of bulk equation: model A}) is not relevant because it does not satisfy the conservation law Eq.~(\ref{eq:sup:conservation law}). Instead of Eq.~(\ref{eq:sup:solution of bulk equation: model A}), we consider two types of solutions: domain-wall and twisted solutions. The domain-wall solution has a domain wall described by~\cite{sup:chaikin1995principles} \begin{eqnarray} \varphi^1(\bm{x}) = a_0 \tanh \Big(\frac{x_1}{\sqrt{2}\xi} \Big), \end{eqnarray} where we have assumed that the domain wall is located at $x_1=0$ and $\xi=|r_0|^{-1/2}$ is the correlation length. Then, the solution satisfying the conservation law Eq.~(\ref{eq:sup:conservation law}) is constructed by combining two domain walls, for example, as \begin{eqnarray} \varphi^1(\bm{x}) = a_0\tanh \Big(\frac{x_1+\frac{L_1}{4}}{\sqrt{2}\xi} \Big) - a_0\tanh \Big(\frac{x_1-\frac{L_1}{4}}{\sqrt{2}\xi} \Big) - a_0 + b(x_1), \label{eq:sup:solution of bulk equation: model B} \end{eqnarray} where $b(x_1)$ is a small correction. Next, in order to calculate the twisted solution, we return to Eq.~(\ref{eq:sup:the minimum of the total free energy: bulk}). 
We here assume that $|\bm{\varphi}(\bm{x})|^2$ is a constant $I_b$ independent of $\bm{x}$: \begin{eqnarray} |\bm{\varphi}(\bm{x})|^2 = I_b, \label{eq:sup:assumption for bulk solution O(2) to O(1)} \end{eqnarray} and also assume that $\bm{\varphi}(\bm{x})$ depends only on the $x_1$-coordinate. Then, Eq.~(\ref{eq:sup:the minimum of the total free energy: bulk}) is simplified as \begin{eqnarray} - \frac{\partial^2}{\partial x^2_1}\varphi^a(\bm{x}) + \big(r_0+ \frac{g}{2} I_b\big) \varphi^a(\bm{x}) = 0, \end{eqnarray} and the general solution is immediately obtained as \begin{eqnarray} \varphi^a(\bm{x}) &=& A_1^a e^{\sqrt{r_0+gI_b/2} x_1} + A_2^a e^{-\sqrt{r_0+gI_b/2} x_1}. \end{eqnarray} By imposing the periodic boundary condition, $I_b$ is calculated as \begin{eqnarray} I_b = \left\{ \begin{array}{ll} 0 & {\rm for} \ r_0 \geq 0, \\ -\frac{2}{g} \big(r_0 + \frac{(2n\pi)^2}{L_1^2} \big) & {\rm for} \ r_0 < 0, \end{array}\right. \end{eqnarray} where $n=1,2,\cdots$. The constants $(A_1^a,A_2^a)$ are determined by the condition Eq.~(\ref{eq:sup:assumption for bulk solution O(2) to O(1)}). The final expression of twisted solutions is given by \begin{eqnarray} \left(\begin{array}{ll} \varphi^1(\bm{x}), & \varphi^2(\bm{x}) \end{array}\right) = \left( \begin{array}{ll} a_n\cos\big(\frac{2n\pi}{L_1} x_1\big), & a_n\sin\big(\frac{2n\pi}{L_1} x_1\big) \end{array}\right) \label{eq:sup:solution of bulk equation: O(0)} \end{eqnarray} with \begin{eqnarray} a_n = \sqrt{-\frac{2r_0}{g}-\frac{2(2n\pi)^2}{gL_1^2}} \end{eqnarray} for $r_0<0$ and $\varphi^1(\bm{x})=\varphi^2(\bm{x})=0$ for $r_0\geq0$. We here notice that the total free energy $\Phi_b[\bm{\varphi}]$ increases with increasing $n$. Therefore, we have the twisted solution with $n=1$ as a candidate for minimizing the total free energy $\Phi_b[\bm{\varphi}]$. From the above calculation, we find that the bulk long-range order occurs for $r_0<0$ regardless of the presence or absence of conservation law. 
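That Eq.~(\ref{eq:sup:solution of bulk equation: O(0)}) indeed solves Eq.~(\ref{eq:sup:the minimum of the total free energy: bulk}) can be checked by direct substitution. With $q_{n}=2n\pi/L_1$ the ansatz gives $-\Delta\varphi^a = q_n^2\varphi^a$ and $|\bm{\varphi}|^2=a_n^2$, so that the equation reduces to \begin{eqnarray} \Big(q_n^2 + r_0 + \frac{g}{2}a_n^2\Big)\varphi^a(\bm{x}) = 0, \end{eqnarray} which is satisfied precisely when $a_n^2 = -\frac{2}{g}\big(r_0+q_n^2\big)$, in agreement with the expression for $a_n$ given above. 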
For the non-conserved case, it is trivial that Eq.~(\ref{eq:sup:solution of bulk equation: model A}) gives the global minimum of $\Phi_b[\bm{\varphi}]$. In contrast, for the conserved case, we need to compare $\Phi_b[\bm{\varphi}]$ for the two solutions, the domain-wall and twisted solutions. For the domain-wall solution, the domain wall gives an extra free energy that is proportional to its area $O(L_2 L_3)$. In contrast, for the twisted solution with $n=1$, the extra free energy comes from the spatial variation of the order parameter, which is estimated as $O(L_2 L_3/L_1)$. Therefore, when the system size is sufficiently large, the twisted solution Eq.~(\ref{eq:sup:solution of bulk equation: O(0)}) with $n=1$ is realized in the ordered phase. \subsection{Surface long-range order} We study the surface long-range order for $c_0>0$. For this purpose, $r_0$ is chosen to be positive ($r_0>0$) so that the regions sufficiently far from the $x_2=0$ plane remain disordered. We also assume that the ultraviolet cutoff along the $x_2$-axis, $a_2^{\rm uv}$, is much smaller than $r_0^{-1/2}$. We show that the surface long-range order occurs in equilibrium at $T=0$. Here, the surface long-range order is identified by two properties: (i) the long-range order on the $x_2=0$ plane and (ii) the exponential decay into the bulk. The state that minimizes the total free energy $\Phi_b[\bm{\varphi}]+\Phi_s[\bm{\varphi}]$ is given by~\cite{sup:lubensky1975critical} \begin{eqnarray} - \Delta \varphi^a(\bm{x}) + r_0 \varphi^a(\bm{x}) + \frac{g}{2} |\bm{\varphi}(\bm{x})|^2 \varphi^a(\bm{x}) = 0 \label{eq:sup:zero temperature theory:bulk} \end{eqnarray} with the following boundary conditions on the $x_2=0$ plane \begin{eqnarray} \frac{\partial\bm{\varphi}(+0)}{\partial x_2} = - \frac{c_0}{2}\bm{\varphi}(+0), \label{eq:sup:zero temperature theory:boundary 1}\\[3pt] \frac{\partial \bm{\varphi}(-0)}{\partial x_2} = \frac{c_0}{2} \bm{\varphi}(-0). 
\label{eq:sup:zero temperature theory:boundary 2} \end{eqnarray} Here, we note that the additional free energy $\Phi_s[\bm{\varphi}]$ yields the boundary conditions at $x_2=0$. Because we are interested in the regime where the bulk is still disordered, we impose two additional boundary conditions: \begin{eqnarray} \bm{\varphi}(\pm \infty) = \bm{0}, \label{eq:sup:zero temperature theory:boundary 3} \\[3pt] \frac{\partial \bm{\varphi}(\pm \infty) }{\partial x_2}= \bm{0}, \label{eq:sup:zero temperature theory:boundary 4} \end{eqnarray} where we have taken the large system-size limit $L_2\to \infty$. These boundary conditions mean that the bulk remains disordered. We then assume that $\big|\bm{\varphi}(\bm{x})\big|^2$ depends only on the $x_2$-coordinate: \begin{eqnarray} |\bm{\varphi}(\bm{x})|^2 = I(x_2). \label{eq:sup:assumption of zero temperature theory: mod} \end{eqnarray} This assumption corresponds to Eq.~(\ref{eq:sup:assumption for bulk solution O(2) to O(1)}) in the bulk long-range order. By multiplying Eq.~(\ref{eq:sup:zero temperature theory:bulk}) by $\partial \varphi^a /\partial x_2$, we obtain \begin{eqnarray} \hspace{-1cm}-\frac{1}{2}\frac{\partial^2}{\partial x_1^2}\Big(\frac{\partial}{\partial x_2}(\varphi^a)^2\Big) - \frac{1}{2}\frac{\partial}{\partial x_2}\Big(\frac{\partial \varphi^a}{\partial x_2}\Big)^2 + \frac{1}{2}r_0 \frac{\partial}{\partial x_2}(\varphi^a)^2 + \frac{g}{2} I(x_2) \frac{\partial}{\partial x_2}(\varphi^a)^2 = 0. \end{eqnarray} By taking the sum with respect to $a$ and integrating from $x_2=+0$ to $x_2=\infty$, $I(x_2)$ at the $x_2=0$ plane is calculated as \begin{eqnarray} \frac{1}{2}\biggl(\frac{c_0^2}{4}-r_0\biggr) I(+0) - \frac{g}{4} I(+0)^2 = 0. \label{eq:sup:zero temperature theory:full mod} \end{eqnarray} From this equation, we obtain \begin{eqnarray} I(+0) = \left\{ \begin{array}{ll} 0 & {\rm for} \ c_0\leq2\sqrt{r_0}, \\ \frac{2}{g}\biggl(\frac{c_0^2}{4}-r_0\biggr) & \mbox{for} \ c_0>2\sqrt{r_0}. \\ \end{array}\right. 
\label{surface order1} \end{eqnarray} Accordingly, the phase transition on the $x_2=0$ plane is observed at $c_0=2\sqrt{r_0}$. We note that this result coincides with that in the large-$N$ limit, Eq.~(5), if the ultraviolet cutoff $a_2^{\rm uv}$ is taken to be infinitesimal~\cite{sup:lubensky1975critical}. In order to show that the order is localized near the $x_2=0$ plane, we return to Eq.~(\ref{eq:sup:zero temperature theory:bulk}). Because the boundary condition Eq.~(\ref{eq:sup:zero temperature theory:boundary 3}) implies that the order parameter is small sufficiently far from the $x_2=0$ plane, we can neglect the non-linear term in Eq.~(\ref{eq:sup:zero temperature theory:bulk}) there. Then, Eq.~(\ref{eq:sup:zero temperature theory:bulk}) is rewritten as \begin{eqnarray} - \frac{\partial^2}{\partial x_2^2} \varphi^a(\bm{x}) + r_0 \varphi^a(\bm{x}) = 0, \end{eqnarray} and we immediately find that $\varphi^a(\bm{x})$ decays to $0$ with the correlation length $\xi=r_0^{-1/2}$. By combining this fact with Eq.~(\ref{surface order1}), we conclude that the surface long-range order is realized at $T=0$. We then consider the order parameter profile in the $x_2=0$ plane. It is understood from an argument similar to that for the bulk long-range order. For the non-conserved case, because there is no constraint for the order parameter, the order parameter points in the same direction in the true equilibrium state, which is described by \begin{eqnarray} \left(\begin{array}{ll} \varphi^1(x_1,x_2=0,x_3), & \varphi^2(x_1,x_2=0,x_3) \end{array}\right) = \left( \begin{array}{ll} \sqrt{\frac{2}{g}\big(\frac{c_0^2}{4}-r_0\big)}, & 0 \end{array}\right), \end{eqnarray} where the direction of order is assumed to be parallel to the $\varphi^1$-direction. In contrast, for the conserved case, we need to compare the domain-wall and twisted solutions. For the domain-wall solution, the direction of order is fixed (e.g. 
$\varphi^1$) and the magnitude of the order parameter is given by $\sqrt{I(+0)}$. Therefore, in order to satisfy the conservation law, there must be order-parameter flips somewhere in the $x_2=0$ plane (cf.\ Eq.~(\ref{eq:sup:solution of bulk equation: model B})), which yield an extra free energy proportional to $O(L_3)$. For the twisted solution, the order-parameter profile in the $x_2=0$ plane is given by \begin{eqnarray} \left(\begin{array}{ll} \varphi^1(x_1,x_2=0,x_3), & \varphi^2(x_1,x_2=0,x_3) \end{array}\right) = \left( \begin{array}{ll} \sqrt{\frac{2}{g}\big(\frac{c_0^2}{4}-r_0\big)}\cos\big(\frac{2n\pi}{L_1} x_1\big), & \sqrt{\frac{2}{g}\big(\frac{c_0^2}{4}-r_0\big)}\sin\big(\frac{2n\pi}{L_1} x_1\big) \end{array}\right). \label{eq:sup:solution of surface equation: O(0)} \end{eqnarray} Then, we find that the spatial variation of the order parameter yields an extra free energy proportional to $O(L_3/L_1)$. By comparing these two solutions, we conclude that the global minimum is given by Eq.~(\ref{eq:sup:solution of surface equation: O(0)}). This result corresponds to Fig.~2 in the main text. In summary, we present the phase diagram in Fig.~\ref{fig:Phase diagram at zero temperature}, which is often found in the literature~\cite{sup:binder1983phase}. \begin{figure}[h] \begin{center} \includegraphics[width=8.6cm]{PhaseDiagramAtZeroTemperature.eps} \end{center} \vspace{-0.5cm} \caption{Phase diagram of our model at $T=0$ in equilibrium.} \label{fig:Phase diagram at zero temperature} \end{figure} \section{Detailed analysis of linear fluctuations} \label{sec:sup2} In the main text, we used the results obtained by analyzing the linearized equation: \begin{eqnarray} \Big(\frac{\partial }{\partial t} + \dot{\gamma} x_2\frac{\partial}{\partial x_1} \Big)\xi^a(\bm{x},t) = D_0 \Delta \big(-\Delta + r - c(x_2) \big) \xi^a(\bm{x},t) - \nabla \cdot \bm{f}^a(\bm{x},t). \label{eq:sup:linearized equation of motion: appendix} \end{eqnarray} We here give their derivations. 
The basic quantity of interest is the equal-time correlation function in the steady state, which is defined by \begin{eqnarray} C^{r,c()}_{l}(\bm{x},\bm{y};\dot{\gamma}) = \lim_{t \to \infty} C^{r,c()}_{l}(\bm{x},\bm{y},t;\dot{\gamma}) \end{eqnarray} with \begin{eqnarray} C^{r,c()}_{l}(\bm{x},\bm{y},t;\dot{\gamma}) = \frac{1}{N}\big\langle\bm{\xi}(\bm{x},t)\cdot \bm{\xi}(\bm{y},t) \big\rangle^{r,c()}_{l} . \end{eqnarray} When the focus is restricted to the steady state with $c(x_2)=0$, the correlation function satisfies the translational invariance \begin{eqnarray} C^{r,0}_{l}(\bm{x},\bm{y};\dot{\gamma}) = C^{r,0}_{l}(\bm{x}+\bm{a},\bm{y}+\bm{a};\dot{\gamma}), \label{eq:sup:translational invariance for correlation function: c=0} \end{eqnarray} where $\bm{a}$ is any vector. This result is derived directly from Galilean invariance of Eq.~(\ref{eq:sup:linearized equation of motion: appendix}) with $c(x_2)=0$. The detailed discussion was given in Ref.~\cite{sup:onuki1979nonequilibrium}. Based on this property, we introduce the Fourier transform $C^{r,0}_{l}(\bm{k};\dot{\gamma})$ with respect to all the directions as \begin{eqnarray} C^{r,0}_{l}(\bm{x},\bm{y};\dot{\gamma}) =\int \frac{d^3\bm{k}}{(2\pi)^3}C^{r,0}_{l}(\bm{k};\dot{\gamma})e^{i\bm{k}\cdot (\bm{x}-\bm{y})}. \end{eqnarray} While Eq.~(\ref{eq:sup:translational invariance for correlation function: c=0}) does not hold for $c(x_2)\neq 0$, there remains the translational symmetry along the $x_1$- and $x_3$-directions. Then, we introduce the Fourier transform with respect to the remaining directions: \begin{eqnarray} C^{r,c()}_{l}(\bm{x},\bm{y};\dot{\gamma}) =\int \frac{d^2\bm{k}_{\parallel}}{(2\pi)^2}C^{r,c()}_{l}(\bm{k}_{\parallel},x_2,y_2;\dot{\gamma})e^{i\bm{k}_{\parallel}\cdot (\bm{x}_{\parallel}-\bm{y}_{\parallel})}, \label{eq:sup:Fourier transform w.r.t. 13direction} \end{eqnarray} where $\bm{x}_{\parallel}=(x_1,x_3)$ and $\bm{k}_{\parallel}=(k_1,k_3)$. 
\subsection{Derivation of Eq.~(15)} The equation for $C^{r,c()}_{l}(\bm{x},\bm{y};\dot{\gamma})$ is derived from Eq.~(\ref{eq:sup:linearized equation of motion: appendix}) as \begin{eqnarray} \Big\{\dot{\gamma} x_2\frac{\partial}{\partial x_1} + \dot{\gamma} y_2\frac{\partial}{\partial y_1} + D_0 \Delta_x\Big(\Delta_x - r + c(x_2) \Big) + D_0 \Delta_y\Big(\Delta_y - r + c(y_2)\Big) \Big\} C^{r,c()}_{l}(\bm{x},\bm{y};\dot{\gamma}) = -D_0 T \big(\Delta_x + \Delta_y\big)\delta(\bm{x}-\bm{y}), \nonumber \\ \label{eq:sup:equation for correlation function: intermediate} \end{eqnarray} where $\Delta_x$ is the Laplacian with respect to $\bm{x}$. Noting $C_{l}^{r,c()}(\bm{x},\bm{y};\dot{\gamma}) = C_{l}^{r,c()}(\bm{y},\bm{x};\dot{\gamma})$, we simplify Eq.~(\ref{eq:sup:equation for correlation function: intermediate}) as \begin{eqnarray} \Big\{\dot{\gamma} x_2\frac{\partial}{\partial x_1} + D_0 \Delta_x \Big(\Delta_x - r + c(x_2)\Big) \Big\} C^{r,c()}_{l}(\bm{x},\bm{y};\dot{\gamma}) = -D_0 T \Delta_x\delta(\bm{x}-\bm{y}) . \label{eq:sup:equation for correlation function: c>0} \end{eqnarray} Now, we introduce the differential operator: \begin{eqnarray} \mathcal{L}_0(\bm{x},\bm{x}') &=& \Big[\dot{\gamma} x_2 \frac{\partial}{\partial x_1} + D_0 \Delta_x\Big(\Delta_x - r \Big)\Big] \delta(\bm{x}-\bm{x}') , \label{eq:sup:differential oparator1} \end{eqnarray} and rewrite Eq.~(\ref{eq:sup:equation for correlation function: c>0}) as \begin{eqnarray} \int d^3\bm{x}' \mathcal{L}_0(\bm{x},\bm{x}') C^{r,c()}_{l}(\bm{x}',\bm{y};\dot{\gamma}) = - D_0 \Delta_x c(x_2) C^{r,c()}_{l}(\bm{x},\bm{y};\dot{\gamma}) - D_0 T \Delta_x \delta(\bm{x}-\bm{y}) . \label{eq:sup:equation for correlation function: c>0: rewrite} \end{eqnarray} Clearly, $\mathcal{L}_0(\bm{x},\bm{y})$ is connected with $C^{r,0}_{l}(\bm{x},\bm{y};\dot{\gamma})$ as \begin{eqnarray} \int d^3 \bm{x}'\mathcal{L}_0(\bm{x},\bm{x}')C^{r,0}_{l}(\bm{x}',\bm{y};\dot{\gamma}) = -D_0T \Delta_x \delta(\bm{x}-\bm{y}). 
\label{eq:sup:equation for correlation function: c=0} \end{eqnarray} By interpreting Eq.~(\ref{eq:sup:equation for correlation function: c=0}) from a different viewpoint, we find that $C^{r,0}_{l}(\bm{x},\bm{y};\dot{\gamma})$ is identified with the inverse operator of $\mathcal{L}_0(\bm{x},\bm{y})$. Based on this observation, by acting the inverse operator $C^{r,0}_{l}(\bm{x},\bm{y};\dot{\gamma})$ on Eq.~(\ref{eq:sup:equation for correlation function: c>0: rewrite}), we obtain \begin{eqnarray} C^{r,c()}_{l}(\bm{x},\bm{y};\dot{\gamma}) = C^{r,0}_{l}(\bm{x},\bm{y};\dot{\gamma}) + \frac{1}{T} \int d^3 \bm{x}' C^{r,0}_{l}(\bm{x},\bm{x}';\dot{\gamma}) c(x'_2) C^{r,c()}_{l}(\bm{x}',\bm{y};\dot{\gamma}). \label{eq:sup:act inverse EOM of ETCF: transient} \end{eqnarray} This equation connects $C^{r,0}_{l}(\bm{x},\bm{y};\dot{\gamma})$ with $C^{r,c()}_{l}(\bm{x},\bm{y};\dot{\gamma})$. When $c(x_2)$ and $C^{r,c()}_{l}(\bm{x},\bm{y};\dot{\gamma})$ have simple forms, we can derive more convenient relations for $C^{r,c()}_{l}(\bm{x},\bm{y};\dot{\gamma})$ from Eq.~(\ref{eq:sup:act inverse EOM of ETCF: transient}). The simplest example is $c(x_2)=c_0 \delta(x_2)$, for which we can immediately derive the simpler equation for $C^{r,c()}_{l}(\bm{k}_{\parallel},x_2=0,y_2=0,\dot{\gamma})$ as \begin{eqnarray} C^{r,c()}_{l}(\bm{k}_{\parallel},0,0;\dot{\gamma}) = \frac{TC^{r,0}_{l}(\bm{k}_{\parallel},0,0;\dot{\gamma})}{T-c_0C^{r,0}_{l}(\bm{k}_{\parallel},0,0;\dot{\gamma})}. \label{eq:sup:expression of correlation function of x2=0: delta} \end{eqnarray} As another example, we consider that $c(x_2)$ and $C^{r,c()}_{l}(\bm{k}_{\parallel},x_2,0;\dot{\gamma})$ have exponential forms: \begin{eqnarray} c(x_2) &=& c_0 \delta(x_2) + \sum_{i=1}^n c^{\rm (i)}_1e^{-x_2/\ell^{\rm (i)}_1}, \label{eq:sup:exponential ansatz of c}\\ C^{r,c()}_{l}(\bm{k}_{\parallel},x_2,0;\dot{\gamma}) &=& C^{r,c()}_{l}(\bm{k}_{\parallel},0,0;\dot{\gamma}) e^{-x_2/\ell_2[c()]}. 
\label{eq:sup:exponential ansatz of correlation function} \end{eqnarray} By substituting Eqs.~(\ref{eq:sup:exponential ansatz of c}) and (\ref{eq:sup:exponential ansatz of correlation function}) into Eq.~(\ref{eq:sup:act inverse EOM of ETCF: transient}), we obtain \begin{eqnarray} C^{r,c()}_{l}(\bm{k}_{\parallel},0,0;\dot{\gamma}) = \frac{TC^{r,0}_{l}(\bm{k}_{\parallel},0,0;\dot{\gamma})}{T-\bar{c}C^{r,0}_{l}(\bm{k}_{\parallel},0,0;\dot{\gamma})} \label{eq:sup:expression of correlation function of x2=0: exponential} \end{eqnarray} with \begin{eqnarray} \bar{c} &=& c_0 + \sum_{i=1}^n c_1^{\rm (i)}\ell^{\rm (i)}_{\rm sum} ,\nonumber \\[3pt] \ell^{\rm (i)}_{\rm sum} &=& \ell_1^{\rm (i)}+\ell_2(0)+\ell_2[c()]. \end{eqnarray} The case $n=1$ corresponds to Eq.~(15) in the main text. \subsection{Derivation of Eq.~(27)} We here study the case $c(x_2)=0$, which corresponds to the dynamics in the bulk region. The starting point of our analysis is an exact integral expression for $C^{r,0}_{l}(\bm{k};\dot{\gamma})$: \begin{eqnarray} C^{r,0}_{l}(\bm{k};\dot{\gamma}) = TD_0 \int_0^{\infty} ds e^{-D_0\int_0^s d\lambda |\bm{\kappa}_{\lambda}|^{2}( |\bm{\kappa}_{\lambda}|^{2} + r)} |\bm{\kappa}_s|^{2} \label{eq:sup:formal expression of correlation function in the bulk: appendix} \end{eqnarray} with $\bm{\kappa}_\lambda = (k_1,k_2+\dot{\gamma} \lambda k_1/2,k_3)$. This type of expression was initially derived by Onuki and Kawasaki~\cite{sup:onuki1979nonequilibrium}, and has been widely used in the analysis of fluctuations in the presence of shear flow. We recently summarized its compact derivation in Ref.~\cite{sup:nakano2021long}. Therefore, we omit the details of the derivation and study the asymptotic expression in the long-wavelength region. 
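As a consistency check of this integral representation, the $s$-integral can be evaluated numerically: in the equilibrium limit $\dot{\gamma}\to 0$ it must reduce to the Ornstein-Zernike form $T/(r+|\bm{k}|^2)$, and for $k_1 k_2>0$ a finite shear rate suppresses the correlations. A sketch with illustrative parameter values (not taken from the paper):

```python
import numpy as np
from scipy.integrate import quad

def C_bulk(k, r, D0, T, gdot):
    # Direct evaluation of the s-integral; the inner lambda-integral in the
    # exponent is also done numerically (illustrative sketch, not optimized)
    k1, k2, k3 = k
    kappa2 = lambda lam: k1**2 + (k2 + 0.5 * gdot * lam * k1)**2 + k3**2
    def exponent(s):
        val, _ = quad(lambda lam: kappa2(lam) * (kappa2(lam) + r), 0.0, s)
        return D0 * val
    val, _ = quad(lambda s: D0 * T * kappa2(s) * np.exp(-exponent(s)),
                  0.0, np.inf)
    return val

r, D0, T = 1.0, 1.0, 1.0
k = (0.3, 0.2, 0.1)
eq_value = C_bulk(k, r, D0, T, 0.0)           # equilibrium limit
oz_value = T / (r + sum(ki**2 for ki in k))   # Ornstein-Zernike form
sheared = C_bulk(k, r, D0, T, 2.0)            # suppressed for k1*k2 > 0
print(eq_value, oz_value, sheared)
```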
First, by expanding the $\lambda$-integral of Eq.~(\ref{eq:sup:formal expression of correlation function in the bulk: appendix}), we obtain \begin{eqnarray} C^{r,0}_{l}(\bm{k};\dot{\gamma}) = D_0 T \int_0^{\infty} ds \Big[k_1^2 + (k_2+\frac{1}{2} \dot{\gamma} s k_1)^2 + k_3^2 \Big] e^{-F_B(s;\bm{k})} \label{eq:sup:correlation function in bulk for conserved non-equilibrium: explicit form} \end{eqnarray} with \begin{eqnarray} F_B(s;\bm{k}) &=& D_0 \Big[s r \Big( |\bm{k}|^2 + \frac{1}{2} s \dot{\gamma} k_1 k_2 + \frac{1}{12} s^2 \dot{\gamma}^2 k_1^2 \Big) \nonumber \\[3pt] &+& s \Big(|\bm{k}|^4 + s k_1 k_2 (\dot{\gamma} |\bm{k}|^2 + \frac{1}{8} \dot{\gamma}^3 s^2 k_1^2) + \frac{1}{6} \dot{\gamma}^2 s^2 k_1^2 (|\bm{k}|^2 + 2 k_2^2) + \frac{1}{80} s^4 \dot{\gamma}^4 k_1^4\Big)\Big]. \label{eq:sup:GammaB integral} \end{eqnarray} Noting \begin{eqnarray} \frac{\partial}{\partial s} e^{-F_B(s;\bm{k})} = -D_0 |\bm{\kappa}_{s}|^{2}( |\bm{\kappa}_{s}|^{2} + r)e^{-F_B(s;\bm{k})}, \end{eqnarray} we have \begin{eqnarray} C_l^{r,0}(\bm{k},\dot{\gamma}) = - T \int_0^{\infty} ds \frac{1}{|\bm{\kappa}_{s}|^{2}+r} \frac{\partial}{\partial s} e^{-F_B(s;\bm{k})}. \label{eq:sup:correlation function in bulk for conserved non-equilibrium: transit1} \end{eqnarray} Using integration by parts, Eq.~(\ref{eq:sup:correlation function in bulk for conserved non-equilibrium: transit1}) is rewritten as \begin{eqnarray} C^{r,0}_{l}(\bm{k};\dot{\gamma}) = \frac{T}{r + |\bm{k}|^2} - \dot{\gamma} T \int_0^{\infty} ds e^{-F_B(s;\bm{k})} \frac{ k_1(k_2 + \frac{1}{2} s \dot{\gamma} k_1) }{\Big(r + |\bm{k}|^2 + s \dot{\gamma} k_1 k_2 + \frac{1}{4} s^2 \dot{\gamma}^2 k_1^2\Big)^{2}} . 
\label{eq:sup:correlation function in bulk for conserved non-equilibrium: another form} \end{eqnarray} For $|\bm{k}| \ll r^{1/2}$, we can neglect the terms of order $|\bm{k}|^4$ in Eq.~(\ref{eq:sup:correlation function in bulk for conserved non-equilibrium: another form}) and obtain the approximate form \begin{eqnarray} C^{r,0}_{l}(\bm{k};\dot{\gamma}) \simeq \frac{T}{r + |\bm{k}|^2} - \frac{\dot{\gamma} T}{r^2}\int_0^{\infty} ds\Big(k_1 k_2 + \frac{1}{2} s \dot{\gamma} k_1^2 \Big) e^{-D_0 s r \Big( |\bm{k}|^2 + \frac{1}{2} s \dot{\gamma} k_1 k_2 + \frac{1}{12} s^2 \dot{\gamma}^2 k_1^2 \Big)} . \label{eq:sup:correlation function in bulk for conserved non-equilibrium: approximated form} \end{eqnarray} To obtain the asymptotic behavior of $C^{r,0}_{l}(\bm{k};\dot{\gamma})$ from this expression, we divide the $\bm{k}$-region into two regions \begin{eqnarray} {\rm (i)} \ \ \frac{1}{12} \dot{\gamma}^2 k_1^2 \ll D_0^3 r^3 |\bm{k}|^6, \nonumber \\[3pt] {\rm (ii)} \ \ \frac{1}{12} \dot{\gamma}^2 k_1^2 \gg D_0^3 r^3 |\bm{k}|^6. \end{eqnarray} In region (i), the dominant contribution of the $s$-integral arises from \begin{eqnarray} s = \frac{1}{D_0 r |\bm{k}|^2}, \end{eqnarray} where the integrand is approximated as \begin{eqnarray} \Big(k_1 k_2 + \frac{1}{2} s \dot{\gamma} k_1^2 \Big) e^{-D_0 s r \Big( |\bm{k}|^2 + \frac{1}{2} s \dot{\gamma} k_1 k_2 + \frac{1}{12} s^2 \dot{\gamma}^2 k_1^2 \Big)} \simeq \Big(k_1 k_2 + \frac{1}{2} s \dot{\gamma} k_1^2 \Big) e^{- D_0 s r |\bm{k}|^2}. \end{eqnarray} Then, Eq.~(\ref{eq:sup:correlation function in bulk for conserved non-equilibrium: approximated form}) is approximately rewritten as \begin{eqnarray} C^{r,0}_{l}(\bm{k};\dot{\gamma}) \simeq \frac{T}{r} - \frac{T}{r^2} |\bm{k}|^2 - \frac{\dot{\gamma}T}{D_0 r^3} \frac{k_1 k_2}{|\bm{k}|^2} - \frac{\dot{\gamma}^2T}{2D_0^2 r^4} \frac{k_1^2}{ |\bm{k}|^4} -\cdots. 
\label{eq:sup:asmptotic behavior in region i: model B bulk} \end{eqnarray} In region (ii), the dominant contribution of the $s$-integral arises from \begin{eqnarray} s = \Big(\frac{12}{D_0 \dot{\gamma} r k_1^2}\Big)^{\frac{1}{3}}, \end{eqnarray} where the integrand is approximated as \begin{eqnarray} \Big(k_1 k_2 + \frac{1}{2} s \dot{\gamma} k_1^2 \Big) e^{-D_0 s r \Big( |\bm{k}|^2 + \frac{1}{2} s \dot{\gamma} k_1 k_2 + \frac{1}{12} s^2 \dot{\gamma}^2 k_1^2 \Big)} \simeq \Big(k_1 k_2 + \frac{1}{2} s \dot{\gamma} k_1^2 \Big) e^{-\frac{1}{12} D_0 r s^3 \dot{\gamma}^2 k_1^2}. \end{eqnarray} Then, Eq.~(\ref{eq:sup:correlation function in bulk for conserved non-equilibrium: approximated form}) is approximately rewritten as \begin{eqnarray} C^{r,0}_{l}(\bm{k};\dot{\gamma}) \simeq \frac{T}{r} - \frac{12^{\frac{2}{3}}}{6} \Gamma\Big(\frac{2}{3}\Big) T \frac{\dot{\gamma}^{\frac{2}{3}}}{r^{\frac{8}{3}}D_0^{\frac{2}{3}}} |k_1|^{\frac{2}{3}} - 12^{\frac{1}{3}} \Gamma\Big(\frac{4}{3}\Big) T \frac{\dot{\gamma}^{\frac{1}{3}}}{r^{\frac{7}{3}}D_0^{\frac{1}{3}}} |k_1|^{\frac{1}{3}}k_2 \cdots . \label{eq:sup:asmptotic behavior in region ii: model B bulk} \end{eqnarray} The behaviors of Eqs.~(\ref{eq:sup:asmptotic behavior in region i: model B bulk}) and (\ref{eq:sup:asmptotic behavior in region ii: model B bulk}) are understood as two limiting cases of the following equation \begin{eqnarray} C^{r,0}_{l}(\bm{k};\dot{\gamma}) = \frac{T}{r + |\bm{k}|^2 + c_B \{(\dot{\gamma}/r D_0)|k_1|\}^{2/3}+\cdots}, \label{eq:sup:correlation function: c0=0: conserved} \end{eqnarray} where $c_B\simeq1.18$. If we use the typical length scales $l_B=(D_0/\dot{\gamma})^{1/4}$ and $\xi_B=\sqrt{1/r}$, Eq.~(\ref{eq:sup:correlation function: c0=0: conserved}) is rewritten as \begin{eqnarray} C^{r,0}_{l}(\bm{k};\dot{\gamma}) = \frac{T}{r + |\bm{k}|^2 + c_B \{(\xi^2_B/l^4_B)|k_1|\}^{2/3}+\cdots}. 
\end{eqnarray} Here, $l_B$ is the characteristic length of the flow, and $\xi_B$ is the correlation length of fluctuations in the equilibrium system. To check the validity of the above calculation, we numerically integrate Eq.~(\ref{eq:sup:correlation function in bulk for conserved non-equilibrium: another form}). The result is presented in Fig.~\ref{fig: bulk correlation function: B sr=0.3 r=10}. The parameter settings are the same as in Fig.~3. We find good agreement between Eq.~(\ref{eq:sup:correlation function: c0=0: conserved}) and the numerical result. \begin{figure}[h] \begin{center} \includegraphics[width=8cm]{correlations.inbulk.B.eps} \end{center} \vspace{-0.5cm} \caption{Red: $1.0/C^{r,0}_l(k_1,k_2=k_3=0;\dot{\gamma})-r$ vs. $k_1$. Blue: $1.0/C^{r,0}_l(k_1=0,k_2,k_3=0;\dot{\gamma})-r$ vs. $k_2$. The black lines are, respectively, given by $1.0/C^{r,0}_l(k_1,k_2=k_3=0;\dot{\gamma})-r = c_B \{(\xi^2_B/l^4_B)|k_1|\}^{2/3}$, and $1.0/C^{r,0}_l(k_1=0,k_2,k_3=0;\dot{\gamma})-r = k_2^2$.} \label{fig: bulk correlation function: B sr=0.3 r=10} \end{figure} \subsection{Derivation of Eq.~(24)} As previously shown, when $c(x_2)$ and $C^{r,c()}_{l}(\bm{x},\bm{y};\dot{\gamma})$ satisfy Eqs.~(\ref{eq:sup:exponential ansatz of c}) and (\ref{eq:sup:exponential ansatz of correlation function}), $C^{r,c()}_{l}(\bm{k}_{\parallel},0,0;\dot{\gamma})$ is written as Eq.~(\ref{eq:sup:expression of correlation function of x2=0: exponential}). Here, we derive the asymptotic expression of $C^{r,c()}_{l}(\bm{k}_{\parallel},0,0;\dot{\gamma})$ from Eq.~(\ref{eq:sup:expression of correlation function of x2=0: exponential}). Because Eq.~(\ref{eq:sup:expression of correlation function of x2=0: exponential}) connects $C^{r,c()}_{l}(\bm{k}_{\parallel},0,0;\dot{\gamma})$ with $C^{r,0}_{l}(\bm{k}_{\parallel},0,0;\dot{\gamma})$, we first calculate the asymptotic form of $C^{r,0}_{l}(\bm{k}_{\parallel},0,0;\dot{\gamma})$ for $|\bm{k}_\parallel|\ll r^{1/2}$. 
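Returning briefly to the bulk result, the constant $c_B\simeq 1.18$ quoted above follows from matching the $|k_1|^{2/3}$ coefficient of region (ii) against the interpolating form, which gives $c_B = 12^{2/3}\,\Gamma(2/3)/6$; a one-line check:

```python
from math import gamma

# Matching -(T/r^2) c_B (gdot/(r D0))^{2/3} |k1|^{2/3} against the region-(ii)
# coefficient -(12^{2/3}/6) Gamma(2/3) T gdot^{2/3} / (r^{8/3} D0^{2/3})
c_B = 12**(2 / 3) * gamma(2 / 3) / 6
print(c_B)  # ≈ 1.18
```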
By taking the Fourier transform of Eq.~(\ref{eq:sup:correlation function in bulk for conserved non-equilibrium: approximated form}) with respect to the $x_2$-coordinate and substituting $x_2=0$, we obtain \begin{eqnarray} C^{r,0}_{l}(\bm{k}_{\parallel},0,0;\dot{\gamma}) \simeq \int_{-\infty}^{\infty}\frac{dk_2}{2\pi}\frac{T}{r + \bm{k}^2} - \frac{\dot{\gamma} T}{r^2} \int_{-\infty}^{\infty}\frac{dk_2}{2\pi}\int_0^{\infty}ds \Big(k_1k_2+\frac{1}{2}\dot{\gamma} s k_1^2\Big) e^{-D_0 r\big(s|\bm{k}|^2+\frac{1}{2}\dot{\gamma} s^2 k_1 k_2 + \frac{1}{12}\dot{\gamma}^2 s^3 k_1^2 \big)} .\nonumber \\ \label{eq:sup:correlation function: under shear c=0 model b:mod1} \end{eqnarray} Noting that Eq.~(\ref{eq:sup:correlation function in bulk for conserved non-equilibrium: approximated form}) holds for $|\bm{k}|\ll r^{1/2}$, we find that this expression is valid for $|\bm{k}_{\parallel}| \ll r^{1/2}$. The $k_2$-integral in Eq.~(\ref{eq:sup:correlation function: under shear c=0 model b:mod1}) is explicitly calculated as \begin{eqnarray} C^{r,0}_{l}(\bm{k}_{\parallel},0,0;\dot{\gamma}) &\simeq& \frac{T}{2}\sqrt{\frac{1}{r + \bm{k}_{\parallel}^2}} - \frac{T}{8\sqrt{\pi}} \frac{\dot{\gamma}^2 k_1^2}{\sqrt{D_0 r^5}} \int_0^{\infty}ds s^{\frac{1}{2}} e^{-D_0 r \big(s\bm{k}_{\parallel}^2 + \frac{1}{48}\dot{\gamma}^2 s^3 k_1^2 \big)} . \label{eq:sup:correlation function: under shear c=0 model b:mod2} \end{eqnarray} The $s$-integral is calculated in a similar way as in the bulk. 
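The $k_2$-integral of the leading Ornstein-Zernike term can be verified numerically (illustrative values):

```python
import numpy as np
from scipy.integrate import quad

# Check: ∫ dk2/(2π) T/(r + |kpar|^2 + k2^2) = (T/2)(r + |kpar|^2)^(-1/2)
T, r, kpar2 = 1.0, 1.0, 0.25   # illustrative values
lhs, _ = quad(lambda k2: T / (2 * np.pi * (r + kpar2 + k2**2)),
              -np.inf, np.inf)
rhs = 0.5 * T / np.sqrt(r + kpar2)
print(lhs, rhs)
```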
We omit the details; the result is given by \begin{eqnarray} C^{r,0}_{l}(\bm{k}_{\parallel},0,0;\dot{\gamma}) = \begin{cases} \frac{T}{2\sqrt{r}} - \frac{T}{4r\sqrt{r}} |\bm{k}_{\parallel}|^2 - \frac{\dot{\gamma}T}{2\sqrt{3} D_0 r^3}|k_1| + \cdots \ \ {\rm for} \ 4\sqrt{3}D_0r |\bm{k}_{\parallel}|^3 \ll \dot{\gamma} |k_1|, \\ \frac{T}{2\sqrt{r}} - \frac{T}{4r\sqrt{r}} |\bm{k}_{\parallel}|^2 - \frac{T\dot{\gamma}^2}{16 D_0^2 r^4} \frac{k_1^2}{|\bm{k}_{\parallel}|^3} \cdots \ \ {\rm for} \ 4\sqrt{3}D_0r |\bm{k}_{\parallel}|^3 \gg \dot{\gamma} |k_1|. \end{cases} \label{eq:sup:correlation function: under shear c=0 model b:final} \end{eqnarray} Then, by substituting Eq.~(\ref{eq:sup:correlation function: under shear c=0 model b:final}) into Eq.~(\ref{eq:sup:expression of correlation function of x2=0: exponential}), we obtain \begin{eqnarray} C^{r,c()}_{l}(\bm{k}_{\parallel},0,0;\dot{\gamma}) = \frac{T}{(2\sqrt{r}-\bar{c}) + \frac{2}{\sqrt{3}}\frac{\dot{\gamma}}{r^2 D_0} |k_1| + \frac{1}{\sqrt{r}} |\bm{k}_{\parallel}|^2 + \cdots}, \end{eqnarray} where we have ignored terms of higher order than $O(\dot{\gamma})$. Finally, by setting $\bar{c}=2\sqrt{r}$, we obtain Eq.~(24). \section{Supplemental Numerical Analysis} \subsection{Renormalization of $r$} We consider the renormalization of $r$. As explained in the main text, it is calculated by self-consistently solving the following equation \begin{eqnarray} r = r_0 + g \int_{2\pi/L}^{2\pi/a^{\rm uv}} \frac{d^3\bm{k}}{(2\pi)^3} C_l^{r,0}(\bm{k};\dot{\gamma}), \label{eq:sup:self-consistent equation of r: supple} \end{eqnarray} where $a^{\rm uv} = a^{\rm uv}_1 = a^{\rm uv}_2 = a^{\rm uv}_3$. We numerically solve this equation and obtain $r$ as a function of $r_0$. The result is presented in Fig.~\ref{fig:renormalization of r}. The parameter values are chosen as $T=g=1.0$, $L_1=L_2=L_3=512.0$ and $a^{\rm uv} =1.0$. 
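A minimal sketch of such a self-consistent solution, assuming the equilibrium ($\dot{\gamma}\to 0$) form $C_l^{r,0}(\bm{k})=T/(r+|\bm{k}|^2)$ and an isotropic radial cutoff in place of the componentwise one (both simplifications made here for illustration only):

```python
import numpy as np
from scipy.integrate import quad

def renormalized_r(r0, g, T, L, a_uv, n_iter=300):
    """Damped fixed-point iteration for r = r0 + g * ∫ d^3k/(2π)^3 C(k),
    with C(k) = T/(r + k^2) and a radial cutoff 2π/L < |k| < 2π/a_uv."""
    kmin, kmax = 2 * np.pi / L, 2 * np.pi / a_uv
    r = max(r0, 0.1)                      # positive initial guess
    for _ in range(n_iter):
        integral, _ = quad(lambda k: k**2 * T / (2 * np.pi**2 * (r + k**2)),
                           kmin, kmax)
        r = 0.5 * r + 0.5 * (r0 + g * integral)   # damped update
    return r

r = renormalized_r(r0=1.0, g=1.0, T=1.0, L=512.0, a_uv=1.0)
print(r)   # fluctuations shift r above its bare value r0
```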
\begin{figure}[h] \begin{center} \includegraphics[width=8cm]{Renormalization_r_sr=1.eps} \end{center} \vspace{-0.5cm} \caption{$r$ as a function of $r_0$, which is obtained by numerically solving Eq.~(\ref{eq:sup:self-consistent equation of r: supple}).} \label{fig:renormalization of r} \end{figure} We note that the value of $r$ depends on the choice of $a^{\rm uv}$. Generally, $r$ diverges as $a^{\rm uv}$ approaches $+0$. As shown in Fig.~\ref{fig:renormalization of r}, the renormalization effect of $r$ is rather small when $a^{\rm uv}$ is set to $1.0$. \subsection{How to draw the guideline in Fig.~4} In the main text, we derived the analytical expression Eq.~(26) for the critical point $c_0^{sc}(r_0;\dot{\gamma},T)$. This expression is valid for $\dot{\gamma} \to +0$ and $a^{\rm uv}_2 \to 0$. However, we drew the phase diagram Fig.~4 using $a^{\rm uv}_2=1.0$ to reduce the numerical cost. Here, we discuss the influence of the finite cutoff. First of all, we numerically calculate $C^r_{\rm sc}(\bm{k}_{\parallel};\dot{\gamma})$ with $a^{\rm uv}_2 = 1.0$ using Eqs.~(19) and (23), and plot it as the red and blue plots in Fig.~\ref{fig: plane correlation function at criticality using auv=1.0}. The red and blue plots, respectively, give $C^r_{\rm sc}(k_1,k_3=0;\dot{\gamma})$ as a function of $k_1$ and $C^r_{\rm sc}(k_1=0,k_3;\dot{\gamma})$ as a function of $k_3$. The black solid curve is obtained by fitting with \begin{eqnarray} C^{r}_{\rm sc}(\bm{k}_{\parallel};\dot{\gamma}) = A \frac{T}{(2 \dot{\gamma}/\sqrt{3}D_0 r^{2})|k_1| + |\bm{k}_{\parallel}|^2/\sqrt{r}}, \end{eqnarray} where $A$ is the fitting parameter. By fitting the numerical data with $|\bm{k}|<0.1$, we obtain $A=3.118$. We find that the red and blue plots agree well with the black solid curves except for a slight difference in the long-wavelength region. We note that there is no such deviation for $a^{\rm uv}_2=0.01$ (see Fig.~3 in the main text). 
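The one-parameter fit for $A$ can be sketched as follows; the data below are synthetic (generated from the fitted form itself with noise), since the actual numerical values of $C^r_{\rm sc}$ are not reproduced here:

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of the one-parameter fit for A (synthetic data, illustrative values)
T, D0, r, gdot = 1.0, 1.0, 10.0, 0.3

def model(k1, A):
    # A * T / ((2*gdot/(sqrt(3)*D0*r^2))|k1| + k1^2/sqrt(r)) at k3 = 0
    return A * T / ((2 * gdot / (np.sqrt(3) * D0 * r**2)) * np.abs(k1)
                    + k1**2 / np.sqrt(r))

rng = np.random.default_rng(0)
k1 = np.linspace(0.005, 0.1, 40)
data = model(k1, 3.118) * (1 + 0.01 * rng.standard_normal(k1.size))
A_fit, _ = curve_fit(model, k1, data, p0=[1.0])
print(A_fit[0])  # recovers a value close to 3.118
```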
\begin{figure}[h] \begin{center} \includegraphics[width=8cm]{correlations.atcriticality.r10.sr0.3.auv1.eps} \end{center} \vspace{-0.5cm} \caption{Same as Fig.~3 in the main text, but with $a_{\rm uv}= 1.0$.} \label{fig: plane correlation function at criticality using auv=1.0} \end{figure} By considering the additional factor $A$, the expression Eq.~(26) is modified to \begin{eqnarray} c_0^{\rm sc}(r_0;\dot{\gamma},T) &\simeq& c_{\rm max} + Ag \ell_{\rm sum} \int_{2\pi/L}^{2\pi/a^{\rm uv}} \frac{d^2\bm{k}_{\parallel}}{(2\pi)^2}\frac{T}{(2 \dot{\gamma}/\sqrt{3}D_0 r^{2})|k_1| + |\bm{k}_{\parallel}|^2/\sqrt{r}} - \ell_{\rm sum} (r-r_0)\nonumber \\[3pt] &\simeq& c_{\rm max} + A g \ell_{\rm sum} \int_{k_c}^{2\pi/a^{\rm uv}} \frac{d^{2}\bm{k}_{\parallel}}{(2\pi)^{2}} \frac{T}{|\bm{k}_{\parallel}|^2/\sqrt{r}} - \ell_{\rm sum} (r-r_0) \nonumber \\[3pt] &=& c_{\rm max} - \frac{g \ell_{\rm sum}T A \sqrt{r}}{2\pi} \log \big(\frac{\Lambda \dot{\gamma}}{\sqrt{3}\pi D_0r_0^{3/2}}\big) - \ell_{\rm sum} (r-r_0). \label{eq:sup:transition point: explicit form under shear: plus A factor} \end{eqnarray} The guideline in Fig.~4 is drawn by using Eq.~(\ref{eq:sup:transition point: explicit form under shear: plus A factor}). Concretely, the functional form of the guideline is given by \begin{eqnarray} c_0 - c_{\rm max} = 0.74 - 1.569 \log \dot{\gamma}, \end{eqnarray} where we have used the parameters given in Fig.~4, and the constant $0.74$ is adjusted by eye. Figure~4 shows that its slope agrees well with Eq.~(\ref{eq:sup:transition point: explicit form under shear: plus A factor}) as expected. \section{Non-conserved case} In the main text, we studied the model where the order parameter is conserved in the time evolution. As a related model, we can also consider the dynamics where the order parameter is not conserved. 
It is given by the following equation: \begin{eqnarray} \frac{\partial \varphi^a(\bm{x},t)}{\partial t} + \bm{v}(\bm{x}) \cdot \nabla \varphi^a(\bm{x},t) = - \Gamma_0 \frac{\delta \Phi[\bm{\varphi}]}{\delta \varphi^a(\bm{x},t)} + \eta^a(\bm{x},t), \label{eq:sup:time dependent Ginzburg-Landau model: non-conserved} \end{eqnarray} where $\Gamma_0$ is a bare kinetic coefficient and $\bm{\eta}(\bm{x},t)$ is the Gaussian white noise satisfying \begin{eqnarray} \big\langle \eta^a(\bm{x},t) \big\rangle &=& 0, \\ \big\langle \eta^a(\bm{x},t)\eta^b(\bm{x}',t') \big\rangle &=& 2T\delta_{a b} \Gamma_0 \delta(\bm{x}-\bm{x}') \delta(t-t'). \label{eq:sup:bare transport coefficient: explicit form: non-conserved} \end{eqnarray} The Ginzburg--Landau free energy $\Phi[\bm{\varphi}]$ is the same as that of the conserved case. We note that in equilibrium, the non-conserved and conserved dynamics correspond to models A and B in the classification of Hohenberg and Halperin~\cite{sup:hohenberg1977theory}, respectively. In contrast to the conserved case, there is no localized long-range order for the non-conserved case. Here, we derive this result. \subsection{Fluctuations in the disordered bulk} First, we study linear fluctuations in the disordered bulk. For the non-conserved case, $C^{r,0}_{l}(\bm{k};\dot{\gamma})$ is given by \begin{eqnarray} C^{r,0}_{l}(\bm{k};\dot{\gamma}) = \Gamma_0 T \int_0^{\infty}ds e^{-\Gamma_0 \big\{s(r + |\bm{k}|^2)+\frac{1}{2}\dot{\gamma} s^2 k_1 k_2 + \frac{1}{12}\dot{\gamma}^2 s^3 k_1^2 \big\}}. \label{eq:sup:correlation function in bulk for non-conserved non-equilibrium: explicit form} \end{eqnarray} When we restrict ourselves to the long-wavelength region $|\bm{k}| \ll r^{1/2}$, the dominant contribution of the $s$-integral comes from $s\sim 1/\Gamma_0r$. 
Accordingly, we can simply expand Eq.~(\ref{eq:sup:correlation function in bulk for non-conserved non-equilibrium: explicit form}) as \begin{eqnarray} C^{r,0}_{l}(\bm{k};\dot{\gamma}) &=& \Gamma_0 T \int_0^{\infty}ds \Big(1 -\Gamma_0 s|\bm{k}|^2-\frac{\Gamma_0}{2}\dot{\gamma} s^2 k_1 k_2 -\frac{\Gamma_0}{12} \dot{\gamma}^2 s^3 k_1^2 \cdots \Big) e^{-\Gamma_0 s r} \nonumber \\ &=& \frac{T}{r} - \frac{T}{r^2} |\bm{k}|^2 - \frac{T\dot{\gamma}}{\Gamma_0r^3} k_1k_2 - \frac{1}{2}\frac{T\dot{\gamma}^2}{\Gamma_0^2r^4}k_1^2\ \cdots. \end{eqnarray} For clarity, we rewrite this as \begin{eqnarray} C^{r,0}_{l}(\bm{k};\dot{\gamma}) \simeq \frac{T}{r + |\bm{k}|^2 + (\dot{\gamma}/r\Gamma_0) k_1k_2 + (\dot{\gamma}^2/2r^2\Gamma_0^2)k_1^2}. \label{eq:sup:correlation function: c0=0: non-conserved} \end{eqnarray} If we use the typical length scales $l_A=\sqrt{\Gamma_0/\dot{\gamma}}$ and $\xi_A=\sqrt{1/r}$, Eq.~(\ref{eq:sup:correlation function: c0=0: non-conserved}) is rewritten as \begin{eqnarray} C^{r,0}_{l}(\bm{k};\dot{\gamma}) \simeq \frac{T}{r + |\bm{k}|^2 + (\xi_A/l_A)^2 k_1k_2 + (\xi_A/l_A)^4 k^2_1/2}. \label{eq:sup:correlation function in the disordered bulk: non-conserved case} \end{eqnarray} Eq.~(\ref{eq:sup:correlation function in the disordered bulk: non-conserved case}) corresponds to Eq.~(27) for the conserved case. We also present the result obtained by numerically integrating Eq.~(\ref{eq:sup:correlation function in bulk for non-conserved non-equilibrium: explicit form}) in the left panel of Fig.~\ref{fig: correlation function: A sr=1 r=1}. This figure supports the validity of Eq.~(\ref{eq:sup:correlation function: c0=0: non-conserved}). Equation~(\ref{eq:sup:correlation function in the disordered bulk: non-conserved case}) implies that the shear flow does not lead to anomalous suppression, although it makes the fluctuations anisotropic. 
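The expansion above can be checked against direct numerical evaluation of the $s$-integral; a sketch with illustrative parameter values:

```python
import numpy as np
from scipy.integrate import quad

def C_nc(k, r, G0, T, gdot):
    # Non-conserved C^{r,0}_l via the s-integral (illustrative sketch)
    k1, k2, k3 = k
    ksq = k1**2 + k2**2 + k3**2
    f = lambda s: G0 * T * np.exp(-G0 * (s * (r + ksq)
                                         + 0.5 * gdot * s**2 * k1 * k2
                                         + gdot**2 * s**3 * k1**2 / 12.0))
    val, _ = quad(f, 0.0, np.inf)
    return val

r, G0, T, gdot = 1.0, 1.0, 1.0, 0.05   # illustrative values
k = (0.1, 0.05, 0.0)
exact = C_nc(k, r, G0, T, gdot)
approx = T / (r + sum(ki**2 for ki in k)
              + (gdot / (r * G0)) * k[0] * k[1]
              + (gdot**2 / (2 * r**2 * G0**2)) * k[0]**2)
print(exact, approx)
```

At long wavelengths and weak shear the two values agree to within a fraction of a percent, consistent with the expansion.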
Accordingly, since in contrast to the conserved case there is no effective force to stabilize the long-range order, the two-dimensional localized long-range order is predicted not to occur. \subsection{Proof of non-existence of surface long-range order} As in the conserved case, the expression for the transition point is given by Eq.~(22). Below, we show the non-existence of surface long-range order by demonstrating that the infrared divergence is not removed for the non-conserved case. We calculate the asymptotic expression of $C_{sc}^r(\bm{k}_{\parallel};\dot{\gamma})$ for $|\bm{k}_{\parallel}| \ll r^{1/2}$ from Eq.~(\ref{eq:sup:correlation function in bulk for non-conserved non-equilibrium: explicit form}). By taking the Fourier transform of Eq.~(\ref{eq:sup:correlation function in bulk for non-conserved non-equilibrium: explicit form}) with respect to the $x_2$-coordinate and substituting $x_2=0$, we have \begin{eqnarray} C^{r,0}_{l}(\bm{k}_{\parallel},0,0;\dot{\gamma}) = \frac{T}{2}\sqrt{\frac{\Gamma_0}{\pi}} \int_0^{\infty}ds~s^{-\frac{1}{2}} e^{-\Gamma_0 \big\{s(r+|\bm{k}_{\parallel}|^2)+ \frac{1}{48}\dot{\gamma}^2 s^3 k_1^2 \big\}}. \label{eq:sup:appendix c2: int} \end{eqnarray} For $|\bm{k}_{\parallel}| \ll r^{1/2}$, the integrand of Eq.~(\ref{eq:sup:appendix c2: int}) can be expanded, giving \begin{eqnarray} C^{r,0}_{l}(\bm{k}_{\parallel},0,0;\dot{\gamma}) = \frac{T}{2}\sqrt{\frac{\Gamma_0}{\pi}} \int_0^{\infty}ds~s^{-\frac{1}{2}} e^{-\Gamma_0 r s} \Big(1- \Gamma_0 s |\bm{k}_{\parallel}|^2 - \frac{1}{48} \Gamma_0 \dot{\gamma}^2 s^3 k_1^2 + \cdots \Big). \label{eq:sup:appendix c2: intint} \end{eqnarray} Integrating each term of Eq.~(\ref{eq:sup:appendix c2: intint}) with respect to $s$ yields \begin{eqnarray} C^{r,0}_{l}(\bm{k}_{\parallel},0,0;\dot{\gamma}) = \frac{T}{2\sqrt{r}} \Big(1- \frac{1}{2r} |\bm{k}_{\parallel}|^2 - \frac{15}{384}\frac{\dot{\gamma}^2}{\Gamma_0^2 r^3} k_1^2 + \cdots \Big). 
\label{eq:sup:correlation function: under shear c=0 model a:mod1} \end{eqnarray} Finally, by noting that Eq.~(15) also holds for the non-conserved case, we obtain \begin{eqnarray} C_{\rm sc}^r(\bm{k}_{\parallel};\dot{\gamma}) = \frac{T}{|\bm{k}_{\parallel}|^2/\sqrt{r} + \{15\dot{\gamma}^2/(192\Gamma_0^2 r^{5/2})\} k_1^2 + \cdots } . \label{eq:sup:correlation function: under shear c=0 model a: final} \end{eqnarray} We check this result against the direct numerical integration of Eq.~(\ref{eq:sup:appendix c2: int}), presented in the right panel of Fig.~\ref{fig: correlation function: A sr=1 r=1}; the two results agree well. This expression implies that the shear flow stretches the fluctuations in the $x_2=0$ plane along the $x_1$-axis, but does not lead to the anomalous suppression. Therefore, the shear flow does not remove the infrared divergence, just as in the equilibrium case. Thus, we conclude that the two-dimensional localized long-range order does not appear for the non-conserved case. \begin{figure}[h] \begin{center} \includegraphics[width=8cm]{correlations.inbulk.A.eps} \includegraphics[width=8cm]{correlations.atcriticality.r1.sr1.auv0.01.A} \end{center} \vspace{-0.5cm} \caption{Left: correlation function in the disordered bulk. Red circle: $1.0/C^r_l(k_1,k_2=k_3=0;\dot{\gamma})-r$ vs. $k_1$. Blue circle: $1.0/C^r_l(k_1=0,k_2,k_3=0;\dot{\gamma})-r$ vs. $k_2$. Green circle: $1.0/C^r_l(k_1=k_2=k,k_3=0;\dot{\gamma})-r$ vs. $k$. The black solid curves are Eq.~(\ref{eq:sup:correlation function: c0=0: non-conserved}) for the corresponding wavenumber. Right: correlation function at the critical point. Red circle: $1.0/C_{\rm sc}^r(k_1,k_3=0;\dot{\gamma})$ vs. $k_1$. Blue circle: $1.0/C_{\rm sc}^r(k_1=0,k_3;\dot{\gamma})$ vs. $k_3$. Green circle: $1.0/C_{\rm sc}^r(k_1,k_3=0;\dot{\gamma})-k_1^2/\sqrt{r}$ vs. $k_1$. 
The black solid curves are Eq.~(\ref{eq:sup:correlation function: under shear c=0 model a: final}) for the corresponding wavenumber. The parameters are set to $r=T=\dot{\gamma}=1.0$, and $a_2^{\rm uv}=0.01$. } \label{fig: correlation function: A sr=1 r=1} \end{figure}
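As a consistency check, the surface correlation function of Eq.~(\ref{eq:sup:appendix c2: int}) can also be integrated numerically and compared with its small-wavenumber expansion. A minimal sketch, assuming the illustrative parameters $r=T=\dot{\gamma}=\Gamma_0=1$ used in the figure:

```python
# Numerical check of the small-k expansion of the surface correlation
# function C^{r,0}_l(k_par, 0, 0) for the non-conserved (model A) case.
# The substitution s = u^2 removes the integrable s^{-1/2} singularity.
import numpy as np
from scipy.integrate import quad

def C_surface_exact(k1, k3, r=1.0, T=1.0, gdot=1.0, Gamma0=1.0):
    kpar2 = k1**2 + k3**2
    def integrand(u):  # s = u^2, ds = 2u du, so s^{-1/2} ds = 2 du
        s = u * u
        expo = s * (r + kpar2) + gdot**2 * s**3 * k1**2 / 48.0
        return 2.0 * np.exp(-Gamma0 * expo)
    val, _ = quad(integrand, 0.0, np.inf)
    return 0.5 * T * np.sqrt(Gamma0 / np.pi) * val

def C_surface_approx(k1, k3, r=1.0, T=1.0, gdot=1.0, Gamma0=1.0):
    kpar2 = k1**2 + k3**2
    return 0.5 * T / np.sqrt(r) * (1.0 - kpar2 / (2.0 * r)
           - (15.0 / 384.0) * gdot**2 * k1**2 / (Gamma0**2 * r**3))

# At k_par = 0 the integral reduces to T/(2 sqrt(r)); for small k_par
# the exact and expanded forms should agree closely.
print(C_surface_exact(0.1, 0.0), C_surface_approx(0.1, 0.0))
```

The $k_1^2$ coefficient of the expansion is the one entering the denominator of the final expression for $C_{\rm sc}^r$, so this check also validates that result.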
\section{Introduction} \label{sec:intro} In hierarchical models of galaxy formation, massive galaxies form via mergers of smaller galaxies. Thus, the most massive galaxies, which tend to be ellipticals, should have formed most recently \citep[$z<1$; e.g.][]{White91,Cole00}. In seeming contrast with this picture, studies of the stellar populations and scaling relations in samples of elliptical galaxies imply that the bulk of their stars formed significantly earlier, at $z \lower.7ex\hbox{\gtsima} 2$ \citep[e.g.][]{Djorgovski87,Bower92b,Bender93,Kuntschner00,Eisenhardt08}. This discrepancy can be resolved if massive ellipticals form via \textit{dissipationless} mergers of red, bulge-dominated galaxies at low redshift. These mergers would involve little to no gas, and are colloquially referred to as ``dry'' mergers. Hierarchical scenarios in which a significant fraction of massive elliptical galaxies formed via dry mergers have been presented in a number of theoretical works \citep[e.g.][]{kauffmann00,khochfar03,khochfar09,deluciablaizot07}. The observational evidence is mixed, with both supporting \citep[e.g.][]{vandokkum99,Bell04,Bell05,Bell06,vandokkum05,Tran05,Brown08} and contradictory \citep{Cimatti06, Scarlata07,Donovan07} studies. In this paper, we focus on the study of \citet{vandokkum05} (hereafter vD05), who analyzed the frequency of tidal distortions among an optically-selected sample of nearby ($z \approx 0.1$) bright red galaxies. He found that 53\% of the entire color-selected sample shows morphological evidence of tidal interactions. Further, this ratio rises to 71\% when considering only the bulge-dominated early-type galaxies in the sample. vD05 concludes that the majority of today's most luminous field elliptical galaxies were assembled at low redshift through major dry mergers. 
However, N-body simulations of binary galaxy mergers analyzed by \citet{Feldmann08} show that the morphologies of the tidal features seen in the vD05 sample cannot be reproduced by major dry mergers. Instead, the observations are better explained by massive elliptical galaxies accreting much lower mass disk-dominated galaxies. They also find that tidal features arising from disk accretion events last significantly longer than those produced by major elliptical-elliptical dry mergers (1-2 Gyr compared to a few hundred million years). This pushes the primary epoch of elliptical mass assembly back to $z > 1$. These simulations do not include gas. However, if the mass of the accreted galaxy is small, and the lifetime of the tidal signatures is large, then \citet{Feldmann08} estimate that any stars formed during the interaction could have reddened with age sufficiently to match the observed colors. In a complementary study, \citet{Kawata06} use a cosmological N-body simulation including gas to show that a minor merger can result in a red elliptical galaxy displaying red tidal features, similar to some of the objects selected by vD05, if AGN heating is taken into account. If the \citet{Feldmann08} and \citet{Kawata06} scenarios are correct, and the vD05 red galaxy tidal features are due to the accretion of a low-mass, possibly disk-dominated galaxy, then the accreted companion could contain a significant reservoir of gas, which should be accompanied by dust. \citet{Whitaker08} analyze $V_{606}$ and $I_{814}$ HST/ACS and WFPC2 images of 31 of the bulge-dominated red sequence galaxies presented by vD05 and find that only 10\% show evidence for dust based on their spatially-resolved colors. Assuming a simple relation between dust and gas mass, they conclude that red mergers in the nearby Universe mostly involve early-type galaxies with little gas. In contrast, \citet{Donovan07} examine 20 early-type galaxies known to be associated with neutral hydrogen. 
Of these, 15 match the vD05 optical selection criteria. The majority have $>$10$^8$~M$_{\odot}$ of HI. In two cases, significant (up to 30$-$40~M$_{\odot}$~yr$^{-1}$) star formation is detected. They conclude that red early-type galaxies are not the product of truly dry mergers. \citet{Whitaker08} argue that this sample is not representative of massive ellipticals. However, cold gas is observed in a large fraction of early-type galaxies in the nearby universe \citep{Morganti06,Combes07}. The star formation activity within a subset of the vD05 sample was recently analyzed by \citet{SanchezBlazquez09} (hereafter SB09), who derived kinematics, stellar population absorption features, and ionization from emission lines. They find that half of the sample with strong tidal features contain young stellar populations corresponding to 2\% of the baryonic mass of the galaxy, while a sample lacking interaction features does not contain detectable star formation. They also find that the galaxies containing young stellar populations are supported by rotation, which is unexpected in remnants of major dry mergers \citep{Cox06, Naab03}. Given these conflicting results regarding the gas content of optically selected dry mergers, we use data from the \textit{Spitzer Space Telescope} to revisit the question of whether the specific red mergers identified by vD05 are truly dry. In \S\ref{sec:sample} we describe our multiwavelength data. In \S\ref{sec:seds} we present the spectral energy distributions (SEDs) of a subsample of the vD05 galaxies and show that a significant fraction of the sources display mid-infrared excesses. After arguing that the origin of these excesses is most likely star formation (\S\ref{sec:origin}), we estimate the implied star formation rates (SFRs) in \S\ref{sec:sfrs}, and discuss the contributions from AGN and AGB stars in \S\ref{sec:agn} and \S\ref{sec:agb}. 
We then estimate the dust and gas content of these dry mergers in \S\ref{sec:gascontent}, discuss the results in \S\ref{sec:discussion}, and conclude in \S\ref{sec:conclusions}. Throughout, we use ${\rm H}_0 = 70$~km~s$^{-1}$~Mpc$^{-1}$, $\Omega_{\rm m} = 0.3$, and $\Omega_\Lambda = 0.7$. All quoted magnitudes are in the Vega system. \section{The Sample and Survey Data} \label{sec:sample} In this paper, we investigate the properties of the sample of red galaxies presented by vD05. Galaxies were selected by vD05 based on total $R$ magnitude and $B-R$ color. The selection was tuned to yield nonstellar field objects with the colors and magnitudes of $L > L_{*}$ early-type galaxies at $0.05 < z < 0.2$. The vD05 sample consists of 116 red galaxies selected from the optical imaging of the \bootes \ field of the NOAO Deep Wide-Field Survey \citep[NDWFS;][]{Jannuzi99}, and 10 galaxies selected from the Multiwavelength Survey by Yale-Chile \citep[MUSYC;][]{Gawiser06}. The relative numbers reflect the relative areas of the parent surveys. In this paper, we analyze the SEDs of the \bootes \ sample. The \bootes \ field of the NDWFS has been observed with a variety of telescopes, at wavelengths ranging from the X-ray to the radio. The multiwavelength data sets used in this paper are the following. {\bf Ground-based Optical Imaging:} The 9.3 deg$^2$ \bootes \ field of the NDWFS has been imaged in the $B_W$, $R$, $I$, and $K$ bands down to 5$\sigma$ point-source depths of $\approx$27.1, 26.1, 25.4, and 19.0 Vega mags, respectively\footnote{See http://www.noao.edu/noao/noaodeep/ for more information regarding the depth and coverage of the NDWFS.}. These are the data used by vD05 to select 116 red galaxies. The optical photometry plotted in this paper was determined using the images of the third data release (DR3) of the NDWFS, smoothed to achieve a uniform 1.35$\arcsec$ FWHM Moffat profile with a $\beta$ parameter of 2.5. We used SExtractor AUTO magnitudes \citep{Bertin96}. 
{\bf \textit{Spitzer} IRAC Imaging:} As part of the \textit{Spitzer} Deep Wide-Field Survey \citep[SDWFS;][]{Ashby09}, 10 square degrees of the \bootes \ field have been mapped with the Infrared Array Camera \citep[IRAC;][]{Fazio04} on board the \textit{Spitzer Space Telescope}. The 5$\sigma$, 4$\arcsec$ diameter, aperture-corrected SDWFS limits are 18.77, 18.83, 16.50, and 15.82 Vega~mag at 3.6, 4.5, 5.8, and 8.0~$\micron$, respectively. We measured SExtractor AUTO magnitudes using the SDWFS version 3.2 zeropoints. All 116 objects that were selected by vD05 from the optical \bootes \ data were also observed by IRAC. {\bf \textit{Spitzer} MIPS Imaging:} Approximately 8.74 deg$^2$ of the \bootes \ field have been imaged with the Multiband Imaging Photometer for \textit{Spitzer} \citep[MIPS;][]{Rieke04}. The 1$\sigma$ point-source depths of the MIPS survey are 0.051, 5, and 18 mJy at 24, 70, and 160~$\micron$, respectively. The data were reduced by the MIPS GTO team. Only five of the 116 sources selected by vD05 from the optical imaging of the \bootes \ field lack MIPS coverage: 6-2707, 2-3102, 3-953, 4-567, 2-3070. All MIPS photometry quoted in this paper refers to the emission associated with the main galaxy. In some cases, we see 24 $\micron$ emission in the near vicinity of the galaxy even though it is not centrally located. This emission may be associated with the tidal arms and/or may be associated with the merger. This emission is not accounted for in our current discussion, but may represent additional star formation associated with the merger event. Examples include 2-368, 3-601, 4-1975, 11-1732, 13-3813, 16-584, 17-2819, 21-837, 25-3572, and 27-984. {\bf \textit{Chandra} X-ray Imaging:} As part of the X\bootes \ survey, 9.3 deg$^2$ of the \bootes \ field has been imaged at a depth of 5~ks with ACIS-I on the \textit{Chandra X-ray Observatory} \citep{Murray05,Kenter05,Brand06}. 
The limiting flux, corresponding to 4 or more X-ray counts, is $f_{(0.5-7{\rm keV})} = 8.1 \times 10^{-15}$ erg~cm$^{-2}$~s$^{-1}$. The X-ray detections are discussed in \S\ref{sec:xray}. {\bf \textit{SDSS} Optical Spectroscopy:} The \bootes \ field has also been observed as part of the Sloan Digital Sky Survey \citep[SDSS;][]{York00,Gunn06,Gunn98}. We searched the SDSS DR7 \citep{SDSSDR7} database to find the optical spectra corresponding to our sources. Of the 116 \bootes \ sources, we found optical spectra for 106. Figure \ref{fig:zdist} shows their redshift distribution. The sources lie in the range $0.08 < z < 0.17$, with a median redshift of $0.1$. Many measurements and physical properties have been derived from the SDSS spectra in a homogeneous way and made public by a team of SDSS researchers at the Max Planck Institute for Astrophysics in Garching and Johns Hopkins University. Although SDSS DR7 spectra are available, this value-added catalog is only complete through DR4 \citep{AdelmanMcCarthy06} at the time of writing. Therefore, the reported optical SFRs, classifications, and stellar masses are measured from DR4 spectra\footnote{http://www.mpa-garching.mpg.de/SDSS/DR4}. {\it\bf Samples:} Our sample of red ellipticals is drawn from the set of galaxies identified as red ellipticals in the Bo\"otes field by vD05. We first selected all 75 galaxies identified as red ellipticals (i.e., classified ``E/S0'', but not ``S0'' or ``S'') by vD05. In addition to the simple morphological classification, vD05 also visually classified galaxies into ``{\it tidal}'' classes, which take on four integer values (0 to 3) based on the following criteria: 0 for no tidal features; 1 for weak tidal features; 2 for strong tidal features; and 3 for evidence of an ongoing interaction with a neighboring galaxy. 
While the {\it tidal} class assigned to a galaxy is subjective and dependent on the depth of the imaging data available, it serves as a simple diagnostic to discriminate undisturbed ellipticals (i.e., those with {\it tidal}=0) from those which show some evidence of recent interaction (i.e., those with {\it tidal}$>$0). Of the sample of 75 ellipticals in the vD05 study, 21 are undisturbed ({\it tidal}=0) and 54 are dry merger candidates ({\it tidal}$>$0). Figure \ref{fig:zdist} shows the redshift distribution of the two samples. The redshift distributions are similar and suggest that the samples can be compared fairly. Table 1 lists the properties of the 54 dry merger candidates ({\it tidal}$>$0). \section{Observed Mid-Infrared Excesses Among Dry Merger Candidates} \label{sec:seds} Using the multiwavelength survey data described in \S\ref{sec:sample}, we present the spectral energy distributions (SEDs) of the 54 \bootes \ dry merger candidates listed in Table \ref{table:sample}. Figure \ref{fig:seds} shows the subset of 22 (41\%) that were detected at 24~$\micron$, while Figure \ref{fig:moreseds} shows the remaining 32 (59\%), which were undetected at 24~$\micron$. Although a significant fraction of \bootes \ dry mergers were undetected in our 24~$\micron$ survey, all were detected at 8~$\micron$. Stellar population synthesis models computed using the isochrone synthesis code of \citet{Bruzual03} (hereafter BC03) were fit to the optical ($B_WRIK$) and near-infrared (IRAC 3.6, 4.5, 5.8~$\micron$) photometry for each object in Figures \ref{fig:seds} and \ref{fig:moreseds}. The models include three different metallicities (0.008, 0.004, 0.02); have exponentially decreasing star formation rates with $\tau=0.1,0.3,1,2,3,5,10,15,30$; and use a Chabrier IMF. They do not include extinction. 
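The star-formation histories entering the BC03 fits described above are of the simple exponential form ${\rm SFR}(t) \propto e^{-t/\tau}$. A minimal sketch of the resulting model grid; note that the units of $\tau$ (presumably Gyr) are not stated in the text and are assumed here only for illustration:

```python
# Sketch of the stellar-population model grid described in the text:
# exponentially declining star-formation histories SFR(t) ~ exp(-t/tau)
# for each combination of metallicity and tau (tau units assumed Gyr;
# the text does not state them explicitly).
import itertools
import math

metallicities = [0.008, 0.004, 0.02]       # Z values quoted in the text
taus = [0.1, 0.3, 1, 2, 3, 5, 10, 15, 30]  # e-folding timescales

def sfr(t, tau, norm=1.0):
    """Exponentially declining star-formation rate."""
    return norm * math.exp(-t / tau)

# Each (Z, tau) pair defines one model family, before sampling ages.
grid = list(itertools.product(metallicities, taus))
print(len(grid))  # 27 model families
```

Each grid point would then be evolved over a range of ages and compared with the $B_WRIK$ plus IRAC photometry to select the best fit.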
While the best-fitting BC03 models are good representations of the optical and near-infrared data, the photometry at longer wavelengths (IRAC 8~$\micron$, MIPS 24~$\micron$ and, in three cases, MIPS 70~$\micron$) often deviates significantly from this model. For example, Figures \ref{fig:seds} and \ref{fig:moreseds} show that the observed 24~$\micron$ flux density ranges from being consistent with an old stellar population (e.g.~18-794, 6-1553) to being over an order of magnitude in excess of it (e.g.~20-2395). Large 8~$\micron$ excesses are also obvious in several sources (e.g.~5-1271, 10-112, 17-2134). These excesses may be attributed to emission from some combination of 1) dusty star-forming regions; 2) dust heated by an active galactic nucleus (AGN); and 3) dust in the envelopes of stars on the Asymptotic Giant Branch (AGB). The strong 8~$\micron$ excesses apparent in some sources in Figure \ref{fig:seds} imply polycyclic aromatic hydrocarbon (PAH) emission, which strongly suggests ongoing star formation. In \S\ref{sec:origin}, we use this basic premise to argue that the observed mid-infrared excesses are dominated by dusty star formation. However, we also consider the potential AGN and AGB contributions in \S\ref{sec:agn} and \S\ref{sec:agb}, respectively. In \S\ref{sec:gascontent}, we go on to estimate the dust and gas content of the dry merger candidates under the simplifying assumption that all of the mid-infrared excesses are due entirely to star formation. The SFRs used in Figures \ref{fig:seds} through \ref{fig:iracirac} all refer to values derived from the excess 24~$\micron$ emission, as described in \S\ref{sec:sfrs}. \section{Origin of the Observed Mid-Infrared Excesses} \label{sec:origin} IRAC colors have been developed by several groups as a diagnostic to distinguish between star formation and AGN activity \citep{Lacy04,Sajina05,Stern05,Brand06}. 
For star-forming galaxies at $z \approx 0.1$, the first three IRAC channels sample the Rayleigh-Jeans side of the blackbody contributed by old (cool) stars, while the fourth IRAC channel samples the rest-frame 7.7 $\micron$ PAH feature. Thus, in star-forming galaxies, the [3.6]$-$[4.5] color is blue and the [5.8]$-$[8.0] color is red. For passive galaxies with no ongoing star formation or AGN activity, there will be no PAH features, so both [3.6]$-$[4.5] and [5.8]$-$[8.0] are expected to be blue. For powerful AGN, both the stellar blackbody and the PAH features can be overwhelmed by emission from AGN-heated dust, leading to redder [3.6]$-$[4.5] and bluer [5.8]$-$[8.0] colors \citep{Brand09}. Figure \ref{fig:iracirac} shows an IRAC color-color diagnostic diagram with the \bootes \ dry merger candidates overplotted. The AGN wedge was determined by \citet{Stern05} and was calibrated on optical spectroscopic diagnostics. We adopt the \citet{Brand09} empirically-determined boundary between PAH (star-forming) and non-PAH (passive) emitting galaxies. Based on this color-color diagram, many of the vD05 \bootes \ dry merger candidates exhibit colors consistent with PAH-emitting star-forming galaxies or passive galaxies with low levels of star formation activity. Only one source (11-1732) lies near the AGN wedge in this diagram. The mid-infrared colors do not rule out the presence of AGN in these galaxies (see \S\ref{sec:agn}), but they do imply that the observed mid-infrared excesses of the vast majority are dominated by star formation rather than AGN activity. \section{Star Formation Rates} \label{sec:sfrs} \subsection{Infrared SFRs} Figures \ref{fig:seds} and \ref{fig:moreseds} show the rest-frame SEDs of the \bootes \ dry merger candidates, as well as the BC03 model fits to the $B_WRIK$ and IRAC channels 1-3 photometry. 
We estimate the infrared ($8-1000$ $\micron$) luminosity of each source from the \citet{Chary01} model that best fits the single data point represented by the 24 $\micron$ excess (the observed 24 $\micron$ flux density minus the expected 24~$\micron$ flux density of the BC03 model). We find infrared luminosities in the range $1.2 \times 10^9 < {\rm L}_{8-1000 \micron} / {\rm L}_{\odot} < 4 \times 10^{10}$, corresponding to SFRs in the range $ 0.2 < {\rm SFR} / [{\rm M}_{\odot} {\rm yr}^{-1}] < 7$ \citep{Kennicutt98}. For those sources without 24~$\micron$ detections, we calculate upper limits to the infrared SFR. Given the typical redshift ($z \approx 0.1$) of the sources and the depth of the MIPS imaging, we are able to detect SFRs greater than approximately 1~M$_{\Sun}$~yr$^{-1}$. Of the total sample of 54 dry merger candidates, 22 have 24~$\micron$ detections. Of these, 12 have SFR$\ge$1~M$_{\odot}$~yr$^{-1}$. In addition, we can rule out star formation rates greater than 1~M$_{\odot}$ yr$^{-1}$ in all but two of the 32 dry merger candidates with only upper limits at 24~$\micron$. Therefore, 12-14 out of 54 dry merger candidates (22-26\%) have infrared-derived SFR$\ge$1~M$_{\odot}$~yr$^{-1}$. The infrared-derived SFRs and limits are listed in the second column of Table \ref{table:sample}. We caution the reader that these star formation rates carry large (systematic) uncertainties, stemming from our lack of direct knowledge about the far-infrared SED, as well as scatter in the conversion between far-infrared luminosity and SFR. We also remind the reader that in several cases we have excluded 24~$\micron$ emission arising outside the main body of the galaxy and are therefore missing some of the star formation in these galaxies. \subsection{Spectroscopic SFRs} \label{sec:spectra} As described in \S\ref{sec:sample}, many of the \bootes \ dry merger candidates were spectroscopically observed as part of SDSS. 
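The conversion from total infrared luminosity to SFR used above follows the \citet{Kennicutt98} calibration, ${\rm SFR}\,[{\rm M}_{\odot}\,{\rm yr}^{-1}] = 4.5\times10^{-44}\,L_{8-1000\micron}\,[{\rm erg~s}^{-1}]$. A minimal sketch (the numerical value adopted for the solar luminosity is an assumption of this sketch):

```python
# Sketch of the Kennicutt (1998) infrared luminosity-to-SFR conversion:
# SFR [Msun/yr] = 4.5e-44 * L(8-1000um) [erg/s].
L_SUN = 3.839e33  # erg/s; assumed solar luminosity

def sfr_from_lir(lir_lsun):
    """SFR in Msun/yr from L_IR given in solar luminosities."""
    return 4.5e-44 * lir_lsun * L_SUN

# The luminosity range quoted for the sample, 1.2e9 - 4e10 Lsun,
# maps onto roughly 0.2 - 7 Msun/yr.
print(sfr_from_lir(1.2e9), sfr_from_lir(4e10))
```

Applied to the quoted luminosity range, this reproduces the SFR range of $0.2$-$7~{\rm M}_{\odot}~{\rm yr}^{-1}$ given in the text.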
\citet{Brinchmann04} have estimated SFRs for galaxies based on SDSS DR4 spectra. First, the spectra were classified according to the BPT diagram \citep[][\S\ref{sec:agn}]{Baldwin81}. For galaxies classified as Star-Forming, \citet{Brinchmann04} modeled all of the emission lines to determine the SFR. For Low SNR Star-Forming galaxies, the observed H$\alpha$ luminosity was used to calculate the SFR. For the galaxies classified as AGN, Composite, or Unclassifiable, the measured values of D4000 were used to estimate SFRs. All SFRs were aperture corrected. The resulting SFRs are listed in Table \ref{table:derived}. The stellar mass estimates (LGM in the \citet{Brinchmann04} catalog) are also presented in Table \ref{table:derived}. SDSS-derived specific SFRs are available for 32 \bootes \ dry merger candidates classified as Star-Forming, Low SNR Star-Forming, AGN, Composite, or Unclassifiable. The values range from $6.2 \times 10^{-13}$ to $2.9 \times 10^{-11}$~yr$^{-1}$. Of these 32, most (81\%) have specific SFRs less than $1 \times 10^{-11}$ yr$^{-1}$, making them typical of quiescent galaxies detected at 24 $\micron$ \citep{Salim09}. However, 19\% (5-1271, 10-232, 11-1732, 17-2134, 22-2252, 27-3444) have specific SFRs exceeding $1~\times~10^{-11}$~yr$^{-1}$, making them transition objects between quiescent and star forming. All of these have significant 24~$\micron$ excesses, but all have SDSS-derived stellar masses that are completely consistent with those of the quiescent galaxies. Of the six transition objects, Table \ref{table:sample} reveals that three have weak tidal features ($tidal = 1$), one has strong tidal features ($tidal = 2$), and two show evidence of an ongoing interaction with another galaxy ($tidal = 3$). Based on this small sample, we conclude that dry merger candidates with high spectroscopically-derived specific SFRs show diversity in the strength of their tidal features. 
Figure \ref{fig:irvspec} shows the SDSS-derived SFRs versus the MIPS-derived SFRs. The two are broadly correlated at a high level of significance according to the Spearman correlation coefficient ($\rho = 0.73$ when neglecting upper limits in the IR-derived SFRs). On average, the MIPS-derived SFRs are a factor of $\approx$1.4 higher than the SDSS-derived SFRs, a discrepancy which may be due to dust obscuration. \section{AGN Content of Dry Merger Candidates} \label{sec:agn} Having identified star formation as the \textit{dominant} source of the mid-infrared excesses observed in Figure \ref{fig:seds}, we now investigate the AGN content of the \bootes \ dry mergers. In particular, we consider X-ray luminosity and optical spectroscopic line ratios. \subsection{X-ray Luminosity} \label{sec:xray} The X-ray luminosity of a dry merger candidate can indicate its level of AGN activity. The X\bootes \ survey has a limiting depth of $f_{(0.5-7{\rm keV})}~=~8.1~\times~10^{-15}~{\rm erg}~{\rm cm}^{-2}~{\rm s}^{-1}$. At $z=0.1$, this corresponds to an X-ray luminosity of $2~\times~10^{41}~{\rm erg}~{\rm s}^{-1}$. \citet{Grimm03} find that the X-ray luminosity of a galaxy is related to its SFR through the following relation: \begin{equation} {\rm SFR}~[{\rm M}_{\odot}~ {\rm yr}^{-1}] = \frac{{\rm L}_{2-10 {\rm keV}}}{6.7 \times 10^{39}~{\rm erg~s}^{-1}} \end{equation} \noindent for $L_{2-10{\rm keV}} \lower.7ex\hbox{\gtsima} 3 \times 10^{40}$~erg~s$^{-1}$. This relation yields an X-ray-derived SFR of $\approx$30 M$_{\odot}$~yr$^{-1}$ at the detection limit of the X\bootes \ survey. This limiting SFR is a factor of $\approx$4 higher than the highest mid-infrared SFR derived for any of these sources (see \S\ref{sec:sfrs}), so any dry merger candidate detected in the X\bootes \ survey likely hosts an AGN. Using the X\bootes \ catalog of \citet{Brand06}, we found X-ray detections for only 5 of the red galaxies: 1-1403, 5-901, 5-2398, 10-232, and 26-5372. 
Of these, only two (5-901 and 10-232) are dry merger candidates. The object 5-901 was not detected at 24~$\micron$. In contrast, 10-232 has $f_{\nu}(24 \micron)=1.71$~mJy, which translates to a fairly large SFR of $3.4$~$ {\rm M}_{\odot} {\rm yr}^{-1}$. The X-ray luminosity calls into question whether the observed 24~$\micron$ excess for this source can be translated into a SFR, since AGN can also emit strongly at 24~$\micron$. However, the SED shown in Figure \ref{fig:seds} appears to have an elevated 8~$\micron$ flux density, presumably from strong PAH features, indicating that star formation dominates the mid-infrared flux density. The same conclusion can be drawn from Figure \ref{fig:iracirac}, where 10-232 (indicated by a gold star within a black circle) is very near the ``PAH'' region and well removed from the ``Powerful AGN'' region. While we detect two X-ray luminous AGN among the dry merger candidates, the mid-infrared emission of these sources does not appear to be strongly affected by the AGN. \subsection{BPT Diagram} \label{sec:bpt} The hard ionizing radiation of AGN results in optical emission line ratios distinct from those observed in star-forming regions. For example, high [\ion{O}{3}]/H$\beta$ and [\ion{N}{2}]/H$\alpha$ ratios have been used to diagnose the presence of AGN in what is known as a BPT diagram \citep{Baldwin81}. \citet{Brinchmann04} use 3$\arcsec$ diameter SDSS DR4 fiber spectroscopy to classify targeted galaxies into the following categories, which are numbered as in Table \ref{table:derived}. {\bf Star-Forming (1):} The objects with ${\rm SNR} > 3$ in all four BPT lines that have line ratios consistent with star formation. {\bf Low SNR Star-Forming (2):} Galaxies with ${\rm SNR} > 2$ in H$\alpha$ that have not been classified as SF, AGN, or Composite. {\bf Composite (3):} The objects with ${\rm SNR} > 3$ in all four BPT lines for which up to 40\% of the H$\alpha$ luminosity has an AGN origin. 
{\bf AGN (4):} The objects with ${\rm SNR} > 3$ in all four BPT lines for which a substantial AGN contribution is required to reproduce the BPT line fluxes. In addition, galaxies with AGN-like values of [\ion{N}{2}]$\lambda$6584/H$\alpha$ and ${\rm SNR} > 3$ in the [\ion{N}{2}]$\lambda$6584 and H$\alpha$ lines but ${\rm SNR} < 3$ in either of the [\ion{O}{3}]$\lambda$5007 or H$\beta$ lines. {\bf Unclassifiable (-1):} Those galaxies that cannot be classified using the BPT diagram, typically because they have no or very weak emission lines. Of the 32 classified sources, four are classified as low SNR star-forming, 10 are classified as AGN, one is classified as Composite, and 17 are Unclassifiable. Recall that these classifications are based on 3$\arcsec$ diameter fiber spectra. Thus, the Unclassifiable sample likely includes galaxies which do not have star formation in the central bulge but may have star formation in the disk. Similarly, while the optical emission lines indicate the presence of an AGN in some objects, this does not mean that the mid-infrared excess in these objects is \textit{dominated} by AGN activity. The IRAC ratios presented in \S\ref{sec:sfrs} and Figure \ref{fig:iracirac} indicate that the objects with the largest mid-infrared excesses are dominated by star formation in the mid-infrared. Recently, SB09 spectroscopically classified 24 of the galaxies listed in our Tables \ref{table:sample} and \ref{table:derived}. Since they had insufficient wavelength coverage to use the same line diagnostics as \citet{Brinchmann04}, they relied on [\ion{O}{2}]$\lambda$3727, H$\beta$, and [\ion{O}{3}]$\lambda$5007. Their results are listed in column 5 of Table \ref{table:derived}. There are 12 objects for which both SDSS and SB09 classifications exist. Of these, six were listed by SB09 as ``?'', meaning there was insufficient wavelength coverage to measure enough lines for classification. 
Of the remainder, three (12-1734, 17-681, 22-790) were consistently classified as having no or very weak emission lines; 10-232 was consistently classified as an AGN, and only two (6-1676 and 11-1732) had inconsistent diagnoses (-1 versus Seyfert and 4 versus LINER). \section{AGB contribution to mid-infrared emission} \label{sec:agb} In Figures \ref{fig:seds} and \ref{fig:moreseds}, we model an old stellar population with BC03 fits. However, such a stellar template may not accurately account for the mid-infrared emission from an old stellar population. \citet{Kelson2010} recently showed that the large amount of near-infrared light expected from the thermally pulsating asymptotic giant branch (TP-AGB) phase \citep{Maraston05, Bruzual09, Conroy09}, when coupled with the observed mid-infrared fluxes of TP-AGB stars in our own galaxy, imply that a significant amount of mid-infrared flux could come from a $\sim$1 Gyr old stellar population. This timescale is on the order of the time that a disk accretion event would have occurred according to \citet{Feldmann08} and thus, some mid-infrared emission could come from the stars that formed at or around the time of the accretion event itself. To test this effect, we have obtained updated models (commonly referred to as CB07 models in the literature) which use the prescription of \citet{Marigo07} and \citet{Marigo08} for the TP-AGB evolution of low- and intermediate-mass stars (S. Charlot 2011, private communication). We find that the resulting infrared-derived SFRs are very similar, and identical in many cases. We therefore conclude that for this population, the contribution of TP-AGB stars to the mid-infrared flux is insignificant. \section{Dust and Gas Masses of Dry Merger Candidates} \label{sec:gascontent} The red optical colors of the \bootes \ dry merger candidates can be modeled by an old stellar population. However, they could also be consistent with dust-extincted younger stars. 
The dust mass of a galaxy can be inferred from its measured submillimeter flux. Although submillimeter photometry is currently unavailable for these dry merger candidates, we can extrapolate based on the measured 24 $\micron$ flux densities. Based on Figure \ref{fig:iracirac}, we assume that AGN do not contribute significantly to the far infrared fluxes of these objects. Therefore, we use the best-fit \citet{Chary01} template from \S\ref{sec:agb} to estimate the 350 $\micron$ flux density of each source detected at 24 $\micron$ (or an upper limit in cases where 24 $\micron$ observations were available but there was no detection). We estimate the dust mass following \citet{Hughes97}: \begin{equation} M_{\rm dust} = \frac{1}{1+z} \frac{F_{350} d_L^2}{\kappa_d B(\nu, T_d)}, \end{equation} \noindent where $d_L$ is the luminosity distance; $\kappa_d$ is the rest-frequency mass absorption coefficient interpolated from \citet{Draine03}; and $B(\nu, T_d)$ is the value of the blackbody function at the rest frequency $\nu$ and a temperature $T_d$, which is taken to be 45 K. Estimated dust masses are listed in Table \ref{table:derived}. The dust masses for the dry merger candidates detected at 24 $\micron$ range from (0.3-10) $\times$ 10$^6$ M$_{\odot}$, with a mean of 3 $\times$ 10$^6$ M$_{\odot}$. The dust mass upper limits for the dry merger candidates undetected at 24 $\micron$ range from (0.3-2) $\times$ 10$^6$ M$_{\odot}$, with a mean of 1.6 $\times$ 10$^6$ M$_{\odot}$. For the canonical gas-to-dust ratio of 100, these dust masses correspond to gas masses ranging from (3-100) $\times$ 10$^7$ M$_{\odot}$ for the dry merger candidates detected at 24 $\micron$ and upper limits ranging from (3-20) $\times$ 10$^7$ M$_{\odot}$ for the dry merger candidates undetected at 24 $\micron$. Adopting a dust temperature of 30 K instead of 45 K would increase these estimates by a factor of two. 
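As an illustration, the dust-mass estimate above can be sketched in a few lines. The flux density, distance, redshift, and $\kappa_d$ used below are placeholder values for demonstration, not measurements from our tables; $\kappa_d$ is given in m$^2$ kg$^{-1}$ at a value typical of Draine-type opacities near the rest frequency of observed 350 $\micron$ emission.

```python
import math

# Physical constants (SI units)
h = 6.626e-34    # Planck constant, J s
c = 2.998e8      # speed of light, m/s
k_B = 1.381e-23  # Boltzmann constant, J/K
M_SUN = 1.989e30 # solar mass, kg


def planck(nu, T):
    """Blackbody function B(nu, T) in W m^-2 Hz^-1 sr^-1."""
    return 2.0 * h * nu**3 / c**2 / math.expm1(h * nu / (k_B * T))


def dust_mass(F_350_jy, d_L_mpc, z, kappa_d=0.07, T_d=45.0):
    """M_dust = F_350 d_L^2 / ((1+z) kappa_d B(nu, T_d)) in solar masses.

    F_350_jy : observed 350 micron flux density in Jy (placeholder input)
    kappa_d  : illustrative mass absorption coefficient, m^2/kg
    """
    F = F_350_jy * 1e-26              # Jy -> W m^-2 Hz^-1
    d_L = d_L_mpc * 3.086e22          # Mpc -> m
    nu_rest = (1.0 + z) * c / 350e-6  # rest frequency of observed 350 um
    return F * d_L**2 / ((1.0 + z) * kappa_d * planck(nu_rest, T_d)) / M_SUN
```

Lowering $T_d$ from 45 K to 30 K roughly doubles the inferred mass, consistent with the factor-of-two sensitivity quoted above.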
Our \textit{Spitzer} observations reveal dust that went undetected in the HST analysis of \citet{Whitaker08}. The 18 sources for which HST imaging is available are indicated in Table \ref{table:sample}. Of these, we detected a 24 $\micron$ excess in half: 9-360, 9-2105, 11-962, 16-1302, 17-681, 17-2134, 22-790, 22-2252, 27-3444. Note that 17-2134 has a particularly large 24 $\micron$ excess, corresponding to a SFR of 7~M$_{\odot}$~yr$^{-1}$ if all of it is attributed to star formation. Interestingly, \citet{Whitaker08} find evidence for dust in 25-1980, which shows no mid-infrared excess. Thus it seems that both the optical and mid-infrared imaging are necessary for a full census of the dust content of this population. Figure \ref{fig:whitaker} shows the gas-to-stellar mass ratio versus vD05 \textit{tidal} parameter for the subset of the dry merger candidates for which we were able to calculate all quantities. For comparison, we show the cosmic mean, the \citet{Donovan07} sample mean, and the \citet{Whitaker08} upper envelope, all taken from \citet{Whitaker08}. The gas-to-stellar mass ratios derived from the \textit{Spitzer} data are intermediate between those derived for an overlapping sample by \citet{Whitaker08} and those derived for an analogous sample of early-type galaxies with HI emission by \citet{Donovan07}. All are significantly below the cosmic mean. Although our gas mass estimates are significantly higher than those calculated from optical images, we still find that gas makes up less than 1\% of the baryonic mass in these ellipticals. A caveat to Figure \ref{fig:whitaker} is the uncertainty in the gas mass estimates, which arises from the extrapolation to the far-infrared, the dust temperature, the mass absorption coefficient, and the gas-to-dust ratio. 
Considering only the last, we would have to adopt a gas-to-dust ratio of 10 to match the gas-to-stellar mass ratios of \citet{Whitaker08} and a gas-to-dust ratio of 250 to match the gas-to-stellar mass ratios of \citet{Donovan07}. \section{Discussion} \label{sec:discussion} We have detected $>$1~M$_{\odot}~{\rm yr}^{-1}$ of star formation in $\approx$25\% of the massive dry merger candidates discovered by vD05 in \bootes. Given the mass already existing in old stars, this represents a ``frosting'' of star formation, similar to what has been seen in previous studies of elliptical galaxies \citep[e.g.][]{Trager00, Gebhardt03}. \citet{Kormendy09} have argued that the cuspy cores observed in the most luminous ellipticals may result from the scouring caused by binary black holes in dry mergers. The higher redshift of our sample and the lack of sufficiently high spatial resolution data preclude a determination of the nature of the central light profile. However, the absolute magnitude limit on the vD05 sample is approximately 1.5 mags fainter than the cuspy cores observed by Kormendy et al. Based on the SED combined with a number of assumptions, we estimate that these ellipticals are associated with on the order of 10$^8$ M$_{\odot}$ in gas. What is the origin of this gas? We favor a scenario in which gas was delivered to the dry merger candidates via a merger. The strongest evidence for this is that the residual star formation tends to be found in the sources with morphological evidence of a recent merger. Figure \ref{fig:morphplot} shows the distribution of infrared-derived SFRs in bins of the $tidal$ parameter tabulated by vD05. Distributions are presented both for the \bootes \ dry merger candidates and for a control sample consisting of the subset of the red galaxy sample with early type morphologies and no tidal features. Galaxies forming stars at a rate greater than about 1 M$_{\odot}$ yr$^{-1}$ tend to have a $tidal$ parameter greater than 0. 
The observation that the mid-infrared emission is related to the $tidal$ designation supports the hypothesis that the star formation is due to interaction-driven activity. In a GALEX study of these same sources, \citet{Kaviraj2010} also found that the dry merger candidates with the bluest GALEX colors tended to show signs of tidal interaction. Additionally, \citet{Donovan07} looked at HI data for galaxies that would fall into the dry merger sample, and found large reservoirs of gas. They find that 12/15 red rogues exhibit signs of tidal interaction. This is comparable to the fraction of red early-type galaxies found by vD05 to exhibit tidal features. An alternative explanation is that gas expelled from AGB stars is already present in the dry merger candidates, and star formation is triggered by a merger, not fueled by one. Can AGB stars expel enough gas to fuel star formation? For a $10^{11}$~M$_{\odot}$ old stellar population, 0.15~M$_{\odot}$~yr$^{-1}$ is expected to be ejected by AGB stars \citep{Mathews03}. Over a billion years, such a galaxy could accumulate $\sim$10$^8$~M$_{\odot}$ of gas, which is on the same order as what is observed. If this were the case, then we would expect that even the galaxies without tidal features should have evidence for gas, even if it has not been triggered to form stars. \section{Conclusions} \label{sec:conclusions} We analyze the multiwavelength data available for vD05 dry merger candidates in the NDWFS \bootes \ field. We find: \begin{itemize} \item A significant fraction of the sources display mid-infrared (24~$\micron$) excesses over that expected from an old stellar population with the observed red optical colors. \item Based on the mid-infrared IRAC colors indicating the presence of PAH emission, this infrared excess is likely due to emission from dust heated by star forming regions, rather than AGN-heated dust or AGB stars. 
\item If the observed mid-infrared excesses are due to star formation, we estimate that a quarter of the \bootes \ dry merger candidates are forming stars at rates greater than 1 M$_{\odot}$ yr$^{-1}$. This represents a ``frosting'' of star formation on top of a well-developed old stellar population. \item Red early-type galaxies exhibiting tidal features are more likely to have star formation detectable in the mid-infrared than a control sample lacking tidal features. This implies that a frosting of star formation in elliptical galaxies may be triggered by tidal interactions. \item We estimate gas masses in the range (3-100) $\times$ 10$^7$ M$_{\odot}$ for the dry merger candidates detected at 24~$\micron$ and upper limits ranging from (3-20) $\times$ 10$^7$ M$_{\odot}$ for the dry merger candidates undetected at 24 $\micron$. \item Based on the observed 24~$\micron$ emission, and assuming the \citet{Chary01} star-forming templates, we predict the 70 $\micron$ flux densities of the dry merger candidates. The predicted 70 $\micron$ flux densities are shown in Table \ref{table:sample}. Only three sources have a \textit{Spitzer} 70 $\micron$ detection. For these three sources, the observed 70 $\micron$ flux density is a factor of 1.5-2 times greater than the template prediction. Nevertheless, these template predictions provide useful guidelines for future Herschel observations. \end{itemize} \acknowledgments AD thanks the SSC/Caltech for its gracious hospitality during summer 2009, when much of this paper was written. EC was supported through the Caltech Summer Undergraduate Research Fellowship program and the \textit{Spitzer} Enhanced Science Fund. ELF is supported by the \textit{Spitzer} Fellowship Program through a contract with JPL/Caltech/NASA. We thank John Moustakas for advice on using the SDSS spectra, and St\'{e}phane Charlot for providing the CB07 models. 
We also warmly thank Lee Armus, George Helou, Bradford Holden, Daniel Kelson, Francine Marleau, Samir Salim, and Nick Scoville for stimulating discussions pertaining to this work. Finally, we are grateful to the anonymous referee for providing useful feedback that improved this work. This research is partially supported by the National Optical Astronomy Observatory which is operated by the Association of Universities for Research in Astronomy, Inc. (AURA) under a cooperative agreement with the National Science Foundation. This work is in part based on observations made with the \textit{Spitzer Space Telescope}, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA. Support for this work was provided by NASA through an award issued by JPL/Caltech. The \textit{Spitzer}/MIPS survey of the \bootes \ region was obtained using GTO time provided by the \textit{Spitzer} Infrared Spectrograph Team (James Houck, P.I.) and by M. Rieke. Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web Site is http://www.sdss.org/. The SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions. 
The Participating Institutions are the American Museum of Natural History, Astrophysical Institute Potsdam, University of Basel, University of Cambridge, Case Western Reserve University, University of Chicago, Drexel University, Fermilab, the Institute for Advanced Study, the Japan Participation Group, Johns Hopkins University, the Joint Institute for Nuclear Astrophysics, the Kavli Institute for Particle Astrophysics and Cosmology, the Korean Scientist Group, the Chinese Academy of Sciences (LAMOST), Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, Ohio State University, University of Pittsburgh, University of Portsmouth, Princeton University, the United States Naval Observatory, and the University of Washington. {\it Facilities:} \facility{Spitzer(MIPS,IRAC,IRS)} \facility{KPNO:2.1m(ONIS,SQUID,FLAMINGOS,FLAMINGOS-1)} \facility{Mayall(Mosaic-1)} \bibliographystyle{apj}
\section{Introduction} As a consequence of the strong nuclear interaction, the nuclear mass $M$ is not just the sum of the masses of the individual nucleons. The difference between these two quantities is an indicator of the stability of a given nucleus: the larger the difference, the more stable the nucleus. An accurate description of this binding energy as a function of the number of neutrons and protons is a recurrent research topic in nuclear physics \cite{Lunn03} and nuclear astrophysics \cite{Rol88}. The semi-phenomenological liquid drop model, in which the nucleus is described as a very dense, charged liquid drop, is the oldest and simplest approach to this problem \cite{Bohr98}. It provides a qualitative description of the binding energy though it fails to capture features related to the quantum nature of the single particles (neutrons and protons) inside the nucleus. This is clearly observed in Fig. \ref{boustro}, where we have plotted the difference between measured masses \cite{Aud03} and Liquid Drop Model (LDM) predictions \cite{ILDM}, as a function of the proton number $Z$, mass number $A$, neutron number $N$, and as an ordered list \cite{Hir04b,Bar05}. \begin{figure} \includegraphics[width=.95\columnwidth]{fig1.ps} \caption{(Color online) Mass differences plotted as a function of $Z$, $A$, $N$, and as an ordered list \cite{Hir04b,Bar05}.} \label{boustro} \end{figure} The sharp valleys and round peaks which remain after the removal of the smooth LDM mass contribution contain information related to shell effects due to the quantum motion of the individual nucleons inside the nucleus, nuclear deformations, and nuclear residual interactions. One of the main goals of the present paper is to further investigate the details of these corrections. 
Most theoretical descriptions of nuclear mass models have as a starting point the general expression \begin{eqnarray} M = {\bar M} + \delta M \end{eqnarray} where ${\bar M}$ is a smooth function of the number of nucleons, usually the liquid drop mass formula. By contrast, $\delta M$ is a fluctuating function of the number of nucleons which accounts for the quantum nature of protons and neutrons within the many body problem. There is a variety of nuclear mass models in the literature; two of the most widely used are the finite range droplet model (FRDM) \cite{Moll95}, which combines the macroscopic effects with microscopic shell and pairing corrections, including explicit deformation effects and the Strutinsky procedure \cite{stru}, and, on the other hand, the Duflo and Zuker (DZ) \cite{Duf94} model, where the microscopic corrections are functions of the valence numbers of protons and neutrons. The latter is inspired by the shell model, including explicitly the diagonal two- and three-body residual interactions between valence particles and holes. In principle the fluctuating part $\delta M$ also depends on the details of the interaction. However, according to Strutinsky's \cite{stru} energy theorem, the leading contribution can be evaluated within the mean field approximation which assumes the nucleus is composed of free nucleons confined by a one-body potential. It has been shown \cite{patricio} that even a simple one-body potential, in which the nucleons are confined inside a rigid sphere (spherical model from now on), with radius $R = r_0 {N_{nuc}}^{1/3}$ ($r_0 \sim 1.1$~fm and $N_{nuc}$ the number of nucleons), describes qualitatively some aspects of the experimental $\delta M$. However, for a more quantitative comparison one has to include small multipolar deformations of the spherical cavity and an effective number of nucleons \cite{Boh02,patricio}. 
While the idea of employing a spherical well to describe the independent particle model of the nucleus is rather old \cite{Gree55}, the corresponding magic numbers, associated with the zeros of the spherical Bessel functions, are in only rough agreement with the observed nuclear shell closures, even when an effective rescaling is employed \cite{patricio}. The spherical model has shown its best predictive power in systems with just one kind of particle, like electrons in spherical metal clusters, where shell closures are predicted in close agreement with the experimental observation \cite{Pav98}. On the theoretical side, a clear advantage of the spherical model is that $\delta M$ can be evaluated analytically in the semiclassical limit by expressing the exact spectral density of a quantum particle in a sphere as a trace formula \cite{blo}, namely, as a sum over periodic orbits of the classical counterpart. In this way explicit expressions for $\delta M$ are available for a nucleus composed of an arbitrary number of nucleons. In this letter we will show that this simple spherical model is especially well suited for the description of the autocorrelations $C(q)$ of $\delta M$ as a function of the number of particles. We will show that a quantitative agreement with the experimental mass autocorrelations can be obtained without any of the extensions (deformations of the sphere and an effective number of nucleons) needed for the case of the microscopic contributions to the nuclear mass. This is indeed remarkable given the simplicity of the model and the complex behavior of the nuclear many body problem. 
\section{Autocorrelations} Our object of study is the autocorrelation, \begin{eqnarray} \label{auto} C(q) = \frac{F(q)}{F(0)}\frac{N}{N-q} \end{eqnarray} with \begin{eqnarray} F(q) = \sum_{i}\delta M(i) \, \delta M(i+q) \end{eqnarray} where the sum runs, depending on the case, over the total number of nucleons, the neutron number $N$, or over a set including all possible nuclei as given by the boustrophedon list \cite{Hir04b,Bar05}. We shall also investigate $C(q)$ inside an isotopic chain, namely, we fix the number of protons $Z$ and examine the autocorrelations among all isotopes. Autocorrelations are a useful tool for identifying relationships between elements in a list or an array. The autocorrelation of a constant distribution is also a constant distribution, and that of a pure sine or cosine is itself oscillatory, with the same period. On the other hand, the autocorrelation of a random distribution is a delta function peaked at $q=0$, plus a small random signal for any other $q$, signaling a vanishing correlation length among the elements of the distribution. The fluctuating part of the nuclear mass distribution can be defined by \begin{eqnarray} \label{exp} \delta M_{exp} = M_{exp} - {\bar M}_{drop} ~, \end{eqnarray} where $M_{exp}$ is the experimental value for a certain nucleus according to the nuclear data chart \cite{Aud03} and ${\bar M}_{drop}$ is the prediction of the liquid drop model \cite{ILDM}. As shown in Fig.~2, the autocorrelation $C(q)$ has a well defined oscillatory behavior with clear maxima and minima related to the presence of shell closures, as seen in Fig.~1. When the oscillation amplitude decreases, the position of the first zero in $C(q)$ provides an estimate of the size of the region in the nuclear chart where the microscopic, fluctuating contributions to the nuclear masses are strongly correlated. 
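As a concrete illustration of the definition of $C(q)$, the following minimal sketch evaluates it directly from a list of fluctuations; the input here is a synthetic cosine, not nuclear data, and it displays the behavior described above for harmonic signals.

```python
import math


def autocorrelation(dM, q):
    """C(q) = F(q)/F(0) * N/(N-q), with F(q) = sum_i dM[i]*dM[i+q]
    taken over all index pairs available in the list."""
    N = len(dM)
    F = lambda s: sum(dM[i] * dM[i + s] for i in range(N - s))
    return F(q) / F(0) * N / (N - q)


# A pure cosine of period 20: C(q) oscillates with the same period,
# C(10) close to -1 and C(20) close to +1.
signal = [math.cos(2.0 * math.pi * i / 20.0) for i in range(200)]
```

The factor $N/(N-q)$ compensates for the shrinking number of pairs at large $q$, so a pure harmonic input returns an undamped oscillatory $C(q)$.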
It will be shown that this region can include as many as 10 to 15 isotopes or isotones, with at least 200 neighboring nuclei significantly correlated. The oscillatory behavior of $\delta M$ is closely related to the oscillations in $C(q)$. In what follows it will be shown that not only the oscillation length, but also other details of these oscillations, are well described by the spherical model. Theoretically $\delta M$ is expressed as a function of the spectral density $g(E) = \sum_i \delta(E-E_i) = \bar g(E) + \delta g(E)$ of the one body Hamiltonian (in our case a free fermion confined in a spherical cavity) as, \begin{eqnarray*} \label{M} \delta M = M - {\bar M} \end{eqnarray*} with \begin{eqnarray} \label{exM} M = 2 \sum_{i=1}^{N_{nuc}/2}E_i = 2\int^{E_F}E \,g(E)\, dE \end{eqnarray} and \begin{eqnarray*} {\bar M} = 2\int^{{\bar E}_F}E \,{\bar g}(E) \, dE , \end{eqnarray*} where $E_i$ are the eigenvalues of the one-body Hamiltonian and ${\bar g}$ and $\delta g$ are the mean and fluctuating part of the spectral density respectively. The exact ($E_F$) and smooth (${\bar E}_F$) Fermi energies are obtained explicitly as a function of the number of particles by inversion of the following relation, \begin{eqnarray} \label{N} \frac{N_{nuc}}{2} = \int^{E_F} g(E)\, dE = \int^{{\bar E}_F} {\bar g}(E)\, dE , \end{eqnarray} where $N_{nuc}$ is the number of nucleons (neutrons or protons) and the factor two accounts for the spin degeneracy. The final expression of $\delta M$ in terms of the spectral density is given by, \begin{eqnarray*} \label{fluc} \delta M(N_{nuc}) = M(N_{nuc}) - {\bar M}(N_{nuc}) \\= 2 \int^{E_F}E \,g(E)\, dE - 2\int^{{\bar E}_F}E \,{\bar g}(E) \, dE . \end{eqnarray*} In order to compute analytically the autocorrelation $C(q)$ we will first evaluate $\delta M$ by using the semiclassical expression for the fluctuating part of the spectral density $\delta g(E)$ in a spherical cavity. 
We then show how to obtain ${\bar E}_F$ as a function of the number of particles $N_{nuc}$. It is well known \cite{baduri} that, for generic cavities, the smooth part of the spectral density ${\bar g}(E)$ in three dimensions is given by, \begin{eqnarray} {\bar g}(E) = \frac{m}{2\pi^2 \hbar^2}\left [Vk - \frac{\pi}{4} S + \frac{1}{12 k}\int dS \left ( \frac{1}{R_1}+\frac{1}{R_2}\right)\right] \label{gbar} \end{eqnarray} where $E= \hbar^2 k^2/2m$, $V$ is the volume of the cavity, $S$ is its surface area, and $R_1, R_2$ are the local principal radii of curvature. For a spherical cavity of radius $R$ (in units $\hbar^2/2m = 1$), Eq. (\ref{gbar}) reduces to, \begin{eqnarray} {\bar g}(E) = \frac{1}{3\pi}E^{1/2}R^3 -\frac{1}{4}R^2+\frac{R}{6\pi}E^{-1/2} \end{eqnarray} In this way the mean Fermi energy is explicitly obtained as a function of $N_{nuc}$ by performing the integral in Eq. (\ref{N}) and inverting the resulting relation. In the following section we give a brief account of how to evaluate $\delta g(E)$ semiclassically by a trace formula involving only classical quantities. \section{Semiclassical evaluation of the spectral density in a spherical cavity} The oscillatory part of the spectral density describes the fine structure of the spectrum. These oscillations are related to classical periodic orbits inside the cavity \cite{bal} (for an introduction see \cite{baduri,patricio}), \begin{eqnarray} \label{osc} \delta g(E)=\sum_{\alpha}A_{\alpha}(E)\exp(iS_{\alpha}(E)/\hbar+\nu_{\alpha}) ~, \end{eqnarray} where the index $\alpha$ labels the periodic orbits, $S_\alpha$ is the classical action and $\nu_{\alpha}$ is the Maslov index. As a general rule, the amplitude $A_\alpha(E,L)$ is a decreasing function of the cavity size $L$ but depends strongly on its shape. It increases with the degree of symmetry of the cavity. It is maximal in spherical cavities and minimal in cavities with no symmetry axis. 
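The inversion of the smooth counting function for the spherical cavity can be carried out numerically as well; the following sketch works in units of $E_0 = \hbar^2/2mR^2$, integrates the spherical-cavity ${\bar g}(E)$ term by term, and inverts by simple bisection (the function names and the bisection cutoff are our own choices, not from the text).

```python
import math


def smooth_counting(eps):
    """Integral of the smooth spectral density of a sphere up to
    eps = E/E0: (2/9pi) eps^{3/2} - eps/4 + (1/3pi) eps^{1/2}."""
    x = math.sqrt(eps)
    return 2.0 * x**3 / (9.0 * math.pi) - x**2 / 4.0 + x / (3.0 * math.pi)


def mean_fermi_energy(n_occupied, eps_max=1e6):
    """Solve smooth_counting(eps) = n_occupied for eps by bisection;
    with spin degeneracy, n_occupied corresponds to N_nuc / 2."""
    lo, hi = 0.0, eps_max
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if smooth_counting(mid) < n_occupied:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For a closed-form alternative one can solve the cubic equation in $\epsilon^{1/2}$ directly; bisection is used here only to keep the sketch short.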
The difference (for the same volume) between these two limits can be of orders of magnitude. \subsection{The spherical cavity} The oscillating part of the spectral density of a particle in a spherical cavity of radius $R$ has already been analyzed in the literature \cite{blo1,blo2}. Below we provide a brief overview and refer to \cite{blo2} for an account of the details of the calculation. For a spherical geometry the closed stationary trajectories are planar regular polygons lying in a plane containing a diameter. The length $L$ of the trajectories is given by the simple relation $L=2pR\sin(\phi)$ where $p$ is the number of vertices of the polygon and $\phi=\pi t/p$, with $t$ the number of turns around the origin of a specific periodic orbit. Two cases must be distinguished: orbits with $p=2t$, corresponding to a diameter traversed $t$ times, contribute to the density of states as, \begin{eqnarray} \label{per1} {\delta g_{D}(E)}=-\frac{1}{2\pi E_0}\sum_{t=1}^{\infty}\frac{1}{t}\sin(4t\sqrt{E/E_0}) , \end{eqnarray} where $E = \frac{\hbar^2 k^2}{2m}$ and $E_0 = \frac{\hbar^2}{2mR^2}$. For the case $p>2t$, corresponding to regular polygons, the contribution to the spectral density is given by, \begin{eqnarray*} \label{per} {\delta g_{P}(E)}=\frac{1}{E_0} \left(\frac{E}{E_0}\right)^{1/4} \sum_{t=1}^{\infty} \sum_{p>2t}(-1)^t \,{\sin(2\phi)} \, \\ \sqrt{\frac{\sin(\phi)}{p\pi}} \, \sin\left(\frac{3\pi}{4}+p\sin(\phi)\sqrt{E/E_0}\right). \end{eqnarray*} The complete expression for the fluctuating part of the spectral density is, \begin{equation} \label{oscR} \delta g(E)=\delta g_P(E)+\delta g_D(E) , \end{equation} where the first term yields the leading correction for sufficiently large cavities. A similar calculation can in principle be carried out for a chaotic cavity. In this case the spectral density can also be written in terms of classical periodic orbits by using the Gutzwiller trace formula. 
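The two orbit families above can be summed directly on a computer; a sketch follows, with the infinite sums truncated at finite $t$ and $p$ (the cutoff values are arbitrary illustrations, and no damping of long orbits is applied here).

```python
import math


def delta_g_sphere(E, E0, t_max=10, p_max=40):
    """Fluctuating spectral density of a spherical cavity: diameter
    orbits (p = 2t) plus regular polygons (p > 2t), summed up to the
    finite cutoffs t_max and p_max. E and E0 share the same units."""
    s = math.sqrt(E / E0)
    # Diameter orbits repeated t times
    g_d = -sum(math.sin(4 * t * s) / t
               for t in range(1, t_max + 1)) / (2.0 * math.pi * E0)
    # Polygonal orbits with p vertices and winding number t
    g_p = 0.0
    for t in range(1, t_max + 1):
        for p in range(2 * t + 1, p_max + 1):
            phi = math.pi * t / p
            g_p += ((-1) ** t * math.sin(2 * phi)
                    * math.sqrt(math.sin(phi) / (p * math.pi))
                    * math.sin(3 * math.pi / 4 + p * math.sin(phi) * s))
    g_p *= (E / E0) ** 0.25 / E0
    return g_d + g_p
```

As expected of a fluctuating density, the result oscillates around zero as the energy is scanned.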
Although an explicit expression for the length of the periodic orbits, equivalent to Eq. (\ref{oscR}), is not in general available in this case, it is still possible to estimate the amplitude of the oscillating part by using symmetry arguments. This amplitude increases with the symmetry of the cavity. In cavities with one or several symmetry axes, periodic orbits are degenerate, namely, there exist different periodic orbits of the same length related by symmetry transformations. It can be shown that the amplitude, as a function of $k$, is enhanced by a factor $(kR)^{1/2}$ \cite{bal} for each symmetry axis. A spherical cavity has three symmetry axes, so the symmetry factor scales as $S \sim (kR)^{3/2} \gg 1$. The factor $R$ is a typical length of the cavity. By contrast, chaotic cavities of the same volume have no additional symmetries and the symmetry factor $S$ is unity, corresponding to the contribution of a single unstable periodic orbit. Consequently, finite-size effects are much more important in cavities with high symmetry. We have now all the ingredients to compute the autocorrelation $C(q)$ as a function of the number of nucleons in the rigid spherical approximation for the nucleus. \section{Mass autocorrelations in the nuclear spherical model. Results and comparison with experiment} In this section we adapt our previous results to the specific case of the nucleus. Our aim is to evaluate the autocorrelation function $C(q)$ given in Eq. (\ref{auto}). We now describe the smooth part of the ground state energy ${\bar M}$ by means of the liquid drop model. The fluctuating part $\delta M$ is computed by assuming that nucleons, protons and neutrons, are confined in a spherical cavity. Obviously this is a mean field approximation that should become better as the number of nucleons grows. For comparison with the experimental results we will typically remove those nuclei with $N < 30$, a region where the mean field approximation is not appropriate. 
In our calculations, the radius $R$ is related to the number of nucleons $N_{nuc}$ by $R=r_0{N_{nuc}}^{1/3}$ with $r_0 \sim 1.1$~fm. We remark that since neutrons and protons are distinguishable one has to consider these contributions separately, each one with its own Fermi energy but with the same radius. We are now ready to write down an explicit analytical expression for $\delta M$, \begin{eqnarray} \label{flucn} \delta M = 2 \int^{E_F}E \,g(E)\, dE - 2\int^{{\bar E}_F}E \,{\bar g}(E) \, dE , \end{eqnarray} where the fluctuating part of the spectral density $\delta g(E)$ is given by Eq. (\ref{oscR}), and $\bar E_F$ is expressed as a function of the number of particles $N_{nuc}$ by solving exactly the cubic equation in ${\epsilon}^{1/2}$, \begin{eqnarray} \frac{N_{nuc}}{2} = \int_{0}^{\bar E_F}{\bar g}(E)\, dE = \frac{2}{9\pi}{\epsilon}^{3/2} -\frac{\epsilon}{4}+ \frac{1}{3\pi}{\epsilon}^{1/2} \end{eqnarray} with $\epsilon = {\bar E_F}/E_0$. Finally the exact Fermi energy $E_F$ is computed by inverting numerically Eq. (\ref{N}). In all cases we assume a mass $m_p \sim m_n \sim 940$~MeV. The sum over periodic orbits in Eq. (\ref{oscR}) has a natural cutoff at scales (lengths of periodic orbits) for which inelastic processes that break the quantum coherence become relevant. In order to account for this fact we have included in the spectral density Eq. (\ref{oscR}) a damping factor $k(l)=\frac{l/\xi}{\sinh(l/\xi)}$, where $l$ is the length of the periodic orbit and $\xi$ a coherence length that acts as an effective cutoff for $l \gg \xi$. Following the estimation of Ref.~\cite{patricio} for the nuclear case we have set $\xi \sim 5 R$. We have checked that the gross features of $C(q)$ do not depend on the cutoff, provided that enough periodic orbits are taken into account, although other quantities, like the amplitude of the oscillations of $C(q)$, may depend on it. 
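The damping factor $k(l)$ just introduced is elementary to implement; a sketch, where $l$ and $\xi$ need only share the same units and $\xi = 5R$ is the choice adopted in the text:

```python
import math


def damping(l, xi):
    """Orbit damping k(l) = (l/xi)/sinh(l/xi): close to 1 for orbits
    much shorter than the coherence length xi, exponentially small
    for l >> xi."""
    x = l / xi
    return 1.0 if x == 0.0 else x / math.sinh(x)
```

In the truncated orbit sum, each term would simply be multiplied by this factor evaluated at the orbit length.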
This value of the coherence length can be associated with an effective temperature \cite{baduri} close to 1 MeV, typical of the pairing energies not included in the model. \subsection{Comparison with experimental results: $C(q)$ as a function of the number of neutrons} We now compute $C(q)$, defined in Eq. (\ref{auto}), for a nucleus composed of $N$ neutrons and $Z$ protons with the fluctuating part of the mass given by Eq. (\ref{flucn}). First we examine the autocorrelation function as a function of the total number of neutrons $N$. We remark that the predictions of our model for $C(q)$ are essentially parameter-free. Since there are many different nuclei with the same number of neutrons, a proper averaging method is needed. In order to proceed, $C(q)$ is evaluated as follows (see Fig.~\ref{Nfrombous}, right): we first obtain the analytical prediction for $\delta M = \delta M(N) + \delta M(Z)$ for each of the $2140$ combinations of $N$ and $Z$, then perform an average over different nuclei with the same $N$, and finally compute the autocorrelation function $C(q)$. The experimental $C(q)$ is obtained by using the same averaging procedure. As shown in Fig. \ref{Nfrombous}, despite the simplicity of the model, the agreement with the experimental results is quite satisfactory. It accurately reproduces both the amplitude of the oscillations and the position of the maxima and minima. The agreement between theory and experiment gets better if only heavier nuclei are considered. This is expected due to the mean-field nature of the model. The agreement between theory and experiment could be improved if, as discussed in \cite{patricio}, multipolar corrections are considered. However we prefer to stick to our parameter-free model in order to emphasize that the main features of the autocorrelation function are related to the spherical symmetry of the problem. 
We remark that similar results are obtained if, instead of taking into account all the possible combinations of $N$ and $Z$, we make the simple assumption $\delta M\sim 2 \delta M(N)$, with $R = r_0 (2N)^{1/3}$. For the sake of completeness we have also computed $C(q)$ as a function of the total number of nucleons $A = N +Z$. As expected (see Fig.~\ref{Afrombous}), a similar degree of agreement is found. \begin{figure*}[ht] \vspace {1cm} \hfill \begin{minipage}[t]{.45\textwidth} \includegraphics[width=\columnwidth,clip]{fig2a.eps} \end{minipage} \begin{minipage}[t]{.45\textwidth} \includegraphics[width=\columnwidth,clip]{fig2b.eps} \end{minipage} \caption{(Color online) The autocorrelation $C(q)$ as a function of $N = 30, \ldots, 154$ (Left) and $N = 60, \ldots, 154$ (Right). In both cases the agreement with the experimental results (diamonds) is quite good. For $N > 60$ (Right) the model reproduces correctly both the amplitude of the oscillations and the positions of the maxima and minima of the experimental data.} \label{Nfrombous} \end{figure*} \begin{figure*}[ht] \vspace {1cm} \hfill \begin{minipage}[t]{.45\textwidth} \includegraphics[width=\columnwidth,clip]{fig3a.eps} \end{minipage} \begin{minipage}[t]{.45\textwidth} \includegraphics[width=\columnwidth,clip]{fig3b.eps} \end{minipage} \caption{(Color online) The autocorrelation $C(q)$ as a function of $A = N+Z = 50, \ldots, 254$ (Left) and $A = 116, \ldots, 254$ (Right). In both cases the agreement with the experimental results (diamonds) is quite good. For $A \geq 116$ (Right) the model reproduces correctly both the amplitude of the oscillations and the positions of the maxima and minima of the experimental data.} \label{Afrombous} \end{figure*} \subsection{Comparison with experimental results: $C(q)$ as a function of the boustrophedon ordering scheme} By performing averages (for $A$ or $N$ fixed) over the nuclear data-chart we may be losing valuable information about nuclear mass correlations. 
Moreover, since cuts along fixed $N$ or $A$ have a small number of nuclei, it is difficult to extract definite conclusions. To overcome these difficulties, we organize all nuclei with measured mass by ordering them in a boustrophedon, namely, a 1D list composed of $2140$ entries numbered as follows: even-$A$ nuclei are ordered by increasing $N-Z$, while odd-$A$ ones follow a decreasing value of $N-Z$. We have evaluated both the experimental and the analytical autocorrelation $C(q)$, Eq. (\ref{auto}), as a function of the order number of the boustrophedon. For each $i=1,\ldots,2140$, $\delta M$ is evaluated for the specific $N$ and $Z$ combination chosen according to the above classification scheme, as we did previously, but in this case we have not performed any average. As shown in Fig. \ref{bous}, the agreement between theory and experiment is also quite satisfactory for this more general correlation function. Both the global oscillatory behavior and the more microscopic details (see right plot in Fig. \ref{bous}) are well reproduced. From the above extensive analysis we conclude that the main features of the nuclear mass correlations are captured by the simple spherical model. As was mentioned previously, our analytical results could be further improved by considering small multipole corrections to the spherical shape \cite{creagh}. \begin{figure*}[ht] \begin{minipage}[t]{.45\textwidth} \includegraphics[width=\columnwidth,clip]{fig4a.eps} \end{minipage} \begin{minipage}[t]{.45\textwidth} \includegraphics[width=\columnwidth,clip,angle=0]{fig4b.eps} \end{minipage} \caption{(Color online) The autocorrelation $C(q)$ as a function of the order number according to the boustrophedon list. (Left) Both the result for the spherical model and the experimental data are obtained by considering the $2140$ possible combinations of $N$ and $Z$. The agreement between theory (solid line) and experimental data (diamonds) is quite good. 
(Right) The same but now $C(q)$ is plotted only in the window $q < 150$.} \label{bous} \end{figure*} However, it is remarkable that our simple spherical model can reproduce in great detail average properties of the nuclear autocorrelations. \section{Power spectrum and integrable dynamics} Finally, as a further check of the validity of our results, we compare the power spectrum associated with the nuclear mass fluctuation $\delta M(i)$ ($i=1,\ldots, N=2140$ is the label of the nuclei according to the boustrophedon ordering) with the prediction of the spherical model. The discrete Fourier transform of the mass fluctuation is \begin{equation} F(k)={\frac{1}{\sqrt{N}}} \sum_j {\frac{\delta M(j)}{\sigma_{\rm rms}}} \exp\left({\frac{-2\pi ijk}{N}}\right), \end{equation} with the root-mean-square (rms) deviation given by \begin{equation} \sigma_{\rm rms}= \left[{\frac 1 N} \sum_{j=1}^N \left(\delta M(j) \right)^2 \right]^{1/2}, \end{equation} where $\delta M(j)$ is either the experimental or the analytical fluctuating part of the nuclear mass. The decay of the associated power spectrum $S(k)= |F(k)|^2$ provides information about the type of dynamics of the model. Thus it can be shown \cite{rel} that, for scales roughly in between the shortest periodic orbit and the mean level spacing, a power-law decay $S(k)\sim k^{-\alpha}$ with $\alpha = 4$ corresponds to integrable classical dynamics. In Fig. \ref{power} we observe a close agreement between the power spectrum of the spherical model and that of the experimental nuclear masses. Moreover, the decay in the range $\sim [1,3]$, which includes frequencies between those associated with the mean level spacing and with the shortest periodic orbit, follows a power law with $\alpha \sim 4.2$, in agreement with the prediction for integrable dynamics. 
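The two ingredients of this check, the boustrophedon ordering and the power spectrum $S(k)$, can be sketched as follows. The sort key encodes one plausible reading of the ordering scheme above (within each mass number $A$, even-$A$ nuclei by increasing $N-Z$ and odd-$A$ ones by decreasing $N-Z$); tie-breaking details are not fully specified in the text, so the key is a hypothetical helper. Note that \texttt{np.fft.fft} uses the same $\exp(-2\pi i jk/N)$ sign convention as the definition of $F(k)$.

```python
import numpy as np

def boustrophedon_key(N, Z):
    """Sort key for the boustrophedon list: nuclei arranged by A = N + Z,
    even-A ones by increasing N - Z, odd-A ones by decreasing N - Z
    (one plausible reading of the ordering scheme; hypothetical helper)."""
    A = N + Z
    return (A, N - Z if A % 2 == 0 else Z - N)

def power_spectrum(delta_m):
    """S(k) = |F(k)|^2, with F(k) the unitary DFT of delta_M / sigma_rms."""
    dm = np.asarray(delta_m, dtype=float)
    n = len(dm)
    sigma_rms = np.sqrt(np.mean(dm ** 2))
    f = np.fft.fft(dm / sigma_rms) / np.sqrt(n)
    return np.abs(f) ** 2

# Usage: order an (N, Z, delta_M) table, then take the spectrum
table = [(3, 1, 0.5), (2, 2, -0.2), (4, 1, 0.1), (3, 2, -0.4)]
table.sort(key=lambda row: boustrophedon_key(row[0], row[1]))
s_k = power_spectrum([row[2] for row in table])
```

With the unitary $1/\sqrt{N}$ normalization and the rms scaling, Parseval's theorem gives $\sum_k S(k) = N$, a convenient sanity check before fitting the power-law exponent.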
Based on these results we suggest that the power spectrum could be utilized as an effective test to check whether a strongly interacting many-body system is indeed close to integrability or not. The power spectrum of differences between measured masses and those calculated in different models has been studied in \cite{Bar05}. A gradual vanishing of the slope $\alpha$ was observed as more sophisticated and realistic models were utilized. For the most realistic models a white-noise signal, $\alpha = 0$ (all frequencies have equal weight), was found. For a detailed study of intermediate situations we refer to \cite{Hir04a}. \begin{figure}[ht] \includegraphics[width=.95\columnwidth,clip]{fig5.eps} \caption{(Color online) Power spectrum $S(k)$ of the nuclear mass fluctuation for the boustrophedon ordering. As is observed, the agreement between theory and experiment is very good in the low and intermediate frequency region. Their power-law decay with $\alpha \sim 4.2$ is close to the result $\alpha = 4$ predicted for classically integrable systems.} \label{power} \end{figure} \section{Conclusions} A simple semiclassical analysis, where protons and neutrons are described by free particles bouncing elastically back and forth inside a rigid sphere, has been shown to nearly reproduce the autocorrelations of the differences between measured nuclear masses and those calculated using the liquid drop model. The results are remarkable, offering a different insight into the microscopic corrections needed to describe nuclear masses with precision. It has also been shown that it is possible to perform autocorrelation analysis of nuclear mass differences along very long chains of isotones, isotopes, isobars and other chains, a task generally considered very difficult to perform \cite{Olof04}. 
While interesting in themselves, these results could also provide a theoretical explanation of the amazing success of the two-dimensional Fourier analysis, performed in the $Z-N$ space, in the description and prediction of nuclear masses \cite{Bar05b}. AMG was supported by a Marie Curie Outgoing Fellowship, contract MOIF-CT-2005-007300. This work was supported in part by PAPIIT-UNAM and Conacyt-Mexico.
\section{Introduction} The analysis of large sets of binary data is a central issue in many fields such as biostatistics (\cite{Schildcrout05Longitudinal_Binary},~\cite{Wilbur02multivariate_Binary_regression}), image processing (\cite{Yue12Binary_neuroimaging}), machine learning (\cite{Banerjee08sparse_ML_binary},~\cite{Koh07logistic_l1regularized}), medicine (\cite{Christakis08socialNetwork_medicine}), text analysis (\cite{Taddy13multinomial_text}) and statistics (\cite{Ravikumar10Ising_logistic_l1regularized},~\cite{Sherman06Binary_autologistic_spatial},~\cite{Visaya15multivariate_Binary_longitudinal}). In this paper we consider binary series representing the edge activation in time-varying (\cite{Holme12TemporalNetworks}) and multilayer networks (\cite{Boccaletti14Multilayer_Networks}). The application focuses on financial networks since, in spite of the wide literature on theoretical models (e.g.~\cite{Acemoglu12Network_AggregateFluctuations},~\cite{Chaney14aer_InternTradeNetwork},~\cite{Mele17ecta_FormationDenseNetwork},~\cite{Graham17ecta_FormationNetwork_degree_heterogeneity}), the statistical analysis of their dynamical properties is still in its infancy (e.g., \cite{Billioetal12GrangerNet} and \cite{Diebold14NetworkTopology}). The study of temporal networks is very interdisciplinary and we expect our statistical framework to be of interest to many disciplines. The first issue in building a dynamic network model concerns the impact of covariates on the dynamic process of link formation. We propose a parsimonious model that can be successfully used for this purpose, relying on tensors and their decompositions. See~\cite{KoldaBader09},~\cite{Cichocki15Tensor_Multiway_Analysis} and~\cite{Cichocki16Tensor_theory} for a review. 
The main advantage in using tensors is the possibility of dealing with the complexity of novel data structures which are becoming increasingly available, such as networks, multilayer networks, three-way tables, spatial panels with multiple series observed for each unit (e.g., municipalities, regions, countries). The use of tensor algebra has the advantage of avoiding data reshaping and manipulation, and of preserving the intrinsic structure of the data. Another advantage of tensors stems from the decompositions and approximations, which provide representations in lower-dimensional spaces (see ch. 7-8 of \cite{Hackbusch12Tensor_book}). In this paper, we exploit the parallel factor (PARAFAC) decomposition for reducing the number of parameters to estimate, thus making inference on network models feasible. Another issue in network modelling concerns the time variation of the network topology. For example, structural breaks have been detected by~\cite{Billioetal12GrangerNet},~\cite{Ahelegbey16BStuctVAR} and~\cite{Bianchi19GraphicalSUR} in contagion networks and \cite{Giraitis16DynamicNetwork_estimating_Financial} found evidence of link persistence in interbank networks. Starting from these stylized facts, we propose a new Markov switching model for capturing structural changes in temporal networks. After~\cite{Hamilton89MS}, the Markov switching dynamics has been used in several time series models, such as VARs (\cite{Sims08LargeMarkovSwitch}), factor models (\cite{Kim98MarkovSwitch_Factor}), dynamic panels (\cite{Kaufmann15MSwitch_timevarying_transitions}), stochastic volatility (\cite{Chib02MCMC_MarkovSwitch_StochVol}), ARCH and GARCH (\cite{Haas04MarkovSwitch_GARCH}) and stochastic correlation (\cite{Casarin18BayesMS_DynCorrelation}). See \cite{Fruhwirth06FiniteMixtures_MarkovSwitch_book} for an introduction to Markov switching models. We contribute to this literature by applying Markov switching to tensor-valued data. 
Many real-world temporal networks exhibit sparsity (\cite{Newman10Networks_book}) and abrupt changes in the sparsity level across time. See also~\cite{Ahelegbey16SparseGVAR} for empirical evidence on financial networks. Motivated by this observation, we propose a zero-inflated logit regression for the edge activation and allow for Markov switching sparsity levels. We contribute to the statistics literature on models for network data (\cite{DuranteDunson14LogitDynamicNetwork_GP},~\cite{WangDuranteDunson17BayesLogitNetwork},~\cite{Carvalho08Sparse_Factor_gene_network}, \cite{Chen18Bayes_Dynamic_Network},~\cite{Berry19Bayes_Count_Network},~\cite{Snijders10MLE_Network_Dynamic},~\cite{Kolar10Estimate_Temporal_Nets}) and matrix-valued data (\cite{Windle14StateSpace_matrices},~\cite{Carvalho07Dynamic_Matrix_Graphical}) by proposing a nonlinear model for sparse tensor-valued data. The remainder of this paper is organized as follows. Section~\ref{sec:model} presents the model. Sections~\ref{sec:inference}-\ref{sec:posterior_approx} discuss the Bayesian inference procedure. Section~\ref{sec:application} provides an application to financial network data. Concluding remarks are given in Section~\ref{sec:conclusions}. Further details and results are provided in the supplementary material. \section{A Markov Switching Model for Networks} \label{sec:model} Relevant objects in our modelling framework are $D$-order tensors $\mathcal{X} \in \mathds{R}^{d_1\times\ldots\times d_D}$ of size $(d_1\times\ldots\times d_D)$, that are $D$-dimensional arrays, elements of the tensor product of $D$ vector spaces, each one endowed with a coordinate system. See~\cite{Hackbusch12Tensor_book} for an introduction to tensor spaces. A tensor can be thought of as the multidimensional extension of a matrix (i.e., a $2$-order tensor), where each dimension is called a mode. Other objects of interest in this paper are tensor slices, i.e. 
matrices obtained by fixing all but two of the indices of the array, and tensor fibers, i.e. vectors resulting from keeping fixed all indices but one. See \autoref{sec:apdx_tensor} for some background material on tensors. Tensors are particularly useful for representing multilayer temporal networks (\cite{Boccaletti14Multilayer_Networks} and~\cite{Kivela14Multilayer_Networks}). Let $G_t = (V_1,V_2,M,E_t)$ be a multilayer temporal network, where $V_1 = \lbrace 1,\ldots,I \rbrace$, $V_2 = \lbrace 1,\ldots,J \rbrace$ are two vertex sets, $M = \lbrace 1,\ldots,K \rbrace$ is the set of layers and $E_t \subset (V_1 \times V_2 \times M)$ is the edge set at time $t=1,\ldots,T$. The network connectivity can be encoded in a $4$-order tensor $\mathcal{X}$ of size $(I\times J\times K\times T)$, with entries \begin{equation} x_{ijk,t} = \Bigg\{ \begin{array}{cc} 1 & \text{if } \lbrace i,j,k \rbrace \in E_t \\ 0 & \text{if } \lbrace i,j,k \rbrace \notin E_t. \end{array} \end{equation} This definition is general enough to include undirected and directed networks, and undirected bipartite networks. It can be further extended to account for other types of networks (\cite{Kivela14Multilayer_Networks}). One of the most recurrent features of observed networks is sparsity. In random graph theory sparsity is defined asymptotically as the feature of a network where the number of edges grows subquadratically with the number of nodes \cite[see][ch.7]{Diestel12GraphTheory}. In finite graphs, sparsity occurs when there is an excess of zeros in the connectivity tensor, that is, when the degree distribution has a peak at $0$. To describe network sparsity we assume that the probability of observing an edge in each layer of the network is a mixture of a Dirac mass at $0$ and a Bernoulli distribution. Since the sparsity pattern in many real networks is not time homogeneous, we assume that both the mixing and the Bernoulli probabilities are time-varying. 
Finally, a logistic regression is assumed to include covariates. In summary, for each entry $x_{ijk,t}$ of the tensor $\mathcal{X}_t$ (that is, each edge of the corresponding network) we assume a zero-inflated logit regression model \begin{equation} \begin{split} x_{ijk,t}|\rho(t),\mathbf{g}_{ijk}(t) & \sim \rho(t) \delta_{\lbrace 0 \rbrace}(x_{ijk,t}) + (1-\rho(t))\delta_{\lbrace d_{ijk,t} \rbrace}(x_{ijk,t}) \\ d_{ijk,t} & = \mathds{1}_{\mathds{R}_+} (x_{ijk,t}^*) \\ x_{ijk,t}^* & = \mathbf{z}_{ijk,t}' \mathbf{g}_{ijk}(t) + \varepsilon_{ijk,t} \qquad \varepsilon_{ijk,t} \distas{iid} \text{Logistic}(0,1). \label{eq:model_all_xijkt} \end{split} \end{equation} where $\mathbf{z}_{ijk,t}\in\mathds{R}^Q$ is a vector of edge-specific covariates and $\mathbf{g}_{ijk}(t) \in \mathds{R}^Q$ is a time-varying edge-specific vector of parameters and $\rho(t)$ is the time-varying probability of excess of zeros in the network. Without loss of generality, we assume the set of covariates is common to all edges, i.e. $\mathbf{z}_{ijk,t} = \mathbf{z}_t$. The specification of the model is completed with the assumption that the parameters $\rho(t)$ and $\mathbf{g}_{ijk}(t)$ are driven by a hidden Markov chain $\lbrace s_t \rbrace_{t=1}^T$ with finite state space $\lbrace 1,\ldots,L \rbrace$, that is $\rho(t) = \rho_{s_t}$ and $\mathbf{g}_{ijk}(t) = \mathbf{g}_{ijk,s_t}$. The transition matrix of the chain is assumed to be time-invariant and denoted by $\boldsymbol{\Xi} = (\boldsymbol{\xi}_{1}',\ldots,\boldsymbol{\xi}_{L}')'$, where $\boldsymbol{\xi}_{l} = (\xi_{l,1},\ldots,\xi_{l,L})$ is a probability vector and $\xi_{i,j} = p(s_t=j | s_{t-1}=i)$ is the transition probability from state $i$ to state $j$. By integrating out $x_{ijk,t}^*$ in eq. 
\eqref{eq:model_all_xijkt}, we obtain the regime-specific probabilities of observing an edge from $i$ to $j$ in the layer $k$ \begin{align} p(x_{ijk,t} = 1 | \rho_l,\mathbf{g}_{ijk,l}) & = (1-\rho_l) \frac{\exp( \mathbf{z}_t' \mathbf{g}_{ijk,l})}{1+\exp( \mathbf{z}_t' \mathbf{g}_{ijk,l})} \\ p(x_{ijk,t} = 0 | \rho_l,\mathbf{g}_{ijk,l}) & = \rho_l + (1-\rho_l) \bigg( 1-\frac{\exp( \mathbf{z}_t' \mathbf{g}_{ijk,l})}{1+\exp( \mathbf{z}_t' \mathbf{g}_{ijk,l})} \bigg). \end{align} For ease of notation, we provide a compact representation of the general model. First, we define $\mathbb{X}^{d} = \lbrace \mathcal{X}\in \mathds{R}^{i_1\times\ldots\times i_d} \rbrace$ the set of real-valued $d$-order tensors of size $(i_1\times\ldots\times i_d)$, $\mathbb{X}_{0,1}^{d} = \lbrace \mathcal{X}\in \mathds{R}^{i_1\times\ldots\times i_d} : \mathcal{X}_{i_1,\ldots,i_d} \in \lbrace 0,1 \rbrace \rbrace \subset \mathbb{X}^{d}$ the set of adjacency tensors of size $(i_1\times\ldots\times i_d)$, and $\Psi : \mathbb{X}^{d} \rightarrow \mathbb{X}_{0,1}^{d}$ an operator such that $\mathcal{X}^* \mapsto \Psi(\mathcal{X}^*) \in \lbrace 0,1 \rbrace^{i_1\times\ldots\times i_d}$. For a tensor $\mathcal{X}_t^*$ with $k$-th slice $\mathbf{X}_{k,t}^* \in \mathds{R}^{I\times J}$ it is possible to write the model in tensor form by $\Psi(\mathbf{X}_{k,t}^*) = (\mathds{1}_{\mathds{R}_+}(x_{ijk,t}^*))_{i,j}$, where $\mathds{1}_A (x)$ is the indicator function, which takes value $1$ if $x\in A$ and $0$ otherwise. 
Second, we define the mode-$n$ product between a $D$-order tensor $\mathcal{X}\in\mathds{R}^{d_1\times\ldots\times d_D}$ and a vector $\mathbf{v}\in\mathds{R}^{d_n}$, as a $(D-1)$-order tensor $\mathcal{Y}\in\mathds{R}^{d_1\times\ldots\times d_{n-1}\times d_{n+1}\times\ldots\times d_D}$ whose entries are \begin{equation} \mathcal{Y}_{(i_1,\ldots,i_{n-1},i_{n+1},\ldots,i_D)} = (\mathcal{X} \times_n \mathbf{v})_{(i_1,\ldots,i_{n-1},i_{n+1},\ldots,i_D)} = \sum_{i_n=1}^{d_n} \mathcal{X}_{i_1,\ldots,i_n,\ldots,i_D} \mathbf{v}_{i_n} \, . \label{eq:tensor_moden_vector} \end{equation} By collecting the coefficients $\mathbf{g}_{ijk}(t)$ along the indices $i,j,k$ in a $4$-order tensor $\mathcal{G}(t) \in \mathds{R}^{I\times J \times K\times Q}$, we can rewrite eq.~\eqref{eq:model_all_xijkt} in the compact form: \begin{equation} \Bigg\lbrace \begin{array}{ll} \mathcal{X}_t = \mathcal{B}(t) \odot \Psi(\mathcal{X}_t^*) & \qquad b_{ijk}(t) \distas{iid} \mathcal{B}ern(1-\rho(t)) \\ \mathcal{X}^*_{t} = \mathcal{G}(t) \times_4 \mathbf{z}_t + \mathcal{E}_t & \qquad \varepsilon_{ijk,t} \distas{iid} \text{Logistic}(0,1) \end{array} \label{eq:model_compact_first} \end{equation} where $\mathcal{B}(t) \in \lbrace 0,1 \rbrace^{I\times J\times K}$ and $\mathcal{E}_t \in \mathds{R}^{I\times J\times K}$ are tensors of the same size of $\mathcal{X}_t$, with entries $b_{ijk}(t)$ and $\varepsilon_{ijk,t}$, respectively, and the symbol $\odot$ is the Hadamard product (\cite{KoldaBader09}). Matrix operations and results from linear algebra can be generalized to tensors (see~\cite{Hackbusch12Tensor_book}, \cite{Kroonenberg08AppliedMultiwayDataAnalysis}). This model is closely related to a switching regression representation (see Ch. 8 of \cite{Fruhwirth06FiniteMixtures_MarkovSwitch_book}) which can be used to carry out inference simultaneously for all coefficient tensors. 
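The mode-$n$ product of Eq.~\eqref{eq:tensor_moden_vector} is a single contraction and maps directly onto \texttt{np.tensordot}. The sketch below (0-indexed modes; helper names are ours) also evaluates the systematic part of the model, $\mathcal{G}(t) \times_4 \mathbf{z}_t$, and the resulting regime-specific edge probabilities $(1-\rho)\,\mathrm{logistic}(\mathbf{z}_t'\mathbf{g}_{ijk})$ for all edges at once.

```python
import numpy as np

def mode_n_product(X, v, n):
    """Mode-n product X x_n v: contract the n-th mode (0-indexed) of the
    tensor X with the vector v, dropping that mode."""
    return np.tensordot(X, v, axes=([n], [0]))

def edge_probabilities(G, z, rho):
    """Regime-specific p(x_ijk = 1) = (1 - rho) * logistic(z' g_ijk),
    computed jointly for all edges by contracting the covariate mode
    (the fourth mode, index 3) of G with z."""
    eta = mode_n_product(G, z, 3)          # z' g_ijk for every (i, j, k)
    return (1.0 - rho) / (1.0 + np.exp(-eta))

# Toy coefficient tensor with I = J = K = 2 nodes/layers and Q = 3 covariates
G = np.zeros((2, 2, 2, 3))
z = np.array([1.0, 0.5, -0.5])
P = edge_probabilities(G, z, rho=0.5)      # all etas are 0, so every P = 0.25
```

Since $p(x_{ijk,t}=0) = 1 - p(x_{ijk,t}=1)$ in the zero-inflated formulation, the complementary probability needs no separate computation.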
By introducing a dummy coding for $s_t$ through $L$ binary variables $\zeta_{t,l} = \mathds{1}_{\lbrace l \rbrace}(s_t)$, $l=1,\ldots,L$, model \eqref{eq:model_compact_first} is written as \begin{equation} \begin{cases} \mathcal{X}_t = \mathcal{B}(t) \odot \Psi(\mathcal{X}_t^*) & \quad b_{ijk}(t) \distas{iid} \mathcal{B}ern(1-\rho(t)) \\[2pt] \mathcal{X}_t^* = \mathcal{G} \times_4 (\boldsymbol{\zeta}_t \otimes \tilde{\mathbf{z}}_t)' + \mathcal{E}_t = \mathcal{G} \times_4 (\boldsymbol{\zeta}_t, \boldsymbol{\zeta}_t \otimes \mathbf{z}_t)' + \mathcal{E}_t & \quad \varepsilon_{ijk,t} \distas{iid} \text{Logistic}(0,1) \\[2pt] \boldsymbol{\zeta}_{t+1} = \boldsymbol{\Xi} \boldsymbol{\zeta}_t + \tilde{\mathbf{u}}_t & \quad \mathbb{E}[\tilde{\mathbf{u}}_t | \tilde{\mathbf{u}}_{t-1}] =0 \end{cases} \label{eq:model_factor_statespace_form} \end{equation} which is a switching SUR (\cite{Zellner62SUR}, \cite{Bianchi19GraphicalSUR}), where $\otimes$ denotes the Kronecker product, $\lbrace \tilde{\mathbf{u}}_t \rbrace_t$ is a martingale difference process, $\tilde{\mathbf{z}}_t = (1, \mathbf{z}_t)'$ and $\boldsymbol{\zeta}_t = (\zeta_{t,1}, \ldots, \zeta_{t,L})'$. We propose a parsimonious parametrisation of the model by exploiting tensor representations (see~\cite{KoldaBader09} for a review). In particular we assume a PARAFAC decomposition with fixed rank $R$ for the tensor $\mathcal{G}(t) = \mathcal{G}_{s_t}$: \begin{equation} \mathcal{G}(t) = \sum_{r=1}^R \boldsymbol{\gamma}_{1}^{(r)}(t) \circ \boldsymbol{\gamma}_{2}^{(r)}(t) \circ \boldsymbol{\gamma}_{3}^{(r)}(t) \circ \boldsymbol{\gamma}_{4}^{(r)}(t) \, , \label{eq:CP_decomposition} \end{equation} where the vectors $\boldsymbol{\gamma}_{h}^{(r)}(t) = \boldsymbol{\gamma}_{h,s_t}^{(r)}$, $h=1,\ldots,4$, $r=1,\ldots,R$, are called the marginals of the PARAFAC decomposition and have length $I$, $J$, $K$ and $Q$, respectively. See \autoref{sec:apdx_tensor} and the supplement for further details. 
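To see the parameter savings concretely, a rank-$R$ PARAFAC tensor as in Eq.~\eqref{eq:CP_decomposition} can be reconstructed from its marginals with a single sum of outer products; the helper below is our own illustration.

```python
import numpy as np

def parafac_reconstruct(marginals):
    """Rebuild a 4-order tensor from its PARAFAC marginals:
    G = sum_r gamma1_r o gamma2_r o gamma3_r o gamma4_r,
    where marginals is a list of R tuples of 1-D arrays of
    lengths (I, J, K, Q)."""
    g1, g2, g3, g4 = marginals[0]
    G = np.zeros((len(g1), len(g2), len(g3), len(g4)))
    for a, b, c, d in marginals:
        G += np.einsum('i,j,k,q->ijkq', a, b, c, d)
    return G

# Rank-1 example with I = J = Q = 2 and K = 1: the full tensor has
# I*J*K*Q = 8 entries but is described by R*(I+J+K+Q) = 7 parameters
marg = [(np.array([1.0, 2.0]), np.array([1.0, 0.0]),
         np.array([3.0]), np.array([1.0, 1.0]))]
G = parafac_reconstruct(marg)
```

The gap between $IJKQ$ and $R(I+J+K+Q)$ widens quickly with the network size, which is what makes inference feasible at the dimensions considered in the application.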
This specification permits us to: (i) achieve parsimony of the model, since for each value of the state $s_t$ the dimension of the parametric space is reduced from $IJKQ$ to $R(I+J+K+Q)$; (ii) introduce sparsity in the coefficient tensor, through a suitable choice of the prior distribution for the PARAFAC marginals. \section{Bayesian Inference} \label{sec:inference} As regards the prior distributions for the parameters of interest, we choose the following specifications. We assume a global-local shrinkage prior on $\boldsymbol{\gamma}_{h,l}^{(r)}$ \begin{equation} p(\boldsymbol{\gamma}_{h,l}^{(r)} | \bar{\boldsymbol{\zeta}}_{h,l}^r, \tau, \phi_r, w_{h,r,l}) \sim \mathcal{N}_{n_h}(\bar{\boldsymbol{\zeta}}_{h,l}^r, \: \tau \phi_r w_{h,r,l} \mathbf{I}_{n_h}) \label{eq:prior_gammas} \end{equation} for $r=1,\ldots,R$, each $h=1,\ldots,4$ and each $l=1,\ldots,L$, where $n_1= I$, $n_2=J$, $n_3=K$, $n_4=Q$. The parameter $\tau$ represents the global component of the variance, common to all marginals, $\phi_r$ is the level component and $w_{h,r,l}$ is the local component. The choice of a global-local shrinkage prior, as opposed to a spike-and-slab distribution, is motivated by the reduced computational complexity and the capacity to handle high-dimensional settings. In what follows we denote with $p(\mathcal{G}|\mathcal{W},\boldsymbol{\phi},\tau)$ the joint prior of the $\boldsymbol{\gamma}_{h,l}^{(r)}$, where $\mathcal{W} = \lbrace w_{h,r,l} \rbrace_{h,r,l}$. 
We assume the following hyperpriors for the variance components\footnote{We use the shape-rate formulation for the gamma distribution, such that $\mathbb{E}(x) = \alpha/\beta$, $Var(x)=\alpha/\beta^2$.}: \begin{align} \label{eq:prior_tau} p(\tau) & \sim \mathcal{G}a(\bar{a}^\tau, \bar{b}^\tau) \qquad \bar{a}^\tau = \bar{\alpha} R \\\label{eq:prior_phi} p(\boldsymbol{\phi}) & \sim \mathcal{D}ir(\bar{\boldsymbol{\alpha}}) \quad \qquad \bar{\boldsymbol{\alpha}} = \bar{\alpha}\boldsymbol{\iota}_R \\ \label{eq:prior_w} p(w_{h,r,l}|\lambda_l) & \sim \mathcal{E}xp(\lambda_l^2/2) \qquad \forall \, h,r,l \\ \label{eq:prior_lambda} p(\lambda_l) & \sim \mathcal{G}a(\bar{a}_l^\lambda,\bar{b}_l^\lambda) \qquad \forall \, l \, , \end{align} where $\boldsymbol{\iota}_n$ is the $n$-dimensional vector of ones. The further level of hierarchy for the local components $w_{h,r,l}$ is added with the aim of favouring information sharing across local components of the variance (indices $h$ and $r$) within a given regime $l$. The specification of an exponential distribution for the local component of the variance of the $\boldsymbol{\gamma}_{h,l}^{(r)}$ yields a Laplace (or Double Exponential) distribution for each component of the vectors once $w_{h,r,l}$ is integrated out, that is $\boldsymbol{\gamma}_{h,l,i}^{(r)}|\lambda_l,\tau,\phi_r \sim \text{Laplace}(0,\lambda_l/\sqrt{\tau\phi_r})$ for all $i=1,\ldots,n_h$. The marginal distribution of each entry, integrating all remaining random components, is a generalized Pareto distribution, which favours sparsity. In logit models it is not possible to separately identify the coefficients of the latent regression equation and the scale of the noise. As a consequence, we make the usual identifying restriction by fixing the scale of each $\varepsilon_{ijk,t}$ to one. 
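For completeness, the Laplace marginal stated above follows from the well-known normal scale-mixture identity underlying the Bayesian lasso of Park and Casella: integrating the exponential mixing variable gives

```latex
\begin{equation*}
\int_0^\infty \mathcal{N}\big(\gamma;\, 0,\, \tau\phi_r w\big)\,
\mathcal{E}xp\Big(w;\, \frac{\lambda_l^2}{2}\Big)\, \mathrm{d}w
= \frac{\lambda_l}{2\sqrt{\tau\phi_r}}
\exp\Big( -\frac{\lambda_l}{\sqrt{\tau\phi_r}}\, |\gamma| \Big),
\end{equation*}
```

i.e., a Laplace density with rate $\lambda_l/\sqrt{\tau\phi_r}$, consistent with the parameterization used in the text.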
The mixing probability of the observation model is assumed to be beta distributed: \begin{equation} p(\rho_l) \sim \mathcal{B}e(\bar{a}_l^\rho, \bar{b}_l^\rho) \qquad \forall \, l \, . \label{eq:prior_rho} \end{equation} A well-known identification issue for mixture models is the label switching problem (e.g., see \cite{Fruhwirth01LabelSwitch_PermutationSampler}). When the specific application provides meaningful restrictions on the value of some parameters (e.g., from theory, or interpretation), they can be used for identifying the regimes. Following this approach, we assume $\rho_1 > \rho_2 > \ldots > \rho_L$, meaning that regime $1$ is the sparsest and regime $L$ the densest. Finally, we assume that each row $\boldsymbol{\xi}_{l}$ of the transition matrix follows a Dirichlet distribution \begin{equation} p(\boldsymbol{\xi}_{l}) \sim \mathcal{D}ir(\bar{\mathbf{c}}_{l}) \quad \forall \, l \, . \label{eq:prior_xi} \end{equation} The overall structure of the hierarchical prior distribution is represented graphically by means of the directed acyclic graph in Fig.~\ref{fig:prior}. \begin{figure}[t] \centering \includegraphics[trim= 0mm 0mm 0mm 0mm,clip,scale= 1.00]{DAGtikz-figure0.eps} \caption{Directed acyclic graph of the model in eq.~\eqref{eq:model_compact_first} and prior structure in eq.~\eqref{eq:prior_gammas}-\eqref{eq:prior_xi}. Gray circles denote observable variables, white solid circles indicate parameters, white dashed circles indicate fixed hyperparameters. Directed edges represent the conditional independence relationships.} \label{fig:prior} \end{figure} \section{Posterior Approximation} \label{sec:posterior_approx} Since the joint posterior is not tractable, we apply Markov chain Monte Carlo (MCMC) combined with a data augmentation strategy (\cite{Tanner87DataAugmentation}). 
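The ordering restriction $\rho_1 > \rho_2 > \ldots > \rho_L$ introduced above for identification can also be enforced ex post, by permuting the regime labels of the posterior draws; a minimal sketch (helper names are ours, and only $\boldsymbol{\rho}$ and $\boldsymbol{\Xi}$ are relabeled here, while in the full model the coefficient tensors $\mathcal{G}_l$ would be permuted in the same way):

```python
import numpy as np

def relabel_by_sparsity(rho, Xi):
    """Permute regime labels so that rho_1 > rho_2 > ... > rho_L.
    Both the rows and the columns of the transition matrix Xi must be
    permuted consistently with the new labels."""
    perm = np.argsort(-np.asarray(rho))
    return np.asarray(rho)[perm], np.asarray(Xi)[np.ix_(perm, perm)]

# Usage on a two-regime draw: regime 2 is sparser, so labels are swapped
rho_new, Xi_new = relabel_by_sparsity([0.2, 0.9], [[0.6, 0.4], [0.3, 0.7]])
# rho_new = [0.9, 0.2]
```

Relabeling leaves the rows of $\boldsymbol{\Xi}$ as probability vectors, since the same permutation is applied to rows and columns.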
We introduce allocation variables for the mixture in eq.~\eqref{eq:model_all_xijkt} and the P\'olya-Gamma augmentation of~\cite{Polsonetal13PolyaGamma}, which allows for conjugate full conditional distributions and a better mixing of the MCMC chain. See also~\cite{WangDuranteDunson17BayesLogitNetwork} and~\cite{Holsclaw17PolyaGamma_NotHomogMarkovmodel} for an application of the P\'olya-Gamma scheme to network-response regression and hidden Markov models, respectively. Define $\boldsymbol{\mathcal{X}} = \lbrace \mathcal{X}_t \rbrace_{t=1}^T$, $\mathbf{s} = \lbrace s_t \rbrace_{t=0}^T$ and let $\boldsymbol{\theta}$ denote the set of parameters. For each $l=1,\ldots,L$, we define $\mathcal{T}_l = \lbrace t : \zeta_{t,l}=1 \rbrace$ and $T_l = \#\mathcal{T}_l$. The data augmented likelihood of the model in eq. \eqref{eq:model_compact_first} is \begin{align} L(\boldsymbol{\mathcal{X}},\mathbf{s}|\boldsymbol{\theta}) & = L(\boldsymbol{\mathcal{X}}|\mathbf{s}, \boldsymbol{\theta}) L(\mathbf{s} | \boldsymbol{\theta}), \label{eq:likelihood_X_y_state_1} \end{align} where \begin{align} L(\boldsymbol{\mathcal{X}} | \mathbf{s},\boldsymbol{\theta}) & = \prod_{l=1}^L \prod_{t\in \mathcal{T}_l} \prod_{i=1}^I \prod_{j=1}^J \prod_{k=1}^K \bigg( \frac{(1-\rho_l) \exp( \mathbf{z}_t' \mathbf{g}_{ijk,l})}{1+\exp( \mathbf{z}_t' \mathbf{g}_{ijk,l})} \bigg)^{x_{ijk,t}} \bigg( \rho_l + \frac{1-\rho_l}{1+\exp( \mathbf{z}_t' \mathbf{g}_{ijk,l})} \bigg)^{1-x_{ijk,t}} \label{eq:likelihood_X_y_state_2} \end{align} and \begin{align} L(\mathbf{s}|\boldsymbol{\theta}) & = \prod_{g=1}^L \prod_{l=1}^L \xi_{g,l}^{N_{gl}(\mathbf{s})}, \label{eq:likelihood_X_y_state_3} \end{align} with $N_{gl}(\mathbf{s}) = \# \lbrace \zeta_{t-1,g}=1, \zeta_{t,l}=1, \; t=1,\ldots,T \rbrace$, $g,l=1,\ldots,L$, with $\#$ the cardinality of a set. To make the likelihood more tractable, we further augment the data in two steps. 
First, we introduce the latent allocation variable for the mixture in eq.~\eqref{eq:model_all_xijkt}, $d_{ijk,t} \in \lbrace 0,1 \rbrace$, and obtain the conditional distribution \begin{align} p(x_{ijk,t}|d_{ijk,t},s_t=l,\mathcal{G}_l) = \big( \delta_{\lbrace 0 \rbrace}(x_{ijk,t}) \big)^{d_{ijk,t}} \frac{ \big( \exp(\mathbf{z}_t' \mathbf{g}_{ijk,l}) \big)^{x_{ijk,t}(1-d_{ijk,t})}}{ \big( 1+\exp( \mathbf{z}_t' \mathbf{g}_{ijk,l}) \big)^{(1-d_{ijk,t})}} \, , \label{eq:likelihood_cond_X_d} \end{align} and the marginal distribution \begin{equation} p(d_{ijk,t}|s_t=l,\rho_l) = \rho_l^{d_{ijk,t}} (1-\rho_l)^{1-d_{ijk,t}} \, . \label{eq:marginal_d} \end{equation} Second, we decompose the ratio in eq.~\eqref{eq:likelihood_cond_X_d} and obtain \begin{align} p(x_{ijk,t} & | d_{ijk,t},\omega_{ijk,t},s_t=l,\mathcal{G}_l) = \frac{\big( 2\delta_{\lbrace 0 \rbrace}(x_{ijk,t}) \big)^{d_{ijk,t}}}{2} \exp\Big( -\frac{\omega_{ijk,t}}{2}(\mathbf{z}_t' \mathbf{g}_{ijk,l})^2 +\kappa_{ijk,t}(\mathbf{z}_t' \mathbf{g}_{ijk,l}) \Big), \label{eq:likelihood_conditional_X_d_omega} \end{align} where $\kappa_{ijk,t} = (1-d_{ijk,t}) (x_{ijk,t} - 1/2)$ and $\omega_{ijk,t} \sim PG(1,0)$, with $PG(b,c)$ the P\'olya-Gamma distribution with parameters $b >0$ and $c \in \mathds{R}$ \cite[][Theorem 1]{Polsonetal13PolyaGamma}. 
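The P\'olya-Gamma variable admits the infinite convolution-of-gammas representation of \cite{Polsonetal13PolyaGamma}, $\omega \overset{d}{=} \frac{1}{2\pi^2}\sum_{k\geq 1} g_k \big/ \big((k-1/2)^2 + c^2/(4\pi^2)\big)$ with $g_k \sim \mathcal{G}a(b,1)$. A naive truncated sampler for $PG(1,c)$ is sketched below for illustration only; efficient exact samplers rely on an alternating-series method instead.

```python
import math
import random

def sample_pg_one(c, rng, trunc=100):
    """Approximate draw from PG(1, c) by truncating the infinite sum of
    gammas (for b = 1, g_k ~ Gamma(1, 1) = Exp(1)). Illustration only:
    exact samplers use an alternating-series rejection method."""
    shift = (c / (2.0 * math.pi)) ** 2
    total = 0.0
    for k in range(1, trunc + 1):
        total += rng.expovariate(1.0) / ((k - 0.5) ** 2 + shift)
    return total / (2.0 * math.pi ** 2)

rng = random.Random(0)
draws = [sample_pg_one(0.0, rng) for _ in range(5000)]
# E[PG(1, 0)] = 1/4, so the sample mean should be close to 0.25
```

The truncation error is controlled by the $1/(k-1/2)^2$ tail of the weights, so a few hundred terms already give a small bias.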
Defining $\mathcal{D} = \lbrace d_{ijk,t} \rbrace_{ijkt}$ and $\boldsymbol{\Omega} = \lbrace \omega_{ijk,t} \rbrace_{ijkt}$ and combining the previous steps one gets the complete data likelihood \begin{align} \notag & L(\boldsymbol{\mathcal{X}},\mathcal{D},\boldsymbol{\Omega},\mathbf{s}|\boldsymbol{\theta}) = \bigg( \prod_{t=1}^T \prod_{i=1}^I \prod_{j=1}^J \prod_{k=1}^K p(\omega_{ijk,t}) \bigg) \cdot \bigg( \prod_{g=1}^L \prod_{l=1}^L \xi_{g,l}^{N_{gl}(\mathbf{s})} \bigg) \\ & \quad \cdot \prod_{l=1}^L \prod_{t\in \mathcal{T}_l} \prod_{i=1}^I \prod_{j=1}^J \prod_{k=1}^K \bigg( \frac{2\rho_l \delta_{\lbrace 0 \rbrace}(x_{ijk,t})}{1-\rho_l} \bigg)^{d_{ijk,t}} \frac{1-\rho_l}{2} \exp\Big( -\dfrac{\omega_{ijk,t}}{2}(\mathbf{z}_t' \mathbf{g}_{ijk,l})^2 +\kappa_{ijk,t}(\mathbf{z}_t' \mathbf{g}_{ijk,l}) \Big). \label{eq:complete_likelihood_final} \end{align} In the following, we define $\mathcal{G} = \lbrace \mathcal{G}_l \rbrace_{l=1}^L$ and $\boldsymbol{\rho} = \lbrace \rho_l \rbrace_{l=1}^L$, and let $\mathbf{W}_l$ and $\mathbf{W}^{(r)}$ be the $(4\times R)$ and $(4\times L)$ matrices representing the $l$- and $r$-th slices of $\mathcal{W}$, along the third and second mode, respectively. The complete data likelihood and the prior distributions yield a posterior sampling scheme consisting of four blocks (see the supplement for the derivation of the posterior full conditional distributions). In block (I) the sampler draws the latent variables from the full conditional distribution: \begin{align} p(\mathbf{s},\mathcal{D},\boldsymbol{\Omega} | \boldsymbol{\mathcal{X}}, \mathcal{G}, \boldsymbol{\Xi}, \boldsymbol{\rho}) & = p( \mathbf{s} | \boldsymbol{\mathcal{X}}, \mathcal{G}, \boldsymbol{\Xi}, \boldsymbol{\rho}) p( \mathcal{D} | \boldsymbol{\mathcal{X}}, \mathcal{G}, \boldsymbol{\rho}, \mathbf{s}) p( \boldsymbol{\Omega} | \boldsymbol{\mathcal{X}}, \mathcal{G}, \boldsymbol{\rho}, \mathbf{s}). 
\end{align} Samples of $\mathbf{s}$ are drawn via the Forward Filter Backward Sampler (see ch.~13 of \cite{Fruhwirth06FiniteMixtures_MarkovSwitch_book}). The latent variables $\omega_{ijk,t}$ are sampled independently from \begin{equation} p( \omega_{ijk,t} | x_{ijk,t}, s_t, \mathcal{G}_{s_t}) \propto PG(1, \mathbf{z}_t' \mathbf{g}_{ijk,s_t}). \label{eq:posterior_omega} \end{equation} In practice, the $\omega_{ijk,t}$ are sampled jointly in one block for each $t$. The latent variables $d_{ijk,t}$ are sampled independently from \begin{equation} \begin{split} p( d_{ijk,t}=1 | x_{ijk,t}, s_t, \mathcal{G}_{s_t}, \rho_{s_t}) & \propto \rho_{s_t} \delta_{\lbrace 0 \rbrace}(x_{ijk,t}) \\ p( d_{ijk,t}=0 | x_{ijk,t}, s_t, \mathcal{G}_{s_t}, \rho_{s_t}) & \propto (1-\rho_{s_t}) \frac{\exp( (\mathbf{z}_t' \mathbf{g}_{ijk,s_t})x_{ijk,t})}{1+\exp( \mathbf{z}_t' \mathbf{g}_{ijk,s_t} )} \, . \end{split} \label{eq:posterior_d} \end{equation} The hyperparameters which control the variance of the PARAFAC marginals are sampled in block (II) from the full conditional distribution \begin{equation} p(\tau, \boldsymbol{\phi}, \mathcal{W} | \lbrace \boldsymbol{\gamma}_{h,l}^{(r)} \rbrace_{h,l,r}) = p( \boldsymbol{\phi} | \lbrace \boldsymbol{\gamma}_{h,l}^{(r)} \rbrace_{h,l,r}, \mathcal{W}) p( \tau | \lbrace \boldsymbol{\gamma}_{h,l}^{(r)} \rbrace_{h,l,r}, \mathcal{W}, \boldsymbol{\phi}) p( \mathcal{W} | \lbrace \boldsymbol{\gamma}_{h,l}^{(r)} \rbrace_{h,l,r}, \boldsymbol{\phi}, \tau). \end{equation} We enable better mixing by blocking together the parameters $\boldsymbol{\phi}$. 
We set $\phi_r = \psi_r / (\psi_1 + \ldots + \psi_R)$, where the auxiliary variables $\psi_r$ are sampled independently for each $r$ from \begin{equation} p( \psi_r | \lbrace \boldsymbol{\gamma}_{h,l}^{(r)} \rbrace_{h,l}, \mathbf{W}^{(r)}) \propto \text{GiG} \Big( 2\bar{b}^\tau, \sum_{h=1}^4 \sum_{l=1}^L \frac{\boldsymbol{\gamma}_{h,l}^{(r)\prime} \boldsymbol{\gamma}_{h,l}^{(r)}}{w_{h,r,l}}, \bar{\alpha}-n \Big), \label{eq:posterior_psir} \end{equation} where $\textnormal{GiG}(a,b,p)$ is the Generalized Inverse Gaussian distribution with parameters $p \in \mathds{R}$, $a >0$ and $b >0$, and $n=\sum_{h=1}^4 n_h$. The global variance parameter $\tau$ is drawn from \begin{equation} p( \tau | \lbrace \boldsymbol{\gamma}_{h,l}^{(r)} \rbrace_{h,l,r}, \mathcal{W}, \boldsymbol{\phi}) \propto \text{GiG} \Big( 2\bar{b}^\tau, \sum_{r=1}^R \sum_{h=1}^4 \sum_{l=1}^L \frac{\boldsymbol{\gamma}_{h,l}^{(r)\prime} \boldsymbol{\gamma}_{h,l}^{(r)}}{\phi_r w_{h,r,l}}, (\bar{\alpha}-n)R \Big). \label{eq:posterior_tau} \end{equation} The local variance parameters $w_{h,r,l}$ are independently drawn from \begin{equation} p( w_{h,r,l} | \boldsymbol{\gamma}_{h,l}^{(r)}, \phi_r, \tau, \lambda_l) \propto \text{GiG} \Big( \lambda_l^2, \frac{\boldsymbol{\gamma}_{h,l}^{(r)\prime} \boldsymbol{\gamma}_{h,l}^{(r)}}{\tau \phi_r}, 1-\frac{n_h}{2} \Big). \label{eq:posterior_w} \end{equation} Finally, the hyperparameters $\lambda_l$ are independently drawn from \begin{equation} p( \lambda_l | \mathbf{W}_l ) \propto \lambda_l^{\bar{a}_l^\lambda +8R -1} \exp\Big( -\lambda_l \bar{b}_l^\lambda -\frac{\lambda_l^2}{2}\sum_{r=1}^R \sum_{h=1}^4 w_{h,r,l} \Big). \label{eq:posterior_lambda} \end{equation} Block (III) concerns the marginals of the PARAFAC decomposition for the tensors $\mathcal{G}_l$.
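A minimal sketch of the normalisation step for $\boldsymbol{\phi}$: each auxiliary $\psi_r$ is drawn from its GiG full conditional and the draws are rescaled onto the simplex. We assume the common convention $\text{GiG}(a,b,p)$ with density proportional to $x^{p-1}\exp(-(ax+b/x)/2)$ and map it onto SciPy's `geninvgauss` (which uses a scale form); the hyperparameter values and the rank $R=3$ below are hypothetical.

```python
import numpy as np
from scipy.stats import geninvgauss

rng = np.random.default_rng(1)

def sample_gig(a, b, p, rng=rng):
    """Draw from GiG(a, b, p) with density propto x^{p-1} exp(-(a x + b / x) / 2).
    SciPy's geninvgauss(p, c, scale=s) has density propto
    (x/s)^{p-1} exp(-c ((x/s) + (s/x)) / 2), so we match parameters via
    s = sqrt(b / a) and c = sqrt(a * b)."""
    scale = np.sqrt(b / a)
    c = np.sqrt(a * b)
    return geninvgauss.rvs(p, c, scale=scale, random_state=rng)

R = 3                               # hypothetical tensor rank
a_r = 2.0                           # plays the role of 2 * bbar^tau
b_r = np.array([1.5, 0.8, 2.2])     # quadratic forms sum_{h,l} gamma'gamma / w
p_r = -1.0                          # plays the role of alphabar - n

psi = np.array([sample_gig(a_r, b, p_r) for b in b_r])  # auxiliary draws
phi = psi / psi.sum()               # normalise onto the simplex
```

The normalisation ensures $\phi_1+\ldots+\phi_R=1$ while each $\psi_r$ can be updated with a standard univariate GiG draw.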
The vectors $\boldsymbol{\gamma}_{h,l}^{(r)}$ are sampled independently from \begin{equation} p(\boldsymbol{\gamma}_{h,l}^{(r)} | \boldsymbol{\mathcal{X}}, \mathcal{W}, \boldsymbol{\phi}, \tau, \mathbf{s}, \mathcal{D}, \boldsymbol{\Omega}) \propto \mathcal{N}_{n_h}\Big( \tilde{\boldsymbol{\zeta}}_{h,l}^r, \tilde{\boldsymbol{\Lambda}}_{h,l}^r \Big). \label{eq:posterior_gamma} \end{equation} Finally, in block (IV) the mixing probability $\rho_l$ and the row $\boldsymbol{\xi}_l$ of the transition matrix $\boldsymbol{\Xi}$ are drawn from \begin{align} \label{eq:posterior_rho} p(\rho_l | \mathcal{D}, \mathbf{s}) & \propto \mathcal{B}e(\tilde{a}_l^\rho, \tilde{b}_l^\rho), \\ \label{eq:posterior_xi} p(\boldsymbol{\xi}_l|\mathbf{s}) & \propto \mathcal{D}ir(\tilde{\mathbf{c}}). \end{align} Blocks (I) and (II) are Rao-Blackwellized Gibbs steps: in block (I) we have marginalised over both $(\mathcal{D},\boldsymbol{\Omega})$ in the full conditional distribution of the state $\mathbf{s}$, and over $\mathcal{D}$ (together with $\boldsymbol{\rho}$) in the full conditional of $\boldsymbol{\Omega}$, while in (II) we have integrated out $\tau$ from the full conditional of $\boldsymbol{\phi}$. The derivation of the full conditional distributions is given in \autoref{sec:apdx_proofs}. The supplement provides details on the Gibbs sampler and the results of a simulation study in which we show the efficiency of the proposed MCMC and its effectiveness in recovering the true values of the latent Markov chain and of the parameters. \section{Empirical Application} \label{sec:application} We apply the proposed methodology to temporal financial networks for European institutions obtained as in~\cite{Billioetal12GrangerNet}. The application is appealing since there are few empirical studies on this dataset and, to the best of our knowledge, none of them considers a dynamic network model.
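Block (IV) consists of standard conjugate updates. The following sketch uses hypothetical prior hyperparameters and hypothetical sufficient statistics (zero-inflation counts and transition counts computed from $(\mathcal{D},\mathbf{s})$):

```python
import numpy as np

rng = np.random.default_rng(2)

# hypothetical prior hyperparameters and sufficient statistics for regime l
a_rho, b_rho = 1.0, 1.0        # Beta prior for rho_l
n_d1, n_d0 = 40, 60            # counts of d_{ijk,t} = 1 / = 0 while s_t = l
c_bar = np.ones(2)             # Dirichlet prior for row l of Xi (L = 2 regimes)
N_trans = np.array([12, 3])    # transition counts N_{l,1}(s), N_{l,2}(s)

# conjugate full-conditional draws for block (IV)
rho_l = rng.beta(a_rho + n_d1, b_rho + n_d0)     # eq. (posterior_rho)
xi_l = rng.dirichlet(c_bar + N_trans)            # eq. (posterior_xi)
```

Both draws are exact Gibbs steps: the Beta and Dirichlet priors are conjugate to, respectively, the Bernoulli zero-inflation indicators and the Markov-chain transition counts.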
The dataset consists of $110$ binary, directed networks estimated at the monthly frequency, from December 2003 to January 2013, by Granger causality\footnote{See e.g. \cite{Swanson97IRF_Granger_causal}, \cite{Boudjellaba92Gramger_causal_test}. We define a binary adjacency matrix for each month by setting an entry to 1 only if the corresponding Granger-causality link existed for the whole month (i.e. for each trading day of the corresponding month), and setting the entry to 0 otherwise.}, where the nodes are $61$ European financial institutions ($25$ banks, $11$ insurance companies and $25$ investment companies, in this order). An entry $x_{ij,t} = 1$ represents a Granger-causal link from institution $i$ to institution $j$ at time $t$. The most striking features of the data are time-varying sparsity (see Fig.~\ref{fig:data_graphs}) and temporal clustering of sparse and dense network topologies (see the supplement for a representation of the temporal network dataset). \begin{figure}[H] \setlength{\abovecaptionskip}{2pt} \centering \includegraphics[trim=12mm 17mm 12mm 17mm,clip,height= 4.0cm, width= 4.0cm]{net_med.eps} \qquad \includegraphics[trim=12mm 17mm 12mm 17mm,clip,height= 4.0cm, width= 4.0cm]{net_sparse.eps} \qquad \includegraphics[trim=12mm 17mm 12mm 17mm,clip,height= 4.0cm, width= 4.0cm]{net_dense.eps} \caption{Graphical representation of networks at time $t=25$ (Dec 2005), $t=43$ (Jul 2007) and $t=69$ (Aug 2009), respectively. Node size is proportional to the node's total degree. Edge $(i,j)$ is clockwise oriented when $i$ Granger causes $j$.} \label{fig:data_graphs} \end{figure} The set of covariates $\mathbf{z}_t$ used to explain each edge's probability includes a constant term and some risk factors usually employed in empirical finance: the monthly change of the VSTOXX index (DVX), the monthly log-returns on the STOXX50 index (STX), the credit spread (CRS), the term spread (TRS) and the momentum factor (MOM).
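The monthly adjacency construction described in the footnote above (an edge is set to 1 only if the daily Granger-causality link persists over every trading day of the month) can be sketched as follows. The daily indicator array is simulated, and a small $N$ is used for illustration (the application has $N=61$ institutions):

```python
import numpy as np

rng = np.random.default_rng(3)

# hypothetical daily Granger-causality indicators: (days, N, N) binary array
days, N = 21, 5                        # one trading month, 5 institutions
daily = rng.random((days, N, N)) < 0.9

# monthly edge (i, j) = 1 only if the link existed on EVERY trading day
monthly = daily.all(axis=0).astype(int)
np.fill_diagonal(monthly, 0)           # no self-loops
```

Requiring persistence over the whole month filters out short-lived links and contributes to the sparsity of the resulting monthly networks.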
In addition, we include a connectedness risk measure to account for the persistence of financial linkages: the network total degree (DTD). All covariates have been standardised and included with one lag, except DVX which is contemporaneous, following the standard practice (e.g., see \cite{Majewski15OptionPricing_VolatilityLeverage}). We estimate the model in eq.~\eqref{eq:model_compact_first} with tensor rank $R=5$ and $L=2$ regimes, and use the Gibbs sampler of \autoref{sec:posterior_approx} to obtain 5,000 draws from the posterior, after thinning and burn-in. See the supplement for details about the initialization. For comparison purposes, we estimate a restricted model which does not allow for heterogeneous effects of the covariates within each regime. The model is obtained by pooling the parameters across edges for each covariate, $\mathbf{g}_{ijk,l} = \mathbf{g}_l \in \mathds{R}^Q$, for each $i,j,k,l$, and by assuming the prior distributions (see \autoref{sec:apdx_pooled} and supplement for posteriors) \begin{align*} \mathbf{g}_l | \tau,w_l \sim \mathcal{N}_Q(\bar{\zeta}_l,\tau w_l \mathbf{I}_Q), \quad w_l|\lambda_l \sim \mathcal{E}xp(\lambda_l^2/2), \quad \lambda_l \sim \mathcal{G}a(\bar{a}_l^\lambda,\bar{b}_l^\lambda), \quad \tau \sim \mathcal{G}a(\bar{a}^\tau,\bar{b}^\tau). \end{align*} In both models the identification constraint $\rho_1 > \rho_2$ allows us to label states 1 and 2 as the sparse and dense regime, respectively. \begin{figure}[t!h] \setlength{\abovecaptionskip}{-1pt} \centering \includegraphics[trim= 10mm 0mm 17mm 5mm,clip,height= 4.0cm, width= 9.0cm]{degree_st_shaded.eps} \includegraphics[trim= 0mm 0mm 10mm 10mm,clip,height= 4.0cm, width= 3.5cm]{rho.eps} \includegraphics[trim= 18mm 0mm 25mm 10mm,clip,height= 4.0cm, width= 3.5cm]{distrTens12.eps} \caption{In all plots light (dark) grey identifies the dense (sparse) regime.
\textit{Left:} total degree of the temporal network (\textit{line}) and estimated regimes (\textit{vertical bars}) over time (format \textit{mm-yy}). \textit{Middle:} posterior distribution of the sparsity parameters $\rho_1$ and $\rho_2$. \textit{Right:} distribution of the entries of the estimated coefficient tensor.} \label{fig:degree_states} \end{figure} \begin{sidewaystable}[h!p] \captionsetup{width=0.9\linewidth} \begin{tabular}{c c c c c c c} & {\small DTD} & {\small DVX} & {\small STX} & {\small CRS} & {\small TRS} & {\small MOM} \\ \begin{rotate}{90} \hspace*{10pt} {\small sparse regime} \end{rotate} & \includegraphics[trim= 0mm 0mm 20mm 0mm,clip,height= 3cm, width= 2.8cm]{ten1q2.eps} & \includegraphics[trim= 0mm 0mm 20mm 0mm,clip,height= 3cm, width= 2.8cm]{ten1q3.eps} & \includegraphics[trim= 0mm 0mm 20mm 0mm,clip,height= 3cm, width= 2.8cm]{ten1q4.eps} & \includegraphics[trim= 0mm 0mm 20mm 0mm,clip,height= 3cm, width= 2.8cm]{ten1q5.eps} & \includegraphics[trim= 0mm 0mm 20mm 0mm,clip,height= 3cm, width= 2.8cm]{ten1q6.eps} & \includegraphics[trim= 0mm 0mm 20mm 0mm,clip,height= 3cm, width= 2.8cm]{ten1q7.eps} \\ \begin{rotate}{90} \hspace*{10pt} {\small dense regime} \end{rotate} & \includegraphics[trim= 0mm 0mm 20mm 0mm,clip,height= 3cm, width= 2.8cm]{ten2q2.eps} & \includegraphics[trim= 0mm 0mm 20mm 0mm,clip,height= 3cm, width= 2.8cm]{ten2q3.eps} & \includegraphics[trim= 0mm 0mm 20mm 0mm,clip,height= 3cm, width= 2.8cm]{ten2q4.eps} & \includegraphics[trim= 0mm 0mm 20mm 0mm,clip,height= 3cm, width= 2.8cm]{ten2q5.eps} & \includegraphics[trim= 0mm 0mm 20mm 0mm,clip,height= 3cm, width= 2.8cm]{ten2q6.eps} & \includegraphics[trim= 0mm 0mm 20mm 0mm,clip,height= 3cm, width= 2.8cm]{ten2q7.eps} \end{tabular} \captionof{figure}{Posterior mean of the coefficient tensor, in matricised form, in the sparse (\textit{top}) and dense (\textit{bottom}) state of the hidden Markov chain. 
In each plot, entry $(i,j)$ represents the effect of the covariate reported in column on the probability of observing the edge between institution $i$ and institution $j$. Black lines separate groups of institutions: banks ($i$ and $j$ in $\{1,\ldots,25\}$), insurance ($\{26,\ldots,36\}$) and investment companies ($\{37,\ldots,61\}$). Same color scale, with red, blue and white colors indicating positive, negative and zero valued coefficients, respectively.} \label{fig:matTens} \end{sidewaystable} \begin{sidewaystable}[h!p] \captionsetup{width=0.95\linewidth} \setlength{\abovecaptionskip}{1pt} \begin{tabular}{c c c c c c c} & {\small DTD} & {\small DVX} & {\small STX} & {\small CRS} & {\small TRS} & {\small MOM} \\[-2pt] \begin{rotate}{90} \hspace*{10pt} {\small sparse regime} \end{rotate} & \includegraphics[trim= 6mm 0mm 15mm 0mm,clip,height= 3.0cm, width= 2.8cm]{S1inDeg2.eps} & \includegraphics[trim= 6mm 0mm 15mm 0mm,clip,height= 3.0cm, width= 2.8cm]{S1inDeg3.eps} & \includegraphics[trim= 6mm 0mm 15mm 0mm,clip,height= 3.0cm, width= 2.8cm]{S1inDeg4.eps} & \includegraphics[trim= 6mm 0mm 15mm 0mm,clip,height= 3.0cm, width= 2.8cm]{S1inDeg5.eps} & \includegraphics[trim= 6mm 0mm 15mm 0mm,clip,height= 3.0cm, width= 2.8cm]{S1inDeg6.eps} & \includegraphics[trim= 6mm 0mm 15mm 0mm,clip,height= 3.0cm, width= 2.8cm]{S1inDeg7.eps} \\[-10pt] \begin{rotate}{90} \hspace*{10pt} {\small dense regime} \end{rotate} & \includegraphics[trim= 6mm 0mm 15mm 0mm,clip,height= 3.0cm, width= 2.8cm]{S2inDeg2.eps} & \includegraphics[trim= 6mm 0mm 15mm 0mm,clip,height= 3.0cm, width= 2.8cm]{S2inDeg3.eps} & \includegraphics[trim= 6mm 0mm 15mm 0mm,clip,height= 3.0cm, width= 2.8cm]{S2inDeg4.eps} & \includegraphics[trim= 6mm 0mm 15mm 0mm,clip,height= 3.0cm, width= 2.8cm]{S2inDeg5.eps} & \includegraphics[trim= 6mm 0mm 15mm 0mm,clip,height= 3.0cm, width= 2.8cm]{S2inDeg6.eps} & \includegraphics[trim= 6mm 0mm 15mm 0mm,clip,height= 3.0cm, width= 2.8cm]{S2inDeg7.eps} \end{tabular} 
\captionof{figure}{Covariate coefficients (columns) for the incoming edge probabilities in the two regimes (rows). In each scatterplot: total node degree averaged over time within each regime (horizontal axis) versus the sum of the negative (blue) and positive (red) node coefficients of a given variable (vertical axis). Nodes: banks ($\textcolor{red}{\blacktriangle},\textcolor{blue}{\blacktriangle}$), insurance companies ($\textcolor{red}{\square},\textcolor{blue}{\square}$) and investment companies ($\textcolor{red}{\star},\textcolor{blue}{\star}$). Dashed line: the sum of the coefficients for the pooled model.} \label{fig:tens_IN} \end{sidewaystable} \begin{figure}[t!h] \centering \captionsetup{width=0.95\linewidth} \setlength{\abovecaptionskip}{1pt} \begin{tabular}{c c c c} & {\footnotesize In/Out banks connections} & {\footnotesize In/Out insurance connections} & {\footnotesize In/Out investment connections} \\[4pt] \begin{rotate}{90} \hspace*{40pt} {\large $\substack{\text{between groups}\\ \text{(edge threshold)}}$} \end{rotate} & \includegraphics[trim= 10mm 20mm 10mm 28mm,clip,height= 5.0cm,width= 5.0cm]{TRS2_bank_all_edge040.eps} & \includegraphics[trim= 10mm 25mm 10mm 28mm,clip,height= 5.0cm,width= 5.0cm]{TRS2_insurance_all_edge040.eps} & \includegraphics[trim= 10mm 20mm 10mm 28mm,clip,height= 5.0cm,width= 5.0cm]{TRS2_investment_all_edge040.eps} \\ \begin{rotate}{90} \hspace*{5pt} {\large $\substack{\text{within groups}\\ \text{(central institution)}}$} \end{rotate} & \includegraphics[trim= 170mm 140mm 10mm 28mm,clip,height= 3.5cm,width= 3.5cm]{TRS2_bank.eps} & \includegraphics[trim= 110mm 5mm 90mm 240mm,clip,height= 2.3cm,width= 3.5cm]{TRS2_insurance.eps} & \includegraphics[trim= 10mm 130mm 190mm 30mm,clip,height= 3.5cm,width= 3.5cm]{TRS2_investment.eps} \\ \begin{rotate}{90} \hspace*{30pt} {\large $\substack{\text{between groups}\\ \text{(central institution)}}$} \end{rotate} & \includegraphics[trim= 10mm 20mm 10mm 28mm,clip,height= 5.0cm,width= 
5.0cm]{TRS2_bank_all_central.eps} & \includegraphics[trim= 10mm 25mm 10mm 28mm,clip,height= 5.0cm,width= 5.0cm]{TRS2_insurance_all_central.eps} & \includegraphics[trim= 10mm 20mm 10mm 28mm,clip,height= 5.0cm,width= 5.0cm]{TRS2_investment_all_central.eps} \end{tabular} \captionof{figure}{TRS coefficients in the dense regime. In the columns the effect of TRS on the edges from and to a specific group of nodes: banks (purple), insurance (green), investment companies (orange). In the rows the effects of TRS on between and within groups connectivity, filtering relevant effects (first row) and central institutions (second and third row). Node size: proportional to the total degree averaged over time within each regime. Edge color: blue for negative, red for positive. We show only edges with significant TRS coefficient.} \label{fig:tens_IN_TRS_th10} \end{figure} The estimated regimes are given in the left plot of Fig.~\ref{fig:degree_states} together with the network total degree. The identification constraint permits us to recognise low and high connectedness periods and is strongly supported by the data, since the posterior distributions are well separated (middle plot). The distribution of the estimated coefficients in the two regimes (right plot) highlights the higher heterogeneity across edges in the dense regime. The unrestricted tensor model captures the edge-specific impact of each risk factor (different colors in each plot of Fig.~\ref{fig:matTens}) as opposed to the pooled model (see Fig.~\ref{S_fig:app_pool_tensor} in the supplement), and allows us to provide new insights on the dynamic relationship among financial institutions and risk factors. In the dense regime, we find that the credit spread positively affects the probability of being connected to banks from all institutions, and has a negative impact on the edge probabilities among investment companies.
The term spread has a strong positive effect on connecting to insurance and investment companies, and from banks to insurance companies. Similarly, the stock index return positively affects the edge probability from insurance and investment companies to banks. We also find that the autoregressive term has an average positive effect, which might account for either connectedness risk persistence or spurious autocorrelation due to the network estimation step. In the sparse regime (first row of Fig.~\ref{fig:matTens}) there is no evidence of an impact for almost all covariates. This is most striking for CRS, TRS and DTD, which are the most relevant predictors in the dense state. This finding supports the stylised fact that the risk factors have higher explanatory power in periods of higher connectivity of the financial network (\cite{Billioetal12GrangerNet}). Fig.~\ref{fig:tens_IN} allows us to detect potential relationships between covariate effects and node degree centrality. In particular, we find evidence of a positive relationship (in absolute value) for DTD, CRS, TRS and MOM. In the sparse regime, all institutions feature low average degree and there is evidence of a weaker relationship for CRS and TRS, and of a negative impact for MOM. In the dense regime, the most central institutions (banks and insurance companies) are the most affected both in terms of the number of connections and the risk factor impact (see the top- and bottom-right part of the scatterplots in Fig.~\ref{fig:tens_IN}). Furthermore, according to the estimated regimes, the most central institutions differ between regimes (see node size in Fig.~\ref{fig:tens_IN_TRS_th10}), with banks being the most connected in both states. We focus on the term spread factor, since it is a key variable for monetary policy analysis. The unrestricted tensor model provides interesting results on the effect of term spread on the different types of institutions, especially in the dense regime.
We disentangle the relationship among institutions by highlighting the most affected linkages (first row of Fig.~\ref{fig:tens_IN_TRS_th10}) and the impact on all the linkages of the most central nodes (second and third row). We find that the term spread mostly increases the edge probability from banks and the most central insurance company to investment companies. There is no evidence of a relevant impact on linkages between banks and insurance companies, which are strongly affected by the credit spread (see the supplement for further results). Finally, the effect of the term spread is larger for between-group connectivity than for within-group connectivity. Most of the edges of the central investment company and bank are negatively affected by the term spread (left and right plots), whereas the connectivity of the central insurance company increases with the term spread (middle plot). \section{Summary and Concluding Remarks} \label{sec:conclusions} We present a new zero-inflated logit regression for time series of binary tensors, such as the connectivity tensors encoding the dependence structure of multilayer networks. The mixing probability allows us to capture the sparsity pattern in the data, and a set of coefficient tensors captures the effect of the covariates on each binary observation. We propose a parsimonious parametrization based on the PARAFAC decomposition of the coefficient tensor and allow the regression parameters to switch between multiple regimes in order to capture the time-varying sparsity patterns. We adopt the Bayesian paradigm in the inferential process and develop an efficient Gibbs sampler for posterior approximation. We analyze a real dataset of time-varying networks among European financial institutions. There is strong evidence of heterogeneous effects of the covariates across edges and regimes, with the term spread and credit spread factors playing an important role in explaining the connectivity of central institutions.
Our new empirical results can give interesting insights to policy makers for financial stability and risk monitoring. \section*{Supplementary Materials} Background material on tensors, the derivation of the posterior, simulation experiments and the description of the data are given in an online supplement\footnote{\url{https://matteoiacopini.github.io/docs/BiCaIa_Supplement.pdf}}. \bibliographystyle{plain}
\section*{Annotated Bibliography} \noindent \textsc{J.~Barwise,} ed. {\it Handbook of Mathematical Logic,} North-Holland, 1977. Part B, on set theory, has many consistency and independence results, including applications to topology. The article by J.~P.~Burgess in Part B gives a fine introduction to forcing. The article by Smorynski on G\"odel's Incompleteness Theorems is one of the very few treatments I have seen that does not have holes plugged only by hand-waving. After the clearest treatment I have ever seen of the Second Incompleteness Theorem, he even points out, ``In Section 2.1, we have been guilty of cheating in two places'' and then goes on to make the necessary repairs. There is also a significant article by Harrington about an incompleteness in Peano Arithmetic at the end. \vskip\baselineskip \noindent \textsc{F.~Browder,} ed. {\it Mathematical Developments Arising from Hilbert Problems,} Proceedings of Symposia in Pure Mathematics XXVIII, American Mathematical Society, 1974. Includes a reprint of the English translation of Hilbert's article. The article on Hilbert's first problem, by D.~A.~Martin, expounds on the significance of consistency and independence proofs, and of large cardinal axioms. There are articles on the second problem by Kreisel and on the tenth by the co-solvers, Martin Davis (see reference to an article by him below), Yuri Matijasevic, and Julia Robinson. A quote from their article: ``The consistency of a recursively axiomatizable theory is equivalent to the assertion that some definite Diophantine equation has no solutions.'' \vskip\baselineskip \noindent \textsc{H.G.~Dales and W.H.~Woodin,} {\it An Introduction to Independence for Analysts,} Cambridge University Press, 1987. An eloquent preface introduces a self-contained treatment of the set-theoretic independence of a basic problem in functional analysis: If $X$ is compact, Hausdorff and infinite, is every homomorphism from $C(X, \mathbb{C})$ into any Banach algebra continuous? 
Answer: No if \textsf{CH} for {\it every} such infinite $X$, but it is also consistent that the answer is Yes for {\it every} such $X$! \vskip\baselineskip \noindent \textsc{M.~Davis,} {\it Hilbert's Tenth Problem is Unsolvable,} Amer.~Math.~Monthly {\bf 80} (1973) 233--269. A spellbinding exposition with complete proofs, not merely of the tenth problem but about how its solution impacts the foundations of mathematics in completely unexpected ways. Included is a very concrete treatment of G\"odel's First Incompleteness Theorem in terms of Diophantine equations. If we ever contact an extraterrestrial intelligence and want to impress it with what human beings are capable of, this would be the article I'd recommend to be transmitted to them. \vskip\baselineskip \noindent \textsc{P. Eklof,} {\it Whitehead's problem is undecidable,} Amer.~Math.~Monthly {\bf 83} (1976) 775--788. The set-theoretic independence of the problem of whether every Whitehead group is free. \vskip\baselineskip \noindent \textsc{D.H.~Fremlin,} {\it Consequences of Martin's Axiom,} Cambridge University Press, 1984. Includes many applications to topology, measure theory, and algebra of Martin's Axiom and the negation of \textsf{CH}, as well as of some weaker axioms which also deny \textsf{CH}. \vskip\baselineskip \noindent \textsc{K.~G\"odel,} {\it The Consistency of the Continuum Hypothesis,} Ann.~Math.~Studies no.~3, Princeton University Press, 1940.\\ {\it What is Cantor's continuum problem?} Amer.~Math.~Monthly {\bf 54} (1947) 515--525. The first article gives the proof of its main results in full; the second explains, {\it inter alia}, why G\"odel believed the Continuum Hypothesis to be ``dubious'' in spite of its consistency. \vskip\baselineskip \noindent \textsc{A.~Kanamori and M.~Magidor,} {\it The evolution of large cardinal axioms in set theory,} pp.~99--275 in: {\it Higher Set Theory\/}, G.~H.~Muller and D.~S.~Scott, eds., Lecture Notes in Math. no.~669, Springer-Verlag, 1978. 
A dramatic article on large cardinal axioms with a wealth of information and proofs. \vskip\baselineskip \noindent \textsc{K.~Kunen,} {\it Set Theory: An Introduction to Independence Proofs}, North-Holland, 1980. Together with Burgess's article referenced above, this provides a fine understanding of how forcing is done and why its results are consistent with \textsf{ZFC}. \vskip\baselineskip \noindent \textsc{K.~Kunen and J.~Vaughan,} eds. {\it Handbook of Set-Theoretic Topology}, North-Holland, 1984. Still the most comprehensive single source of information about the subject. \vskip\baselineskip \noindent \textsc{P.~Maddy,} {\it Believing the axioms. I\/} and {\it Believing the axioms. II,\/} J.~Symbolic Logic {\bf 53} (1988) 481--511 and 736--764. A highly readable pair of articles in which a philosopher looks at \textsf{CH} and at large cardinal axioms, and reasons for believing or disbelieving them. \vskip\baselineskip \noindent \textsc{J.D.~Monk,} {\it Cardinal Invariants on Boolean algebras,} Birkh\"auser Verlag, 1996. Contains many consistency and independence results. \vskip\baselineskip \noindent \textsc{P.~Nyikos,} untitled review, J.~Symbolic Logic {\bf 57} (1992) 763--766. A review of seven papers authored or co-authored by Andreas Blass, giving applications of forcing to algebra, analysis, and topology. \vskip\baselineskip \noindent \textsc{J.~Roitman,} {\it The uses of set theory,} Math.~Intelligencer {\bf 14} (1) (1992) 63--69. An entertaining and informative article which pointedly omits all applications to general topology, Boolean algebra, Whitehead groups, and measure theory, in order to better make the point that set-theoretic consistency results and specialized set-theoretic techniques are useful in unexpected places in mathematics. \vskip\baselineskip \noindent \textsc{M.E.~Rudin,} {\it Lectures on Set Theoretic Topology,} American Mathematical Society, 1975. 
This booklet made it clear how profoundly general topology was being remade by set-theoretic consistency and independence results. \vskip\baselineskip \noindent \textsc{S.G. Simpson,} {\it Partial Realizations of Hilbert's Program,} J. Symbolic Logic {\bf 53} (1988) 349--363. After the eloquent words at the beginning which I quoted, Simpson explains Hilbert's Program for salvaging the foundations of mathematics, and goes on to show how, despite G\"odel's negative solution of Hilbert's Second Problem, a lot can be done in this direction. In particular, he recounts how a lot of familiar results in analysis can be proven in a system called \textsf{WKL}${}_\mathsf{0}$, and anything that can be proven in this system is finitistically reducible in the way Hilbert envisioned. There is also a nice introduction to the field of reverse mathematics, which deals with the general question: given a theorem in mathematics, which set existence axioms are required to prove it? \end{document}
\section{Introduction: ethical care to creativity and intent} The traditional role of automation in society served to make human lives easier by outsourcing mundane tasks. Recommender systems, for example, utilize language models to engage users in predictive text systems. However, much criticism has fallen on this medium as it alters the way people write. These systems have been found to make people “machine-like” – which is evident given their intention \cite{varshney_autonomy}. This prompts ethical care on the implementation of automation within attributes that characterize humanity – one of which is creativity. In psychoanalysis, creativity serves as the expressive element or natural human impulse that drives the artistic experience \cite{zweig_struggle}. It is what drives surprise within viewers for pushing the boundary of what is deemed to be the experience of reality. AI Art falls under criticism for automating this very process. For instance, as an agent of play to enact creativity, GANs are utilized as a black box for providing artistic results, where the feedback loop is based on the artist's alteration of the algorithm upon interpretation of results \cite{gans}. Unlike creation where artists decide meaning and form in process, AI Art limits artistic autonomy by basing the artist’s process upon output i.e. generating multiple sessions of training and determining the artwork based on generated artifacts (current exceptions go to CLIP with in-training modifications \cite{radford_clip}). With regards to intent, GANs were originally focused on improving quality, stability, and variation \cite{radford_unsupervised} in order to implement the style transfer of the input image. Since then, they have evolved from representation to visually indeterminate artifacts to create an AI Art identity \cite{hertzmann_indeterminate}.
However, the implementation of this medium still surrenders the creative process as the artifact's intent does not address the fundamental loss in autonomy that occurs within automation \cite{mccormack_monash}. In June 2021, a discussion series on AI research and social responsibility, titled Post-Human Creativity: The Use of AI in Art, featured artists who emphasized the need to strengthen "interactions between humans and machines... instead of making technology more human" as to preserve "meaningful interactions with algorithms and push the boundaries of creative processes." With the concerns for AI’s role in art in mind, we consider the ethical implications to the artist’s creative autonomy via principles in self-determination theory and intent via fundamental limits of creativity. \section{Defining creativity: self-determination theory} Self-determination theory suggests that people are motivated to grow and change by three innate and universal psychological needs: autonomy, relatedness and competence \cite{ryan_self-determination}. Autonomy, or regulation by the self, is a phenomenon that parallels other aspects of existence such as will, choice, and freedom. It is further divided into liberty (independence from controlling principles) and agency (capacity for intentional action) \cite{define_autonomy}. We consider the limitation of AI Art in satisfying liberty by considering emotion-based art. In the style transfer of AI Art, artists often use forms that acquire a sense of talent, such as impressionism, to replicate the delicacy of the form’s timeless novelty. However, in other forms such as Abstract expressionism, it is the human element of the artist that drives the form. Abstraction took time to develop appreciation due to its neglect of traditional talent \cite{abstract_resistance}, let alone expressionism, which is expressive of the artist's inner feelings \cite{expressionism}.
This highlights the point that creativity stems from the inner spark, or according to the psychoanalyst Carl Jung, "not accomplished by intellect but by play" or to a larger extent the "daimon of creativity" \cite{jung_symbols_1977}. As such, Abstract expressionism is rooted in the creativity that springs from the artist at the moment of creation. This moment, much like the deep immersion that comes with it, is encouraged and developed by a constant interaction that need not be interrupted, regulated, or automated \cite{diamond_anger_1996}. Hence, if one were to create AI Art based on Abstract expressionism, such as Jackson Pollock's action painting \cite{pollock}, then the result would lose its creative autonomy because of artistic interruption during the surrender of process to AI, as well as its core essence in conveying the emotion of the artist. \section{Defining intention: fundamental limits of creativity} Intentionality is the inspiration or desire to express the human intent \cite{collingwood_intention}. The capacity for this action is captured by the need for agency in autonomy. Fundamental limits of creativity have detailed a limit theorem whereby a tradeoff between novelty and quality for a given creative domain exists \cite{lav_math}. To consider a limit theorem for creativity with intentionality, Shannon’s capacity-cost-function formalism, which captures limits of reliable communication, is modified to address the semantic problem of creativity. Incorporating intention, semantic creativity shows that requiring communicative intent may reduce the quality and/or novelty of creative artifacts that are generated \cite{varshney_intentionality}. This inverse relationship between intent and novelty is paralleled by examples in Dada art, such as Duchamp's fountain, that, despite the utmost intent, garnered controversy on the novelty of artistic creation \cite{dada}.
This invites us to reconsider the role of novelty in AI Art, given the compromise in the intentional autonomy that characterises human creativity \cite{mccormack_monash}. One consideration would be to rethink novelty in AI Art and aim for a simultaneous increase of autonomy and intent. For instance, the DARCI (Digital ARtist Communicating Intention) system builds an artificial system that exhibits creativity by noting the attribution of creativity with respect to system intentionality and autonomy. It has, thus far, addressed these difficulties and continued to exhibit these characteristics to some extent \cite{ventura_autonomous_intention}. Drawing back to the ``black box'' analogy for AI training and the resultant novelty, one may consider the integration of intent within the system and assign the loss in novelty towards the artist. For example, the art collective aurèce vettier reinvents intent by exploring hybrid combinations of art and algorithms \cite{aurece2}. This way, with AI Art as a component, novelty arises out of the artist's greater autonomy. \section{Conclusion} The novelty of AI Art need not arise out of appreciation for AI's capability to create such works, but rather from asking what the artwork entails in creativity and intent. It would be best to encourage artists to re-calibrate the role of AI in their art. One could explore one's curiosity by incorporating room for play, and perhaps unlock one's inner creativity through the autonomy retrieved within the process of creation. \section{Acknowledgement} Discussions with Lav R. Varshney are greatly appreciated.
\section{Splitting Functions in Spontaneously Broken $SU(2)_L\times U(1)_Y$} \label{sec:broken} While the parton shower formalism of the electroweak theory in the symmetric phase has much in common with that of $SU(3)_{\rm QCD}\times U(1)_{\rm EM}$, care needs to be taken when dealing with the broken phase and systematically accounting for the effects of the VEV ($v$). In a sense, we must extract the ``higher-twist'' effects of the broken electroweak theory in terms of powers of $v/E$. Although the regulating role of $v$ in the shower is somewhat analogous to that of $\Lambda_{QCD}$, the electroweak theory remains perturbative at $v$, and the unbroken QED shower continues into the deep infrared regime. The interplay between gauge and Goldstone degrees of freedom within the shower can also seem obscure, both technically and conceptually. \begin{figure}[t] \begin{center} \begin{subfigure}[t]{0.495\textwidth} \includegraphics[width=200pt]{figs/dPdkt} \vspace{-0.4cm} \caption{} \end{subfigure} \begin{subfigure}[t]{0.495\textwidth} \includegraphics[width=200pt]{figs/dPdz} \vspace{-0.4cm} \caption{} \end{subfigure} \vspace{-0.8cm} \end{center} \caption[]{Fixed-order differential emission rate for $W^\pm$ bosons off a massless fermion at $E_f = 10$~TeV: (a) $k_T$ distribution at $z=0.2$, (b) $z$ distribution at $k_T = m_W/2$. The different curves correspond to massless transversely-polarized $W^\pm_T$ (dotted curves), massive transversely-polarized $W^\pm_T$ (solid curves), and massive longitudinally-polarized $W^\pm_L$ (dashed curves). } \label{fig:zkt} \end{figure} Most immediately, the splitting functions of the unbroken theory, already detailed in Section~\ref{sec:unbroken}, must be adjusted to account for the physical masses of the gauge bosons, Higgs boson, and top quark. To a large extent, these constitute simple modifications, folding in the kinematic effects discussed in Section~\ref{sec:split}.
As a straightforward example, in Fig.~\ref{fig:zkt} we illustrate the fixed-order emission rate for $W^\pm$ bosons off a massless fermion at $E_f = 10$~TeV. Both the collinear and soft singularities of the massless theory (dotted curves) become regulated with $m_W \approx 80$~GeV (solid curves), as seen in the transversely-polarized boson $k_T$ distribution in Fig.~\ref{fig:zkt}(a) and the $z$ distribution in Fig.~\ref{fig:zkt}(b).\footnote{Note that in the region $z \lesssim m_W/E$, the $W$s are non-relativistic, and collinear splitting function language ceases to be strictly appropriate or reliable. This region could more rigorously be matched onto universal soft eikonal factors, e.g. as in~\cite{Denner:2000jv,Denner:2001gw}. But in practice, our treatment here still yields approximately correct rates for splitting angles $\lesssim 1$ when the splitting is defined in the hard scatter frame.} Indeed, giving the gauge bosons a mass is a common trick for regulating QCD and QED calculations. In the electroweak theory, such regulated splitting functions become physically meaningful. Figure~\ref{fig:zkt} also shows a contribution from longitudinal gauge boson radiation off of a massless fermion (dashed curves). This is a good example of an ``ultra-collinear'' process which emerges after EWSB at leading power in $v/E$. In this case it has a splitting probability of the form \begin{equation} d{\cal P} \sim {m^2_W\over k_T^2} {dk_T^2 \over k_T^2} \ . \label{eq:ultra} \end{equation} The rate is seen to be significant in the region $k_T \sim m_W$, and it can be larger than the conventional transverse emissions in the ultra-collinear region $k_T \lesssim m_W$ as seen in Fig.~\ref{fig:zkt}(a).
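To make the qualitative behavior of Eq.~(\ref{eq:ultra}) concrete, the following minimal numerical sketch integrates a schematically regulated form of the rate (an assumption for illustration only: the collinear divergence is cut off as $k_T^2 \to k_T^2 + m_W^2$, and all coupling prefactors are stripped). It confirms that the total emission probability saturates rather than growing logarithmically with the cutoff, and that the rate per logarithmic interval peaks at $k_T = m_W$:

```python
import math

m_W = 80.4  # GeV

# Schematic regulated ultra-collinear density: dP/dkT^2 ~ m_W^2/(kT^2 + m_W^2)^2.
# (Assumption: a simple kT^2 -> kT^2 + m_W^2 regularization; couplings stripped.)
def dP_dkt2(kt2):
    return m_W**2 / (kt2 + m_W**2)**2

def total_up_to(cut2):
    """Closed-form integral of dP_dkt2 from 0 to cut2."""
    return 1.0 - m_W**2 / (cut2 + m_W**2)

# Midpoint-rule cross-check of the closed form up to kT = 1 TeV.
n, cut2 = 100000, 1.0e6
h = cut2 / n
numeric = sum(dP_dkt2((i + 0.5) * h) for i in range(n)) * h

# Saturation: raising the cutoff from 1 TeV to 10 TeV barely changes the total,
# unlike the logarithmic growth of a conventional dkT^2/kT^2 emission.
sat_gap = total_up_to(1.0e8) - total_up_to(1.0e6)

# Per unit log(kT), the emission probability peaks at kT = m_W.
kts = [10.0 ** (e / 1000.0) for e in range(0, 4001)]  # 1 GeV .. 10 TeV
kt_peak = max(kts, key=lambda kt: kt**2 * dP_dkt2(kt**2))
```

By contrast, a conventional transverse emission with $d{\cal P}\propto dk_T^2/k_T^2$ would gain $\ln 100 \approx 4.6$ units of logarithm between the same two cutoffs.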
We further show in Fig.~\ref{fig:zkt}(b) the $z$ distribution at $k_T = m_W/2$, where we can see the dominance of the longitudinal polarization (dashed curve) over the transverse polarization (solid curve) for all values of $z$ at weak-scale values of $k_T$. Here we have defined $z$ as three-momentum fraction, employed a strict kinematic cut-off $z > k_T/E$, and multiplied the splitting rate by the $W$ velocity to account for non-relativistic phase space suppression. Considering emissions from light initial-state fermions, the ultra-collinear origins of these longitudinal weak bosons lead to quite distinctive PDFs~\cite{Kane:1984bb,Dawson:1984gx,Chanowitz:1985hj}. Due to the existence of an explicit mass scale $m_{W}^{} \sim gv$, the resulting PDFs exhibit Bjorken scaling~\cite{Bj}. In other words, they do not run logarithmically and do not exhibit the usual scaling violations of conventional PDFs in massless gauge theories. Consequently, the ISR jets associated with their generation are constrained to the region $k_T \sim m_W$ even for arbitrarily-energetic hard processes. This observation has led to the concepts of ``forward-jet tagging''~\cite{Cahn:1986zv,Barger:1988mr,Kleiss:1987cj} for the $W_{L}W_{L}$ scattering signal and ``central-jet vetoing''~\cite{Barger:1990py} for separating the $f\to W_T f'$ backgrounds. Such processes have no analogs in the unbroken theory. A naive application of the Goldstone-boson Equivalence Theorem (GET) \cite{Lee:1977eg,Chanowitz:1985hj} would have instructed us to identify longitudinal vector bosons with the eaten scalars from the Higgs doublet, and would have predicted zero rate because massless fermions have vanishing Yukawa couplings. More generally, we expect to see a variety of large effects of EWSB at $k_T \sim v$, beyond simple regulation of the unbroken-theory splitting functions.
These will involve not only the broken-phase masses of the SM particles, but also broken-phase interactions such as scalar-vector-vector and the scalar cubics. The more general role of Goldstone boson equivalence and its violations within the parton shower are rather subtle. We expect that the high-$k_T$ showering of longitudinal gauge bosons should closely follow the behavior of the scalars in the unbroken theory. But even this simple identification is obscured by longitudinal polarizations that diverge with energy and by the gauge/Goldstone boson propagators with gauge-dependent tensor and pole structure. For processes with multiple emissions, as well as with the introduction of the novel ultra-collinear emissions, complete isolation and removal of non-collinear gauge artifacts can appear rather complicated. We are thus compelled to seek out a more efficient treatment, such that the bad high energy behavior of the longitudinal gauge bosons is alleviated and the key features of EWSB are made more transparent. \subsection{Longitudinal gauge bosons and Goldstone Boson Equivalence} \label{sec:GEG} The standard form for the polarization vector of an on-shell longitudinal gauge boson $W$ with a four-momentum $k_W^\mu = E_W(1,\beta_W \hat k_W)$ is \begin{equation} \epsilon_{L}^\mu(W) \,=\, {E_W\over m_W} \left(\beta_W,\, \hat k_W\right) \,=\, {k^\mu_W\over m_W} - \frac{m_W} {E_W(1+\beta_W)}n^{\mu}, \label{eq:scalar} \end{equation} where we define the light-like four-vector \begin{equation} n^{\mu}\equiv(1,-\hat{k}_W) \, . \label{eq:nmu} \end{equation} The second term in Eq.~(\ref{eq:scalar}) is of the order $m_W/E_W$, which could seemingly be ignored at very high energies in accordance with the GET. However, there are caveats to this picture, and understanding how pseudo-scalars and longitudinal vector bosons behave as both external and intermediate states requires some care. In the simplest approach, one would keep only the leading contribution, $k^\mu_W/m_{W}^{} $. 
When contracted into scattering amplitudes, this piece effectively ``scalarizes'' the longitudinal vector boson, realizing the GET. This can often be seen at the level of individual Feynman diagrams. For example, in the decay of a heavy Higgs boson with $m_h \gg 2 m_{W}^{}$, the vertex $g\, m_{W}^{} hW^\mu W_\mu$ simply leads to a scalar interaction $(m_h^2 /v)h \phi^+ \phi^-$ after the substitution $\epsilon_{L}^\mu(W) \to {k^\mu_W/ m_{W}^{}}$. In other cases, such as in couplings to fermion lines, the naively bad high-energy behavior $\propto E_W/m_{W}^{}$ is fully cancelled thanks to Ward identities, up to possible chirality-flip effects that go like $m_f/E_W$. This reproduces the Yukawa couplings of the unbroken theory. When longitudinal and Goldstone bosons appear as off-shell intermediate states, it is also possible to show that neither the naively badly-behaved structure $k^\mu k^\nu/m_W^2$ (in unitarity gauge) nor spurious gauge/Goldstone poles (in more general gauges) can lead to new collinear behavior at zeroth-order in the VEV. The unbroken shower emerges as expected as long as $k_T \gg m_W$. The major complication to the GET picture is that the naively sub-leading effects from EWSB can dominate in the relativistic ultra-collinear regime. Even if the $k^\mu_W/m_{W}^{} $ piece of an emitted gauge boson is removed by Ward identities, the ${\cal O}(m_W/E_W)$ remainder of $\epsilon_{L}^\mu(W)$ can still receive a compensating ultra-collinear power-enhancement in the region $k_T \sim m_{W}^{}$. There may also be comparable EWSB contributions lurking within off-shell propagators, including as well the propagators of Higgs bosons and massive fermions. 
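The algebra behind the decomposition in Eq.~(\ref{eq:scalar}), and the statement that the remainder is ${\cal O}(m_W/E_W)$, can be verified numerically. The following sketch (the energy value is chosen arbitrarily) checks the decomposition identity together with the normalization $\epsilon_L\cdot\epsilon_L=-1$ and transversality $k\cdot\epsilon_L=0$:

```python
import math

m_W, E = 80.4, 1000.0                 # GeV; energy value chosen arbitrarily
p = math.sqrt(E**2 - m_W**2)          # |k| for an on-shell W
beta = p / E

def mdot(a, b):                       # Minkowski product, signature (+,-,-,-)
    return a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3]

k     = (E, 0.0, 0.0, p)              # W momentum taken along z
n     = (1.0, 0.0, 0.0, -1.0)         # light-like n = (1, -khat)
eps_L = (p/m_W, 0.0, 0.0, E/m_W)      # (E/m_W)(beta, khat)

# Right-hand side of the decomposition: k/m_W - [m_W/(E(1+beta))] n.
rhs = tuple(km/m_W - m_W/(E*(1.0 + beta))*nm for km, nm in zip(k, n))

norm    = mdot(eps_L, eps_L)          # -> -1 (unit space-like norm)
transv  = mdot(k, eps_L)              # -> 0  (transversality)
err_dec = max(abs(a - b) for a, b in zip(eps_L, rhs))
```

The leading piece $k^\mu/m_W$ grows like $E/m_W$, while the remainder term is numerically of size $m_W/(E+|\vec k|)$, i.e. parametrically suppressed as stated in the text.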
Disentangling all EWSB effects in an ultra-collinear parton splitting can be accomplished by isolating and removing all parts of a $1\to 2$ splitting amplitude that go like $(Q^2-m^2)/m_W^2$, where $Q^2$ and $m^2$ are respectively the squares of the four-momentum and pole mass of the off-shell particle in the splitting. Once multiplied by the propagators, such contributions are explicitly not collinear-enhanced, and would need to be combined with other non-collinear (and hence non-universal) diagrams from a hard process. Their extraction can generally be accomplished via manipulations between kinematic quantities, polarization vectors, and couplings. However, carrying out this extraction procedure process-by-process can be tedious, especially when multiple gauge bosons and/or nested collinear emissions are involved, and the effects of EWSB are often not immediately obvious. Within the gauge/Goldstone boson sector, we expect that the $k^\mu_W/m_{W}^{}$ piece of the longitudinal polarization vector must generally reproduce the Goldstone scalar couplings, whereas the effects of EWSB are captured by the remainder term in Eq.~(\ref{eq:scalar}). A more convenient approach for tracking EWSB effects would be to keep the Goldstone scalar contributions manifest, and treat the remainder polarization as a separate entity. We point out that such a division can be enforced by judicious gauge-fixing. We do so here via a novel gauge which we call {\it Goldstone Equivalence Gauge} (GEG). GEG is defined by generalizing off-shell the light-like four-vector $n^\mu$ that appears in Eq.~(\ref{eq:scalar}) and using it to perform the gauge-fixing in momentum-space. 
Taking $W_\mu$ to represent any specific real gauge adjoint, with contraction of gauge indices left implicit, we adopt the gauge-fixing term (dropping here and below the ``$W$'' subscript on energy/momentum variables) \begin{equation} {\cal L}_{\rm fix} \,=\, -{1\over 2\xi} \big(n^\mu(k)\,W_\mu(k)\big)\big( n^\nu(k)\,W_\nu(-k)\big), \quad\quad (\xi \to 0) \ . \label{eq:gauge} \end{equation} Taking the $\xi\to 0$ limit effectively introduces an infinite mass term for the gauge polarization associated with the collinear light-like direction $\bar n^\mu \equiv (1,\hat k)$, aligned with the large components of relativistic momentum modes. This reduces the naive number of dynamical gauge degrees of freedom from four to three. The transverse modes ($xy$ or helicity $\pm1$) are as usual, except that they gain a mass term after spontaneous symmetry breaking. The remaining gauge degree of freedom ``$W_n$" explicitly mixes into the Goldstone boson, and becomes associated with exactly the remainder polarization in Eq.~(\ref{eq:scalar}). GEG is essentially a hybrid of Coulomb gauge \cite{Beenakker:2001kf} and light-cone gauge \cite{Srivastava:2002mw}, incorporating both the rotational-invariance of the former and the collinear boost-invariance of the latter, while isolating spurious gauge poles/discontinuities away from physical regions.\footnote{GEG falls into a more general class of non-covariant but physical gauges that exhibit many similar features in the broken phase. These include Coulomb~\cite{Beenakker:2001kf}, axial~\cite{Dams:2004vi}, and strict light-cone~\cite{Srivastava:2002mw} (as well as temporal, which has received little attention). 
In particular, splitting functions computed within GEG and Coulomb gauge should agree at high energies, but the latter can exhibit artificial singularities at zero three-momentum due to the residual gauge freedom.} This approach can be contrasted with the more commonly-used $R_\xi$ gauges, in which individual splitting diagrams often exhibit unphysical gauge artifacts scaling as $1/v$, Goldstone fields live purely off-shell, and Goldstone equivalence can become obscured. Canonically normalizing such that the gauge remainder field $W_n$ interpolates a longitudinal boson state with unit amplitude at tree level, its interaction vertices carry the polarization factor \begin{equation} \epsilon_{n}^\mu(k) \,\equiv\, \frac{-\sqrt{|k^2|}}{n(k)\cdot k}\ n^\mu(k) \,\,\overset{\overset{\text{\rm \footnotesize on-shell}}{}}{\to}\,\, \frac{m_W}{E+|\vec k|}\left(-1,\hat k\right). \label{eq:L} \end{equation} The Goldstone field remains an integral part of the description here, but in a manner quite different from that in $R_\xi$ gauges. In particular, it interpolates onto the {\it same} external particle as the remainder gauge field. This particle, which may alternately be viewed as a ``longitudinal gauge boson'' or as a ``Goldstone boson'', takes on a kind of dual identity in interactions. Processes involving creation/annihilation of this particle are computed by coherently summing over Feynman diagrams interpolated by both remainder gauge fields and Goldstone fields.\footnote{For a different but related approach, see~\cite{Wulzer:2013mza}.} More details and example calculations are presented in Appendices~\ref{sec:gauge} and~\ref{sec:FeynmanRules}. However, we can summarize here the key features of GEG that are relevant for parton shower physics: \begin{itemize} \item Gauge artifacts proportional to $E/m_W$ are deleted from the description of the theory at the outset, and appear neither in external polarizations nor in propagators. 
Physical longitudinal gauge bosons are no longer interpolated by a gauge boson field $W_{L}$ and its associated ${\cal O}(E/m_W)$ polarization vector $\epsilon_{L}^\mu$, and no propagating component of the gauge field serves as a proxy for the eaten Goldstone bosons in high-energy interactions via ``scalarization.'' Instead, only a remainder gauge field $W_n$ may still interpolate longitudinal gauge bosons. But it does so via the suppressed ${\cal O}(m_W/E)$ polarization vector $\epsilon_n^\mu$ in Eq.~(\ref{eq:L}). \item The high-energy equivalence between longitudinal gauge bosons and Goldstone bosons becomes trivially manifest at the level of individual Feynman diagrams. This is because the Goldstone fields behave almost exactly as in the unbroken theory at high energies ($v/E\to 0$). The equivalence extends off-shell, encountering neither the usual fake gauge nor Goldstone poles. All propagators exhibit the physical pole at $m_{W}^{}$ or $m_Z^{}$ with positive residue. This greatly simplifies the interpretation of an ``almost on-shell'' boson as an intermediate state in a shower. \item Departures from Goldstone boson equivalence become organized in a systematic power expansion in $v/E$ factors. This allows general ultra-collinear splitting processes to be viewed as simple sums of well-behaved $1\to2$ Feynman diagrams. EWSB contributions in splitting matrix elements can come from remainder-longitudinal gauge insertions, fermion mass terms in spinor polarizations, and a small set of standard EWSB three-point vertices. \end{itemize} As a final remark of this section, we would like to point out that the GET has been shown to be valid including radiative corrections \cite{Yao:1988aj,Bagger:1989fc,He:1992nga}. Given the close relation between the GET and GEG, we suspect that GEG should also be adequate in dealing with radiative corrections.
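As a concrete check of this bookkeeping, one can verify numerically that the polarization factor in Eq.~(\ref{eq:L}) reduces on-shell to the quoted ${\cal O}(m_W/E)$ form, and that it is precisely the remainder left after stripping the Goldstone-like piece $k^\mu_W/m_W$ from $\epsilon_L^\mu$ in Eq.~(\ref{eq:scalar}):

```python
import math

m_W, E = 80.4, 1000.0                  # GeV; an arbitrary relativistic energy
p = math.sqrt(E**2 - m_W**2)           # |k| for an on-shell W

def mdot(a, b):                        # Minkowski product, signature (+,-,-,-)
    return a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3]

k = (E, 0.0, 0.0, p)                   # on-shell momentum along z
n = (1.0, 0.0, 0.0, -1.0)              # light-like n = (1, -khat)

# eps_n = -sqrt(|k^2|)/(n.k) n: the GEG remainder polarization factor.
k2 = mdot(k, k)                        # = m_W^2 on shell (up to rounding)
eps_n = tuple(-math.sqrt(abs(k2)) / mdot(n, k) * nm for nm in n)

# Quoted on-shell limit: [m_W/(E+|k|)] (-1, khat) -- manifestly O(m_W/E).
limit = (-m_W/(E + p), 0.0, 0.0, m_W/(E + p))

# eps_L = k/m_W + eps_n: eps_n is exactly the remainder after removing the
# Goldstone-like piece k/m_W from the longitudinal polarization vector.
eps_L = (p/m_W, 0.0, 0.0, E/m_W)
recon = tuple(km/m_W + em for km, em in zip(k, eps_n))

err_limit = max(abs(a - b) for a, b in zip(eps_n, limit))
err_recon = max(abs(a - b) for a, b in zip(recon, eps_L))
```

Coherently adding the Goldstone-field diagrams (which carry the $k^\mu/m_W$ dynamics) to the $W_n$-field diagrams (which carry $\epsilon_n^\mu$) thus reproduces the full longitudinal amplitude, with the EWSB effects isolated in the suppressed piece.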
\subsection{Splitting functions in the broken phase} \subsubsection{Modifications to unbroken-phase splitting functions} The unbroken-phase splitting functions governed by the gauge and Yukawa couplings given in Tables~\ref{tab:massless_fermion_splittings} to~\ref{tab:massless_scalar_splittings} of Sec.~\ref{sec:unbroken} are still valid for $k_T$'s and virtualities far above the masses of all of the participating particles, provided we make the identification between pseudo-scalars and longitudinal gauge bosons in accordance with the GET. Indeed, in Goldstone Equivalence Gauge, this correspondence is completely transparent. The splitting matrix elements can be used largely unchanged as long as all of the particles are also relativistic, with corrections that typically scale as ${\cal O}(g^2v^2/E^2)$. At $k_T$'s and virtualities approaching the physical masses, EWSB causes these splitting functions to either smoothly shut off or to transition into resonance decays. The modifications are captured by the propagator and kinematic effects outlined in Section~\ref{sec:split}. In particular, the propagator modifications effectively rescale the unbroken-phase splitting functions of Tables~\ref{tab:massless_fermion_splittings}--\ref{tab:massless_scalar_splittings} as \begin{equation} \frac{d{\cal P}}{dz\,dk_T^2} \,\to\, \frac{k_T^4}{{\tilde k}_T^4} \, \frac{d{\cal P}}{dz\,dk_T^2} \, \quad {\rm where}\quad {\tilde k}_T^2 = k_T^2 + \bar z m_B^2 + z m_C^2 - z\bar z m_A^2 . \label{eq:ktilde} \end{equation} Soft ($1/z$ type) singularities also generally become regulated, though in the $1\to 2$ collinear splitting function language this regulation is somewhat convention-dependent. For $k_T$'s far above the physical masses, soft singularities are anyway constrained by kinematics: $z,\bar z \gtrsim k_T/E_A$.
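As an illustration of the rescaling in Eq.~(\ref{eq:ktilde}), the following sketch evaluates the suppression factor $k_T^4/\tilde k_T^4$ for a splitting with one massive daughter, e.g. $f\to W_T f'$ off massless fermions ($m_A = m_C = 0$, $m_B = m_W$; the value $z=0.2$ is chosen arbitrarily). The factor shuts the splitting off below the weak scale and approaches unity far above it:

```python
m_W = 80.4          # GeV
z = 0.2             # momentum fraction carried by the W (illustrative choice)
zbar = 1.0 - z

def suppression(kt):
    """kT^4 / ktilde^4 for mA = mC = 0, mB = m_W."""
    ktilde2 = kt**2 + zbar * m_W**2
    return (kt**2 / ktilde2) ** 2

low  = suppression(8.0)     # kT << m_W: splitting strongly suppressed
mid  = suppression(m_W)     # kT ~ m_W: order-one suppression
high = suppression(800.0)   # kT >> m_W: unbroken-theory rate recovered
```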
For lower $k_T$'s, such that non-relativistic splitting momenta can be approached, the $k_T$ suppression also sufficiently regulates any soft-singular behavior. But additional soft phase space factors can also be applied to reduce artificial spikes in the differential splitting rates. Minimalistically, this involves the product of velocities of the outgoing products in final-state showers, and for initial-state showers involves the product of the on-shell daughter's velocity and the space-like daughter's ``velocity''. We have seen a simple example in Fig.~\ref{fig:zkt}(b). For the neutral boson states, the propagator factors become matrices. These may be conveniently diagonalized by rotating from the interaction basis $B^0/W^0$ and $H^0/H^{0*}$ to the mass basis $\gamma/Z_T$ and $h/Z_{L}$. The former requires the usual rotation by $\theta_W$ in gauge space. The latter is accomplished by a $U(2)$ rotation into the standard CP-eigenstates. The showering must still be performed coherently in order to capture nontrivial effects such as the flow of weak isospin and Higgs number. The full treatment is detailed in Appendix~\ref{app:split}. One residual complication is that the off-diagonal terms in the splitting function matrices are proportional to products of different propagator factors. E.g., for a $\gamma/Z_T$ state, the appropriate modification factor for $d{\cal P}_{\gamma Z}$ would use instead \begin{equation} \tilde k_T^4 \,\to\, (k_T^2 + \bar z m_B^2 + z m_C^2)(k_T^2 + \bar z m_B^2 + z m_C^2 - z\bar z m_Z^2) \, . \label{eq:ktilde2} \end{equation} We also note that our convention here is to align the phases of external $Z_{L}$ states with those of the eaten scalar $\phi^0$. Consequently, terms like $d{\cal P}_{h Z_{L}}$ are pure imaginary. The above modifications do not explicitly address possible running effects in the masses. Indeed, the numerical impact of the mass terms in the shower is anyway highly suppressed except at splitting scales of $\mathcal{O}(v)$. 
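The $B^0/W^0 \to \gamma/Z_T$ rotation invoked above can be made concrete with a minimal numerical sketch (the coupling values below are illustrative tree-level inputs, not precision determinations): the Weinberg-angle rotation diagonalizes the neutral gauge-boson mass matrix, yielding a massless photon and $m_Z^2 = (g^2+g'^2)v^2/4$.

```python
import math

g, gp, v = 0.65, 0.36, 246.0           # illustrative tree-level values; v in GeV

# Neutral-boson mass matrix in the (B0, W0) basis:
# (v^2/4) [[g'^2, -g g'], [-g g', g^2]].
M = [[ v*v/4.0 * gp*gp, -v*v/4.0 * g*gp],
     [-v*v/4.0 * g*gp,   v*v/4.0 * g*g ]]

tw = math.atan2(gp, g)                 # Weinberg angle, tan(theta_W) = g'/g
c, s = math.cos(tw), math.sin(tw)
R = [[c, s], [-s, c]]                  # (gamma, Z) = R . (B0, W0)

# D = R M R^T: the mass matrix in the rotated basis.
RM = [[sum(R[i][a]*M[a][j] for a in range(2)) for j in range(2)] for i in range(2)]
D  = [[sum(RM[i][a]*R[j][a] for a in range(2)) for j in range(2)] for i in range(2)]

m_gamma2, m_Z2 = D[0][0], D[1][1]      # -> 0 and (g^2 + g'^2) v^2/4
```

The analogous $U(2)$ rotation of $H^0/H^{0*}$ into the CP eigenstates $h/Z_L$ follows the same pattern, with the coherent off-diagonal entries retained as described in Appendix~\ref{app:split}.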
Still, some cases, such as kinematics with $k_T \sim v$ but $Q \gg v$, might require special care in the inclusion of higher-order radiative corrections. Similar considerations apply to the purely ultra-collinear splitting processes discussed below. \subsubsection{Ultra-collinear broken-phase splitting functions} \bgroup \def\arraystretch{1.5} \begin{table} \centering {\small \begin{tabular}{l|ccc} \multicolumn{1}{l}{} & \begin{picture}(50,40)(0,0) \SetColor{Black} \SetWidth{2} \SetScale{0.7} \ArrowLine( 0,25)(50,25) \DashLine(50,25)(95,45){6} \ArrowLine( 50,25)(95,5) \Text(15,10)[]{$\boldsymbol \Leftarrow$} \Text(50,3)[]{\rotatebox{-26}{$\boldsymbol \Leftarrow$}} \SetWidth{1} \GCirc(50,25){8}{0.7} \Text(70,35)[l]{$\phi/V_{L}$} \end{picture} & \begin{picture}(50,40)(0,0) \SetColor{Black} \SetWidth{2} \SetScale{0.7} \ArrowLine( 0,25)(50,25) \DashLine(50,25)(95,45){6} \ArrowLine( 50,25)(95,5) \Text(15,10)[]{$\boldsymbol \Leftarrow$} \Text(50,3)[]{\rotatebox{-26}{$\boldsymbol \Leftarrow$}} \SetWidth{1} \GCirc(50,25){8}{0.7} \Text(70,35)[l]{$h$} \end{picture} & \begin{picture}(50,40)(0,0) \SetColor{Black} \SetWidth{2} \SetScale{0.7} \ArrowLine( 0,25)(50,25) \Photon(50,25)(95,45){3}{3} \ArrowLine( 50,25)(95,5) \Text(15,10)[]{$\boldsymbol \Leftarrow$} \Text(50,3)[]{\rotatebox{-26}{$\boldsymbol \Rightarrow$}} \SetWidth{1} \GCirc(50,25){8}{0.7} \end{picture} \\ & \underline{\hspace{0.7cm}$\dfrac{1}{16\pi^2}\dfrac{v^2}{\tilde{k}^4_T}\left(\dfrac{1}{z}\right)$\hspace{0.7cm}} & \underline{\hspace{0.3cm}$\dfrac{1}{16\pi^2}\dfrac{v^2}{\tilde{k}^4_T}$\hspace{0.3cm}} & \underline{\hspace{0.3cm}$\dfrac{1}{16\pi^2}\dfrac{v^2}{\tilde{k}^4_T}$\hspace{0.3cm}} \\ & $\to \ V_{L} \: f_{s}^{(\prime)} \ (V\!\ne\!\gamma)$ & $h \: f_{s}$ & $V_T \: f_{\text{-} s}^{(\prime)}$ \\ \hline $f_{s=L}$ & $\big( I_f^V(y_f^2\bar z - y_{f^{(\prime)}}^2)z - Q_{f_L}^Vg_V^2 \bar z \big)^2$ & $\frac14 y_f^4 z(1+\bar z)^2$ & $g_V^2 z \big(Q_{f_R}^V y_{f}\bar z - Q_{f_L}^V y_{f^{(\prime)}}\big)^2$ \\ $f_{s=R}$\ & $\big(
I_f^V y_f y_{f^{(\prime)}}z^2 - Q_{f_R}^Vg_V^2 \bar z \big)^2 $ & $\frac14 y_f^4 z(1+\bar z)^2$ & $g_V^2 z \big(Q_{f_L}^V y_{f}\bar z - Q_{f_R}^V y_{f^{(\prime)}}\big)^2$ \end{tabular} } \caption{Ultra-collinear fermion splitting functions $d{\mathcal P}/dz\,dk_T^2$ in the broken phase. Wavy lines represent transverse gauge bosons, while the longitudinals/Goldstones and Higgs bosons are represented by dashed lines. The $\tilde k_T^4$ symbol is defined in Eq.~(\ref{eq:ktilde}). The $I_f^V$ symbol is a shorthand for the ``charge'' of a fermion in its Yukawa coupling to the eaten Goldstone boson, or equivalently the fermion's axial charge under the vector $V$. These are normalized to approximately follow the weak isospin couplings, but are defined independently of the fermion's helicity: $I_u^Z = 1/2$, $I_{d/e}^Z = -1/2$, $I_u^{W^\pm} = I_{d/e}^{W^\pm} = 1/\sqrt{2}$. Other conventions are given in Appendix~\ref{sec:FeynmanRules}.} \label{tab:broken_fermion_splittings} \end{table} \egroup The remaining task is to compute all of the ultra-collinear splitting functions, proportional to the EWSB scale like in Eq.~(\ref{eq:ultra}). Generalizing the standard massless-fermion $f\to W_{L} f'$ calculation~\cite{Kane:1984bb,Dawson:1984gx,Chanowitz:1985hj}, we include the splittings involving arbitrary particles in the SM. The electroweak VEV ($v$), to which all of these splitting functions are proportionate, has been explicitly extracted, as well as universal numerical factors, the kinematic factor $\tilde k_T^4$ as in Eq.~(\ref{eq:ktilde}) or Eq.~(\ref{eq:ktilde2}), and the leading soft singularity structure ($1/z$, $1/\bar z$, or $1/z\bar z$). These are obtained quite straightforwardly in GEG, where individual $1\to 2$ ultra-collinear matrix elements all scale manifestly as $g^2v$, $y_f^2v$, or $gy_fv$. See Appendix~\ref{sec:FeynmanRules} for some explicit examples. 
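As a cross-check of the statement that ultra-collinear rates asymptote with energy, the following sketch integrates the $f \to W_{L} f'$ kernel shape for massless external fermions, keeping only the $(1-z)^2/z \times v^2/\tilde k_T^4$ structure with $\tilde k_T^2 = k_T^2 + \bar z\, m_W^2$ (all coupling and $v^2$ prefactors are stripped, so only the shape and energy dependence are meaningful). The $k_T$ integral saturates, so $d{\cal P}/dz$ at fixed $z$ is essentially independent of the parent energy, the Bjorken-scaling behavior noted earlier:

```python
import math

m_W, z = 80.4, 0.2        # GeV; z chosen arbitrarily
zbar = 1.0 - z

def dPdz(E, n=200000):
    """Midpoint-rule integral of the (1-z)^2/z * 1/ktilde^4-shaped kernel
    up to the kinematic limit kT ~ zE; couplings and v^2 are stripped."""
    cut2 = (z * E) ** 2
    h = cut2 / n
    total = 0.0
    for i in range(n):
        kt2 = (i + 0.5) * h
        total += zbar**2 / z / (kt2 + zbar * m_W**2) ** 2
    return total * h

r10  = dPdz(10_000.0)            # 10 TeV parent fermion
r100 = dPdz(100_000.0)           # 100 TeV parent: essentially the same answer
analytic = zbar / (z * m_W**2)   # closed form of the full kT integral
```

The analytic form $\propto (1-z)/(z\, m_W^2)$ also exhibits the residual $1/z$ soft behavior discussed below for longitudinal emissions.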
We present these ``purely broken'' splitting functions in Tables~\ref{tab:broken_fermion_splittings}$-$\ref{tab:broken_scalar_splittings}, using similar logic as in Section~\ref{sec:unbroken}, though now working exclusively in mass basis for the neutral bosons. Unlike conventional collinear splittings, ultra-collinear splittings do not lead to collinear logarithms. Instead, integrating the emissions at a fixed value of $z$ yields a rate that asymptotes to a fixed value as the input energy increases. However, they are also unlike ordinary finite perturbative corrections, in that they are highly collinear-beamed, and subject to maximally large Sudakov effects from the conventional parton showering that can occur at higher emission scales. Ultra-collinear emissions of longitudinal gauge bosons, when formed by replacing a transverse boson in any conventional gauge emission by a longitudinal boson, retain soft-singular behavior $\sim 1/z$. (Within GEG, the $1/z$ factors within the splitting matrix elements become regulated to $2E_W/(E_W+k_W)$.) Fully integrating over emission phase space, these still lead to single-logarithmic divergences at high energy. This result might seem at odds with smoothly taking the unbroken limit. For $f\to W_{L}f'$, as we dial $v$ to zero at fixed fermion energy, the emission rate for longitudinal bosons grows unbounded. However, the spectrum of those bosons has a median energy fraction $z \sim \sqrt{m_W/E_f}$, and also tends to zero. Moreover, in theories where the fermion has a gauge-invariant mass, such as QED, the nominal ultra-collinear region $k_T \lesssim m_W$ becomes subsumed by the usual emission dead cone at $k_T \lesssim m_f$. Many of the other (soft-regular) splitting functions are close analogs of the unbroken splittings, but with ``wrong'' helicities.
For example, there are processes where a fermion emits a transverse gauge boson but undergoes a helicity flip, and also where a fermion emits a Higgs boson {\it without} flipping its helicity. There are also new processes such as $h\to h h$ where such an identification is not possible. Schematically, all of these processes can be viewed as arising from $1\to 3$ splittings in the unbroken theory, where one of the final-state particles is a Higgs boson set to its VEV. \bgroup \def\arraystretch{1.5} \begin{table} \begin{subtable}[t]{{\textwidth}} \centering {\small \begin{tabular}{l|cccc} \multicolumn{1}{l}{} & \multicolumn{4}{c}{ \begin{picture}(50,40)(0,0) \SetColor{Black} \SetWidth{2} \SetScale{0.7} \Photon( 0,25)(50,25){4}{3} \DashLine(50,25)(95,45){6} \Photon( 50,25)(95,5){4}{3} \SetWidth{1} \GCirc(50,25){8}{0.7} \Text(70,35)[l]{$\phi/V_{L}$} \end{picture} } \\ & \multicolumn{4}{c}{\underline{\hspace{5.0cm}$\dfrac{1}{16\pi^2}\dfrac{v^2}{\tilde{k}^4_T}\left(\dfrac{1}{z}\right)$\hspace{5.0cm}}} \\ & $\rightarrow W^{\pm}_{L} \: \gamma_T$ & $W^{\pm}_{L} \: Z_T$ & $Z_{L} \: W^{\pm}_T$ & $W_{L}^+ \: W_T^- \ {\rm or}\ \ W_{L}^- \: W_T^+$ \\ \hline $W_T^\pm$\ & $e^2 g_2^2 \bar z^3$ & $\frac14 c_W^2 g_2^4 \bar z \left((1+\bar z) + t_W^2z\right)^2$ & $\frac14 g_2^4 \bar z (1+\bar z)^2$ & $0$ \\ $\gamma_T$ & $0$ & $0$ & $0$ & $e^2 g_2^2 \bar z$ \\ $Z_T$ & $0$ & $0$ & $0$ & $\frac14 c_W^2 g_2^4 \bar z \left((1+\bar z) - t_W^2z\right)^2$ \\ $[\gamma Z]_T$ & $0$ & $0$ & $0$ & $\frac12 c_W e g_2^3 \bar z \left((1+\bar z) - t_W^2z\right)$ \end{tabular} } \vspace{0.5cm} \end{subtable} \begin{subtable}[t]{\textwidth} \centering {\small \begin{tabular}{c|cc} \multicolumn{1}{l}{} &\begin{picture}(50,40)(0,0) \SetColor{Black} \SetWidth{2} \SetScale{0.7} \Photon( 0,25)(50,25){4}{3} \DashLine(50,25)(95,45){6} \Photon( 50,25)(95,5){4}{3} \SetWidth{1} \GCirc(50,25){8}{0.7} \Text(70,35)[l]{$h$} \end{picture} & \begin{picture}(50,40)(0,0) \SetColor{Black} \SetWidth{2} \SetScale{0.7} \Photon(
0,25)(50,25){4}{3} \ArrowLine(50,25)(95,45) \ArrowLine(95,5)( 50,25) \Text(50,32)[]{\rotatebox{26}{$\boldsymbol \Rightarrow$}} \Text(50,3)[]{\rotatebox{-26}{$\boldsymbol \Rightarrow$}} \SetWidth{1} \GCirc(50,25){8}{0.7} \end{picture} \\ & \underline{\hspace{0.3cm}$\dfrac{1}{16\pi^2}\dfrac{v^2}{\tilde{k}^4_T}$\hspace{0.3cm}} & \underline{\hspace{0.3cm}$\dfrac{1}{16\pi^2}\dfrac{v^2}{\tilde{k}^4_T}$\hspace{0.3cm}} \\ & \ $\to h \: V_T \ (V\!\ne\!\gamma)$ \ & $f_s \: \bar f^{(\prime)}_s$ \\ \hline $V_T$ & $\frac14 z\bar z g_V^4$ & $\frac12 g_V^2 \left( Q^V_{f_s} y_{f^{(\prime)}} z + Q^V_{f_{\text{-} s}} y_{f} \bar z \right)^2$ \\ $[\gamma Z]_T$ & $0$ & $\frac12 e g_Z y_f^2 Q^\gamma_{f} \left( Q^Z_{f_s} z + Q^Z_{f_{\text{-} s}} \bar z \right)$ \end{tabular} } \end{subtable} \caption{Ultra-collinear transverse vector splitting functions $d{\mathcal P}/dz\,dk_T^2$ in the broken phase. For the off-diagonal incoming $[\gamma Z]_T$, the $\tilde k_T^4$ symbol is defined in Eq.~(\ref{eq:ktilde2}). Other conventions are as in Table~\ref{tab:broken_fermion_splittings} and in Appendix~\ref{sec:FeynmanRules}.} \label{tab:broken_vector_splittings} \end{table} \egroup To make Tables~\ref{tab:broken_fermion_splittings}$-$\ref{tab:broken_scalar_splittings} more compact, and to make closer contact with practical applications, we have made one additional simplification by neglecting neutral boson interference effects for outgoing particles. E.g., for an ultra-collinear process such as $t_{s} \to (h/Z_{L})t_{s}$ (helicity non-flipping scalar emission), we treat the outgoing Higgs and longitudinal $Z$ states incoherently. For final-state radiation, such a treatment is easily justified, since, as discussed in Section~\ref{sec:mass_effects}, the particles produced out of an ultra-collinear splitting have suppressed secondary showering.
And for PDF evolution starting from an initial-state composed exclusively of light matter, there are simply no available ultra-collinear processes where such interference effects can occur (e.g., there is GET-violating $q_{s} \to Z_{L}q_{s}$, but not $q_{s} \to hq_{s}$). At higher scales, where heavier particles begin to populate the PDFs, further ultra-collinear splittings are again suppressed. Note, however, that we retain interference effects for {\it incoming} neutral bosons, which can remain important for final-state splittings like $\gamma/Z_T \to W^\pm_{L}W^\mp_T$. We also re-emphasize that interference effects for outgoing particles should still be retained for the conventional splitting functions, even in the broken phase. This is particularly important for the generation of the mixed $\gamma/Z_T$ PDF. \bgroup \def\arraystretch{1.5} \begin{table} \begin{subtable}[t]{\textwidth} \centering {\small \begin{tabular}{l|cc} \multicolumn{1}{l}{} &\multicolumn{2}{c}{ \begin{picture}(50,40)(0,0) \SetColor{Black} \SetWidth{2} \SetScale{0.7} \DashLine( 0,25)(50,25){6} \DashLine(50,25)(95,45){6} \DashLine( 50,25)(95,5){6} \SetWidth{1} \GCirc(50,25){8}{0.7} \Text(70,35)[l]{$\phi/V_{L}$} \Text(70, 5)[l]{$\phi/V_{L}$} \end{picture} } \\ & \multicolumn{2}{c}{\underline{\hspace{3.5cm}$\dfrac{1}{16\pi^2}\dfrac{v^2}{\tilde{k}^4_T}\left(\dfrac{1}{z\bar z}\right)$\hspace{3.5cm}}} \\ & $\rightarrow W^+_{L} \: W^-_{L}$ & $Z_{L} \: W^\pm_{L}/Z_{L}$ \\ \hline $W^{\pm}_{L}$ & $0$ & $\frac{1}{16} g_2^4 \left( (\bar z-z)(2+z\bar z) - t_W^2\bar z(1+\bar z) \right)^2$ \\ $h$ & $\frac14 \left( g_2^2(1-z\bar z) - \lambda_h z\bar z \right)^2$ & $\frac18 \left( g_Z^2(1-z\bar z) - \lambda_h z\bar z \right)^2$ \\ $Z_{L}$ & $\frac{1}{16} g_2^4 \left( (\bar z-z) (2+z\bar z - t_W^2z\bar z) \right)^2$ & $0$ \\ $[hZ_{L}]$ & $\frac{i}{8} g_2^2 \left( g_2^2(1-z\bar z) - \lambda_h z\bar z \right) \left(\bar z-z\right) \left(2+z\bar z - t_W^2z\bar z \right)$ & $0$ \end{tabular} } \vspace{0.5cm} \end{subtable}
\begin{subtable}[t]{\textwidth} \centering {\small \begin{tabular}{l|cc} \multicolumn{1}{l}{} & \ \ \ \ \begin{picture}(50,40)(0,0) \SetColor{Black} \SetWidth{2} \SetScale{0.7} \DashLine( 0,25)(50,25){6} \DashLine(50,25)(95,45){6} \DashLine( 50,25)(95,5){6} \SetWidth{1} \GCirc(50,25){8}{0.7} \Text(70,35)[l]{$h$} \Text(70, 5)[l]{$\phi/V_{L}$} \end{picture} \ \ \ \ & \ \ \ \ \begin{picture}(50,40)(0,0) \SetColor{Black} \SetWidth{2} \SetScale{0.7} \DashLine( 0,25)(50,25){6} \DashLine(50,25)(95,45){6} \DashLine( 50,25)(95,5){6} \SetWidth{1} \GCirc(50,25){8}{0.7} \Text(70,35)[l]{$h$} \Text(70,5)[l]{$h$} \end{picture} \ \ \ \ \\ & \underline{\hspace{0.3cm}$\dfrac{1}{16\pi^2}\dfrac{v^2}{\tilde{k}^4_T}\left(\dfrac{1}{\bar z}\right)$\hspace{0.3cm}} & \underline{\hspace{0.3cm}$\dfrac{1}{16\pi^2}\dfrac{v^2}{\tilde{k}^4_T}$\hspace{0.3cm}} \\ & $\rightarrow h \: W^\pm_{L}/Z_{L}$ & $h \: h$ \\ \hline $W^{\pm}_{L}$ & $\frac14 z \left( g_2^2(1-z\bar z) + \lambda_h \bar z \right)^2$ & $0$ \\ $h$ & $0$ & $\frac98 \lambda_h^2 z\bar z$ \\ $Z_{L}$ & $\frac14 z \left( g_Z^2(1-z\bar z) + \lambda_h \bar z \right)^2$ & $0$ \\ $[hZ_{L}]$ & $0$ & $0$ \end{tabular} } \vspace{0.5cm} \end{subtable} \begin{subtable}[t]{\textwidth} {\small \begin{tabular}{l|cccc} \multicolumn{1}{l}{} & \multicolumn{3}{c}{ \begin{picture}(50,40)(0,0) \SetColor{Black} \SetWidth{2} \SetScale{0.7} \DashLine( 0,25)(50,25){6} \Photon(50,25)(95,45){4}{3} \Photon( 50,25)(95,5){4}{3} \SetWidth{1} \GCirc(50,25){8}{0.7} \end{picture} } & \begin{picture}(50,40)(0,0) \SetColor{Black} \SetWidth{2} \SetScale{0.7} \DashLine( 0,25)(50,25){6} \ArrowLine(50,25)(95,45) \ArrowLine(95,5)( 50,25) \Text(50,32)[]{\rotatebox{ 26}{$\boldsymbol \Leftarrow$}} \Text(50, 3)[]{\rotatebox{-26}{$\boldsymbol \Rightarrow$}} \SetWidth{1} \GCirc(50,25){8}{0.7} \end{picture} \\ & \multicolumn{3}{c}{\underline{\hspace{3.3cm}$\dfrac{1}{16\pi^2}\dfrac{v^2}{\tilde{k}^4_T}$\hspace{3.3cm}}} &
\multicolumn{1}{c}{\underline{\hspace{0.3cm}$\dfrac{1}{16\pi^2}\dfrac{v^2}{\tilde{k}^4_T}$\hspace{0.3cm}}} \\ & $\to \gamma_T \: W_T^\pm$ & $Z_T \: W_T^\pm/Z_T$ & $W_T^+ \: W_T^-$ & $f_s \: f^{(\prime)}_{\text{-} s}$ \\ \hline \multirow{2}{*}{$W^{\pm}_{L}$} & \multirow{2}{*}{$2 e^2 g_2^2 z^3\bar z$} & \multirow{2}{*}{$\frac12 c_W^2 g_2^4 z\bar z \left( (\bar z-z) + t_W^2 \right)^2$} & \multirow{2}{*}{$0$} & \multicolumn{1}{l}{$s\!=\!L: \ \frac12 \left(y_f^2\bar z + y_{f'}^2z - g_2^2z\bar z \right)^2$} \\ & & & & \multicolumn{1}{l}{$s\!=\!R: \qquad\qquad \frac12 y_f^2 y_{f'}^2$} \\ $h$ & $0$ & $\frac14 g_Z^4 z\bar z$ & $\frac12 g_2^4 z\bar z$ & $\frac14 y_f^4 (\bar z-z)^2$ \\ $Z_{L}$ & $0$ & $0$ & $\frac12 g_2^4 z\bar z \left( \bar z-z \right)^2$ & $\left(I_f^Z y_f^2 - Q_{f_s}^Z g_Z^2 z\bar z \right)^2$ \\ $[hZ_{L}]$ & $0$ & $0$ & $-\frac{i}{2} g_2^4 z\bar z \left( \bar z-z \right)$ & $(-1)^s\frac{i}{2} y_f^2 (\bar z-z) \left(I_f^Z y_f^2 - Q_{f_s}^Z g_Z^2 z\bar z \right)$ \end{tabular} } \end{subtable} \caption{Ultra-collinear longitudinal vector boson and Higgs boson splitting functions $d{\mathcal P}/dz\,dk_T^2$. The Higgs quartic coupling $\lambda_h$ is normalized such that $m_h^2 = \lambda_h v^2/2$. For the off-diagonal incoming $[h Z_{L}]$, the $\tilde k_T^4$ symbol stands for $(k_T^2 + \bar z m_B^2 + z m_C^2 - z\bar z m_h^2)\cdot(k_T^2 + \bar z m_B^2 + z m_C^2 - z\bar z m_Z^2)$. Other conventions are as in Tables~\ref{tab:broken_fermion_splittings}, \ref{tab:broken_vector_splittings} and in Appendix~\ref{sec:FeynmanRules}. } \label{tab:broken_scalar_splittings} \end{table} \egroup \section{Summary and Conclusions} \label{sec:conclusions} At very high energies, far above the electroweak scale, the full gauge and Yukawa structure of the Standard Model emerges, leading to an extremely rich set of parton showering phenomena. As this full SM parton shower evolves down in scale, it ultimately passes back through the electroweak scale.
There it encounters additional showering phenomena that arise uniquely from EWSB, and then finally transitions back into the $SU(3)_{\rm QCD} \times U(1)_{\rm EM}$ gauge showers familiar from the past several decades of theoretical and experimental work. With an eye towards experiments in the next decade and beyond, in this paper we have attempted to lay out the above picture of electroweak showering in a more comprehensive manner. We have systematically presented the electroweak collinear splitting functions in the SM in the $SU(2)_L \times U(1)_Y$ symmetric phase as well as in the broken phase after electroweak symmetry breaking. We discussed their general features in the collinear and soft-collinear regimes and identified the general class of EWSB contributions that are uniquely ``ultra-collinear,'' namely localized at $k_T \sim v$ with appreciable rates, but otherwise absent in conventional showering regimes. Effects of the ultra-collinear part of the shower include counter-intuitive ``violations'' of the Goldstone-boson Equivalence Theorem. We have also identified a convenient way to isolate EWSB effects within the shower, especially by disentangling contributions from gauge bosons and Goldstone bosons at high energies, using a novel gauge choice which we call Goldstone Equivalence Gauge (GEG). We further implemented the full EW shower in a numerical Monte Carlo, and showed a number of new results regarding its subtleties and practical impact in SM processes and beyond. Our main observations and results are as follows:\\ $\bullet$ The splitting functions of the unbroken $SU(2)_L\times U(1)_Y$ theory, presented in Sec.~\ref{sec:unbroken}, typically act as the leading contributions to showering processes at energies far above the EW scale. \\ $\bullet$ At splitting scales $k_T \sim gv$ and $yv$, the unbroken splitting functions become regulated and the new ultra-collinear splitting functions arising from EWSB appear, as presented in Sec.~\ref{sec:broken}.
The latter are the analogue of ``higher-twist'' terms in the formal power counting. While they do not contribute to the leading logarithmic evolution, numerically they can be larger than the unbroken contributions at low $k_T$, and in some cases can also account for a sizable fraction of the integrated splitting rates. \\ $\bullet$ Goldstone-boson equivalence ceases to hold in the ultra-collinear regime, allowing, e.g., for emission of relativistic longitudinal bosons from massless fermions. This effect is generalized here to all splitting functions in the SM, often involving nontrivial interplays of EWSB effects in gauge, Yukawa, and scalar couplings. \\ $\bullet$ We introduced the Goldstone Equivalence Gauge (as detailed in Appendix~\ref{sec:gauge}) that practically as well as conceptually disentangles the effects from the Goldstone bosons and the gauge fields. Utilization of this gauge choice makes the GET transparent {\it and} organizes its leading violations in a straightforward diagrammatic expansion (see Appendix~\ref{sec:FeynmanRules}). The concept of a ``nearly on-shell'' gauge/Goldstone boson as an intermediate state in the shower also becomes unambiguous. \\ $\bullet$ We implemented a practical EW showering scheme based on the calculated collinear and ultra-collinear splitting kernels in a Sudakov formalism. As discussed in Sec.~\ref{sec:split}, some additional novel features in the implementation include matching between showering and resonance decay, kinematic back-reaction corrections for multiple emissions of massive particles, and a density matrix treatment for the mixed-state evolution of neutral bosons ($\gamma/Z/h$). Our treatment of EW showering is fully self-contained, and far beyond the currently existing Monte Carlo simulation packages. \\ $\bullet$ We applied the EW showering formalism to a number of important physical processes at high energies.
They include: electroweak partons in PDFs as the basis for vector-boson-fusion; EW FSR as a leading source of multiple gauge boson production, with splitting probabilities at the level of tens of percent; EW showers initiated by top quarks, including Higgs bosons in the final state; and showers initiated by neutral bosons $\gamma/Z/h$, for which care must be taken to obtain meaningful results. The emergence of ``weak jets'' from high-energy new physics processes was illustrated using a heavy $W'$ as an example. In summary, we have derived the collinear splitting functions for the Standard Model electroweak sector, including the massive fermions, gauge bosons, and the Higgs boson, and implemented a collinear showering scheme in the Sudakov formalism for all SM particles at high energies. We have highlighted many novel features and the necessity to include them for physics explorations in and beyond the SM at high energies, including any physics at future colliders, as well as other processes in high energy regimes much above the electroweak scale. While our paper has explored collinear EW showering at a new level of detail compared to earlier works, it leaves open several interesting issues that we intend to address in future publications~\cite{EWshower}. One such issue is a more comprehensive picture of PDF evolution, folding together QCD and EW effects into a unified set of DGLAP equations that incorporate both quantum coherence effects and ultra-collinear effects, and allowing for a complete QCD+EW ISR showering scheme. Implications for the exclusive structure of multi-TeV VBF events would be particularly interesting to study. We also intend to address issues related to soft wide-angle EW exchanges, which lead to quantum entanglements between the isospins of the beams and the final state at NLL.
These entanglements represent a formally subleading aspect of the notorious Bloch-Nordsieck violation, which naively implies double- and single-logarithmic divergences in inclusive cross sections sourced by isospin-exclusive initial states. The collinear formalism developed here would allow for simple LL resummation of the soft-collinear, double-logarithmic contributions. (See, e.g., Section~\ref{sec:Wprime} for simple examples in the final-state shower.) Capturing and resumming the remaining single-log, quantum-coherent contributions, as well as motivating factorization of the initial state at NLL, requires a more advanced formalism that uses the language of quantum ensembles. \section{Couplings and Feynman Rules} \label{sec:FeynmanRules} \subsection{Lagrangian, couplings, and charge conventions} \label{sec:conventions} In Goldstone Equivalence Gauge, each physical longitudinal gauge boson state is interpolated by two fields: $V_n$ and $\phi_V$, where $V=W^\pm,Z$. Unlike in, e.g., $R_\xi$ gauges, the relative phases of $V_n$-mediated and $\phi_V$-mediated processes must be tracked explicitly. Here, we first present the Lagrangian of the SM in GEG to set the conventions. Before electroweak symmetry breaking (EWSB), the Lagrangian with the gauge fixing is written as \begin{eqnarray}\label{eq:lagrangian} {\cal L}_{\rm Gauge} &\,=\,& -\frac{1}{4} W^{a\mu\nu}W^a_{\mu\nu} -\frac{1}{4} (B_{\mu\nu})^2-\frac{1}{2\xi}(n\cdot W)^2-\frac{1}{2\xi} (n\cdot B)^2, \nonumber \\ {\cal L}_{\rm fermion} &\,=\,& i\bar{\psi}\slashed{D}\psi, \\ {\cal L}_{\rm Yukawa} &\,=\,& -y_d\bar{Q}_LHd_R-y_u\epsilon_{ij}\bar{Q}_L^iH^{*j}u_R-y_e\bar{L}_L He_R + {\rm h.c.}\ , \nonumber \\ {\cal L}_{\rm Higgs} &\,=\,& (D^{\mu}H)^{\dagger}D_{\mu}H -\frac{\lambda_h}{4}\left(H^{\dagger}H-\frac{v^2}{2}\right)^2, \nonumber \\ {\cal L}_{\rm Ghost} &\,=\,& \bar{c}^a n^{\mu}D_{\mu}^{ab}c^b. \nonumber \end{eqnarray} The flavor indices are suppressed since we do not consider the effects of flavor mixing.
The covariant derivative $D_{\mu}$ and $SU(2)_L$ field strength component $W_{\mu\nu}^a$ are defined as \begin{eqnarray} \label{eq:convent} D_{\mu} = \partial_{\mu}-ig_2W_{\mu}^a T^a-ig_1YB_{\mu}, \qquad W_{\mu\nu}^a = \partial_{\mu}W_{\nu}^a-\partial_{\nu}W_{\mu}^a+g_2f^{abc}W_{\mu}^bW_{\nu}^c . \end{eqnarray} The gauge-fixing vector $n^\mu$ of Eq.~(\ref{eq:nmu2}) is here treated as a differential operator of schematic form $(1,-\partial_t\vec\nabla/\sqrt{\partial_t^2 \vec\nabla\cdot\vec\nabla})$; this becomes a well-defined operation in momentum space. We take the formal $\xi\to 0$ limit in what follows. After EWSB $\langle H^0 \rangle = v/\sqrt{2}$, and particles acquire masses. The neutral gauge fields $W^{\mu}_3$ and $B^{\mu}$ mix to form the mass eigenstates $Z^{\mu}$ and $A^{\mu}$. The gauge-boson and fermion masses are \begin{eqnarray} m_W = \frac12 g_2 v, \quad m_Z = \frac12 \sqrt{g_1^2 + g_2^2}\, v,\quad m_\gamma = 0,\quad m_f = \frac{1}{\sqrt 2} y_f v, \end{eqnarray} with $g_1 \approx 0.36$ and $g_2\approx 0.65$ at the weak scale, $y_t \approx 1$, and $v \approx 246$ GeV. The Higgs field self-coupling is normalized such that \begin{eqnarray} m_h^2 = \frac12\lambda_h v^2, \end{eqnarray} so that $\lambda_h \simeq 0.52$ for $m_h \simeq 125$~GeV. As for the gauge-fermion interactions in a general basis, we denote them using $g_V$ as the gauge coupling constant for a vector boson $V = B^0, W^0, W^\pm, \gamma, Z$, \begin{eqnarray} ig_V\gamma^{\mu}\sum_{\tau=L,R} g^V_{\tau} P_{\tau} \,, \end{eqnarray} where the chirality projection operators are $P_{R/L}=\frac{1}{2}(1\pm \gamma^5)$. They are all built up from the underlying $U(1)_Y$ and $SU(2)_L$ gauge couplings. Specifically, \begin{eqnarray} g_{B^0} = g_1, \quad g_{W^0} = g_{W^\pm} = g_2, \quad g_{\gamma} = e = \frac{g_1 g_2}{\sqrt{g_1^2 + g_2^2}}, \quad g_Z = \sqrt{g_1^2 + g_2^2} .
\end{eqnarray} As usual, the weak mixing angle is defined as \begin{eqnarray} c_W \equiv \cos\theta_W = {g_2 \over g_Z} \quad {\rm or} \quad s_W \equiv \sin\theta_W = {g_1 \over g_Z} . \end{eqnarray} We denote the gauge charge $Q$ of a particle $p$ (chiral fermion or scalar) under a given gauge boson $V$ by $Q^V_p$.\footnote{For $V = W^\pm$, two different components of a left-handed doublet participate, but they can be assigned a common charge of $1/\sqrt{2}$, with either flavor plugged in.} We list the full set of charges in Table~\ref{tab:charges}. \begin{table}[] \centering \begin{tabular}{r|rrrrr} & $Q^{B^0}_p = Y_p$ & $Q^{W^0}_p = T^3_p$ & \ $Q^{W^\pm}_p$ \ & $Q^{\gamma}_p = Q^{\rm EM}_p$ & $Q^Z_p = T^3_p-Q^{\rm EM}_p s^2_W$ \vspace{1mm} \\ \hline $p = \qquad\ u_L$ & 1/6 & 1/2 & $1/\sqrt{2}$ & 2/3 & $1/2 - (2/3)s_W^2$ \\ $u_R$ & 2/3 & 0 & 0 & 2/3 & $- (2/3)s_W^2$ \\ $d_L$ & 1/6 & $-1/2$ & $1/\sqrt{2}$ & $-1/3$ & $-1/2 + (1/3)s_W^2$ \\ $d_R$ & $-1/3$ & 0 & 0 & $-1/3$ & $(1/3)s_W^2$ \\ $\nu_L$ & $-1/2$ & 1/2 & $1/\sqrt{2}$ & 0 & $1/2$ \\ $e_L$ & $-1/2$ & $-1/2$ & $1/\sqrt{2}$ & $-1$ & $-1/2 + s_W^2$ \\ $e_R$ & $-1$ & 0 & 0 & $-1$ & $s_W^2$ \\ $\phi^+$ & 1/2 & 1/2 & $1/\sqrt{2}$ & 1 & $1/2 - s_W^2$ \\ $H^0 = \frac{h+i\phi^0}{\sqrt{2}}$ & 1/2 & $-1/2$ & $1/\sqrt{2}$ & 0 & $-1/2$ \end{tabular} \caption{Gauge charges of chiral fermions and scalars in the Standard Model. For the fermions, the first generation is shown; the charges for the second and third generations follow the same pattern.} \label{tab:charges} \end{table} We now turn to the quadratic Lagrangian terms involving gauge fields and Goldstone fields. The quadratic terms of the $Z$ and $\phi_Z$ Lagrangian are \begin{eqnarray} \mathcal{L}_{Z^2}&=&-\frac{1}{2}\partial^{\mu}Z^{\nu}\partial_{\mu}Z_{\nu}+\frac{1}{2}\partial^{\mu}Z_{\mu}\partial^{\nu}Z_{\nu}+\frac{1}{2}m_Z^2Z_{\mu}Z^{\mu} -\frac{1}{2\xi}(n^{\mu}Z_{\mu})^2 .
\\ \mathcal{L}_{\phi_ZZ} &=& -m_ZZ^{\mu}\partial_{\mu}\phi_Z,\qquad \mathcal{L}_{\phi_Z^2} = \frac{1}{2}(\partial^{\mu}\phi_Z)^2 . \end{eqnarray} Note that the minus sign in $ \mathcal{L}_{\phi_ZZ}$ follows from the sign convention of the covariant derivative, Eq.~(\ref{eq:convent}), as well as our expansion of the Higgs doublet in Eq.~(\ref{eq:HiggsExpansion}), namely $H^0 \to (v+h - i\phi^0)/\sqrt{2}$. This in turn determines the phase factor of the polarization vector $Z_n$. (Though of course our convention choices ultimately have no effect on physical rates.) For $W^{\pm}_{\mu}/\phi^{\pm}$, the unmixed kinetic and mass terms are analogous, and the quadratic mixing term is given by \begin{eqnarray} \mathcal{L}_{W\phi}=-im_W W^+_{\mu}\partial^{\mu}\phi^- + {\rm h.c.} \end{eqnarray} \subsection{External polarizations and propagators} \label{sec:polarizations} We decompose all fermions and gauge bosons in the helicity basis within the hard-process CM frame, including off-shell particles. We emphasize that in computing leading-order $1\to2$ splitting functions, {\it all} particle polarization states should be set on-shell, since the off-shell corrections are strictly non-collinear. An on-shell polarization can be associated with an off-shell momentum, for example, by adjusting the three-momentum at fixed energy. The fermion external spinors are as usual, though to facilitate extraction of $O(v)$ effects we Taylor expand in $m_f/E = (y_f/\sqrt{2})(v/E)$.
Explicitly, for fermions moving approximately along the $z$-axis, possibly offset toward the $x$-axis by a small angle $\theta$, \begin{equation} \label{eq:ext_state} u_{s=L} \,\simeq\, \sqrt{2E} \left(\begin{array}{r} \left(\begin{array}{c} -\theta/2 \\ 1 \end{array}\right) \vspace{1mm} \\ \frac{m_f}{2E} \left(\begin{array}{c} -\theta/2 \\ 1 \end{array}\right) \end{array}\right)\,, \qquad u_{s=R} \,\simeq\, \sqrt{2E} \left(\begin{array}{r} \frac{m_f}{2E} \left(\begin{array}{c} 1 \\ \theta/2 \end{array}\right) \vspace{1mm} \\ \left(\begin{array}{c} 1 \\ \theta/2 \end{array}\right) \end{array}\right) \,. \end{equation} Propagators are also as usual, but given our approximate decomposition into on-shell spin states, they fall into a factorizable form. For a generic off-shell $k^\mu$, we can build an effective on-shell $\tilde k^\mu$ by keeping $k^0 \equiv E$ fixed but changing \begin{eqnarray} \vec k = \hat k \sqrt{E^2-k^2} \;\to\; \hat k \sqrt{E^2-m_f^2} = \vec k + {\cal O}((k^2-m_f^2)/E). \end{eqnarray} We may then rewrite the propagator as \begin{eqnarray} \frac{\slashed{k}+m_f}{k^2-m_f^2} &\,=\,& \frac{(\slashed{\tilde k}+m_f) \,+\, \mathcal{O}((k^2-m_f^2)/E)}{k^2-m_f^2} \nonumber \\ &\,=\,& \frac{\sum_{s=L,R}u_s(\tilde k)\,\bar u_s(\tilde k)}{k^2-m_f^2} \;+\; {\rm non \text{-} collinear\ terms}, \end{eqnarray} exploiting the fact that the leading correction away from a factorized numerator is set up to cancel the propagator's denominator. We ignore possible coherence effects between different spin channels. Transverse gauge bosons are also assigned their standard polarization vectors \begin{equation} \epsilon_{\pm}^\mu \,\simeq\, \frac{1}{\sqrt{2}}\big(0; 1,\pm i,-\theta \big) \,, \end{equation} with the complex-conjugate $\epsilon^{\mu*}_{\pm}$ used for outgoing bosons. However, the longitudinal gauge/Goldstone sector is treated somewhat unconventionally. Longitudinal gauge bosons can be created by Goldstone/pseudo-scalar boson fields. 
We set our phase conventions so that these creation and annihilation amplitudes are unity, maintaining continuity with the unbroken theory. However, longitudinal bosons may also still be created by gauge fields, in association with the ``remainder'' field component $V_n$ expanded out in Eq.~(\ref{eq:Wexpansion}). Synchronizing these component fields such that they also create/annihilate external bosons with unit amplitude, their associated polarization vectors then carry nontrivial phases: \begin{eqnarray} \label{eq:phase} && {\rm incoming}\ Z: \ i\epsilon_n^{\mu}; \ \ {\rm outgoing}\ Z: \ (i\epsilon_n^{\mu})^* = -i\epsilon_n^{\mu}; \ \ {\rm incoming/outgoing}\ W^\pm: \ \pm\epsilon_n^{\mu},\ \qquad~~~~\\ && {\rm with}\quad \epsilon_n^\mu = - \frac{m_V}{n\cdot k}n^{\mu} \simeq\, \frac{m_V}{2E}\big(-1;\theta,0,1\big)\,. \end{eqnarray} The light-like gauge-fixing vector $n^\mu$ is defined in Eq.~(\ref{eq:nmu2}). The corresponding propagators are given in Eq.~(\ref{app:prop}). Photons are subject to the same gauge conditions, but this has little practical bearing on their showering behavior. As usual, they have purely transverse external polarization states, and only their transverse modes contribute to collinear-enhanced physics. As discussed in Section~\ref{sec:interference}, the transverse photon and $Z$ propagators should be treated coherently within a parton shower. The $h$ and $\phi^0/Z_n$ propagators should also be treated coherently. We will see an example of this, including the $Z_n$ component, in Appendix~\ref{sec:examples}. \subsection{Feynman rules for three-point couplings} \label{sec:three-points} Feynman rules in GEG are largely similar to those of standard gauges. We list below many of the relevant three-point vertex rules. For brevity, we omit four-point interactions, which do not play a role in $1\to2$ splittings at this order. Wherever explicitly referenced, we reckon all four-momenta as flowing into the vertex.
We use small arrows next to a particle line to indicate the flow of momentum and, where relevant, of electric charge. When no arrows are shown for charged particles, charge conservation is implied at each vertex for the particles involved. Gauge field polarization vectors $\epsilon^\mu$ are kept explicit at the vertices here, and can take on three possible values associated with the propagating gauge degrees of freedom: the two spacelike transverse polarizations $\epsilon_\pm^\mu$ (or $\epsilon_{xy}^\mu$), and the lightlike polarization $\propto \epsilon_n^\mu$.\footnote{An off-shell photon does not have a physical pole associated with its $\epsilon_n$ polarization, and the phase of that polarization can be set arbitrarily since there is no associated phase with the creation/annihilation of asymptotic states. A simple default would be to follow the same convention as for the $Z$ boson.} The on-shell values of these polarizations and a convenient phase convention have been provided at the end of the preceding subsection, in Eq.~(\ref{eq:phase}). The extension to off-shell momenta follows immediately. However, some care should be taken with respect to how these polarizations are oriented relative to momentum flows, i.e., whether a boson is reckoned as ``incoming'' or ``outgoing.'' In particular, if the four-momentum $k$ is measured outgoing from a vertex, one should use $\epsilon(-k)$. (In many cases this is equivalent to $\epsilon(k)^*$, but an exception occurs for $W^\pm_n$.) Including the polarization vectors in the vertices as such, the vector boson propagators will not carry Lorentz indices, as given in Eq.~(\ref{app:prop}).
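The remainder polarization $\epsilon_n^\mu = -(m_V/n\cdot k)\,n^\mu$ of Eq.~(\ref{eq:phase}) can be understood as the difference between the usual longitudinal polarization and $k^\mu/m_V$; taking $n^\mu = (1;-\hat k)$, as suggested by the schematic operator form above, the relation is exact for an on-shell boson. A minimal numerical sketch of this check (the sample values $m_V = 80.4$~GeV and $E = 1$~TeV are illustrative assumptions, not taken from the text):

```python
# Sketch (assumed sample values): check that the remainder polarization
# eps_n = -(m_V / n.k) n, with light-like n = (1; -khat), equals
# eps_L - k/m_V for an on-shell massive vector moving along the z-axis.
# Metric signature (+,-,-,-).
import math

def mdot(a, b):
    """Minkowski dot product."""
    return a[0]*b[0] - sum(x*y for x, y in zip(a[1:], b[1:]))

m_V, E = 80.4, 1000.0                 # GeV; sample values (~m_W, 1 TeV)
p = math.sqrt(E**2 - m_V**2)          # |k| for an on-shell boson

k     = (E, 0.0, 0.0, p)
eps_L = (p/m_V, 0.0, 0.0, E/m_V)      # standard longitudinal polarization
n     = (1.0, 0.0, 0.0, -1.0)         # light-like gauge vector, opposite to khat
eps_n = tuple(-(m_V/mdot(n, k))*c for c in n)

# exact identity: eps_n = eps_L - k/m_V, component by component
assert max(abs(eps_n[i] - (eps_L[i] - k[i]/m_V)) for i in range(4)) < 1e-10

# leading high-energy form (m_V/2E)(-1; 0, 0, 1), good up to O(m_V^3/E^3)
approx = tuple((m_V/(2*E))*c for c in (-1.0, 0.0, 0.0, 1.0))
print(max(abs(eps_n[i] - approx[i]) for i in range(4)))   # small, below 1e-3 here
```

This also makes explicit why amplitudes built from $\epsilon_n$ start at ${\cal O}(m_V/E)$, as exploited in the example calculations of Appendix~\ref{sec:examples}.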
\begin{eqnarray} \vcenter{\begin{picture}(100,80)(0,0) \SetColor{Black} \SetWidth{2} \Text(0, 5)[]{$f$} \ArrowLine( 10, 5)(50,25) \Photon(50,25)(50,60){3}{3} \ArrowLine( 50,25)(85,5) \Text(90,5)[l]{$f$} \Text(57,60)[l]{$\gamma$} \end{picture}} \hspace{-4.0in} & =\ \ & i e Q^{EM}_f \slashed{\epsilon} \nonumber \\ \vcenter{\begin{picture}(100,80)(0,0) \SetColor{Black} \SetWidth{2} \Text(0,5)[]{$f$} \ArrowLine( 10,5)(50,25) \Photon(50,25)(50,60){3}{3} \ArrowLine( 50,25)(85,5) \Text(90,5)[l]{$f'$} \Text(57,60)[l]{$W^{\pm}$} \end{picture}} \hspace{-4.0in} & =\ \ & i \frac{g_2}{\sqrt{2}} \slashed{\epsilon} P_L \nonumber \\ \vcenter{\begin{picture}(100,80)(0,0) \SetColor{Black} \SetWidth{2} \Text(0,5)[]{$f$} \ArrowLine( 10,5)(50,25) \Photon(50,25)(50,60){3}{3} \ArrowLine( 50,25)(85,5) \Text(90,5)[l]{$f$} \Text(57,60)[l]{$Z$} \end{picture}} \hspace{-4.0in} & =\ \ & i g_Z \slashed{\epsilon} \left((T^3_f - Q^{\rm EM}_f s_W^2)P_L - Q^{\rm EM}_f s_W^2 P_R\right) \nonumber \\ \vcenter{\begin{picture}(100,80)(0,0) \SetColor{Black} \SetWidth{2} \Text(0,5)[]{$u$} \ArrowLine( 10,5)(50,25) \DashLine(50,25)(50,60){5} \ArrowLine( 50,25)(85,5) \Text(57,60)[l]{$\phi^\pm$} \Text(90, 5)[l]{$d$} \end{picture}} \hspace{-4.0in} & =\ \ & i \left(-y_d P_L + y_u P_R\right) \nonumber \\ \vcenter{\begin{picture}(100,80)(0,0) \SetColor{Black} \SetWidth{2} \Text(0,5)[]{$d$} \ArrowLine( 10,5)(50,25) \DashLine(50,60)(50,25){5} \ArrowLine( 50,25)(85,5) \Text(57,60)[l]{$\phi^\pm$} \Text(90, 5)[l]{$u$} \end{picture}} \hspace{-4.0in} & =\ \ & i \left(y_u P_L - y_d P_R\right) \nonumber \\ \vcenter{\begin{picture}(100,80)(0,0) \SetColor{Black} \SetWidth{2} \Text(0,5)[]{$f$} \ArrowLine( 10,5)(50,25) \DashLine(50,25)(50,60){5} \ArrowLine( 50,25)(85,5) \Text(57,60)[l]{$\phi^0$} \Text(90, 5)[l]{$f$} \end{picture}} \hspace{-4.0in} & =\ \ & (\delta_{fu}-\delta_{fd})\frac{y_f}{\sqrt{2}} \gamma_5 \nonumber \\ \vcenter{\begin{picture}(100,80)(0,0) \SetColor{Black} \SetWidth{2} \Text(0,5)[]{$f$} \ArrowLine( 
10,5)(50,25) \DashLine(50,25)(50,60){5} \ArrowLine( 50,25)(85,5) \Text(57,60)[l]{$h$} \Text(90, 5)[l]{$f$} \end{picture}} \hspace{-4.0in} & =\ \ & -i \frac{y_f}{\sqrt{2}} \nonumber \\ \vcenter{\begin{picture}(100,80)(0,0) \SetColor{Black} \SetWidth{2} \Photon( 10,5)(50,25){3}{3} \Photon(50,25)(50,60){3}{3} \Photon(50,25)(85, 5){3}{3} \Text(0,5)[]{$Z$} \Text(30,5)[]{\rotatebox{30}{$\longrightarrow$}} \Text(30, -3)[]{$k_1$} \Text(57,60)[l]{$W^-$} \Text(40, 45)[]{\rotatebox{90}{$\longleftarrow$}} \Text(31, 47)[]{$k_2$} \Text(90, 5)[l]{$W^+$} \Text(75, 20)[]{\rotatebox{-30}{$\longleftarrow$}} \Text(80, 27)[]{$k_3$} \end{picture}} \hspace{-4.0in} & =\ \ & i g_2 c_W \,\epsilon_{ijk}(\epsilon_i\cdot \epsilon_j )\big(\epsilon_k\cdot(k_i-k_j)\big) \quad [\epsilon_{123} \equiv 1] \nonumber \\ \vcenter{\begin{picture}(100,80)(0,0) \SetColor{Black} \SetWidth{2} \Photon( 10,5)(50,25){3}{3} \Photon(50,25)(50,60){3}{3} \Photon(50,25)(85, 5){3}{3} \Text(0,5)[]{$\gamma$} \Text(30,5)[]{\rotatebox{30}{$\longrightarrow$}} \Text(30, -3)[]{$k_1$} \Text(57,60)[l]{$W^-$} \Text(40, 45)[]{\rotatebox{90}{$\longleftarrow$}} \Text(31, 47)[]{$k_2$} \Text(90, 5)[l]{$W^+$} \Text(75, 20)[]{\rotatebox{-30}{$\longleftarrow$}} \Text(80, 27)[]{$k_3$} \end{picture}} \hspace{-4.0in} & =\ \ & i e \,\epsilon_{ijk}(\epsilon_i\cdot \epsilon_j )\big(\epsilon_k\cdot(k_i-k_j)\big) \nonumber \\ \vcenter{\begin{picture}(100,80)(0,0) \SetColor{Black} \SetWidth{2} \DashLine( 10,5)(50,25){6} \Photon(50,25)(50,60){3}{3} \DashLine(50,25)(85, 5){5} \Text(0,5)[]{$h$} \Text(30,5)[]{\rotatebox{30}{$\longrightarrow$}} \Text(30, -3)[]{$q$} \Text(57,60)[l]{$W^{\pm}$} \Text(90, 5)[l]{$\phi^{\mp}$} \Text(70, 5)[]{\rotatebox{-30}{$\longleftarrow$}} \Text(70, -3)[]{$p$} \end{picture}} \hspace{-4.0in} & =\ \ & \pm i \frac{g_2}{2} (q-p)\cdot \epsilon \nonumber \\ \vcenter{\begin{picture}(100,80)(0,0) \SetColor{Black} \SetWidth{2} \DashLine( 10,5)(50,25){6} \Photon(50,25)(50,60){3}{3} \DashLine(50,25)(85, 5){5} \Text(0,5)[]{$h$}
\Text(30,5)[]{\rotatebox{30}{$\longrightarrow$}} \Text(30, -3)[]{$q$} \Text(57,60)[l]{$Z$} \Text(90, 5)[l]{$\phi^{0}$} \Text(70, 5)[]{\rotatebox{-30}{$\longleftarrow$}} \Text(70, -3)[]{$p$} \end{picture}} \hspace{-4.0in} & =\ \ & \frac{g_Z}{2}(q-p)\cdot \epsilon \nonumber \\ \vcenter{\begin{picture}(100,80)(0,0) \SetColor{Black} \SetWidth{2} \DashLine( 10,5)(50,25){6} \Photon(50,25)(50,60){3}{3} \DashLine(50,25)(85, 5){5} \Text(0,5)[]{$\phi^0$} \Text(30,5)[]{\rotatebox{30}{$\longrightarrow$}} \Text(30, -3)[]{$q$} \Text(57,60)[l]{$W^{\pm}$} \Text(90, 5)[l]{$\phi^{\mp}$} \Text(70, 5)[]{\rotatebox{-30}{$\longleftarrow$}} \Text(70, -3)[]{$p$} \end{picture}} \hspace{-4.0in} & =\ \ & \frac{g_2}{2}(q-p)\cdot \epsilon \nonumber \\ \vcenter{\begin{picture}(100,80)(0,0) \SetColor{Black} \SetWidth{2} \DashLine( 10,5)(50,25){6} \Photon(50,25)(50,60){3}{3} \DashLine(50,25)(85, 5){5} \Text(0,5)[]{$\phi^+$} \Text(30,5)[]{\rotatebox{30}{$\longrightarrow$}} \Text(30, -3)[]{$q$} \Text(57,60)[l]{$Z$} \Text(90, 5)[l]{$\phi^-$} \Text(70, 5)[]{\rotatebox{-30}{$\longleftarrow$}} \Text(70, -3)[]{$p$} \end{picture}} \hspace{-4.0in} & =\ \ & ig_Z\frac{c_{2W}}{2}(q-p)\cdot \epsilon\nonumber \\ \vcenter{\begin{picture}(100,80)(0,0) \SetColor{Black} \SetWidth{2} \DashLine( 10,5)(50,25){6} \Photon(50,25)(50,60){3}{3} \DashLine(50,25)(85, 5){5} \Text(0,5)[]{$\phi^+$} \Text(30,5)[]{\rotatebox{30}{$\longrightarrow$}} \Text(30, -3)[]{$q$} \Text(57,60)[l]{$\gamma$} \Text(90, 5)[l]{$\phi^-$} \Text(70, 5)[]{\rotatebox{-30}{$\longleftarrow$}} \Text(70, -3)[]{$p$} \end{picture}} \hspace{-4.0in} & =\ \ & ie(q-p)\cdot \epsilon \nonumber \\ \vcenter{\begin{picture}(100,80)(0,0) \SetColor{Black} \SetWidth{2} \DashLine( 10,5)(46,24){6} \Text(50,25)[]{$\boldsymbol \otimes$} \Photon(50,29)(50,60){3}{3} \Photon(54,24)(85, 5){3}{3} \Text(0,5)[]{$h$} \Text(57,60)[l]{$W^-$} \Text(90, 5)[l]{$W^+$} \end{picture}} \hspace{-4.0in} & =\ \ & i g_2 m_W \, \epsilon_{W^+}\!\!\cdot\epsilon_{W^-} \nonumber \\ 
\vcenter{\begin{picture}(100,80)(0,0) \SetColor{Black} \SetWidth{2} \DashLine( 10,5)(46,24){6} \Text(50,25)[]{$\boldsymbol \otimes$} \Photon(50,29)(50,60){3}{3} \Photon(54,24)(85, 5){3}{3} \Text(0,5)[]{$h$} \Text(57,60)[l]{$Z$} \Text(90, 5)[l]{$Z$} \end{picture}} \hspace{-4.0in} & =\ \ & i g_Z m_Z \, \epsilon_{Z_1}\!\!\cdot\epsilon_{Z_2} \nonumber \\ \vcenter{\begin{picture}(100,80)(0,0) \SetColor{Black} \SetWidth{2} \DashLine( 10,5)(46,24){6} \Text(50,25)[]{$\boldsymbol \otimes$} \DashLine(50,29)(50,60){6} \DashLine(85, 5)(54,24){6} \Text(0,5)[]{$h$} \Text(57,60)[l]{$\phi^-$} \Text(90, 5)[l]{$\phi^+$} \end{picture}} \hspace{-4.0in} & =\ \ & -i \frac{\lambda_h v}{2} \nonumber \\ \vcenter{\begin{picture}(100,80)(0,0) \SetColor{Black} \SetWidth{2} \DashLine( 10,5)(46,24){6} \Text(50,25)[]{$\boldsymbol \otimes$} \DashLine(50,29)(50,60){6} \DashLine(85, 5)(54,24){6} \Text(0,5)[]{$h$} \Text(57,60)[l]{$\phi^0$} \Text(90, 5)[l]{$\phi^0$} \end{picture}} \hspace{-4.0in} & =\ \ & -i \frac{\lambda_h v}{2} \nonumber \\ \vcenter{\begin{picture}(100,80)(0,0) \SetColor{Black} \SetWidth{2} \DashLine( 10,5)(46,24){6} \Text(50,25)[]{$\boldsymbol \otimes$} \DashLine(50,29)(50,60){6} \DashLine(85, 5)(54,24){6} \Text(0,5)[]{$h$} \Text(57,60)[l]{$h$} \Text(90, 5)[l]{$h$} \end{picture}} \hspace{-4.0in} & =\ \ & -i \frac{3\lambda_h v}{2} \nonumber \\ \vcenter{\begin{picture}(100,80)(0,0) \SetColor{Black} \SetWidth{2} \Photon( 10,5)(46,24){3}{3} \Text(50,25)[]{$\boldsymbol \otimes$} \DashLine(50,29)(50,60){6} \Photon(54,24)(85, 5){3}{3} \Text(0,5)[]{$Z$} \Text(57,60)[l]{$\phi^{\pm}$} \Text(90, 5)[l]{$W^{\mp}$} \end{picture}} \hspace{-4.0in} & =\ \ & -i g_2 s^2_W m_Z\ \epsilon_Z\cdot \epsilon_W \nonumber \\ \vcenter{\begin{picture}(100,80)(0,0) \SetColor{Black} \SetWidth{2} \Photon( 10,5)(46,24){3}{3} \Text(50,25)[]{$\boldsymbol \otimes$} \DashLine(50,29)(50,60){6} \Photon(54,24)(85, 5){3}{3} \Text(0,5)[]{$\gamma$} \Text(57,60)[l]{$\phi^{\pm}$} \Text(90, 5)[l]{$W^{\mp}$} \end{picture}} 
\hspace{-4.0in} & =\ \ & i e m_W\ \epsilon_\gamma\cdot \epsilon_W \nonumber \end{eqnarray} \vskip 0.1cm The symbol $\otimes$ denotes the mass (or $v$) insertion from EWSB. \vskip 0.5cm \subsection{Example calculations with GEG} \label{sec:examples} Calculations of high-energy processes involving longitudinal vector bosons can be complicated by gauge artifacts, often exhibiting artificial ``bad high-energy behavior'' with factors of $E/v$. Here we show some explicit examples to demonstrate how to calculate ultra-collinear splitting amplitudes in GEG, where these amplitudes are automatically free of such artifacts and are simply proportional to the VEV. We focus in detail on the specific massive fermion splitting $t_s \rightarrow W^+_{L} b_s$, where the fermion helicity $s=L,R$ is preserved. This calculation is also trivially adapted to cases where one or both fermions are massless flavors, such as the usual $u_L \to W^+_L d_L$, and is straightforward to extend to $Z_L$ boson emission with appropriate replacements of couplings and remainder polarization phases. We also outline below the diagrammatic construction of a few other processes for illustration. We first reemphasize that the longitudinal gauge boson $W_{L}^+$ in GEG should be interpolated by both the Goldstone field $\phi^+$ and the remainder gauge field $W_n^+$, leading us to break up the splitting amplitude as \begin{eqnarray} i{\mathcal M}(t_s\rightarrow W_{L}^+ b_s) \,=\, i{\mathcal M}(t_s\rightarrow \phi^+ b_s) \,+\, i{\mathcal M}(t_s\rightarrow W^+_n b_s) .
\end{eqnarray} Applying the three-point Feynman rules in Sec.~\ref{sec:three-points}, and taking the exact collinear limit ($\theta, k_T \to 0$) to extract the leading behavior, we have for the LH process\footnote{Note that for the charge-conjugate process, producing $W_n^-$, we would instead use the remainder polarization vector times $(-1)$: $-\epsilon_n$.} \begin{eqnarray} i{\mathcal M}(t_L\rightarrow \phi^+ b_L) &\,=\, & i\,\bar{u}(b_L)(y_t P_R - y_b P_L)u(t_L) \nonumber \\ &\,\simeq\,& i\,y_t\sqrt{2E_b}\frac{m_t}{\sqrt{2E_t}} \,-\, i\,y_b\frac{m_b}{\sqrt{2E_b}}\sqrt{2E_t} \nonumber \\ &\,\simeq\,& i\,v \left(\frac{y_t^2}{\sqrt{2}}\sqrt{\bar z} \,-\, \frac{y_b^2}{\sqrt{2}} \frac{1}{\sqrt{\bar z}}\right), \nonumber \\ i{\mathcal M}(t_L\rightarrow W_n^+ b_L) &\,=\, & i\,\frac{g_2}{\sqrt{2}} \bar{u}(b_L)\big(\slashed{\epsilon}_n(W) P_L \big) u(t_L) \nonumber \\ &\,\simeq\,& i\,\frac{g_2}{\sqrt{2}}\cdot 2\sqrt{2E_b} \left(-\frac{m_W}{2E_W}\right) \sqrt{2E_t} \nonumber \\ &\,=\, & -i\,v \frac{g_2^2}{\sqrt{2}} \frac{\sqrt{\bar z}}{z} \,. \end{eqnarray} The full LH splitting amplitude is then \begin{eqnarray} i{\mathcal M}(t_L\rightarrow W_{L}^+ b_L ) \,=\, i\,v \frac{1}{z\sqrt{\bar z}} \left(\frac{1}{\sqrt{2}}(y_t^2\bar z-y_b^2)z- \frac{1}{\sqrt{2}}g_2^2\bar z\right). \end{eqnarray} Plugging this into Eq.~(\ref{eq:split}), we have the splitting function \begin{eqnarray} \frac{d{\mathcal P}_{t_L\rightarrow W^+_L b_L}}{dz\,dk_T^2} = \frac{1}{16\pi^2} \frac{v^2}{\tilde{k}^4_T} \left(\frac{1}{z}\right) \left(\frac{1}{\sqrt{2}}(y_t^2\bar z-y_b^2)z- \frac{1}{\sqrt{2}}g_2^2\bar z\right)^2.
\label{eq:tL} \end{eqnarray} As for the RH transition $t_R \rightarrow W^+_L b_R$, there is no analogous amplitude for $W_n$ at $\mathcal{O}(v)$ due to the absence of RH charged currents, so the amplitude is dominated by the Yukawa contribution, \begin{eqnarray} i{\mathcal M}(t_R\rightarrow W^+_L b_R) & \,\simeq\, & i{\mathcal M}(t_R\rightarrow \phi^+ b_R) \nonumber \\ & \,=\, & i\,\bar{u}(b_R)(y_t P_R - y_b P_L)u(t_R) \nonumber \\ & \,\simeq\, & i\,y_t\frac{m_b}{\sqrt{2E_b}}\sqrt{2E_t} \,-\, i\,y_b\sqrt{2E_b}\frac{m_t}{\sqrt{2E_t}} \nonumber \\ & \,=\, & i\,v \frac{y_t y_b}{\sqrt{2}}\left({1\over \sqrt{\bar z}}-\sqrt{\bar z}\right) \nonumber \\ & \,=\, & i\,v \frac{y_t y_b}{\sqrt{2}} \frac{z}{\sqrt{\bar z}} , \end{eqnarray} and the splitting function is \begin{equation} \frac{d{\mathcal P}_{t_R\rightarrow W^+_L b_R}}{dz\,dk_T^2} \,=\, \frac{1}{16\pi^2} \frac{v^2}{\tilde{k}^4_T} z \left(\frac{1}{\sqrt{2}}y_t y_b z \right)^2 \,=\, \frac{1}{16\pi^2} \frac{v^2}{\tilde{k}^4_T} \left(\frac12 y_t^2 y_b^2 z^3 \right). \label{eq:tR} \end{equation} Given the small value of $y_b$, this process is of course highly suppressed in practice. The results in Eqs.~(\ref{eq:tL}) and~(\ref{eq:tR}) lead to some of the formulas in Table \ref{tab:broken_fermion_splittings}. When combined with conventional collinear top quark splittings, the ultra-collinear splittings become important for modeling the approach to the top resonance peak. This includes as well the process $t_R \to W_T^+ b_L$. We show these individual shower contributions and their continuity with a simple Breit-Wigner model of top decay (weighted by $\Gamma_t(M(Wb))/\Gamma_t(m_t)$) in Fig.~\ref{app:top}. Here we have taken 10~TeV top quarks of either helicity, zooming in near the top quark pole, and set a decay/shower matching threshold of 187~GeV. All polarizations are measured in the ``lab frame'' (as opposed to the top's rest frame). QCD and other electroweak showering effects are not incorporated.
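As a quick numerical cross-check of the recombination above (our own sketch, not part of the original calculation), the following verifies that the $\phi^+$ and $W_n^+$ contributions sum to the quoted combined $t_L \to W_L^+ b_L$ amplitude. The coupling values are illustrative placeholders ($y_t \approx 1$, $y_b \approx 0.024$, $g_2 \approx 0.65$, $v = 246$~GeV).

```python
import math

def amp_phi(z, yt, yb, v):
    """Goldstone-exchange piece of i*M(t_L -> phi+ b_L), with the overall i stripped."""
    zb = 1.0 - z
    return v * (yt**2 / math.sqrt(2) * math.sqrt(zb)
                - yb**2 / math.sqrt(2) / math.sqrt(zb))

def amp_Wn(z, g2, v):
    """Remainder-gauge piece of i*M(t_L -> W_n+ b_L), with the overall i stripped."""
    zb = 1.0 - z
    return -v * g2**2 / math.sqrt(2) * math.sqrt(zb) / z

def amp_total(z, yt, yb, g2, v):
    """Combined formula quoted in the text."""
    zb = 1.0 - z
    return v / (z * math.sqrt(zb)) * (
        (yt**2 * zb - yb**2) * z / math.sqrt(2) - g2**2 * zb / math.sqrt(2))

# spot-check the algebraic recombination at a few values of z
for z in (0.2, 0.5, 0.8):
    lhs = amp_phi(z, yt=1.0, yb=0.024, v=246.0) + amp_Wn(z, g2=0.65, v=246.0)
    rhs = amp_total(z, yt=1.0, yb=0.024, g2=0.65, v=246.0)
    assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(rhs))
```

The two pieces agree with the combined expression identically in $z$, confirming that the individual $E/v$-free contributions recombine as quoted.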
\begin{figure}[t] \begin{center} \begin{subfigure}[t]{0.495\textwidth} \includegraphics[width=200pt]{figs/top_mergedShowerDecay_zoomed_tLpolarizedW.eps} \vspace{-0.5cm}\caption{}\end{subfigure} \begin{subfigure}[t]{0.495\textwidth} \includegraphics[width=200pt]{figs/top_mergedShowerDecay_zoomed_tRpolarizedW.eps} \vspace{-0.5cm}\caption{}\end{subfigure} \vspace{-0.6cm} \end{center} \caption[]{ Invariant mass distributions for EW decay/splitting of a 10~TeV polarized top quark for (a)~conventional-collinear $t_L \to W_T^+ b_L$ and ultra-collinear $t_L \to W_L^+ b_L$, and (b)~conventional-collinear $t_R \to W_L^+ b_L$ and ultra-collinear $t_R \to W_T^+ b_L$. Decay and shower are matched at 187~GeV (vertical dashed line). The conventional-collinear contributions correspond to the upper histograms, while the ultra-collinear contributions correspond to the lower histograms. } \label{app:top} \end{figure} We have seen above how GEG allows us to organize the amplitude's dependence on EWSB by explicitly decomposing it into individual mass-insertion terms, or equivalently VEV-insertion terms. External-state fermion mass insertions are found by Taylor-expanding the fermion Dirac spinors, and external-state gauge boson mass insertions are found via the remainder polarization $\epsilon_n$. For more general processes, there may also be three-point interactions that function as VEV-insertions, such as interactions between the scalars or the $h V^\mu V_\mu$ vertices (listed in Sec.~\ref{sec:three-points}). Generally, we may rather straightforwardly construct any ultra-collinear amplitude at $\mathcal{O}(v)$ by adding together diagrams with exactly one mass-insertion or EWSB interaction. Besides helping to organize a calculation, this approach serves as a convenient tool for visualizing where different EWSB contributions arise in a given amplitude. 
Figure~\ref{fig:ultra_collinear} provides several examples, including \begin{itemize} \item Fig.~\ref{fig:tWb}: $t_L\rightarrow W^+_L b_L$, representative calculation for Table \ref{tab:broken_fermion_splittings}; \item Fig.~\ref{fig:TTL}: $W^{\pm}_T\rightarrow W^{\pm}_L Z_T$, representative calculation for Table \ref{tab:broken_vector_splittings}; \item Fig.~\ref{fig:LLL}: $Z_L \rightarrow W^+_L W^-_L$, representative calculation for Table \ref{tab:broken_scalar_splittings}; \item Fig.~\ref{fig:hLL}: $h \rightarrow W^+_L W^-_L$, representative calculation for Table \ref{tab:broken_scalar_splittings}. \end{itemize} \begin{figure}[t!] \centering \begin{subfigure}[t]{\textwidth} \centering \begin{picture}(400,120)(0,0) \SetColor{Black} \SetWidth{2} \ArrowLine( 0,90)(45,90) \DashLine(55,93)(95,108){4} \ArrowLine( 55,87)(95,70) \Text(25,80)[]{$\boldsymbol \Leftarrow$} \Text(70,71)[]{\rotatebox{-25}{$\boldsymbol \Leftarrow$}} \Text(0,100)[l]{$t_L$} \Text(100,108)[l]{$W^+_{L}$} \Text(100,70)[l]{$b_L$} \SetWidth{1} \GCirc(50,90){8}{0.7} \SetOffset(35,0) \SetWidth{2} \Text(-20,25)[]{$\boldsymbol =$} \ArrowLine( 0,25)(50,25) \Photon(50,25)(69,33){3}{2} \DashLine(76,36)(95,45){4} \ArrowLine( 50,25)(95,5) \Text(72.5,35)[]{$\boldsymbol \otimes$} \Text(25,15)[]{$\boldsymbol \Leftarrow$} \Text(70,6)[]{\rotatebox{-25}{$\boldsymbol \Leftarrow$}} \Text(120,25)[]{$\boldsymbol +$} \SetOffset(175,0) \ArrowLine( 0,25)(21,25) \ArrowLine( 29,25)(50,25) \DashLine(50,25)(95,45){6} \ArrowLine( 50,25)(95,5) \Text(25,25)[]{$\boldsymbol \otimes$} \Text(12,15)[]{$\boldsymbol \Leftarrow$} \Text(38,15)[]{$\boldsymbol \Rightarrow$} \Text(70,6)[]{\rotatebox{-25}{$\boldsymbol \Leftarrow$}} \Text(120,25)[]{$\boldsymbol +$} \SetOffset(315,0) \ArrowLine( 0,25)(50,25) \DashLine(50,25)(95,45){6} \ArrowLine( 50,25)(68,17) \ArrowLine( 76,13.4)(95,5) \Text(72.5,15)[]{$\boldsymbol \otimes$} \Text(25,15)[]{$\boldsymbol \Leftarrow$} \Text(60,10)[]{\rotatebox{-25}{$\boldsymbol \Rightarrow$}}
\Text(80,1)[]{\rotatebox{-25}{$\boldsymbol \Leftarrow$}} \end{picture} \caption{$t_L\to W_{L}^+b_L$} \label{fig:tWb} \vspace{9mm} \end{subfigure} \begin{subfigure}[t]{\textwidth} \centering \begin{picture}(400,60)(0,0) \SetColor{Black} \SetWidth{2} \Photon( 0,25)(45,25){3}{4} \DashLine(55,28)(95,43){4} \Photon( 55,21)(95,5){3}{4} \Text(0,40)[l]{$W^{\pm}_T$} \Text(100,43)[l]{$W^{\pm}_{L}$} \Text(100,5)[l]{$Z_T$} \SetWidth{1} \GCirc(50,25){8}{0.7} \SetOffset(160,0) \SetWidth{2} \Text(-10,25)[]{$\boldsymbol =$} \Photon( 15,25)(65,25){3}{4} \Photon(65,25)(84,33){3}{2} \DashLine(91,36)(110,45){4} \Photon( 65,25)(110,5){3}{4} \Text(87.5,35)[]{$\boldsymbol \otimes$} \Text(125,25)[]{$\boldsymbol +$} \SetOffset(300,0) \Photon( 0,25)(45,25){3}{4} \DashLine(53,27)(95,45){4} \Photon( 53,23)(95,5){3}{4} \Text(50,25)[]{$\boldsymbol \otimes$} \end{picture} \caption{$W_T^{\pm}\to W_{L}^{\pm}Z_T$} \label{fig:TTL} \vspace{9mm} \end{subfigure} \begin{subfigure}[t]{\textwidth} \centering \begin{picture}(400,120)(0,0) \SetColor{Black} \SetWidth{2} \DashLine( 0,90)(45,90){4} \DashLine(55,93)(95,108){4} \DashLine( 55,87)(95,70){4} \Text(0,100)[l]{$Z_{L}$} \Text(100,108)[l]{$W^+_{L}$} \Text(100,70)[l]{$W^-_{L}$} \SetWidth{1} \GCirc(50,90){8}{0.7} \SetOffset(20,0) \SetWidth{2} \Text(-5,25)[]{$\boldsymbol =$} \DashLine( 15,25)(65,25){4} \Photon(65,25)(84,33){3}{2} \DashLine(91,36)(110,45){4} \DashLine( 65,25)(110,5){4} \Text(87.5,35)[]{$\boldsymbol \otimes$} \Text(135,25)[]{$\boldsymbol +$} \SetOffset(175,0) \DashLine( 0,25)(50,25){4} \DashLine(50,25)(95,45){4} \Photon( 50,25)(68,17){3}{2} \DashLine( 76,13.4)(95,5){4} \Text(72.5,15)[]{$\boldsymbol \otimes$} \Text(120,25)[]{$\boldsymbol +$} \SetOffset(315,0) \DashLine( 0,25)(21,25){4} \Photon( 29,25)(50,25){3}{2} \DashLine(50,25)(95,45){4} \DashLine( 50,25)(95,5){4} \Text(25,25)[]{$\boldsymbol \otimes$} \end{picture} \caption{$Z_{L}\to W_{L}^+W^-_{L}$} \label{fig:LLL} \vspace{9mm} \end{subfigure} \begin{subfigure}[t]{\textwidth} \centering 
\begin{picture}(400,120)(0,0) \SetColor{Black} \SetWidth{2} \DashLine( 0,90)(45,90){4} \DashLine(55,93)(95,108){4} \DashLine( 55,87)(95,70){4} \Text(0,100)[l]{$h$} \Text(100,108)[l]{$W^+_{L}$} \Text(100,70)[l]{$W^-_{L}$} \SetWidth{1} \GCirc(50,90){8}{0.7} \SetOffset(20,0) \SetWidth{2} \Text(-5,25)[]{$\boldsymbol =$} \DashLine( 15,25)(65,25){4} \Photon(65,25)(84,33){3}{2} \DashLine(91,36)(110,45){4} \DashLine( 65,25)(110,5){4} \Text(87.5,35)[]{$\boldsymbol \otimes$} \Text(135,25)[]{$\boldsymbol +$} \SetOffset(175,0) \DashLine( 0,25)(50,25){4} \DashLine(50,25)(95,45){4} \Photon( 50,25)(68,17){3}{2} \DashLine( 76,13.4)(95,5){4} \Text(72.5,15)[]{$\boldsymbol \otimes$} \Text(120,25)[]{$\boldsymbol +$} \SetOffset(315,0) \DashLine( 0,25)(45,25){4} \DashLine(55,27)(95,45){4} \DashLine( 55,23)(95,5){4} \Text(50,25)[]{$\boldsymbol \otimes$} \end{picture} \caption{$h\to W_{L}^+W^-_{L}$} \label{fig:hLL} \end{subfigure} \caption{Representative ultra-collinear splittings with multiple contributing diagrams. The effects of the VEV are indicated schematically by the symbol $\otimes$. } \label{fig:ultra_collinear} \end{figure} \section{Final-State Shower Simulation} \label{sec:FSR} In order to facilitate studies of final-state weak showering at the level of exclusive rates, we have programmed a variation of the {\tt PYTHIA6}~\cite{Sjostrand:2006za} timelike virtuality-ordered parton shower. Basic collinear QCD is included by default, extended to the massive showering formalism outlined in Section~\ref{sec:split}, and including purely ultra-collinear processes. In addition, the full set of weak showering processes described in this paper has been added, with a number of novel features compared to standard showering programs, outlined in the main text. In particular, see Section~\ref{sec:novel_features}. Here we describe a few additional technicalities of the implementation. 
Splitting functions in the virtuality-ordered shower are simple to relate to those in the $k_T$-ordered shower, which we have used by default for most of the presentation. Using the relativistic/collinear approximation for a splitting $A \to B+C$, we get \begin{equation} Q^2 \,\simeq\, \frac{1}{z\bar z} (k_T^2 + \bar z m_B^2 + z m_C^2) \, . \label{eq:kTtoQ} \end{equation} Working in $\log Q$, we can build the translation \begin{equation} \frac{d{\cal P}}{dz \, d\log Q^2} \,\simeq\, \frac{1}{z\bar z} \frac{Q^2}{(Q^2 - m_A^2)^2} \left( \tilde k_T^4 \frac{d{\cal P}}{dz \, dk_T^2} \right) \, . \end{equation} The function in parentheses goes either as $k_T^2$ or as $v^2$. For given $Q$, $z$, and daughter masses, the former is simple to derive either by inverting the approximate Eq.~(\ref{eq:kTtoQ}) or by using exact kinematics. For the energy-sharing variable $z$, we use CM-frame three-momentum fraction $|\vec k_B|/(|\vec k_B|+|\vec k_C|)$. To approximately model the phase space effects in the nonrelativistic limit, we further weight the splitting probabilities by a velocity factor $|\vec k_B| |\vec k_C|/E_B E_C$. We also suppress splittings at angles larger than $\theta \approx \pi/2$, where the collinear shower would be highly untrustworthy. As in {\tt PYTHIA6}, the input to the shower is a ``hard'' partonic configuration with some characteristic virtuality scale, assumed here to be large compared to the weak scale. Evolution is based on a simple recoiler method, whereby particles are showered in pairs. (At the current level, no dipole coherence effects or color/isospin flows are incorporated, nor are they strictly necessary at leading-log level, but they would be possible to include in more advanced approaches.) Each particle in a pair undergoes a trial QCD/EW Sudakov evolution, defined in the hard event's rest frame, and ignoring the possible evolution of its sister. In general, each particle may undergo a $1\to2$ splitting and acquire an off-shell mass. 
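The ordering-variable translation introduced above can be sketched numerically. The helper below (our own illustrative code; names hypothetical) inverts the approximate relation of Eq.~(\ref{eq:kTtoQ}) to recover the trial $k_T^2$ at fixed $Q^2$, $z$, and daughter masses, and round-trips it for consistency:

```python
import math

def q2_from_kt2(kt2, z, mB, mC):
    """Collinear relation Q^2 ~ (kT^2 + zbar*mB^2 + z*mC^2)/(z*zbar)."""
    zb = 1.0 - z
    return (kt2 + zb * mB**2 + z * mC**2) / (z * zb)

def kt2_from_q2(q2, z, mB, mC):
    """Inverse used when generating in log Q^2:
    kT^2 = z*zbar*Q^2 - zbar*mB^2 - z*mC^2."""
    zb = 1.0 - z
    return z * zb * q2 - zb * mB**2 - z * mC**2

# round trip at sample kinematics (500 GeV kT, W+b daughters)
kt2 = 500.0**2
z, mB, mC = 0.3, 80.4, 4.7
q2 = q2_from_kt2(kt2, z, mB, mC)
assert abs(kt2_from_q2(q2, z, mB, mC) - kt2) < 1e-6 * kt2
```

The inverse mapping also makes explicit where the trial $k_T^2$ turns negative (kinematically forbidden splittings) at small $Q^2$ or extreme $z$.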
Kinematics are then adjusted within the pair's rest frame, by boosting each showered system along the pair's axis to preserve momentum and energy. If the summed masses from the trial evolutions exceed the original pair's mass, the more off-shell splitting is vetoed, and that particle's evolution is restarted. The procedure is easily applied recursively to build up completely showered events, with the two daughters from a given splitting serving as paired sisters in subsequent evolution. Kinematic back-reaction effects are also incorporated, as discussed in Section~\ref{sec:mass_effects} and parametrized in Eq.~(\ref{eq:weight}). The kinematic re-arrangements required by setting a daughter off-shell through its secondary showering can have a sizable effect on the mother's splitting rate. We introduce this back-reaction factor as an additional weight multiplying the daughter's splitting probability. In our virtuality-ordered implementation, the virtuality of the mother (invariant mass of the daughter pair) remains unchanged, so $Q^*=Q$. The Jacobian for the transformation is then simply $|dz^*/dz|$, and its explicit form is tied to our kinematic prescription above. Within the mother splitting $A \to B+C$, assume that particle $B$ with momentum-fraction $z$ is the one to be set off-shell: $B \to B^*$. Within the $A$ rest-frame, the direction of $B$ ($C$) is held at a fixed angle $\Theta$ ($\pi-\Theta$) relative to $A$'s boost axis from the CM-frame. The angle $\Theta$ has a one-to-one mapping to both the old $z$ and the new $z^*$, and is a useful intermediate variable. Another useful intermediate variable is the ratio $Y \equiv z^2/\bar z^2$, and the analogous $Y^*$.
The Jacobian can then be built up in pieces as \begin{equation} \left| \frac{dz^*}{dz} \right| \,=\, \left| \frac{dY^*}{dz^*} \right|^{-1} \, \left| \frac{dY^*}{d\Theta} \right| \, \left| \frac{dY}{d\Theta} \right|^{-1} \, \left| \frac{dY}{dz} \right| \, , \end{equation} where \begin{equation} \frac{dY}{dz} \,=\, \frac{2z}{\bar z^3} \end{equation} and \begin{equation} \frac{dY}{d\cos\Theta} \,=\, \frac{\mathcal{A}(\bar{\mathcal{B}}-\mathcal{B})\cos^2\Theta + 2\mathcal{A}(\bar{\mathcal{C}} - \mathcal{C})\cos\Theta + (\mathcal{B}\bar{\mathcal{C}} - \bar{\mathcal{B}}\mathcal{C})}{(\mathcal{A}\cos^2\Theta + \bar{\mathcal{B}}\cos\Theta + \bar{\mathcal{C}})^2} \, . \end{equation} The symbols $\mathcal{A}$, etc., here are shorthand for various quantities built out of $A$'s velocity $\beta_A$ and daughter kinematics in its rest-frame: the $A$-frame three-momentum magnitude of either daughter $P$, and their individual $A$-frame energies and kinematic masses $E_B$, $E_C$, $m_B$, $m_C$. We have \begin{equation} \begin{matrix} \mathcal{A} \,\equiv\, \beta_A^2 P^2 \, , \quad \mathcal{B} \,\equiv\, 2\beta_A P E_B \, , \quad \bar{\mathcal{B}} \,\equiv\, -2\beta_A P E_C \, , \\ \\ \mathcal{C} \,\equiv\, P^2 + \beta_A^2 m_B^2 \, , \quad \bar{\mathcal{C}} \,\equiv\, P^2 + \beta_A^2 m_C^2 \, . \end{matrix} \end{equation} Analogous formulas hold with $Y^*$ and $z^*$, defining the coefficients $\mathcal{A}$, etc., using the $A$-frame kinematic quantities redefined with $B$ set off-shell. (Prescriptions yielding simpler analytic formulas than ours almost certainly exist.) The differential splitting function of the mother must also be re-evaluated using the off-shell daughter kinematics. This is much simpler, as the main effect there is just the change in $z$. Explicit EWSB mass factors for the daughters, which appear in the numerators of the ultra-collinear splitting functions, are not adjusted from their on-shell values. Angular-ordering may also be invoked.
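As a sanity check on the Jacobian pieces above (our own numerical sketch, not from the text), the analytic $dY/d\cos\Theta$ can be compared against a finite difference of $Y(\cos\Theta)$. The rational form of $Y$ in $\cos\Theta$ used here is our reconstruction from boosting the fixed-angle daughter momenta; the sample masses and boost are illustrative.

```python
import math

# sample two-body kinematics A -> B + C (illustrative values, GeV)
mA, mB, mC, betaA = 500.0, 80.4, 4.7, 0.9
EB = (mA**2 + mB**2 - mC**2) / (2 * mA)   # A-frame daughter energies
EC = (mA**2 + mC**2 - mB**2) / (2 * mA)
P = math.sqrt(EB**2 - mB**2)              # common A-frame momentum

# shorthand coefficients from the text
A = betaA**2 * P**2
B = 2 * betaA * P * EB
Bbar = -2 * betaA * P * EC
C = P**2 + betaA**2 * mB**2
Cbar = P**2 + betaA**2 * mC**2

def Y(c):
    # Y = |k_B|^2/|k_C|^2 after boosting the A-frame momenta by beta_A;
    # this rational form in cos(Theta) is our reconstruction
    return (A*c*c + B*c + C) / (A*c*c + Bbar*c + Cbar)

def dY_dc(c):
    # analytic derivative quoted in the text
    num = A*(Bbar - B)*c*c + 2*A*(Cbar - C)*c + (B*Cbar - Bbar*C)
    return num / (A*c*c + Bbar*c + Cbar)**2

for c in (-0.5, 0.0, 0.6):
    h = 1e-6
    fd = (Y(c + h) - Y(c - h)) / (2 * h)
    assert abs(fd - dY_dc(c)) < 1e-4 * max(1.0, abs(dY_dc(c)))
```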
If the showering pair was itself produced from a splitting, the event-frame angles of each daughter splitting and mother splitting can be compared, and the former splitting(s) vetoed if it has a larger angle. This veto may be applied selectively depending on the nature of the splitting and its parent splitting. In our approach, parton shower evolution is automatically matched onto decay for $W^\pm$, $Z$, Higgs, and top. This matching is particularly simple in the virtuality-ordered shower. Particles that survive down to their decay/shower matching scale are assigned masses drawn from a Breit-Wigner distribution, and final-state flavors are assigned according to known branching fractions. In practice, we also weight the Breit-Wigner distribution to account for the different available decay phase space at different off-shell virtualities. Similar to a shower splitting, the decays are then further weighted with back-reaction factors if the decaying particle was itself produced in a splitting. The back-reaction factor here is applied as a simple probabilistic veto. Finally, we re-emphasize that the neutral bosons $\gamma/Z_T$ and $h/Z_{L}$ are produced and evolved as general quantum mixed states. They are assigned initial kinematic masses of zero and $m_Z$, respectively, and given nontrivial $2\times 2$ density matrices that evolve via matrix-valued Sudakov factors. There is one major practical difference in implementing these Sudakovs relative to simple number-valued Sudakovs. In the latter case, a given particle's wavefunction decreases in magnitude as its evolution proceeds, but the survival probability is an automatic outcome of the differential splitting rates integrated via Monte Carlo. In practice, these splitting rates are integrated over $z$ with the expedient of over-estimator functions, and vetoed down to the true rates.
In the matrix-valued case, however, the wavefunction can also {\it rotate}, and capturing this effect using over-estimator functions and a veto algorithm does not appear to be as straightforward. Instead, we use explicit formulas for the $z$-integrated splitting matrices at each virtuality step. These formulas are necessarily approximate, but we have verified that they yield results similar to what would be obtained by costly brute-force numerical integration. \section{Goldstone Equivalence} \label{sec:gauge} As discussed in Section~\ref{sec:broken}, there are considerable conceptual and technical complications in handling processes involving longitudinal gauge bosons at high energies. The behavior of longitudinal gauge bosons in high energy scattering and showering, both as off-shell intermediate states and as external particles participating in collinear splittings, becomes most transparent in ``physical'' non-covariant gauges where gauge-Goldstone mixing is left explicit, and the Goldstone fields remain capable of interpolating external particles~\cite{Beenakker:2001kf,Dams:2004vi,Srivastava:2002mw} (see also~\cite{Wulzer:2013mza}). We propose a particularly convenient physical gauge dubbed ``Goldstone Equivalence Gauge'' (GEG), wherein the emergence of Goldstone equivalence and its leading violations are manifest and easily calculable at tree-level, while maintaining some residual Lorentz symmetry and avoiding unphysical gauge poles. In this Appendix, we work out the details of this gauge. GEG is essentially a hybrid of Coulomb and light-cone gauges. It employs a light-like gauge reference four-vector that rotates with momentum\footnote{$k^0$ can be negative for general off-shell modes. The given parametrization of $n^\mu$ is not unique. For example, (sign$(k^0),-\hat{k}$) and (sign$(k^0)|\vec{k}|,-\vec{k}$) also serve the same purpose.} \begin{equation} n^\mu(k) \,=\, (n^0(k), \vec{n}(k)) \,\equiv\, (1, -\hat k \; {\rm sign}(k^0)), \qquad n^\mu n_\mu=0 .
\label{eq:nmu2} \end{equation} Representing a generic gauge adjoint component of a vector field by $W^\mu$, we decompose the gauge degrees of freedom as the components of $W_n\ (W_{\bar n})$ aligned (anti-aligned) with $n^\mu$ and the two $\pm1$ helicity (or ``$xy$'') transverse modes, collectively $W_T$: \begin{equation} W^\mu(k) \,=\, W_T(k) \ \epsilon_T^\mu(k) \,+\, W_n(k) \ \epsilon_n^\mu(k) \,+\, W_{\bar n}(k) \ \epsilon_{\bar n}^\mu(k) \, , \label{eq:Wexpansion} \end{equation} with $\bar n^\mu \equiv (1,+\hat k \, {\rm sign}(k^0))$. Since $W^\mu$ is a real vector field here, we have chosen the above definition such that $n^\mu(k)^* = n^\mu(-k)$. Introducing the gauge-fixing Lagrangian in momentum space as \begin{equation}\label{eq:gauge-fixing} {\cal L}_{\rm fix} \,=\, -{1\over 2\xi} \big(n(k)\cdot W(k)\big)\big( n(k)\cdot W(-k)\big), \quad\quad (\xi \to 0), \end{equation} the $W_{\bar n}$ field, which carries the large light-like component of the on-shell longitudinal polarization, ceases to propagate because of its infinite ``mass'' $1/\xi$. This is the key design feature of GEG. We are left with three physical degrees of freedom that can propagate. Note that GEG respects rotational $SO(3)$ symmetry by construction. The surviving polarization states are also invariant (up to a possible rescaling) under boosts collinear to $\vec k$. Incorporating EWSB, neither the gauge boson mass nor the would-be-Goldstone field $\phi$ is folded into the gauge-fixing procedure. The normalization of $W_n$ and its associated polarization vector $\epsilon_n^\mu \propto n^\mu$ can be chosen such that $W_n$ will interpolate external particles with unit amplitude: \begin{equation} \epsilon_n^\mu(k) \,\equiv\, \frac{-\sqrt{|k^2|}}{n(k)\cdot k}\ n^\mu(k) \,\,\overset{\overset{\text{\rm \footnotesize on-shell}}{}}{\to}\,\, \frac{m_W}{E+|\vec k|}\left(-1,\hat k\right).
\label{eq:epsilon_n} \end{equation} This polarization vector is what remains of the standard longitudinal polarization $\epsilon_{L}^\mu(k)$ upon subtraction of the Goldstone-equivalence term (scalarization term) $k^\mu/m_W$. Preserving Hermiticity of the $W_n$ field also necessitates introduction of a factor of $i$ into the polarization vector, such that $(i\epsilon_n^\mu(k))^* = i\epsilon_n^\mu(-k)$. This will also conveniently synchronize the phase of states created by the $W_n$ field and the $\phi$ field.\footnote{When working in a complex gauge basis, as for $W^\pm$, these polarization phase factors become simply $\pm 1$. In all cases, care must be taken to rigorously define the orientation of momentum flows when computing amplitudes, since $\epsilon_n^\mu(-k) = -\epsilon_n^\mu(k)$, and the sign is often needed to determine the relative phase between gauge-interpolated and Goldstone-interpolated diagrams.} Accounting for the gauge-Goldstone mixing term, the quadratic Lagrangian can then be expressed as \begin{eqnarray} \mathcal{L}_T(k)+{\rm h.c.} & \,=\, & W_T(k) \big(k^2 - m^2\big) W_T(-k) \nonumber \\ \mathcal{L}_{n\phi}(k)+{\rm h.c.} & \,=\, & \begin{bmatrix} W_n(k) & \phi(k) \end{bmatrix} \begin{pmatrix} |k^2| & -m_W\sqrt{|k^2|} \\ -m_W\sqrt{|k^2|} & k^2 \end{pmatrix} \begin{bmatrix} W_n(-k) \\ \phi(-k) \end{bmatrix} . \end{eqnarray} Inverting yields the propagators \begin{eqnarray} \big< W_T(k) W_T(-k) \big> & \,=\, & \frac{i}{k^2-m_W^2},\qquad \big< W_n(k) W_n(-k) \big> \,=\, \frac{i}{k^2-m_W^2}\,{\rm sign}(k^2) , \nonumber \\ \big< \phi(k) \phi(-k) \big> & \,=\, & \frac{i}{k^2-m_W^2},\quad \big< W_n(k) \phi(-k) \big> \,=\, \frac{i}{k^2-m_W^2}\frac{m_W}{\sqrt{|k^2|}} \, . \label{app:prop} \end{eqnarray} These propagators are naively fully Lorentz-invariant, though choosing a polarization basis in the first place has anyway tied us to a specific frame. They share a unique, common pole at $k^2 = m_W^2$ with residue +1.
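The $2\times2$ inversion can be checked numerically. The sketch below (our own cross-check, with the overall factor of $i$ stripped) confirms the $W_nW_n$, $\phi\phi$, and mixed entries for both timelike and spacelike $k^2$:

```python
import math

def propagators_from_inversion(k2, mW):
    """Invert the 2x2 quadratic form in the (W_n, phi) basis and return
    the (W_n W_n, phi phi, W_n phi) entries, with the overall i stripped."""
    a, b, d = abs(k2), -mW * math.sqrt(abs(k2)), k2
    det = a * d - b * b           # = |k^2| (k^2 - m_W^2)
    inv11 = d / det               # W_n W_n entry
    inv22 = a / det               # phi phi entry
    inv12 = -b / det              # mixed W_n phi entry
    return inv11, inv22, inv12

mW = 80.4
for k2 in (5 * mW**2, -2 * mW**2):   # timelike and spacelike test points
    wn, phi, mix = propagators_from_inversion(k2, mW)
    assert abs(wn - math.copysign(1.0, k2) / (k2 - mW**2)) < 1e-12
    assert abs(phi - 1.0 / (k2 - mW**2)) < 1e-12
    assert abs(mix - mW / (math.sqrt(abs(k2)) * (k2 - mW**2))) < 1e-12
```

The sign$(k^2)$ factor in the $W_nW_n$ entry and the $m_W/\sqrt{|k^2|}$ suppression of the mixed entry fall out of the inversion exactly as quoted.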
The mixed $W_n$ and $\phi$ fields interpolate the same particle: the ``longitudinal gauge boson'' or ``Goldstone boson,'' depending on perspective.\footnote{This may be seen in various ways. Probably the most intuitive is to incorporate the $W$'s decay into massless fermions, as actually occurs in the SM. A $W_n$/$\phi$ created from some hard process would then coherently propagate and decay into the same final-state with the same amplitude.} Note that the apparent spurious pole at $k^2 = 0$ in the mixed propagator is purely an artifact of our momentum-dependent field normalization, and does not lead to light-like gauge poles in complete Feynman diagrams.\footnote{Such a pole arises in Lorenz-Landau gauge, where gauge-fixing on the light-cone is incomplete. Generally, gauge poles will cancel between gauge-exchange and Goldstone-exchange diagrams, but can lead to spurious singularities in individual diagrams. In GEG, the only such gauge pole occurs at the zero-mode, $k^\mu=0$, and only in the mixed gauge-Goldstone propagator. The loop-level and renormalization properties of this gauge could be interesting to study, assuming that there are no obvious analytic obstructions to doing so. However, as we here confine ourselves to tree-level, we save this topic for future work.} Goldstone boson equivalence in the high-energy limit now emerges trivially, diagram-by-diagram. For a process where $|k^2| \gg m_W^2$ for all internal gauge/Goldstone lines and $E \gg m_W$ for all external bosons, the mixed propagators and $\epsilon_n$ factors scale away, leaving only the Goldstone contributions. In addition, since there are no terms that go like $k/m_W$ or $E/m_W$, power-counting of corrections $\propto m_W$ becomes straightforward at the level of individual Feynman diagrams. Upon introduction of complete fermion and scalar sectors, we may generalize to counting VEV factors associated with arbitrary masses and interactions introduced by spontaneous symmetry breaking.
Some simple examples for splitting calculations are given in Appendix~\ref{sec:FeynmanRules}. \begin{figure}[t] \begin{center} \begin{picture}(500,75)(0,0) \SetColor{Black} \SetWidth{2} \SetScale{0.4} \Text(10,50)[]{$\left\{ \begin{array}{c} \\ \\ \\ \\ \\ \\ \end{array} \right. $} \SetOffset(15,5) \Line(25,15)(50,50) \Line(0,40)(50,50) \Line(15,85)(50,50) \Vertex(15,55){1.5} \Vertex(15,62){1.5} \Vertex(17,69){1.5} \Photon(50,50)(90,50){3}{3} \Text(40,20)[]{$\boldsymbol \otimes$} \DashLine(109,50)(150,50){6} \LongArrow(150,50)(215,45) \LongArrow(150,50)(215,55) \GCirc(50,50){12}{0.7} \GCirc(150,50){12}{0.7} \Text(100,20)[]{\bf +} \SetOffset(130,5) \Line(25,15)(50,50) \Line(0,40)(50,50) \Line(15,85)(50,50) \Vertex(15,55){1.5} \Vertex(15,62){1.5} \Vertex(17,69){1.5} \Photon(50,50)(150,50){3}{5} \LongArrow(150,50)(215,45) \LongArrow(150,50)(215,55) \GCirc(50,50){12}{0.7} \GCirc(150,50){12}{0.7} \SetOffset(15,55) \Line(25,15)(50,50) \Line(0,40)(50,50) \Line(15,85)(50,50) \Vertex(15,55){1.5} \Vertex(15,62){1.5} \Vertex(17,69){1.5} \DashLine(50,50)(150,50){6} \LongArrow(150,50)(215,45) \LongArrow(150,50)(215,55) \GCirc(50,50){12}{0.7} \GCirc(150,50){12}{0.7} \Text(100,20)[]{\bf +} \SetOffset(130,55) \Line(25,15)(50,50) \Line(0,40)(50,50) \Line(15,85)(50,50) \Vertex(15,55){1.5} \Vertex(15,62){1.5} \Vertex(17,69){1.5} \DashLine(50,50)(90,50){6} \Text(40,20)[]{$\boldsymbol \otimes$} \Photon(109,50)(150,50){3}{3} \LongArrow(150,50)(215,45) \LongArrow(150,50)(215,55) \GCirc(50,50){12}{0.7} \GCirc(150,50){12}{0.7} \Text(100,20)[]{\bf +} \SetOffset(-5,0) \Text(245,50)[]{$\left. 
\begin{array}{c} \\ \\ \\ \\ \\ \\ \end{array} \right\} $} \Text(270,50)[]{$\simeq$} \SetOffset(290,65) \Line(25,15)(50,50) \Line(0,40)(50,50) \Line(15,85)(50,50) \Vertex(15,55){1.5} \Vertex(15,62){1.5} \Vertex(17,69){1.5} \DashLine(50,50)(100,50){6} \GCirc(50,50){12}{0.7} \Text(60,20)[]{\bf +} \SetOffset(365,65) \Line(25,15)(50,50) \Line(0,40)(50,50) \Line(15,85)(50,50) \Vertex(15,55){1.5} \Vertex(15,62){1.5} \Vertex(17,69){1.5} \Photon(50,50)(100,50){3}{3} \GCirc(50,50){12}{0.7} \SetOffset(250,-5) \DashLine(100,50)(150,50){6} \LongArrow(150,50)(215,45) \LongArrow(150,50)(215,55) \GCirc(150,50){12}{0.7} \Text(100,20)[]{\bf +} \SetOffset(325,-5) \Photon(100,50)(150,50){3}{3} \LongArrow(150,50)(215,45) \LongArrow(150,50)(215,55) \GCirc(150,50){12}{0.7} \SetOffset(290,84.5) \Text(0,0)[]{$\left\{ \begin{array}{c} \\ \\ \end{array} \right. $} \SetOffset(405,84.5) \Text(0,0)[l]{$\left. \begin{array}{c} \\ \\ \end{array} \right\} {\boldsymbol\times}$} \SetOffset(360,50) \Text(0,0)[]{$\left(\frac{i}{k^2-m_W^2}\right) \quad {\boldsymbol\times}$} \SetOffset(290,15.5) \Text(0,0)[]{$\left\{ \begin{array}{c} \\ \\ \end{array} \right. $} \SetOffset(405,15.5) \Text(0,0)[l]{$\left. \begin{array}{c} \\ \\ \end{array} \right\} $} \SetOffset(350,-7) \Text(0,0)[]{{\bf +} non-collinear} \end{picture} \end{center} \caption[]{Schematic tree-level collinear factorization for an arbitrary process with a splitting Goldstone/longitudinal in the final state.} \label{fig:factorizationEquation} \end{figure} We can also see how this gauge choice facilitates a factorized picture of longitudinal gauge/Goldstone boson production and splitting in the parton shower, beyond the simple Goldstone-equivalent picture at zeroth-order in the VEV. Fig.~\ref{fig:factorizationEquation} illustrates how this works schematically in a final-state shower. A generic hard process produces an off-shell gauge/Goldstone boson of virtuality $k^2$ with $m_W^2 \ll k^2 \ll E^2$, and this boson subsequently splits. 
There are four contributing classes of diagrams, corresponding to the four possible propagator exchanges between the production and splitting processes. We would like to approximate this as an on-shell production amplitude multiplied by a universal splitting amplitude. The decomposition is trivial for the leading pure Goldstone exchange diagram, but the other, subleading diagrams involve interplays between the propagators and the off-shell polarization vectors $\epsilon_n^\mu \propto (\sqrt{k^2}/E)n^\mu$. For the mixed diagrams, the propagator factor $m_W/\sqrt{k^2}$ can be combined with the polarization factor $\sqrt{k^2}/E$ to yield an approximate on-shell polarization proportional to $m_W/E$. Assuming that there is no large back-reaction in the hard production matrix element (at least to $O(m_W^2)$), contracting with the rescaled off-shell polarization approximately reproduces the on-shell hard process. For the mixed diagram where the gauge field contracts with the splitting process, this decomposition simply instructs us to compute the splitting amplitude with an effective on-shell $\epsilon_n$. The pure gauge exchange does not immediately fit this pattern, but it can be separated into two pieces: $1/(k^2-m_W^2) = (m_W^2/k^2)/(k^2-m_W^2) + 1/k^2$. The former piece has the correct structure to provide $m_W/\sqrt{k^2}$ factors to each gauge polarization. The latter piece cancels the $\sqrt{k^2}$'s from each polarization vector, but leaves no poles or mass factors. It therefore produces a non-collinear interaction that goes as $1/E^2$ instead of $1/(k^2-m_W^2)$, and can be grouped together with the neglected non-collinear diagrams. We can view all of the remaining collinear contributions as a simple product of on-shell gauge+Goldstone production and gauge+Goldstone splitting matrix elements, connected by the standard scalar propagator $i/(k^2-m_W^2)$.
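The propagator separation used in this argument is a pure algebraic identity, which a short numerical check (ours, with illustrative values of $k^2$) confirms for timelike, spacelike, and near-resonance momenta:

```python
# verify 1/(k^2 - mW^2) = (mW^2/k^2)/(k^2 - mW^2) + 1/k^2
mW2 = 80.4**2
for k2 in (10 * mW2, -3 * mW2, 1.5 * mW2):
    lhs = 1.0 / (k2 - mW2)
    rhs = (mW2 / k2) / (k2 - mW2) + 1.0 / k2
    assert abs(lhs - rhs) < 1e-12 * abs(lhs)
```

The first term on the right carries the pole with the $m_W^2/k^2$ suppression factor, while the pole-free $1/k^2$ remainder is what gets absorbed into the non-collinear contributions.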
Analogous results were obtained for the factorization of logarithmic virtual corrections to external gauge/Goldstone bosons in~\cite{Beenakker:2001kf} by working directly in Coulomb gauge, and in~\cite{Denner:2000jv,Denner:2001gw} by invoking the Goldstone Boson Equivalence Theorem in Feynman-'t~Hooft gauge. Our own approach directly exhibits the applicability of the Equivalence Theorem in the corresponding real emission processes at tree-level, and extends them beyond the strict Goldstone limit to $O(m_W/E)$. \section{Shower Implementation and Related New Phenomena} \label{sec:implementation} We are now in a position to implement the splitting formalism and to present some initial physics results. Our studies here involving PDFs have been generated using simple numerical integration techniques. Our studies involving final-state radiation, which provide much more exclusive event information, have been generated using a dedicated virtuality-ordered weak showering code. Some technical aspects of this code can be found in Appendix~\ref{sec:FSR}. We do not presently study the more technically-involved exclusive structure of weak ISR. More detailed investigations of specific physics applications will appear in future work~\cite{EWshower}. We first show some representative integrated splitting rates for an illustrative set of electroweak splitting processes in Table~\ref{table:splitting_rates}, at incoming energies of 1 and 10~TeV, as well as the leading-log asymptotic behavior. We have mainly focused on examples from Sections~\ref{sec:unbroken} and~\ref{sec:broken} that exhibit single- or double-logarithmic scaling with energy. Unless otherwise noted, the rates are summed/averaged over spins and particle species. (For instance, $q=u_L,u_R,d_L,d_R$, and $f$ denotes all twelve fermion types of either spin.) The symbols in the parentheses denote the conventional collinear-enhanced (CL), infrared-enhanced (IR) and ultra-collinear (UC) behaviors, respectively.
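The leading-log column of Table~\ref{table:splitting_rates} can be evaluated directly to recover the tabulated probabilities at the quoted precision. A sketch (prefactors taken from the table; $m_W \simeq 80.4$~GeV assumed; the 1~TeV entries differ more noticeably from the pure leading-log estimate because subleading terms are not yet negligible there):

```python
import math

mW = 80.4  # GeV (assumed)

def leading_log(prefactor, E_GeV, power):
    """Evaluate c * [log(E/mW)]^power for a leading-log splitting probability."""
    return prefactor * math.log(E_GeV / mW) ** power

# V_T -> V_T V_T (CL+IR, double log): tabulated as 34% at E = 10 TeV
p_VVV_10 = leading_log(0.015, 1e4, 2)   # roughly 0.35
# q -> V_T q (CL+IR, double log): tabulated as 7% at E = 10 TeV
p_qVq_10 = leading_log(3e-3, 1e4, 2)    # roughly 0.07
# V_T -> V_L V_T (UC+IR, single log): tabulated as 7% at E = 10 TeV
p_VLV_10 = leading_log(0.014, 1e4, 1)   # roughly 0.07
```
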
Radiation of a $V_T$ boson exhibits the usual CL+IR double-log behavior. Notably, the largest splitting rates occur for $V_T \to V_T V_T$, due to the large adjoint gauge charge. Splittings of this type occur with roughly $35\%$ probability at 10~TeV, a factor that is enormous for an ``EW correction'' and which clearly indicates the need for shower resummation. We also see the analogous UC+IR process $V_T\to V_L V_T$, which only grows single-logarithmically, but which still represents a sizable fraction of the total splitting rate (even more so if we focus on low-$k_T$ regions, similar to Fig.~\ref{fig:zkt}). Similarly, the other ultra-collinear channels are smaller but not negligible. We next present our numerical results for various exclusive splitting phenomena, paying special attention to the novelties that arise in the EW shower. \begin{table} \begin{center} \begin{tabular}{ l | c | c | c } Process \ & $\approx {\mathcal P}(E)$ (leading-log term) & \ ${\mathcal P}(1~{\rm TeV})$ \ & \ ${\mathcal P}(10~{\rm TeV})$ \ \\ \hline $q \to V_Tq^{(\prime)}$ \ (CL+IR) & $(3\times10^{-3})\left[\log\frac{E}{m_{W}^{}} \right]^2$ & 1.6\% & 7\% \\ $q \to V_{L}q^{(\prime)}$ \ (UC+IR) & $ (2\times10^{-3})\log\frac{E}{m_{W}^{}}$ & 0.4\% & 1.1\% \\ \hline $t_R \to W_L^+ b_L$ \ (CL) & $(8\times10^{-3}) \log\frac{E}{m_{W}^{}}$ & 2.5\% & 4\% \\ $t_R \to W_T^+ b_L$ \ (UC) & $(6\times10^{-3}) $ & 0.6\% & 0.6\% \\ \hline $V_T \to V_T V_T$ \ (CL+IR) & $(0.015)\left[\log\frac{E}{m_{W}^{}} \right]^2$ & 7\% & 34\% \\ $V_T \to V_{L}V_T$ \ (UC+IR) & $(0.014)\log\frac{E}{m_{W}^{}}$ & 2.7\% & 7\% \\ $V_T \to f\bar f$ \ (CL) & $(0.02)\log\frac{E}{m_{W}^{}}$ & 5\% & 10\% \\ \hline $V_{L} \to V_T h$ \ (CL+IR) & $(2\times10^{-3})\left[\log\frac{E}{m_{W}^{}} \right]^2$ & 0.8\% & 4\% \\ $V_{L} \to V_L h$ \ (UC+IR) & $(2\times10^{-3})\log\frac{E}{m_{W}^{}} $ & 0.5\% & 1\% \end{tabular} \end{center} \caption{Representative electroweak splitting behaviors and integrated fixed-order splitting 
probabilities for an illustrative set of processes at two parent energies $E=1,\ 10$ TeV. The symbols in the parentheses denote the collinear (CL), infrared (IR), and ultra-collinear (UC) behaviors, respectively.} \label{table:splitting_rates} \end{table} \subsection{Weak boson PDFs} We first revisit the classic calculation of weak boson PDFs within proton beams~\cite{Kane:1984bb,Dawson:1984gx}. The basic physical picture has been dramatically confirmed with the observation of the Higgs boson signal via vector boson fusion at the LHC~\cite{LHCVBF}. It is anticipated that at energies in the multi-TeV regime, the total production cross section for a vector boson fusion process $V_1 V_2\to X$ can be evaluated by convoluting the partonic production cross sections over the gauge boson PDFs, originating from the quark parton splittings $q\to W^\pm q',\ q \to \gamma/Z q$.\footnote{It should be noted that a formal factorization proof for electroweak processes in hadronic collisions is thus far lacking. For instance, it is not presently demonstrated whether contributions from gauge boson exchanges between the two incoming partons are factorizable. Nonetheless, we expect that the factorized PDF approach should furnish a reliable and useful calculation tool at very high energies at leading order, as indicated by simple scaling arguments~\cite{Kunszt:1987tk,Borel:2012by}.} A useful intermediate object in this calculation is the parton-parton luminosity, consisting of the convolutions of the PDFs from each proton.
We write the cross section in terms of the parton luminosity of gauge boson collisions as \begin{equation} \sigma_{PP}(V_1 V_2 \to X) \,=\, \int_{\tau_{\rm low}}^{\tau_{\rm high}} d\tau \, \frac{d\mathcal{L}_{V_1 V_2}}{d\tau}\ \hat{\sigma}(V_1 V_2 \rightarrow \hat X_\tau) \, , \end{equation} and can approximate this luminosity at fixed-order using the concept of weak boson PDFs of individual quarks within the proton: \begin{eqnarray} \frac{d\mathcal{L}_{V_1 V_2}}{d\tau} & \,\simeq\, & \frac{2}{(\delta_{V_1 V_2}+1)} \int^1_\tau\frac{d\xi}{\xi} ~\int^1_{\tau/\xi}\frac{dz_1}{z_1}~\int^1_{\tau/(\xi z_1)}\frac{dz_2}{z_2} \times \nonumber \\ & & \sum_{q_1,q_2} f_{V_1\in q_1}(z_1)f_{V_2\in q_2}(z_2)~f_{q_1\in P}(\xi)f_{q_2\in P}\left(\frac{\tau}{\xi z_1 z_2}\right) \, . \label{eq:partonLumi} \end{eqnarray} Here, $\tau = s/S$ is the ratio of the partonic and hadronic energies squared, and $\tau_{\rm low}$ and $\tau_{\rm high}$ are the kinematic boundaries (e.g., defining a bin in a histogram). We assume $\tau_{\rm low} \gg 4m_W^2/S$. The objects $f_{V\in q}$ are evaluated at fixed-order as \begin{equation} f_{V\in q}(z) \,\approx\, \int_{0}^{{\cal O}(s/4)} dk_T^2 \, \frac{d{\cal P}_{q\to Vq^{(\prime)}}}{dz \, dk_T^2}(z,k_T^2) \, , \end{equation} where the upper boundary of the $k_T$ integration is of order the partonic CM energy. For example~\cite{Kane:1984bb,Dawson:1984gx}, \begin{eqnarray} f_{W_T^\pm \in u/d}(z) \,\simeq\, \frac{\alpha_W}{8\pi} \frac{1+\bar z^2}{z} \log\left(\frac{s}{4m_W^2}\right) , \quad f_{W_{L}^\pm \in u/d}(z) \,\simeq\, \frac{\alpha_W}{4\pi} \frac{\bar z}{z} , \label{eq:fixed-order-PDFs} \end{eqnarray} where the PDFs have been integrated up to $k_T^2 = s/4$, assumed to be much larger than $m_W^2$. We emphasize that in deriving these illustrative fixed-order weak boson PDFs, we have {\it not} resummed the logarithmic enhancement, which remains explicit in Eq.~(\ref{eq:fixed-order-PDFs}) for the transverse bosons.
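The fixed-order PDFs of Eq.~(\ref{eq:fixed-order-PDFs}) are straightforward to code up. A minimal sketch (assuming illustrative values $\alpha_W \simeq 0.034$ and $m_W \simeq 80.4$~GeV), which also exhibits the qualitative distinction between the two polarizations:

```python
import math

alpha_W = 0.034   # g^2/(4 pi), assumed illustrative value
mW = 80.4         # GeV (assumed)

def f_WT(z, s):
    """Fixed-order transverse W PDF in a quark, integrated up to kT^2 = s/4."""
    zbar = 1.0 - z
    return alpha_W / (8 * math.pi) * (1 + zbar**2) / z * math.log(s / (4 * mW**2))

def f_WL(z, s):
    """Fixed-order longitudinal W PDF: ultra-collinear, so no log(s) enhancement."""
    zbar = 1.0 - z
    return alpha_W / (4 * math.pi) * zbar / z

z = 0.3
s_low, s_high = (2e3)**2, (2e4)**2   # illustrative partonic s values, in GeV^2
# The transverse PDF grows logarithmically with s; the longitudinal one saturates.
assert f_WT(z, s_high) > f_WT(z, s_low)
assert f_WL(z, s_high) == f_WL(z, s_low)
```

These $f_{V\in q}$ would then feed into the nested convolutions of Eq.~(\ref{eq:partonLumi}) together with ordinary quark PDFs.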
There are also corresponding double- and single-log EW enhancements in the virtual corrections for the sourcing quarks, arising from integrating over both $z$ and $k_T$, which we have not accounted for. While these are of formally higher-order concern in determining the weak boson PDFs, they would also be required for an all-orders resummation of the leading-order effects. (We comment on other novel EW effects on the quark PDFs at the end of this subsection.) A related issue is that there are factorization scales implicit in the definition of the sourcing quark PDFs. Since the weak coupling and $\log(E/m_W)$ factors are together still below $\mathcal{O}(1)$ size at planned future machines, the choice of factorization scale might also seem to be of strictly higher-order concern. However, the interleaving of the much faster QCD evolution complicates the situation somewhat, especially at a large value of the energy fraction $z$. We have already noted above that the longitudinal $W/Z$ PDFs would not continue to be sourced above $m_W$, as their ultra-collinear generation is constrained to the region $k_T \sim m_W$. It is therefore important to fix a factorization scale of ${\cal O}(m_W)$ for the quark PDFs from which the fixed-order $W_{L}$ PDFs are derived, even for processes where $\sqrt{s} \gg m_W$~\cite{Han:1992hr}. However, the transverse $W/Z$ PDFs are sourced continuously at all scales. Higher-order calculations and/or full solution of the mixed QCD/EW DGLAP equations would be required to more fully resolve the issue of scale choices for the transverse bosons. Here we simply fix the scale for the sourcing quark PDFs to be the geometric mean of $\sqrt{s}$ and $m_W$ (e.g., ${\cal O}$(1~TeV) in a 10~TeV process).\footnote{This calculation uses only QCD evolution for the quark PDFs. The additional impact of electroweak evolution effects on the sourcing of the electroweak PDFs should indeed be small. 
Note also that mixed processes, such as $V_T V_{L} \to X$, would generally need a different factorization scale for each sourcing quark PDF.} Figures \ref{fig:PDF}(a) and \ref{fig:PDF}(b) show the predicted fixed-order luminosities for a variety of possible colliding partons, including quarks as well as polarized $W^\pm$ bosons and photons, at the 14 TeV LHC and a 100~TeV $pp$ collider. At low scales, the ``EW'' PDFs are of course wholly dominated by photons. However, at scales above $m_W$, the $W^\pm$ PDFs are of comparable size. This can be seen here by comparing the $q\gamma$ and $qW_T^\pm$ parton luminosities, as well as the $W_T^+\gamma$ and $W_T^+W_T^-$ luminosities. Note that in this comparison, we have also derived the photon PDF at fixed-order, sourced from quark PDFs. Attempts at fitting the photon PDFs with LHC data have recently been made~\cite{Ababekri:2016kkj}. Some recent discussions regarding the factorization scale uncertainties can be found in Ref.~\cite{Alva:2014gxa}. More importantly, a complete description will ultimately require including as well the $Z_T$ and {\it mixed} $\gamma/Z_T$ PDFs~\cite{EWshower}. The PDFs and corresponding parton luminosities for longitudinal gauge bosons can be seen to be significantly smaller than those of transverse bosons. Of course, these nonetheless remain uniquely important for probing the nature of the electroweak sector beyond the Standard Model \cite{Lee:1977eg,Chanowitz:1985hj,Barger:1990py,Bagger:1995mk,Agashe:2004rs,Giudice:2007fh}. In Fig.~\ref{fig:PDF}(c), we show the ratios of the partonic luminosities at the 100~TeV collider and the LHC, $dL^{100}(s)/dL^{14}(s)$. The increase with energy is largest for $W_{L}W_{L}$, with an enhancement factor of about two orders of magnitude for $\sqrt s = 1$--4~TeV.
\begin{figure}[t] \begin{center} \begin{subfigure}[t]{0.495\textwidth} \includegraphics[width=210pt]{figs/Lumin_WW_14TeV.eps} \vspace{-0.1cm}\caption{}\end{subfigure} \begin{subfigure}[t]{0.495\textwidth} \includegraphics[width=210pt]{figs/Lumin_WW_100TeV.eps} \vspace{-0.1cm}\caption{}\end{subfigure} \\ \vspace{0.2cm} \begin{subfigure}[t]{0.495\textwidth} \includegraphics[width=210pt]{figs/Lumin_WW_ratio.eps} \vspace{-0.1cm}\caption{}\end{subfigure} \end{center} \caption[]{Representative parton luminosities in $pp$ collisions at (a)~$\sqrt S=14$~TeV, (b)~$\sqrt S=100$~TeV, and (c) the ratio of luminosities between the two beam energies as a function of partonic CM energy $\sqrt{s}$. } \label{fig:PDF} \end{figure} As discussed in Sec.~\ref{sec:evolv}, some additional novel electroweak effects in the PDFs involve the different gauge interactions of left-handed and right-handed chiral fermions, and the isospin non-singlet nature of typical beam particles. The former leads to more rapid evolution to low-$x$ for left-handed fermions than for right-handed fermions. The latter leads to Bloch-Nordsieck violation \cite{Ciafaloni:2000rp,Bell:2010gi,Manohar:2014vxa}. In PDF language, this appears as a self-correcting instability wherein the two LH isospin components of the beam flip between one another at a progressively increasing double-logarithmic rate, via soft/collinear $W^\pm$ emissions. Both effects contribute to spontaneous beam polarization. In particular, in unpolarized proton beams the $u_L$ and $d_L$ PDFs will gradually split off from the $u_R$ and $d_R$ PDFs, and begin to asymptotically merge together into a common ``$q_L$'' PDF at high energies. We investigate these phenomena in future work~\cite{EWshower}. \subsection{Final states with multiple gauge bosons} The collinear showering approximation allows us to estimate the leading contributions for multiple EW gauge boson production at high energies. 
A major component is splittings amongst the gauge bosons themselves via their non-Abelian interactions, in analogy with $g \to gg$ splittings in QCD. These have so far received little dedicated study in the electroweak theory within a parton shower framework. For some earlier studies of the fixed-order Sudakov effects in high-$p_T$ gauge boson production, see for example~\cite{Kuhn:2005az,Kuhn:2005gv,Kuhn:2007cv}.\footnote{As a simple cross-check of our shower framework, we can make a comparison to the $p_T$-dependent EW radiative corrections in $Wj$ production, as computed to NLO and approximate NNLO~\cite{Kuhn:2007cv}. Since our shower is defined only for FSR, we study $Wq$ production and square the inferred Sudakov factor for the final-state quark. This approximately includes the Sudakov contribution of the initial-state quark. We select events without $W/Z$ emissions, but allow final-state photons. At $p_T = 1$~TeV, the EW correction to (NLO,NNLO) order is computed to be $-(27,24)\%$, whereas our resummed shower Sudakov also predicts $-24\%$. At $p_T = 2$~TeV, the EW correction to (NLO,NNLO) order is computed to be $-(42,34)\%$, whereas our resummed shower Sudakov predicts $-33\%$. 
(Following the exponentiation pattern of the corrections, the NNNLO contribution would be $\sim 1\%$.)} \begin{figure}[t] \begin{center} \begin{subfigure}[t]{0.495\textwidth} \includegraphics[width=200pt]{figs/TaoPlot_MadGraph.eps} \vspace{-0.5cm}\caption{}\end{subfigure} \begin{subfigure}[t]{0.495\textwidth} \includegraphics[width=200pt]{figs/TaoPlot_PYTHIAshower.eps} \vspace{-0.5cm}\caption{}\end{subfigure} \\ \vspace{0.2cm} \begin{subfigure}[t]{0.495\textwidth} \includegraphics[width=200pt]{figs/TaoPlot_minimalShower_alphaDiv10.eps} \vspace{-0.5cm}\caption{}\end{subfigure} \begin{subfigure}[t]{0.495\textwidth} \includegraphics[width=200pt]{figs/TaoPlot_fullShower.eps} \vspace{-0.5cm}\caption{}\end{subfigure} \vspace{-0.6cm} \end{center} \caption[]{Event population for exclusive $WZ+j$ production in the plane of $2p_T(W)/H_T$ versus $\Delta R(W,Z)$ with $p_T(j) \ge 3$~TeV at a 100~TeV proton collider. (a)~$2\to 3$ fixed-order $WZj$ production generated with {\tt MadGraph}; (b)~$2\to 2$ dressed with the {\tt PYTHIA} weak shower, which includes only $q\to Vq$ splittings; (c)~$2\to 2$ $Wj$ and $Zj$ production dressed with fixed-order FSR splitting functions; (d)~$2\to 2$ dressed with the full EW FSR shower, including all collinear final-state Sudakov effects. QCD showering is not incorporated. An integrated luminosity of 10~ab$^{-1}$ is used for illustration.} \label{fig:WZjet}\end{figure} As a simple illustration of the onset of shower-dominated behavior, we show in Fig.~\ref{fig:WZjet}(a) a 2D kinematic distribution in fixed-order $W^\pm Z + q/g$ production at a 100~TeV proton collider, generated with {\tt MadGraph5}~\cite{Alwall:2011uj}. A single kinematic cut $p_T(q/g) > 3$~TeV is applied. The horizontal axis is the $\Delta R$ separation between the $W$ and $Z$, and the vertical axis is the relative transverse momentum carried by the $W$: $2p_T(W)/H_T$ with $H_T$ defined as the scalar sum of all object $p_T$s. Several features are immediately apparent. 
Most of the rate is concentrated along a curved band at low $\Delta R(W,Z)$, indicating $W(q/g)$ production with a secondary collinear $W \to ZW$ splitting, and with enhancements at high (low) relative $p_T$ for $W\ (Z)$ events. A second clear concentration of events occurs at $\Delta R(W,Z) \simeq \pi$ and near-maximal relative $H_T$ indicating $Wq$ production with a secondary $q \to Zq$ splitting. A third, more subtle concentration is visible at $\Delta R(W,Z) \simeq \pi$ and low relative $H_T$, representing $Zq$ production with a secondary $q \to Wq'$ splitting. We can show how portions of this distribution arise within an available showering framework by generating $Vj$ events within {\tt PYTHIA8}, and applying its native weak parton shower~\cite{Christiansen:2014kba}. This shower currently includes only $q\to Vq$ splittings, and does not model the $V\to VV$ splittings responsible for the dominant rate near $\Delta R(W,Z) \simeq 0$. The resulting incomplete distribution is shown in Fig.~\ref{fig:WZjet}(b). As a step toward gaining a more complete picture, we show in Fig.~\ref{fig:WZjet}(c) the same distribution with hard $Vj$ events supplied by {\tt PYTHIA8} but dressed with our own EW FSR treatment (Appendix~\ref{sec:FSR}), for the moment using fixed-order splitting functions and without Sudakov evolution effects. Now including $V\to VV$ as well as $V\to Vq$, the agreement becomes quite good in all of the collinear-enhanced regions where we expect splitting functions to furnish a reliable description.\footnote{Physics parameters here and in the {\tt MadGraph} simulation are evaluated at a fixed scale of $m_Z$ for simplicity of comparison, using {\tt MadGraph}'s defaults. The PDF set is CTEQ6L1, evaluated at a factorization scale of 3~TeV. The {\tt PYTHIA} simulation does not track fermion chirality throughout the hard event, and directly collapses $\gamma/Z$ states into mass basis instead of providing a gauge-space wave function. 
We have explicitly corrected for both of these effects in this comparison and below.} Besides the simpler generation of high-multiplicity final-states in collinear regions, the advantage of the parton shower is the ability to automatically fold in Sudakov corrections, going beyond fixed-order predictions. We show the result of running the full parton shower evolution in Fig.~\ref{fig:WZjet}(d), including as well important contributions such as $V \to f\bar f$. Exclusive $W^\pm Z(q/g)$ events are selected by requiring exactly one each of ``on-shell'' $W$ and $Z$, defined as lying within 10$\Gamma$ of their pole mass, and we allow for multiple photon emissions. While the distribution looks similar to that at fixed-order, the overall rates in the collinear regions are reduced by several tens of percent due to the Sudakov corrections. \begin{figure}[t] \begin{center} \begin{subfigure}[t]{0.495\textwidth} \includegraphics[width=210pt]{figs/bosonCounting_quark.eps} \vspace{-0.5cm}\caption{}\end{subfigure} \begin{subfigure}[t]{0.495\textwidth} \includegraphics[width=210pt]{figs/bosonCounting_W.eps} \vspace{-0.5cm}\caption{}\end{subfigure} \vspace{-0.6cm} \end{center} \caption[]{ Normalized rates versus the number of final-state $W/Z$ emissions off a 10 TeV initial-state particle, (a) $d_L$-initiated showers for $q\to Vq$ and $V\to VV$ splittings with full EW FSR (solid histogram), $q\to Vq$ splitting only (long-dashed), and $q\to Vq$ without back-reaction correction (short-dashed). Output from {\tt PYTHIA} $q\to Vq$ weak shower is also included for comparison (dotted histogram). (b) $W_T$-initiated showers for fully constrained FSR (solid histogram), compared with various stages of approximations as labeled. } \label{fig:multiple} \end{figure} While formally any secondary parton splittings involve rate penalties of $O(\alpha_W)$, they become progressively more log-enhanced at high energies. This is again in close analogy to QCD.
However, unlike in QCD, individual weak splittings in arbitrarily soft/collinear limits are in principle both observable and subject to perturbative modeling. Figure \ref{fig:multiple} shows the predicted number of $W/Z$ generated from showering off a highly energetic particle with $E = 10$~TeV. In this calculation, we keep the weak bosons stable and include only the splittings $f \to V f$ and $V \to VV$. QCD showering is also turned off. We construct ``weak jets'' by clustering particles with the anti-$k_T$ algorithm~\cite{Cacciari:2008gp} with $R = \pi/2$, and count the contained $W/Z$ bosons. In Fig.~\ref{fig:multiple}(a), we show the results for a left-handed chiral fermion $(d_L)$. Roughly speaking, we see that the emission of each additional gauge boson comes with an ${\cal O}$(10\%) suppression factor, which can be compared to the naive (not log-enhanced) ${\cal O}$(1\%) suppression typical of adding gauge bosons to lower-energy processes. The solid histogram shows the total rate and the long-dashed histogram indicates the rate with non-Abelian gauge splittings turned off. The difference indicates the large contribution from the gauge boson self-interaction beyond the first emission. As a cross-check, we include as well the prediction from the {\tt PYTHIA8} weak shower~\cite{Christiansen:2014kba}, as shown by the dotted histogram. Our own shower by default includes a back-reaction correction, discussed in Section~\ref{sec:mass_effects}, which approximates the expected suppression of multiple emissions due to dead cone-like effects for off-shell particles. To make a more direct comparison, we have also switched this off, and plotted the result as the short-dashed histogram. The two showers, both modeling unrestricted $q\to Vq$ emissions, are then seen to be in close agreement. 
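The roughly constant ${\cal O}(10\%)$ cost per additional boson suggests, as a zeroth-order guide, a naive independent-emission (Poissonian) multiplicity estimate. A sketch with an assumed mean of $\lambda \simeq 0.1$ emissions per shower (the actual distributions in Fig.~\ref{fig:multiple} of course include back-reaction and coherence effects absent here):

```python
import math

def poisson(n, lam):
    """Naive independent-emission estimate of the n-boson rate."""
    return math.exp(-lam) * lam**n / math.factorial(n)

lam = 0.1  # assumed mean number of W/Z emissions off a 10 TeV d_L (illustrative)
rates = [poisson(n, lam) for n in range(4)]
# Each extra boson costs roughly a factor of lam ~ 10%, compared with the
# naive O(1%) cost per boson in processes without log enhancement.
assert all(r_next / r < 0.11 for r, r_next in zip(rates, rates[1:]))
```
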
In Fig.~\ref{fig:multiple}(b), we show the predicted number of $W/Z$ contained in ``weak jets'' generated from showering off of a highly energetic transversely-polarized $W^\pm$ boson with $E_W = 10$~TeV. As already indicated in Table~\ref{table:splitting_rates}, the overall emission rates are much higher, close to 40\% for the first emission (including both photons and $Z$ bosons). Here we have again considered the effect of turning on/off back-reaction corrections. In addition, from experience with QCD showers, it is known that coherence effects in emission amplitudes lead to effective color-screening and approximate angular-ordering of nested emissions in non-Abelian splittings. To test this, we have also turned on/off a strict angular-ordering veto in our shower simulation. The results, visible in Fig.~\ref{fig:multiple}(b), are that both the back-reaction correction and the angular ordering can have an ${\cal O}(1)$ effect at high multiplicities, but that the two effects come with sizable overlap. Splittings with large opening angles tend to exhibit large back-reaction effects, and vice-versa. This observation provides some evidence that modeling of the high-multiplicity region might be made to quickly converge, though more study is required. It should be noted that at higher energy scales, the production of multiple gauge bosons could be the characteristic signature in many scenarios for physics beyond the SM \cite{Agashe:2007ki,Dennis:2007tv}. \subsection{EW Showers initiated by top quarks} Top quarks are instrumental in searches for new physics related to the EWSB sector, and for exotica such as resonances with large couplings to the third generation, as well as third-generation squarks \cite{Agashe:2013hma}. High-energy tops can be produced copiously at the LHC and at future accelerators, and multi-TeV top quarks offer a particularly rich laboratory to study the effects of weak showering. 
\begin{figure}[t] \begin{center} \begin{subfigure}[t]{0.495\textwidth} \includegraphics[width=210pt]{figs/top_mergedShowerDecay.eps} \vspace{-0.5cm}\caption{}\end{subfigure} \begin{subfigure}[t]{0.495\textwidth} \includegraphics[width=210pt]{figs/tR_to_neutrals.eps} \vspace{-0.5cm}\caption{}\end{subfigure} \vspace{-0.6cm} \end{center} \caption[]{ Invariant mass distributions for EW splittings initiated by a 10~TeV polarized top quark (a) for $t_L\to Wb$ (top curve), $t_R\to Wb$ (middle curve) and a fixed-width Breit-Wigner for unpolarized top decay without shower (lower curve); (b) for $t_R \to ht_L/Z_{L} t_L,\ Z_T t_R$ (upper curves) and to $h t_R,\ Z_{L} t_R$ (lower curves), respectively. } \label{fig:top} \end{figure} We start by considering splittings that follow the same structure as the top quark's weak decay, $t\to W^+b$. Figure \ref{fig:top}(a) shows the resulting $Wb$ mass spectrum from applying this splitting process to 10~TeV top quarks of left-handed or right-handed helicities. One immediate feature is the transition between shower and decay: the Breit-Wigner peak centered at $m_t$ continuously matches onto a high-virtuality shower dominated either by $W_T$ emission from left-handed top quarks, or $W_{L}$ emission from right-handed top quarks.\footnote{To improve the matching, we have distributed the ``decay'' events according to a Breit-Wigner distribution weighted by $\Gamma_t(Q)/\Gamma_t(m_t)$. This constitutes approximately a 30\% effect at the given matching scale of 187~GeV.} The former are simple manifestations of $SU(2)_L$ gauge showers with a larger rate (upper curve), whereas the latter are due to the Goldstone-equivalent Yukawa showers with a smaller rate (middle curve). Ultra-collinear emissions are necessary for properly modeling the shower/decay transition, as shown in more detail in Appendix~\ref{sec:FeynmanRules} (see Fig.~\ref{app:top}).
We also show the unpolarized top decay with a fixed-width Breit-Wigner without shower (lower curve in Fig.~\ref{fig:top}(a)). The events are understandably much more constrained to the region $M(Wb)\simeq m_t$. This difference is important to appreciate, for example because one must properly model the properties of off-shell top quarks in searching for new physics~\cite{Agashe:2006hk,Lillie:2007yh,Frederix:2007gi,Han:2008xb,Degrande:2010kt,Agashe:2013hma} associated with the top quark as well as the Higgs sector. Top quarks may also radiate Higgs bosons and, analogously, longitudinal $Z$ bosons. Both of these Yukawa-showering processes occur with similar rates off of left-handed and right-handed tops, and grow single-logarithmically with energy. In Fig.~\ref{fig:top}(b), we present a 10 TeV right-handed top quark splitting via the EW shower. The rates for $t_R \to ht_L$ and to $Z_L t_L$ are governed by the Yukawa coupling and are essentially the same, due to the GET. The channel $t_R \to Z_T t_R$, shown for reference, is via the gauge coupling of nearly pure $B^0$, which is rather small. The other two channels $t_R \to h t_R,\ Z_L t_R$ are helicity-conserving scalar emissions and are ultra-collinear in nature. The integrated splitting rates for all the above channels are of similar size: $\mathcal{P}(t_R \to h t_L) \simeq \mathcal{P}(t_R \to Z_L t_L) \approx 7.2\times 10^{-3}$, $\mathcal{P}(t_R \to h t_R) \approx \mathcal{P}(t_R \to Z_T t_R) \approx 4.5\times 10^{-3}$, and $\mathcal{P}(t_R \to Z_L t_R) \approx 2.3\times 10^{-3}$. Notably, the rates for the ultra-collinear processes are concentrated toward smaller virtualities (and correspondingly smaller $k_T$s). Though the total splitting rate represented in Fig.~\ref{fig:top}(b) is only a few percent, the fact that top quarks are produced through strong interactions can lead to significant numbers of showered events at a hadron collider.
On the other hand, the splitting rates to a Higgs boson are in sharp contrast to the much smaller rate for an on-shell top quark decay to a Higgs boson in the Standard Model \cite{Han:2013sea}, of the order $10^{-9}$. In considering determination of the top-quark Yukawa coupling in the processes $t\bar t h/t\bar t Z$ at high energies~\cite{Plehn:2015cta}, the qualitative features shown here should be informative. \subsection{EW Showers initiated by neutral bosons} The neutral bosons $\gamma$, $Z_T$, $h$, and $Z_{L}$ contain rich physics at high energies, but their showering requires special treatment due to the presence of sizable interference effects. \subsubsection{$\gamma/Z_T$ coherence} For the $\gamma/Z_T$ system, these interference effects have two aspects: the mass basis is misaligned with the gauge interaction basis, and even when viewed within the $B^0/W^0$ interaction basis, the existence of a preferred physical isospin basis for asymptotic states leads to observable coherence between $B^0$ and $W^0$ exchanges. A rigorous final-state shower must address both of these aspects simultaneously by using Sudakov evolution based on density matrices, as outlined in Section~\ref{sec:interference}. More specific details can be found in Appendix~\ref{app:split}. 
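The mass-to-gauge basis misalignment can be made concrete with tree-level couplings alone: projecting the $\gamma$ and $Z$ production amplitudes of a fermion line onto the $W^0$ direction shows that an $e_R$ source populates essentially no $SU(2)_L$ component, while an $e_L$ source recovers the full $g\,T^3$ coupling. A minimal sketch (assuming the standard weak mixing $A = s_W W^0 + c_W B^0$, $Z = c_W W^0 - s_W B^0$, with illustrative values $s_W^2 \simeq 0.231$ and $g \simeq 0.65$):

```python
import math

sw2 = 0.231                     # assumed sin^2(theta_W)
sw, cw = math.sqrt(sw2), math.sqrt(1.0 - sw2)
g = 0.65                        # assumed SU(2)_L coupling
e = g * sw                      # electromagnetic coupling

def gauge_basis_amplitudes(T3, Q):
    """Rotate the (gamma, Z) couplings of a fermion into the (W0, B0) gauge basis."""
    a_gamma = e * Q
    a_Z = g / cw * (T3 - Q * sw2)
    a_W0 = sw * a_gamma + cw * a_Z
    a_B0 = cw * a_gamma - sw * a_Z
    return a_W0, a_B0

# e_R (T3 = 0, Q = -1): pure B0, so collinear W+W- splittings nearly vanish
a_W0_R, a_B0_R = gauge_basis_amplitudes(0.0, -1.0)
assert abs(a_W0_R) < 1e-12

# e_L (T3 = -1/2, Q = -1): the W0 component is the full g*T3 coupling
a_W0_L, _ = gauge_basis_amplitudes(-0.5, -1.0)
assert abs(a_W0_L - g * (-0.5)) < 1e-12
```

This is the algebra behind the near-complete destructive interference seen for the $e_R$ source below; the shower implementation propagates the full $2\times 2$ density matrix rather than these pure-state projections.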
\begin{figure}[t] \begin{center} \begin{subfigure}[t]{0.495\textwidth} \includegraphics[width=210pt]{figs/V-to-WW_BWeL.eps} \vspace{-0.5cm}\caption{}\end{subfigure} \begin{subfigure}[t]{0.495\textwidth} \includegraphics[width=210pt]{figs/V-to-WW_B0.eps} \vspace{-0.5cm}\caption{}\end{subfigure} \vspace{-0.6cm} \end{center} \caption[]{ Invariant mass distributions for $W^+W^-$ produced in the EW splitting of a 2.5 TeV $\gamma/Z$ neutral boson, initiated from (a) $e_L$ current with full coherent EW FSR (solid curve), fixed-order FSR (dashed curve), and the hypothetical incoherent $\gamma$ or $Z$ splittings (lower curves); (b) $e_R$ current with full coherent EW FSR (solid curve) and the hypothetical incoherent $\gamma$ or $Z$ splittings (upper curve). } \label{fig:WW} \end{figure} As a simple example of the basis alignment issue, consider high energy showering of neutral bosons $\gamma/Z \to W^+W^-$. A naive treatment would shower the photon and $Z$ including the triple-vector processes $\gamma \to W^+ W^-$ and $Z \to W^+ W^-$.\footnote{Such a simplification has been made in~\cite{Ciafaloni:2010ti} for neutral bosons produced in dark matter annihilation.} However, depending on the gauge charges of the initial sources, the interference between these two mass-basis splitting channels can be ${\cal O}(1)$. In particular, for an energetic $\gamma/Z$ emitted from a right-handed chiral electron line, the $SU(2)_L$ content of the produced neutral gauge bosons is practically zero, suggesting a near absence of collinear $W^+W^-$ splittings in the final state. We explicitly compute these splittings assuming either an $e^-_L$ or $e^-_R$ source, which radiate off 2.5~TeV $\gamma/Z$ bosons (e.g., via neutral boson pair-production at a 5~TeV $e^-e^+$ collider). The results are displayed in Fig.~\ref{fig:WW}. Our full EW FSR treatment is labeled as ``coherent shower,'' contrasting with the hypothetical incoherent contributions from individual $\gamma$ or $Z$. 
For the $\gamma/Z$ produced by left-handed electrons in Fig.~\ref{fig:WW}(a), the $W^0$ fraction is prominent from the constructive interference between $\gamma/Z$, leading to a total splitting rate of roughly 15\% (black solid curve) and noticeable Sudakov distortions relative to a simple fixed-order splitting calculation (dashed curve). Fig.~\ref{fig:WW}(b) shows the result for a right-handed electron source, exhibiting the almost complete destructive interference between the $\gamma$ and $Z$ channels, due to the fact that the produced boson is nearly pure $B^0$ when viewed in gauge basis. The small residual rate at high virtualities is actually dominated by the unbroken-phase vector-to-scalar splitting $B^0 \to \phi^+\phi^- \sim W_{L}^+ W_{L}^-$. In our GEG approach, this is simply computed as a distinct process, rather than due to a delicate cancellation. \begin{figure}[t] \begin{center} \begin{subfigure}[t]{0.495\textwidth} \includegraphics[width=210pt]{figs/V-to-ff_nuL.eps} \vspace{-0.5cm}\caption{}\end{subfigure} \begin{subfigure}[t]{0.495\textwidth} \includegraphics[width=210pt]{figs/V-to-ff_eL.eps} \vspace{-0.5cm}\caption{}\end{subfigure} \\ \vspace{0.2cm} \begin{subfigure}[t]{0.495\textwidth} \includegraphics[width=210pt]{figs/V-to-ff_eR.eps} \vspace{-0.5cm}\caption{}\end{subfigure} \vspace{-0.6cm} \end{center} \caption[]{ Invariant mass distributions for fermion pairs produced in the EW splitting of a 2.5 TeV $\gamma/Z$ neutral boson, sourced by an $e_L$ current, for exclusive final states (a) $\nu_L\bar \nu_R$, (b) $\ell_L^- \ell_R^+$, and (c) $\ell_R^- \ell_L^+$. Three treatments of the showering neutral bosons are: hypothetical incoherent $B^0/W^0$ (dotted), incoherent $\gamma/Z$ (dashed), and the full coherent EW evolution (solid). } \label{fig:ff} \end{figure} Perhaps more subtle are the interference effects between different exclusive isospin channels. 
Naively, we might expect to be able to treat $SU(2)_L \times U(1)_Y$ in a manner analogous to $SU(3)_{\rm QCD} \times U(1)_{\rm EM}$, wherein the showers of the two gauge groups are simply run independently of one another. However, weak isospin quantum numbers are directly correlated with electric charge, and are therefore usually experimentally distinguishable. (Consider, e.g., the response of a detector to $e_L$ versus $\nu_L$.) Therefore, weak isospin cannot be summed/averaged like QCD color. As a consequence, observable rate asymmetries arise due to interference between the $SU(2)_L$ and $U(1)_Y$ gauge boson exchanges. Although this is a well-known effect, it has never been implemented in a parton shower framework. Again, we illustrate this by the splittings of 2.5 TeV $\gamma/Z$ neutral bosons, here produced off of a left-handed chiral electron line. This boson may subsequently split into an $\ell^- \ell^+$ or $\nu \bar\nu$ pair. The splitting rates with/without interference effects are shown in Fig.~\ref{fig:ff}.\footnote{For the incoherent sum over mass or gauge eigenstates, we have evolved separate samples starting from the individual pure-state density matrices, and recombined them according to their squared production amplitudes. Sudakov evolution of these density matrices has been switched off.} Besides the full coherent EW evolution (solid curves), two hypothetical incoherent treatments are shown using the $\gamma$-$Z$ mass basis (dashed curves) and the $B^0$-$W^0$ gauge basis (dotted curves). It is instructive to see that the $Z\to \nu_L \bar \nu_R$ contribution alone gives the correct result as seen in Fig.~\ref{fig:ff}(a); $B^0 \to \ell^-_R \ell^+_L$ alone also gives the correct result at high masses as seen in Fig.~\ref{fig:ff}(c), although it misses substantial destructive interference near $m_Z$ due to the unequal $\gamma$ and $Z$ masses; and $\ell^-_L \ell^+_R$ requires a coherent treatment over the whole kinematic regime as seen in Fig.~\ref{fig:ff}(b). 
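The difference between the coherent and incoherent prescriptions can be made concrete with a toy density-matrix calculation, here applied to the $e_R$-sourced $\gamma/Z \to W^+W^-$ example of Fig.~\ref{fig:WW}(b). This is only a schematic sketch: kinematic factors, propagators, and normalizations are ignored, and the numerical inputs are illustrative.

```python
import math

# Toy density-matrix comparison of coherent vs incoherent gamma/Z evolution,
# applied to the e_R-sourced W+W- splitting. All numbers are illustrative;
# kinematic factors and propagators are ignored.
sw2 = 0.231                       # sin^2(theta_W)
sw, cw = math.sqrt(sw2), math.sqrt(1.0 - sw2)
g = 0.65                          # SU(2)_L gauge coupling
e = g * sw                        # electromagnetic coupling

# Production amplitudes off e_R: photon ~ Q*e, Z ~ (g/cw)*(T3 - Q*sw^2),
# with Q = -1 and T3 = 0 for a right-handed electron.
a = [-e, g * sw2 / cw]
# Triple-gauge splitting couplings: gamma-WW ~ e, Z-WW ~ g*cw.
c = [e, g * cw]

rho_coh = [[ai * aj for aj in a] for ai in a]   # keeps off-diagonal interference
rho_inc = [[a[i] ** 2 if i == j else 0.0 for j in range(2)] for i in range(2)]

def rate(rho, c):
    # Splitting rate ~ c^T rho c
    return sum(c[i] * rho[i][j] * c[j] for i in range(2) for j in range(2))

print(rate(rho_coh, c))   # ~0: the e_R-sourced state is pure B^0, with no WW coupling
print(rate(rho_inc, c))   # nonzero: the naive incoherent gamma/Z sum
```

In the full shower the density matrix additionally undergoes Sudakov evolution; the incoherent curves in the figures roughly correspond to keeping only the diagonal entries, as described in the footnote.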
The same issues of course arise in hadron colliders, though the numerical impact is often smaller because of the healthy admixtures of $u/d$ flavors and LH/RH chiralities, as well as the charge-rearranging effects of hadronization. Nonetheless, we strongly advocate for a consistent treatment based on matrix-valued splitting functions and Sudakovs. \begin{figure}[t] \begin{center} \begin{subfigure}[t]{0.495\textwidth} \includegraphics[width=210pt]{figs/H0star-to-WW.eps} \vspace{-0.5cm}\caption{}\end{subfigure} \begin{subfigure}[t]{0.495\textwidth} \includegraphics[width=210pt]{figs/DRhh.eps} \vspace{-0.5cm}\caption{}\end{subfigure} \vspace{-0.6cm} \end{center} \caption[]{ (a) $W^+W^-$ invariant mass distributions from the EW splitting of a 10~TeV $h/Z_L\ (H^{0*}) \to W^+W^-$, labeled by the helicities and charges as $T^+L^-$, $L^+T^-$, $T^+T^-$, and $L^+L^-$. The ``incoherent $T^+L^-$ or $L^+T^-$'' curve shows the corresponding result from showering $h$ and $Z_{L}$ states independently. (b) Kinematic $\Delta R$ separation between the final state Higgs boson pair for the ultra-collinear splitting process $h \to hh$ from a 1~TeV Higgs boson. } \label{fig:h_WW} \end{figure} \subsubsection{Higgs splitting and $h/Z_L$ coherence} \label{sec:hzl} Analogous interference effects also occur between the Higgs boson and longitudinal $Z$ boson. In the high-energy gauge theory, these appear as different components of the same complex scalar, and particular linear combinations carry a partially-conserved ``Higgs number'' that flows through the shower. As a simple illustration, consider high energy production of $W^{+}_T \to (h/Z_{L}) W^+_{L}$. The coherently mixed $h/Z_{L}$ carries Higgs number of $-1$, and corresponds to the ``anti-Higgs'' state $H^{0*}$. This state preferentially splits into $W^+_{T} W^-_{L}$ (or, equivalently, $W^+_{T} \phi^-$), as shown in the top curve of Fig.~\ref{fig:h_WW}(a), labeled by the $W$ helicities and charges as $T^+L^-$. 
The charge conjugate state $W^+_{L} W^-_{T}$ (labeled $L^+T^-$) carries the opposite Higgs number and thus is highly suppressed. It arises only at low virtuality, mainly due to the Higgs-$Z$ mass difference. An incoherently-showered admixture of $h$ and $Z_{L}$ would instead distribute probability equally between these two different polarization channels, as shown by the middle curve in the figure. (A similar charge-polarization correlation also occurs in splittings to top quark pairs.) The contributions from the other sub-leading ultra-collinear polarization channels are shown by curves labeled $L^+L^-$ and $T^+T^-$. Though not obvious from the virtuality distributions, we note that coherence effects also significantly influence these channels. In particular, the ultra-collinear splitting $H^{0*} \to W_{L}^+ W_{L}^-$ inherits the soft divergence from the regular gauge splitting $H^{0*} \to W_{T}^+ W_{L}^-$, but only in the limit as the $W_{L}^+$ becomes soft. Similarly for the CP-conjugate process. The individual $h$ and $Z_{L}$ incoherent showers, on the other hand, exhibit parts of the soft-singular behaviors of each of their $H^0$ and $H^{0*}$ components. See Table~\ref{tab:broken_scalar_splittings}. As a final novel example of neutral boson showering, we consider the purely ultra-collinear splitting $h \to hh$. This proceeds through the Higgs cubic interaction that arises after EWSB, and it is the unique $1\to2$ splitting process in the SM that is strictly proportional to the Higgs self-coupling $\lambda_h$. Isolating the $h$ component of a general energetic $h/Z_{L}$ state, the total splitting rate comes out to about $0.14\%$ for $E \gg m_h$. We illustrate in Fig.~\ref{fig:h_WW}(b) the kinematic distribution $\Delta R(h,h)$, for an example initial Higgs energy of 1~TeV. The distribution peaks at roughly $2m_h/E$, which in this example is close to $0.25$. 
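The quoted peak position follows from back-of-envelope kinematics: for a roughly symmetric splitting of a boson of energy $E$, each daughter carries energy $\sim E/2$ at a natural opening angle of order $m_h/(E/2)$, giving $\Delta R \sim 2m_h/E$. A minimal numerical check (where $m_h = 125$~GeV is an input here, not taken from the text):

```python
# Back-of-envelope check of the Delta R peak for h -> hh; the scaling
# Delta R ~ 2*m_h/E is the estimate quoted in the text.
m_h = 125.0     # GeV (input assumption)
E = 1000.0      # GeV, example initial Higgs energy
dR_peak = 2.0 * m_h / E
print(dR_peak)  # 0.25
```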
Generally, the majority of the phase space for high-energy production $hhX$ for any $X$ becomes dominated by such collinear configurations. While this ultra-collinear splitting process lacks any log-enhancements, integrating the splitting phase space yields a total rate relative to $hX$ that scales like $\lambda_h/16\pi^2$, whereas the non-collinear regions contribute a relative rate of order $\lambda_h^2/16\pi^2 \times v^2/E^2$. Therefore the ``collinear enhancement'' here is $E^2/\lambda_hv^2 \sim E^2/m_h^2$, rather than a conventional logarithm. Though the splitting rate is still quite small, for a 100~TeV $pp$ collider with tens of~ab$^{-1}$ integrated luminosity, we expect thousands of such events arising from the (also novel) high-energy production process $qV_{L} \to q^{(\prime)}(h/Z_{L})$ at $p_T \sim 1$~TeV. In future precision Higgs physics \cite{deFlorian:2016spz}, an accurate description of such Higgs splittings could play an interesting role. \subsection{EW showers by a new heavy state: $W'$ example} \label{sec:Wprime} The possibility of multiple weak boson emissions in the same event, and indeed even from the same parent particle, leads us inevitably to start considering final states in terms of ``weak jets'' rather than in terms of individual, well-separated EW-charged particles (possibly dressed with QCD and EM radiation). Besides altering the energy spectra of the particles emerging from a hard interaction, EW emissions can significantly alter the multiplicity and flavor structure of an event. In particular, this new feature could have major consequences for how a new physics signal would be detected and reconstructed. While it is beyond the scope of the current paper to present detailed examples for physics beyond the SM in high energy collisions \cite{Morrissey:2009tf}, we study a simple case for illustration. We consider the decay of a narrow heavy $W^{\prime +}$ resonance into $\nu_L\ell^+_R$, with a left-handed coupling and $M_{W'} \gg m_{W}$. 
Nominally, the resonance is reconstructed from the charged lepton and the missing transverse momentum using the transverse mass variable $M_T(\ell,\displaystyle{\not}E_T)$, which gives a Jacobian peak at $M_{W'}$. When multiple EW emissions are taken into account, various new flavor channels open up, as well as additional kinematic handles that can facilitate more accurate resonance reconstruction. For example, in~\cite{Hook:2014rka}, it was pointed out that collinear weak emissions $\nu \to Z\nu$ can effectively reveal the neutrino's direction-of-flight when the $Z$ decays visibly. For illustration here, we simply divide up the showered signal by inclusive lepton multiplicity, focusing on channels up to three charged leptons. Quarks and $\tau$-leptons may be present in the secondary $W/Z$ showering/decays, but are ignored here for simplicity. Within each lepton multiplicity channel, we approximately reconstruct the resonance using the ``cluster transverse mass'' variable $M_{T \rm cl}$, defined as \cite{Barger:1987re} \begin{equation} M_{T \rm cl}^{2} = \left(\sqrt{p_{T,\ell's}^2 + M_{\ell's}^2} + \displaystyle{\not}E_T \right)^2 - (\vec p_{T,\ell's} + \vec{\displaystyle{\not}E_T } )^2. \label{eq:mc} \end{equation} The result of this analysis is displayed in Fig.~\ref{fig:Wprime}(a), taking $M_{W'} = 20$~TeV. Solid curves are those from the nominal EW shower for $1\ell + X$, $2\ell + X$ and $3\ell + X$, where $X$ represents the rest of the particles in the event (mainly neutrinos and quarks). The dotted line shows the result of the naive two-body decay calculation, without the parton shower. To focus on the weak-scale contributions, we have terminated the EW shower at a lower virtuality of 50~GeV. The radiation from showering reduces the total visible rate within 10\% of the nominal peak position by about $10\%$. In this window, the relative contributions from the 1-lepton, 2-lepton, and 3-lepton channels are 0.81, 0.13, and 0.06, respectively. 
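For concreteness, Eq.~(\ref{eq:mc}) can be implemented in a few lines. The helper below (names and inputs illustrative) builds $M_{T \rm cl}$ from the visible-lepton four-momenta and the missing transverse momentum, and reduces to the usual transverse mass for a single massless lepton:

```python
import math

# Sketch implementation of the cluster transverse mass, Eq. (eq:mc).
# leptons: list of 4-momenta (E, px, py, pz) in GeV; met: (mex, mey) in GeV.
def m_t_cluster(leptons, met):
    E  = sum(p[0] for p in leptons)
    px = sum(p[1] for p in leptons)
    py = sum(p[2] for p in leptons)
    pz = sum(p[3] for p in leptons)
    m2  = max(E * E - px * px - py * py - pz * pz, 0.0)  # M_{l's}^2
    pt2 = px * px + py * py                              # p_{T,l's}^2
    met_mag = math.hypot(met[0], met[1])
    mtcl2 = (math.sqrt(pt2 + m2) + met_mag) ** 2 \
            - ((px + met[0]) ** 2 + (py + met[1]) ** 2)
    return math.sqrt(max(mtcl2, 0.0))

# Single massless lepton back-to-back with the missing momentum:
# reduces to the standard transverse mass, here 2*pT = 200 GeV.
print(m_t_cluster([(100.0, 100.0, 0.0, 0.0)], (-100.0, 0.0)))
```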
Although higher lepton multiplicities are rarer, their $M_{T \rm cl}$ distributions are also more sharply-peaked. It is also instructive to compare these predictions to those of a simple fixed-order splitting calculation, which captures the leading-log corrections but does not resum them. We find that this calculation predicts 9\% more 1-lepton events than the full EW shower in the near-peak region. \begin{figure}[t] \begin{center} \begin{subfigure}[t]{0.495\textwidth} \includegraphics[width=220pt]{figs/Wprime-ev.eps} \vspace{-0.5cm}\caption{}\end{subfigure} \\ \vspace{0.2cm} \begin{subfigure}[t]{0.495\textwidth} \includegraphics[width=210pt]{figs/Wprime.eps} \vspace{-0.5cm}\caption{}\end{subfigure} \begin{subfigure}[t]{0.495\textwidth} \includegraphics[width=210pt]{figs/Wprime_withQCD.eps} \vspace{-0.5cm}\caption{}\end{subfigure} \vspace{-0.6cm} \end{center} \caption[]{ Showered events from 20 TeV $W'^+$ decays. (a)~$W'^+ \to \nu_L \ell_R^+$ cluster transverse mass distributions, running the full EW shower and breaking down the signal by inclusive lepton multiplicity (solid curves), as well as the uncorrected two-body decay result (dotted curve). (b)~$W'^+ \to t_L \bar b_R$ quark-pair invariant mass distributions, running the full EW shower, and (c)~combining EW and QCD showering. } \label{fig:Wprime} \end{figure} Like $e_L$ and $\nu_L$, left-handed top and bottom quarks live together in a weak isospin doublet, and can also convert into one another through soft/collinear $W^\pm$ emissions. Similar to the Bloch-Nordsieck violation effect discussed above for PDFs, the distinction between $t_L$- and $b_L$-jets therefore becomes somewhat blurred at high energy~\cite{Manohar:2014vxa}. This effect, which is double-log enhanced at fixed order, is automatically resummed in the parton shower. Consider again, as a simplified example, a narrow 20~TeV $W^{\prime +}$ resonance, this time decaying to $t_L \bar b_R$ of 10~TeV each in energy. 
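In a simplified picture where each leg of the $t_L \bar b_R$ system undergoes at most one independent splitting ($t \to W^+ b$ with probability $p_t$, $\bar b \to W^- \bar t$ with probability $p_b$; an assumption that ignores correlations and multiple emissions, with illustrative input probabilities), the relative rates of the four flavor channels follow as simple products:

```python
# Simplified product-rate model for the flavor channels of W'+ -> t bbar,
# with at most one W splitting per quark leg. The per-leg probabilities
# p_t (t -> W+ b) and p_b (bbar -> W- tbar) are illustrative inputs.
p_t, p_b = 0.105, 0.135

rates = {
    "t bbar": (1 - p_t) * (1 - p_b),  # no splitting
    "b bbar": p_t * (1 - p_b),        # single t -> W+ b
    "t tbar": p_b * (1 - p_t),        # single bbar -> W- tbar
    "b tbar": p_t * p_b,              # one splitting on each leg
}
for ch, r in rates.items():
    print(ch, round(r, 3))
# The four rates sum to 1 by construction.
```

With per-leg probabilities of order 10\%, these products are comparable to the channel rates found in the full shower discussed below, though the shower also resums multiple emissions.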
The final flavor content of the two heavy quarks should gradually average out. We show in Fig.~\ref{fig:Wprime}(b) the mass spectrum of the two-quark system resulting from the decay plus EW parton shower, individually in $t \bar b$, $b \bar b$, $t \bar t$, and $b \bar t$ channels. (For this purpose, the threshold between the ``shower'' and ``decay'' of a top quark is set to $m_t + 10\Gamma_t$.) Respectively, these are dominated by unshowered events, events with a single $t \to W^+ b$ splitting, events with a single $\bar b \to W^- \bar t$ splitting, and events with one of each such splitting. The relative rates of the four channels are about 0.77, 0.09, 0.12, and 0.015. Within 10\% of the $W'$ mass peak, the nominal $t\bar b$ signal would be reduced by almost 30\% from purely electroweak effects. Of course, this observation invites ``weak jet'' reconstructions that add back in the emitted gauge and scalar particles, though inferring the resonance's charge becomes somewhat more complicated. Finally, we can consider the interplay of EW and QCD radiation, which is shown in Fig.~\ref{fig:Wprime}(c) for the mass spectra of the quarks when $t\to gt$ and $b\to gb$ emissions are also turned on. Again the shower is terminated at 50~GeV virtuality to focus on effects at and above the EW scale. The full Standard Model showering leads to dramatic distortions in both mass and flavor distributions. Now the $W'$ mass could be more accurately reconstructed by adding back in both the EW {\it and} QCD radiation, which in practice may overlap heavily since emitted weak bosons dominantly decay hadronically. \section{Introduction} \label{sec:intro} \subsection{Electroweak parton showers} Process-independent parton showers in QED and QCD have long served as invaluable tools for particle physics in high energy collisions and decays. 
By exploiting formal factorizations between hard/wide-angle physics and soft/collinear physics~\cite{Collins:1984kg,Collins:1989gx,Bengtsson:1986et}, the extremely complicated exclusive structure of high energy scattering events can be viewed in a modular fashion. The dominant flows of energy and other quantum numbers are modeled with manageable, low-multiplicity matrix elements. These are subsequently dressed with soft/collinear radiation, and hadronization applied to bare color charges. Detailed implementations have varied significantly in specific approach, but showering programs such as {\tt PYTHIA}~\cite{Sjostrand:2007gs}, {\tt HERWIG}~\cite{Bahr:2008pv}, and {\tt SHERPA}~\cite{Gleisberg:2008ta} are now standard workhorses required for describing realistic collider events. They have also found widespread use in modeling the interactions of high-energy cosmic rays \cite{Knapp:2002vs}, as well as the exclusive products of dark matter annihilation and decay \cite{Cirelli:2008pk,Cirelli:2010xx}. Collinear parton showers become a ubiquitous phenomenon for processes at energies far above the mass scales of the relevant final-state particles, such as the electron mass in QED or the confinement scale in QCD. With the upgraded LHC and proposed future accelerators \cite{Arkani-Hamed:2015vfh,Mangano:2016jyj,Golling:2016gvc} and a growing suite of instruments sensitive to indirect signals of multi-TeV dark matter~\cite{Abramowski:2013ax,Lefranc:2015vza,Carr:2015hta}, we are now forced to confront processes at energies far above the next known mass threshold in Nature, the electroweak (EW) scale $v\approx 246$~GeV (the electroweak vacuum expectation value, ``VEV'' in short). Consequently, we are entering a phase in particle physics where it becomes appropriate to consider electroweak parton showers, extending the usual $SU(3)_{\rm QCD} \times U(1)_{\rm EM}$ showers into the fully $SU(3)_{\rm QCD} \times SU(2)_L \times U(1)_Y$ symmetric framework of the Standard Model (SM). 
In effect, we will start to see electroweak gauge bosons, Higgs bosons, and top quarks behaving like massless partons~\cite{Dawson:2014pea,Han:2014nja}, appearing both as constituents of jets \cite{Almeida:2008tp} as well as of initial-state beam particles. This is in stark contrast to the conventional perspective in which they are viewed as ``heavy'' particles that are only produced as part of the hard interaction. The concept of electroweak bosons as partons has a long history, beginning with the ``effective-$W$ approximation"~\cite{Kane:1984bb,Dawson:1984gx,Chanowitz:1985hj}. This picture of electroweak vector bosons radiating off of initial-state quarks is now strongly supported by the experimental observation of Higgs boson production via vector boson fusion (VBF) at the LHC~\cite{LHCVBF}. As we imagine probing VBF-initiated processes at even higher energies, with both the initial weak bosons and their associated tag jets becoming significantly more collinear to the beams, the idea of weak parton distribution functions (PDFs) within protons becomes progressively more appropriate. Many calculations have further revealed large negative electroweak virtual corrections to a variety of exclusive high-energy processes, wherein real emission of additional weak bosons is not included. Such large ``non-emission'' rate penalties indicate the onset of the universal, logarithmically-enhanced Sudakov form-factors characteristic of massless gauge theories~\cite{Melles:2000gw,Beenakker:2000kb}. For example, exclusive di-jet production receives corrections from virtual $W/Z$ exchange that begin to exceed $-10\%$ for transverse momenta exceeding 3~TeV~\cite{Moretti:2006ea,Dittmaier:2012kx}, and grow to approximately $-$30\% at the 10's of TeV energies expected at future hadron colliders. 
For processes that include weak bosons at the hard event scale, such as $\gamma/Z/W$+jets or vector boson pair production, the corrections can quickly grow to $O(1)$~\cite{Kuhn:2004em,Kuhn:2005az,Kuhn:2005gv,Kuhn:2007qc,Kuhn:2007cv,Hollik:2007sq,Becher:2013zua}. A process-independent framework for extracting all such log-enhanced electroweak virtual corrections at fixed leading-order has been developed in~\cite{Denner:2000jv,Denner:2001gw}, and next-to-leading logarithmic resummation of the gauge corrections has been achieved using SCET formalism in~\cite{Chiu:2007yn,Chiu:2007dg,Chiu:2008vv,Chiu:2009mg,Chiu:2009ft}. The total rates of real $W/Z$ emissions and other electroweak parton splittings have a direct correspondence with the ``lost'' event rates encoded in the negative electroweak virtual corrections, with matching logarithmic enhancements in accordance with the Kinoshita-Lee-Nauenberg theorem. Iterating this observation across all possible nested emissions and loops within a given process builds up the usual parton shower picture, allowing formal resummations of the logarithms that would otherwise still appear in well-defined exclusive rates. Many studies have addressed aspects of electroweak parton showering in the past several years~\cite{Ciafaloni:2000rp,Ciafaloni:2000gm,Ciafaloni:2005fm,Baur:2006sn,Bell:2010gi,Chiesa:2013yma,Christiansen:2014kba,Krauss:2014yaa,Bauer:2016kkv}. Parts of the complete shower are already available in public codes and are being tested at the LHC, with ATLAS recently making a first observation of collinear-enhanced $W/Z$ radiation within QCD jets~\cite{Aaboud:2016ylh}. A detailed listing of electroweak collinear splitting functions and PDF evolution equations, restricted to processes that survive in the unbroken limit, has been worked out in~\cite{Ciafaloni:2005fm}. 
There, the effects of electroweak symmetry breaking (EWSB) are addressed minimalistically by including a hard phase space cutoff and working in a preferred isospin basis. These results and more recent SCET-based calculations have also been adapted for the problem of TeV-scale dark matter annihilation in~\cite{Ciafaloni:2010ti,Cavasonza:2014xra,Bauer:2014ula,Ovanesyan:2014fwa,Baumgart:2014vma,Baumgart:2014saa,Baumgart:2015bpa}. For general-purpose applications, recent versions of {\tt PYTHIA} incorporate radiation of $W$ and $Z$ bosons off of light fermions~\cite{Christiansen:2014kba}, including a detailed model of how this component of the shower turns off due to $W/Z$ mass effects. A study using {\tt SHERPA}~\cite{Krauss:2014yaa} instead breaks down these emissions into separate transverse ($V_T$) and longitudinal ($V_{L}$) components, coupling in the latter strictly using Yukawa couplings by appealing to the Goldstone-boson Equivalence Theorem (GET) \cite{Lee:1977eg,Chanowitz:1985hj}. The problem has been approached in a different way within {\tt ALPGEN}~\cite{Mangano:2002ea,Chiesa:2013yma}, by multiplying exclusive hard event rates with the fixed-order Sudakov factors of~\cite{Denner:2000jv,Denner:2001gw} and supplementing with exact fixed-order real emission processes. This approach, which is itself a first step towards electroweak shower matching, works well when the soft/collinear phase space enhancements are modest and the need for added accuracy of higher-multiplicity hard event generation balances the added computational complexity. However, a complete matching prescription will also ultimately involve a dedicated parton shower step, especially when convolved with QCD radiation. The simpler, process-independent parton shower approach will also become particularly useful in new physics applications~\cite{Hook:2014rka,Rizzo:2014xma}. 
\subsection{Our approach} Notably, no existing general-purpose parton showering algorithm that is capable of generating fully exclusive events has addressed the full scope of universal collinear electroweak physics. In particular, a complete treatment must include the high rate of non-Abelian splittings amongst the weak bosons themselves, as well as showers that involve longitudinal/scalar states and many of the sometimes subtle effects of spontaneous symmetry breaking. The goal of the present paper is to outline such an algorithm, providing a comprehensive framework in which all collinear electroweak showering phenomena can be implemented, and including a systematic treatment of EWSB. Towards this end, we derive and tabulate the complete set of electroweak splitting functions in the broken phase, including the massive fermions, gauge bosons, and the Higgs boson. These generalize and unify both the unbroken-phase evolution equations of~\cite{Ciafaloni:2005fm} and the purely broken-phase effects already observed within the effective-$W$ approximation, namely the generation of longitudinal vector boson beams from massless fermions~\cite{Kane:1984bb,Dawson:1984gx,Chanowitz:1985hj}. We further investigate some of the physical consequences of these various electroweak showering phenomena. Relative to QED and QCD showers, the complete electroweak parton shower exhibits many novel features. At the level of the unbroken theory at high energies, the shower becomes chiral and the particle content is extended to include an EW-charged scalar doublet. Most of the degrees of freedom contained in this scalar are to be identified with the longitudinal gauge bosons via the Goldstone-boson Equivalence Theorem. Including Yukawa couplings, the set of core splitting function topologies expands from the usual three to seven. 
EWSB also already makes a subtle imprint here due to the presence of a preferred isospin basis for asymptotic states, leading to interference and self-averaging effects between different exclusive isospin channels. The latter are intimately related to ``Bloch-Nordsieck violation'' when occurring in the initial state~\cite{Ciafaloni:2000rp,Bell:2010gi,Manohar:2014vxa}. As the shower evolves down through the weak scale, it becomes physically regulated by the appearance of gauge boson, scalar, and fermion masses. Unlike in QCD where the shower regulation occurs non-perturbatively due to confinement, or in QED where a small photon mass is sometimes used as an artificial regulator for soft emissions, the electroweak shower exhibits a perturbative transition with genuinely massive gauge bosons. It is possible to describe this transition rather accurately, but doing so requires a careful accounting of symmetry-violating effects beyond simple kinematic suppressions, and a consistent elimination of gauge artifacts. In particular, Goldstone-boson equivalence ceases to hold at relative transverse momenta of order the weak scale, allowing for an additional burst of many ``ultra-collinear'' radiation processes that do not exist in the unbroken theory, and are highly suppressed at energy scales $k_T \gg v$. To cleanly isolate these effects, we introduce a novel gauge dubbed ``Goldstone Equivalence Gauge'' (GEG). This is a particularly convenient choice of non-covariant gauge, allowing a completely transparent view of Goldstone-boson equivalence within the shower, as well as systematic corrections away from it in the splitting matrix elements, organized in a power series in VEV factors. The naively bad high energy behavior of the longitudinal gauge bosons is deleted, and the Goldstone fields are allowed to interpolate physical states, at the cost of re-introducing explicit gauge-Goldstone boson mixing. 
Our formalism developed here has deep implications and rich applications at TeV-scale energies and beyond. Some aspects include EW parton distribution functions associated with initial state radiation (ISR), multiple emissions in EW final state radiation (FSR), consistent merging of EW decays with EW showering, a quantum-coherent treatment of the Sudakov evolution of $\gamma/Z/h$ states, as well as modeling of general ultra-collinear processes including, e.g., $t_R \to h t_R$ and $h\to hh$. We also make some preliminary studies of the impact of EW showering on new physics searches in the context of a heavy $W'$ decay. Quite generally, we begin to see the emergence of the many nontrivial phenomena of ``weak jets'' across a broad range of SM and BSM phenomena. Before proceeding, we also clarify what is {\it not} covered in our current treatment. We make exclusive use here of the collinear approximation, which, in physical gauges such as GEG, explicitly factorizes all soft and collinear divergences particle-by-particle, isolating them to $1\to 2$ real emission diagrams and self-energy loops. This furnishes a formally leading-log model of EW showering, capturing all double-log effects from the soft-collinear region of gauge emissions, as well as the single-logs associated to all hard-collinear splittings. The former are identical to the double-logs that would be inferred from the collinear limits of the eikonal approximation, whose particle-by-particle factorization can be seen upon application of Ward identities~\cite{Denner:2000jv,Denner:2001gw,Bell:2010gi}. However, there are additional single-log soft divergences within gauge emission interferences and virtual exchanges between different particles, which do not factorize so simply. For non-singlet EW ensembles, these contributions lead to global entanglements of isospin quantum numbers between different particles in the event, which are absent in our shower. 
These isospin entanglements are somewhat analogous to the global kinematic entanglements that occur due to soft gluon emissions/exchanges at NLL level in QCD. Nonetheless, the dominant effects of isospin rearrangements, in particular the Bloch-Nordsieck violation, arise already at the double-log level, and are modeled by our shower up to residual single-log ambiguities. We will address approaches to the NLL resummation of isospin entanglements in a future work~\cite{EWshower}. The rest of the paper is organized as follows. We begin in Section~\ref{sec:split} with a generic discussion of splitting and evolution formalism with massive particles. We then outline some of the other nontrivial features such as PDFs for massive particles, interference between different mass eigenstates, showers interpolating onto resonances, and back-reaction effects from multiple emissions. In Section~\ref{sec:unbroken}, we introduce the splitting kernels for the unbroken electroweak theory, namely $SU(2)_L \times U(1)_Y$ gauge theory with massless fermions in SM representations, a single (massless) scalar doublet, and Yukawa interactions. We then proceed in Section~\ref{sec:broken} to generalize these results to the broken phase. After a discussion of the violation of the Goldstone-boson Equivalence Theorem, we introduce the Goldstone Equivalence Gauge. We then discuss the EWSB modifications to the unbroken splitting functions and present a complete list of ultra-collinear processes that arise at leading-order in the VEV. Section~\ref{sec:implementation} explores some key consequences of electroweak showering in final-state and initial-state splitting processes, including a discussion of EW parton distribution functions and multiple EW final state radiation. We emphasize the novel features of the EW shower and illustrate some of the effects in the decay of a heavy vector boson $W'$. We summarize and conclude in Section~\ref{sec:conclusions}. 
Appendices give supplementary details of Goldstone Equivalence Gauge, the corresponding Feynman rules and illustrative examples of practical calculations, more details on the density-matrix formalism for coherent Sudakov evolution, and a short description of our virtuality-ordered showering program used for obtaining numerical FSR results. \section{Showering Preliminaries and Novel Features with EWSB} \label{sec:split} We first summarize the general formalism for the splitting functions and evolution equations with massive particles that forms the basis for the rest of the presentation. We then lay out some other novel features due to EWSB. \subsection{Splitting formalism} \begin{figure}[t] \begin{center} \begin{picture}(350,100)(0,0) \SetColor{Black} \SetWidth{2} \ArrowLine(55,45)(90,25) \LongArrow(90,25)(130,25) \LongArrow(90,25)(120, 0) \SetWidth{1} \ArrowLine(0,0)(50,50) \ArrowLine(0,100)(50,50) \LongArrow(50,50)(120,100) \LongArrow(50,50)(130, 60) \Vertex(110,64){1.5} \Vertex(108,73){1.5} \Vertex(105,82){1.5} \GCirc(50,50){12}{0.7} \Text(135,85)[]{$\boldsymbol X$} \Text(70,25)[]{$\boldsymbol A$} \Text(138,25)[]{$\boldsymbol B$} \Text(130,0)[]{$\boldsymbol C$} \SetOffset(225,0) \SetWidth{2} \ArrowLine(-20,0)(10,20) \ArrowLine(10,20)(45,45) \LongArrow(10,20)(80,10) \SetWidth{1} \ArrowLine(-20,100)(50,50) \LongArrow(50,50)(120,100) \LongArrow(50,50)(120, 30) \Vertex(109,50){1.5} \Vertex(110,60){1.5} \Vertex(108,70){1.5} \GCirc(50,50){12}{0.7} \Text(135,65)[]{$\boldsymbol X$} \Text(-27,-3)[]{$\boldsymbol A$} \Text(15,40)[]{$\boldsymbol B$} \Text(90,10)[]{$\boldsymbol C$} \Text(-27,103)[]{$\boldsymbol{B'}$} \end{picture} \end{center} \caption[]{Schematic processes involving a collinear splitting $A\to B+C$ in either the final state (left) or initial state (right).} \label{fig:split} \end{figure} Consider a generic hard process nominally containing a particle $A$ in the final state, slightly off-shell and subsequently splitting to $B$ and $C$, as depicted in 
Fig.~\ref{fig:split} (left figure). In the limit where the daughters $B$ and $C$ are both approximately collinear to the parent particle $A$, the cross section can be expressed in a factorized form~\cite{Collins:1989gx} \begin{equation} d\sigma_{X,BC} \,\simeq\, d\sigma_{X,A} \times d{\mathcal P}_{A\rightarrow B+C} \, , \label{eq:FSR} \end{equation} where $d{\mathcal P}$ is the {\it differential splitting function} (or probability distribution) for $A\to B+C$. A given splitting can also act as the ``hard'' process for later splittings, building up jets. The factorization of collinear splittings applies similarly for initial-state particles, leading to the picture of parton distribution functions (PDFs) for an initial state parton $B$ or $C$, as in Fig.~\ref{fig:split} (right figure), \begin{equation} d\sigma_{AB'\to CX} \,\simeq\, d{\mathcal P}_{A\rightarrow B+C} \times d\sigma_{BB' \to X} \, . \label{eq:ISR} \end{equation} We will discuss this situation in the next subsection. While our main focus here is on the leading-log resummation of these splitting effects in a parton shower/evolution framework, at a leading approximation Eqs.~(\ref{eq:FSR}) and~(\ref{eq:ISR}) can also be taken as-is, with a unique splitting in the event and no virtual/resummation effects, in order to quickly capture the tree-level collinear behavior of high energy processes. In our further analyses, we will refer to such a treatment as a ``fixed-order EW shower'' or ``fixed-order EW FSR (ISR).'' Integrating out the azimuthal orientation of the $B+C$ system, the splitting kinematics are parametrized with two variables: a dimensionful scale (usually chosen to be approximately collinear boost-invariant) and a dimensionless energy-sharing variable $z$. 
Common choices for the dimensionful variable are the daughter transverse momentum $k_T$ relative to the splitting axis, the virtuality $Q$ of the off-shell particle in the process, and variations proportional to the daughters' energy-weighted opening angle $\theta E_A$. Our descriptions here will mainly use $k_T$, as this makes more obvious the collinear phase space effects in the presence of masses. For our numerical results in Section~\ref{sec:implementation}, we switch to virtuality, which allows for a simpler matching onto $W/Z/t$ decays. Mapping between any of these different scale choices is however straightforward. The energy-sharing variable $z$~($\bar z \equiv 1-z$) is commonly taken to be the energy fraction of $A$ taken up by $B$~($C$). The splitting kinematics takes the form \begin{eqnarray} E_B \approx z E_A,\quad E_C \approx \bar z E_A, \quad k_T^{} \approx z\bar z E_A \theta \ . \end{eqnarray} When considering splittings involving massive or highly off-shell particles, various possible definitions of $z$ exist which exhibit different non-relativistic limits. Besides strict energy fraction, a common choice is the light-cone momentum fraction, $z \equiv (E_B+\vec k_{B}\cdot\hat k_A)/(E_A+|\vec k_A|)$. Our specific implementation in Section~\ref{sec:implementation} uses the three-momentum fraction \begin{eqnarray} z \equiv {|\vec k_B| \over |\vec k_B| + |\vec k_C| }, \end{eqnarray} which makes phase space suppression in the non-relativistic limit more transparent. However, in the relativistic regime, where the collinear factorization is strictly valid, all of these definitions are equivalent, and we do not presently make a further distinction.\footnote{There is unavoidably some frame-dependence to this setup, as there is in all parton showers that are defined strictly using collinear approximations. 
A more complete treatment would exhibit manifest Lorentz-invariance and control of the low-momentum region, at the expense of more complicated book-keeping of the global event's kinematic and isospin structure, by using superpositions of different $2\to 3$ dipole splittings. Extending our treatment in this manner is in principle straightforward, but beyond the scope of the present work.} In the simplest cases, generalizing the collinear splitting function calculations to account for masses is straightforward. Up to the non-universal and convention-dependent factors that come into play in the non-relativistic/non-collinear limits, the splitting functions can be expressed as \begin{equation} \frac{d{\mathcal P}_{A\rightarrow B+C}}{dz\,dk_T^2} \,\simeq\, {1\over 16\pi^2} \ { z \bar z \: |{\cal M^{\rm (split)}}|^2 \over (k_T^2 + \bar z m_B^2 + z m_C^2 - z \bar z m_A^2)^2 } \ . \label{eq:split} \end{equation} Here, ${\cal M^{\rm (split)}}$ is the $A\to B+C$ splitting matrix-element, which can be computed from the corresponding amputated $1\to 2$ Feynman diagrams with on-shell polarization vectors (modulo gauge ambiguities, which we discuss later). This may or may not be spin-averaged, depending on how much information is to be kept in the shower. Depending upon the kinematics, the mass-dependent factors in the denominator act to either effectively cut off collinear divergences at small $k_T$ or, in final-state showers, to possibly transition the system into a resonance region. In cases where interference between different mass eigenstates can be important, this basic framework must be further generalized. Resonance and interference effects are introduced in Section~\ref{sec:novel_features}. On dimensional grounds, $|{\cal M^{\rm (split)}}|^2$ goes like either $k_T^2$ or some combination of the various $m^2$'s. 
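For concreteness, the master formula in Eq.~(\ref{eq:split}) can be sketched numerically as follows (a minimal stand-alone illustration, not our shower code; the function and argument names are ours, and $|{\cal M}^{\rm (split)}|^2$ must be supplied for the specific channel):

```python
# Sketch of the massive collinear splitting function, Eq. (eq:split):
# dP/(dz dkT^2) = (1/16pi^2) * z*zbar*|M|^2
#                 / (kT^2 + zbar*mB^2 + z*mC^2 - z*zbar*mA^2)^2
import math

def split_dP(z, kT2, M2_split, mA=0.0, mB=0.0, mC=0.0):
    """Differential splitting probability dP/(dz dkT^2).

    M2_split is the squared 1->2 splitting matrix element, which scales
    like kT^2 for conventional splittings or like m^2 for ultra-collinear
    ones. Masses in the denominator cut off the small-kT divergence.
    """
    zbar = 1.0 - z
    denom = kT2 + zbar * mB**2 + z * mC**2 - z * zbar * mA**2
    return z * zbar * M2_split / (16.0 * math.pi**2 * denom**2)
```

For massless partons the denominator reduces to $k_T^4$, so a matrix element scaling like $k_T^2$ reproduces the conventional $dk_T^2/k_T^2$ behavior, while an $m^2$-scaling matrix element gives the ultra-collinear $m^2\,dk_T^2/k_T^4$ behavior discussed next.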
Conventional splitting functions typically scale like $dk_T^2/k_T^2$, which is exhibited by all of the gauge and Yukawa splittings of the massless unbroken electroweak theory, as will be shown in Section~\ref{sec:unbroken}. There can also be mass-dependent splitting matrix elements that lead to $m^2 dk_T^2/k_T^4$ type scaling. These splittings are highly suppressed for $k_T \gtrsim m$. However, they are much more strongly power-enhanced at low $k_T$, a behavior which we call {\it ultra-collinear}. Upon integration over $k_T$, the total rate for an ultra-collinear splitting comes out proportional to dimensionless combinations of couplings and masses, with the vast majority of the rate concentrated near $k_T \sim m$. Such processes exist in familiar contexts like QED and QCD with massive fermions, for example the helicity-flipping splittings $e_L \to \gamma e_R$ and $g \to b_L \bar b_L$. They are usually not treated as distinct collinear physics with their own universal splitting functions, though they are crucial for systematically modeling shower thresholds. We choose to treat them on independent footing, since the threshold behaviors of the electroweak shower are highly nontrivial, including processes that are qualitatively different from the massless limit. In both the conventional collinear and ultra-collinear cases, the remaining $z$ dependence after integrating over $k_T$ can be either $dz/z$ or $dz \times $(regular). The former yields additional soft logarithms (again, formally regulated by the particle masses), and appears only in splittings where $B$ or $C$ is a gauge boson. \subsection{Evolution equations} \label{sec:evolv} When applied to the initial state, the splitting functions outlined in the previous section lead to both initial state radiation (ISR) as well as the dynamical generation of $B$ and $C$ parton distribution functions from a parent $A$.
Considering a generic parton distribution function $f_i(z,\mu^2)$ with a factorization scale $\mu$ in $k_T$-space, the leading-order convolution relation is \begin{equation} f_B(z, \mu^2) \,=\, f_B(z,\mu_0^2) \,+\, \sum_A \int^1_z {d\xi \over \xi} f_A(\xi,\mu_0^2) \int^{\mu^2}_{\mu_0^2} dk_T^2 \, \frac{d{\mathcal P}_{A\rightarrow B+C}({z/ \xi}, k_T^2)}{dz \, dk_T^2} \, , \label{eq:convolution} \end{equation} where $\mu_0$ is an input factorization scale. Differentiating with respect to $\mu^2$ and incorporating as well the evolution of the $f_A$ leads to the celebrated Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) equation~\cite{Dokshitzer:1977sg,Gribov:1972ri,Altarelli:1977zs} \begin{equation} {\partial f_B(z, \mu^2) \over \partial \mu^2} = \sum_A \int^1_z {d\xi \over \xi}\ { d{\mathcal P}_{A\rightarrow B+C}({z/ \xi}, \mu^2) \over dz \, dk_T^2} f_A(\xi, \mu^2) \, . \end{equation} Gauge theories such as QED and QCD predict that at high energies the splitting functions $d{\mathcal P}/d k_T^2$ go like $1 /k_T^2$, and thus that the PDFs evolve like $\ln (Q^2/\mu^2)$. This is the classic violation of the Bjorken scaling law~\cite{Bj}. In the broken electroweak theory, there are also the qualitatively different ultra-collinear splitting functions, which instead go as $m^2/k_T^4$. The PDFs arising from these splittings ``live'' only at the scale $k_T \sim m$. Instead of evolving logarithmically, they are cut off by a strong power-law suppression at $k_T \gtrsim m$. The corresponding PDFs preserve Bjorken scaling, up to contributions beyond leading order. In particular, longitudinal weak boson PDFs are practically entirely determined at splitting scales of ${\cal O}(m_W)$, even when used as inputs into processes at energies $E \gg m_W$.\footnote{This observation persists even in the presence of QCD corrections.
We can imagine that a quark is first evolved to large $k_T$ (and hence large space-like virtuality $Q$) from multiple gluon emissions, and then splits into an on-shell quark and space-like longitudinal vector boson. The former emerges as an ISR jet and the latter participates in a hard interaction. We would find (e.g., using Goldstone Equivalence Gauge, introduced in Section~\ref{sec:GEG}) that the collinear-enhanced piece of the scattering amplitude carries a net suppression factor of ${\cal O}(m^2/Q^2)$, which cannot be compensated by integration over the collinear emission phase space.} Numerical computations of electroweak PDFs with proper scale evolution do not yet exist in the literature, though the complete unbroken-theory evolution equations appear in~\cite{Ciafaloni:2005fm}, and fixed-order results are straightforward to obtain with the simple convolution in Eq.~(\ref{eq:convolution}). In the resummed treatment, contributions from the region $k_T \sim m_W$ can perhaps most simply be incorporated as perturbative ``threshold'' effects, essentially adding in their integrated fixed-order contributions up to some scale (a~few)$\times m_W$ as $\delta$-functions in $k_T$-space. These would include the finite, mass-suppressed contributions from the turn-on of $f\to W_T f$ splittings, as well as the entire ultra-collinear $f \to W_{L} f$ contribution. Equivalently at leading-order, they may instead be folded continuously into the DGLAP evolution using the massive splitting functions defined as in Eq.~(\ref{eq:split}). This latter approach may also be simpler when alternative scaling variables are used, such as virtuality. The other qualitatively new electroweak effects in the PDFs concern the treatment of weak isospin. First, the chiral nature of the EW gauge interactions leads to more rapid evolution toward low-$x$ for left-handed fermions than for right-handed fermions.
Furthermore, the isospin non-singlet nature of typical beam particles yields an additional interesting subtlety. In QED and color-averaged QCD evolution, the soft-singular limits of, e.g., $q\to gq$ at a given scale become indistinguishable from $q \to q$ with no splitting. Indeed, this allows for the balancing of real and virtual IR divergences as $z$ is formally taken to zero at fixed $k_T$, conventionally encoded in the plus-prescription. However, following this prescription for the electroweak evolution of fermion PDFs at $k_T \gg m_W$ leads to unregulated divergences in isospin-flipping transitions, such as $u_L \leftrightarrow d_L$ via arbitrarily soft $W^\pm$ emission. This is a manifestation of the so-called Bloch-Nordsieck violation effect~\cite{Ciafaloni:2000rp,Bell:2010gi,Manohar:2014vxa}. Regulation and resummation of this effect requires the introduction of some form of explicit cutoff $z \gtrsim k_T/E$ in the evolution equations when formulated in $(k_T,z)$ space, in order to avoid non-collinear emission regions~\cite{Ciafaloni:2005fm}.\footnote{In QED and QCD, these non-collinear emissions are implicitly and ``incorrectly'' integrated over in the plus-prescription. However, in the limit $E \gg k_T$, the numerical impact of doing so is of sub-leading importance.} The net effect is a gradual, controlled merging of the $u_L$ and $d_L$ PDFs (or $e_L$ and $\nu_L$ PDFs in the case of electron beams) into a common ``$q_L$'' (``$\ell_L$'') PDF. Unlike conventional PDF evolution, implementing the $z$ cutoff in this way necessitates extending the arguments of the PDFs to explicitly include the (CM-frame) beam energy. While this is not a major complication, we do point out that different choices of scaling variables may yield the same non-collinear regulation without requiring the extra energy argument. A particularly simple choice would be the energy-weighted angle $\theta E_A$.
We defer a detailed study of these issues to future work~\cite{EWshower}. We caution that this treatment of the initial state using PDFs remains strictly valid only within the leading-log, collinear approximation. Soft $W^\pm$ virtual exchanges between the isospin non-singlet beams will induce single-log entanglements that do not factorize between the individual beams, and even more complicated entanglements emerge when we also consider isospin-exclusive final states. The proper generalization for the initial state is from running PDFs to running quantum-ensemble parton luminosities defined for {\it pairs} of beams. But it is also possible to define a scheme where these beam-entanglement effects are selectively treated at fixed-order, and PDF resummation still suffices~\cite{EWshower}. (The entanglement effects actually wash out as the scale is raised and the isospin ensembles become incoherent.) However, these PDFs will still likely reference the global beam setup via the aforementioned non-collinear cutoff. Even applying the conventional factorization at leading-log, some of the PDFs must also still be treated as matrices~\cite{Ciafaloni:2005fm}. This is particularly relevant for the photon and transverse $Z$-boson PDFs, which develop sizable off-diagonal contributions. Indeed, the naive concept of independent ``photon PDF'' and ``$Z$ PDF'' at $k_T \gg m_Z$ is necessarily missing important physics, as $\gamma$ and $Z$ are not gauge eigenstates. We outline the appropriate treatment in Section~\ref{sec:interference} and Appendix~\ref{app:split}. The same splitting functions that govern ISR and PDF generation also serve as the evolution kernels for final-state radiation (FSR). 
This integrates to the well-known Sudakov form factor $\Delta_A(t)$ characterizing the possible time-like branchings of parent $A$ at scales below $t \sim \log(k_T)$ or $\log(Q)$ \begin{eqnarray} && \Delta_A(t) \,=\, \exp \left[ -\sum_{BC} \int^t_{t_0} dt' \int dz \, \frac{d{\cal P}_{A\to B+C}(z,t')}{dz \, dt'} \right] \, , \end{eqnarray} where the allowed $z$ range is determined by kinematics. Practically, we perform the evolution starting at a high $k_T$ or virtuality scale characterized by the CM-frame energy of the hard partonic process, and running continuously down through the weak scale with the proper mass effects. The Sudakov factor, evaluated in small $t$ steps, functions as a survival probability for $A$, upon which the usual Markov-chain Monte Carlo is constructed. (See, e.g.,~\cite{Sjostrand:2006za}.) If $A$ does not survive at some step, it is split into a state $B+C$. This splitting acts as the ``hard'' process that produced particles $B$ and $C$, and Sudakov evolution is continued on each of those particles. The ``resolution'' scale $t_0$ can be any scale well below $m_W$, at which conventional QED and QCD showers can take over. Of course, the basic framework leaves many details unspecified, and allows for a great deal of freedom in specific implementation. For example, besides the choice of evolution variable, one must also specify a treatment of kinematic reshuffling. We elaborate on some additional aspects of our own implementation of final-state showers below and in Appendix~\ref{sec:FSR}. We will generally refer to this treatment of the Sudakov formalism as the ``full EW shower'' or ``full EW FSR'', in contrast to the fixed-order splitting calculations in Eqs.~(\ref{eq:FSR}) and (\ref{eq:ISR}). \subsection{Other novel features in EW showering} \label{sec:novel_features} There are several additional novel features in EW showering beyond those encountered in the standard formalism.
We outline a few that are relevant to our later discussions and also propose concrete schemes for their implementation. \subsubsection{Mass effects} \label{sec:mass_effects} Besides the basic kinematic modifications and the emergence of ultra-collinear splitting phenomena, the existence of a mass scale $m_{W,Z} \sim g v$ and $m_{f} \sim y_f v$ requires some special treatment as we approach kinematic thresholds and the boundaries of turnoff regions. An immediate complication is that final-state weak showering smoothly connects onto the on-shell weak decays of top quarks, $W/Z$ bosons, and (to a much lesser extent) Higgs bosons. The shower describes the highly off-shell behavior of these particles, including resummed logarithmically-enhanced effects. But the effect of the pole is nonetheless visible, encoded in the last term in the denominator of Eq.~(\ref{eq:split}). Within the resonance region, the dominant behavior is more correctly captured by the standard Breit-Wigner line-shape governed by the physical width $\Gamma$, which involves a very different kind of resummation. However, a few $\Gamma$ above the peak, both descriptions can be expanded perturbatively and yield numerically similar predictions.\footnote{The agreement is further improved if $\Gamma$ is generalized to $\Gamma(Q)$. E.g., $\Gamma_Z \to \Gamma_Z(Q) \simeq (Q/m_Z)\Gamma_Z$.} It is therefore straightforward to define a well-behaved matching prescription. This is easiest to formulate within a virtuality-ordered shower: halt the shower at some matching scale $Q_{\rm match} = m + $(a~few)$\Gamma$, and if the state has survived to this point, distribute its final mass according to a Breit-Wigner resonance below $Q_{\rm match}$. The exact choice of matching scale here is not crucial, as long as it is within the region where the Breit-Wigner and shower predictions are comparable.
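The last step of this matching prescription can be sketched as follows (a schematic stand-alone illustration, not our shower code; the truncated Breit-Wigner is sampled by inverting the Cauchy CDF in $Q^2$):

```python
# Sketch of the resonance matching for a virtuality-ordered shower: if a
# W/Z/top survives down to Q_match = m + n_Gamma*Gamma, draw its final
# mass from a Breit-Wigner truncated above at Q_match.
import math, random

def breit_wigner_mass(m, Gamma, n_Gamma=3.0, rng=random.random):
    """Sample Q from dP ~ dQ^2 / ((Q^2 - m^2)^2 + m^2 Gamma^2), Q < Q_match,
    via the substitution Q^2 = m^2 + m*Gamma*tan(t), t uniform."""
    q_match2 = (m + n_Gamma * Gamma) ** 2
    t_min = math.atan(-m / Gamma)                        # Q^2 = 0
    t_max = math.atan((q_match2 - m**2) / (m * Gamma))   # Q^2 = Q_match^2
    t = t_min + rng() * (t_max - t_min)
    q2 = max(m**2 + m * Gamma * math.tan(t), 0.0)        # guard rounding
    return math.sqrt(q2)
```

After the mass is assigned, the parton shower may be restarted on the resonance's decay products, as described below.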
For other shower ordering variables, such as $k_T$, we can instead run the shower down to its nominal kinematic limit, but not integrating $z$ within the region that would yield $Q < Q_{\rm match}$. In either case, the parton shower may be restarted on the resonance's decay products. Another place where mass effects can become important is in multiple emissions. In massless showers, sequential splittings are dominantly very strongly-ordered in scale, and as a consequence a given splitting rate can be computed without regard to the subsequent splittings while still capturing the leading behavior. However, in showers with massive particles, a large fraction of the available phase space for secondary splittings may require nontrivial kinematic rearrangements within the preceding splittings. For example, a $W$ boson might nominally be produced with a kinematic mass $m_W$ via emission off of a fermion. If the $W$ subsequently splits into a $W$ and a $Z$ boson at a virtuality $Q \gg m_W$, there is a chance that the off-shell $W$ now sits near a suppressed region (i.e., dead cone) for emission off of the mother fermion. In order to avoid badly mis-modeling such cases, secondary splittings can be weighted according to the relative rate modification that would be incurred on the previous splitting. This {\it back-reaction factor} depends in detail on how kinematic arrangements are done in the shower. Generally, a given $(z,Q)$ or $(z,k_T)$ parametrizing the mother splitting will be mapped onto a new $(z^*,Q^*)$ or $(z^*,k_T^*)$ for producing the off-shell daughter. The required back-reaction factor is the ratio of the new differential splitting function to the original one, multiplied by the Jacobian for the change of variables. 
For a final-state shower sequence $A^* \to B^*C \to (DE)C$, for the nested splitting we can use a splitting function multiplied by the back-reaction factor: \begin{equation} \frac{d{\cal P}(B^* \to DE)}{dz_{DE}\,dk_{T,DE}^2} \;\times\; \left( \frac{d{\cal P}(A^* \to B^*C)/dz^* dk_T^{2*}}{d{\cal P}(A^* \to BC)/dz\,dk_T^2} \cdot \left| {\rm det} \left[\frac{dz^* dk_T^{2*}}{dz\,dk_T^2} \right] \right| \right) \, . \label{eq:weight} \end{equation} The simplest implementation would compute this factor independently for each daughter branch, assuming an on-shell sister and neglecting possible correlations in the potentially fully off-shell final configuration $A^* \to B^*C^*$. But a more thoroughly correlated weighting scheme could be pursued if deemed numerically relevant. The above prescription also generalizes beyond massive showers, wherein it has a sizable overlap with the effects of standard angular vetoing. We further show below how back-reaction factors can be conveniently applied for a complete treatment of mixed neutral bosons, wherein an ``on-shell'' kinematic mass is not necessarily determined at their production. The above back-reaction effects can be particularly important for ultra-collinear emissions, as these occur almost exclusively at the boundaries delineated by finite-mass effects. For example, the prototypical ultra-collinear emission is $f\to W_{L} f'$ with massless fermions~\cite{Kane:1984bb,Dawson:1984gx,Chanowitz:1985hj}. It proceeds only via a delicate balancing between a suppression factor $m_W^2/E^2$ in the squared splitting matrix element and a strong $1/k_T^4$ power enhancement from the fermion propagator that gets cut off at $k_T \sim m_W$, controlled by the form of the denominator in Eq.~(\ref{eq:split}). 
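As a concrete illustration of the back-reaction factor in Eq.~(\ref{eq:weight}), consider the following minimal sketch (the kinematic remapping is implementation-dependent, so it enters as a stand-in callable, and the Jacobian is evaluated by finite differences):

```python
# Sketch of the back-reaction factor of Eq. (eq:weight): the ratio of the
# mother's remapped splitting function to the original one, times the
# Jacobian |det d(z*, kT^2*)/d(z, kT^2)| of the reshuffling.

def back_reaction_factor(dP_mother, z, kT2, remap, eps=1e-6):
    """dP_mother(z, kT2): mother's differential splitting function.
    remap(z, kT2) -> (z_star, kT2_star): reshuffling induced by setting
    the daughter off-shell (assumed smooth; supplied by the shower)."""
    z_s, kT2_s = remap(z, kT2)
    # numerical 2x2 Jacobian by forward differences
    z1, k1 = remap(z + eps, kT2)
    z2, k2 = remap(z, kT2 + eps * kT2)
    dz_dz, dk_dz = (z1 - z_s) / eps, (k1 - kT2_s) / eps
    dz_dk, dk_dk = (z2 - z_s) / (eps * kT2), (k2 - kT2_s) / (eps * kT2)
    jac = abs(dz_dz * dk_dk - dk_dz * dz_dk)
    return dP_mother(z_s, kT2_s) / dP_mother(z, kT2) * jac
```

Note that for a pure $1/k_T^2$ mother splitting function a simple rescaling of $k_T^2$ leaves the factor at unity, while steeper (e.g. ultra-collinear $1/k_T^4$) behavior yields a suppression, as anticipated in the text.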
Within a final-state shower, if either the $W_{L}$ or its sister $f'$ is set far off-shell by a secondary splitting at some scale $Q$ (possibly a QCD splitting), that cutoff moves out to $k_T \sim Q$ but the original production matrix element stays approximately the same, and the total rate picks up an additional relative power suppression factor of $O(m_W^2/Q^2)$.\footnote{When the $W_L$ is off-shell, we would naively compensate by using an off-shell gauge polarization, yielding $Q^2/E^2$ instead of $m_W^2/E^2$. However, the appropriate treatment, discussed in more detail in Appendices~\ref{sec:gauge} and~\ref{sec:FeynmanRules}, uses on-shell polarization factors throughout. Additional non-collinear corrections might still be present, but are more appropriately viewed as contributions to $1\to 3$ splittings. New soft logarithms might also arise in these processes, but new {\it collinear} logs will not.} Roughly speaking, ultra-collinear processes can only occur near the ``end'' of the weak parton shower as it passes through the weak scale, or conversely near the ``beginning'' of weak PDF evolution. Such a feature is essentially built into $k_T$-ordered parton evolution. The back-reaction correction ensures that it is also enforced in showers built on other ordering variables, such as virtuality, while still allowing further low-scale showering such as $q\to gq$ and $W_{L}\to \gamma W_{L}$. \subsubsection{Mixed-state evolution} \label{sec:interference} Thus far, the shower formalism that we have presented neglects the possibility of interference between different off-shell intermediate particle states contributing to a specific splitting topology. Traditionally in QED and QCD showers, such interference leads to sub-leading effects associated with the unmeasured spin and color of intermediate particles~\cite{Nagy:2007ty}.
However, the full electroweak theory at high energies presents us with cases where different mass and gauge eigenstates can also interfere at $O(1)$ level, most notably the neutral boson admixtures $\gamma/Z_T$ and $h/Z_L$~\cite{Ciafaloni:2005fm}. All other particles in the SM carry (approximately) conserved charge or flavor quantum numbers that can flow out into the asymptotic state, and therefore they do not tend to interfere in this manner. Interferences originating from CKM/PMNS flavor violations should be small and difficult to observe, and we neglect them for simplicity. Showering involving superpositions of different particle species can be described using density matrix formalism. Let us consider the simpler case of final-state showers for illustration. The initial value of the density matrix is set proportional to the outer product of production amplitudes: $\rho_{ij} \propto {\mathcal M}_i^{{\rm (prod)}*} {\mathcal M}_j^{\rm (prod)}$, tracing out over other details of the rest of the event.\footnote{This treatment does not attempt to address quantum correlations between different branches of an event or shower.} Here, the indices run over the particle species. The probability for an initial mixed quantum state to subsequently split into a specific exclusive final state must be computed by generalizing the splitting functions to Hermitian splitting {\it matrices} $d{\mathcal P}_{ij}$. The exclusive splitting rates are then computed by tracing against the normalized density matrix,\footnote{In more complete generality, a mixed state can split into another mixed state, leading to an enlarged set of indices for the splitting matrices. However, in most cases, the final-state density matrices are fully determined by the initial-state density matrices, such that in practice a single pair of indices suffices.} \begin{equation} d{\cal P} \,=\, \frac{\rho_{ij}\ d{\mathcal P}_{ji}}{{\rm tr}[\rho]} \ . 
\label{eq:rhoSplitting} \end{equation} Representing the propagator matrix as ${\cal D}_{ij}$, and the amputated splitting amplitudes as ${\cal M}^{\rm (split)}_i$, this modifies Eq.~(\ref{eq:split}) to the more complete, yet more complicated form \begin{equation} \left[ \frac{d{\mathcal P}_{A\rightarrow B+C}}{dz\,dk_T^2} \right]_{ij} \,\simeq\, {1\over 16\pi^2} \ \frac{1}{z\bar z} \ {\cal M}^{\rm (split)*}_k {\cal D}_{ki}^* {\cal D}_{jl} {\cal M}^{\rm (split)}_l \ . \end{equation} Note that large interference effects can persist even in the massless limit with unmixed propagators. A full treatment, including the Sudakov evolution for $\rho_{ij}$ and the explicit form of the propagators for $\gamma/Z_T$ and $h/Z_{L}$ systems, is given in Appendix~\ref{app:split}. Handling the kinematics and decays of mixed states requires some additional steps. ``On-shell'' kinematics cannot be defined a priori, and we cannot collapse onto mass eigenstates or a showered final-state with well-defined mass until the coherent Sudakov evolution has run its course. A simple prescription is to first produce a mixed boson with its minimum possible kinematic mass (zero for $\gamma/Z_T$, $m_Z$ for $h/Z_{L}$) in order to fully fill out the phase space. Splittings that occur before reaching the resonance are weighted by a back-reaction factor as per Eq.~(\ref{eq:weight}). If the state survives un-split down to the heavier resonance's matching threshold, we can decide to project onto a specific mass eigenstate according to the relative probabilities encoded in the surviving density matrix. The back-reaction factor may once again be employed here, implemented as a veto probability for the heavier resonance. (The factor will typically come out less than one for a sensibly-defined change of variables.) If the veto is thrown, the splitting that produced the mixed state is undone, and its mother's evolution continued. 
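The trace prescription of Eq.~(\ref{eq:rhoSplitting}) and the eigenstate projection just described can be sketched for a two-component $\gamma/Z_T$ system as follows (purely illustrative: real matrices for simplicity, whereas the shower would carry complex Hermitian ones, and all numbers are made up):

```python
# Sketch of mixed-state splitting: exclusive rates are traces of the
# splitting matrices dP_ij against the normalized density matrix rho_ij,
# and collapse onto a mass eigenstate uses the diagonal of surviving rho.

def splitting_rate(rho, dP):
    """Exclusive rate dP = rho_ij dP_ji / tr(rho) for 2x2 matrices."""
    num = sum(rho[i][j] * dP[j][i] for i in range(2) for j in range(2))
    return num / (rho[0][0] + rho[1][1])

def project_mass_eigenstate(rho, u):
    """Collapse a surviving gamma/Z_T mixture onto a mass eigenstate with
    probabilities given by the normalized diagonal of rho; u is a uniform
    random number in [0, 1)."""
    p_gamma = rho[0][0] / (rho[0][0] + rho[1][1])
    return "gamma" if u < p_gamma else "Z"
```

In the shower proper, the density matrix entering these functions is itself Sudakov-evolved between splittings, as detailed in Appendix~\ref{app:split}.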
This prescription becomes especially relevant when evolving near kinematic thresholds or suppressed regions, for example where $Z$ boson emission would be suppressed but photon emission allowed. For the mixed $\gamma/Z_T$ system, if a photon is projected out, we can restart a pure QED parton shower ($\gamma\to f\bar f$) with virtuality constrained below the $Z$ boson's $Q_{\rm match}$ scale at $\approx 100$~GeV. Interference effects below the matching scale can also be incorporated by coherently adding both the $\gamma$ and $Z$ contributions within the $Z$ resonance region. This requires delineating as well a lower virtuality boundary, ideally at a scale $O(1)$ smaller than $m_Z$. Depending on the integrated probability in this region (modulo the back-reaction veto), we would either create an $f\bar f$ state with an appropriately-distributed mass, or again set the state to a photon and continue running a pure QED shower, now constrained below the $Z$ resonance region. We also comment that a fully consistent treatment here would require minor changes to the standard output formats of hard event generators. The standard practice of immediately collapsing onto mass eigenstates is equivalent to assuming trivial Sudakov evolution, and cannot formally be inverted such that a proper coherent parton shower can be applied. In particular, only one specific linear combination of $\gamma/Z_T$ states participates in the high-rate non-Abelian splittings to $W_T^\pm W_T^\mp$. While collapsing onto mass eigenstates is required to obtain well-defined hard event kinematics, a simple remedy here would be to supply for these particles their production density matrices, using some appropriately-mapped massless kinematics. \section{Coherent Showering} \label{app:split} Showering involving superpositions of different particle species can be described using density matrix formalism.
The initial value of the density matrix is proportional to the outer product of production amplitudes $$\rho_{ij} \propto {\mathcal M}_i^{{\rm (prod)}*} {\mathcal M}_j^{\rm (prod)} ,$$ tracing out over other details of the rest of the event. Here, the indices run over the species. We nominally assign the state its smallest possible kinematic mass (zero for $\gamma/Z$, $m_Z$ for $h/Z_L$), and subsequently reweight/veto the splitting probability and adjust the global kinematics as necessary (see Section~\ref{sec:mass_effects}). This prescription becomes especially relevant when evolving near kinematic thresholds. The probability for an initial mixed quantum state to subsequently split into a specific exclusive final state, {\it e.g.}\ $\gamma/Z \to e_L^- e_R^+$ or $\nu_L\bar\nu_R$, must be computed by generalizing the splitting functions to Hermitian splitting {\it matrices} $d{\mathcal P}_{ij}$. The exclusive splitting rates are then computed by tracing against the normalized density matrix: \begin{equation} d{\cal P} \,=\, \frac{\rho_{ij}\ d{\mathcal P}_{ji}}{{\rm tr}[\rho]}. \label{eq:rhoSplittingAppendix} \end{equation} If a boson is not split, the Sudakov evolution of $\rho$ proceeds analogously to mixed-state radioactive decay: \begin{equation} d\rho_{ij} \,=\, -\frac12\sum_{\rm channels}(\rho_{ik}d{\mathcal P}_{kj} + d{\mathcal P}_{ik}\rho_{kj}). \label{eq:rhoEvolution} \end{equation} As usual, this just represents the wave-function running, now applied to multi-component states. The splitting matrices for an initial mixed quantum state are computed from outer products of splitting amplitudes, convolved with the mixed propagators.
Representing the propagator matrix as ${\cal D}_{ij}$, and the amputated splitting amplitudes as ${\cal M}^{\rm (split)}_i$, the generalization from single-state evolution is \begin{equation} d{\cal P} \,\propto\, \frac{1}{q^4}|{\cal M^{\rm (split)}}|^2 \;\,\rightarrow\,\; d{\cal P}_{ij} \,\propto\, {\cal M}^{\rm (split)*}_k {\cal D}_{ki}^* {\cal D}_{jl} {\cal M}^{\rm (split)}_l. \label{eq:splittingMatrix} \end{equation} Using the relativistic approximation $q^2 \simeq (k_T^2 + \bar z m_B^2 + z m_C^2)/z\bar z$ for final-state splitting, this modifies Eq.~(\ref{eq:split}) to the more complicated form \begin{equation} \left[\frac{d{\mathcal P}_{A\rightarrow B+C}}{dz\,dk_T^2}\right]_{ij} \,\simeq\, {1\over 16\pi^2} \ \frac{1}{z\bar z} \ {\cal M}^{\rm (split)*}_k {\cal D}_{ki}^* {\cal D}_{jl} {\cal M}^{\rm (split)}_l \ . \end{equation} In the massless limit with unmixed propagators, ${\cal D}_{ij} = i\delta_{ij}/q^2$, the form of the splitting matrix reduces to $d{\mathcal P}_{ij} \propto {\mathcal M}_i^{{\rm (split)}*}{\mathcal M}_j^{\rm (split)}/q^4$. In more complete generality, a mixed state can split into another mixed state, leading to an enlarged set of indices for the splitting matrices. However, in most cases, the final-state density matrices are fully determined by the initial-state density matrices, such that in practice a single pair of indices suffices. While the formalism is basis-independent, we default to some standard bases in our EW shower approach. Within the unbroken phase (Section~\ref{sec:unbroken}), we present neutral gauge and scalar splitting functions in the interaction basis $(B^0,W^0)$, $(H^0,H^{0*})$. In the broken phase (Section~\ref{sec:broken}), we present them in the mass basis $(\gamma,Z)$, $(h,Z_{L})$. 
The corresponding propagator matrices in the unbroken-phase basis, including the effects of EWSB, are\footnote{The shower formalism automatically accounts for logarithmic running effects in the wavefunction factors for these propagators. We do not attempt to account for mass renormalization effects, as the masses are anyway of power-suppressed importance at very high virtualities. Additional perturbative corrections near the weak scale are also neglected.} \begin{equation} {\mathcal D}_{B^0B^0} = \frac{i \cos^2\theta_W}{q^2} + \frac{i \sin^2\theta_W}{q^2-m_Z^2}\,,\qquad {\mathcal D}_{W^0W^0} = \frac{i \sin^2\theta_W}{q^2} + \frac{i \cos^2\theta_W}{q^2-m_Z^2}\,, \nonumber \end{equation} \begin{equation} {\mathcal D}_{B^0W^0} = {\mathcal D}_{W^0B^0} = \frac{i\cos\theta_W\sin\theta_W(-m_Z^2)}{q^2(q^2-m_Z^2)} \label{eq:BWpropagators} \end{equation} for the gauge bosons ($\theta_W$ is the weak mixing angle), and \begin{equation} {\mathcal D}_{H^{0}H^{0*}} = {\mathcal D}_{H^{0*}H^{0}} = \frac{i/2}{q^2-m_h^2} + \frac{i/2}{q^2-m_Z^2}\,, \nonumber \end{equation} \begin{equation} {\mathcal D}_{H^{0}H^{0}} = {\mathcal D}_{H^{0*}H^{0*}} = \frac{i/2}{q^2-m_h^2} - \frac{i/2}{q^2-m_Z^2} , \label{eq:hZpropagators} \end{equation} for the neutral scalars. In the mass basis, the matrices are diagonal and have entries corresponding to the usual poles: \begin{equation} {\cal D}_{\gamma\gamma} = \frac{i}{q^2}\,, \quad {\cal D}_{ZZ} = \frac{i}{q^2-m_Z^2}\,, \quad {\cal D}_{\gamma Z} = {\cal D}_{Z\gamma} = 0 \end{equation} \begin{equation} {\cal D}_{hh} = \frac{i}{q^2-m_h^2}\,, \quad {\cal D}_{Z_{L}Z_{L}} = \frac{i}{q^2-m_Z^2}\,, \quad {\cal D}_{h Z_{L}} = {\cal D}_{Z_{L}h} = 0. \end{equation} Similar considerations apply in the application and generation of PDFs~\cite{Ciafaloni:2005fm}. 
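As a quick consistency check of Eq.~(\ref{eq:BWpropagators}): the interaction-basis entries are just the weak-angle rotation of the diagonal mass-basis propagators. A numerical sketch (the overall factor of $i$ is dropped, the sign convention $\gamma = c_W B^0 + s_W W^0$, $Z = -s_W B^0 + c_W W^0$ is assumed, and the input values are illustrative):

```python
# Build the (B0, W0) propagator matrix of Eq. (eq:BWpropagators) by
# rotating diag(D_gamma, D_Z) with the weak mixing angle. q2 and mZ2
# are in GeV^2; the factor of i is dropped throughout.
import math

def D_interaction_basis(q2, mZ2, sw2):
    cw2 = 1.0 - sw2
    cw, sw = math.sqrt(cw2), math.sqrt(sw2)
    D_gam, D_Z = 1.0 / q2, 1.0 / (q2 - mZ2)
    D_BB = cw2 * D_gam + sw2 * D_Z
    D_WW = sw2 * D_gam + cw2 * D_Z
    D_BW = cw * sw * (D_gam - D_Z)   # = cw*sw*(-mZ2)/(q2*(q2 - mZ2))
    return D_BB, D_WW, D_BW
```

The off-diagonal entry vanishes as $q^2 \gg m_Z^2$, recovering unmixed propagators in the high-virtuality (unbroken-phase) limit.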
The $\gamma/Z$ and (in principle) $h/Z_{L}$ PDFs should each properly be treated as $2\times2$ matrices, and hard-process cross sections sourced by these PDFs are computed by tracing against the hard matrix elements. The PDF evolution equations involve matrix-valued splitting functions. In the high-$k_T$/high-virtuality limit, these follow straightforwardly from the splitting functions presented in Section~\ref{sec:unbroken}. However, unless one is working well above the TeV scale, mass effects can still be important. The above propagator modifications must then be applied at the (spacelike) virtual leg emerging from a splitting. \section{Splitting Functions in Unbroken $SU(2)_L\times U(1)_Y$} \label{sec:unbroken} Before working out the complete set of electroweak splitting functions in the broken phase, it is important to first consider a conceptual limit with an unbroken $SU(2)_L \times U(1)_Y$ gauge symmetry with massless gauge bosons and fermions, supplemented by a massless complex scalar doublet field $H$ without a VEV. This last ingredient is the would-be Higgs doublet. This simplified treatment in the unbroken phase is not only useful to develop some intuition, but also captures the leading high-$k_T$ collinear splitting behavior of the broken SM electroweak sector. Some aspects of electroweak collinear splitting and evolution at this level have been discussed, e.g., in~\cite{Ciafaloni:2005fm}. Anticipating electroweak symmetry breaking, we adopt the electric charge basis in weak isospin space. The corresponding $SU(2)_L$ bosons are $W^\pm$ and $W^0$, and the hypercharge gauge boson we denote as $B^0$. Gauge boson helicities are purely transverse ($T$), and are averaged.\footnote{While the gauge helicity averaging is not strictly necessary, especially given that we will later make a distinction between transverse and longitudinal polarizations, it does simplify our presentation.
We also do not incorporate azimuthal interference effects, though this would be straightforward in analogy with QCD~\cite{Bahr:2008pv}.} For the scalar doublet, we decompose as \begin{equation} H=\left({\begin{array}{c} H^+ \\ H^0 \end{array}}\right) = \left({\begin{array}{c} \phi^+ \\ \frac{1}{\sqrt{2}}(h - i\phi^0) \end{array}}\right), \label{eq:HiggsExpansion} \end{equation} where $\phi^\pm,\phi^0$ will later become the electroweak Goldstone bosons and $h$ the Higgs boson. However, at this stage, we will keep the neutral bosons $h$ and $\phi^0$ bundled into the complex scalar field $H^0$, as they are produced and showered together coherently. In the absence of the VEV, the doublet carries a perturbatively-conserved ``Higgs number,'' which may also be taken to flow through RH-chiral fermions in the Yukawa interactions.\footnote{We have expanded the neutral scalar field as $H^0 \propto h - i\phi^0$, adopting a phase convention such that $h$ and $\phi^0$ fields create/annihilate their respective one-particle states with trivial phases, and $H^0$ annihilates the one-particle state $\ket{H^0} \propto \ket{h} + i\ket{\phi^0}$. Treating $h$ and $\phi^0$ as independent showering particles would be analogous to adopting a Majorana basis instead of a Dirac basis for the fermions in QED or QCD. An incoherent parton shower set up in such a basis would not properly model the flow of fermion number and electric charge. Analogously, $H^0$ and $H^{0*}$ particles carry well-defined Higgs number that we choose to explicitly track through the shower. This leads to correlations between spins and electric charges within asymptotic states.} We denote a generic fermion of a given helicity by $f_s$ with $s=L,R$ (or equivalently $s=\mp$). We do not always specify the explicit isospin components of $f$ at this stage, but implicitly work in the usual $(u,d)$/$(\nu,e)$ basis. Isospin-flips (including RH-chiral isospin where appropriate) will be indicated by a prime, e.g.~$u' = d$. 
Effects of flavor mixing are ignored. The $U(1)_Y$ and $SU(2)_L$ gauge couplings are respectively taken to be $g_1 \approx 0.36$ and $g_2 \approx 0.65$ (here evaluated near the weak scale, though in general they are run to a scale of $\mathcal{O}(k_T)$). For compactness we often represent a generic gauge coupling by $g_V^{}$. We represent the gauge charge $Q$ of a particle $p$ coupling to gauge boson $V$ by $Q^V_p$, and we give the complete list of the gauge charges for the SM fermions and scalars in Table~\ref{tab:charges} in Appendix~\ref{sec:conventions}. The splitting functions that involve only fermions and gauge bosons closely follow those of QED and QCD. Fermions with appropriate quantum numbers may emit transverse $SU(2)_L$ and $U(1)_Y$ gauge bosons with both soft and collinear enhancements, yielding total rates that grow double-logarithmically with energy. At this stage, fermion helicity coincides with the corresponding chirality, and is strictly conserved in these processes. The $SU(2)_L$ bosons also couple to one another via their non-Abelian gauge interactions, and similarly undergo double-logarithmic soft and collinear splittings $W^0\to W^+W^-$ and $W^\pm \to W^\pm W^0$. This is in direct analogy to $g\to gg$ in QCD, except that here we do not sum/average over gauge indices. All of the electroweak gauge bosons may also undergo single-log collinear splittings into fermion pairs, similar to $g\to q\bar q$ or $\gamma \to f \bar f$. The results can be cast into a familiar form. We write the probability of finding a parton $B$ inside a parton $A$ with an energy or momentum fraction $z$ in terms of the collinear splitting kernels for $A\to B$ as $P_{BA}(z)$.
Stripping the common $g^2/8\pi^2$ and $1/k_T^2$ factors, as well as group theory factors that depend on the gauge representations (hyper-charges or $SU(2)_L$ quadratic Casimirs and Dynkin indices), we are left with \begin{eqnarray} P_{Vf}(z) = {1+\bar z^2 \over z},\quad P_{V'V}(z) = {(1- z \bar z)^2\over z \bar z},\quad P_{fV}(z) = {z^2+\bar z^2\over 2},~~ \label{eq:qcd} \end{eqnarray} with $\bar z \equiv 1-z$. Note that the other possible splitting $f\to f^{(\prime)}V$ is given by $P_{f^{(\prime)}f}(z) = {(1+ z^2)/ \bar z}$, but it is not independent and can be derived from $P_{Vf}$ with $z \leftrightarrow \bar z$. The factor of $1/2$ in $P_{fV}$, relative to the standard form in QED with the electric charge stripped (or in QCD with the $SU(3)$ Dynkin index stripped), is due to the fact that we treat each chiral fermion individually. Interference between different gauge groups is a subtlety that is absent in the color-averaged $SU(3)_{\rm QCD} \times U(1)_{\rm EM}$ shower, and arises here from the fact that we have fixed a preferred gauge basis for asymptotic states instead of summing over gauge indices. Within different exclusive isospin channels in this basis, exchanges of $B^0$ and $W^0$ can exhibit $O(1)$ interference, and thus must be described using density matrices, which have briefly been discussed in Section~\ref{sec:interference}. In a truly massless theory, the physical preparation and identification of states in any preferred weak isospin basis is actually impossible, since arbitrarily soft $W^\pm$ can be radiated copiously at no energy cost and randomize the isospin.\footnote{Absent the quark chiral condensate at $O(100$~MeV), massless $SU(2)_L$ would also technically confine in the IR, so that asymptotic states would anyway be isospin-singlet bound states, making the situation even more analogous to QCD.} Our preferred basis here only becomes physical once we turn on the electroweak VEV and cut off the IR divergences. 
But the tendency for states to self-average in isospin space will persist at high energies. \bgroup \def\arraystretch{1.4} \begin{table}[] \centering \vspace{-1cm} \begin{tabular}{l|ccc} \multicolumn{1}{l}{} & \multicolumn{2}{c}{ \begin{picture}(50,40)(0,0) \SetColor{Black} \SetWidth{2} \SetScale{0.7} \ArrowLine( 0,25)(50,25) \Photon(50,25)(95,45){3}{3} \ArrowLine( 50,25)(95,5) \Text(15,10)[]{$\boldsymbol \Leftarrow$} \Text(50,3)[]{\rotatebox{-26}{$\boldsymbol \Leftarrow$}} \end{picture} } & \begin{picture}(50,40)(0,0) \SetColor{Black} \SetWidth{2} \SetScale{0.7} \ArrowLine( 0,25)(50,25) \DashLine(50,25)(95,45){5} \ArrowLine( 50,25)(95,5) \Text(15,10)[]{$\boldsymbol \Leftarrow$} \Text(50,3)[]{\rotatebox{-26}{$\boldsymbol \Rightarrow$}} \end{picture} \\ & \multicolumn{2}{c}{\rule[-3ex]{0pt}{0pt} \underline{\hspace{0.4cm}$\dfrac{1}{8\pi^2}\dfrac{1}{k_T^2}\left(\dfrac{1+\bar z^2}{z}\right)$\hspace{0.4cm}}} & \underline{\hspace{0.5cm}$\dfrac{1}{8\pi^2}\dfrac{1}{k_T^2}\left(\dfrac{z}{2}\right)$\hspace{0.5cm}} \\ & $\to \ V_T \: f_{s}^{(\prime)}$ & $[BW]_T^0 \: f_{s}$ & $H^{0(*)} \: f_{\text{-} s}\ {\rm or} \ \ \phi^{\pm} \: f^{\prime}_{\text{-} s}$ \\ \hline $f_{s=L,R}$\ & $\ \ g^2_V(Q^V_{f_s})^2$ & $g_1g_2Y_{f_s}T^3_{f_s}$ & $y^2_{f^{(\prime)}_R}$ \end{tabular} \caption{Chiral fermion splitting functions $d{\mathcal P}/dz\,dk_T^2$ in the massless limit, with $z$ ($\bar z \equiv 1-z$) labeling the energy fraction of the first (second) produced particle. The fermion helicity is labelled by~$s$. Double-arrows in Feynman diagrams indicate example fermion helicity directions. Prime indicates isospin partner ($u_s' = d_s$, etc., independent of~$s$). Yukawa couplings are labelled by the participating RH-helicity fermion. The state $H^{0*}$ is the ``anti-$H^0$'', produced when the RH fermion is down-type and in the initial-state, or up-type in the final-state.
Processes with $B^0$ and $W^0$ implicitly represent the respective diagonal terms in the neutral gauge boson's density matrix, whereas $[BW]^0$ indicates either of the off-diagonal terms (see text). Anti-fermion splittings are obtained by CP conjugation. The conventions for the couplings are given in Appendix~\ref{sec:conventions}. } \label{tab:massless_fermion_splittings} \vspace{0.5cm} \begin{tabular}{l|cccc} \multicolumn{1}{l}{} & \begin{picture}(50,40)(0,0) \SetColor{Black} \SetWidth{2} \SetScale{0.7} \Photon( 0,25)(50,25){3}{3} \Photon(50,25)(95,45){3}{3} \Photon( 50,25)(95,5){3}{3} \end{picture} & \begin{picture}(50,40)(0,0) \SetColor{Black} \SetWidth{2} \SetScale{0.7} \Photon( 0,25)(50,25){3}{3} \ArrowLine(50,25)(95,45) \ArrowLine(95,5)( 50,25) \Text(50,32)[]{\rotatebox{26}{$\boldsymbol \Leftarrow$}} \Text(50,3)[]{\rotatebox{-26}{$\boldsymbol \Rightarrow$}} \end{picture} & \multicolumn{2}{c}{ \begin{picture}(50,40)(0,0) \SetColor{Black} \SetWidth{2} \SetScale{0.7} \Photon( 0,25)(50,25){3}{3} \DashLine(50,25)(95,45){5} \DashLine(95,5)( 50,25){5} \end{picture} } \\ & \rule[-3ex]{0pt}{0pt} \underline{\hspace{0.0cm}$\dfrac{1}{8\pi^2}\dfrac{1}{k_T^2}\left(\dfrac{(1- z \bar z)^2}{z \bar z}\right)$\hspace{0.0cm}} & \underline{\hspace{0.0cm}$\dfrac{1}{8\pi^2}\dfrac{1}{k_T^2}\left(\dfrac{z^2+\bar z^2}{2}\right)$\hspace{0.0cm}} & \multicolumn{2}{c}{\underline{\hspace{2.0cm}$\dfrac{1}{8\pi^2}\dfrac{1}{k_T^2}\left(z\bar z\right)$\hspace{2.0cm}}} \\ & $\to \ W_T \: W_T$ \ & $f_s \: \bar f_{\text{-} s}^{(\prime)}$ & $\phi^+ \: \phi^- \ {\rm or}\ \ H^0 \: H^{0*}$ & $\phi^+ \: H^{0*}\ {\rm or}\ \ \phi^-\: H^0$ \\ \hline $V_T$\ & $2 g_2^2\ (V\!\!=\!W^{0,\pm})$ & $N_f g_V^2(Q^V_{f_s})^2$ & $\frac14 g_V^2$ & $\frac12 g_2^2$ \\ $[BW]_T^0$\ & $0$ & $N_f g_1g_2Y_{f_s}T^3_{f_s}$ & $\frac12 g_1g_2T^3_{\phi^+,H^0}$ & $0$ \end{tabular} \caption{Transverse vector boson splitting functions $d{\mathcal P}/dz\,dk_T^2$ in the massless limit, where allowed by electric charge flow.
$N_f$ is a color multiplicity factor ($N_f=1$ for leptons, $N_f=3$ for quarks). Other conventions as in Table~\ref{tab:massless_fermion_splittings}.} \label{tab:massless_vector_splittings} \vspace{0.5cm} \begin{tabular}{l|ccccc} \multicolumn{1}{l}{} & \multicolumn{3}{c}{ \begin{picture}(50,40)(0,0) \SetColor{Black} \SetWidth{2} \SetScale{0.7} \DashLine( 0,25)(50,25){6} \Photon(50,25)(95,45){3}{3} \DashLine( 50,25)(95,5){5} \end{picture} } & \multicolumn{2}{c}{ \begin{picture}(50,40)(0,0) \SetScale{0.7} \SetColor{Black} \SetWidth{2} \DashLine( 0,25)(50,25){6} \ArrowLine(50,25)(95,45) \ArrowLine(95,5)( 50,25) \Text(50,32)[]{\rotatebox{26}{$\boldsymbol \Leftarrow$}} \Text(50,3)[]{\rotatebox{-26}{$\boldsymbol \Leftarrow$}} \end{picture}} \\ & \multicolumn{3}{c}{\rule[-3ex]{0pt}{0pt} \underline{\hspace{1.8cm}$\dfrac{1}{8\pi^2}\dfrac{1}{k_T^2}\left(\dfrac{2 \bar z}{z}\right)$\hspace{1.8cm}}} & \multicolumn{2}{c}{\underline{\hspace{1.0cm}$\dfrac{1}{8\pi^2}\dfrac{1}{k_T^2}\left(\dfrac12\right)$\hspace{1.0cm}}} \\ & \ $\rightarrow \ V_T^0 \, H$ \ & \ $[BW]_T^0 \: H$ \ & \ $W_T^\pm \: H'$ \ & \ $u_R \: \bar u_R^{(\prime)}$ & $\bar d_L \: d_L^{(\prime)} \ {\rm or}\ \ \bar e_L \: e_L^{(\prime)}$ \\ \hline $H=\phi^+,H^{0}$\ & $\frac14 g_V^2$ & $\frac12 g_1g_2T^3_{\phi^+,H^0}$ & $\frac12 g_2^2$ & $3y^2_{u}$ & $N_{d,e}y^2_{d,e}$ \end{tabular} \caption{Scalar splitting functions $d{\mathcal P}/dz\,dk_T^2$ in the massless limit via gauge couplings and Yukawa couplings. The symbol $H$ in the column headings represents the appropriate state $\phi^+,H^{0}$ for the given splitting, and $H'$ represents the $SU(2)_L$ isospin partner (e.g., $H^{0\prime} = \phi^+$). Anti-particle splittings are obtained by CP conjugation.
Other conventions as in Tables~\ref{tab:massless_fermion_splittings} and~\ref{tab:massless_vector_splittings}.} \label{tab:massless_scalar_splittings} \end{table} \egroup Beyond these, the major change is the introduction of the scalar doublet.\footnote{We neglect all $1\to 3$ splittings coming from either the scalar quartic or the scalar-gauge 4-point. These may feature single-logarithmic collinear divergences, but are expected to be strongly suppressed numerically due to an additional $O(1/16\pi^2)$ phase-space factor.} First, the scalars may themselves radiate $SU(2)_L$ and $U(1)_Y$ gauge bosons. The soft-collinear behavior is identical to that of their fermionic counterparts, but the hard-collinear behavior is different. Second, the electroweak gauge bosons can split into a pair of scalars, again in close analogy with splittings to fermion pairs. Third, fermions with appreciable Yukawa couplings to the scalar doublet can emit a scalar and undergo a helicity flip. Finally, the scalars can split into a pair of collinear, opposite-chirality (same-helicity) fermions. The corresponding splitting function kernels are found to be \begin{eqnarray} P_{H f}(z) = {z \over 2},\quad P_{H V}(z) = z \bar z,\quad P_{V H}(z) = {2 \bar z \over z},\quad P_{f H}(z) = {1\over 2}. \label{eq:EW} \end{eqnarray} The other possible splittings $H \to H^{(\prime)} V$ and $f_s \to f_{\text{-} s}^{(\prime)} H$ are given by $P_{H^{(\prime)} H}(z) = 2z/ \bar z$ and $P_{f_{\text{-} s}^{(\prime)} f_s}(z) = \bar z/2$, derived from $P_{V H}$ and $P_{H f}$, respectively.\footnote{Note that transitions involving the scalars must conserve the Higgs number introduced earlier in this section. For example, we may have $H^0 \to W^-\phi^+$, but not $H^0 \to W^+\phi^-$.
Similarly, $H^0 \to t_R \bar t_R$ is allowed but $H^0 \to t_L \bar t_L$ is not.} The splittings $W^0/B^0 \to H^0 H^{0(*)}$ can also be conveniently represented by the final-state $h\phi^0$, in what will ultimately become $hZ_{L}$ in mass/CP basis. Here the final-state bosons are entangled, but the effects of that entanglement are subtle and only become relevant if {\it both} bosons undergo secondary splittings and/or hard interactions. In practice, we will simply take the expedient of collapsing the final state to $h\phi^0$. The complete set of splitting functions is summarized in Tables~\ref{tab:massless_fermion_splittings} through~\ref{tab:massless_scalar_splittings}. The tables are organized according to the spin of the incoming particles: polarized fermions with helicity $s$, transverse gauge bosons ($V_T$), and scalars. Each table is further subdivided according to the spins of outgoing particles, all together corresponding to seven unique core splitting functions. The various table entries associated to a specific set of incoming and outgoing spins provide the remaining coupling and group theory factors. All of the splitting functions have a conventional collinear logarithmic enhancement $dk_T^2/k_T^2$, and those involving emission of a massless gauge boson have an additional soft logarithmic enhancement $dz/z$. (The latter are the only emissions that preserve the leading particle's helicity in the soft emission limit.) To represent the off-diagonal terms for the neutral gauge bosons (either in production or splitting, where appropriate), we use the symbol $[BW]^0$. Otherwise, processes involving $B^0$ or $W^0$ alone implicitly represent the respective diagonal term in the density matrix.
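Collecting the seven core kernels of Eqs.~(\ref{eq:qcd}) and~(\ref{eq:EW}) together with the $z \leftrightarrow \bar z$ relations quoted in the text, a minimal numerical sketch (coupling and group-theory factors stripped; function names are ours) is:

```python
# Massless collinear splitting kernels, common g^2/(8 pi^2 k_T^2) and
# charge/group-theory factors stripped.

def P_Vf(z):   # f -> V f : soft-enhanced as z -> 0
    zb = 1.0 - z
    return (1.0 + zb * zb) / z

def P_VpV(z):  # V -> V' V
    zb = 1.0 - z
    return (1.0 - z * zb) ** 2 / (z * zb)

def P_fV(z):   # V -> f fbar
    zb = 1.0 - z
    return (z * z + zb * zb) / 2.0

def P_Hf(z):   # f -> H f (helicity flip)
    return z / 2.0

def P_HV(z):   # V -> H H*
    return z * (1.0 - z)

def P_VH(z):   # H -> V H'
    return 2.0 * (1.0 - z) / z

def P_fH(z):   # H -> f fbar'
    return 0.5

# Dependent kernels obtained by z <-> zbar, as noted in the text:
def P_ff(z):   # f -> f V, equals (1 + z^2)/zbar
    return P_Vf(1.0 - z)

def P_HH(z):   # H -> H' V, equals 2 z / zbar
    return P_VH(1.0 - z)
```

The symmetry of $P_{V'V}$ and $P_{fV}$ under $z \leftrightarrow \bar z$ reflects the identical treatment of the two daughters, while the soft $dz/z$ enhancement resides only in the gauge-emission kernels.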
\section{Acknowledgements} This work was supported by the Austrian Science Fund (FWF) projects ViCom (F4109-N28) and POLOX (Grant No. I 2460-N36), by the ERC Advanced Research Grant `OxideSurfaces' and by the FWF Wittgenstein-prize (Z250-N16). The computational results were achieved by using the Vienna Scientific Cluster (VSC).
\section{Introduction} Throughout this paper we will work with the two-dimensional incompressible Euler equations in vorticity form. The evolution of the vorticity $\omega$ is given by \begin{equation}\label{euler} \begin{cases} \partial_t \omega+ u \cdot \nabla \omega = 0 \ &\text{ in } \mathbb{R}^2 \times \mathbb{R}_+, \\ u(\cdot,t) = -\nabla^{\perp}(-\Delta)^{-1}\omega(\cdot,t) &\text{ in }\mathbb{R}^2,\\ \omega(\cdot,0) = \omega_0&\text{ in }\mathbb{R}^2, \end{cases} \end{equation} where $\nabla^\perp := (-\partial_{x_2}, \partial_{x_1})$. Note that we can express $u$ as $ u(\cdot,t) = \nabla^\perp(\omega(\cdot,t) * \mathcal{N}), $ where $\mathcal{N}(x) := \frac{1}{2\pi}\ln |x|$ is the Newtonian potential in two dimensions. In this paper we will focus on constructing non-radial stationary solutions to equations \eqref{euler}. We will work in the \emph{patch} setting, where $\omega(\cdot,t)= \sum \Theta_i 1_{D_i(t)}$ is a sum of weighted indicator functions of bounded sets that move with the fluid, although some of our results translate into the smooth setting as well (where $\omega(\cdot,t)$ is smooth and compactly supported in $x$). \color{black} See Remark~\ref{remark1}. \color{black} For well-posedness results for patch solutions, see the global regularity results \cite{Bertozzi-Constantin:global-regularity-vortex-patches,Chemin:persistance-structures-fluides-incompressibles}.
\color{black} Stationary solutions of the Euler equations are an important building block, since they play a role in many different directions: for example, in understanding turbulence in 2D \cite{Caglioti-Lions-Marchioro-Pulvirenti:stationary-2d-euler}, and in 3D in realizing turbulent flows as superpositions of Beltrami flows (particular stationary solutions of 3D Euler whose curl is proportional to the velocity) \cite{Constantin-Majda:beltrami-spectrum,Dombre-Frisch-Greene-Henon-Mehr-Soward:chaotic-streamlines-abc,Pelz-Yakhot-Orszag-Shtilman-Levich:velocity-vorticity-patterns-turbulence}. In the context of numerical simulation, steady solutions of the 2D Euler equations were used by Chorin \cite{Chorin:numerical-study-slightly-viscous-flow} to perform numerical simulations of the 2D Navier-Stokes equations with small viscosity, approximating the NS solutions as a superposition of steady eddies of constant vorticity that solve the 2D Euler equations. In the context of convex integration \cite{DeLellis-Szekelyhidi:turbulence-geometry-nash-onsager,Buckmaster-Vicol:convex-integration-survey}, Beltrami flows and Mikado flows are classes of stationary solutions of the Euler equations that have been used within an iteration scheme to generate fast-oscillating perturbations in order to construct weak solutions. \color{black} In \cite{GomezSerrano-Park-Shi-Yao:radial-symmetry-stationary-solutions} we showed that any compactly supported stationary solution (in both the patch and smooth settings) with $\omega \geq 0$ must be radial. In this paper we address the necessity of the hypothesis $\omega \geq 0$ by answering (in the affirmative) the following question: \begin{question} Do there exist non-trivial stationary solutions for which $\omega$ changes sign? \label{question1} \end{question} Our first main result immediately gives the answer to the above question.
\begin{theorem}[Corollary~\ref{infinite_energy_solution}] \label{firsttheorem} There exist non-radial, sign-changing vortex patch solutions with analytic boundary to the 2D Euler equation~\eqref{euler} whose kinetic energy is infinite, that is, $\int_{\mathbb{R}^2}|\nabla^\perp (\omega * \mathcal{N})|^2dx = \infty$. \end{theorem} \begin{rem} In \cite[Theorem A]{GomezSerrano-Park-Shi-Yao:radial-symmetry-stationary-solutions}, it was shown that any non-negative stationary vortex patch must be radially symmetric up to a translation. Theorem~\ref{teoremaestacionarias} implies that by allowing an arbitrarily small portion of negative vorticity, one can find a non-radial stationary vortex patch. More precisely, for any $\epsilon>0$, one can find a non-radial stationary vortex patch $\omega$ such that $\int_{\mathbb{R}^2} \omega^-(x)dx <\epsilon$ while $\int_{\mathbb{R}^2} \omega^+(x)dx$ is uniformly bounded from below, where $\omega^{-}(x) := -\omega(x)1_{\left\{ x\in \mathbb{R}^2 : \omega(x) < 0 \right\}}$ and $\omega^+(x) := \omega(x)1_{\left\{ x\in \mathbb{R}^2 : \omega(x) >0 \right\}}$. \end{rem} Our second main theorem concerns the solutions with \textit{finite} kinetic energy. For a radial vorticity $\omega$ with zero average, $\int_{\mathbb{R}^2} \omega(x)dx=0$, the velocity vanishes outside the support of $\omega$. With this observation, one can easily produce \quotes{globally non-radial} solutions by placing multiple copies of such vorticity so that their supports do not overlap. See Figure~\ref{diagram3}. However, the flow on each connected component of the support of the velocity is still circular (around different points). From now on, we say that such a solution is locally radial. More precisely, a solution to the 2D Euler equation~\eqref{euler} is locally radial if $\omega$ restricted to each connected component of $\text{supp}(\omega)$ is radial up to a translation and $\int_{\mathcal{C}}\omega dx = 0$ for each connected component $\mathcal{C}$ of $\text{supp}(\omega)$.
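The vanishing of the velocity outside the support of a zero-average radial vorticity can be checked with a short numerical sketch (ours; it uses the standard reduction of the Biot--Savart law for radial $\omega$, namely $u_\theta(r) = \frac{1}{2\pi r}\int_{\{|x|<r\}}\omega\,dx$, and assumes piecewise-constant annular layers as in the patch setting):

```python
import math

def u_theta(r, layers):
    """Azimuthal speed of a piecewise-constant radial vorticity.
    layers: list of (Theta, r_in, r_out) annuli. By radial symmetry,
    u_theta(r) = (vorticity integral enclosed in |x| < r) / (2 pi r)."""
    enclosed = 0.0
    for theta, a, b in layers:
        ra, rb = min(r, a), min(r, b)
        enclosed += theta * math.pi * (rb * rb - ra * ra)
    return enclosed / (2.0 * math.pi * r)

# Zero-average example: omega = 1 on {r < 1} and -1/3 on {1 < r < 2},
# so the total integral is pi - (1/3) * pi * 3 = 0.
layers = [(1.0, 0.0, 1.0), (-1.0 / 3.0, 1.0, 2.0)]
```

The flow is nontrivial inside the support but vanishes identically for $r \ge 2$; this is what allows the non-overlapping copies in Figure~\ref{diagram3}.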
The next theorem states that there exist more non-trivial stationary solutions. \begin{figure}[h!] \begin{center} \includegraphics[scale=0.8]{diagram4.pdf} \caption{One can choose $\Theta$ so that the radial vorticity has zero average (left). Since the support of such velocity and vorticity coincide, one can place such solutions to produce non-radial solutions (right). \label{diagram3}} \end{center} \end{figure} \begin{theorem}[Corollary~\ref{corollary_1}, Theorem~\ref{ofc}] \label{secondtheorem} There exist vortex patch solutions to the 2D Euler equation~\eqref{euler} that are not locally radial, with finite kinetic energy and analytic boundary. Furthermore, the solutions have compactly supported velocity. \end{theorem} \begin{rem}\label{remark1} In this paper, we construct patch-type solutions with compactly supported velocity. This is a consequence of the existence of stationary solutions with finite kinetic energy and our key Lemma~\ref{zero mean}. We note that a smooth stationary solution with finite kinetic energy also has compactly supported velocity (see Remark~\ref{smooth_app}). \end{rem} \color{black} \subsection{2D Euler rigidity and construction of stationary solutions} In this subsection we will summarize some of the history of stationary solutions, mostly focusing on the rigidity (only trivial solutions exist) vs flexibility (non-trivial solutions exist) dichotomy. The first result goes back to Fraenkel \cite[Chapter 4]{Fraenkel:book-maximum-principles-symmetry-elliptic}, who proved that if $D$ is a stationary, simply connected patch, then $D$ must be a disk. 
The main idea uses the fact that in this setting, the stream function $\psi=1_D*\mathcal{N}$ solves a semilinear elliptic equation $\Delta \psi = g(\psi)$ in $\mathbb{R}^2$ with $g(\psi)=1_{\{\psi<C\}}$, and one can apply the moving plane method developed in \cite{Serrin:symmetry-moving-plane,Gidas-Ni-Nirenberg:symmetry-maximum-principle} using the monotonicity of $g$ to obtain the symmetry of $\psi$. However, this result does not cover the non simply-connected case due to the fact that $\psi$ may take different values on the different parts of the boundary and thus one can not apply moving plane techniques. This was solved by the authors and Yao in \cite{GomezSerrano-Park-Shi-Yao:radial-symmetry-stationary-solutions} using a variational approach that does not require this condition, and generalized to the smooth case as long as the vorticity is non-negative. Fraenkel's result (and method) was generalized to other classes of active scalar equations such as the generalized SQG (where the velocity in \eqref{euler} is given by the perpendicular gradient of the convolution with $\frac{1}{|x|^{\alpha}}$ as opposed to the Newtonian potential) by Reichel \cite[Theorem 2]{Reichel:balls-riesz-potentials}, Lu--Zhu \cite{Lu-Zhu:overdetermined-riesz-potential} and Han--Lu--Zhu \cite{Han-Lu-Zhu:characterization-balls-bessel-potentials} in the case of $\alpha \in[0,1)$ and Choksi--Neumayer--Topaloglu \cite{Choksi-Neumayer-Topaloglu:anisotropic-liquid-drop-models} in the case $\alpha \in[0,\frac53)$. In \cite{GomezSerrano-Park-Shi-Yao:radial-symmetry-stationary-solutions} we closed the problem for the full range $\alpha \in [0,2)$. In the last few years, there has been an emergence of results on rigidity conditions, namely under which hypotheses we can guarantee that the solution has some rigid features in order to ultimately characterize stationary solutions by other geometric properties (such as being a shear or being radial). 
These are usually referred to as ``Liouville''-type results. In the case of 2D Euler, Hamel--Nadirashvili in \cite{Hamel-Nadirashvili:liouville-euler,Hamel-Nadirashvili:shear-flow-euler-strip-halfspace} proved that any stationary solution without a stagnation point must be a shear flow whenever the domain is a strip and also in \cite{Hamel-Nadirashvili:rigidity-euler-annulus} proved the corresponding rigidity (radial symmetry) result whenever the domain is a two-dimensional bounded annulus, an exterior circular domain, a punctured disk or a punctured plane. Constantin--Drivas--Ginsberg \cite{Constantin-Drivas-Ginsberg:rigidity-flexibility-MHD, Constantin-Drivas-Ginsberg:rigidity-flexibility} obtained rigidity and flexibility results for Euler and other equations (such as MHD) in both 2D and 3D. Coti-Zelati--Elgindi--Widmayer \cite{CotiZelati-Elgindi-Widmayer:stationary-kolmogorov-poiseuille} constructed stationary solutions close to the Kolmogorov and Poiseuille flows in $\mathbb{T}^2$. In the case of 2D Navier--Stokes, Koch--Nadirashvili--Seregin--\v{S}ver\'ak also proved a Liouville theorem in \cite{Koch-Nadirashvili-Seregin-Sverak:liouville-navier-stokes}. See also \cite{GomezSerrano-Park-Shi-Yao:rotating-solutions-vortex-sheet,GomezSerrano-Park-Shi-Yao:rotating-solutions-vortex-sheet-rigidity} where together with Yao we proved rigidity and flexibility results for the vortex sheet problem. We now review some additional results related to the characterization or construction of nontrivial stationary solutions to 2D Euler (flexibility). Nadirashvili \cite{Nadirashvili:stationary-2d-euler}, following Arnold \cite{Arnold:geometrie-differentielle-dimension-infinie,Arnold:apriori-estimate-hydrodynamic-stability,Arnold-Khesin:topological-methods-hydrodynamics} studied the geometry and the stability of stationary solutions.
When the problem is posed on a surface, Izosimov--Khesin \cite{Izosimov-Khesin:characterization-steady-solutions-2d-euler} characterized stationary solutions of 2D Euler. Choffrut--\v{S}ver\'ak \cite{Choffrut-Sverak:local-structure-steady-euler} showed that locally near each stationary smooth solution there exists a manifold of stationary smooth solutions transversal to the foliation, and Choffrut--Sz\'ekelyhidi \cite{Choffrut-Szekelyhihi:weak-solutions-stationary-euler} showed that there is an abundant set of stationary weak ($L^{\infty}$) solutions near a smooth stationary one. Shvydkoy--Luo \cite{Luo-Shvydkoy:2d-homogeneous-euler,Luo-Shvydkoy:addendum-homogeneous-euler} looked at stationary smooth solutions of the form $v = \nabla^{\perp}(r^{\gamma}f(\omega))$, where $(r,\omega)$ are polar coordinates and were able to obtain a classification of them. In a different direction, Turkington \cite{Turkington:stationary-vortices} used variational methods to construct stationary vortex patches of a prescribed area in a bounded domain, imposing that the patch is a characteristic function of the set $\{\Psi > 0\}$, and also studied the asymptotic limit of the patches tending to point vortices. He also studied the case of unbounded domains. We emphasize that those solutions do not have finite energy, unless the domain is bounded. Long--Wang--Zeng \cite{Long-Wang-Zeng:concentrated-steady-vortex-patches} studied the regularity in the smooth setting (see also \cite{Cao-Wang:nonlinear-stability-patches-bounded-domains}) as well as their stability. For other variational constructions close to point vortices, we would like to mention the work done by Cao--Liu--Wei \cite{Cao-Liu-Wei:regularization-point-vortices}, Cao--Peng--Yan \cite{Cao-Peng:planar-vortex-patch-steady} and Smets--van Schaftingen \cite{Smets-VanSchaftingen:desingularization-vortices-euler}. 
Musso--Pacard--Wei \cite{Musso-Pacard-Wei:stationary-solutions-euler} constructed nonradial smooth stationary solutions with finite energy but without compact support in $\omega$. Our solutions are different from all of these constructions since they are not close to point vortices. The (nonlinear $L^1$) stability of circular patches was proved by Wan--Pulvirenti \cite{Wan-Pulvirenti:stability-circular-patches} and later Sideris--Vega gave a shorter proof \cite{Sideris-Vega:stability-L1-patches}. See also Beichman--Denisov \cite{Beichman-Denisov:stability-rectangular-strip} for similar results on the strip. {Recently, Choi--Lim \cite{Choi-Lim:stability-monotone-vorticities} generalized the stability results for radial patches to radially symmetric monotone vorticity.} Lately, Gavrilov \cite{Gavrilov-stationary-euler-3d,Gavrilov:stationary-euler-helix} managed to construct nontrivial stationary solutions of 3D Euler with compactly supported velocity, which was further simplified and extended to other equations by Constantin--La--Vicol \cite{Constantin-La-Vicol:remarks-gavrilov-stationary}. In \cite{Dominguez-Enciso-PeraltaSalas:piecewise-smooth-stationary-euler}, Dom\'inguez-V\'azquez--Enciso--Peralta-Salas construct a different family of non-localizable, stationary solutions of 3D Euler that are axisymmetric with swirl. Regarding the stability results in 3D, we refer to the work of Choi \cite{Choi:stability-hill-vortex} for solutions near Hill's spherical vortex. \subsection{Structure of the proofs} The skeleton of our proof employs bifurcation theory, trying to find a perturbation of trivial solutions (radial vorticity). 
A very natural procedure is to frame it using a Crandall-Rabinowitz Theorem approach (see \cite{Castro-Cordoba-GomezSerrano:uniformly-rotating-smooth-euler,Castro-Cordoba-GomezSerrano:global-smooth-solutions-sqg,Castro-Cordoba-GomezSerrano:existence-regularity-vstates-gsqg,Castro-Cordoba-GomezSerrano:analytic-vstates-ellipses,Castro-Lear:traveling-waves-couette,Garcia:Karman-vortex-street,Garcia:vortex-patch-choreography,Garcia-Hmidi-Soler:non-uniform-vstates-euler,Garcia-Hmidi-Mateu:time-periodic-3d-qg,GomezSerrano:stationary-patches,Hassainia-Hmidi:v-states-generalized-sqg, Hmidi-Mateu-Verdera:rotating-vortex-patch,Hmidi-delaHoz-Mateu-Verdera:doubly-connected-vstates-euler,Hmidi-Mateu-Verdera:rotating-doubly-connected-vortices,Hmidi-Mateu:bifurcation-kirchhoff-ellipses} for applications in the context of fluid mechanics). In a suitable functional setting, the problem boils down to finding non-trivial zeros of a nonlinear functional of the form (see \eqref{stationary_equation}): \[ \mathcal{F}(\Theta,R)=0, \quad \mathcal{F}:\mathbb{R}\times H^{k+1}(\mathbb{T})\to H^{k}(\mathbb{T}), \quad k\ge 3, \text{ given that $\mathcal{F}(\Theta,0) = 0$ for any $\Theta\in \mathbb{R}$}. \] Note that the loss of derivatives is due to the fact that each component of the functional in \eqref{stationary_equation} involves the tangent vector of the boundaries. An easy observation is that the non-degeneracy of the velocity at the trivial solution (more precisely, the angular velocity $\ne 0$) guarantees that the linearized functional is Fredholm. For a radial vorticity with infinite energy, this can be more explicitly revealed in the linearized operators obtained in Proposition~\ref{Linearized_operator}, where the angular velocity contributes to the diagonal elements of the matrix $M_n$, which can also be seen in \eqref{derivative_matrix_3}.
However, for a radial vorticity whose average vanishes, the corresponding velocity field, which can be explicitly computed, vanishes outside the support of the vorticity. At such trivial solutions, the linearized functional fails to be Fredholm and one cannot directly apply the Crandall-Rabinowitz Theorem. Indeed, the degeneracy of the velocity on the outermost boundary yields a mismatch, in terms of regularity, between the image spaces of the functional at the nonlinear level and at the linear level. One possible attempt to overcome this issue is to find two bifurcation curves emanating from negative-average vorticity and positive-average vorticity that are close to each other and show that the two curves merge together forming a loop, using the strategy in \cite{Hmidi-Renault:existence-small-loops-doubly-connected-euler} (see Figure~\ref{diagram1}). It is not difficult to find (in the three-layer setting) values for $\Theta^+$ and $\Theta^-$, and by numerical continuation one can observe such loops. \begin{figure}[h!] \begin{center} \includegraphics[scale=1.1]{diagram1.pdf} \caption{One possible strategy: Find two bifurcation curves emanating from negative average and positive average and show that those two curves are connected. If this is possible, then by continuity there must be a zero-average solution $(\Theta^*,R^*)$. \label{diagram1}} \end{center} \end{figure} Our key Lemma~\ref{zero mean} shows that the degeneracy of the velocity is not special to the trivial solutions, but a generic phenomenon. More precisely, if the vorticity $\omega$ is stationary (not necessarily radial) with zero-average, then $u = 0$ outside the support of $\omega$ as long as $\text{supp}(\omega)$ is simply connected. This gives two crucial implications: \begin{itemize} \item A non-trivial stationary solution near a trivial one cannot be obtained by using the implicit function theorem.
In Figure~\ref{diagram1}, $D_R\mathcal{F}(\Theta^*,R^*)$, the linearized operator at $(\Theta^*,R^*)$ (if it exists), cannot be an isomorphism. \item The mismatch in regularity between the image spaces described above may be treated in the spirit of the Nash-Moser scheme with the use of an \textit{approximate inverse}. \end{itemize} The first implication shows that one cannot use the Lyapunov-Schmidt reduction, which is a crucial tool in the strategy of \cite{Hmidi-Renault:existence-small-loops-doubly-connected-euler} to find a loop of bifurcation curves and, even more importantly, in the original proof of the Crandall-Rabinowitz theorem. Regarding the second implication, the Nash-Moser scheme has been successfully adapted to the context of steady-state solutions \cite{Choffrut-Sverak:local-structure-steady-euler,Iooss-Plotnikov:small-divisor,Iooss-Plotnikov-Toland:standing-waves-infinitely-deep-gravity} and dynamical solutions \cite{Lannes:well-posedness-water-waves,Rodrigo:evolution-sharp-fronts-qg}, and more recently it has been embedded into a KAM scheme in the context of quasiperiodic solutions for the water waves problem (see \cite{Baldi-Berti-Haus-Montalto:quasiperiodic-gravity-waves-finite-depth} and the references therein). In Section~\ref{Section4}, we will find a bifurcation curve without the use of such a reduction technique, and combine it with the Nash-Moser scheme in the following way: \begin{itemize} \item[1)] In Subsection~\ref{main_finite_1}, we slightly modify our functional setting given in Section~\ref{functional_setting} so that $(\Theta,R)$ represents a zero-average vorticity.
We are led to find non-trivial zeros of the modified functional (see \eqref{stationarR_equation3} for the definition of $G$) $$G:\mathbb{R}\times H^{k+1}\mapsto H^{k}, \quad \text{given that $G(\Theta,0) = 0 $ for any $\Theta\in \mathbb{R}$ }.$$ See also \eqref{def_G_stream}, where it is shown that each component of $G$ is the tangential derivative of the stream function on each boundary component. \item[2)] We analyze the linearized operator $D_RG$ to find $\Theta^*\in \mathbb{R}$ such that $D_RG(\Theta^*,0)$ satisfies (Subsection~\ref{Spectral}) \begin{itemize} \item[$\bullet$] Ker$(D_RG(\Theta^*,0))$ is one-dimensional, that is, Ker$(D_RG(\Theta^*,0)) = \text{span}\left\{ v\right\}$ for some $v\in C^\infty$. \item[$\bullet$] Im$(D_RG(\Theta^*,0))^\perp$ is one-dimensional. \item[$\bullet$] $D_RG(\Theta^*,0)$ satisfies the transversality condition: $\partial_\Theta D_RG(\Theta^*,0)[v]\notin \text{Im}D_RG(\Theta^*,0)$. \end{itemize} This indicates that a possible non-trivial solution can be found as a zero of the following functional for sufficiently small $s>0$: \begin{align}\label{tilde_g_intro} \tilde{G}_s:H^{k+1}\mapsto H^{k}, \quad \tilde{G}_s(R):=G(\Theta^*+v\cdot R,sv + (I-P)[R]), \end{align} where $P:H^{k+1}\mapsto C^\infty$ is the projection onto $\text{Ker}(D_RG(\Theta^*,0))$, and $I$ is the identity operator (see \eqref{def_tilde_g}). \item[3)] We perform Newton's method for the functional $\tilde{G}_s$. The two important ingredients for Newton's method to work are 1) a sufficiently good initial guess and 2) the invertibility of the linearized operator $D\tilde{G}_s$. For the initial guess, we already have $\tilde{G}_s(0) = O(s^2)$, since $v\in \text{Ker}(D_RG(\Theta^*,0))$. \item[4)] Regarding the second ingredient, we observe that the operator $D_RG(\Theta,R)$ can be decomposed \[ D_RG = a + A \text{ as in \eqref{decomp_11},} \] where the operator $a(\Theta,R)$ vanishes if $(\Theta,R)$ corresponds to a stationary solution.
This is due to the fact that the velocity on the outermost boundary vanishes by Lemma~\ref{zero mean}. This observation leads us to decompose $D\tilde{G}_s$, which can be immediately computed from \eqref{tilde_g_intro} as \[ D\tilde{G}_s = \partial_\Theta G\circ P + D_RG\circ (I-P) = \underbrace{ \partial_\Theta G\circ P + A\circ (I-P)}_{=: T_s} + a\circ (I-P), \quad \text{ see \eqref{T_sdef}}. \] Thanks to the transversality condition, the one-dimensionality of $\text{Im}(D_RG(\Theta^*,0))^\perp (=\text{Im}A(\Theta^*,0)^\perp$ since $a(\Theta^*,0)=0$) is compensated by $\partial_\Theta G\circ P$. Indeed, it is the main goal of Subsection~\ref{Analysis_T} to show that $T_s$ is invertible. \item[5)] As described in 4), we do not have the invertibility of $D\tilde{G}_s$; however, the invertibility of $T_s$ is enough for the iteration in \eqref{definition_of_approx_sol} to work, since $a$ tends to $0$ along the iteration steps towards the solution. More precisely, $T_s^{-1}$ plays the role of an approximate inverse of $D\tilde{G}_s$. The loss of derivatives occurring at each iteration step is treated by the use of a regularizing operator in \eqref{regularizing1} in the spirit of the Nash-Moser scheme. \end{itemize} \color{black} We note that our finite energy solutions consist of three-layered vortex patches, unlike the infinite energy case (see Subsections~\ref{main_infinite} and \ref{main_finite_1}). From the technical point of view, the two-layered, zero-average trivial solutions do not give a linearized operator with a finite-dimensional kernel. More precisely, one cannot find a pair of parameters $(b,\Theta)$ such that 1) the matrix $M_n$ in Proposition~\ref{Linearized_operator} has zero determinant and 2) the corresponding $\omega$ determined by $(b,\Theta)$ has zero average (see Remark~\ref{zero_mean_bifurcation1}).
An important aspect of such two-layered stationary vorticity is that its stream function $\psi$ always satisfies \begin{align}\label{stream_formulation} \Delta \psi = g(\psi), \text{ for some $g:\mathbb{R}\mapsto \mathbb{R}$}. \end{align} We emphasize that the solutions that we construct in this paper \textbf{cannot} be captured by \eqref{stream_formulation}, unlike the earlier works, for example \cite{Smets-VanSchaftingen:desingularization-vortices-euler,CotiZelati-Elgindi-Widmayer:stationary-kolmogorov-poiseuille,Constantin-Drivas-Ginsberg:rigidity-flexibility}. That is, there is no $g$ such that the stream function $\psi$ solves \eqref{stream_formulation}. This feature can be seen from the fact that the trivial solutions from which we bifurcate to obtain finite energy solutions exhibit non-monotone stream functions in the radial direction. Lastly, we point out that desingularization from point vortices cannot be used to find a stationary solution with finite energy, since a steady configuration of point vortices has zero vortex angular momentum, which together with the finite energy hypothesis implies that the individual circulations have to be all zero (see \cite[Lemma 1.2.1]{ONeil:stationary-point-vortices}). \subsection{Organization of the paper} The paper is organized in the following way: Section \ref{functional_setting} sets up the functional framework and the equation that a stationary solution has to satisfy. Section \ref{patchsetting} proves Theorem \ref{firsttheorem}, in the easier case where the finite energy hypothesis is dropped. Finally, Section \ref{Section4} proves Theorem \ref{secondtheorem} in full generality using the Nash-Moser scheme. The Appendix contains some technical background results used throughout the proof, as well as basic bifurcation theory and some integrals needed for the spectral study.
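To close this introduction, the approximate-inverse mechanism of steps 3)--5) above can be mimicked by a scalar toy iteration (ours, purely illustrative): the derivative splits into an invertible part and a part that vanishes at the solution, and Newton's method driven by the invertible part alone still converges.

```python
# Toy illustration (ours, not from the paper) of Newton's method with an
# approximate inverse.  For G(x) = x + x**2 the true derivative splits as
#   DG(x) = 1 + 2x =: T + a(x),
# where a(x) = 2x vanishes at the solution x* = 0 (mimicking the operator
# "a" of the paper, which vanishes at stationary solutions).  Iterating with
# T**(-1) = 1 alone, i.e. x_{n+1} = x_n - G(x_n), still converges, since the
# neglected part a tends to 0 along the iteration: x_{n+1} = -x_n**2.
def G(x):
    return x + x * x

x = 0.4
for _ in range(6):
    x = x - G(x) / 1.0   # approximate inverse: use T = 1, drop a(x) = 2x
print(abs(x))  # quadratically small residual after six steps
```

The scalar cancellation $x - G(x) = -x^{2}$ is the caricature of why the error contributed by $a\circ(I-P)$ is of higher order along the scheme.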
\section{Functional equations and functional setting for stationary vortex patches}\label{functional_setting} Let us consider a vorticity $\omega$ of the form $\omega:=\sum_{i=1}^n\Theta_i1_{D_{i}}$ for some $n\in \mathbb{N}$, where $\Theta_i\in \mathbb{R}$ and $D_i$ is a simply-connected domain for each $i=1,\ldots,n$. Suppose the boundary of $D_i$ is parametrized by a time-dependent $2\pi$-periodic curve $z_i(\cdot,t):\mathbb{T}\mapsto \mathbb{R}^2$. The evolution equations of the boundaries can be written as the system \begin{align} \label{evolution_patch} \partial_tz_j(x,t)\cdot\partial_xz_j(x,t)^{\perp} & = u(z_j(x,t),t) \cdot\partial_xz_j(x,t)^{\perp}, \end{align} where $u(\cdot,t):=\nabla^{\perp}\left(\omega(\cdot,t) *\frac{1}{2\pi}\log|\cdot| \right)$ is the velocity vector. We will look for stationary solutions to \eqref{evolution_patch}. Let us assume that each $z_i$ can be written as \begin{align}\label{bd_parametrization} z_i(x)=(b_i+R_i(x))(\cos(x),\sin(x)), \quad x\in \mathbb{T}, \end{align} where $b_i$ is a positive constant for each $i$. By plugging these parametrizations into \eqref{evolution_patch}, we are led to solve the following system for $b:=(b_1,\ldots,b_n)$, $\Theta:=(\Theta_1,\ldots,\Theta_n)$ and $R:=(R_1,\ldots,R_n)$: \begin{align}\label{stationary_equation} 0 =\mathcal{F}_j(b,\Theta,R):= u(z_j(\cdot)) \cdot z_j'^{\perp} &= \sum_{i=1}^n \frac{1}{4\pi}\Theta_i u_{i,j}(R)\cdot z_{j}'^{\perp}=\sum_{i=1}^n \frac{1}{4\pi}\Theta_i \underbrace{\left( u_{i,j}^{\theta}(R)R_j' - u_{i,j}^{r}(R)(b_j+R_j) \right)}_{=:S_{ij}(b_i,b_j,R_i,R_j) }, \end{align} where $u_{i,j}(R)(x) := \nabla^{\perp}\left( 1_{D_{i}}*\log|\cdot|^2 \right) (z_j(x))$, which can be thought of as the contribution of the $i$th patch to the velocity on the $j$th curve, and $u_{i,j}^{\theta}$ and $u_{i,j}^r$ are the angular and the radial components of $u_{i,j}$.
More explicitly, we have \begin{align}\label{sij} S_{ij}(b_i,b_j,R_i,R_j) & = \int_0^{2\pi} \cos(x-y)((b_i + R_i(y))R_j'(x) - (b_j + R_j(x))R_i'(y)) \nonumber\\ & \times \log((b_j + R_j(x))^2 + (b_i + R_i(y))^2 - 2(b_j + R_j(x))(b_i + R_i(y))\cos(x-y))dy \nonumber\\ & - \int_0^{2\pi} \sin(x-y)((b_i + R_i(y))(b_j + R_j(x)) + R_i'(y)R_j'(x)) \nonumber\\ & \times \log((b_j + R_j(x))^2 + (b_i + R_i(y))^2 - 2(b_j + R_j(x))(b_i + R_i(y))\cos(x-y))dy. \end{align} It is clear that $S_{ij}(b_i,b_j,0,0) = 0$, since $u_{i,j}^{r} = 0$ for radial patches. Thus we have \begin{align}\label{trivial_one} \mathcal{F}(b,\Theta,0) = 0, \text{ for any $b,\Theta$.} \end{align} In what follows, we will pick one of the $\Theta_i$ as a bifurcation parameter, while the others are fixed. We now proceed to discuss the functional spaces that we will use. In Section \ref{patchsetting} we will work with the following analytic spaces. Following \cite{Castro-Cordoba-GomezSerrano:analytic-vstates-ellipses}, we denote the space of analytic functions in the strip $\left\{ z \in \mathbb{C} : |\text{Im}(z)| \le c\right\}$ by $\mathcal{C}_w(c)$. For $k \in \mathbb{Z}$, we will consider the following spaces of $\frac{2\pi}{m}$-periodic functions: \begin{align*} X^{k,m}_{c} := \left\{ f(x) \in \mathcal{C}_{w}(c), \quad f(x) = \sum_{j=1}^{\infty}a_{jm}\cos(jmx),\quad \rVert f \rVert_{X^{k,m}_c} < \infty \right\}, \\ Y^{k,m}_{c} :=\left\{ f(x) \in \mathcal{C}_{w}(c), \quad f(x) = \sum_{j=1}^{\infty}a_{jm}\sin(jmx),\quad \rVert f \rVert_{Y^{k,m}_c} < \infty \right\}, \end{align*} where $\rVert f \rVert_{X^{k,m}_c}^2 = \rVert f \rVert_{Y^{k,m}_c}^2 := \sum_{j=1}^\infty |a_{jm}|^2(1+jm)^{2k}(\cosh(cjm)^2 + \sinh(cjm)^2)$.
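As a quick numerical sanity check of \eqref{sij} (ours, not part of the proof): at $R_i=R_j=0$ the first integral vanishes because $R_i'=R_j'=0$, and the integrand of the second is odd about $y=x$, so $S_{ij}(b_i,b_j,0,0)=0$, in accordance with \eqref{trivial_one}.

```python
import math

# Numerical check (ours) that S_{ij}(b_i, b_j, 0, 0) = 0: with R_i = R_j = 0
# the formula (sij) reduces to
#   -b_i b_j \int_0^{2pi} sin(x-y) log(b_j^2 + b_i^2 - 2 b_i b_j cos(x-y)) dy,
# whose integrand is odd about y = x, so the integral vanishes.
def S_radial(bi, bj, x, n=4000):
    # periodic rectangle rule on the torus; take bi != bj so that the
    # logarithm is evaluated away from its singularity
    h = 2.0 * math.pi / n
    total = 0.0
    for k in range(n):
        y = k * h
        total -= math.sin(x - y) * bi * bj * math.log(
            bj * bj + bi * bi - 2.0 * bi * bj * math.cos(x - y))
    return total * h

print(abs(S_radial(0.5, 1.0, 0.7)))  # ~ 0 up to quadrature/rounding error
```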
In Section \ref{Section4} we will consider the following spaces of $\frac{2\pi}{m}$-periodic functions: \begin{align*} X^{k,m} := \left\{ f(x) \in H^k(\mathbb{T}): \quad f(x) = \sum_{j=1}^{\infty}a_{jm}\cos(jmx),\quad \rVert f \rVert_{X^{k,m}} < \infty \right\}, \\ Y^{k,m} :=\left\{ f(x) \in H^k(\mathbb{T}): \quad f(x) = \sum_{j=1}^{\infty}a_{jm}\sin(jmx),\quad \rVert f \rVert_{Y^{k,m}} < \infty \right\}, \end{align*} where $\rVert f \rVert_{X^{k,m}} = \rVert f \rVert_{Y^{k,m}}:= \rVert f \rVert_{H^k(\mathbb{T})}$. \section{Warm-up: Existence of non-radial stationary vortex patches with infinite energy}\label{patchsetting} As a warm-up, in this section we aim to show that there exists a non-trivial patch solution with infinite kinetic energy, $\frac{1}{2\pi}\int_{\mathbb{R}^2} |\nabla \left( \omega * \log|\cdot|\right)|^2 dx = \infty$. Recall (\cite[Proposition 3.3]{Majda-Bertozzi:vorticity-incompressible-flow}) that \begin{align}\label{mean_energy} \int_{\mathbb{R}^2} |\nabla \left( \omega * \log|\cdot|\right)|^2 dx < \infty \iff \int_{\mathbb{R}^2}\omega(x)dx = 0. \end{align} In our proof, we will find a continuous bifurcation curve emanating from a two-layered vortex patch whose vorticity does not have zero mean. \subsection{Main results for infinite energy}\label{main_infinite} Let us consider two-layer vortex patches, that is, $i\in\left\{ 1,2\right\}$ in the setting of Section~\ref{functional_setting}. For $b,\Theta \in (0,1)$, using the scaling invariance of the equations, we will choose the parameters to be \begin{align}\label{parameters1} b_1=1,\quad b_2=b,\quad \Theta_1=\Theta,\quad \Theta_2=-1, \end{align} so that if $R_1=R_2=0$, then \begin{align}\label{vortexpatch} \omega = \begin{cases} \Theta & \text{ in }B_1\backslash B_b\\ \Theta -1 & \text{ in } B_b, \end{cases} \end{align} where $B_r$ denotes the disk of radius $r$ centered at the origin. Note that the case $\Theta=1$ corresponds to an annulus.
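The role of the zero-mean condition in \eqref{mean_energy} can be made concrete for the radial patch \eqref{vortexpatch}. For a radial vorticity, the classical formula $u^{\theta}(r)=\frac{1}{r}\int_{0}^{r}s\,\omega(s)\,ds$ (a standard fact, not stated above) gives $u^{\theta}(r)=\frac{\Theta-b^{2}}{2r}$ for $r\geq 1$, so the velocity decays only like $1/r$, with coefficient proportional to $\int\omega\,dx=\pi(\Theta-b^{2})$, and the kinetic energy diverges logarithmically unless $\Theta=b^{2}$. A small numerical check (ours):

```python
# Illustration (ours) of the dichotomy (mean_energy) for the radial patch
# (vortexpatch), using the classical formula for radial vorticities
#   u_theta(r) = (1/r) * \int_0^r s * w(s) ds,
# which yields u_theta(r) = (Theta - b**2) / (2 r) for r >= 1.
def u_theta(r, Theta, b, n=20000):
    def w(s):
        if s < b:
            return Theta - 1.0   # inner layer of (vortexpatch)
        if s < 1.0:
            return Theta         # outer layer
        return 0.0               # outside the patch
    h = r / n
    # midpoint rule for (1/r) * int_0^r s w(s) ds
    total = sum(w((k + 0.5) * h) * ((k + 0.5) * h) for k in range(n)) * h
    return total / r

Theta, b = 0.6, 0.3
print(u_theta(5.0, Theta, b))       # ~ (Theta - b**2) / (2*5) = 0.051
print(abs(u_theta(5.0, b * b, b)))  # ~ 0: zero-mean vorticity, u = 0 outside
```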
Later, we will fix $b$ as well, and let $\Theta$ play the role of the bifurcation parameter. With this setting, the system \eqref{stationary_equation} is equivalent to \begin{align} \label{stationary_equation2} 0=\mathcal{F}(\Theta,R)=(F^1(\Theta,R),F^2(\Theta,R)), \quad R=(R_1,R_2), \end{align} where we omit the dependence on $\Theta_2$, $b_1$ and $b_2$ for notational simplicity. \begin{theorem} \label{teoremaestacionarias} Let $k,m$ be such that $k\geq 3$, $2 \le m \in \mathbb{N}$. Let $b$ satisfy the condition in Lemma \ref{propbstar}. Then for some $c>0$ and $s_0=s_0(k,m,c,b)>0$, there exist two bifurcation curves $[0,s_0)\ni s\mapsto (\Theta^{\pm}(s),R^{\pm}(s)) \in \mathbb{R} \times (X^{k,m}_{c} )^2$ such that for each $s \in (0,s_0)$, $(\Theta^{\pm}(s),R^{\pm}(s))$ is a solution of the equation \eqref{stationary_equation2} and $ R^{\pm}(s) \ne 0$ in $(X^{k,m}_c)^2$. The bifurcation curves emanate from $(\Theta^{\pm}(0),R^{\pm}(0)) = (\Theta_m^{\pm},0)$, where $\Theta_m^{\pm}$ are defined in Lemma \ref{propbstar}. \end{theorem} Theorem~\ref{teoremaestacionarias} immediately implies the existence of non-radial stationary vortex patches with infinite energy. \begin{corollary}\label{infinite_energy_solution} Let $2\le m\in \mathbb{N}$. Then there is an $m$-fold symmetric stationary patch solution for the 2D Euler equation with analytic boundary and infinite kinetic energy, that is \[\int_{\mathbb{R}^2} |\nabla \left( \omega * \frac{1}{2\pi}\log|\cdot| \right)|^2 dx = \infty.\] \end{corollary} \begin{proof} By Theorem~\ref{teoremaestacionarias}, there are two continuous bifurcation curves $\Psi^{\pm} : [0,s_0) \mapsto \mathbb{R} \times \left( X^{k,m}_c\right)^2$ of solutions of \eqref{stationary_equation2} such that \begin{align*} \Psi^{\pm}(s) = \left( \Theta^{\pm}(s),R^{\pm}(s) \right), \quad \mathcal{F}(\Psi^{\pm}(s))=0, \quad \text{ and } \quad \Psi^{\pm}(0)=\left( \Theta_m^{\pm},0\right), \end{align*} for some $s_0>0$.
From \eqref{evolution_patch} and \eqref{stationary_equation}, it is clear that for each $s\in (0,s_0)$ and for each choice of $\pm$, \begin{align}\label{vorticity} \omega^s(x) := \begin{cases} \Theta^{\pm}(s)-1 & \text{ if }x\in D_2(s), \\ \Theta^{\pm}(s) & \text{ if }x\in D_1(s)\backslash \overline{D_2(s)},\\ 0 & \text{ otherwise, } \end{cases} \end{align} is a stationary solution to the Euler equation, where $D_1(s)$ and $D_2(s)$ are the bounded domains determined by \eqref{bd_parametrization} with $R^{\pm}(s)$. Since $R^{\pm}(s)\in \left( X^{k,m}_c\right)^2$, the boundaries are analytic. Now we consider the kinetic energy of the solution. From \eqref{mean_energy}, it suffices to show that $\int_{\mathbb{R}^2} \omega^0(x) dx \ne 0$; by the continuity of the bifurcation curve, this immediately implies that $\int_{\mathbb{R}^2} \omega^s(x) dx \ne 0$ for small $s>0$. Then it follows from Lemma~\ref{bstar_theta} that $b^2 < \Theta_m^{\pm}$, hence \begin{align*} \int_{\mathbb{R}^2}\omega^0(x)dx=\pi\left(-b^2+\Theta_m^{\pm}\right)>0. \end{align*} This completes the proof. \end{proof} The rest of this section will be devoted to proving Theorem~\ref{teoremaestacionarias}. The proof will be divided into 5 steps. These steps correspond to checking the hypotheses of the Crandall-Rabinowitz Theorem~\ref{CRtheorem} for our functional $\mathcal{F}$ in \eqref{stationary_equation2}. The hypotheses in Theorem~\ref{CRtheorem} can be read as follows in our setting: \begin{enumerate} \item The functional $\mathcal{F}$ satisfies $$\mathcal{F}\,:\, (0,1)\times V^{\epsilon}\mapsto (Y^{k-1,m}_{c})^2,$$ where $V^{\epsilon}$ is an open neighborhood of 0, \begin{align*} V^{\epsilon}=\left\{ (f,g)\in (X^{k,m}_{c})^2\,:\, ||f||_{X^{k,m}_{c}}+||g||_{X^{k,m}_{c}}<\epsilon \right\} \end{align*} for some $\epsilon>0$ and $k\geq 3$. \item $\mathcal{F}(\Theta,0) = 0$ for every $0 < \Theta< 1$.
\item The partial derivatives $\partial_{\Theta} \mathcal{F}$, $D\mathcal{F}$, $\partial_{\Theta} D\mathcal{F}$ exist and are continuous, where $D\mathcal{F}$ is the Gateaux derivative of $\mathcal{F}$ with respect to the functional variable $R$. \item Ker$(D\mathcal{F}(\Theta_m^{\pm},0)) \subset (X^{k,m}_c)^2$ and $(Y^{k-1,m}_{c})^2$/Im($D\mathcal{F}(\Theta_m^{\pm},0)$) are one-dimensional (see Lemma \ref{propbstar} for the definition of $\Theta_{m}^{\pm}$). \item $\partial_{\Theta} D\mathcal{F}(\Theta_m^{\pm},0)[v_0] \not \in$ Im($D\mathcal{F}(\Theta_m^{\pm},0)$), where $v_0$ is a non-zero element in Ker$(D\mathcal{F}(\Theta_m^{\pm},0))$. \end{enumerate} \begin{rem}\label{analyticity_of_S} We remark that if $i \ne j$ then the functions inside the logarithm in $S_{i,j}$ in \eqref{sij} are uniformly bounded from below in $y$ for all $x$ by a strictly positive constant depending on the parameters. Then we can analytically extend the integrand in $x$ to the strip $|\Im(z)| \leq c$ in such a way that the real part of this extension stays uniformly bounded away from 0 for a small enough $c$. The case $i=j$ can be treated similarly as in \cite[Remark 2.1]{Castro-Cordoba-GomezSerrano:analytic-vstates-ellipses}. \end{rem} \subsection{Proof of Theorem~\ref{teoremaestacionarias}} \subsubsection{Steps 1,2 and 3: Regularity}\label{regularity_step} In order to check the first three steps, it suffices to check that $S_{ij}$ in \eqref{sij} satisfies the hypotheses, since $\mathcal{F}$ is a linear combination of the $S_{ij}$. As mentioned in Remark~\ref{analyticity_of_S}, the case $i\ne j$ is trivial, since the integrand has no singularity and can be analytically extended to a strip in $\mathbb{C}$ if $(R_1,R_2)$ is in a sufficiently small neighborhood of $(0,0)$. For $i=j$, the first three steps have already been carried out in the literature in slightly different settings. For example, step 1 can be done in the same way as in \cite{delaHoz-Hassainia-Hmidi:doubly-connected-vstates-gsqg}.
Step 2 follows immediately from \eqref{trivial_one}. Existence and continuity of the Gateaux derivatives for the gSQG equation were established in \cite{GomezSerrano:stationary-patches}, and the same proof can be adapted to our setting straightforwardly. \subsubsection{Step 4: Analysis of the linear part.} In this step, we focus on the spectral study of the Gateaux derivative $D\mathcal{F}(\Theta,0):=D_{R}\mathcal{F}(\Theta,0)$. \paragraph{Calculation of $D\mathcal{F}$} We aim to express the Gateaux derivative of $\mathcal{F}$ around $(\Theta,0)$ in the direction $(H(x),h(x))$ in terms of Fourier series. \begin{lemma}\label{linearpart1} Let $S_{ij}$ be defined as in \eqref{sij}. Then: \begin{align*} \left.\frac{d}{dt}S_{ij}(b_i,b_j,th_i,th_j)\right|_{t=0} & = \int b_i (h_j'(x)\cos(x-y)-h_i'(y))\log(b_j^2 + b_i^2 - 2b_jb_i\cos(x-y))dy =: \mathcal{L}_1+\mathcal{L}_2, \end{align*} where $\mathcal{L}_1$ and $\mathcal{L}_2$ denote the contributions of the terms with $h_j'$ and $h_i'$, respectively. \end{lemma} \begin{proof} Let $V^{1,ab}_{ij}$ (resp. $V^{2,ab}_{ij}$) be the contribution of the first term (resp. second term) of \eqref{sij} where the first factor contributes with a $t^a$ and the second with $t^b$. We are looking for all combinations such that $a+b = 1$. We start by looking at the first summand. We have that: \begin{align*} V_{ij}^{1,10} & = \int \cos(x-y)(b_ih_j'(x)-b_jh_i'(y))\log(b_j^2 + b_i^2 - 2b_jb_i\cos(x-y))dy. \end{align*} Similarly, for the second one, \begin{align*} V_{ij}^{2,01} & = -2\int \sin(x-y)(b_i b_j) \frac{h_j(x)(b_j-b_i\cos(x-y)) + h_i(y)(b_i-b_j\cos(x-y))}{b_j^2+b_i^2-2b_jb_i\cos(x-y)}dy, \\ V_{ij}^{2,10} & = -\int \sin(x-y)(b_ih_j(x) + b_jh_i(y))\log(b_j^2 + b_i^2 - 2b_jb_i\cos(x-y))dy, \end{align*} where we have used Lemma \ref{lemaexpansionlog}.
Integrating by parts in $V_{ij}^{2,01}$: \begin{align*} V_{ij}^{2,01} & = \int ((b_i h_j(x) + b_jh_i(y))\sin(x-y) - h_i'(y)(b_i - b_j\cos(x-y))) \log(b_j^2+b_i^2-2b_jb_i\cos(x-y)) dy. \end{align*} Finally, adding all the log terms and the non-log terms together: \begin{align*} V_{ij}^{1,10} + V_{ij}^{2,01} + V_{ij}^{2,10} & = \int b_i (h_j'(x)\cos(x-y)-h_i'(y))\log(b_j^2 + b_i^2 - 2b_jb_i\cos(x-y))dy, \end{align*} as we wanted to prove. \end{proof} \begin{lemma}\label{linearpart2} Let $h_i = A_i \cos(mx)$ and $r = \frac{\min\{b_i,b_j\}}{\max\{b_i,b_j\}}$. Then: \begin{align*} \left.\frac{d}{dt}S_{ij}(b_i,b_j,th_i,th_j)\right|_{t=0} = 2\pi\sin(mx)m b_i \left(A_j r - A_i \frac{r^m}{m}\right). \end{align*} \end{lemma} \begin{proof} From Lemma~\ref{linearpart1} and Corollary~\ref{integral_a0}, we have that \begin{align*} \mathcal{L}_{1} & = -m b_i A_j\sin(mx) \mathcal{A}_{0}(r,1), \\ \mathcal{L}_{2} & = m b_i A_i \sin(mx) \mathcal{A}_{0}(r,m). \end{align*} Adding the two contributions gives the desired result. \end{proof} Note that the functional $\mathcal{F}$ is a linear combination of the $S_{ij}$ (see \eqref{stationary_equation}). Using the above two lemmas, we obtain the following proposition: \begin{prop}\label{Linearized_operator} Let $h(x) = \sum_{n}a_n \cos(nx)$ and $H(x) = \sum_{n}A_n\cos(nx)$. Then we have that: \begin{align*} D\mathcal{F}(\Theta,0)[H,h] = \left(\begin{array}{c}U(x) \\ u(x) \end{array}\right), \end{align*} where \begin{align*} U(x) = \sum_{n}U_n \sin(nx), \quad u(x) = \sum_{n} u_n \sin(nx), \end{align*} and the coefficients satisfy, for any $n$: \begin{align*} \left(\begin{array}{c}U_n \\ u_n \end{array}\right) := (-n) M_n(\Theta) \left(\begin{array}{c}A_n \\ a_n \end{array}\right) = (-n) \left( \begin{array}{cc} \frac{b^{2}}{2} - \frac{\Theta}{2} + \frac{\Theta}{2n} & - \frac{b^{n+1}}{2n} \\ \Theta\frac{b^{n}}{2n} & -\frac{b}{2n} + \frac{b}{2}(1-\Theta) \end{array} \right) \left(\begin{array}{c}A_n \\ a_n \end{array}\right).
\end{align*} \end{prop} \begin{proof} It follows from \eqref{parameters1}, the definition of $\mathcal{F}$ in \eqref{stationary_equation2}, \eqref{stationary_equation} and Lemma~\ref{linearpart2}. \end{proof} \paragraph{One-dimensionality of the kernel of the linear operator.} Our goal here is to verify the one-dimensionality of Ker($D\mathcal{F}(\Theta,0)$) for some $\Theta$. More precisely, we will prove the following proposition: \begin{prop}\label{onedimensionality} Fix $2\leq m \in \mathbb{N}$ and take any $b\in (0,b_m)$, where $b_m$ is as in Lemma~\ref{propbstar}. Then there exist two values $\Theta_m^{\pm} \in (0,1)$ such that Ker$(D\mathcal{F}(\Theta_m^{\pm},0))$ is one-dimensional. Furthermore, \begin{align*} \text{Ker}(D\mathcal{F}(\Theta_m^{\pm},0)) = \text{span}\left\{ v_0(\Theta_m^{\pm})\cos(mx) := \begin{pmatrix}\frac{1}{2m}b - \frac{b}{2}(1-\Theta_m^{\pm})\\ \frac{\Theta_m^{\pm}}{2m}b^m \end{pmatrix} \cos(mx) \right\} \subset X^{k,m}_c \times X^{k,m}_c. \end{align*} \end{prop} The proof of the above proposition relies on the analysis of the matrix $M_n$ in Lemmas~\ref{propbstar} and \ref{propbstar2}, which we prove below. \begin{lemma}\label{propbstar} Let $\Delta_{m}(\Theta)$ be \begin{align}\label{determinantm} \Delta_{m}(\Theta) := \frac{4m^2}{b}\text{det}(M_{m}(\Theta)) = \left(\Theta b^{2m} + b^{2}m(m(1-\Theta) - 1) + \Theta(1-m)(m(1-\Theta)-1)\right). \end{align} Then, for any $m \geq 2$, there exists $0 < b_m < 1$ such that for any $0<b<b_m<1$, there exist $0<\Theta^{-}_m<\Theta^{+}_m<1$ such that $\Delta_{m}(\Theta_{m}^{\pm}) = 0$. We also have that rk$(M_{m}(\Theta_{m}^{\pm})) = 1$ for those values of $\Theta^{+}_m$, $\Theta^{-}_m$, where rk$(A)$ is the rank of the matrix $A$. \end{lemma} \begin{proof} For fixed $m$, we study the polynomial $\Delta_{m}(\Theta)$. We need to solve \begin{align}\label{thetaequation} \Delta_{m}(\Theta)=m(m-1)\Theta^2-((m-1)^2+b^2m^2-b^{2m})\Theta+b^2m(m-1)=0.
\end{align} Since $\Delta_m(\Theta)$ is a quadratic polynomial in $\Theta$, to obtain two distinct real roots we only need to show that its discriminant is positive. We have \begin{align*} &D_m:=((m-1)^2+b^2m^2-b^{2m})^2-4m^2(m-1)^2b^2\\ &=((m-1)^2+b^2m^2-b^{2m}-2m(m-1)b)((m-1)^2+b^2m^2-b^{2m}+2m(m-1)b)\\ &=(m-1-bm-b^{m})(m-1-bm+b^{m})((m-1)^2+b^2m^2-b^{2m}+2m(m-1)b)\\ &=:D_{m,1}\cdot D_{m,2}\cdot D_{m,3}. \end{align*} Since $m\geq 2$ and $0<b<1$, we have $b^{2m}<b^{2}m^{2}$, and hence $D_{m,3}>0$. We also have \[ D_{m,2}=m(1-b)+b^m-1=(1-b)[m-\frac{1-b^m}{1-b}]=(1-b)(m-(1+b+b^2+...+b^{m-1}))>0. \] $D_{m,1}$ is decreasing in $b$ when $b\geq 0$, and $D_{m,1}(0)=m-1>0$, $D_{m,1}(1)=-2$. Let $b_m$ be the unique zero of $D_{m,1}$ in $(0,1)$. If we take $0<b<b_m$, we have \begin{align}\label{d1mnonzero} D_{m,1}>0, \quad D_{m}>0. \end{align} Hence $\Delta_m(\Theta)$ has two distinct roots $\Theta_m^-<\Theta_m^+$. Moreover, since $M_{m}(\Theta_{m}^{\pm})$ is not the zero matrix when $b\neq 0$, the vanishing of its determinant implies rk$(M_{m}(\Theta_{m}^{\pm})) = 1$. Now we are left to show $0<\Theta_m^{\pm}<1$. We have \[ \Theta_m^{+}\Theta_m^{-}=b^2<1, \] and, since $b^{2m}<b^2m^2$, \[ \Theta_m^{+}+\Theta_m^{-}=\frac{((m-1)^2+b^2m^2-b^{2m})}{m(m-1)}> \frac{(m-1)^2}{m(m-1)}>0. \] Hence both roots are positive, and $\Theta_m^{-}\le\sqrt{\Theta_m^{+}\Theta_m^{-}}=b<1$. If $\Theta_m^{+}\geq1$, then $\Delta_{m}(1)\leq 0$. However, \begin{align}\label{deltam1} &\Delta_m(1)=b^{2m}-mb^2-(1-m)\\\nonumber &=(1-b^2)(m-\frac{1-b^{2m}}{1-b^2})\\\nonumber &=(1-b^2)(m-1-b^2-...-b^{2m-2})>0. \end{align} Therefore $0<\Theta_m^{\pm}<1$. \end{proof} We now show that $\Delta_{jm}(\Theta_{m}^{\pm})\neq 0$ for any $j \neq 1$. \begin{lemma}\label{propbstar2} Let $j > 1$ and let $\Theta_{m}^{\pm}$ and $\Theta_{jm}^{\pm}$ be defined as in the previous Lemma. Then $\Theta_{jm}^{+}>\Theta_{m}^{+}>\Theta_{m}^{-}>\Theta_{jm}^{-}$. Hence, $M_{jm}(\Theta_{m}^{\pm})$ is non-singular for all $j>1$. \end{lemma} \begin{proof} Note that $\Theta_{m}^{\pm}$ solves the equation \[ \Theta^2-\frac{((m-1)^2+b^2m^2-b^{2m})}{m(m-1)}\Theta+b^2=0.
\] We only need to show that $F(b,m):=\frac{((m-1)^2+b^2m^2-b^{2m})}{m(m-1)}$ is strictly increasing with respect to $m$. We have \begin{align*} &F(b,m+1)-F(b,m)\\ &=\frac{(m^2+b^2(m+1)^2-b^{2(m+1)})}{(m+1)m}-\frac{((m-1)^2+b^2m^2-b^{2m})}{m(m-1)}\\ &=\frac{(m^2+b^2(m+1)^2-b^{2(m+1)})(m-1)-((m-1)^2+b^2m^2-b^{2m})(m+1)}{(m+1)m(m-1)}\\ &=\frac{b^2((m+1)^2(m-1)-m^2(m+1))+m^2(m-1)-(m+1)(m-1)^2-b^{2m+2}(m-1)+(m+1)b^{2m}}{(m+1)m(m-1)}\\ &=\frac{-b^2(m+1)+m-1-b^{2m+2}(m-1)+b^{2m}(m+1)}{(m+1)m(m-1)}. \end{align*} Thus \begin{align*} &F(b,m+1)-F(b,m)>0\Leftrightarrow -b^2(m+1)+m-1-b^{2m+2}(m-1)+b^{2m}(m+1)>0\\ &\Leftrightarrow (1+b^2)(-1+b^{2m})-(-1+b^2)(1+b^{2m})m>0\Leftrightarrow -(1+b^2)(b^{2m-2}+b^{2m-4}...+b^2+1)+(1+b^{2m})m>0\\ &\Leftrightarrow -\sum_{k=1}^{m}b^{2m-2k}-\sum_{k=1}^{m}b^{2k}+(1+b^{2m})m>0 \Leftrightarrow \sum_{k=1}^{m}(1-b^{2k})(1-b^{2m-2k})>0. \end{align*} The last inequality is clear since $0<b<1$: every summand is non-negative, and the summand with $k=1$ is strictly positive. \end{proof} \begin{proofprop}{onedimensionality} Let $m \ge 2$ and let $b$, $\Theta_m^{\pm}$ be as defined in Lemma~\ref{propbstar}. Assume that $H(x) = \sum_{j}A_{jm}\cos(jmx)$ and $h(x) = \sum_{j}a_{jm} \cos(jmx)$ satisfy \[ D\mathcal{F}(\Theta_m^{\pm},0)[H,h] = (0,0). \] Then it follows from Proposition~\ref{Linearized_operator} that \begin{align*} M_{jm}(\Theta_m^{\pm}) \begin{pmatrix} A_{jm}\\ a_{jm} \end{pmatrix} = \begin{pmatrix} 0\\ 0 \end{pmatrix}. \end{align*} For all $j>1$, it follows from Lemma~\ref{propbstar2} that $M_{jm}(\Theta_m^{\pm})$ is invertible, thus $A_{jm} = a_{jm} = 0$. For $j=1$, Lemma~\ref{propbstar} tells us that $(A_m,a_m)\in \text{Ker}(M_m(\Theta_m^{\pm})) = \text{span}\left\{ v_0(\Theta_m^{\pm}) := \begin{pmatrix}\frac{1}{2m}b - \frac{b}{2}(1-\Theta_m^{\pm})\\ \frac{\Theta_m^{\pm}}{2m}b^m \end{pmatrix} \right\}$. This finishes the proof. \end{proofprop} \paragraph{Codimension of the image of the linear operator.} We now characterize the image of $D\mathcal{F}(\Theta_m^{\pm},0)$.
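Before doing so, we record a quick numerical sanity check (ours, illustrative only) of the spectral analysis above. For $m=3$ and $b=0.3$ one has $D_{m,1}=2-0.9-0.027>0$, so $b<b_{m}$; the sketch below solves \eqref{thetaequation} for $\Theta_{m}^{\pm}$, verifies that $\Theta_{m}^{+}\Theta_{m}^{-}=b^{2}$ with both roots in $(0,1)$, that $v_{0}(\Theta_{m}^{\pm})$ is annihilated by $M_{m}(\Theta_{m}^{\pm})$, and that $M_{jm}(\Theta_{m}^{\pm})$ stays non-singular for $j>1$.

```python
import math

# Numerical sanity check (ours, not part of the proof) of Lemma "propbstar",
# Lemma "propbstar2" and Proposition "onedimensionality" for m = 3, b = 0.3.
def theta_roots(m, b):
    # roots of (thetaequation): m(m-1) T^2 - ((m-1)^2 + b^2 m^2 - b^{2m}) T
    #                           + b^2 m(m-1) = 0
    A = m * (m - 1)
    B = (m - 1) ** 2 + b ** 2 * m ** 2 - b ** (2 * m)
    C = b ** 2 * m * (m - 1)
    d = math.sqrt(B * B - 4 * A * C)
    return (B - d) / (2 * A), (B + d) / (2 * A)

def M(n, theta, b):
    # the matrix M_n(Theta) of Proposition "Linearized_operator"
    return [[b ** 2 / 2 - theta / 2 + theta / (2 * n), -b ** (n + 1) / (2 * n)],
            [theta * b ** n / (2 * n), -b / (2 * n) + b * (1 - theta) / 2]]

m, b = 3, 0.3
tm, tp = theta_roots(m, b)  # Theta_3^- ~ 0.135, Theta_3^+ ~ 0.667
print(0 < tm < tp < 1, abs(tm * tp - b ** 2))  # roots in (0,1), product b^2
for t in (tm, tp):
    v0 = [b / (2 * m) - b * (1 - t) / 2, t * b ** m / (2 * m)]  # kernel vector
    r = [sum(M(m, t, b)[i][k] * v0[k] for k in range(2)) for i in range(2)]
    print(max(abs(c) for c in r))              # M_m(Theta^{pm}) v_0 ~ 0
    dets = [abs(M(j * m, t, b)[0][0] * M(j * m, t, b)[1][1]
                - M(j * m, t, b)[0][1] * M(j * m, t, b)[1][0])
            for j in range(2, 8)]
    print(min(dets) > 1e-6)                    # M_{jm} non-singular for j > 1
```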
We have the following proposition: \begin{prop}\label{codimension_one} Let \begin{align*} Z = \left\{(Q,q) \in Y^{k-1,m}_c \times Y^{k-1,m}_c, Q(x) = \sum_{j=1}^{\infty}Q_{jm}\sin(jmx), q(x) = \sum_{j=1}^{\infty}q_{jm}\sin(jmx), \right.\\ \left.\exists \lambda_{Q,q} \in \mathbb{R} \text{ s.t.} \left(\begin{array}{c}Q_{m} \\ q_{m}\end{array}\right)= \lambda_{Q,q} \left( \begin{array}{c} -\frac{1}{2m}b^{m+1} \\ -\frac{1}{2m}b + \frac{b}{2}(1-\Theta_m^{\pm}) \end{array} \right)\right\}. \end{align*} Then $Z = \text{Im}\left(D\mathcal{F}(\Theta_m^{\pm},0)\right)$. \end{prop} \begin{proof} In view of Proposition~\ref{Linearized_operator}, $\text{Im}\left( D\mathcal{F}(\Theta_m^{\pm},0) \right) \subset Z$ is trivial, since $M_{jm}(\Theta_m^{\pm})$ is non-singular for $j>1$, and $\text{Im}(M_m(\Theta_m^{\pm})) = \text{span}\left\{ \begin{pmatrix} -\frac{1}{2m}b^{m+1} \\ -\frac{1}{2m}b + \frac{b}{2}(1-\Theta_m^{\pm}) \end{pmatrix} \right\}.$ In order to prove $\text{Im}\left( D\mathcal{F}(\Theta_m^{\pm},0) \right) \supset Z$, we need to check that the natural preimage has the desired regularity. To do so, we have the following asymptotic lemma: \begin{lemma}\label{bstar_theta} For fixed $m\geq 2$ and $b$ as in Proposition~\ref{onedimensionality}, we have $b^2-\Theta_m^{\pm}< 0 $ and \begin{align} \frac{b}{4j^2m^2}\Delta_{jm}(\Theta_m^{\pm})= \frac{b}{4}\left( b^2-\Theta_m^{\pm} \right)(1-\Theta_m^{\pm})+O\left( \frac{1}{jm} \right), \quad \text{ as }j\to\infty. \end{align} Consequently, we have \begin{align}\label{asymptote} \left|\text{det}(M_{jm}(\Theta_m^{\pm})^{-1})\right| \lesssim_{m,\Theta} 1, \quad \text{ for sufficiently large }j. \end{align} \end{lemma} \begin{rem}\label{zero_mean_bifurcation1} As shown in the above lemma, there is no bifurcation curve from the two-layered vortex patch with zero-average.
This is due to the fact that the radial vorticity $\omega$ determined by $b$ and $\Theta_m^{\pm}$ as in \eqref{vortexpatch} satisfies $\int_{\mathbb{R}^2}\omega dx = \pi\left(-b^2+\Theta_m^{\pm}\right) > 0$. Note that if we require $\Theta=b^2$ to ensure $\int \omega dx = 0$, it follows from \eqref{determinantm} that $\Delta_m(b^2)=mb^2(1-b^2)+b^2(b^{2m}-1)$, which does not vanish for any $m\ge 2$ unless $b=0$ or $b=1$. Therefore, for any $0<b<1$, the linearized operator is an isomorphism and the implicit function theorem shows that there cannot be a bifurcation. \end{rem} \color{black} \begin{prooflem}{bstar_theta} First we show that $b^2< \Theta_m^{\pm}$. By Lemma \ref{propbstar}, $\Theta_m^+ \Theta_m^-=b^2$ and $0<\Theta_m^{\pm}<1$. Thus $\Theta_m^{-}=b^2/\Theta_m^{+}>b^2$, and a fortiori $\Theta_m^{+}>b^2$. Therefore the first assertion is proved. The second assertion follows directly from \eqref{determinantm} since \begin{align*} \frac{b}{4(jm)^2}\Delta_{jm}(\Theta_m^{\pm})=\frac{b}{4(jm)^2}\left((b^2-\Theta_m^{\pm})(1-\Theta_m^{\pm})(jm)^2+O\left( jm \right) \right)=\frac{b}{4}\left(b^2-\Theta_m^{\pm}\right)(1-\Theta_m^{\pm})+O\left( \frac{1}{jm} \right). \end{align*} Lastly, choosing $j$ large enough so that $|\frac{b}{4(jm)^2}\Delta_{jm}(\Theta_m^{\pm})|>\frac{b}{8}(\Theta_m^{\pm}-b^2)(1-\Theta_m^{\pm})>0$, we have \begin{align*} |\text{det}(M_{jm}(\Theta_m^{\pm})^{-1})| = \frac{1}{|\frac{b}{4(jm)^2}\Delta_{jm}(\Theta_m^{\pm})|}\lesssim 1, \end{align*} which proves \eqref{asymptote}. \end{prooflem} Now for an element $(Q,q)\in Z$, let $(H,h)$ be such that \begin{align*} H(x)=\sum_{j=1}^\infty A_{jm}\cos(jmx), \quad h(x)=\sum_{j=1}^\infty a_{jm}\cos(jmx), \end{align*} with \begin{align*} \begin{pmatrix} A_m\\ a_m \end{pmatrix} =-\frac{1}{m} \begin{pmatrix} 0\\ \lambda_{Q,q} \end{pmatrix}, \quad \begin{pmatrix} A_{jm}\\ a_{jm} \end{pmatrix} = (-jm)^{-1}M_{jm}(\Theta_m^{\pm})^{-1} \begin{pmatrix} Q_{jm}\\ q_{jm} \end{pmatrix} \quad \text{ for $j>1$}.
\end{align*} It is clear from \eqref{Linearized_operator} that $D\mathcal{F}(\Theta_m^{\pm},0)[H,h]=(Q,q).$ We will prove that $(H,h) \in X^{k,m}_c \times X^{k,m}_c$. From Lemma~\ref{bstar_theta} and the fact that $M_{jm}(\Theta_m^{\pm})$ is nonsingular for $j>1$, it follows that \begin{align}\label{asymptote2} |A_{jm}|^2+|a_{jm}|^2 \lesssim (jm)^{-2}\left(|Q_{jm}|^2+|q_{jm}|^2\right) \quad \text{ for all }j>1. \end{align} Thus, we obtain \begin{align*} \lVert H\rVert_{X^{k,m}_c}^2+\lVert h\rVert_{X^{k,m}_c}^2 &= \sum_{j=1}^{\infty}\left( |A_{jm}|^2+|a_{jm}|^2 \right)(1+jm)^{2k}(\cosh(cjm)^2+\sinh(cjm)^2)\\ & \lesssim \frac{1}{m^{2}}\lambda_{Q,q}^{2}(1+m)^{2k}(\cosh(cm)^{2} + \sinh(cm)^{2}) \\ & + \sum_{j=2}^{\infty} (jm)^{-2}\left(|Q_{jm}|^2+|q_{jm}|^2\right) (1+jm)^{2k}(\cosh(cjm)^{2} + \sinh(cjm)^{2})\\ &\lesssim 1+(\lVert Q\rVert_{Y^{k-1,m}_c}^2+\lVert q\rVert_{Y^{k-1,m}_c}^2)\\ & <\infty. \end{align*} This proves that $(H,h)\in X^{k,m}_c\times X^{k,m}_c$, and therefore $Z\subset \text{Im}\left( D\mathcal{F}(\Theta_m^{\pm},0) \right)$. \end{proof} \subsubsection{Step 5: Transversality}\label{Step_5_Transversality} \begin{prop}\label{transv_prop} We have that \begin{align}\label{transversality_1} \partial_{\Theta} D\mathcal{F}(\Theta_m^{\pm},0)[v_0] \not \in \text{Im}(D\mathcal{F}(\Theta_m^{\pm},0)), \end{align} where $v_0=v_0(\Theta_m^{\pm})$ is as given in Proposition~\ref{onedimensionality}.
\end{prop} \begin{proof} For $h(x) = \sum_{j}a_{jm} \cos(jmx),$ $H(x) = \sum_{j}A_{jm}\cos(jmx)$, we have that (see Proposition~\ref{Linearized_operator}): \begin{align*} \partial_\Theta D\mathcal{F}(\Theta_m^{\pm},0)[H,h] = \left(\begin{array}{c}U(x) \\ u(x) \end{array}\right), \end{align*} where \begin{align*} U(x) = \sum_{j}U_{jm} \sin(jmx), \quad u(x) = \sum_{j} u_{jm} \sin(jmx), \end{align*} and the coefficients satisfy, for any $j$: \begin{align*} \left(\begin{array}{c}U_{jm} \\ u_{jm} \end{array}\right) := (-jm) \partial_{\Theta}M_{jm}(\Theta_m^{\pm}) \left(\begin{array}{c}A_{jm} \\ a_{jm} \end{array}\right) = (-jm) \left( \begin{array}{cc} -\frac12 + \frac{1}{2m} & 0 \\ \frac{b^{m}}{2m} & -\frac{b}{2} \end{array} \right) \left(\begin{array}{c}A_{jm} \\ a_{jm} \end{array}\right). \end{align*} Letting \begin{align*} v_{0}(\Theta_m^{\pm}) = \left( \begin{array}{c} \frac{b}{2m} - \frac{b}{2}(1-\Theta_m^{\pm}) \\ \frac{\Theta_m^{\pm}}{2m}b^{m} \end{array} \right), \quad w(\Theta_m^{\pm}) = \left( \begin{array}{c} -\frac{1}{2m}b^{m+1} \\ -\frac{1}{2m}b + \frac{b}{2}(1-\Theta_m^{\pm}) \end{array} \right), \end{align*} be the generators of Ker$(M_{m}(\Theta_m^{\pm}))$ and Im$(M_{m}(\Theta_m^{\pm}))$ respectively, the transversality condition is equivalent to proving that $w_1(\Theta_m^{\pm})$ and $w(\Theta_m^{\pm})$ are not parallel, where \begin{align*} w_{1}(\Theta_m^{\pm}) & = \partial_{\Theta} M_{m}(\Theta_m^{\pm}) v_{0}(\Theta_m^{\pm}) = \left( \begin{array}{c} \frac{b}{4}\left(\frac{1}{m}-1\right)\left(\frac{1}{m}-(1-\Theta_m^{\pm})\right)\\ \frac{b^{m+1}}{4m}\left(\frac{1}{m}-1\right) \end{array} \right). \end{align*} This is equivalent to proving that: \begin{align*} 0 \neq -\frac{b^{2}}{8}\left(\frac{1}{m}-1\right)\left(\frac{1}{m}-(1-\Theta_m^{\pm})\right)^{2} + \frac{b^{2m+2}}{8m^{2}}\left(\frac{1}{m}-1\right) \Leftrightarrow \left(\Theta_m^{\pm} - (1 - \frac{1+b^m}{m})\right)\left(\Theta_m^{\pm} - (1 - \frac{1-b^m}{m})\right)\neq 0.
\end{align*} We prove it by contradiction. If $\Theta_m^{\pm} = 1 - \frac{1+b^m}{m}$, we have $b^m=m(1-\Theta_m^{\pm})-1$. Moreover, by \eqref{determinantm}, we have \[ \Theta_m^{\pm}b^{2m}+b^{m+2}m+\Theta_m^{\pm}(1-m)b^m=0. \] Hence, \begin{align*} & \Theta_m^{\pm}b^{m}+b^{2}m+\Theta_m^{\pm}(1-m)=0\\ &\Rightarrow \Theta_m^{\pm}(m(1-\Theta_m^{\pm})-1)+b^{2}m+\Theta_m^{\pm}(1-m)=0\\ &\Rightarrow -m(\Theta_m^{\pm})^2+b^2m=0. \end{align*} Since $\Theta_m^{\pm}>0$ and $b>0$, we have \[ \Theta_m^{\pm}=b, \] which is a contradiction since $\Theta_m^+ \Theta_m^- = b^2$ and $\Theta_m^+ \neq \Theta_m^-$. If $\Theta_m^{\pm} = 1 - \frac{1-b^m}{m}$, the same argument again yields $\Theta_m^{\pm}=b$, a contradiction. \end{proof} \begin{proofthm}{teoremaestacionarias} All the hypotheses of the Crandall-Rabinowitz theorem were checked in Propositions~\ref{onedimensionality}, \ref{codimension_one} and \ref{transv_prop}. Therefore the desired result follows immediately. \end{proofthm} \section{Existence of non-radial stationary vortex patches with finite energy} \label{Section4} In this section, we aim to prove that there exist non-trivial patch solutions with finite kinetic energy, $\frac{1}{2\pi}\int_{\mathbb{R}^2} |\nabla \left( \omega * \log|\cdot|\right)|^2 dx < \infty$. As mentioned in \eqref{mean_energy}, this property is equivalent to $\int_{\mathbb{R}^2}\omega dx = 0$. By Remark \ref{zero_mean_bifurcation1}, we cannot use two-layer patches; instead, we will consider three-layer patches. \subsection{Main results for finite energy}\label{main_finite_1} We consider vortex patches with three layers, that is, $i\in \left\{1,2,3\right\}$ in the setting in Section~\ref{functional_setting}. The total vorticity that we consider is of the form $\omega= \sum_{i=1}^3 \Theta_i 1_{D_i}$, where $D_i$ is determined by $\partial D_i = \left\{ (b_i + R_i(x))(\cos x,\sin x) : x \in \mathbb{T} \right\}$.
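Since the domains are nested ($D_3\subset D_2\subset D_1$ near the radial configuration), it may help to unpack what this superposition means pointwise; the following display is nothing more than the definition of $\omega$ written out layer by layer:
\[
\omega = \Theta_1 \ \text{ on } D_1\setminus D_2, \qquad \omega = \Theta_1+\Theta_2 \ \text{ on } D_2\setminus D_3, \qquad \omega = \Theta_1+\Theta_2+\Theta_3 \ \text{ on } D_3.
\]
In particular, the parameters $\Theta_i$ are the jumps of the vorticity across the interfaces $\partial D_i$, not the values of $\omega$ itself on each layer.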
We will look for a bifurcation curve from the radial one, $\sum_{i=1}^3 \Theta_i 1_{B_{b_i}}$, where $B_r$ denotes the disk with radius $r$ centered at the origin. We have the following parameters and functional variables: \begin{itemize} \item $b_i \in \mathbb{R}$: the radii of the different layers of the annuli. We will have $1 =: b_1 > b_2 > b_3$. \item $\Theta_i\in \mathbb{R}$: the vorticity at the different layers. We will choose $\Theta_1 := 1$. \item $R:=(R_1,R_2,R_3) \in (X^{k,m})^3$, for some $3\le k\in \mathbb{N}$: the functional variables that determine the boundaries. \end{itemize} In the rest of this section, we will fix $m$, $b_2$ and $\Theta_2$ so that for $2 \le m \in \mathbb{N}$, \begin{align}\label{parameter_3} 0< b_2<\left(\frac{1}{2}\right)^{\frac{1}{2m}} \quad \text{ and }\quad \frac{m({b_2}^2-1)}{(1-{b_2}^{2m})b_2^2}< \Theta_2< \min\bigg\{2\frac{{b_2}^{2m-2}(b_2^2-1)m}{1-b_2^{2m}},\ \frac{-1}{b_2^2}\bigg\}. \end{align} Given $R$, $\Theta_i$, $b_1$ and $b_2$, we choose $b_3$ so that \begin{align}\label{Theta b relation} \int_{\mathbb{R}^2}\omega(x)dx = \frac{1}{2}\sum_{i=1}^3\Theta_i\int_0^{2\pi}(b_i+R_i(x))^2dx=0.
\end{align} Since $b_1$, $b_2$ and $\Theta_1$, $\Theta_2$ are fixed constants, \eqref{Theta b relation} implies that $b_3$ is a function of $\Theta_3$ and $R$; more precisely, \begin{align}\label{def_b3} b_3 &= b_3(\Theta_3,R) \nonumber \\ & = \sqrt{-\frac{1}{2\pi\Theta_3} \left( \Theta_1\int_0^{2\pi} (b_1 + R_1(x))^2dx + \Theta_2 \int_0^{2\pi} (b_2 + R_2(x))^2dx + \Theta_3\int_0^{2\pi}R_3(x)^2dx \right) }. \end{align} If $\Theta_3\ne 0$ and $b_3(\Theta_3,R)\ne 0$, then its derivative with respect to $R$ is given by \begin{align}\label{b_der_R} Db_3(\Theta_3,R)[h] & := \frac{d}{dt}b_3(\Theta_3,R+th)\bigg|_{t=0} \nonumber\\ & = -\frac{1}{2\pi \Theta_3 b_3(\Theta_3,R)}\left( \Theta_1\int_{\mathbb{T}}R_1(x)h_1(x)dx + \Theta_2\int_{\mathbb{T}}R_2(x)h_2(x)dx + \Theta_3\int_{\mathbb{T}}R_3(x)h_3(x)dx\right), \end{align} where we used $\int_\mathbb{T}h_i(x)dx = 0$ for $h_i\in X^{k,m}$. Note that for sufficiently small $\|R_i\|_{L^\infty}$ and $|\Theta_3-\Theta^*_{3,m}|$, where $\Theta^*_{3,m}$ is as defined in Lemma~\ref{kernel}, we can choose $b_3=b_3(\Theta_3,R)$ so that \eqref{Theta b relation} is compatible with $b_2>b_3>0$ (see Lemma~\ref{kernel}). Therefore, a 4-tuple $(\Theta_3,R_1,R_2,R_3)=:(\Theta_3,R)$ uniquely determines $\omega= \sum_{i=1}^3 \Theta_i 1_{D_i}$ such that the boundary of the $i$th patch surrounds the $j$th patch if $i < j$ and $\int_{\mathbb{R}^2}\omega dx = 0$. In the proof, $\Theta_3$ will play the role of the bifurcation parameter and we will look for a bifurcation from $(\Theta^*_{3,m},0)\in \mathbb{R}\times (X^{k,m})^3$. With this setting, the system \eqref{stationary_equation} is equivalent to \begin{align}\label{stationarR_equation3} 0=G(\Theta_3, R) :=(G_1(\Theta_3, R), G_2(\Theta_3, R), G_3(\Theta_3, R)), \end{align} where \begin{align}\label{stationarR_equation4} G_j(\Theta_3, R):=\mathcal{F}_j(b(\Theta_3,R),\Theta_3,R), \quad \text{ and } \quad b(\Theta_3,R):=(b_1,b_2,b_3(\Theta_3,R)), \quad j=1,2,3.
\end{align} Now, we are ready to state the main theorem of this section: \begin{theorem} \label{teoremaestacionarias2} Let $k\geq 3$ and $2 \le m\in \mathbb{N}$, $\Theta_1 = b_1= 1$ and $b_2$ and $\Theta_2$ as in \eqref{parameter_3}. Then for some $s_0=s_0(m,k,b_2,\Theta_2) > 0$, there exists a bifurcation curve $[0,s_0) \ni s\mapsto (\Theta_3(s),R(s)) \in \mathbb{R} \times (X^{k,m})^3$ such that for each $s\in (0,s_0)$, $(\Theta_3(s),R(s))$ is a solution of the equation \eqref{stationarR_equation3} and $R(s) \ne 0\in (X^{k,m})^3$. The bifurcation curve emanates from $(\Theta_3(0),R(0)) = (\Theta^*_{3,m},0)$, where $\Theta^*_{3,m}$ is defined in Lemma~\ref{kernel}. \end{theorem} Theorem~\ref{teoremaestacionarias2} immediately implies the existence of non-radial stationary vortex patches with finite kinetic energy. \begin{corollary}\label{finite_energy_solution} Let $2\le m\in \mathbb{N}$ and $k\ge 3$. Then there is an $m$-fold symmetric stationary patch solution of the 2D Euler equation with $H^k$-regular boundary and finite kinetic energy, that is, \[ \int_{\mathbb{R}^2} \left|\nabla \left( \omega * \frac{1}{2\pi}\log|\cdot| \right)\right|^2 dx < \infty. \] \end{corollary} \begin{proof} By the definition of $b_3$ in \eqref{Theta b relation}, each $\omega=\sum_{i=1}^3 \Theta_i 1_{D_i}$ which is determined by $(\Theta_3(s),R(s))$ for $s\in (0,s_0)$ satisfies $\int_{\mathbb{R}^2} \omega dx = 0$. This is equivalent to $\int_{\mathbb{R}^2} |\nabla \left( \omega * \frac{1}{2\pi}\log|\cdot| \right)|^2 dx < \infty$ (see \eqref{mean_energy}). \end{proof} The existence of the bifurcation curves will be proved by means of a Nash-Moser iteration scheme (Theorem~\ref{theorem1}). The proof of Theorem~\ref{teoremaestacionarias2} will be accomplished in Subsection~\ref{checking_subsection} by checking the hypotheses of Theorem~\ref{theorem1}.
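For the reader's convenience, we record the elementary computation behind \eqref{def_b3} and \eqref{b_der_R}; it only uses the fact that each $R_i\in X^{k,m}$ has zero mean on $\mathbb{T}$, so that
\[
\int_0^{2\pi}(b_i+R_i(x))^2\,dx = 2\pi b_i^2 + \int_0^{2\pi}R_i(x)^2\,dx, \qquad i=1,2,3.
\]
Hence \eqref{Theta b relation} is equivalent to
\[
2\pi\Theta_3\, b_3^2 = -\left(\Theta_1\int_0^{2\pi}(b_1+R_1)^2dx + \Theta_2\int_0^{2\pi}(b_2+R_2)^2dx + \Theta_3\int_0^{2\pi}R_3^2\,dx\right),
\]
which is solved by \eqref{def_b3} whenever the right-hand side is positive; differentiating this identity in the direction $h$ and dividing by $4\pi\Theta_3 b_3$ yields \eqref{b_der_R}.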
\subsection{Compactly supported velocity} In this subsection, we digress briefly to observe an interesting consequence of Theorem~\ref{teoremaestacionarias2}. Thanks to a simple maximum principle lemma, it can be shown that each stationary solution on the bifurcation curves has compactly supported velocity. \begin{lemma}(Key lemma)\label{zero mean} Assume that $\omega\in L^{1}\cap L^{\infty}(\mathbb{R}^n)$ for $n\ge 2$ is compactly supported and let $\Omega$ be the unbounded connected component of $\text{supp}(\omega)^c$. We additionally assume that $\int_{\mathbb{R}^n}\omega dx = 0$. Then for $f := \omega * \mathcal{N}$, where \begin{align*} \mathcal{N}(x) = \begin{cases} \frac{1}{2\pi}\log|x| & \text{ if }n=2 \\ \frac{1}{n(2-n)V_n}|x|^{2-n}, \quad V_n:=\frac{\pi^{\frac{n}{2}}}{\Gamma(\frac{n}{2}+1)} & \text{ if } n>2, \end{cases} \end{align*} it holds that \begin{align*} \sup_{x\in \Omega}f(x)=\max_{x\in \partial \Omega}f(x) \quad \text{and} \quad \inf_{x\in \Omega}f(x)=\min_{x\in \partial \Omega}f(x). \end{align*} Consequently, if $f$ is constant on $\partial \Omega$, then $f$ is constant in $\Omega$. \end{lemma} \begin{rem} The above lemma does not hold without the assumption $\int_{\mathbb{R}^n} \omega dx = 0$. For example, for $n=2$, $f:=1_{B_1} * \mathcal{N}$ is harmonic in $\Omega:=B_1^c$, while $f(x)=\frac{1}{2}\log|x|$ is unbounded in $\Omega$, so its supremum over $\Omega$ is not attained on $\partial \Omega$. \end{rem} \begin{proof} It suffices to prove the maximum part since we can apply the argument to $-f$ for the minimum part. The proof is classical, but we present it for the sake of completeness. Let $M:=\sup_{x\in \Omega}f(x)$. Then $A:=\left\{ x\in \Omega : f(x)=M\right\}$ is relatively closed in $\Omega$, since $f$ is continuous. Furthermore, since $f$ is harmonic in $\Omega$, the mean value property yields that $A$ is open. Therefore $A$ must be either $\emptyset$ or $\Omega$ since $\Omega$ is connected. If $A=\Omega$, then the result follows trivially. Now, let us suppose that $A=\emptyset$.
Towards a contradiction, assume that $M > \max_{x\in \partial \Omega}f(x)$. Since $f$ is bounded in $\mathbb{R}^n$ (this property still holds when $n=2$, thanks to $\int_{\mathbb{R}^2}\omega dx =0$), the only remaining possibility is that \begin{align}\label{maximum} \lim_{|x|\to \infty}f(x)=M. \end{align} Let us consider $\phi(r):=\frac{1}{|\partial B_r|}\int_{|x|=r}f(x)d\sigma(x)$. For any sufficiently large $r$ such that $\Omega^c \subset B_r$, where $B_r$ is the ball centered at the origin with radius $r$, it follows from the divergence theorem and $\Delta f = \omega$ that \begin{align*} \frac{d}{dr}\phi(r) & = \frac{d}{dr}\frac{1}{|S_{n-1}|}\int_{|x|=1}f(rx)d\sigma(x) = \frac{1}{|\partial B_r|}\int_{|x|=r}\partial_{\nu} f(x)d\sigma(x) = \frac{1}{|\partial B_r|}\int_{|x|<r} \omega (x)dx=0. \end{align*} Furthermore, \eqref{maximum} yields that \begin{align*} \lim_{r\to\infty}\phi(r)=M, \end{align*} hence we have $\phi(r)=M$ for all sufficiently large $r$. For such $r>0$, we have \begin{align*} M=\frac{1}{|\partial B_r|}\int_{|x|=r}f(x)d\sigma\le \max_{x\in \partial B_r}f(x)\le M. \end{align*} This implies that $A$ cannot be empty, which is a contradiction. Hence $M=\max_{x\in \partial \Omega}f(x)$. \end{proof} \begin{corollary}\label{corollary_1} Let $2\le m\in \mathbb{N}$ and $k\ge 3$. There exists an $m$-fold symmetric stationary patch solution of the 2D Euler equation with $H^k$-regular boundary and compactly supported velocity. \end{corollary} \begin{proof} Let $\omega = \sum_{i=1}^3\Theta_i1_{D_i}$ be the vorticity determined by $(\Theta_3(s),R(s))$ for $s\in (0,s_0)$ in Theorem~\ref{teoremaestacionarias2}. From \eqref{evolution_patch} and \eqref{stationary_equation}, it follows that its stream function $f:=\frac{1}{2\pi}\omega * \log|\cdot|$ is constant on the outermost boundary $\partial D_1$. Then Lemma~\ref{zero mean} implies that $\sup_{D_1^c} f = \inf_{D_1^c}f$; therefore, $f$ is constant in $D_1^c$. This proves that $\text{supp}(\nabla^\perp f) \subset D_1$.
\end{proof} \begin{rem}\label{smooth_app} In this paper we focus on patch type solutions $\omega=\sum_{i=1}^n\Theta_i 1_{D_i}$. Lemma~\ref{zero mean} applies to smooth $\omega$ as well, as long as the boundary of $\Omega:=\text{supp}(\omega)$ can be approximated by regular level curves of $\omega$ (see Figure~\ref{diagram2}). This is due to the fact that the stream function of a smooth stationary $\omega$ must be constant on each regular level set of $\omega$ (see \cite[Section 1]{GomezSerrano-Park-Shi-Yao:radial-symmetry-stationary-solutions}), hence the stream function has to be constant on each connected component of $\partial \Omega$. If $\partial \Omega$ has only one connected component, then the velocity vanishes in the unbounded component of $\Omega^c$. \end{rem} \begin{figure}[h!] \begin{center} \includegraphics[scale=0.9]{diagram2.pdf} \caption{Illustration for a smooth stationary $\omega$ where $\Omega:=\text{supp}(\omega)$ is colored in blue, whose boundary is not necessarily connected. The dashed lines are regular level sets of $\omega$, which converge to the outermost boundary. In this case, the stream function is constant on the outermost boundary $\partial \Omega^{out}$, thus Lemma~\ref{zero mean} implies that the velocity vanishes in the unbounded part of $\Omega^c$. \label{diagram2}} \end{center} \end{figure} \color{black} \subsection{Nash-Moser theorem} We first prove a bifurcation theorem using the Nash-Moser scheme under some assumptions, which will turn out to be satisfied by our nonlinear functional. We follow the ideas from Berti \cite{Berti:nash-moser-tutorial}. Let $2\le m\in \mathbb{N}$ be fixed. We denote \[ X^{k}:=X^{k,m}, \quad Y^k := Y^{k,m},\quad C^\infty:=C^\infty(\mathbb{T}), \quad R:=(R_1,R_2,R_3)\in \left( X^{k+1}\right)^3 \cap (C^{\infty})^3, \quad \partial_{\Theta}:=\partial_{\Theta_3},\] for simplicity.
Furthermore, for a Banach space $X$ and an element $R\in X$, we denote the norm of $R$ by $|R|_{X}:=\lVert R \rVert_{X}$. In addition, we use the notation $A\lesssim_{a,b} B$ if there exists a constant $C=C(a,b)>0$ depending on some variables $a,b$ such that $A\le CB$. We also use $c_0,c_1,\ldots$ to denote universal constants that may vary from line to line. \begin{theorem}\label{theorem1} Assume that there exist $\Theta_3^* \in \mathbb{R}$ and an open neighborhood $I\times V^{3}$ of $(\Theta^*_3,0)\in \mathbb{R}\times \left( X^{3}\right)^3$ such that for each $2 \le k \in \mathbb{N}$, $G : I \times \left(X^{k+1} \right)^3 \to \left(Y^{k}\right)^3$ satisfies the following: For $(\Theta_3,R)\in I\times \left(V^3 \cap (C^\infty)^3\right)$, \begin{itemize} \item[(a)] (Existence of a curve of trivial solutions) $G(\Theta_3,0) = 0$ for all $\Theta_3 \in I$. \item[(b)] (Regularity) It holds that \begin{align} &|G(\Theta_3,R)|_{(Y^{k})^3} \lesssim_{k} 1 + |R|_{(X^{k+1})^3}, \label{lineargrowth1}\\ &\left|\partial_\Theta G(\Theta_3,R)\right|_{(Y^{k})^3} \lesssim_{k} 1+\left| R \right|_{(X^{k+1})^3},\label{lineargrowth2}\\ &| D^2 G(\Theta_3,R)[h,h]|_{(Y^{k})^3} \lesssim_{k} (1+| R |_{(X^{k+3})^3})|h|_{(X^{k+1})^3}^2,\label{D2G}\\ &| \partial_\Theta D G(\Theta_3,R)[h]|_{(Y^{k})^3} \lesssim_{k} (1+| R |_{(X^{k+3})^3})| h |_{(X^{k+1})^3},\label{dtDG}\\ &|\partial_\Theta D^2G(\Theta^*_3, R)[h,h]|_{(Y^{k})^3} \lesssim_{k} (1+| R |_{(X^{k+3})^3})| h |_{(X^{k+1})^3}^2,\label{dtDG2}\\ &| \partial_{\Theta\Theta} D G(\Theta_3,R)[h]|_{(Y^{k})^3} \lesssim_{k} (1+| R |_{(X^{k+3})^3})| h |_{(X^{k+1})^3}.\label{dttDG} \end{align} \item[(c)] (Decomposition of $DG$) $DG(\Theta_3,R)\in \mathcal{L}((X^{k+1})^3;(Y^k)^3)$ has the following decomposition: \begin{align*} DG(\Theta_3,R)[h] = a(\Theta_3,R)[h] + A(\Theta_3,R)[h], \end{align*} such that $a(\Theta_3,R)\in \mathcal{L}((X^{k+1})^3,(Y^{k})^3)$, $A(\Theta_3,R) \in \mathcal{L}((X^{k+1})^3,Y^{k+1}\times (Y^{k})^2)$.
Also, there exists $0 < \eta <1$ such that if $\lVert R \rVert_{(H^{k+2})^3}\le \eta$, then \begin{align} &{|a(\Theta_3,R)[h]|_{(Y^{k})^3} \lesssim_{k} |G(\Theta_3,R)|_{(Y^{k})^3}|h|_{(X^{k+1})^3} }\label{approxinverse1}, \\ &|A(\Theta_3,R)[h]|_{Y^{k+1}\times (Y^{k})^2} \lesssim_{k} (1+| R |_{(X^{k+3})^3}) |h|_{(X^{k+1})^{3}}.\label{approxinverse2} \end{align} \item[($\tilde{c}$-1)] $A:\mathbb{R}\times (X^{k+3})^3\to \mathcal{L}((X^{k+1})^3,Y^{k+1}\times (Y^{k})^2)$ is Lipschitz continuous. That is, if $$(\Theta_3^1,R^1),\ (\Theta_3^2,R^2)\in I\times (V^3\cap (C^\infty)^3),$$ and $\lVert R^1\rVert_{(H^{k+3}(\mathbb{T}))^3}, \lVert R^2 \rVert_{(H^{k+3}(\mathbb{T}))^3} \le 1$, it holds that \begin{align}\label{NM_lip} |A(\Theta_3^1,R^1)[h] - A(\Theta_3^2,R^2)[h]|_{(Y^{k})^3} \lesssim_k \left( |\Theta_3^1 - \Theta_3^2| + |R^1-R^2|_{(X^{k+3})^3}\right) |h|_{(X^{k+1})^3}. \end{align} \item[($\tilde{c}$-2)] (Tame estimates) There exists $0 < \eta < 1$ such that if $\lVert R \rVert_{(H^{k+3}(\mathbb{T}))^3} \le \eta$, and $A(\Theta_3,R)[h] =z$ for some $z\in (C^\infty)^3$ and $h \in \text{Ker}(A(\Theta^*_3,0))^{\perp}$, then $h\in (C^\infty)^3$. Furthermore, for any even $\sigma \in \mathbb{N}\cup \left\{ 0 \right\}$, it holds that \begin{align}\label{tame1} \left| h \right|_{(X^{k+1+\sigma})^3} \lesssim_{k,\sigma} (1 + \left| R \right|_{(X^{k+4+\sigma})^3} )\left| z \right|_{Y^{k+1}\times (Y^{k})^{2}} + \left| z \right|_{Y^{k+1+\sigma}\times (Y^{k+\sigma})^{2}}. \end{align} \item[(d)] (Fredholm index zero) There exist non-zero vectors $ v$ and $w$ such that $v$ and $w$ are supported on the $m$-th Fourier mode and \begin{align*} \text{Ker}(A(\Theta_{3}^{*},0)) = \text{span}\left\{ v\right\}, \quad \text{Im}(A(\Theta_{3}^{*},0))^{\perp} = \text{span}\left\{w\right\}. \end{align*} \item[(e)] (Transversality) $\partial_{\Theta} A(\Theta_{3}^{*},0)[v] \notin \text{Im}(A(\Theta_{3}^{*},0))$.
\end{itemize} Then, for any $k_0\ge 2$, there exist a constant $s_0=s_0(k_0)>0$ and a curve $[0,s_0) \ni s\mapsto (\Theta_3(s),R(s))\in I\times (X^{k_0+1})^3$ such that $G(\Theta_3(s),R(s))=0$ and $R(s)\ne 0$ for $s>0$. The curve emanates from $(\Theta_3(0),R(0)) = (\Theta^*_3,0)$. \end{theorem} \begin{rem} Note that the evenness of $\sigma$ for the tame estimate in $(\tilde{c}-2)$ is simply because $X^k$ is a space of even functions and any odd order derivative of $h\in X^k$ is an odd function. \end{rem} The rest of this section is devoted to proving Theorem~\ref{theorem1}. Towards the proof, let $k_0 \ge 2$ be fixed. We define the projections $P:(X^{k_0+1})^3 \to \text{Ker}(A(\Theta_3^*, 0))$ and $Q: (Y^{k_0})^3\to \text{Im}(A(\Theta_{3}^{*}, 0))^{\perp}$ by \begin{align}\label{projections} PR:=\left( v\cdot R\right) v \quad \text{ and } \quad QR:=(w\cdot R)w, \end{align} where $(f\cdot g)$ denotes the usual $L^2$ inner product. In view of the assumptions $(d)$ and $(e)$, we make the ansatz that for sufficiently small $s>0$, the bifurcation curve $s\mapsto(\Theta_3(s),R(s))$ can be written as \[ (\Theta_3(s),R(s)) = \left(\Theta^*_3 + v\cdot \tilde{R}(s), sv+(I-P)\tilde{R}(s) \right), \] for some $\tilde{R}(s) \in (X^{k_0+1})^3$ such that $|(I-P)\tilde{R}(s)|_{(X^{k_0+1})^3} = o(s)$. From this ansatz, we define a family of functionals $\tilde{G_s}: (X^{k_0+1})^3 \to (Y^{k_0})^3$ by \begin{align}\label{def_tilde_g} \tilde{G}_s(R) := G(\Theta^*_3 + v\cdot R, sv + (I-P)R), \end{align} and look for $R\in (X^{k_0+1})^3$ such that $\tilde{G}_s(R) = 0$ and $|(I-P)R|_{(X^{k_0+1})^3} = o(s)$ for sufficiently small $s>0$. This will be achieved by Newton's method, where the first approximate solution is $R=0$.
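Before setting up the iteration, we record a simple consequence of the assumptions that motivates this choice of starting point; it is a straightforward Taylor expansion and is included for orientation only. By (a) we have $G(\Theta_3,0)=0$ for all $\Theta_3\in I$, and \eqref{approxinverse1} then forces $a(\Theta_3,0)=0$, so that $DG(\Theta^*_3,0)=A(\Theta^*_3,0)$. Hence, expanding $\tilde{G}_s(0)=G(\Theta^*_3,sv)$ around $R=0$ and using \eqref{D2G},
\[
\tilde{G}_s(0) = G(\Theta^*_3,0) + s\,A(\Theta^*_3,0)[v] + O(s^2) = O(s^2) \quad \text{in } (Y^{k_0})^3,
\]
since $v\in \text{Ker}(A(\Theta^*_3,0))$. Thus the initial approximate solution $R=0$ already solves the equation up to an error which is quadratically small in $s$.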
To perform Newton's method, we need to study the linearized operator $D\tilde{G}_s$ at each approximate solution $R$. For $h\in (X^{k_0+1})^3$, a direct computation from \eqref{def_tilde_g} gives \begin{align} D\tilde{G}_s(R)[h]& := D_R\tilde{G}_s(R)[h]\nonumber\\ & = \partial_{\Theta}G(\Theta_{3}^{*}+v\cdot R,sv+(I-P)R)(v\cdot h) + DG(\Theta_{3}^{*}+v\cdot R, sv+(I-P)R)[(I-P)h]\nonumber\\ & =: T_s(R)[h] + a(\Theta_{3}^{*}+v\cdot R, sv+(I-P)R)[(I-P)h],\label{dGanda} \end{align} where \begin{align}\label{T_sdef} T_s(R)[h] := \partial_{\Theta}G(\Theta_{3}^{*}+v\cdot R,sv+(I-P)R)(v\cdot h) + A(\Theta_{3}^{*}+v\cdot R, sv+(I-P)R)[(I-P)h]. \end{align} The above decomposition of $D\tilde{G}_s$ into $T_s + a$ follows from the assumption $(c)$. Recall that Newton's method relies on the \quotes{invertibility} of $T_s(R)$ for small $s>0$. However, as we will see in the next lemma, $T_s$ is not fully invertible between $(X^{k_0+1})^3$ and $(Y^{k_0})^3$ because of its loss of derivatives; this motivates the use of a Nash-Moser scheme in our proof. On the other hand, \eqref{approxinverse1} suggests that the inverse of $T_s$ is a good approximate right inverse of $D\tilde{G}_s$. In the next subsection, we will focus on the properties of $T_s(R)$. \subsubsection{Analysis of ${T}_s$}\label{Analysis_T} We will look for a solution $R$ to $\tilde{G}_s (R) = 0$ in $(X^{k_0+1})^3$ for small $s>0$. In each step of Newton's method (Nash-Moser iteration), we will regularize the approximate solution $R_n$ (see \eqref{definition_of_approx_sol}). The theorem will be achieved by proving that $R_n$ converges in $(X^{k_0+1})^3$. However, we will also need boundedness of the sequence $R_n$ in higher norms, which is necessary because of the extra regularity conditions in (c), $(\tilde{c}-1)$ and $(\tilde{c}-2)$.
For this reason, we will establish several lemmas assuming that an approximate solution $R$ is more regular than $(X^{k_0+1})^3$, which will turn out to be true at the end of the proof. \begin{lemma}\label{approxinv} Let $0 < \epsilon < 1 $ and $2\le k_0\in \mathbb{N}$ be fixed. There exist positive constants $s_0(\epsilon,k_0),\ c_0(\epsilon,k_0)$ and $0<\delta(\epsilon,k_0)<1$ such that for each $0< s < s_0$, the following holds: If \begin{align} &|PR|_{(X^{k_0+2})^3} \leq s^{\epsilon},\label{assumptionfory1}\\ &|(I-P)R|_{(X^{k_0+2})^3}\leq s^{1+\epsilon},\label{assumptionfory2}\\ &|R|_{(H^{k_0+4})^3}\leq \delta,\label{assumptionfory3} \end{align} then \begin{itemize} \item[(A)] For all $t\in[0,1]$, \begin{align}\label{containedinV} (\Theta^*_3 + v\cdot R, t(sv + (I-P)R))\in I\times V^3, \end{align} where $V^{3}$ is as in Theorem \ref{theorem1}. Therefore, \begin{align*} T_s(R)[h] := \partial_{\Theta}G(\Theta_{3}^{*}+v\cdot R,sv+(I-P)R)(v\cdot h) + A(\Theta_{3}^{*}+v\cdot R, sv+(I-P)R)[(I-P)h], \end{align*} is well-defined. \item[(B)] $T_s(R):(X^{k_0+1})^3\to Y^{k_0+1}\times(Y^{k_0})^2$ is an isomorphism and \begin{align} &|T_s(R)[h]|_{Y^{k_0+1}\times(Y^{k_0})^2} \le c_0 |h|_{(X^{k_0+1})^3},\label{Tbound}\\ &|T_s(R)^{-1}[z]|_{(X^{k_0+1})^3} \le \frac{c_0}{s}|z|_{Y^{k_0+1}\times(Y^{k_0})^2},\label{Tinvbound0}\\ &\left|\left(D\tilde{G}_s(R)\circ T_s(R)^{-1} - I\right)[z]\right|_{(Y^{k_0})^3} \le c_0 |\tilde{G}_s(R)|_{(Y^{k_0})^3}|T_s(R)^{-1}[z]|_{(X^{k_0+1})^3},\label{Approximate_inv} \end{align} for some $c_0=c_0(\epsilon,k_0)>0$. If $R\in C^{\infty}$, then for any even $\sigma\in \mathbb{N}\cup \left\{ 0 \right\}$ and $z\in Y^{k_0+1+\sigma}\times (Y^{k_0+\sigma})^{2}$, we have $T_s(R)^{-1}[z] \in \left( X^{k_0+1+\sigma}\right)^3$.
Also, we have that \begin{align}\label{highernorm_inversion} \left| T_s(R)^{-1}[z] \right|_{\left( X^{k_0+1+\sigma}\right)^3} \le \frac{c_0}{s} \left( (1 + |R|_{(X^{k_0+4+\sigma})^3})|z|_{Y^{k_0+1}\times (Y^{k_0})^{2}} + |z|_{Y^{k_0+1+\sigma}\times (Y^{k_0+\sigma})^{2}}\right), \end{align} where $c_0$ may depend not only on $\epsilon$ and $k_0$ but also on $\sigma$. \item[(C)] Furthermore, we can choose $c_0$ so that when $R=0$, the following hold: \begin{align} |P(T_s(0)^{-1}[z])|_{(X^{k_0+1})^{3}} \le \frac{c_0}{s}|z|_{Y^{k_0+1}\times(Y^{k_0})^{2}},\label{Tinvbound1}\\ |(I-P)(T_s(0)^{-1}[z])|_{(X^{k_0+1})^{3}} \le c_0 |z|_{Y^{k_0+1}\times(Y^{k_0})^{2}}.\label{Tinvbound2} \end{align} \end{itemize}\end{lemma} We will frequently use \begin{align}\label{crudebound} |T_s(R)^{-1}z|_{\left(X^{k_0+1}\right)^3} \le \frac{c_0}{s}|z|_{(Y^{k_0+1})^3}, \end{align} which is a cruder bound than \eqref{Tinvbound0}. \begin{proof} Let us fix $\epsilon>0$ and $k_0\ge2$. We will show that if \eqref{assumptionfory1}-\eqref{assumptionfory3} hold for sufficiently small $s>0$ and for some small $\delta>0$ depending on $\epsilon$, then (A), (B) and (C) hold for some $c_0>0$. \textbf{Proof of (A).} To see (A), notice that \eqref{containedinV} is trivial. From the assumptions (b) and (c) in Theorem~\ref{theorem1}, the linear operator $T_s(R)$ is well-defined. \textbf{Proof of (B).} The estimate \eqref{Tbound} follows from \eqref{T_sdef}, \eqref{approxinverse2}, \eqref{lineargrowth2} and \[ | sv + (I-P)R |_{\left(H^{k_0+3}\right)^3} \lesssim 1 + \delta \lesssim 1, \] where the last inequality follows from \eqref{assumptionfory3}. Before proving \eqref{Tinvbound0}, we first prove that \begin{align}\label{novanishingQ} |Q\partial_\Theta G(\Theta_{3}^{*}+v\cdot R, sv+(I-P)R) |_{Y^{k_0+1}\times (Y^{k_0})^2}\ge cs, \end{align} for some $c=c(\epsilon)>0$. The above inequality is a consequence of the transversality condition $(e)$ in Theorem~\ref{theorem1}.
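Before giving the detailed estimates, let us sketch the mechanism behind \eqref{novanishingQ}; the following is only a heuristic, which the computation below makes rigorous. Since $a(\cdot,0)\equiv 0$ by (a) and \eqref{approxinverse1}, we have $\partial_\Theta DG(\Theta^*_3,0)=\partial_\Theta A(\Theta^*_3,0)$, and to leading order in $s$,
\[
Q\,\partial_\Theta G(\Theta_{3}^{*}+v\cdot R,\, sv+(I-P)R) = s\,Q\,\partial_\Theta DG(\Theta_{3}^{*},0)[v] + o(s),
\]
where $Q\partial_\Theta DG(\Theta^*_3,0)[v] = Q\partial_\Theta A(\Theta^*_3,0)[v] \neq 0$ precisely by the transversality assumption (e).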
In what follows, $c,c_1,c_2,\ldots$ denote positive constants that may change from line to line and depend only on $\epsilon$ but not on $s$. By the regularity assumptions on $G$ in (b) in Theorem~\ref{theorem1}, we estimate the quantity in \eqref{novanishingQ} using a Taylor expansion up to linear order: For a fixed $R$, we set $f(s,y):=Q\partial_\Theta G(\Theta_{3}^{*}+v\cdot R, sv+(I-P)y)$, so that \begin{align}\label{fisQG} f(s,R) = Q\partial_\Theta G(\Theta_{3}^{*}+v\cdot R, sv+(I-P)R). \end{align} Using the fundamental theorem of calculus, we can write \begin{align*} f(s,R) &= \int_0^{1}\frac{d}{dt}\left( f(s,tR)\right) dt + f(s,0)\\ & = \int_0^{1}\frac{d}{dt}\left( f(s,tR)\right) dt + f(0,0) + \int_0^{s}\frac{d}{dt}f(t,0)dt. \end{align*} In terms of $G$, \eqref{fisQG} and the above equality give us that \begin{align}\label{QGestimate} |Q\partial_\Theta G(\Theta_{3}^{*}&+v\cdot R, sv+(I-P)R) |_{Y^{k_0+1}\times (Y^{k_0})^2} =|f(s,R)|_{Y^{k_0+1}\times (Y^{k_0})^2}\nonumber\\ & \ge -{\sup_{0\leq t\leq 1}\|Q\partial_\Theta DG(\Theta_{3}^{*}+v\cdot R, sv+t((I-P)R))\|_{\mathcal{L}((X^{k_0+2})^3,(Y^{k_0+1})^3)}|(I-P)R|_{\left(X^{k_0+2}\right)^3}} \nonumber\\ &\ - {|Q\partial_\Theta G(\Theta_{3}^{*}+v\cdot R,0)|_{Y^{k_0+1}\times (Y^{k_0})^2}}\nonumber\\ & \ +{\bigg|\int_{0}^{s} Q\partial_\Theta DG(\Theta_{3}^{*}+v\cdot R, tv)[v] dt \bigg|_{Y^{k_0+1}\times (Y^{k_0})^2}}\nonumber\\ & =: -L_1 - L_2 + L_3, \end{align} where the first term after the inequality follows from (b) in Theorem~\ref{theorem1} and \begin{align*} \int_0^1 \left| \frac{d}{dt}\left( f(s,tR)\right) \right|_{Y^{k_0+1}\times (Y^{k_0})^2} dt & \le \int_0^1 \left| \frac{d}{dt}\left( f(s,tR)\right)\right|_{(Y^{k_0+1})^3} dt \\ & \le \sup_{0\le t \le 1}\left| D_Rf(s,tR)[R] \right|_{(Y^{k_0+1})^3}, \end{align*} and the rest follows simply from the triangle inequality.
Then the regularity assumption (b) in Theorem~\ref{theorem1} yields that (recall that $\lVert R \rVert_{(X^{k_0+4})^3}<\delta < 1$) \begin{align}\label{l1} L_1 \le c_1 |(I-P)R|_{\left(X^{k_0+2}\right)^3} \le c_1 s^{1+\epsilon}, \end{align} where the last inequality follows from \eqref{assumptionfory2}. For $L_2$, we use (a) in Theorem~\ref{theorem1} to obtain \begin{align}\label{l2} L_2 = 0. \end{align} To estimate $L_3$, we compute \begin{align}\label{L_3estimate112} L_3 &\ge \bigg| \int_{0}^{s}Q\partial_{\Theta} DG(\Theta_{3}^{*},tv)[v]dt\bigg|_{Y^{k_0+1}\times (Y^{k_0})^2} - \int_{0}^{s}\sup_{u\in(0,|v\cdot R|)}\bigg|Q\partial_{\Theta\Theta}DG(\Theta_{3}^{*}+u,tv)[v]\bigg|_{Y^{k_0+1}\times (Y^{k_0})^2}|v\cdot R|dt. \end{align} For the first integral, we can write \begin{align*} Q\partial_\Theta DG(\Theta^*_3,tv)[v] &= Q\partial_\Theta DG(\Theta^*_3,0)[v] + \int_0^{t}Q\partial_\Theta D^2G(\Theta^*_3, uv)[v,v]du\\ & = Q\partial_\Theta A(\Theta^*_3,0)[v] + \int_0^{t}Q\partial_\Theta D^2G(\Theta^*_3, uv)[v,v]du, \end{align*} where the last equality follows from (a) and (c) in Theorem~\ref{theorem1}, which show that $a(\Theta_3,0)=0$ for all $\Theta_3\in I$ and hence $\partial_\Theta DG(\Theta^*_3,0)=\partial_\Theta A(\Theta^*_3,0)$. For the integral term, using \eqref{dtDG2}, we have \[ \left| Q\partial_\Theta D^2 G(\Theta^*_3,uv)[v,v]\right|_{Y^{k_0+1}\times (Y^{k_0})^2} \lesssim \left| Q\partial_\Theta D^2 G(\Theta^*_3,uv)[v,v]\right|_{(Y^{k_0+1})^3} \lesssim (1+\lVert uv \rVert_{(X^{k_0+4})^3})\lVert v \rVert_{(X^{k_0+2})^3}^2 \le c, \] where the last inequality holds since $u\in(0,s)$ and $v$ is a fixed smooth function. Therefore \begin{align*} \bigg| \int_{0}^{s}Q\partial_{\Theta} DG(\Theta_{3}^{*},tv)[v]dt\bigg|_{Y^{k_0+1}\times (Y^{k_0})^2} &\ge s \left| Q\partial_\Theta A(\Theta^*_3,0)[v] \right|_{Y^{k_0+1}\times (Y^{k_0})^2} - \int_0^{s}\int_0^t c\, dudt \\ & \ge s \left| Q\partial_\Theta DG(\Theta^*_3,0)[v] \right|_{Y^{k_0+1}\times (Y^{k_0})^2} - c_2 s^2 \\ & \ge c_3 s - c_2 s^2, \end{align*} where the last inequality follows from the transversality (e).
For the second integral in \eqref{L_3estimate112}, we have that for sufficiently small $s>0$, \[ \sup_{u\in (0,|v\cdot R|),t\in (0,s)}\left| \partial_{\Theta\Theta}DG(\Theta^*_3 + u,tv)[v] \right|_{Y^{k_0+1}\times (Y^{k_0})^2} \lesssim \sup_{t\in (0,s)}(1+| tv |_{(X^{k_0+4})^3})| v |_{(X^{k_0+2})^3}\le c_1, \] which follows from \eqref{dttDG}. Hence, using \eqref{assumptionfory1}, we obtain from \eqref{L_3estimate112} that \begin{align}\label{l3} L_3 \ge c_3 s - c_2s^2 - c_1 s^{1+\epsilon}. \end{align} Thus the claim \eqref{novanishingQ} follows from \eqref{QGestimate}, \eqref{l1}, \eqref{l2} and \eqref{l3} for small $s$, depending on $\epsilon$. Towards the proof of \eqref{Tinvbound0}, we will consider how to invert $A(\Theta_3^*+v\cdot R,sv+(I-P)R)[(I-P)\,\cdot\,]$. Since $\text{Im}(DG(\Theta_3^*,0))^{\perp}$ is one-dimensional and $w$ is the basis of $\text{Im}(DG(\Theta_{3}^{*},0))^{\perp}$, the above claim \eqref{novanishingQ} proves that \begin{align}\label{novanishingQ2} \left| w\cdot \partial_\Theta G(\Theta_{3}^{*}+v\cdot R, sv+(I-P)R) \right| > c s. \end{align} To simplify the notation, we denote \begin{align} &\tilde{A}[h] := A(\Theta_{3}^{*}+v\cdot R, sv+(I-P)R)[(I-P)h],\label{Atilde_def1}\\ &\partial_\Theta G:=\partial_{\Theta}G(\Theta_3^*+v\cdot R,sv+(I-P)R).\nonumber \end{align} We denote by $Q_1$ the projection from $Y^{k_0+1}\times \left( Y^{k_0}\right)^2$ into $\text{Im}(\tilde{A})^\perp$. Since the norm of $R$ is small, we expect that the functional structure of $\tilde{A}$ should be similar to the structure of $A(\Theta^*_3,0)$.
Indeed, by the continuity hypotheses \eqref{NM_lip}, \eqref{assumptionfory3} and Lemma~\ref{functional_stability}, where we can think of $\tilde{A}$ as a linear map defined on $\text{Ker}\left(A(\Theta^*_3,0)\right)^{\perp}$, we can choose $\delta$ small enough so that if \eqref{assumptionfory3} holds, then there exists $ 0\ne w_1\in Y^{k_0+1}\times (Y^{k_0})^2$ such that \begin{align} &\text{Im}(\tilde{A})^{\perp}=\text{span}\left\{w_1\right\}, \label{Atilde1}\\ &|w - w_1|_{Y^{k_0+1}\times (Y^{k_0})^2} \le c\delta, \label{Atilde_2} \\ &|\tilde{A}|_{\mathcal{L}(\text{Ker}\left(A(\Theta_{3}^{*},0)\right)^{\perp}, \text{Im}(\tilde{A}))} \le c, \label{Atilde_3} \\ &|\tilde{A}^{-1}|_{\mathcal{L}( \text{Im}(\tilde{A}),\text{Ker}\left(A(\Theta_{3}^{*},0)\right)^{\perp})} \le c.\label{Atilde_4} \end{align} Now, we claim that we can further restrict $\delta$ if necessary so that \begin{align}\label{claimforw1} \left| w_1\cdot \partial_\Theta G \right| > c s. \end{align} In fact, note that \eqref{assumptionfory1}-\eqref{assumptionfory2} imply that $R\in (X^{k_0+2})^3$, thus by \eqref{lineargrowth2}, we have $\partial_\Theta G\in (Y^{k_0+1})^3\subset Y^{k_0+1}\times (Y^{k_0})^2$. Also we have \begin{align*} \partial_\Theta G &= \int_0^{1} \partial_t\left( \partial_\Theta G(\Theta^*_3+v\cdot R, t(sv + (I-P)R))\right)dt \\ & = \int_0^{1}\partial_\Theta DG(\Theta^*_3+v\cdot R,t(sv+(I-P)R))[sv+(I-P)R]dt, \end{align*} thus \begin{align}\label{claimforw2} \left| \partial_\Theta G\right|_{Y^{k_0+1}\times (Y^{k_0})^2} \le \left| \partial_\Theta G\right|_{(Y^{k_0+1})^3} \le c |sv+(I-P)R|_{(X^{k_0+2})^3} \le cs, \end{align} where the second inequality follows from \eqref{lineargrowth2} and the last inequality follows from \eqref{assumptionfory2}. 
Then it follows that (recall that $f\cdot g$ denotes the dot product in the $L^2$ space) \begin{align*} |\partial_\Theta G\cdot w_1| & \geq |\partial_{\Theta}G\cdot w| -| \partial_{\Theta}G \cdot (w_1 - w)| \\ &\ge c_1 s - |\partial_{\Theta}G\cdot (w_1-w)|\\ &\ge c_1 s -cs|w-w_1|_{Y^{k_0+1}\times (Y^{k_0})^2}\\ &\ge c_1 s - c\delta s, \end{align*} where we used \eqref{novanishingQ2}, \eqref{claimforw2} and \eqref{Atilde_2} for the second, third and fourth inequalities, respectively. Hence, assuming $\delta$ is small, we have \eqref{claimforw1}. To prove \eqref{Tinvbound0}, pick an arbitrary $z\in Y^{k_0+1}\times(Y^{k_0})^{2}$. There exists a unique $\eta\in \text{Ker}\left(A(\Theta_{3}^{*},0)\right)$ and a unique $h\in \text{Ker}\left(A(\Theta_{3}^{*},0)\right)^{\perp}$ such that \begin{align} & (v\cdot \eta) Q_1\partial_\Theta G = Q_1z \label{uniqueeta}\\ &(v\cdot \eta)(I-Q_1)\partial_\Theta G + \tilde{A}[h] = (I-Q_1)z.\label{uniqueh} \end{align} In fact, there exists a unique $\eta$ in \eqref{uniqueeta} thanks to \eqref{claimforw1} and the fact that $Q_1$ is the projection onto the one-dimensional space spanned by $w_1$. Once $\eta$ is fixed, the existence and uniqueness of $h$ in \eqref{uniqueh} follow from \eqref{Atilde_4}. Once $\eta$ and $h$ are determined, it is clear that \begin{align}\label{tsinverse} T_s(R)[h+\eta] = (v\cdot (h+\eta))\partial_\Theta G + \tilde{A}[\eta+h] = (v\cdot \eta)\partial_\Theta G + \tilde{A}[h] = z, \end{align} where we used $v\cdot h = 0$ and $\tilde{A}[\eta] = \tilde{A}[(I-P)\eta] = 0$, which follows from the definition of $\tilde{A}$ in \eqref{Atilde_def1}. Therefore $T_s(R)^{-1}z = h+\eta$. Furthermore we have from \eqref{uniqueeta} and \eqref{uniqueh} that \begin{align} & |\eta|_{(X^{k_0+1})^3} \le c \frac{|z\cdot w_1|}{|\partial_{\Theta}G \cdot w_1|}\label{bound1}\\ & |h|_{(X^{k_0+1})^3} \le |\tilde{A}^{-1}(I-Q_1)\partial_{\Theta}G (v\cdot \eta)|_{{(X^{k_0+1})^3}} +| \tilde{A}^{-1}(I-Q_1)z|_{(X^{k_0+1})^3}\label{bound2}. 
\end{align} Using \eqref{claimforw1} and \eqref{Atilde_4}, we obtain \begin{align} &|\eta|_{(X^{k_0+1})^3} \le c \frac{|z|_{ Y^{k_0+1}\times(Y^{k_0})^{2}}}{s},\label{bound_3}\\ &|h|_{(X^{k_0+1})^3} \le c |\partial_{\Theta}G|_{Y^{k_0+1}\times (Y^{k_0})^2}\frac{|z|_{ Y^{k_0+1}\times(Y^{k_0})^{2}}}{s} +c |z|_{ Y^{k_0+1}\times(Y^{k_0})^{2}} \le c \frac{|z|_{ Y^{k_0+1}\times(Y^{k_0})^{2}}}{s},\label{bound3} \end{align} where we used \eqref{claimforw2}. This proves \eqref{Tinvbound0}. To show \eqref{Approximate_inv}, we compute \begin{align*} \left(D\tilde{G}_s(R)\circ T_s(R)^{-1}-I\right)[z] = a(\Theta_3^{*}+v\cdot R,sv+(I-P)R)\circ T_s(R)^{-1}[z], \end{align*} which follows from \eqref{dGanda}. This implies \begin{align*} |\left(D\tilde{G}_s(R)\circ T_s(R)^{-1}-I\right)[z]|_{\left(X^{k_0}\right)^3} & \le c|a(\Theta_3^{*}+v\cdot R,sv+(I-P)R)\circ T_s(R)^{-1}[z]|_{\left(X^{k_0}\right)^3}\\ &\le c |G(\Theta_3^{*}+v\cdot R, sv+(I-P)R)|_{\left(Y^{k_0}\right)^3}|T_s(R)^{-1}[z]|_{\left(X^{k_0+1}\right)^3}\\ & \le c |\tilde{G}_s(R)|_{\left(Y^{k_0}\right)^3}|T_s(R)^{-1}[z]|_{\left(X^{k_0+1}\right)^3}, \end{align*} where we used \eqref{approxinverse1} to get the second inequality and the definition of $\tilde{G}_s(R)$ to obtain the last inequality. In order to prove \eqref{highernorm_inversion}, we improve the estimates in \eqref{bound1} and \eqref{bound2}. For \eqref{bound1}, recall that $\eta$ lies in the one-dimensional space $\text{span}\left\{ v \right\}$ and that $v$ is supported on the $m$-th Fourier mode. Thus, \begin{align}\label{higher_1} |\eta|_{(X^{k_0+1 + \sigma})^3} \le c |\eta|_{(X^{k_0+1})^3} \le c \frac{|z|_{ Y^{k_0+1}\times(Y^{k_0})^{2}}}{s}, \end{align} where $c$ may depend on $\sigma$. 
Furthermore, ($\tilde{c}$-2) in Theorem~\ref{theorem1} and \eqref{uniqueh} give us that \begin{align} |h|_{(X^{k_0+1+\sigma})^3} & \le c (1+ \left| R \right|_{(X^{k_0+4+\sigma})^3}) \left( \left| (v\cdot \eta)\partial_\Theta G \right|_{Y^{k_0+1}\times (Y^{k_0})^2} + \left| z \right|_{Y^{k_0+1}\times (Y^{k_0})^2} \right) \nonumber \\ & \ + c\left( \left| (v\cdot \eta) \partial_\Theta G\right|_{Y^{k_0+1+\sigma}\times (Y^{k_0+\sigma})^2} + \left| z \right|_{Y^{k_0+1+\sigma}\times (Y^{k_0+\sigma})^2} \right). \label{higher_estimate} \end{align} Using \eqref{claimforw2} and \eqref{bound_3}, we have \begin{align*} \left| (v\cdot \eta)\partial_\Theta G \right|_{Y^{k_0+1}\times (Y^{k_0})^2} \le \left| \partial_\Theta G \right|_{{Y^{k_0+1}\times (Y^{k_0})^2}}|v\cdot \eta| \le c\left| z \right|_{Y^{k_0+1}\times (Y^{k_0})^2}. \end{align*} For the higher norm of $\partial_\Theta G$, we use \eqref{lineargrowth2} and \eqref{bound_3} and obtain \begin{align*} \left |(v\cdot \eta)\partial_\Theta G\right|_{Y^{k_0+1+\sigma}\times (Y^{k_0+\sigma})^2} &\le c \left| \partial_\Theta G \right|_{(Y^{k_0+1+\sigma})^3}|v\cdot \eta| \\ & \le \frac{c}{s}\left( 1 + \left| R \right|_{(X^{k_0+2+\sigma})^3}\right)\left| z \right|_{Y^{k_0+1}\times (Y^{k_0})^2}. \end{align*} Therefore, \eqref{higher_estimate} gives us \begin{align*} |h|_{(X^{k_0+1+\sigma})^3} \le \frac{c}{s}\left( (1 + \left| R \right|_{(X^{k_0+4+\sigma})^3} )\left| z \right|_{Y^{k_0+1}\times (Y^{k_0})^2} + \left| z \right|_{Y^{k_0+1+\sigma}\times (Y^{k_0+\sigma})^2 } \right). \end{align*} Combining with \eqref{higher_1}, we obtain \eqref{highernorm_inversion}. \textbf{Proof of (C).} Finally, if $R=0$, then we have $|\partial_\Theta G|_{Y^{k_0+1}\times (Y^{k_0})^2} = |\partial_\Theta G(\Theta_3^*,sv)|_{{Y^{k_0+1}\times (Y^{k_0})^2}}\le c s$. Therefore we can improve \eqref{bound3} and obtain \begin{align*} |h|_{(X^{k_0+1})^3} \le c |z|_{ Y^{k_0+1}\times(Y^{k_0})^{2}}, \end{align*} which implies \eqref{Tinvbound2}. 
\eqref{Tinvbound0} follows trivially from \eqref{Tinvbound1}. This completes the proof. \end{proof} \begin{lemma}\label{conditionsforH} Let $\tilde{G}_s$ be as in \eqref{def_tilde_g}. Then there exists an open set $V^3$ near $0\in (X^3)^3$ such that for all $R,\tilde{R}\in V^3$ and $k\ge 2$, \begin{itemize} \item[(a')] (Initial value) $|\tilde{G}_s(0)|_{(Y^{k})^3} \lesssim_k s^2.$ \item[(b')] (Taylor estimate) If $\rVert R \rVert_{(X^{k+3})^3}, \rVert \tilde{R} \rVert_{(X^{k+3})^3} \le \eta$ for some $0 < \eta=\eta(k) < 1$, then \begin{align*} \begin{cases} |D\tilde{G}_s(R)[h]|_{(Y^{k})^3} \lesssim_{k} |h|_{(X^{k+1})^3}\\ |\tilde{G}_s(\tilde{R})-\tilde{G}_s(R)-D\tilde{G}_s(R)[\tilde{R}-R]|_{(Y^{k})^3} \lesssim_{k} |\tilde{R}-R|_{(X^{k+1})^3}^2\\ |\tilde{G}_s(\tilde{R})-\tilde{G}_s(R)-D\tilde{G}_s(R)[\tilde{R}-R]-\frac{1}{2}D^2\tilde{G}_s[\tilde{R}-R,\tilde{R}-R]|_{(Y^{k})^3} \lesssim_{k} |\tilde{R}-R|_{(X^{k+1})^3}^3. \end{cases} \end{align*} \end{itemize} \end{lemma} \begin{proof} (a') follows from the fact that $v \in \text{Ker}(DG(\Theta_3^{*},0))$ and $G(\Theta_3^*,0)=0$. (b') is due to (b) in Theorem~\ref{theorem1}. \end{proof} \subsubsection{Nash-Moser iteration} For $\beta,N>0$ and $2\le k\in \mathbb{N}$, we consider a regularizing operator $S(N): (X^{k})^3\mapsto (C^{\infty})^3$ such that \begin{align}\label{regularizing1} \begin{cases} |S(N)R|_{(X^{k+\beta})^3} \lesssim_{k,\beta} N^{\beta} |R|_{(X^{k})^3} & \forall \ R\in (X^{k})^3\\ |(I-S(N))R|_{(X^{k})^3} \lesssim_{k,\beta} N^{-\beta} |R|_{(X^{k+\beta})^3} & \forall \ R\in (X^{k+\beta})^3. \end{cases} \end{align} Note that we can choose $S$ so that $PS(N) = S(N)P$, since $v$ is supported on the $m$-th Fourier mode (see (d) in Theorem~\ref{theorem1}). 
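Although the existence of such a family of smoothing operators is classical, let us record one concrete choice for the reader's convenience; here we assume, as is consistent with our setting, that the spaces $X^k$ are Sobolev-type spaces of functions on the torus. One may take the sharp Fourier truncation
\begin{align*}
S(N)R(x) := \sum_{|j|\le N}\hat{R}(j)e^{ijx}.
\end{align*}
Then both bounds in \eqref{regularizing1} follow by comparing the Fourier weights: for $|j|\le N$ and $N>1$ we have $(1+|j|^2)^{\beta/2}\lesssim_\beta N^{\beta}$, while for $|j|>N$ we have $(1+|j|^2)^{-\beta/2}< N^{-\beta}$. Moreover, since $P$ is the projection onto the $m$-th Fourier mode, $S(N)$ and $P$ commute for this choice.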
For $N_n>1$ and an even integer $\beta>0$, both to be chosen later, we set \begin{align}\label{definition_of_approx_sol} \begin{cases} R_{n+1} = R_n - S(N_n)T_s(R_n)^{-1}[\tilde{G}_s(R_n)] & \text{ for }n\ge0\\ R_0 = 0, \end{cases} \end{align} and \begin{align}\label{def_of_sequences} \begin{cases} a_n:=|R_{n+1}-R_n|_{(X^{k_0+1})^3}\\ a'_n:=|R_{n+1}-R_n|_{(X^{k_0+2})^3}\\ a''_n:=|R_{n+1}-R_n|_{(X^{k_0+4})^3}\\ b_n:=|\tilde{G}_s(R_n)|_{(Y^{k_0})^3}\\ C_n:=|T_s(R_n)^{-1}[\tilde{G}_s(R_n)]|_{(X^{k_0+1+\beta})^3}. \end{cases} \end{align} Note that our goal is to show that $\sum_{n=0}^\infty a_n < \infty$, which implies that $R_n$ converges in $(X^{k_0+1})^3$. In order to verify the assumptions \eqref{assumptionfory1}-\eqref{assumptionfory3}, we will also need the boundedness of $a_n'$ and $a_n''$. \begin{lemma}\label{iteration1} Let $\epsilon>0$ and $k_0\ge2$ be fixed and let $s_0$ and $\delta$ be defined as in Lemma~\ref{approxinv}. We also assume that $\delta$ is even smaller if necessary, so that $\delta < \eta(k_0)$, where $\eta(k_0)$ is as in (b') in Lemma~\ref{conditionsforH}. For any $n\ge 0$ such that $R_n$ satisfies \eqref{assumptionfory1}, \eqref{assumptionfory2} and \eqref{assumptionfory3} for some $s \in (0,s_0)$, we have \begin{align} &a_{0}\lesssim_{\epsilon,k_0} s, \label{recursion3}\\ & a_{n}\lesssim_{\epsilon,k_0} \frac{1}{s}N_nb_{n} &\text{ for }n\ge 1, \label{recursion1}\\ &b_{0} \lesssim_{\epsilon,k_0,\beta} s^2, \label{recursion221}\\ &b_{1} \lesssim_{\epsilon,k_0,\beta} (N_0^3+1)s^{3} + N_0^{-\beta}C_{0}, \label{recursion4}\\ &b_{n+1} \lesssim_{\epsilon,k_0,\beta} a_{n}^2 + \frac{1}{s}N_{n}b_{n}^2 + N_{n}^{-\beta}(1+b_{n})C_{n} & \text{ for }n\ge 1,\label{recursion2}\\ &a'_n \lesssim_{\epsilon,k_0} \frac{1}{s}N_n^2 b_n & \text{ for }n\ge 0, \label{recursion5}\\ &a''_n\lesssim_{\epsilon,k_0}\frac{1}{s}N_n^4 b_n & \text{ for }n\ge 0. 
\label{recursion6} \end{align} Furthermore, we have \begin{align} |PR_1|_{(X^{k_0+2})^3}\lesssim_{\epsilon,k_0} N_0 s, \label{firststep1}\\ |(I-P)R_1|_{(X^{k_0+2})^3} \lesssim_{\epsilon,k_0} N_0 s^2. \label{firststep2} \end{align} \end{lemma} \begin{proof} \textbf{Proof of \eqref{recursion3}, \eqref{recursion1}, \eqref{firststep1} and \eqref{firststep2}.} We first claim that \begin{align} &|PR_1|_{(X^{k_0+1})^3} \lesssim s,\label{claim1}\\ &|(I-P)R_1|_{(X^{k_0+1})^3} \lesssim s^2. \label{claim2} \end{align} Let us prove \eqref{claim1} first. Thanks to (a') in Lemma~\ref{conditionsforH} and \eqref{Tinvbound1}, we have \begin{align*} |PR_1|_{(X^{k_0+1})^3} = |S(N_0)PT_s(0)^{-1}[\tilde{G}_s(0)]|_{(X^{k_0+1})^3} \lesssim |PT_s(0)^{-1}[\tilde{G}_s(0)]|_{(X^{k_0+1})^3} \lesssim \frac{1}{s}|\tilde{G}_s(0)|_{(Y^{k_0+1})^3} \lesssim s, \end{align*} where we used that $S(N_0)$ and $P$ commute to obtain the first equality. To prove \eqref{claim2}, we use \eqref{Tinvbound2} and compute \begin{align*} |(I-P)R_1|_{(X^{k_0+1})^3}& \lesssim |S(N_0)(I-P)T_s(0)^{-1}[\tilde{G}_s(0)]|_{(X^{k_0+1})^3} \\ & \lesssim |(I-P)T_s(0)^{-1}[\tilde{G}_s(0)]|_{(X^{k_0+1})^3} \\ & \lesssim |\tilde{G}_s(0)|_{(Y^{k_0+1})^3} \\ &\lesssim s^2, \end{align*} which proves \eqref{claim2}. With the claim, \eqref{recursion3} follows immediately. \eqref{recursion1}, \eqref{firststep1} and \eqref{firststep2} can be proved in exactly the same way as above, using \eqref{regularizing1}. \textbf{Proof of \eqref{recursion221}, \eqref{recursion4} and \eqref{recursion2}.} Note that Lemma~\ref{conditionsforH} immediately implies \eqref{recursion221}. Now, let us prove \eqref{recursion4} and \eqref{recursion2}. 
We have \begin{align}\label{bnestimate1} b_{n+1}:=|\tilde{G}_{s}(R_{n+1})|_{(Y^{k_0})^3} & \lesssim {|\tilde{G}_s(R_{n+1})-\tilde{G}_s(R_{n})-D\tilde{G}_s(R_{n})[R_{n+1}-R_{n}]|_{(Y^{k_0})^3}} \nonumber\\ & \ +{|\tilde{G}_s(R_{n})+D\tilde{G}_s(R_{n})[R_{n+1}-R_{n}]|_{(Y^{k_0})^3}}\nonumber\\ & =: J_1 + J_2. \end{align} We claim that \begin{align}\label{claim3} \begin{cases} J_1 \lesssim s^3 & \text{ if }n=0 \\ J_1 \lesssim a_{n}^2 =|R_{n+1}-R_{n}|_{(X^{k_0+1})^3}^2 & \text{ if }n\ge 1. \end{cases} \end{align} If $n=0$, we have \begin{align*} J_1 & \lesssim \bigg|\tilde{G}_{s}({R_1})-\tilde{G}_s(0)-D\tilde{G}_s(0)[R_1]-\frac{1}{2}D^2\tilde{G}_s(0)\left[R_1,R_1\right]\bigg|_{(Y^{k_0})^3} + \frac{1}{2}|D^2\tilde{G}_s(0)[R_1,R_1]|_{(Y^{k_0})^3}\\ &\lesssim |R_1|^3_{(X^{k_0+1})^3} + \frac{1}{2}|D^2\tilde{G}_s(0)[R_1,R_1]|_{(Y^{k_0})^3} \\ &\lesssim s^3 + \frac{1}{2}|D^2\tilde{G}_s(0)[R_1,R_1]|_{(Y^{k_0})^3}, \end{align*} where we used (b') in Lemma~\ref{conditionsforH} in the second inequality and \eqref{claim1} and \eqref{claim2} in the last inequality. To estimate $D^2\tilde{G}_s(0)$, we recall the definition of $\tilde{G}_s$ and compute \begin{align*} D^2\tilde{G}_s(0)[R_1,R_1] & = \left(\frac{d}{dt}\right)^2 G(\Theta_3^*+tv\cdot R_1, sv+(I-P)tR_1)\bigg|_{t=0} \\ & = \partial_{\Theta \Theta}G(\Theta_3^*,sv)(v\cdot R_1)^2 + 2(v\cdot R_1)\partial_{\Theta}DG(\Theta_3^*,sv)[(I-P)R_1]\\ & \ + D^2G(\Theta_3^*,sv)\left[ (I-P)R_1,(I-P)R_1\right]. \end{align*} Since $\partial_{\Theta \Theta}G(\Theta_3^*,0) = 0$, we have \[ \left| \partial_{\Theta \Theta}G(\Theta^*_3,sv) \right|_{(Y^{k_0})^3} = \left| \int_0^{s}\frac{d}{dt}\left( \partial_{\Theta\Theta} G(\Theta^*_3,tv)\right)dt\right|_{(Y^{k_0})^3} \lesssim s, \] which implies $|\partial_{\Theta \Theta}G(\Theta_3^*,sv)(v\cdot R_1)^2|_{(Y^{k_0})^3} \lesssim s^3$, since $|v\cdot R_1| \lesssim |PR_1|_{(X^{k_0+1})^3}\lesssim s$, which follows from \eqref{claim1}. 
Also it follows from (b), \eqref{claim1} and \eqref{claim2} that \begin{align*} &|2(v\cdot R_1)\partial_{\Theta}DG(\Theta_3^*,sv)[(I-P)R_1]|_{(Y^{k_0})^3} \lesssim s^3,\\ &|D^2G(\Theta_3^*,sv)[ (I-P)R_1,(I-P)R_1]|_{(Y^{k_0})^3} \lesssim s^4, \end{align*} and therefore $|D^2\tilde{G}_s(0)[R_1,R_1] |_{(Y^{k_0})^3} \lesssim s^3 $. Thus, we have $J_1 \lesssim s^3$, which proves \eqref{claim3} for $n=0$. If $n\ge 1$, the claim in \eqref{claim3} follows immediately from (b') in Lemma~\ref{conditionsforH}.\\ In order to estimate $J_2$ in \eqref{bnestimate1}, we have that for $n\ge 0$, \begin{align*} J_2 & \lesssim {|(I-D\tilde{G}_s(R_{n})\circ T_s(R_{n})^{-1})[\tilde{G}_s(R_{n})]|_{(Y^{k_0})^3}} \\ & \ +{|D\tilde{G}_s(R_{n})\left(I-S(N_{n})\right)T_s(R_{n})^{-1}[\tilde{G}_s(R_{n})]|_{(Y^{k_0})^3}}\\ & =: J_{21} + J_{22}. \end{align*} Then it follows from \eqref{regularizing1} and \eqref{Approximate_inv} that \begin{align*} J_{21} &\lesssim |\tilde{G}_s(R_{n})|_{(Y^{k_0})^3}|T_s(R_{n})^{-1}[\tilde{G}_{s}(R_{n})]|_{(X^{k_0+1})^3} \\ & \lesssim |\tilde{G}_s(R_{n})|_{(Y^{k_0})^3}|S(N_{n})T_s(R_{n})^{-1}[\tilde{G}_s(R_{n})]|_{(X^{k_0+1})^3} \\ & \ + |\tilde{G}_s(R_{n})|_{(Y^{k_0})^3}|\left(I-S(N_{n})\right)T_s(R_{n})^{-1}[\tilde{G}_s(R_{n})]|_{(X^{k_0+1})^3} \\ & \lesssim N_{n}|\tilde{G}_s(R_{n})|_{(Y^{k_0})^3}|T_s(R_{n})^{-1}[\tilde{G}_s(R_{n})]|_{(X^{k_0})^3}\\ & \ + N_{n}^{-\beta}|\tilde{G}_{s}(R_{n})|_{(Y^{k_0})^3}|T_s(R_{n})^{-1}[\tilde{G}_s(R_{n})]|_{(X^{k_0+1+\beta})^3} \\ &\lesssim \frac{1}{s}N_{n}|\tilde{G}_{s}(R_{n})|_{(Y^{k_0})^3}^2+N_{n}^{-\beta}|\tilde{G}_{s}(R_{n})|_{(Y^{k_0})^3}|T_s(R_{n})^{-1}[\tilde{G}_s(R_{n})]|_{(X^{k_0+1+\beta})^3}\\ &\lesssim \frac{1}{s}N_{n}b_{n}^2+N_{n}^{-\beta}b_{n}C_{n}, \end{align*} where the fourth inequality follows from \eqref{crudebound}. Also we have \begin{align*} J_{22}\lesssim |\left( I- S(N_{n})\right)T_s(R_{n})^{-1}[\tilde{G}_{s}(R_{n})]|_{(X^{k_0+1})^3} \lesssim N_{n}^{-\beta}C_{n}. 
\end{align*} Hence we have for $n\ge 0$ that \begin{align}\label{J2estimate1} J_2 \lesssim \frac{1}{s}N_{n}b_{n}^2 + N_{n}^{-\beta}(1+b_{n})C_{n}. \end{align} Thus, with \eqref{bnestimate1}, \eqref{claim3}, \eqref{J2estimate1} and (a') in Lemma~\ref{conditionsforH}, we have that \begin{align}\label{bnrecursion} \begin{cases} b_{1} \lesssim (1+N_0)s^{3} + N_0^{-\beta}C_{0} \\ b_{n+1} \lesssim a_{n}^2 + \frac{1}{s}N_{n}b_{n}^2 + N_{n}^{-\beta}(1+b_{n})C_{n} & \text{ for }n\ge 1. \end{cases} \end{align} \textbf{Proof of \eqref{recursion5} and \eqref{recursion6}.} Finally, for \eqref{recursion5} and \eqref{recursion6}, we use \eqref{crudebound} to compute \begin{align} &|R_{n+1}-R_{n}|_{(X^{k_0+2})^3} = | S(N_n)T_s(R_{n})^{-1}[\tilde{G}_s(R_n)]|_{\left(X^{k_0+2}\right)^3} \lesssim N_n^2\frac{1}{s}b_n, \label{highnormestimate1} \\ &|R_{n+1}-R_{n}|_{(X^{k_0+4})^3} = | S(N_n)T_s(R_{n})^{-1}[\tilde{G}_s(R_n)]|_{\left(X^{k_0+4}\right)^3} \lesssim N_n^4\frac{1}{s}b_n. \label{highnormestimate} \end{align} This finishes the proof. \end{proof} Let us now derive a recursive formula for $C_n$. \begin{lemma}\label{iteration2} For $j=0,\ldots,n-1$, assume that $R_j$ satisfies the same assumptions as in Lemma~\ref{iteration1}, that is, given $\epsilon>0$ and $k_0\ge2$, $R_j$ satisfies \eqref{assumptionfory1}, \eqref{assumptionfory2} and \eqref{assumptionfory3} and the assumptions of Lemma~\ref{conditionsforH} for $s\in (0,s_0(\epsilon,k_0))$ and $\delta=\delta(\epsilon,k_0)>0$. Then, \begin{align*} C_{n} \lesssim_{\epsilon,k_0,\beta} \frac{1}{s}\left( 1 + n\sup_{j=0,\ldots,n-1}N_{j}^{4} C_{j} \right), \quad \text{ for $n\ge 1$, and }\quad C_0\lesssim_{\epsilon,k_0,\beta} s. 
\end{align*} \end{lemma} \begin{proof} It follows from (a') in Lemma~\ref{conditionsforH} and \eqref{highernorm_inversion} in Lemma~\ref{approxinv} that \begin{align*} C_0 \lesssim \frac{1}{s}|\tilde{G}_s(0)|_{Y^{k_0+1+\beta}\times (Y^{k_0+\beta})^{2}}\lesssim \frac{1}{s}|\tilde{G}_s(0)|_{(Y^{k_0+1+\beta})^3}\lesssim s. \end{align*} Now let us assume that $n\ge 1$. It follows from the definition of $C_n$ in \eqref{def_of_sequences} and \eqref{highernorm_inversion} that (recall that $\beta$ is even) \begin{align*} C_n \lesssim \frac{1}{s} \left( (1+|R_n|_{(X^{k_0+4+\beta})^3})|\tilde{G}_s(R_n)|_{Y^{k_0+1}\times (Y^{k_0})^{2}} + |\tilde{G}_s(R_n)|_{Y^{k_0+1+\beta}\times (Y^{k_0+\beta})^{2}}\right). \end{align*} Using \eqref{lineargrowth1}, we have \begin{align*} &|\tilde{G}_s(R_n)|_{Y^{k_0+1}\times (Y^{k_0})^{2}} \lesssim |\tilde{G}_s(R_n)|_{(Y^{k_0+1})^3} \lesssim 1+ | R_n |_{(X^{k_0+2})^3}\lesssim 1,\\ &|\tilde{G}_s(R_n)|_{Y^{k_0+1+\beta}\times (Y^{k_0+\beta})^{2}} \lesssim 1+ | R_n |_{(X^{k_0+2+\beta})^3}. \end{align*} Therefore, we have \begin{align*} C_n \lesssim \frac{1}{s} (1 + |R_n|_{(X^{k_0+4+\beta})^3}). \end{align*} For $R_n$, we have \begin{align*} |R_n|_{(X^{k_0+4+\beta})^3} &\lesssim \sum_{j=0}^{n-1}\left|R_{j+1} - R_{j}\right|_{(X^{k_0+4+\beta})^3} \\ & \lesssim \sum_{j=0}^{n-1}\left|S(N_j)T_s(R_j)^{-1}[\tilde{G}_s(R_j)]\right|_{(X^{k_0+4+\beta})^3} \\ & \lesssim \sum_{j=0}^{n-1}N_j^3 C_j\\ & \lesssim n\sup_{j=0,\ldots,n-1}N_{j}^3C_{j}. \end{align*} Since $N_j>1$, we have $N_j^3 \le N_j^4$, and hence we obtain the desired result. \end{proof} Now we are ready to prove the main theorem of this section. 
\begin{proofthm}{theorem1} We fix $k_0\ge2$ and pick $\tilde{\epsilon},\ \tilde{s}>0$ so that \begin{align}\label{epsilonpick} \sum_{k=1}^{\infty}s^{-1+2\left(\frac{67}{64} \right)^{k}} \le s^{1+ \tilde{\epsilon}} \text{ for all $s \in (0,\tilde{s})$.} \end{align} We then choose \begin{align} \ \beta:= 816\quad \text{ and }\quad \epsilon:= \min\left\{ \frac{1}{4}, \frac{\tilde{\epsilon}}{2} \right\} >0, \label{param1}\end{align} and let $s_0=s_0(\epsilon,k_0),$ $ \delta=\delta(\epsilon,k_0)$ be as in Lemma~\ref{approxinv}. As before, we can also assume $\delta$ is small enough so that $\delta <\eta(k_0)$, where $\eta$ is as in (b') in Lemma~\ref{conditionsforH}. Since $k_0$, $\epsilon$ and $\beta$ are fixed, by Lemmas~\ref{iteration1} and \ref{iteration2}, we can find a constant $K > 0$ such that, as long as $R_n$ satisfies \eqref{assumptionfory1}, \eqref{assumptionfory2} and \eqref{assumptionfory3} for some $s\in (0,s_0)$, the following hold for any sequence of positive numbers $N_n$: \begin{align} &a_0 \le Ks, \label{a0formula}\\ & |PR_1|_{(X^{k_0+2})^3}\le K N_0 s, \label{firststep11}\\ & |(I-P)R_1|_{(X^{k_0+2})^3} \le K N_0 s^2, \label{firststep12}\\ &a_n \le \frac{K}{s}N_nb_n \quad \text{for $n\ge 1$,} \label{a1formula}\\ &b_0 \le Ks^2, \label{b0formula} \\ &b_1 \le K\left( (N_0^3 + 1)s^3 + N_0^{-\beta}C_0 \right), \label{b1formula}\\ &b_{n+1} \le K(a_n^2 + \frac{1}{s}N_nb_n^2 + N_n^{-\beta}(1+b_n)C_n) \quad \text{ for $n\ge 1$}, \label{bnformula}\\ &a'_n \le \frac{K}{s}N_n^2 b_n \quad \text{ for $n\ge 0$}, \label{a'0formula}\\ &a''_n \le \frac{K}{s}N_n^4b_n \quad \text{ for $n\ge 0$}, \label{a''0formula}\\ &C_n \le \frac{K}{s}\left(1 + n \sup_{j=0,\ldots,n-1}\left|N_{j}^{4} C_{j}\right| \right) \quad \text{ for $n\ge 1$,} \label{cnformula}\\ & C_0 \le Ks.\label{c0formula} \end{align} In what follows, we assume without loss of generality that \begin{align}\label{kanddelta} K > 1, \quad \text{ and } \quad \delta<1. 
\end{align} For such $K$, we can find $s^*$ such that for all $s\in (0,s^*)$ the following hold (each of them can be easily verified for small enough $s>0$): \begin{align} &2Kn<s^{-2\left(\frac{67}{64}\right)^{n-1}+1} \quad \text{ for all $n\ge 1$},\label{condfors1} \\ & 4K^3s^{\frac{7}{16}}< \frac{1}{2}, \label{condfors2} \\ & Ks^{\frac{\tilde{\epsilon}}{2}} \le \frac{1}{2}, \label{condfors4}\\ & s<1, \label{condfors3}\\ & K^2s^{\frac{3}{4}} \le \frac{1}{2}\delta, \label{condfors5} \\ & K\sum_{k=1}^{\infty}s^{-1+\frac{127}{64}\left( \frac{67}{64}\right)^k} \le \frac{1}{2}\delta, \label{condfors6} \\ & K(n+2) \le s^{1-2\left( \frac{67}{64}\right)^{n}} \text{ for all $n\ge 0$}, \label{condfors7} \\ & 3K^3 \le s^{2+\left(-\frac{139}{32} + \frac{143}{64}\cdot\frac{67}{64} \right)\left(\frac{67}{64} \right)^{n}} \text{ for all $n\ge 1$}, \label{condfors9}\\ & 3K \le s^{1+\left(-\frac{141}{32}+\frac{143}{64}\left( \frac{67}{64}\right) \right)\left( \frac{67}{64}\right)^{n}} \text{ for all $n\ge 1$}, \label{condfors10}\\ & 6K \le s^{\left( - \frac{816}{16}+\frac{768}{16}+\frac{143}{64}\frac{67}{64}\right)\left(\frac{67}{64} \right)^{n}} \text{ for all $n\ge 1$}. \label{condfors11} \end{align} Lastly, we fix $s\in (0,\min\left\{s_0,s^*,\tilde{s} \right\})$ and set \begin{align}\label{param2} N_n := s^{-\frac{1}{16}\left(\frac{67}{64} \right)^{n}} \quad \text{ for $n\ge 0$.} \end{align} We claim that for all $n\ge0$, \begin{itemize} \item[$(P1)_n$]: $R_{n+1}$ satisfies \eqref{assumptionfory1}, \eqref{assumptionfory2} and \eqref{assumptionfory3}. \item[$(P2)_n$]: $C_{n} \le N_n^{768}$. \item[$(P3)_n$]: $b_{n+1} \le s^{\frac{143}{64}\left(\frac{67}{64}\right)^{n+1}}$. \item[$(P4)_n$]: $a_{n+1} \le Ks^{-1 + \frac{139}{64}\left( \frac{67}{64}\right)^{n+1}}$. \end{itemize} Once we have the above claims, $(P1)_n$ justifies all the recurrence formulae above for all $n \in \mathbb{N}$, by Lemmas~\ref{iteration1} and \ref{iteration2}. 
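Let us also note an elementary consequence of the choice \eqref{param2}: the sequence $N_n$ obeys the multiplicative recursion
\begin{align*}
N_0 = s^{-\frac{1}{16}} > 1, \qquad N_{n+1} = N_n^{\frac{67}{64}},
\end{align*}
so $N_n$ is increasing and $\log N_n$ grows geometrically. This is the standard choice in Nash--Moser schemes: the factor $N_n^{-\beta}$ in \eqref{bnformula} compensates for the loss of derivatives accumulated in $C_n$, provided $\beta$ is large enough, which is why $\beta=816$ was fixed in \eqref{param1}.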
Also, it is clear from \eqref{a1formula} and $(P4)_n$ that \begin{align*} \sum_{n=0}^{\infty}a_n \le KN_0s + K\sum_{n=1}^{\infty} s^{-1 + \frac{139}{64}\left( \frac{67}{64}\right)^{n}} < \infty, \end{align*} where the series converges since $s<1$ and the exponents $-1 + \frac{139}{64}\left( \frac{67}{64}\right)^{n}$ are positive and increase to infinity. Therefore $R_n$ is a Cauchy sequence in $(X^{k_0+1})^3$, and we can find a limit $R_\infty:=\lim_{n\to\infty}R_n$. Then, $(P3)_n$ and the continuity of the functional $R\mapsto \tilde{G}_s(R)$ imply that $\tilde{G}_s(R_\infty) = 0$. From the definition of $\tilde{G}_s$ in \eqref{def_tilde_g}, this implies the existence of a solution $R = R(s)\ne 0$ for each $s\in (0,\min\left\{s_0,s^*,\tilde{s} \right\})$ and finishes the proof. Now we prove the claims $(Pi)_n$ for $i=1,\ldots,4$. We will follow the usual induction argument. \textbf{Initial step.} For the initial step, we take $n=0$. We first prove $(P1)_0$. It follows from \eqref{firststep11}, \eqref{firststep12} and \eqref{param2} that \begin{align} &|PR_1|_{(X^{k_0+2})^3}\le Ks^{\frac{15}{16}} \le s^{\frac{1}{4}} \le s^{\epsilon}, \label{P1claim1}\\ &|(I-P)R_1|_{(X^{k_0+2})^3} \le K s^{1+\frac{15}{16}} \le s^{1+ \frac{1}{4}} \le s^{1+\epsilon}, \label{P1claim2} \end{align} where the two second inequalities follow from \eqref{condfors2} ($Ks^{\frac{15}{16}}\le 4K^3s^{\frac{7}{16}}s^{\frac{1}{2}}<s^{\frac{1}{4}}$) and the last inequalities follow from \eqref{condfors3} and \eqref{param1}. Furthermore, it follows from \eqref{a''0formula}, \eqref{b0formula} and \eqref{param2} that \begin{align}\label{cknorm1} |R_1|_{(X^{k_0+4})^3} = a''_0 \le \frac{K}{s} \cdot \left( s^{-\frac{1}{16}}\right)^4 \cdot \left( K s^2\right) \le K^2s^{\frac{3}{4}} \le \frac{1}{2}\delta, \end{align} where the last inequality follows from \eqref{condfors5}. Therefore $(P1)_0$ holds. $(P2)_0$ follows immediately from \eqref{c0formula} and \eqref{param2}. 
In order to prove $(P3)_0$, note that thanks to \eqref{b1formula}, it is enough to show that \[ K\left( (N_0^3 + 1)s^3 + N_0^{-\beta}C_0 \right) \le s^{\frac{143}{64}\cdot\frac{67}{64}}, \] in other words, \[K\left( s^{\frac{45}{16}}+s^3 + Ks^{\frac{816}{16}}s \right) \le s^{\frac{143}{64}\cdot\frac{67}{64}},\] where we used \eqref{param1}, \eqref{c0formula} and \eqref{param2}. By \eqref{condfors3}, $s^{\frac{45}{16}}$ is the largest value among the three terms in the parentheses, hence it is sufficient to show that $3K^2 \le s^{\frac{143}{64}\cdot \frac{67}{64} - \frac{45}{16}}.$ Since $\frac{143}{64}\cdot \frac{67}{64} - \frac{45}{16} = -\frac{1939}{4096} < -\frac{7}{16}$ and $s<1$, it is enough to show that $3K^3 \le s^{-\frac{7}{16}}$, which follows from \eqref{condfors2}. $(P4)_0$ follows from \eqref{a1formula}, \eqref{param2} and $(P3)_0$. \textbf{Induction step.} In this step, we assume that $(Pi)_{k}$ is true for all $0 \le k \le n_0$ and aim to prove $(Pi)_{n_0+1}$. Let us prove $(P1)_{n_0+1}$ first. It follows from \eqref{a'0formula}, \eqref{param2} and $(P3)_{k}$ for $k=0,\ldots,n_0$, that \[ \sum_{k=1}^{n_0+1} a'_k \le K \sum_{k=1}^{n_0+1} s^{-1}\left( s^{-\frac{1}{16}\left(\frac{67}{64}\right)^k}\right)^2 \cdot s^{\frac{143}{64}\left( \frac{67}{64}\right)^{k}} \le K\sum_{k=1}^{n_0+1}s^{-1+2\left( \frac{67}{64}\right)^{k}} \le K s^{1+\tilde{\epsilon}}, \] where the last inequality follows from our choice of $\tilde{\epsilon}$ and $s\in (0,\tilde{s})$ in \eqref{epsilonpick}. Hence, we have \begin{align*} \sum_{k=1}^{n_0+1} a'_k \le \left( K s^{\frac{\tilde{\epsilon}}{2}}\right)s^{1+\frac{\tilde{\epsilon}}{2}} \le \frac{1}{2}s^{1+ \frac{\tilde{\epsilon}}{2}}\le \frac{1}{2} s^{1+\epsilon}, \end{align*} where the second inequality follows from \eqref{condfors4} and the last inequality follows from \eqref{param1} and \eqref{condfors3}. 
Therefore we obtain \[ |PR_{n_0+2}|_{(X^{k_0+2})^3} \le |PR_{1}|_{(X^{k_0+2})^3} + \sum_{k=1}^{n_0+1}a'_k \le Ks^{\frac{15}{16}} + \frac{1}{2}s^{1+\epsilon}\le \left( K s^{\frac{7}{16}}\right) s^{\frac{1}{2}} + \left( \frac{1}{2}s\right)s^{\epsilon} \le s^{\epsilon}, \] where the second inequality follows from \eqref{P1claim1} and the last inequality follows from $\epsilon < \frac{1}{2}$ and $Ks^{\frac{7}{16}}\le \frac{1}{2}$, which can be deduced from \eqref{condfors2}, \eqref{kanddelta} and \eqref{param1}. Using \eqref{P1claim2}, instead of \eqref{P1claim1}, one can easily obtain \[ |(I-P)R_{n_0+2}|_{(X^{k_0+2})^3} \le s^{1+\epsilon}. \] To prove \eqref{assumptionfory3} for $R_{n_0+2}$, we compute \begin{align}\label{a''nestimate} |R_{n_0+2}|_{(X^{k_0+4})^3} &\le a''_0 + \sum_{k=1}^{n_0+1}a''_k \le \frac{1}{2}\delta + K \sum_{k=1}^{n_0+1} s^{-1}\left( s^{-\frac{1}{16}\left(\frac{67}{64}\right)^k}\right)^4 \cdot s^{\frac{143}{64}\left( \frac{67}{64}\right)^{k}} \nonumber\\ & \le \frac{1}{2}\delta + K\sum_{k=1}^{\infty}s^{-1+\frac{127}{64}\left( \frac{67}{64}\right)^k} \le \delta, \end{align} where the second inequality follows from \eqref{cknorm1}, \eqref{a''0formula} and $(P3)_k$, for $0\le k\le n_0$, and the last inequality follows from \eqref{condfors6}. This proves $(P1)_{n_0+1}$. We turn to $(P2)_{n_0+1}$. Using \eqref{cnformula} and $(P2)_{n_0}$, we have \[ C_{n_0+1} \le \frac{K}{s}(2+n_0)N_{n_0}^4\sup_{j=0,\ldots,n_0}C_{j} \le \frac{K}{s}(2+n_0)N_{n_0}^{772}, \] where the last inequality follows from $(P2)_{n_0}$. Hence, it suffices to show that \[ \frac{K}{s}(2+n_0)N_{n_0}^{772} \le (N_{n_0+1})^{768}. \] Plugging in \eqref{param2}, this is equivalent to \[ K(n_0+2) \le s^{1+\left( \frac{67}{64}\right)^{n_0}\left(\frac{772}{16}-\frac{768}{16}\cdot \frac{67}{64} \right)} = s^{1-2\left( \frac{67}{64}\right)^{n_0}}, \] which is true thanks to our choice of $s$ in \eqref{condfors7}. This proves $(P2)_{n_0+1}$. 
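For the reader's convenience, the exponent arithmetic behind the last identity is elementary:
\begin{align*}
\frac{772}{16}-\frac{768}{16}\cdot \frac{67}{64} = \frac{772}{16}- 48\cdot\frac{67}{64} = \frac{772}{16} - \frac{804}{16} = -2.
\end{align*}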
For $(P3)_{n_0+1}$, thanks to \eqref{bnformula}, it is enough to show that \[ K\left( a_{n_0+1}^2 + \frac{1}{s}N_{n_0+1}b_{n_0+1}^2 + N_{n_0+1}^{-\beta}(1+b_{n_0+1})C_{n_0+1} \right) \le s^{\frac{143}{64}\left( \frac{67}{64}\right)^{n_0+2}}. \] Using $(P4)_{n_0}$, \eqref{param2}, $(P3)_{n_0}$, \eqref{param1}, $(P2)_{n_0+1}$ and \eqref{condfors3}, which implies $b_{n_0+1}\le 1$, we only need to show that \begin{align}\label{p3indunction} K\left( \underbrace{\left( K s^{-1+\frac{139}{64}\left( \frac{67}{64}\right)^{n_0+1}} \right)^2 }_{=:A_1} + \underbrace{ s^{-1}s^{-\frac{1}{16}\left(\frac{67}{64}\right)^{n_0+1}}s^{\frac{143}{32}\left(\frac{67}{64}\right)^{n_0+1}}}_{=:A_2} + \underbrace{ 2s^{\frac{816}{16}\left(\frac{67}{64}\right)^{n_0+1}}s^{-\frac{768}{16}\left(\frac{67}{64}\right)^{n_0+1}}}_{=:A_3}\right) \le s^{\frac{143}{64}\left(\frac{67}{64}\right)^{n_0+2}}. \end{align} We will show that $KA_i \le \frac{1}{3}s^{\frac{143}{64}\left( \frac{67}{64}\right)^{n_0+2}}$ for $i=1,2,3$. For $A_1$, it suffices to show that \[ K^3s^{-2+\frac{139}{32}\left(\frac{67}{64}\right)^{n_0+1}} \le \frac{1}{3}s^{\frac{143}{64}\left(\frac{67}{64}\right)^{n_0+2}}, \] equivalently, \[ 3K^3 \le s^{2+\left(-\frac{139}{32} + \frac{143}{64}\cdot\frac{67}{64} \right)\left(\frac{67}{64} \right)^{n_0+1}}, \] and this follows from our choice of $s$ in \eqref{condfors9}. For $A_2$, we need to show that \[Ks^{-1+\frac{141}{32}\left( \frac{67}{64}\right)^{n_0+1}} \le \frac{1}{3}s^{\frac{143}{64}\left( \frac{67}{64}\right)^{n_0+2}}, \] equivalently, \[ 3K \le s^{1+\left(-\frac{141}{32}+\frac{143}{64}\left( \frac{67}{64}\right) \right)\left( \frac{67}{64}\right)^{n_0+1}}, \] and this follows from \eqref{condfors10}. 
For $A_3$, it is enough to show that \[ 2K s^{\frac{816}{16}\left(\frac{67}{64}\right)^{n_0+1}} \cdot s^{-\frac{768}{16}\left( \frac{67}{64}\right)^{n_0+1}} \le \frac{1}{3}s^{\frac{143}{64}\left( \frac{67}{64}\right)^{n_0+2}}, \] equivalently, \[ 6K \le s^{\left( - \frac{816}{16}+\frac{768}{16}+\frac{143}{64}\frac{67}{64}\right)\left(\frac{67}{64} \right)^{n_0+1}}, \] which follows from \eqref{condfors11}. This proves $(P3)_{n_0+1}$. Lastly, $(P4)_{n_0+1}$ can be proved by \eqref{a1formula} and $(P3)_{n_0 + 1}$, that is, \[ a_{n_0+2} \le Ks^{-1}N_{n_0+2}b_{n_0+2} \le K s^{-1}s^{\left(-\frac{1}{16} + \frac{143}{64}\right)\left( \frac{67}{64}\right)^{n_0+2}} = Ks^{-1 + \frac{139}{64}\left( \frac{67}{64}\right)^{n_0+2}}, \] which finishes the proof. \end{proofthm} \subsection{Estimates on the velocity}\label{velocity_estimates} In Subsection~\ref{checking_subsection}, we will check that our functional $G$ in \eqref{stationarR_equation3} satisfies the hypotheses of Theorem~\ref{theorem1}. In this subsection, we derive some useful estimates on the velocity vector generated by each patch. 
Recall that given $b_1,b_2>0$, $b_1\ne b_2$, and $R_1,R_2\in C^{\infty}(\mathbb{T})$, we denote for $i,j=1,2$, \begin{align} &R = (R_1,R_2) \in (C^\infty(\mathbb{T}))^2,\label{R_2}\\ &z_i(R)(x) = (b_i+R_i(x))(\cos x,\sin x), \label{z_2}\\ &u_{i,j}(R)(x):=\begin{pmatrix} u^1_{i,j}(R) \\ u^2_{i,j}(R) \end{pmatrix} =\int_{\mathbb{T}}\log(|z_j(R)(x)-z_i(R)(y)|^2)z_i(R)'^{\perp}(y)dy.\label{u_def} \end{align} \begin{prop}\label{growth_velocity} For $k\ge 2$, there exists $\epsilon=\epsilon(k,b_1,b_2) >0$ such that if $\rVert R \rVert_{(H^{3}(\mathbb{T}))^2} \le \epsilon$, then \begin{itemize} \item[(A)] $u_{i,j}:(H^{k+1}(\mathbb{T}))^2 \mapsto (H^{k}(\mathbb{T}))^2, \text{ and }\rVert u_{i,j}(R) \rVert_{(H^{k}(\mathbb{T}))^2} \lesssim_{k,b_1,b_2} 1 + \rVert R \rVert_{(H^{k+1}(\mathbb{T}))^2}.$ \item[(B)] The Gateaux derivative $Du_{i,j}(R):(H^{k+1}(\mathbb{T}))^2 \mapsto (H^{k+1}(\mathbb{T}))^2$ exists and \[ \rVert Du_{i,j}(R)[h]\rVert_{(H^{k+1}(\mathbb{T}))^2} \lesssim_{k,b_1,b_2} \rVert h \rVert_{(H^{k+1}(\mathbb{T}))^2} + \rVert R \rVert_{(H^{k+3}(\mathbb{T}))^2}\rVert h \rVert_{(H^1(\mathbb{T}))^2}, \text{ for }h\in (C^{\infty}(\mathbb{T}))^2. \] \item[(C)] The map $R\mapsto Du_{{i,j}}(R)$ is Lipschitz continuous, in the sense that for $R,r,h \in (C^\infty(\mathbb{T}))^2$ such that $ \rVert R \rVert_{(H^{3}(\mathbb{T}))^2}, \rVert r \rVert_{(H^{3}(\mathbb{T}))^2} \le \epsilon,$ and $\rVert R\rVert_{(H^{k+3}(\mathbb{T}))^2}, \rVert r \rVert_{(H^{k+3}(\mathbb{T}))^2} \le 1$, \[ \rVert Du_{i,j}(R)[h]-Du_{i,j}(r)[h]\rVert_{(H^{k+1}(\mathbb{T}))^2} \lesssim_{k,b_1,b_2} \rVert R-r\rVert_{(H^2(\mathbb{T}))^2}\rVert h \rVert_{(H^{k+1}(\mathbb{T}))^2} + \rVert R - r \rVert_{(H^{k+3}(\mathbb{T}))^2}\rVert h \rVert_{(H^{1}(\mathbb{T}))^2}.
\] \item[(D)] For $\sigma\in \mathbb{N}\cup \left\{ 0 \right\}$, there exists a linear operator $T_{i,j}^\sigma(R)$ such that for $h\in \left(C^\infty(\mathbb{T})\right)^2$, \[ \left(\frac{d}{dx}\right)^{\sigma} \left( Du_{i,j}(R)[h]\right) = Du_{i,j}(R)[h^{(\sigma)}] + T_{i,j}^\sigma(R)[h], \] where $T_{i,j}^\sigma(R)$ satisfies \[ \rVert T_{i,j}^\sigma(R)[h]\rVert_{(H^{k+1}(\mathbb{T}))^2} \lesssim_{k} (1+ \rVert R \rVert_{(H^{k+4+\sigma}(\mathbb{T}))^2})\rVert h\rVert_{(L^2(\mathbb{T}))^2} + (1+\rVert R \rVert_{(H^4(\mathbb{T}))^2})\rVert h^{(k+\sigma)} \rVert_{(L^2(\mathbb{T}))^2}. \] \end{itemize} \end{prop} \begin{rem} It is well known that, roughly speaking, if $\partial D$ is $C^{k+\alpha}$-regular for some $k\ge1$ and $\alpha>0$, then the velocity $\nabla^{\perp}(1_D*\log|x|)$ is also $C^{k+\alpha}$-regular up to the boundary. In $(A)$ of the above proposition, we do not aim to prove the optimal regularity, since it is not necessary in the proof of the main theorem. \end{rem} The proof of the proposition will be given after Lemmas~\ref{Acondition} and \ref{BCcondition}. We will deal with the case $i=j$ only. If $i\ne j$, then the integrand in $u_{i,j}$ has no singularity, and thus the result follows straightforwardly in a similar manner.
Hence, by abuse of notation, we denote for $b>0$ and $R\in C^{\infty}(\mathbb{T})$, \begin{align} &z(R)(x) := (b+R(x))(\cos x, \sin x),\label{single1}\\ &u(R)(x):=\begin{pmatrix}u^1(R) \\ u^2(R) \end{pmatrix} :=\int_{\mathbb{T}}\log\left(|z(R)(x) - z(R)(y)|^2 \right)z(R)'^{\perp}(y)dy.\label{single2} \end{align} Let us write $u(R)$ as \begin{align}\label{f_def} u(R)(x) & = \int_{\mathbb{T}} \log\left(2-2\cos(x-y)\right) z(R)'^{\perp}(y)dy + \int_{\mathbb{T}} \log\left(\frac{|z(R)(x) -z(R)(y)|^2}{2-2\cos(x-y)}\right) z(R)'^{\perp}(y)dy\nonumber\\ & = \int_{\mathbb{T}} \log\left(2-2\cos(x-y)\right) z(R)'^{\perp}(y)dy + \int_{\mathbb{T}}K(R)(x,y)z(R)'^{\perp}(y)dy \nonumber\\ & =: u_{L}(R)(x) + u_{N}(R)(x), \end{align} where \begin{align} &K(R)(x,y) = F(R(x),R(y),J(R)(x,y)),\nonumber\\ &F(u,v,w) := \log(b^2 + b(u+v)+uv+w^2),\label{F_smooth}\\ &J(R)(x,y) = \frac{R(x)-R(y)}{2\sin(\frac{x-y}{2})}.\nonumber \end{align} If $\rVert R \rVert_{H^{3}(\mathbb{T})} \le \epsilon$ for sufficiently small $\epsilon>0$, it follows from Lemmas~\ref{appendix_lem_1} and \ref{ponce_kato} that \begin{align}\label{f_norm1} \rVert J(R) \rVert_{H^{2}(\mathbb{T}^2)} \lesssim_{b} \rVert R \rVert_{H^{3}(\mathbb{T})} \lesssim \epsilon. \end{align} Therefore, for sufficiently small $\epsilon>0$, Lemmas~\ref{composition} and~\ref{appendix_lem_1} imply that for $l\ge 2$, \begin{align} &\rVert K(R) \rVert_{H^{l}(\mathbb{T}^2)} \lesssim_{l,b} 1 + \rVert R \rVert_{H^l(\mathbb{T})} + \rVert J(R)\rVert_{H^{l}(\mathbb{T}^2)} \lesssim_{l,b} 1+ \rVert R \rVert_{H^{l+1}(\mathbb{T})}. \label{Kestimate} \end{align} The next lemma will be used to prove the growth condition of the velocity in (A) in Proposition~\ref{growth_velocity}.
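Two facts underlying this decomposition admit quick numerical sanity checks: the identity $|z(R)(x)-z(R)(y)|^2=(2-2\cos(x-y))\big(b^2+b(R(x)+R(y))+R(x)R(y)+J(R)(x,y)^2\big)$, which is what makes $K(R)=F(R(x),R(y),J(R)(x,y))$ a smooth composition, and the circular case $R\equiv 0$, where the Fourier expansion $\log(2-2\cos t)=-2\sum_{n\ge1}\cos(nt)/n$ gives $u(0)(x)=2\pi b(\cos x,\sin x)$ under the perp convention $v^\perp=(-v_2,v_1)$. The script below is a sketch outside the proof; the test perturbation, the perp convention and the quadrature parameters are our own choices:

```python
import numpy as np

rng = np.random.default_rng(0)
b = 1.0

def Rfun(t):  # a fixed smooth test perturbation (illustrative choice)
    return 0.05*np.cos(3*t) + 0.02*np.sin(t)

def z(t):  # boundary parametrization z(R)(t) = (b + R(t))(cos t, sin t)
    return (b + Rfun(t))*np.stack([np.cos(t), np.sin(t)])

# 1) kernel identity: |z(x)-z(y)|^2 = (2-2cos(x-y)) * (b^2 + b(R(x)+R(y)) + R(x)R(y) + J^2)
x = rng.uniform(0, 2*np.pi, 100)
y = rng.uniform(0, 2*np.pi, 100)
J = (Rfun(x) - Rfun(y))/(2*np.sin((x - y)/2))
lhs = np.sum((z(x) - z(y))**2, axis=0)
rhs = (2 - 2*np.cos(x - y))*(b**2 + b*(Rfun(x) + Rfun(y)) + Rfun(x)*Rfun(y) + J**2)
assert np.allclose(lhs, rhs)

# 2) circular case R == 0: u(0)(x0) = 2*pi*b*(cos x0, sin x0)
N = 100_000
yg = 2*np.pi*(np.arange(N) + 0.5)/N   # midpoint grid; x0 is not a node
x0 = 0.7
zp_perp = b*np.stack([-np.cos(yg), -np.sin(yg)])  # perp of z(0)'(y) = b(-sin y, cos y)
u0 = ((np.log(b*b*(2 - 2*np.cos(x0 - yg)))*zp_perp).sum(axis=1))*(2*np.pi/N)
assert np.allclose(u0, 2*np.pi*b*np.array([np.cos(x0), np.sin(x0)]), atol=1e-2)
```

The integrable log singularity at $y=x_0$ is handled simply by keeping $x_0$ off the quadrature nodes; the midpoint rule then converges fast enough for a sanity check.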
\begin{lemma}\label{Acondition} For $k\ge 2$, there exists $\epsilon=\epsilon(k,b) >0$ such that if $R\in C^{\infty}(\mathbb{T})$ and $\rVert R \rVert_{H^{3}(\mathbb{T})} \le\epsilon$, then \[ \rVert u(R) \rVert_{(H^{k}(\mathbb{T}))^2} \lesssim_{k,b} 1 + \rVert R \rVert_{H^{k+1}(\mathbb{T})}. \] \end{lemma} \begin{proof} For $u_L$, it follows from Lemma~\ref{T1estimate} and the fact that $R\mapsto u_L(R)$ is linear that \[ \rVert u_L(R)\rVert_{(H^{k}(\mathbb{T}))^2} \lesssim \rVert z(R)' \rVert_{(H^{k-1}(\mathbb{T}))^2} \lesssim 1 + \rVert R \rVert_{H^{k}(\mathbb{T})}. \] Now, let us consider the nonlinear part. We have \[ \rVert u_N(R) \rVert_{(H^{k}(\mathbb{T}))^2} \lesssim \rVert K(R) \rVert_{H^{k}(\mathbb{T}^2)}\rVert z(R)'\rVert_{(L^\infty(\mathbb{T}))^2} \lesssim 1+\rVert R \rVert_{H^{k+1}(\mathbb{T})}, \] where the last inequality follows from \eqref{Kestimate}. \end{proof} Now we turn to (B), (C) and (D) in Proposition~\ref{growth_velocity}. We will use the following notations: \begin{align} DK(R)[h] := \frac{d}{dt}K(R+th)|_{t=0} &= \underbrace{\partial_uF(R(x),R(y),J(R)(x,y))}_{=:\partial_u F(R)(x,y)}h(x) \nonumber\\ &\ + \underbrace{\partial_vF(R(x),R(y),J(R)(x,y))}_{=:\partial_vF(R)(x,y)}h(y) \nonumber\\ & \ + \underbrace{\partial_wF(R(x),R(y),J(R)(x,y))}_{=:\partial_wF(R)(x,y)}J(h)(x,y),\label{DK_def} \end{align} where $J(h):=\frac{h(x)-h(y)}{2\sin\left(\frac{x-y}{2} \right)}$ and \begin{align} Dz(R)'^\perp[h] :=\frac{d}{dt}z(R+th)'^\perp|_{t=0} = h'(x)(-\sin x,\cos x) -h(x)(\cos x, \sin x). \label{Dz_def} \end{align} With the above notations, we can write the derivative of the nonlinear term as \begin{align}\label{Du_Nestimate1} Du_N(R)[h] & = \int_{\mathbb{T}}K(R)(x,y)Dz(R)'^\perp[h](y)dy + \int_{\mathbb{T}}DK(R)[h](x,y)z(R)'^\perp(y)dy \nonumber\\ & =: Du_N^1(R)[h] + Du_N^2(R)[h].
\end{align} Again, Lemma~\ref{composition} and \eqref{f_norm1} yield that if $\rVert R \rVert_{H^3(\mathbb{T})}\le \epsilon$ for sufficiently small $\epsilon>0$, then for $l\ge 2$, \begin{align}\label{Kernal_norm1} \rVert \partial_uF(R) \rVert_{H^{l}(\mathbb{T}^2)},\rVert \partial_vF(R) \rVert_{H^{l}(\mathbb{T}^2)},\rVert \partial_wF(R) \rVert_{H^{l}(\mathbb{T}^2)}& \lesssim_l 1+\rVert R \rVert_{H^{l}(\mathbb{T})} + \rVert J(R) \rVert_{H^{l}(\mathbb{T}^2)}\nonumber \\ & \lesssim_l 1 + \rVert R \rVert_{H^{l+1}(\mathbb{T})}. \end{align} Also, if $R_1,R_2 \in C^\infty({\mathbb{T}})$ and $\rVert R_1\rVert_{H^{3}(\mathbb{T})},\rVert R_2\rVert_{H^{3}(\mathbb{T})} \le \epsilon$, then from Lemma~\ref{composition}, it follows that \begin{align}\label{Kernal_norm1_lip} \rVert \partial_uF(R_1) - \partial_uF(R_2) \rVert_{H^{l}(\mathbb{T}^2)},\rVert \partial_vF(R_1) - \partial_vF(R_2) \rVert_{H^{l}(\mathbb{T}^2)},\rVert \partial_wF(R_1) - \partial_wF(R_2) \rVert_{H^{l}(\mathbb{T}^2)} \lesssim_l \rVert R_1-R_2 \rVert_{H^{l+1}(\mathbb{T})}.
\end{align} For ${\partial_wF^\#(R)}(x) := \lim_{y\to x}\partial_wF(R)(x,y) = \partial_wF(R(x),R(x),R'(x))$, we have \begin{align} &\rVert {\partial_wF^\#(R)} \rVert_{H^{l}(\mathbb{T})}\lesssim_l 1 + \rVert R \rVert_{H^{l+1}(\mathbb{T})}, \label{Kernal_norm3}\\ & \rVert {\partial_wF^\#(R)} \rVert_{L^{\infty}(\mathbb{T})} \lesssim 1 + \rVert R \rVert_{H^2(\mathbb{T})} \lesssim 1.\label{Kernal_norm5}\\ &\rVert {\partial_wF^\#(R_1)}- {\partial_wF^\#(R_2)} \rVert_{H^{l}(\mathbb{T})}\lesssim_l \rVert R_1-R_2 \rVert_{H^{l+1}(\mathbb{T})}, \label{Kernal_norm3_lip}\\ & \rVert {\partial_wF^\#(R_1)}- {\partial_wF^\#(R_2)} \rVert_{L^{\infty}(\mathbb{T})} \lesssim \rVert R_1-R_2 \rVert_{H^2(\mathbb{T})}.\label{Kernal_norm5_lip} \end{align} We decompose $Du_N^2(R)[h]$ into \begin{align}Du_N^2(R)[h] &= \int_{\mathbb{T}}\underbrace{\partial_uF(R)(x,y)z(R)'^\perp}_{=:K_1(R)(x,y)} h(x)dy +\int_{\mathbb{T}}\underbrace{\partial_vF(R)(x,y)z(R)'^\perp}_{=:K_2(R)(x,y)} h(y)dy \nonumber\\ & \ +\int_{\mathbb{T}}\underbrace{\partial_wF(R)(x,y)z(R)'^\perp}_{=:K_3(R)(x,y)} J(h)(x,y)dy\label{def_K_3}\\ & =: Du_{N1}^2(R)[h]+Du_{N2}^2(R)[h]+Du_{N3}^2(R)[h]. 
\label{Du_N_decomp} \end{align} Then it follows from Lemma~\ref{ponce_kato}, \eqref{Kernal_norm1}, \eqref{Kernal_norm3} and \eqref{Kernal_norm5} that for $l\ge 2$, \begin{align} &\rVert K_i(R) \rVert_{H^{l}(\mathbb{T}^2)} \lesssim_l 1 + \rVert R \rVert_{H^{l+1}(\mathbb{T})}, \text{ for }i=1,2,3, \label{Kernal_norm2} \\ &\rVert {K_3}^\#(R) \rVert_{H^{l}(\mathbb{T})} \lesssim 1+\rVert R \rVert_{H^{l+1}(\mathbb{T})}, \text{ where }{K_3}^\#(R)(x):=K_3(R)(x,x),\label{Kernal_norm4}\\ &\rVert {K_3}^\#(R) \rVert_{L^\infty(\mathbb{T})} \lesssim \rVert {K_3}^\#(R) \rVert_{H^1(\mathbb{T})} \lesssim \rVert {K_3}^\#(R) \rVert_{H^2(\mathbb{T})} \lesssim 1+ \rVert R \rVert_{H^3({\mathbb{T}})} \lesssim 1,\label{Kernal_norm6}\\ &\rVert K_i(R_1)-K_i(R_2) \rVert_{H^{l}(\mathbb{T}^2)} \lesssim_l \rVert R_1-R_2 \rVert_{H^{l+1}(\mathbb{T})}, \text{ for }i=1,2,3, \label{Kernal_norm2_lip} \\ &\rVert {K_3}^\#(R_1)-{K_3}^\#(R_2) \rVert_{H^{l}(\mathbb{T})} \lesssim \rVert R_1-R_2 \rVert_{H^{l+1}(\mathbb{T})},\label{Kernal_norm4_lip}\\ &\rVert {K_3}^\#(R_1)-{K_3}^\#(R_2) \rVert_{L^\infty(\mathbb{T})} \lesssim \rVert R_1-R_2 \rVert_{H^2(\mathbb{T})}.\label{Kernal_norm6_lip} \end{align} Furthermore, it follows from the definitions of $K_3$ and $\partial_w F$ in \eqref{def_K_3}, \eqref{DK_def} and \eqref{F_smooth} that \begin{align}\label{K_3_nabla} \rVert \nabla K_3(R)(x,y) \rVert_{L^\infty(\mathbb{T}^2)} \lesssim 1 + \rVert R''\rVert_{L^\infty(\mathbb{T})} + \rVert \nabla J(R) \rVert_{L^\infty(\mathbb{T}^2)} \lesssim 1 + \rVert R'' \rVert_{L^\infty(\mathbb{T})} \lesssim 1.
\end{align} \begin{lemma}\label{BCcondition} For $k\ge 2$, there exists $\epsilon=\epsilon(k,b) >0$ such that if $R\in C^{\infty}(\mathbb{T})$ and $\rVert R \rVert_{H^{3}(\mathbb{T})} \le\epsilon$, the Gateaux derivative $Du(R):H^{k+1}(\mathbb{T}) \mapsto (H^{k+1}(\mathbb{T}))^2$ exists and \begin{align}\label{tame_1} \rVert Du(R)[h]\rVert_{(H^{k+1}(\mathbb{T}))^2} \lesssim_{k,b} \rVert h \rVert_{H^{k+1}(\mathbb{T})} + \rVert R \rVert_{H^{k+3}(\mathbb{T})}\rVert h \rVert_{H^1(\mathbb{T})}, \text{ for }h\in C^{\infty}(\mathbb{T}). \end{align} Furthermore, if $R_1,R_2\in C^\infty(\mathbb{T})$, $\rVert R_1 \rVert_{H^3(\mathbb{T})},\rVert R_2 \rVert_{H^3(\mathbb{T})}\le \epsilon$, and $\rVert R_1\rVert_{H^{k+3}(\mathbb{T})}, \rVert R_2 \rVert_{H^{k+3}(\mathbb{T})} \le 1$, then \begin{align}\label{tame2} \rVert (Du(R_1)-Du(R_2))[h]\rVert_{(H^{k+1}(\mathbb{T}))^2} \lesssim_{k,b} \rVert R_1 -R_2 \rVert_{H^2(\mathbb{T})}\rVert h \rVert_{H^{k+1}(\mathbb{T})} + \rVert R_1-R_2 \rVert_{H^{k+3}(\mathbb{T})}\rVert h \rVert_{H^1(\mathbb{T})}, \end{align} $ \text{ for }h\in C^{\infty}(\mathbb{T}).$ Lastly, for $\sigma\in \mathbb{N}\cup \left\{ 0 \right\}$, there exists a linear operator $T^\sigma(R)$ such that for $h\in C^\infty(\mathbb{T})$, \begin{align}\label{tame3} \left(\frac{d}{dx}\right)^{\sigma} \left( Du(R)[h]\right) = Du(R)[h^{(\sigma)}] + T^\sigma(R)[h], \end{align} where $T^\sigma(R)$ satisfies \begin{align}\label{tame4} \rVert T^\sigma(R)[h]\rVert_{(H^{k+1}(\mathbb{T}))^2} \lesssim_{k} (1+ \rVert R \rVert_{H^{k+4+\sigma}(\mathbb{T})})\rVert h\rVert_{L^2(\mathbb{T})} + (1+\rVert R \rVert_{H^4(\mathbb{T})})\rVert h^{(k+\sigma)} \rVert_{L^2(\mathbb{T})}. \end{align} \end{lemma} \begin{proof} We omit the proof of \eqref{tame2} since it can be proved in exactly the same way as \eqref{tame_1}, using \eqref{lip_estimate_F} in Lemma~\ref{composition}. In order to prove \eqref{tame_1}, we first deal with $Du_L$.
Since it is linear, we have \[ Du_L(R)[h](x) := \int_{\mathbb{T}}\log(2-2\cos(x-y)) Dz(R)'^\perp[h](y) dy. \] Hence, it follows from Lemma~\ref{T1estimate} that \begin{align}\label{Du_Lestimate} \rVert Du_L(R)[h]\rVert_{(H^{k+1}(\mathbb{T}))^2} \lesssim \rVert Dz(R)'^\perp[h]\rVert_{(H^{k}(\mathbb{T}))^2} \lesssim \rVert h \rVert_{H^{k+1}(\mathbb{T})}. \end{align} Now, we estimate $Du_N$. Recalling the decomposition in \eqref{Du_Nestimate1}, we will estimate $Du_N^1$ and $Du_N^2$ separately. For $Du_N^1$, it follows from \eqref{Kestimate} and \eqref{Dz_def} that \begin{align}\label{Du_n1estimate} \rVert Du_N^1(R)[h] \rVert_{(H^{k+1}(\mathbb{T}))^2} \lesssim \rVert K(R)\rVert_{H^{k+1}(\mathbb{T}^2)}\rVert Dz(R)'^\perp[h] \rVert_{(L^2(\mathbb{T}))^2} \lesssim (1+\rVert R\rVert_{H^{k+2}(\mathbb{T})})\rVert h \rVert_{H^1(\mathbb{T})}. \end{align} For $Du_N^2$, we recall the decomposition in \eqref{Du_N_decomp}, and use \eqref{Kernal_norm2} and Lemma~\ref{GN} to obtain \[ \rVert Du_{N1}^2(R)[h]\rVert_{(H^{k+1}(\mathbb{T}))^2},\rVert Du_{N2}^2(R)[h]\rVert_{(H^{k+1}(\mathbb{T}))^2} \lesssim \rVert h \rVert_{H^{k+1}(\mathbb{T})} + \rVert R \rVert_{H^{k+2}(\mathbb{T})}\rVert h \rVert_{L^\infty(\mathbb{T})}. \] For $Du_{N3}^2(R)[h]$, we use \eqref{Kernal_norm2}, \eqref{Kernal_norm4}, \eqref{Kernal_norm6}, \eqref{K_3_nabla} and Lemma~\ref{J_linear} to show that \[ \rVert Du_{N3}^2(R)[h]\rVert_{(H^{k+1}(\mathbb{T}))^2} \lesssim \rVert R \rVert_{H^{k+3}(\mathbb{T})}\rVert h\rVert_{H^1(\mathbb{T})} + \rVert h \rVert_{H^{k+1}(\mathbb{T})}. \] Therefore $\rVert Du_N^2(R)[h]\rVert_{(H^{k+1}(\mathbb{T}))^2} \lesssim \rVert h \rVert_{H^{k+1}(\mathbb{T})} +\rVert R \rVert_{H^{k+3}(\mathbb{T})}\rVert h \rVert_{H^{1}(\mathbb{T})}$. With \eqref{Du_Lestimate}, \eqref{Du_n1estimate} and \eqref{Du_Nestimate1}, the desired result follows. Now, we turn to \eqref{tame3} and \eqref{tame4}.
Recall from \eqref{f_def}, \eqref{Du_Nestimate1} and \eqref{Du_N_decomp} that \begin{align*} Du(R)[h] &= \int_{\mathbb{T}}\log(2-2\cos(y))h(x-y)dy + \int_{\mathbb{T}}K(R)(x,y)Dz(R)'^\perp[h](y)dy \\ & \ + \int_{\mathbb{T}}K_1(R)(x,y)z(R)'^\perp(y) h(x)dy + \int_{\mathbb{T}}K_2(R)(x,y)z(R)'^\perp(y) h(y)dy \\ & \ + \int_{\mathbb{T}}K_3(R)(x,y)z(R)'^\perp(y) J(h)(x,y)dy\\ & = I_1(R)[h]+I_2(R)[h]+I_3(R)[h]+I_4(R)[h]+I_5(R)[h], \end{align*} where $K,K_1,K_2,K_3$ are of the form $H(R(x),R(y),J(R)(x,y))$ for some smooth function $H:\mathbb{R}^3 \mapsto \mathbb{R}$. It suffices to show that for each $i=1,\ldots,5$ and $\sigma\in \mathbb{N}\cup \left\{ 0 \right\}$, \[ \left(\frac{d}{dx}\right)^{\sigma} I_i(R)[h] = I_i(R)[h^{(\sigma)}] + T_i^{\sigma}(R)[h], \] for some $T_i^\sigma(R)$ such that \begin{align}\label{tame5} \rVert T_i^\sigma(R)[h]\rVert_{(H^{k+1}(\mathbb{T}))^2} \lesssim (1+ \rVert R \rVert_{H^{k+4+\sigma}(\mathbb{T})})\rVert h\rVert_{L^2(\mathbb{T})} + (1+\rVert R \rVert_{H^4(\mathbb{T})})\rVert h^{(k+\sigma)} \rVert_{L^2(\mathbb{T})}. \end{align} We only deal with $I_5$ since the other terms can be treated in the same way. We also assume $\sigma\ge 1$ since the case $\sigma=0$ follows trivially. Let $\tilde{K}_3(R)(x,y):=K_3(R)(x,x-y)z(R)'^\perp(x-y)$. Then it follows from Lemma~\ref{ponce_kato}, \eqref{Kernal_norm2}, \eqref{Kernal_norm4} and \eqref{Kernal_norm6} that for $l\ge2$, \begin{align} &\rVert \tilde{K}_3(R) \rVert_{(H^{l}(\mathbb{T}^2))^2} \lesssim_l 1 + \rVert R \rVert_{H^{l+1}(\mathbb{T})}, \label{Kernal_norm12} \\ &\rVert {\tilde{K}_3}^\#(R) \rVert_{(H^{l}(\mathbb{T}))^2} \lesssim 1+\rVert R \rVert_{H^{l+1}(\mathbb{T})}, \text{ where }{\tilde{K}_3}^\#(R)(x):=\tilde{K}_3(R)(x,x).\label{Kernal_norm14}
\end{align} Using the change of variables, $y\mapsto x-y$, we have \begin{align} \left( \frac{d}{dx}\right)^{\sigma}I_5(R)[h] &= \int_{\mathbb{T}}\tilde{K}_3(R)(x,y)J(h^{(\sigma)})(x,x-y)dy + \sum_{p+q=\sigma,q\le \sigma-1}C_{p,q,\sigma}\int_{\mathbb{T}}(\partial_x)^{p}\tilde{K}_3(R)(x,y)J(h^{(q)})(x,x-y)dy,\nonumber\\ & =: I_5(R)[h^{(\sigma)}](x) + \sum_{p+q=\sigma,q\le \sigma-1}C_{p,q,\sigma}\underbrace{\int_{\mathbb{T}}(\partial_x)^{p}\tilde{K}_3(R)(x,x-y)J(h^{(q)})(x,y)dy}_{=:T_{5}^{\sigma,p,q}(R)[h](x)}\label{tame6}\\ & =: I_5(R)[h^{(\sigma)}](x) + T_{5}^{\sigma}(R)[h](x),\label{tame7} \end{align} where $C_{p,q,\sigma}$ is some constant and we used $\partial_{x}\left( J(h)(x,x-y)\right) = J(h')(x,x-y)$. It suffices to show that $T_5^{\sigma,p,q}(R)[h]$ satisfies \eqref{tame5}. Let $\tilde{K}_3^*(R)(x,y) := (\partial_x)^p\tilde{K}_3(R)(x,x-y)$. Then it follows from \eqref{Kernal_norm12} and \eqref{Kernal_norm14} that for $l\ge2$, \begin{align} &\rVert \tilde{K}^*_3(R) \rVert_{(H^{l}(\mathbb{T}^2))^2} \lesssim_l 1 + \rVert R \rVert_{H^{l+p+1}(\mathbb{T})}, \label{Kernal_norm22} \\ &\rVert ({\tilde{K}^*_3})^\#(R) \rVert_{(H^{l}(\mathbb{T}))^2} \lesssim 1+\rVert R \rVert_{H^{l+p+1}(\mathbb{T})}, \text{ where }({\tilde{K}^*_3})^\#(R)(x):=\tilde{K}^*_3(R)(x,x),\label{Kernal_norm24}\\ &\rVert ({\tilde{K}^*_3})^\#(R) \rVert_{(L^\infty(\mathbb{T}))^2} \lesssim \rVert \tilde{K}_3(R) \rVert_{(W^{p,\infty}(\mathbb{T}^2))^2} \lesssim \rVert \tilde{K}_3(R) \rVert_{(H^{p+2}(\mathbb{T}^2))^2}.\label{Kernal_norm26} \end{align} Furthermore, it follows similarly as \eqref{K_3_nabla} that \begin{align}\label{K_3_nabla1} \rVert \nabla \tilde{K}^*_3(R)\rVert_{L^\infty(\mathbb{T}^2)}\lesssim \rVert \nabla^{(p+1)}\tilde{K}_3(R)\rVert_{L^\infty(\mathbb{T}^2)} \lesssim 1+ \rVert R^{(p+2)} \rVert_{L^\infty(\mathbb{T})} \lesssim 1+ \rVert R \rVert_{H^{p+3}(\mathbb{T})}. 
\end{align} Hence it follows from Lemma~\ref{J_linear} that \begin{align*} \rVert T^{\sigma,p,q}_5(R)[h]\rVert_{H^{k+1}(\mathbb{T})} & \lesssim \left( \rVert \tilde{K}^*_3(R) \rVert_{(H^{k+2}(\mathbb{T}^2))^2} + \rVert ({\tilde{K}^*_3})^\#(R) \rVert_{(H^{k+1}(\mathbb{T}^2))^2} \right)\rVert h^{(q)}\rVert_{H^1(\mathbb{T})} \\ & + \left(\rVert ({\tilde{K}^*_3})^\#(R) \rVert_{(L^{\infty}(\mathbb{T}^2))^2} + \rVert \nabla\tilde{K}^*_3(R) \rVert_{(L^{\infty}(\mathbb{T}^2))^2}\right)\rVert h^{(q)}\rVert_{H^{k+1}(\mathbb{T})} \\ & \lesssim (1+\rVert R \rVert_{H^{k+3+p}(\mathbb{T})})\rVert h^{(q)}\rVert_{H^1(\mathbb{T})} + (1+\rVert R \rVert_{H^{p+3}(\mathbb{T})})\rVert h^{(q)}\rVert_{H^{k+1}(\mathbb{T})}\\ & \lesssim (1+\rVert R \rVert_{H^{k+3+p}(\mathbb{T})})\rVert h\rVert_{H^{1+q}(\mathbb{T})} + (1+\rVert R \rVert_{H^{p+3}(\mathbb{T})})\rVert h\rVert_{H^{k+1+q}(\mathbb{T})} \end{align*} where the second last inequality follows from \eqref{K_3_nabla1}. Now, recalling that $\rVert R \rVert_{H^3(\mathbb{T})}\le \epsilon$ and plugging the following inequalities into the above estimate, \[ \rVert R \rVert_{H^{l}(\mathbb{T})} \lesssim \rVert R \rVert_{L^2(\mathbb{T})} + \rVert R^{(l)}\rVert_{L^2(\mathbb{T})}, \quad \rVert h \rVert_{H^{l}(\mathbb{T})} \lesssim \rVert h \rVert_{L^2(\mathbb{T})} + \rVert h^{(l)}\rVert_{L^2(\mathbb{T})}, \text{ for }l\ge 0, \] we obtain \begin{align}\label{T5sigma} \rVert T^{\sigma,p,q}_5(R)[h]\rVert_{H^{k+1}(\mathbb{T})} & \lesssim (1 + \rVert R^{(k+p+3)}\rVert_{L^2(\mathbb{T})})\left( \rVert h \rVert_{L^2(\mathbb{T})} + \rVert h^{(1+q)} \rVert_{L^2(\mathbb{T})} \right) \nonumber\\ & \ + (1 + \rVert R^{(p+3)} \rVert_{L^2(\mathbb{T})})(\rVert h \rVert_{L^2(\mathbb{T})}+\rVert h^{(k+1+q)} \rVert_{L^2(\mathbb{T})})\nonumber\\ & =: L_1 + L_2. 
\end{align} For $L_1$, note that $k+p+3 \le k+3 + \sigma$ and $q\le \sigma-1$, hence we have \begin{align}\label{L1estimate1} L_1 \lesssim (1 + \rVert R^{(k+\sigma+3)}\rVert_{L^2(\mathbb{T})})\rVert h \rVert_{L^2(\mathbb{T})} + \rVert h^{(\sigma)}\rVert_{L^2(\mathbb{T})} + \rVert R^{(k+p+3)}\rVert_{L^2(\mathbb{T})} \rVert h^{(1+q)} \rVert_{L^2(\mathbb{T})}. \end{align} Using the interpolation inequality in Lemma~\ref{GNinterpolation}, we have \begin{align*} &\rVert R^{(k+p+3)}\rVert_{L^2(\mathbb{T})} = \rVert (R^{(4)})^{(k+p-1)}\rVert_{L^2(\mathbb{T})} \lesssim \rVert R^{(4)}\rVert_{L^2(\mathbb{T})}^{1-\frac{k+p-1}{k+\sigma}}\rVert (R^{(4)})^{(k + \sigma)} \rVert_{L^2({\mathbb{T})}}^{\frac{k+p-1}{k+\sigma}} \\ & \rVert h^{(1+q)}\rVert_{L^2(\mathbb{T})} \lesssim \rVert h \rVert_{L^2(\mathbb{T})}^{1-\frac{1+q}{k+\sigma}}\rVert h^{(k+\sigma)}\rVert_{L^2(\mathbb{T})}^{\frac{1+q}{k+\sigma}}. \end{align*} Thus, we have \begin{align*} \rVert R^{(k+p+3)}\rVert_{L^2(\mathbb{T})} \rVert h^{(1+q)} \rVert_{L^2(\mathbb{T})} & \lesssim\left(\rVert R^{(4)}\rVert_{L^2(\mathbb{T})}\rVert h^{(k+\sigma)}\rVert_{L^2(\mathbb{T})} \right)^{\frac{1+q}{k+\sigma}} \left( \rVert (R^{(4)})^{(k + \sigma)} \rVert_{L^2({\mathbb{T})}}\rVert h \rVert_{L^2(\mathbb{T})} \right)^{\frac{k+p-1}{k+\sigma}} \\ & \lesssim \rVert R^{(4)}\rVert_{L^2(\mathbb{T})}\rVert h^{(k+\sigma)}\rVert_{L^2(\mathbb{T})} + \rVert R^{(k+4+\sigma)}\rVert_{L^2({\mathbb{T})}}\rVert h \rVert_{L^2(\mathbb{T})}, \end{align*} where we used $p+q = \sigma$ and Young's inequality. 
Plugging this inequality into \eqref{L1estimate1} and using $\rVert h^{(\sigma)}\rVert_{L^2(\mathbb{T})} \lesssim \rVert h \rVert_{L^{2}(\mathbb{T})} + \rVert h^{(k+\sigma)} \rVert_{L^{2}(\mathbb{T})}$ to estimate the second term on the right-hand side in \eqref{L1estimate1}, we obtain \[ L_1 \lesssim (1+ \rVert R \rVert_{H^{k+4+\sigma}(\mathbb{T})})\rVert h\rVert_{L^2(\mathbb{T})} + (1+\rVert R \rVert_{H^4(\mathbb{T})})\rVert h^{(k+\sigma)} \rVert_{L^2(\mathbb{T})}. \] Similarly, we can obtain \[ L_2 \lesssim (1+ \rVert R \rVert_{H^{k+4+\sigma}(\mathbb{T})})\rVert h\rVert_{L^2(\mathbb{T})} + (1+\rVert R \rVert_{H^4(\mathbb{T})})\rVert h^{(k+\sigma)} \rVert_{L^2(\mathbb{T})}. \] Recalling \eqref{T5sigma}, \eqref{tame6} and \eqref{tame7}, we obtain \eqref{tame5}. This proves \eqref{tame4} and finishes the proof. \end{proof} \begin{proofprop}{growth_velocity} If $i=j$, then the results follow immediately from Lemmas~\ref{Acondition} and~\ref{BCcondition}. For $i\ne j$, the same results follow straightforwardly since there is no singularity in the integrands. \end{proofprop} In the next proposition, we estimate the derivative of the velocity with respect to the parameter $b_1$ in view of \eqref{derivative_matrix_3}. \begin{prop}\label{b_derivative} Let $R$, $z_i(R)$, $u_{i,j}$, be as in \eqref{R_2}, \eqref{z_2} and \eqref{u_def}.
For $k\ge 2$, there exists $\epsilon=\epsilon(k,b_1,b_2) >0$ such that if $\rVert R \rVert_{(H^{3}(\mathbb{T}))^2} \le \epsilon$, then \begin{align} \rVert \partial_{b_1} u_{i,j}(R) \rVert_{(H^{k}(\mathbb{T}))^2} \lesssim 1 +\rVert R \rVert_{(H^{k+1}(\mathbb{T}))^2}.\label{growth_b10} \end{align} Also, if $\rVert R \rVert_{(H^{3}(\mathbb{T}))^2},\rVert r \rVert_{(H^{3}(\mathbb{T}))^2} \le \epsilon$ and $\rVert R\rVert_{(H^{k+1}(\mathbb{T}))^2}, \rVert r \rVert_{(H^{k+1}(\mathbb{T}))^2} \le 1$, then \begin{align}\label{lip_b10} \rVert \partial_{b_1} u_{i,j}(R)-\partial_{b_1} u_{i,j}(r) \rVert_{(H^{k}(\mathbb{T}))^2} \lesssim \rVert R - r \rVert_{(H^{k+1}(\mathbb{T}))^2}. \end{align} \end{prop} If $i,j\ne 1$, then \eqref{growth_b10} and \eqref{lip_b10} follow trivially, since $u_{2,2}$ is independent of $b_1$. As in Proposition~\ref{growth_velocity}, we only deal with the case where $i=j=1$. \begin{lemma}\label{b_derivative1} Let $R\in C^\infty(\mathbb{T})$ and let $z(R)$ and $u(R)$ be as in \eqref{single1} and \eqref{single2}. For $k\ge 2$, there exists $\epsilon=\epsilon(k,b) >0$ such that if $\rVert R \rVert_{H^{3}(\mathbb{T})} \le \epsilon$, then \begin{align} \rVert \partial_{b} u(R) \rVert_{(H^{k}(\mathbb{T}))^2} \lesssim 1 +\rVert R \rVert_{H^{k+1}(\mathbb{T})}.\label{growth_b1} \end{align} Also, if $\rVert R \rVert_{H^{3}(\mathbb{T})},\rVert r \rVert_{H^{3}(\mathbb{T})} \le \epsilon$ and $\rVert R\rVert_{H^{k+1}(\mathbb{T})}, \rVert r \rVert_{H^{k+1}(\mathbb{T})} \le 1$, then \begin{align}\label{lip_b1} \rVert \partial_{b} u(R)-\partial_{b} u(r) \rVert_{(H^{k}(\mathbb{T}))^2} \lesssim \rVert R - r \rVert_{H^{k+1}(\mathbb{T})}.
\end{align} \end{lemma} \begin{proof} From \eqref{f_def}, we have \begin{align*} \partial_bu(R)(x) & = \partial_bu_L(R)(x) + \partial_bu_N(R)(x)\\ & = \int_{\mathbb{T}}\log(2-2\cos(x-y))(-\cos y,-\sin y)dy + \int_{\mathbb{T}}K_4(R)(x,y)z(R)'^\perp(y) dy \\ & \ + \int_{\mathbb{T}}K_5(R)(x,y)(-\cos y,-\sin y)dy\\ & =: B_1 + B_2(R) + B_3(R), \end{align*} where \begin{align*} &K_4(R)(x,y) := G(R(x),R(y),J(R)(x,y)),\\ &K_5(R)(x,y) := F(R(x),R(y),J(R)(x,y)),\\ &G(u,v,w) =\partial_bF(u,v,w)= \frac{2b+u+v}{b^2+b(u+v)+uv+w^2},\\ &F(u,v,w) =\log(b^2+b(u+v)+uv+w^2). \end{align*} Again, using Lemma~\ref{composition}, we have that if $\rVert R\rVert_{H^{3}(\mathbb{T})},\rVert r \rVert_{H^{3}(\mathbb{T})}\le \epsilon$ with $\epsilon>0$ sufficiently small, then \begin{align*} &\rVert K_4(R) \rVert_{H^{k}(\mathbb{T}^2)}, \rVert K_5(R) \rVert_{H^{k}(\mathbb{T}^2)} \lesssim 1 + \rVert R\rVert_{H^{k+1}(\mathbb{T})},\\ &\rVert K_4(R) - K_4(r) \rVert_{H^{k}(\mathbb{T}^2)}, \rVert K_5(R) - K_5(r) \rVert_{H^{k}(\mathbb{T}^2)} \lesssim (1+\rVert R \rVert_{H^{k+1}(\mathbb{T})}+\rVert r\rVert_{H^{k+1}(\mathbb{T})})\rVert R-r \rVert_{H^{k+1}(\mathbb{T})}. \end{align*} Then \eqref{growth_b1} and \eqref{lip_b1} follow immediately from Lemma~\ref{GN}. \end{proof} \begin{proofprop}{b_derivative} The case $i=j=1$ follows immediately from Lemma~\ref{b_derivative1}. If $i\ne j$, then the result follows straightforwardly in a similar manner. \end{proofprop} Regarding \eqref{D2G}, \eqref{dtDG} and \eqref{dttDG} (note that since $b_3$ in \eqref{def_b3} depends on $\Theta_3$, we need to estimate the second derivative of $G$ with respect to $b_3$), we need to estimate the higher derivatives of $u_{i,j}$. \begin{prop}\label{higer_derivative_estimates} Let $R$, $z_i(R)$, $u_{i,j}$, be as in \eqref{R_2}, \eqref{z_2} and \eqref{u_def}.
For $k\ge 2$, there exists $\epsilon=\epsilon(k,b_1,b_2) >0$ such that if $\rVert R \rVert_{(H^{3}(\mathbb{T}))^2} \le \epsilon$, then \begin{align} &\rVert D^2u_{i,j}(R)[h,h] \rVert_{(H^{k}(\mathbb{T}))^2} \lesssim_k (1+ \rVert R \rVert_{(H^{k+2}(\mathbb{T}))^2})\rVert h \rVert_{(H^{k}(\mathbb{T}))^2}^2,\label{second_derivative_1}\\ &\rVert \partial_{b_3}D^2u_{i,j}(R)[h,h] \rVert_{(H^{k}(\mathbb{T}))^2} \lesssim_k (1+ \rVert R \rVert_{(H^{k+2}(\mathbb{T}))^2})\rVert h \rVert_{(H^{k}(\mathbb{T}))^2}^2,\nonumber\\ &\rVert \partial_{b_1}Du_{i,j}(R)[h] \rVert_{(H^{k}(\mathbb{T}))^2} \lesssim_k (1+ \rVert R \rVert_{(H^{k+2}(\mathbb{T}))^2})\rVert h \rVert_{(H^{k}(\mathbb{T}))^2},\nonumber\\ &\rVert \partial_{b_{1}b_1}Du_{i,j}(R)[h] \rVert_{(H^{k}(\mathbb{T}))^2} \lesssim_k (1+ \rVert R \rVert_{(H^{k+2}(\mathbb{T}))^2})\rVert h \rVert_{(H^{k}(\mathbb{T}))^2}.\nonumber \end{align} \end{prop} \begin{proof} We also consider the $i=j=1$ case only and briefly sketch the idea for \eqref{second_derivative_1}, since the proof is almost identical to Lemma~\ref{BCcondition} and Lemma~\ref{b_derivative1}. Adapting the setting in \eqref{single1} and \eqref{single2}, the Gateaux second derivative of $u$ can be written as a linear combination of the integral operators of the form \[ \int_{\mathbb{T}}K(R)(x,y)L(x,y)dy, \] where $K(R)(x,y)=\tilde{K}(R(x),R(y),J(R)(x,y))$ for some smooth function $\tilde{K}:\mathbb{R}^3 \mapsto \mathbb{R}^2$, and $L(x,y)$ is a product of two of $h(x)$, $h(y)$, $h'(y)$ and $J(h)(x,y)$. As before, $K$ satisfies the estimates in \eqref{Kernal_norm2}, \eqref{Kernal_norm4} and \eqref{Kernal_norm6}. Let us consider $L(x,y) = J(h)(x,y)^2$ only, that is, \[ T(R)[h,h]:=\int_{\mathbb{T}}K(R)(x,y)J(h)(x,y)J(h)(x,y)dy. 
\] Therefore, it follows from Lemma~\ref{twojs} that \begin{align*} \rVert T(R)[h,h]\rVert_{H^{k}(\mathbb{T})} \lesssim \rVert K \rVert_{H^{k+1}(\mathbb{T}^2)}\rVert h \rVert_{H^{k}(\mathbb{T})}^2 \lesssim (1+\rVert R \rVert_{H^{k+2}(\mathbb{T})})\rVert h \rVert_{H^{k}(\mathbb{T})}^2, \end{align*} where the last inequality follows from the fact that $K$ satisfies \eqref{Kernal_norm2}. As mentioned, the other terms can be estimated in a similar manner and we omit the proofs. \end{proof} \subsection{Checking the hypotheses in Theorem~\ref{theorem1}}\label{checking_subsection} In this subsection, we will show that the hypotheses in Theorem~\ref{theorem1} are satisfied. We will always assume that $2\le k\in \mathbb{N}$ is fixed. We denote $R := (R_1,R_2,R_3) \in \left(X^{k+1}(\mathbb{T})\cap C^\infty(\mathbb{T})\right)^3$. Throughout the rest of the paper, we assume that \begin{align}\label{sizeassumption} |\Theta_3^*-\Theta_3| + \| R \|_{(X^{3})^3} \le \epsilon, \end{align} for some sufficiently small $\epsilon>0$. Note that from \eqref{parameter_3}, \eqref{def_b3} and Lemma~\ref{kernel}, it follows that \begin{align}\label{positive_b3} 0 < b_3(\Theta_3,R)<b_2, \end{align} for sufficiently small $\epsilon$ in \eqref{sizeassumption}.
We recall that our functional from \eqref{stationary_equation} and \eqref{stationarR_equation4} can be written as (since $b_1,b_2,\Theta_1,\Theta_2$ are fixed, we omit their dependence but mark the dependence on $\Theta_3$ in the notation) \begin{align}\label{def_G2} G(\Theta_3,R) = \colvec{G_1(\Theta_3,R) \\ G_2(\Theta_3,R) \\ G_3(\Theta_3,R)} =\frac{1}{4\pi}\colvec{\sum_{i=1}^3 \Theta_i\left( R_1'u^{\theta}_{i,1} -(b_1+R_1)u^r_{i,1}\right)\\\sum_{i=1}^3 \Theta_i\left(R_2'u^{\theta}_{i,2} -(b_2+R_2)u^r_{i,2}\right)\\ \sum_{i=1}^3\Theta_i\left(R_3' u^{\theta}_{i,3} -(b_3+R_3)u^r_{i,3} \right)} =: \colvec{R_1'u^{\theta}_1 - (b_1+R_1)u^{r}_1 \\ R_2'u^{\theta}_2 - (b_2+R_2)u^{r}_2 \\ R_3'u^{\theta}_3 - (b_3+R_3)u^{r}_3}, \end{align} where for $i,j,k \in \left\{ 1,2,3 \right\}$, \begin{align*} &u^{\theta}_k :=u^\theta_k(\Theta_3,R) = \frac{1}{4\pi}\sum_{i=1}^3 \Theta_iu^{\theta}_{i,k}, \quad u^{r}_k :=u^r_k(\Theta_3,R) = \frac{1}{4\pi}\sum_{i=1}^3 \Theta_iu^{r}_{i,k}, \\ &u^{\theta}_{i,j} = u^{\theta}_{i,j}(b_3(\Theta_3),\Theta_3,R) =u_{i,j}(b_3(\Theta_3),\Theta_3,R) \cdot \colvec{-\sin x \\ \cos x},\\ &u^{r}_{i,j} = u^{r}_{i,j}(b_3(\Theta_3),\Theta_3,R) =u_{i,j}(b_3(\Theta_3),\Theta_3,R) \cdot \colvec{\cos x \\ \sin x}, \end{align*} and $u_{i,j}(b_3(\Theta_3),\Theta_3,R)$ is given by \eqref{z_2} and \eqref{u_def} and $b_3(\Theta_3,R)$ is given by \eqref{def_b3}.
We also denote by $\omega(\Theta_3,R), \Psi(\Theta_3,R):\mathbb{R}^2\mapsto \mathbb{R}$ the corresponding vorticity and its stream function, more precisely, \begin{align}&\omega(\Theta_3,R)(p):= \sum_{i=1}^3\Theta_i 1_{D_i}(p), \quad \Psi(\Theta_3,R)(p):=\frac{1}{2\pi}\left(\omega(\Theta_3,R)*\log|\cdot|\right)(p), \quad p\in \mathbb{R}^2,\label{vorticity_def} \\ &D_i := \left\{ r(\cos x,\sin x)\in \mathbb{R}^2 : r \le b_i+R_i(x), \ x\in \mathbb{T}\right\}.\label{patch_notation}\end{align} With the stream function, we can write $G$ as \begin{align}\label{def_G_stream} {G(\Theta_3,R)(x)} = -\colvec{\partial_x\left(\Psi(\Theta_3,R)(z_1(R)(x))\right) \\ \partial_x\left(\Psi(\Theta_3,R)(z_2(R)(x))\right) \\ \partial_x\left(\Psi(\Theta_3,R)(z_3(R)(x)) \right)}. \end{align} The derivative of the functional will be given as \begin{align}\label{derivative_matrix_3} DG(\Theta_3,R)[h] & = \colvec{u_1^{\theta} & 0 & 0 \\ 0 & u_2^\theta & 0 \\ 0 & 0 & u_3^\theta}\colvec{h_1' \\ h_2' \\ h_3'} \nonumber\\ & \ + \colvec{-h_1u_1^r(\Theta_3,R) + R_1'Du_1^\theta(\Theta_3,R)[h] - (b_1+R_1)Du_1^r(\Theta_3,R)[h]\\ -h_2u_2^r(\Theta_3,R) + R_2'Du_2^\theta(\Theta_3,R)[h] - (b_2+R_2)Du_2^r(\Theta_3,R)[h] \\ -h_3u_3^r(\Theta_3,R) + R_3'Du_3^\theta(\Theta_3,R)[h] - (b_3+R_3)Du_3^r(\Theta_3,R)[h]}\nonumber\\ & \ + Db_3(\Theta_3,R)[h]\colvec{R_1'\partial_{b_3}u^{\theta}_1 - (b_1+R_1)\partial_{b_3}u^{r}_1 \\ R_2'\partial_{b_3}u^{\theta}_2 - (b_2+R_2)\partial_{b_3}u^{r}_2 \\ R_3'\partial_{b_3}u^{\theta}_3 - (b_3+R_3)\partial_{b_3}u^{r}_3 - u^r_3}, \end{align} where $Db_3(\Theta_3,R)[h]$ is given in \eqref{b_der_R}. We define the projection $P_0:C^{\infty}(\mathbb{T})\mapsto C^{\infty}(\mathbb{T})$ as \[ P_0 f = f -\frac{1}{2\pi}\int_{\mathbb{T}}f(x)dx, \] and the linear maps, \begin{align}\label{decomp_11} a(\Theta_3,R)[h]:=\colvec{P_0\left( h_1' u^\theta_1 (\Theta_3,R) \right) \\ 0 \\ 0}, \quad A(\Theta_3,R)[h] := DG(\Theta_3,R)[h] - a(\Theta_3,R)[h]. 
\end{align} Now, we start checking the hypotheses. Hypothesis (a) immediately follows from \eqref{trivial_one}, \eqref{stationarR_equation3} and \eqref{stationarR_equation4}. \subsubsection{Regularity of the functional $G$} In this subsection, we use the estimates obtained in Subsection~\ref{velocity_estimates} to prove that the functional $G$ defined in \eqref{def_G2} satisfies the following proposition: \begin{prop}\label{regularity_checking} Let $2\le k\in \mathbb{N}$ and let $\Theta_3$ and $R$ satisfy \eqref{sizeassumption}. Then, $G:\mathbb{R} \times (X^{k+1})^{3} \mapsto (Y^k)^3$ is well-defined and \eqref{lineargrowth1}-\eqref{dttDG} hold. \end{prop} \begin{proof} Note that $G_j$ is the tangential derivative of the stream function on the boundary of the corresponding patch $D_j$ (see \eqref{def_G_stream}). Thus, $P_0G_j(\Theta_3,R) = 0$, that is, each $G_j$ has zero mean. If the boundary of each patch $D_i$ is given by $R_i\in X^{k+1}$, then $\omega(\Theta_3,R)$ is invariant under rotation by $\frac{2\pi}{m}$ about the origin and under reflection; therefore, $G_j$ is $\frac{2\pi}{m}$-periodic and odd. Furthermore, it follows from (A) in Proposition~\ref{growth_velocity} and Lemma~\ref{ponce_kato} that \begin{align*} \rVert G(\Theta_3,R)\rVert_{(H^{k}(\mathbb{T}))^3} &\lesssim \sum_{i,j=1,2,3} \left( \bigg\rVert u_{i,j}(\Theta_3,R)\cdot \colvec{-\sin x \\ \cos x}R_j'\bigg\rVert_{H^{k}(\mathbb{T})} + \bigg\rVert u_{i,j}(\Theta_3,R)\cdot \colvec{\cos x \\ \sin x}(b_j+R_j)\bigg\rVert_{H^{k}(\mathbb{T})}\right)\\ & \lesssim 1+ \rVert R \rVert_{(H^{k+1}(\mathbb{T}))^3}, \end{align*} which proves \eqref{lineargrowth1} and that $G:\mathbb{R} \times (X^{k+1})^{3} \mapsto (Y^k)^3$ is well-defined. The estimates \eqref{lineargrowth2}-\eqref{dttDG} follow straightforwardly from (A) and (B) in Proposition~\ref{growth_velocity} and Propositions~\ref{b_derivative} and~\ref{higer_derivative_estimates}. 
\end{proof} \subsubsection{The Dirichlet-Neumann operator} Here, we aim to prove that the linear operator $a(\Theta_3,R)$ in \eqref{decomp_11} satisfies \eqref{approxinverse1}. Thanks to Lemma~\ref{zero mean}, we know that at a stationary solution the corresponding velocity must vanish outside the support of the vorticity. In the following proposition, we make this quantitative, using the main idea of \cite{Craig-Schanz-Sulem:modulational-regime-3d-water-waves,Castro-Cordoba-Fefferman-Gancedo-LopezFernandez:rayleigh-taylor-breakdown}. \begin{prop}\label{a_estimate11} There exists $\eta>0$ such that if $\rVert R_1 \rVert_{H^{k+2}(\mathbb{T})} \le \eta$, then \[ \| u^\theta_1 (\Theta_3,R) \|_{H^{k}(\mathbb{T})} \lesssim_{k} \|G_1(\Theta_3,R)\|_{H^{k}(\mathbb{T})}. \] \end{prop} \begin{proof} We adopt the notation of \eqref{vorticity_def}, \eqref{patch_notation} and \eqref{def_G_stream} for $\omega$, $\Psi$ and $D_i$. Clearly, we have (omitting the dependence on $\Theta_3$ and $R$ for simplicity) \begin{align}\label{uandphi} u^\theta_1(\alpha) = \nabla^{\perp}\Psi(z_1(\alpha))\cdot \colvec{-\sin \alpha \\ \cos \alpha}, \quad G_1(\alpha) = \nabla \Psi(z_1(\alpha))\cdot z_1'(\alpha), \quad \alpha\in \mathbb{T}, \end{align} where $z_1$ is as given in \eqref{z_2}. Note that $\Psi$ is harmonic in $D_1^c$. In addition, it follows from \eqref{Theta b relation} that $\int_{\mathbb{R}^2}\omega(y) dy = 0$, hence, for $x\in \mathbb{R}^2$, \begin{align*} &|\Psi(x) | \lesssim \bigg| \int_{\mathbb{R}^2} \omega(y) \log\frac{|x-y|}{|x|}dy \bigg| = O\left(\frac{1}{|x|}\right), \\ &| \nabla \Psi (x) | \lesssim \bigg| \int_{\mathbb{R}^2} \omega(y) \left( \frac{(x-y)}{|x-y|^2}- \frac{x}{|x|^2} \right) dy\bigg| = O\left(\frac{1}{|x|^2}\right). 
\end{align*} With the above decay rates, we can use integration by parts to obtain, for $x\in \partial D_1$, \begin{align} 0 & = \int_{D_1^c} \log|x-y| \Delta \Psi(y)dy \nonumber\\ & = -\frac{1}{2}\int_{\partial D_1} \log|x-y|^2 \nabla \Psi(y)\cdot \vec{n}(y)d\sigma(y) +\int_{\partial D_1}\nabla_y(\log|x-y|)\cdot\vec{n}(y)(\Psi(y)-\Psi(x))d\sigma(y)\nonumber\\ & =:- \frac{1}{2}L_1 + L_2,\label{equation11} \end{align} where $\vec{n}$ denotes the outer normal vector on $\partial D_1$. By the change of variables $x= z_1(\alpha)$ and $y= z_1(\beta)$, we obtain \begin{align*} L_1 &=\int_\mathbb{T} \log |z_1(\alpha)-z_1(\beta)|^2 \nabla \Psi(z_1(\beta))\cdot (-z'_1(\beta)^\perp)d\beta \\ & = \int_{\mathbb{T}} \log (2-2\cos(\alpha-\beta)) \nabla\Psi(z_1(\beta))\cdot (-z'_1(\beta)^\perp)d\beta \\ & \ + \int_\mathbb{T} \log\left(\frac{|z_1(\alpha)-z_1(\beta)|^2}{2-2\cos(\alpha-\beta)} \right) \nabla \Psi(z_1(\beta))\cdot (-z'_1(\beta)^\perp)d\beta\\ & =: L_{11} + L_{12}. \end{align*} Similarly, we have \begin{align*} L_2 = \int_\mathbb{T} \frac{(z_1(\alpha)-z_1(\beta))\cdot z_1'(\beta)^\perp}{|z_1(\alpha)-z_1(\beta)|^2}\left(\Psi(z_1(\beta)) - \Psi(z_1(\alpha))\right) d\beta. \end{align*} Hence, we obtain \begin{align}\label{lidentity} L_{11} = -L_{12} + 2 L_2. \end{align} We claim that \begin{align} &\| L_{11} \|_{H^{k+1}(\mathbb{T})} \gtrsim \| \nabla\Psi(z_1(\cdot))\cdot (-z'_1(\cdot)^\perp)\|_{H^{k}(\mathbb{T})} \label{Lestimate1} \\ & \| L_{12} \|_{H^{k+1}(\mathbb{T})} \lesssim \| R_1 \|_{H^{k+2}(\mathbb{T})} \| \nabla\Psi(z_1(\cdot))\cdot (-z'_1(\cdot)^\perp)\|_{L^2(\mathbb{T})} \label{Lestimate2} \\ & \|L_2\|_{H^{k+1}(\mathbb{T})} \lesssim \| \nabla\Psi(z_1(\cdot))\cdot (z'_1(\cdot))\|_{H^k(\mathbb{T})}. \label{Lestimate3} \end{align} Let us assume the above claims for a moment. 
Then it follows from the claims, \eqref{sizeassumption} and \eqref{lidentity} that for sufficiently small $\eta>0$ in the hypothesis of the proposition, we have \[ \| \nabla\Psi(z_1(\cdot))\cdot (-z'_1(\cdot)^\perp)\|_{H^k(\mathbb{T})} \lesssim \| \nabla\Psi(z_1(\cdot))\cdot (z'_1(\cdot))\|_{H^k(\mathbb{T})}. \] With this inequality, we can obtain \begin{align*} \| \nabla \Psi(z_1(\cdot)) \|_{H^{k}(\mathbb{T})} & \lesssim \bigg\rVert \nabla \Psi(z_1(\cdot))\cdot \frac{z'_1(\cdot)^\perp}{|z_1'(\cdot)|} \bigg\rVert_{H^{k}(\mathbb{T})} + \bigg\rVert \nabla \Psi(z_1(\cdot))\cdot \frac{z'_1(\cdot)}{|z_1'(\cdot)|} \bigg\rVert_{H^{k}(\mathbb{T})}\\ & \lesssim \rVert z_1'\rVert_{H^{k}(\mathbb{T})}\left( \| \nabla \Psi(z_1(\cdot))\cdot {z'_1(\cdot)^\perp} \|_{H^{k}(\mathbb{T})} + \| \nabla \Psi(z_1(\cdot))\cdot {z'_1(\cdot)} \|_{H^{k}(\mathbb{T})}\right) \\ & \lesssim \| \nabla \Psi(z_1(\cdot))\cdot {z'_1(\cdot)} \|_{H^{k}(\mathbb{T})}, \end{align*} where the first inequality follows from $\frac{1}{c} < |z_1'| < c$ for some $c>0$, and the last inequality follows from $\rVert z_1' \rVert_{H^{k}(\mathbb{T})}\lesssim 1+\rVert R_1 \rVert_{H^{k+1}(\mathbb{T})}\lesssim 1$ under the assumption~\eqref{sizeassumption}. Finally, recalling \eqref{uandphi}, we obtain the desired result. To finish the proof, we need to prove the claims \eqref{Lestimate1}-\eqref{Lestimate3}. \eqref{Lestimate1} follows from Lemma~\ref{T1estimate}. To see \eqref{Lestimate2}, we observe that \begin{align*} \log\left(\frac{|z_1(\alpha)-z_1(\beta)|^2}{2-2\cos(\alpha-\beta)} \right) = \log(1+\mathcal{K}(\alpha,\beta)), \end{align*} where $\mathcal{K}(\alpha,\beta):={2(R_1(\alpha) + R_1(\beta))} + R_1(\alpha)R_1(\beta) + \left( \frac{R_1(\alpha)-R_1(\beta)}{2\sin(\frac{\alpha-\beta}{2})} \right)^2$. 
Thus, it is straightforward that (thanks to \eqref{sizeassumption}, $1+\mathcal{K} \ge c$ for some $c>0$) \[ \| \log(1+\mathcal{K}) \|_{H^{k+1}(\mathbb{T}^2)} \lesssim \| \mathcal{K} \|_{H^{k+1}(\mathbb{T}^2)} \lesssim \| R_1 \|_{H^{k+2}(\mathbb{T})}, \] where the last inequality follows from Lemma~\ref{ponce_kato} and Lemma~\ref{appendix_lem_1}. Thus, \begin{align*} \| L_{12} \|_{H^{k+1}(\mathbb{T})} &\lesssim \bigg\rVert \log\left(\frac{|z_1(\alpha)-z_1(\beta)|^2}{2-2\cos(\alpha-\beta)} \right) \bigg\rVert_{H^{k+1}(\mathbb{T}^2)} \| \nabla\Psi(z_1(\cdot))\cdot (-z'_1(\cdot)^\perp)\|_{L^2(\mathbb{T})} \\ & \lesssim \| R_1 \|_{H^{k+2}(\mathbb{T})} \| \nabla\Psi(z_1(\cdot))\cdot (-z'_1(\cdot)^\perp)\|_{L^2(\mathbb{T})}, \end{align*} which yields \eqref{Lestimate2}. Lastly, in order to show \eqref{Lestimate3}, we rewrite $L_2$ as \[ L_2 = \int_{\mathbb{T}}g(\alpha,\beta)\frac{(\Psi(z_1(\beta))-M_\Psi) - (\Psi(z_1(\alpha))-M_\Psi)}{2\sin(\frac{\alpha-\beta}{2})} d\beta, \] where \[ g(\alpha,\beta) = \frac{(z_1(\alpha)-z_1(\beta))\cdot z_1'(\beta)^\perp}{2\sin(\frac{\alpha-\beta}{2})} \cdot \frac{2-2\cos(\alpha-\beta)}{|z_1(\alpha)-z_1(\beta)|^2}, \quad M_\Psi:=\frac{1}{2\pi}\int_{\mathbb{T}}\Psi(z_1(\alpha))d\alpha. \] From Lemmas~\ref{appendix_lem_1} and \ref{ponce_kato}, we have \[ \| g \|_{H^{k+1}(\mathbb{T}^2)} \lesssim \| R_1 \|_{H^{k+2}(\mathbb{T})}. \] Therefore, it follows from Lemma~\ref{J_linear} that \[ \| L_2 \|_{H^{k+1}(\mathbb{T})} \lesssim \|(\Psi(z_1(R,\cdot))-M_\Psi) \|_{H^{k+1}(\mathbb{T})} \lesssim \| \nabla\Psi(z_1(R,\cdot))\cdot (z'_1(R,\cdot))\|_{H^k(\mathbb{T})}, \] where the last inequality follows from the Poincar\'e inequality. Hence \eqref{Lestimate3} follows. \end{proof} \subsubsection{Estimates on the linearized operator} Our goal here is to prove that $G$ satisfies the hypotheses $(c)$, $(\tilde{c}-1)$ and $(\tilde{c}-2)$ in Theorem~\ref{theorem1}. 
More precisely, we will prove the following proposition: \begin{prop}\label{linear_prop} Let $a(\Theta_3,R)$ and $A(\Theta_3,R)$ be as in \eqref{decomp_11}. Then \begin{enumerate} \item[\rom{1})] $a(\Theta_3,R)\in \mathcal{L}((X^{k+1})^3,(Y^{k})^3)$, $A(\Theta_3,R) \in \mathcal{L}((X^{k+1})^3,Y^{k+1}\times (Y^{k})^2)$ with the estimates \eqref{approxinverse1} and \eqref{approxinverse2}. \item[\rom{2})] For $(\Theta_3^1,R^1),\ (\Theta_3^2,R^2)\in I\times (V^3\cap (C^\infty)^3),$ \eqref{NM_lip} holds. \item[\rom{3})] For any even $\sigma \in \mathbb{N}\cup \left\{ 0 \right\}$, there exists $0<\eta<1$ such that if $\rVert R \rVert_{(H^{k+3}(\mathbb{T}))^3} \le \eta$, and $A(\Theta_3,R)[h] =z$ for some $z\in (C^\infty)^3$ and $h \in \text{Ker}(A(\Theta^*_3,0))^{\perp}$, then \eqref{tame1} holds. \end{enumerate} \end{prop} \begin{proof} \textbf{Proof of \rom{1}).} By definition of $a(\Theta_3,R)$ in \eqref{decomp_11}, we have $\int_{\mathbb{T}}a(\Theta_3,R)[h](x)dx = 0 $. Furthermore, $h_1'$ is clearly a $\frac{2\pi}{m}$-periodic and odd function. Using the invariance of the stream function under rotation/reflection, it follows straightforwardly that $u^\theta_1$, which is the radial derivative of the stream function on the outermost boundary, is also $\frac{2\pi}{m}$-periodic and even. Therefore, $a(\Theta_3,R)[h]$ is $\frac{2\pi}{m}$-periodic and odd. The estimate \eqref{approxinverse1} follows immediately from Proposition~\ref{a_estimate11}, and $a(\Theta_3,R)\in \mathcal{L}((X^{k+1})^3,(Y^{k})^3)$. Similarly, $A(\Theta_3,R) \in \mathcal{L}((X^{k+1})^3,Y^{k+1}\times (Y^{k})^2)$, and \eqref{approxinverse2} follows from (B) in Proposition~\ref{growth_velocity}. \textbf{Proof of \rom{2}).} Thanks to (C) in Proposition~\ref{growth_velocity} and Proposition~\ref{b_derivative}, each term in $A(\Theta_3,R)$ is Lipschitz continuous with respect to $R\in (H^{k+3}(\mathbb{T}))^3$. Furthermore, $A(\Theta_3,R)$ and $b_3$ depend smoothly on $\Theta_3$; therefore the result follows immediately. 
\textbf{Proof of \rom{3}).} In order to prove \rom{3}), we first claim that for each even $\sigma \in \mathbb{N}\cup \left\{ 0 \right\}$, there exist $\eta>0$ and a linear map $L^\sigma(R):(C^\infty(\mathbb{T}))^3 \mapsto (C^{\infty}(\mathbb{T}))^3$ such that if $\rVert R \rVert_{(H^{k+3}(\mathbb{T}))^3} \le \eta$, then \begin{align} & \left(\frac{d}{dx}\right)^\sigma A(\Theta_3,R)[h] = A(\Theta_3,R)[h^{(\sigma)}] + L^\sigma(R)[h], \label{claim_Tsigma}\\ &\rVert L^\sigma(R)[h]\rVert_{H^{k+1}(\mathbb{T}) \times (H^{k}(\mathbb{T}))^2} \lesssim_{k,\sigma} \rVert h \rVert_{(H^{k+\sigma}(\mathbb{T}))^3} + (1 + \rVert R \rVert_{(H^{k+4+\sigma}(\mathbb{T}))^3})\rVert h \rVert_{(H^{k+1}(\mathbb{T}))^3}.\label{claim_Tsigma2} \end{align} Assume for the moment that the above claims hold, and suppose \[ A(\Theta_3,R)[h] = z, \quad \text{ for some }z\in (C^\infty(\mathbb{T}))^3,\ h\in \text{Ker}(A(\Theta_3^*,0))^{\perp}\subset (H^{k+1}(\mathbb{T}))^3. \] From \eqref{sizeassumption}, \rom{2}), Lemma~\ref{functional_stability} and the assumption that $\rVert R \rVert_{(H^{k+3}(\mathbb{T}))^3}\le \eta$ for some small enough $\eta$, we have that $A(\Theta_3,R):\text{Ker}(A(\Theta_3^*,0))^\perp\subset (H^{k+1}(\mathbb{T}))^3 \mapsto \text{Im}(A(\Theta_3,R))\subset H^{k+1}(\mathbb{T})\times (H^k(\mathbb{T}))^2$ is invertible and \begin{align}\label{invert_11} \rVert A(\Theta_3,R)^{-1} \rVert_{\mathcal{L}(\text{Im}(A(\Theta_3,R)),\text{Ker}(A(\Theta_3^*,0))^\perp)} \lesssim 1. \end{align} Therefore, we have \begin{align}\label{low_invert} \rVert h \rVert_{(H^{k+1}(\mathbb{T}))^3} \lesssim \rVert z \rVert_{H^{k+1}(\mathbb{T})\times (H^{k}(\mathbb{T}))^2}. \end{align} Now, for each even $\sigma\in \mathbb{N}\cup\left\{ 0 \right\}$, it follows from \eqref{claim_Tsigma} that \begin{align*} \left(\frac{d}{dx}\right)^\sigma A(\Theta_3,R)[h] = A(\Theta_3,R)[h^{(\sigma)}] + L^\sigma(R)[h] = z^{(\sigma)}. 
\end{align*} Thanks to Proposition~\ref{Fredholm}, we have $h^{(\sigma)}\in \text{Ker}(A(\Theta_3^*,0))^\perp$, thus, it follows from \eqref{invert_11} that \[ h^{(\sigma)} = A(\Theta_3,R)^{-1}[z^{(\sigma)}- L^\sigma(R)[h]], \] and \begin{align}\label{h_high_estimate} \rVert h^{(\sigma)}\rVert_{(H^{k+1}(\mathbb{T}))^3} &\lesssim \rVert z^{(\sigma)} - L^\sigma(R)[h] \rVert_{H^{k+1}(\mathbb{T})\times (H^{k}(\mathbb{T}))^2} \nonumber\\ & \lesssim \rVert z \rVert_{H^{k+1+\sigma}(\mathbb{T}) \times (H^{k+\sigma}(\mathbb{T}))^2} + \rVert L^\sigma(R)[h] \rVert_{H^{k+1}(\mathbb{T}) \times (H^{k}(\mathbb{T}))^2}. \end{align} From \eqref{claim_Tsigma2}, we also have \begin{align}\label{Lsigma_estimate} \rVert L^\sigma(R)[h] \rVert_{H^{k+1}(\mathbb{T}) \times (H^{k}(\mathbb{T}))^2} & \lesssim \rVert h \rVert_{(H^{k+\sigma}(\mathbb{T}))^3} + (1 + \rVert R \rVert_{(H^{k+4+\sigma}(\mathbb{T}))^3})\rVert h \rVert_{(H^{k+1}(\mathbb{T}))^3}. \end{align} Using Lemma~\ref{GNinterpolation}, we have \begin{align*} \rVert h \rVert_{(H^{k+\sigma}(\mathbb{T}))^3} & \lesssim \rVert h \rVert_{(H^{k+1}(\mathbb{T}))^3} + \bigg\rVert\left( h^{(k+1)}\right)^{(\sigma-1)} \bigg\rVert_{(L^2(\mathbb{T}))^3}\\ & \lesssim \rVert h \rVert_{(H^{k+1}(\mathbb{T}))^3} + \rVert h^{(k+1)} \rVert_{(L^2(\mathbb{T}))^3}^{\frac{1}{\sigma}} \bigg\rVert \left(h^{(k+1)}\right)^{(\sigma)} \bigg\rVert_{(L^2(\mathbb{T}))^3}^{\frac{\sigma-1}{\sigma}} \\ & \lesssim (1+C(\epsilon))\rVert h \rVert_{(H^{k+1}(\mathbb{T}))^3} + \epsilon \rVert h^{(\sigma)} \rVert_{(H^{k+1}(\mathbb{T}))^3}, \end{align*} for any $\epsilon>0$, where the last inequality follows from Young's inequality. 
Plugging this into \eqref{Lsigma_estimate} and using \eqref{low_invert}, we obtain \[ \rVert L^\sigma(R)[h] \rVert_{H^{k+1}(\mathbb{T}) \times (H^{k}(\mathbb{T}))^2} \lesssim (C(\epsilon) + \rVert R \rVert_{(H^{k+4+\sigma}(\mathbb{T}))^3}) \rVert z \rVert_{H^{k+1}(\mathbb{T})\times (H^{k}(\mathbb{T}))^2} + \epsilon \rVert h^{(\sigma)} \rVert_{(H^{k+1}(\mathbb{T}))^3}. \] Hence, choosing $\epsilon$ sufficiently small, depending on $k$ and $\sigma$, \eqref{h_high_estimate} yields \[ \rVert h^{(\sigma)}\rVert_{(H^{k+1}(\mathbb{T}))^3} \lesssim \rVert z \rVert_{H^{k+1+\sigma}(\mathbb{T}) \times (H^{k+\sigma}(\mathbb{T}))^2} + (1 + \rVert R \rVert_{(H^{k+4+\sigma}(\mathbb{T}))^3}) \rVert z \rVert_{H^{k+1}(\mathbb{T})\times (H^{k}(\mathbb{T}))^2}. \] With this estimate and \eqref{low_invert}, \eqref{tame1} follows. In order to finish the proof, we need to prove the claims \eqref{claim_Tsigma} and \eqref{claim_Tsigma2}. To this end, we note that each component of $A(\Theta_3,R)[h]$ consists of linear combinations of terms of the following forms (see \eqref{b_der_R}, \eqref{derivative_matrix_3} and \eqref{decomp_11}): \begin{align} &u^\theta_i(R)h_i', \text{ for }i=2,3,\nonumber\\ &h_i u_i^r(\Theta_3,R),\ R_i'Du^\theta_i(\Theta_3,R)[h], \ (b_i+R_i)Du_i^r(\Theta_3,R)[h], \ \text{ for }i=1,2,3,\label{terms1}\\ & \int_{\mathbb{T}}R_i(x)h_i(x)dx\left(R_i'\partial_{b_3}u^\theta_i - (b_i+R_i)\partial_{b_3}u_i^r \right), \ \text{ for } i=1,2,\label{terms2}\\ & \int_{\mathbb{T}}R_3(x)h_3(x)dx\left(R_3'\partial_{b_3}u^\theta_3 - (b_3+R_3)\partial_{b_3}u_3^r-u_3^r \right).\label{terms3} \end{align} For $i=2,3$, it is clear that \begin{align}\label{L_1} \left(\frac{d}{dx}\right)^{\sigma}\left( u^{\theta}_i(R)h'_i\right) = u^\theta_ih^{(\sigma+1)}_i + \sum_{p+q=\sigma,\ q\le \sigma-1} C_{p,q,\sigma}\underbrace{\left( u^\theta_i(R)\right)^{(p)}h_i^{(q+1)}}_{L^\sigma_1(R)[h_i]}, \end{align} for some constants $C_{p,q,\sigma}$. 
From Lemma~\ref{pq_derivatives}, we have \begin{align} \rVert L^\sigma_1(R)[h_i]\rVert_{H^{k}(\mathbb{T})} &\lesssim \rVert u_i^{\theta}(R)\rVert_{H^2(\mathbb{T})}\rVert h_i\rVert_{H^{k+\sigma}(\mathbb{T})} + \rVert u_i^{\theta}(R) \rVert_{H^{k+\sigma}(\mathbb{T})}\rVert h_i \rVert_{H^1(\mathbb{T})}\nonumber\\ & \lesssim \rVert h_i \rVert_{H^{k+\sigma}(\mathbb{T})} + (1 + \rVert R \rVert_{(H^{k+1+\sigma}(\mathbb{T}))^3})\rVert h_i \rVert_{H^1(\mathbb{T})},\label{L_2} \end{align} where the second inequality follows from Proposition~\ref{growth_velocity}. Among the remaining terms, we only deal with $R_1'Du_1^{\theta}(\Theta_3,R)[h]$, since the others can be treated in the same way. Thus, we will show that there exists a linear operator $L_2^\sigma(R)$ such that \begin{align} & \left(\frac{d}{dx}\right)^\sigma \left(R_1'Du_1^{\theta}(\Theta_3,R)[h] \right) = R_1'Du_1^{\theta}(\Theta_3,R)[h^{(\sigma)}] + L_2^\sigma(R)[h], \label{claim_Tsigma3}\\ &\rVert L_2^\sigma(R)[h]\rVert_{H^{k+1}(\mathbb{T})} \lesssim_{k,\sigma} \rVert h\rVert_{(H^{k+\sigma}(\mathbb{T}))^{3}} + \rVert R \rVert_{(H^{k+4+\sigma}(\mathbb{T}))^3}\rVert h \rVert_{(H^3(\mathbb{T}))^3}. 
\label{claim_Tsigma4} \end{align} It follows from (D) in Proposition~\ref{growth_velocity} that there exists $T^\sigma(R)[h]$ such that \begin{align*} \left(\frac{d}{dx}\right)^{\sigma} \left(R_1'Du_1^{\theta}(\Theta_3,R)[h] \right) & = R_1' \left(Du_1^{\theta}(\Theta_3,R)[h]\right)^{(\sigma)} + \sum_{p+q=\sigma,\ q\le \sigma-1}(R_1')^{(p)}\left(Du_1^{\theta}(\Theta_3,R)[h]\right)^{(q)}\\ & = R_1'Du_1^{\theta}(\Theta_3,R)[h^{(\sigma)}] +\underbrace{ R_1'T^\sigma(R)[h] + \sum_{p+q=\sigma,\ q\le \sigma-1}C_{p,q,\sigma}(R_1')^{(p)}\left(Du_1^{\theta}(\Theta_3,R)[h]\right)^{(q)}}_{=:L^\sigma_2(R)[h]}, \end{align*} and \[ \rVert T^\sigma(R)[h]\rVert_{H^{k+1}(\mathbb{T})} \lesssim (1+\rVert R \rVert_{(H^4(\mathbb{T}))^3})\rVert h^{(k+\sigma)} \rVert_{(L^2(\mathbb{T}))^3} + (1+ \rVert R \rVert_{(H^{k+4+\sigma}(\mathbb{T}))^3})\rVert h\rVert_{(L^2(\mathbb{T}))^3} . \] Therefore, from (B) in Proposition~\ref{growth_velocity} and Lemma~\ref{pq_derivatives}, it follows that \begin{align}\label{rprime} \rVert R_1' T^{\sigma}(R)[h]\rVert_{H^{k+1}(\mathbb{T})} &\lesssim \rVert R_1' \rVert_{H^{k+1}(\mathbb{T})} \rVert T^{\sigma}(R)[h]\rVert_{H^{k+1}(\mathbb{T})} \nonumber\\ & \lesssim \rVert h^{(k+\sigma)} \rVert_{(L^2(\mathbb{T}))^3} + (1+ \rVert R \rVert_{(H^{k+4+\sigma}(\mathbb{T}))^3})\rVert h\rVert_{(L^2(\mathbb{T}))^3}, \end{align} where we used that $H^{k+1}(\mathbb{T})$ is a Banach algebra and that $\rVert R \rVert_{(H^{4}(\mathbb{T}))^3}\lesssim \rVert R \rVert_{(H^{k+3}(\mathbb{T}))^3} \le \eta$ for some $0 <\eta < 1$. 
From Lemma~\ref{pq_derivatives}, we also have \begin{align*} \rVert (R_1')^{(p)}\left(Du_1^{\theta}(\Theta_3,R)[h]\right)^{(q)} \rVert_{H^{k+1}(\mathbb{T})}& \lesssim \rVert R_1 \rVert_{H^2(\mathbb{T})} \rVert Du_1^{\theta}(\Theta_3,R)[h] \rVert_{H^{k+\sigma}(\mathbb{T})} + \rVert R_1 \rVert_{H^{k+1+\sigma}(\mathbb{T})}\rVert Du_1^{\theta}(\Theta_3,R)[h] \rVert_{H^1(\mathbb{T})} \\ & \lesssim \rVert Du_1^{\theta}(\Theta_3,R)[h] \rVert_{H^{k+\sigma}(\mathbb{T})} + \rVert R_1 \rVert_{H^{k+1+\sigma}(\mathbb{T})}\rVert Du_1^{\theta}(\Theta_3,R)[h] \rVert_{H^3(\mathbb{T})} \\ & \lesssim \rVert h \rVert_{(H^{k+\sigma}(\mathbb{T}))^3} + \rVert R \rVert_{(H^{k+3 + \sigma}(\mathbb{T}))^3}\rVert h \rVert_{(H^1(\mathbb{T}))^3} \\ & \ + \rVert R_1 \rVert_{H^{k+1+\sigma}(\mathbb{T})}\rVert Du_1^{\theta}(\Theta_3,R)[h] \rVert_{H^3(\mathbb{T})}, \end{align*} where the last inequality follows from (B) in Proposition~\ref{growth_velocity}, which also tells us that \begin{align*} \rVert Du_1^{\theta}(\Theta_3,R)[h] \rVert_{H^3(\mathbb{T})} & \lesssim (1+\rVert R \rVert_{(H^{5}(\mathbb{T}))^3})\rVert h\rVert_{(H^{3}(\mathbb{T}))^3}\\ & \lesssim (1 + \rVert R \rVert_{(H^{k+3+\sigma}(\mathbb{T}))^3})\rVert h \rVert_{(H^{3}(\mathbb{T}))^3}, \end{align*} where we used $k\ge 2$. Therefore, we have \[ \rVert (R_1')^{(p)}\left(Du_1^{\theta}(\Theta_3,R)[h]\right)^{(q)} \rVert_{H^{k+1}(\mathbb{T})} \lesssim \rVert h \rVert_{(H^{k+\sigma}(\mathbb{T}))^3} + (1 + \rVert R \rVert_{(H^{k+3 + \sigma}(\mathbb{T}))^3})\rVert h \rVert_{(H^{3}(\mathbb{T}))^3}. \] Combining this with \eqref{rprime}, we obtain \eqref{claim_Tsigma4}. Since every term in \eqref{terms1}, \eqref{terms2} and \eqref{terms3} can be shown to satisfy the same property as in \eqref{claim_Tsigma3} and \eqref{claim_Tsigma4}, combining with \eqref{L_1} and \eqref{L_2}, we obtain \eqref{claim_Tsigma} and \eqref{claim_Tsigma2}. 
\end{proof} \subsubsection{Spectral study}\label{Spectral} In this subsection, we verify the hypotheses $(d)$ and $(e)$; they will be proved in Proposition~\ref{Fredholm} and Lemma~\ref{transversality_11}, respectively. \begin{prop}\label{Fredholm} Let $2\le m\in \mathbb{N}$. Choose $b_1=\Theta_1 = 1$, let $b_2$ and $\Theta_2$ be as in \eqref{parameter_3}, and set $b_3$ as in \eqref{def_b3}. Then, there exists $\Theta^*_3 > 0$ such that $A(\Theta^*_3,0)\in \mathcal{L}((H^{k+1})^3,H^{k+1}\times (H^{k})^2)$ has a one-dimensional kernel and its image has codimension one. That is, \begin{align*} \text{Ker}(A(\Theta^*_3,0)) = \text{span}\left\{ v\right\}, \quad \text{Im}(A(\Theta^*_3,0))^{\perp} = \text{span}\left\{w\right\}. \end{align*} Furthermore, $0< b_3(\Theta^*_3,0) < b_2 < b_1=1$, and $v$ and $w$ are supported on the $m$-th Fourier mode. \end{prop} The proof of the above proposition will be accomplished through several lemmas. Let us first compute the derivative of $G$ at $(\Theta_3,0)$. Since $G(\Theta_3, 0)=0$ for any $\Theta_3$, \rom{1}) in Proposition~\ref{linear_prop} yields that $a(\Theta_3, 0)=0$ for any $\Theta_3$. 
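The determinant analysis behind Proposition~\ref{Fredholm} lends itself to a quick numerical sanity check. The following is a hedged sketch only: the parameters $m=2$, $b_2=1/2$, $\Theta_2=-5$ are illustrative choices that we assume admissible for \eqref{parameter_3}, and the formulas are transcribed from the matrix \eqref{mn_formula} and the polynomial \eqref{determinant_f} derived below. The sketch finds the root $b_3$, the corresponding $\Theta_3^*$, and confirms that $M_1$ is singular while $M_n$, $n\ge 2$, are not.

```python
import numpy as np

# Hedged numerical check of the kernel lemma below. Parameters m = 2, b_1 = Theta_1 = 1,
# b_2 = 1/2, Theta_2 = -5 are illustrative (assumed to satisfy (parameter_3)); the
# coefficients B_i and the matrix M_n are transcribed from (determinant_f), (mn_formula).
m, b2, Th2 = 2, 0.5, -5.0

A33 = Th2/(2*m)*(1 - b2**(2*m)) + (1 - b2**2)/(2*b2**2)
B0 = (-1 - Th2*b2**2)*((1 - 1/b2**2)/(4*m) + Th2*(b2**m - b2**(-m))/(4*m**2*b2**m))
B1 = A33*(-0.5 - 0.5*Th2)
B2 = A33*(-1 - Th2*b2**2)*(-0.5 + 1/(2*m))
# sign pattern behind Descartes' rule: B0, B1 > 0 > B2

# For m = 2, f(b_3) = B0 b_3^4 + B1 b_3^2 + B2 is quadratic in u = b_3^2.
u = (-B1 + np.sqrt(B1**2 - 4*B0*B2))/(2*B0)
b3 = np.sqrt(u)
Th3 = -(1 + Th2*b2**2)/b3**2     # enforces 1 + Theta_2 b_2^2 + Theta_3 b_3^2 = 0

def M(n):
    # entrywise transcription of M_n(Theta_3, b_3); k plays the role of mn
    k = m*n
    return np.array([
        [(-0.5 + 1/(2*k)) - 0.5*Th2*b2**2 - 0.5*Th3*b3**2,
         Th2*b2*b2**k/(2*k),
         Th3*b3*b3**k/(2*k)],
        [b2**k/(2*k),
         Th2*b2*(-0.5 + 1/(2*k)) - 0.5*b2 - 0.5*Th3*b3**2/b2,
         Th3*b3/(2*k)*(b3/b2)**k],
        [b3**k/(2*k),
         Th2*b2/(2*k)*(b3/b2)**k,
         Th3*b3*(-0.5 + 1/(2*k)) - 0.5*b3 - 0.5*Th2*b3]])

print(b3, Th3, np.linalg.det(M(1)))   # expect 0 < b3 < b2, Th3 > 0, det M_1 ~ 0
```

With these illustrative values the computed root satisfies $0<b_3<b_2$, $\Theta_3^*>0$, $\det M_1$ vanishes to machine precision, and $\det M_n$ stays bounded away from zero for $n\ge 2$, matching the dichotomy established in the lemmas below.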
Moreover, it follows from \eqref{derivative_matrix_3}, \eqref{b_der_R} and Lemma~\ref{linearpart2} that letting $h_1(x) = \sum_{n\geq 1}h_{1,n} \cos(mnx),$ $h_2(x) = \sum_{n\geq 1}h_{2,n}\cos(mnx)$, $h_3(x) = \sum_{n\geq 1}h_{3,n}\cos(mnx)$, \begin{align*} A(\Theta_3,0)[h_1,h_2,h_3] = \left(\begin{array}{c}Q_1(x) \\ Q_2(x) \\ Q_3(x)\end{array}\right), \end{align*} where \begin{align*} Q_1(x) = \sum_{n\ge 1} q_{1,n} \sin(mnx), \quad Q_2(x) = \sum_{n\ge 1}q_{2,n} \sin(mnx), \quad Q_3(x) = \sum_{n\ge 1}q_{3,n} \sin(mnx) \end{align*} where the coefficients satisfy, for any $n$: \begin{align}\label{matrix_form1} (-mn) M_n(\Theta_3,b_3) \left(\begin{array}{c}h_{1,n} \\ h_{2,n} \\ h_{3,n} \end{array}\right) = \left(\begin{array}{c}q_{1,n} \\ q_{2,n} \\ q_{3,n} \end{array}\right), \end{align} with \begin{align} & M_n(\Theta_3,b_3) = \nonumber \\ & \left( \begin{array}{ccc} \left(-\frac12 + \frac{1}{2mn}\right) - \frac12 \Theta_2 b_2^2 - \frac12 \Theta_3 b_3^2 & \Theta_2 b_2 \frac{b_2^{mn}}{2mn} & \Theta_3 b_3 \frac{b_3^{mn}}{2mn}\\ \frac{b_2^{mn}}{2mn} & \Theta_2 b_2\left(-\frac12 + \frac{1}{2mn}\right) + \left(-\frac12 b_2\right) + \left(-\frac12\right)\Theta_3 \frac{b_3^2}{b_2} & \Theta_3 b_3 \frac{1}{2mn}\left(\frac{b_3}{b_2}\right)^{mn} \\ \frac{b_3^{mn}}{2mn} & \Theta_2 b_2 \frac{1}{2mn}\left(\frac{b_3}{b_2}\right)^{mn} & \Theta_3 b_3\left(-\frac12 + \frac{1}{2mn}\right) + \left(-\frac12 b_3\right) -\frac{\Theta_2 b_3}{2} \end{array} \right). \label{mn_formula} \end{align} \begin{lemma}\label{kernel} Let $2\le m\in \mathbb{N}$, and $b_2,$ $\Theta_2$ satisfy \eqref{parameter_3}. 
Then, there exists $\Theta^*_3 := \Theta^*_{3,m}> 0$ and $b_3$ such that $0< b_3< b_2, $ and \begin{align} &\text{det}(M_1)=0, \ \Theta_1b_1^2 +\Theta_2b_2^2+\Theta_3^*b_3^2 = 1+\Theta_2b_2^2+\Theta_3^*b_3^2=0,\label{singular}\\ &\text{det}(M_n(\Theta^*_3,b_3))\neq0, \quad \text{ if $n \geq 2$.}\label{nonsingular1} \end{align} \end{lemma} \begin{proof} We first write $\text{det}(M_1)$ as a polynomial in $b_3$. Under the constraint $1+\Theta_2b_2^2+\Theta_3b_3^2=0$, we get \begin{align} \label{M_1} M_{1}(\Theta_3,b_3) & = \small \left( \begin{array}{ccc} \frac{1}{2m}& \Theta_2 b_2 \frac{b_2^{m}}{2m} & \Theta_3 b_3 \frac{b_3^{m}}{2m}\\ \frac{b_2^{m}}{2m} & \Theta_2 b_2\left(-\frac12 + \frac{1}{2m}\right) + \left(-\frac12 b_2\right) + \left(-\frac12\right)\Theta_3 \frac{b_3^2}{b_2} & \Theta_3 b_3 \frac{1}{2m}\left(\frac{b_3}{b_2}\right)^{m} \\ \frac{b_3^{m}}{2m} & \Theta_2 b_2 \frac{1}{2m}\left(\frac{b_3}{b_2}\right)^{m} & \Theta_3 b_3\left(-\frac12 + \frac{1}{2m}\right) + \left(-\frac12 b_3\right) + \left(-\frac12\right)\Theta_2 b_3 \end{array} \right). 
\end{align} Therefore, multiplying the first row by $2m$, multiplying the third column by $b_3$ and dividing the second column by $b_2$, we get \begin{align*} &\frac{2mb_3}{b_2} \text{det}(M_{1}(\Theta_3,b_3)) =\text{det} \small \left( \begin{array}{ccc} 1& \Theta_2 b_2^{m} & \Theta_3b_3^2 b_3^{m}\\ \frac{b_2^{m}}{2m} & \Theta_2 \left(-\frac12 + \frac{1}{2m}\right) + \left(-\frac12\right) + \left(-\frac12\right)\Theta_3 \frac{b_3^2}{b_2^2} & \Theta_3 \frac{1}{2m}b_3^2\left(\frac{b_3}{b_2}\right)^{m} \\ \frac{b_3^{m}}{2m} & \Theta_2 \frac{1}{2m}\left(\frac{b_3}{b_2}\right)^{m} & \Theta_3 b_3^2\left(-\frac12 + \frac{1}{2m}\right) + \left(-\frac12 b_3^2\right) + \left(-\frac12\right)\Theta_2 b_3^2 \end{array} \right)\\ &=\text{det}\small \left( \begin{array}{ccc} 1& \Theta_2 b_2^{m} & (-1-\Theta_2b_2^2) b_3^{m}\\ \frac{b_2^{m}}{2m} & \Theta_2 \left(-\frac12 + \frac{1}{2m}\right) + \left(-\frac12\right) + \left(-\frac12\right)\frac{(-1-\Theta_2 b_2^2)}{b_2^2} & (-1-\Theta_2b_2^2) \frac{1}{2m}\left(\frac{b_3}{b_2}\right)^{m} \\ \frac{b_3^{m}}{2m} & \Theta_2 \frac{1}{2m}\left(\frac{b_3}{b_2}\right)^{m} & (-1-\Theta_2b_2^2)\left(-\frac12 + \frac{1}{2m}\right) + \left(-\frac12 b_3^2\right) + \left(-\frac12\right)\Theta_2 b_3^2 \end{array} \right)\\ &=\text{det}\small \left( \begin{array}{ccc} 1& \Theta_2 b_2^{m} & (-1-\Theta_2b_2^2) b_3^{m}\\ \frac{b_2^{m}}{2m} & \frac{\Theta_2}{2m}-\frac{1}{2}+\frac{1}{2b_2^2} & (-1-\Theta_2b_2^2) \frac{1}{2m}\left(\frac{b_3}{b_2}\right)^{m} \\ \frac{b_3^{m}}{2m} & \Theta_2 \frac{1}{2m}\left(\frac{b_3}{b_2}\right)^{m} & (-1-\Theta_2b_2^2)\left(-\frac12 + \frac{1}{2m}\right) + \left(-\frac12 -\frac12\Theta_2\right) b_3^2 \end{array} \right)\\ & =: \text{det} \begin{pmatrix} a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}, \end{align*} where we used the condition $1+\Theta_2b_2^2+\Theta_3b_3^2=0$ in the second equality to get rid of $\Theta_3$. 
We further compute \begin{align*} \frac{2mb_3}{b_2}& \text{det}(M_{1}(\Theta_3,b_3)) = a_{31} (a_{12}a_{23} - a_{13}a_{22}) - a_{32}(a_{11}a_{23} - a_{13}a_{21}) + a_{33}(a_{11}a_{22} - a_{12}a_{21}) \\ &=b_3^{2m}\left( \frac{1}{4m}(1-\frac{1}{b_2^2})(-1-\Theta_2b_2^2)+\frac{\Theta_2(b_2^m-\frac{1}{b_2^m})(-1-\Theta_2b_2^2)}{4m^2b_2^m}\right)+A_{3,3}\left( -\frac{1}{2}-\frac{1}{2}\Theta_2\right) b_3^2\\ & \ +A_{3,3}(-1-\Theta_2b_2^2)\left(-\frac{1}{2}+\frac{1}{2m}\right), \end{align*} where $A_{3,3} :=a_{11}a_{22} - a_{12}a_{21}=\frac{\Theta_2}{2m}(1-b_2^{2m})+\frac{1}{2b_2^2}(1-b_2^2)$ is the corresponding cofactor. Let us write \begin{align}\label{determinant_f} \frac{2mb_3}{b_2} \text{det}(M_{1}(\Theta_3,b_3))=B_0b_3^{2m}+B_1b_3^2+B_2 =:f(b_3), \end{align} where \[ B_0=(-1-\Theta_2b_2^2)\left( \frac{1}{4m}(1-\frac{1}{b_2^2})+\frac{\Theta_2(b_2^m-\frac{1}{b_2^m})}{4m^2b_2^m}\right), \] \[ B_1=A_{3,3}\left( -\frac{1}{2}-\frac{1}{2}\Theta_2 \right), \] \[ B_2=A_{3,3}(-1-\Theta_2b_2^2)\left(-\frac{1}{2}+\frac{1}{2m} \right). \] Under the conditions in \eqref{parameter_3}, we have \begin{align}\label{inequality 1} &-1-\Theta_2b_2^2>0,\\\label{inequality 2} &\frac{1}{4m}(1-\frac{1}{b_2^2})+\frac{\Theta_2(b_2^m-\frac{1}{b_2^m})}{4m^2b_2^m}>0,\\\label{inequality 3} &A_{3,3}>0,\\\label{inequality 4} &-\frac{1}{2}-\frac{\Theta_2}{2}>0,\\\label{inequality 5} &-\frac{1}{2}+\frac{1}{2m}<0. \end{align} Hence $B_0> 0$, $B_1> 0$ and $B_2<0$, so by Descartes' rule of signs $f$ has a unique root $b_3\in (0,\infty)$. Moreover, since $f(0)=B_2<0$, in order to conclude that this root satisfies $0<b_3<b_2$, it suffices to show that $f(b_2) > 0$. Once this is done, $\Theta_3^*$ is uniquely determined by $1+\Theta_2b_2^2+\Theta_3b_3^2=0$, and this proves the existence of $b_3$ and $\Theta^*_3$ satisfying \eqref{singular}. 
Therefore, we compute \begin{align*} &f(b_2) = B_0b_2^{2m}+B_1b_2^2+B_2\\ &=[\frac{(\Theta_2b_2^2+1)(1-b_2^2)}{4mb_2^2}+\Theta_2\frac{(\frac{1}{b_2^m}-b_2^m)(1+\Theta_2b_2^2)}{4m^2b_2^m}]b_2^{2m}+(\frac{\Theta_2}{2m}(1-b_2^{2m})+\frac{1}{2b_2^2}(1-b_2^2))(-\frac{b_2^2}{2}+\frac{1}{2}-\frac{1}{2m}-\frac{\Theta_2b_2^2}{2m})\\ &=(\frac{b_2^2(1-b_2^{2m})}{4m^2}-\frac{b_2^2}{2m}\frac{1-b_2^{2m}}{2m})\Theta_2^2\\ &+[\frac{(1-b_2^2)b_2^{2+2m}}{4m^2b_2^2}+\frac{(\frac{1}{b_2^{m}}-b_2^{m})b_2^{2m}}{4m^2b_2^m}+\frac{1-b_2^{2m}}{2m}(-\frac{b_2^2}{2}+\frac{1}{2}-\frac{1}{2m})+\frac{1}{2b_2^2}(1-b_2^2)(-\frac{b_2^2}{2m})]\Theta_2\\ &+(b_2^{2m}\frac{1-b_2^2}{4mb_2^2})+\frac{1}{2b_2^2}(1-b_2^2)(-\frac{b_2^2}{2}+\frac{1}{2}-\frac{1}{2m})\\ &=\frac{1-b_2^2}{4mb_2^2}(b_2^{2m}+m-mb_2^2-1)\\ &=\frac{(1-b_2^2)^2}{4mb_2^2}(\frac{b_2^{2m}-1}{1-b_2^{2}}+m)\\ &=\frac{(1-b_2^2)^2}{4mb_2^2}(m-1-b_2^2-b_2^4-...-b_2^{2m-2})>0. \end{align*} Now we are left to show \eqref{nonsingular1}, that is, $\text{det}(M_n(\Theta^*_3,b_3))\neq 0$ for $n\ge 2$. As before, we have (replacing $m$ by $nm$) \begin{align}\label{eq112} \frac{2nmb_3}{b_2}\text{det}(M_n(\Theta^*_3,b_3))=B_{0,n}b_3^{2nm}+B_{1,n}b_3^2+B_{2,n}, \end{align} where \[ B_{0,n}=\frac{1}{4mn}(1-\frac{1}{b_2^2})(-1-\Theta_2b_2^2)+\frac{\Theta_2(b_2^{nm}-\frac{1}{b_2^{nm}})(-1-\Theta_2b_2^2)}{4n^2 m^2b_2^{nm}}, \] \[ B_{1,n}=A_{3,3,n}\left(-\frac{1}{2}-\frac{1}{2}\Theta_2\right), \] \[ B_{2,n}=A_{3,3,n}(-1-\Theta_2b_2^2)(-\frac{1}{2}+\frac{1}{2nm}), \] \[ A_{3,3,n}=\frac{\Theta_2}{2nm}(1-b_2^{2nm})+\frac{1}{2b_2^2}(1-b_2^2). \] Since $f(b_3)=0$, where $f$ is as in \eqref{determinant_f}, we have $b_3^{2}=-\frac{B_2+B_0b_3^{2m}}{B_1}$. Thus, \[ b_3^{2nm}B_{0,n}+b_3^2 B_{1,n}+B_{2,n}= b_3^{2nm}B_{0,n}-\frac{B_{1,n}}{B_1}B_0b_3^{2m}-\frac{B_{1,n}}{B_1}B_2+B_{2,n}. \] Let $q(z)=z^nB_{0,n}-\frac{B_{1,n}}{B_1}B_0z-\frac{B_{1,n}}{B_1}B_2+B_{2,n}$. 
We will show that \begin{align}\label{qpolynomial} q(0)< 0 \quad\text{ and }\quad q'(z)<0 \quad \text{ for $0\leq z \leq b_2^{2m}$.} \end{align} Once we have the above inequalities, then $q(b_3^{2m}) < 0$, which implies that $\text{det}(M_n(\Theta^*_3,b_3))$ in \eqref{eq112} cannot be zero. This clearly finishes the proof. To show \eqref{qpolynomial}, let us first consider $q(0)$. Note that \[ \frac{B_{1,n}}{B_1}=\frac{A_{3,3,n}}{A_{3,3}},\quad \frac{B_{2,n}}{B_2}=\frac{A_{3,3,n}(-\frac{1}{2}+\frac{1}{2mn})}{A_{3,3}(-\frac{1}{2}+\frac{1}{2m})}, \] and \[ q(0)=-\frac{B_{1,n}}{B_1}B_2+B_{2,n}<0 \Leftrightarrow \frac{B_{2,n}}{B_2}>\frac{B_{1,n}}{B_1}\Leftrightarrow \frac{A_{3,3,n}}{A_{3,3}}>0. \] Hence, it suffices to show the stronger inequality $\frac{A_{3,3,n}}{A_{3,3}}>1$. Since $A_{3,3}>0$ by \eqref{inequality 3}, we note that \[ \frac{A_{3,3,n}}{A_{3,3}}>1\Leftrightarrow \frac{\Theta_2}{2mn}(1-b_2^{2mn})>\frac{\Theta_2}{2m}(1-b_2^{2m}). \] Clearly, the last inequality holds because $b_2<1$ and $\Theta_2<0$; therefore we have \[ q(0) < 0. \] Now, we turn to $q'(z)<0$. We have \[ q'(z)=nz^{n-1}B_{0,n}-\frac{B_{1,n}}{B_1}B_0. \] If $B_{0,n}\leq 0$, then $z\mapsto q'(z)$ is decreasing and $q'(0)=-\frac{B_{1,n}}{B_1}B_0=-\frac{A_{3,3,n}}{A_{3,3}}B_0<0$, which yields $q'(z)<0$ for all $ 0 \le z \le b_2^{2m}$. If $B_{0,n}>0$, it is sufficient to show \begin{align}\label{eq222} q'(b_2^{2m})\leq n(b_2^{2mn-2m})B_{0,n}-\frac{B_{1,n}}{B_1}B_0<0. \end{align} From \eqref{parameter_3}, we have $(1-\frac{1}{b_2^2})+\frac{\Theta_2}{m}(1-\frac{1}{b_2^{2m}})>\frac{1-b_2^2}{b_2^2}$. 
And the lower bound of $\Theta_2$ in \eqref{parameter_3} is equivalent to $\frac{\Theta_2}{m}(1-b_2^{2m})+\frac{1}{b_2^2}(1-b_2^2)>0$, hence \[ \eqref{eq222}\Leftrightarrow \frac{A_{3,3,n}}{A_{3,3}}=\frac{\frac{\Theta_2}{nm}(1-b_2^{2nm})+\frac{1}{b_2^2}(1-b_2^2)}{\frac{\Theta_2}{m}(1-b_2^{2m})+\frac{1}{b_2^2}(1-b_2^2)}> \frac{(1-\frac{1}{b_2^2})+\frac{\Theta_2}{mn}(1-\frac{1}{b_2^{2nm}})}{(1-\frac{1}{b_2^2})+\frac{\Theta_2}{m}(1-\frac{1}{b_2^{2m}})}b_2^{2nm-2m}, \] \[ \Leftrightarrow [\frac{\Theta_2}{nm}(1-b_2^{2nm})+\frac{1}{b_2^2}(1-b_2^2)][(1-\frac{1}{b_2^2})+\frac{\Theta_2}{m}(1-\frac{1}{b_2^{2m}})]>[(1-\frac{1}{b_2^2})+\frac{\Theta_2}{mn}(1-\frac{1}{b_2^{2nm}})][\frac{\Theta_2}{m}(1-b_2^{2m})+\frac{1}{b_2^2}(1-b_2^2)]b_2^{2mn-2m}, \] \[ \Leftrightarrow\Theta_2[\frac{(1-b_2^{2mn})(1-b_2^{-2})+n(b_2^{-2}-1)(1-b_2^{-2m})}{nm}-\frac{(b_2^{2mn}-1)b_2^{-2m}(b_2^{-2}-1)+n(1-b_2^{2m})b_2^{2mn-2m}(1-b_2^{-2})}{nm}] \] \[ +\frac{1}{b_2^2}(1-b_2^2)(1-\frac{1}{b_2^2})-(1-\frac{1}{b_2^2})\frac{1}{b_2^2}(1-b_2^2)b_2^{2nm-2m}>0 \] \[ \Leftrightarrow \frac{\Theta_2(1-b_2^{-2})(1-b_2^{-2m})(1-n)(1-b_2^{2nm})}{mn}-(1-b_2^{-2})^2(1-b_2^{2nm-2m})>0, \] \[ \Leftrightarrow \Theta_2<\frac{m(b_2^2-1)b_2^{2m}}{b_2^2(1-b_2^{2m})}\frac{1-b_2^{2m(n-1)}}{(1-b_2^{2mn})(1-\frac{1}{n})}. \] Since $\frac{1-b_2^{2m(n-1)}}{(1-b_2^{2mn})(1-\frac{1}{n})}\leq \frac{n}{n-1} \leq 2$, the condition $ \Theta_2< 2\frac{{b_2}^{2m-2}(b_2^{2}-1)m}{1-b_2^{2m}}$ in \eqref{parameter_3} leads to the inequality we want. This shows \eqref{qpolynomial}, and thus \eqref{nonsingular1} holds. \end{proof} From Lemma~\ref{kernel}, it is clear that the kernel of $A(\Theta_3^*,0)$ is one-dimensional. We denote by $v$ the element of $\text{Ker}(A(\Theta_3^*,0))$ such that \begin{align}\label{ker_element} v= \cos(mx) \begin{pmatrix} v^1\\ v^2\\ 1 \end{pmatrix}, \quad \begin{pmatrix} v^1\\ v^2\\ 1 \end{pmatrix} \in \text{Ker}(M_1(\Theta_3^*,b_3)).
\end{align} Since $b_3$ and $\Theta^*_3$ are tied by the relation $1+\Theta_2b_2^2 + \Theta^*_3b_3^2 = 0$, we now drop the dependence on $b_3$ of $M_n(\Theta_3^*):=M_n(\Theta^*_3,b_3)$. We now characterize the image of $A(\Theta_3^*,0)$. \begin{lemma}\label{Image_space} Let $b_2, \Theta_2$ satisfy the conditions in \eqref{parameter_3}, let $ b_3, \Theta_3^*$ be as in Lemma~\ref{kernel}, and define \begin{align}\label{def_Z} Z = \left\{Q=(Q_1, Q_2, Q_3) \in Y^{k+1,m}\times Y^{k,m} \times Y^{k,m}, \begin{pmatrix} Q_1\\ Q_2\\ Q_3 \end{pmatrix} = \sum_{n=1}^{\infty} \begin{pmatrix} q_{1,n}\\ q_{2,n}\\ q_{3,n} \end{pmatrix} \sin(mnx), \right. \\ \left. \left(\begin{array}{c}q_{1,1} \\ q_{2,1} \\ q_{3,1} \end{array}\right) \in \text{span}\left\{ c_1, c_2\right\} \right\}, \end{align} where $c_i$ is the $i$th column of $M_1(\Theta_3^*,b_3)$. Then $Z = \text{Im}\left(A(\Theta_3^*,0)\right)$. \end{lemma} \begin{proof} We choose the element $w \in \text{Im}(A(\Theta_3^*,0))^{\perp}$ given by \begin{align}\label{image_perp_element} w=\frac{1}{|c_1\times c_2 |}\cos(mx)\left( c_1 \times c_2 \right) \in \text{Im}(M_1(\Theta^*_3))^{\perp}, \end{align} where $c_1$ and $c_2$ are the first two column vectors in $M_1(\Theta^*_3)$, which are linearly independent. Since $\text{det}(M_1(\Theta_3^*))=0$ and $\text{det}(M_{n}(\Theta_3^*,b_3))\ne 0$ for all $n>1$, by Proposition \ref{Fredholm} it is clear that $\text{Im}\left( A(\Theta_3^*,0)\right)\subset Z$. In order to show the other direction, $Z \subset \text{Im}\left( A(\Theta_3^*,0)\right)$, we pick an element $Q=(Q_1,Q_2,Q_3)\in Z$, $Q_i=\sum_{n=1}^{\infty} q_{i,n}\sin(mnx)$, and we have $\lambda_Q^1$, $\lambda_Q^2\in \mathbb{R}$ such that \begin{align*} \begin{pmatrix} q_{1,1}\\ q_{2,1}\\ q_{3,1} \end{pmatrix} =\lambda_Q^1c_1+\lambda_Q^2c_2.
\end{align*} Thanks to \eqref{nonsingular1}, one can find a sequence of vectors $h^n:=(h_{1,n}, h_{2,n}, h_{3,n})$ such that \begin{align}\label{matrix_form2} (-mn)M_{n}(\Theta^*_3) h^n = (-mn)M_{n}(\Theta^*_3)\begin{pmatrix} h_{1,n} \\ h_{2,n} \\ h_{3,n} \end{pmatrix} = \begin{pmatrix} q_{1,n} \\ q_{2,n} \\ q_{3,n} \end{pmatrix} =: q^n. \end{align} Therefore it suffices to show that \begin{align}\label{regularity_image} \quad |h_{1,n}| + |h_{2,n}| + |h_{3,n}| \le C \left(|q_{1,n}| + \frac{|q_{2,n}|}{|mn|} +\frac{ |q_{3,n}|}{|mn|}\right), \quad \text{ for all sufficiently large $n$}, \end{align} where $C$ is independent of $n$. Once we have the above inequality, then it is clear that $Q = A(\Theta^*_3,0)[h]$ where \begin{align*} h_1(x)=\sum_{n=1}^\infty h_{1,n}\cos(mnx), \quad h_2(x)=\sum_{n=1}^\infty h_{2,n}\cos(mnx), \quad h_3(x)=\sum_{n=1}^\infty h_{3,n}\cos(mnx), \end{align*} and $h \in (H^{k+1})^3$, which follows from \begin{align*} \| h \|_{(H^{k+1})^3}^2 &= \sum_{n\ge 1}|h_{1,n}|^2|nm|^{2(k+1)} + |h_{2,n}|^2|nm|^{2(k+1)} +|h_{3,n}|^2|nm|^{2(k+1)}\\ & \lesssim \sum_{n\ge1}|mn|^{2(k+1)}\left(|q_{1,n}|^2 + \frac{|q_{2,n}|^2}{|mn|^2}+ \frac{|q_{3,n}|^2}{|mn|^2} \right)\\ & \lesssim \| Q_1 \|_{H^{k+1}}^2 + \|Q_2 \|_{H^{k}}^2 + \|Q_3 \|_{H^{k}}^2\\ & < \infty. \end{align*} This proves that $Z \subset \text{Im}\left( A(\Theta_3^*,0)\right)$. In order to prove \eqref{regularity_image}, we denote \[ (-mn)M_{n}(\Theta^*_3) =: \begin{pmatrix} a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} = \begin{pmatrix} a_{11} & 0 & 0 \\ 0 & a_{22} & 0 \\ 0 & 0 & a_{33}\end{pmatrix} + \begin{pmatrix} 0 & a_{12} & a_{13} \\ a_{21} & 0 & a_{23} \\ a_{31} & a_{32} & 0\end{pmatrix} =: D_n + S_n. \] Then \eqref{matrix_form2} is equivalent to \begin{align}\label{decomp_112} D_n h^n = q^n - S_n h^n.
\end{align} We claim that \begin{align}\label{claim_diagonal} d_{2}:=\Theta_2b_2+b_2+\Theta_3\frac{b_3^2}{b_2}\neq 0, \quad \text{ and } \quad d_{3}:=\Theta_3b_3+b_3+\Theta_2b_3\neq 0. \end{align} From \eqref{mn_formula}, and $0<b_3<b_2<1$, it follows that \[ D_n = \begin{pmatrix} -\frac{1}{2} & 0 & 0 \\ 0 & \frac{mn}{2}\left( d_2 - \frac{\Theta_2b_2}{nm} \right) & 0 \\ 0 & 0 & \frac{nm}{2}\left(d_3 - \frac{\Theta_3^*b_3}{nm} \right) \end{pmatrix} \quad \text{ and } | S_n h^n | \lesssim \kappa^{nm}|h^n|, \quad \text{ for some $\kappa\in(0,1)$.} \] Therefore, for sufficiently large $n$, $D_n$ is a non-singular diagonal matrix, and it follows from \eqref{decomp_112} \[ |h_{1,n}| \lesssim |q_{1,n}| + \kappa^{nm} |h^n|, \quad |h_{2,n}| \lesssim \frac{|q_{2,n}|}{|nm|} + \kappa^{nm} |h^n|, \quad |h_{3,n}| \lesssim \frac{|q_{3,n}|}{|nm|} + \kappa^{nm} |h^n|, \] which proves \eqref{regularity_image}. We are left to show \eqref{claim_diagonal}. From \eqref{singular}, \begin{align}\label{d2negative} d_2 = \Theta_2b_2+b_2+\Theta_3\frac{b_3^2}{b_2}=(\Theta_2b_2^2+b_2^2+\Theta_3b_3^2)\frac{1}{b_2}=\frac{b_2^2-1}{b_2} < 0. \end{align} For $d_3$, let us claim \begin{align}\label{non-zero} B_0(\frac{1+\Theta_2b_2^2}{\Theta_2+1})^{m}+B_1(\frac{1+\Theta_2b_2^2}{\Theta_2+1})+B_2\neq 0, \quad \text{ where $B_0,$ $B_1$ and $B_2$ are as in \eqref{determinant_f}}. \end{align} Plugging in $B_0, B_1, B_2$ from \eqref{determinant_f}, we can find that \eqref{non-zero} is equivalent to \[ -[\frac{1}{4m}(1-b_2^{-2})+\frac{\Theta_2(b_2^{m}-\frac{1}{b_2^m})}{4m^2b_2^m}]\frac{(1+\Theta_2b_2^2)^m}{(\Theta_2+1)^m}-\frac{1}{2}A_{3,3}-A_{3,3}(-\frac{1}{2}+\frac{1}{2m})\neq 0 \] \[ \Leftrightarrow-[(1-b_2^{-2})+\frac{\Theta_2}{m}(1-b_2^{-2m})](1+\Theta_2b_2^2)^m\neq [\frac{\Theta_2}{m}(1-b_2^{2m})+\frac{1}{b_2^2}(1-b_2^2)](\Theta_2+1)^m. \] According to \eqref{inequality 1}, \eqref{inequality 2} and \eqref{inequality 3}, the RHS is strictly positive and LHS is strictly negative. 
From Lemma \ref{kernel} and \eqref{determinant_f}, $B_0b_3^{2m}+B_1b_3^2+B_2=0$. Combined with \eqref{non-zero}, this gives $b_3^2\neq \frac{1+\Theta_2 b_2^2}{1+\Theta_2}=\frac{-\Theta_3b_3^2}{1+\Theta_2}$, and therefore \begin{align}\label{non-zero2} d_3 = (1+\Theta_2+\Theta_3)b_3\neq 0. \end{align} \end{proof} \begin{proofprop}{Fredholm} The result follows immediately from Lemmas~\ref{kernel} and \ref{Image_space}. Note that $v$ and $w$ are supported on the $m$th Fourier mode, thanks to \eqref{ker_element} and \eqref{def_Z}. \end{proofprop} \begin{lemma}\label{transversality_11} $\partial_{\Theta_3} A(\Theta_3^*,0)[v] \notin \text{Im}(A(\Theta_3^*,0))$. \end{lemma} \begin{proof} From \eqref{matrix_form1}, it follows that \[ \partial_{\Theta_3} A(\Theta^*_3,0)[h] = Q, \] where \[ Q:=\sum_{n\ge 1}\begin{pmatrix} q_{1,n} \\ q_{2,n} \\ q_{3,n}\end{pmatrix}\sin(nm x), \quad h=\sum_{n\ge 1}\begin{pmatrix} h_{1,n} \\ h_{2,n} \\ h_{3,n}\end{pmatrix}\cos(nm x), \quad \begin{pmatrix} q_{1,n} \\ q_{2,n} \\ q_{3,n}\end{pmatrix} = (-mn)\partial_{\Theta_3}M_{n}(\Theta^*_3) \begin{pmatrix}h_{1,n} \\ h_{2,n} \\ h_{3,n} \end{pmatrix}. \] Let us write \begin{align}\label{derivative matrix} M_1(\Theta_3^*) = \left( \begin{array}{ccc} a_{11} & a_{12} & a_{13}\\ a_{21} &a_{22} &a_{23}\\ a_{31}&a_{32}& a_{33} \end{array} \right), \quad CM_{33}=\left( \begin{array}{cccc} a_{11} & a_{12} \\ a_{21} &a_{22} \end{array} \right), \quad \partial_{\Theta_3} M_1(\Theta_3^*) =\left( \begin{array}{cccc} 0 & 0 & b_{13}\\ 0 &0 &b_{23}\\ b_{31}&b_{32}& b_{33} \end{array} \right), \end{align} where the vanishing elements in $\partial_{\Theta_3} M_1(\Theta_3^*)$ follow from \eqref{mn_formula}.
Note that \[ \text{det}(CM_{33}) = a_{11}a_{22} -a_{12}a_{21} = \frac{1}{2m}\left(-\frac{1}{2}d_2 + \frac{\Theta_2b_2}{2m}(1-b_2^{2m}) \right), \quad \text{ where $d_2$ is as in \eqref{claim_diagonal}.} \] From \eqref{d2negative}, we have \[ -\frac{1}{2}d_2 + \frac{\Theta_2b_2}{2m}(1-b_2^{2m}) = \frac{1-b_2^2}{2b_2} + \frac{\Theta_2b_2}{2m}(1-b_2^{2m}) > \frac{1-b_2^2}{2b_2} + \frac{b_2^2-1}{2} = \frac{1-b_2^2}{2}\left( \frac{1}{b_2} - 1\right) > 0, \] where the first inequality follows from \eqref{parameter_3}, and the last inequality follows from $0 < b_2 < 1$. Hence, \begin{align}\label{cm33det} \text{det}(CM_{33}) > 0, \end{align} and $CM_{33}$ is invertible. Since $ \text{Im}(M_1(\Theta^*_3))^{\perp}=\text{Ker}(M_1(\Theta^*_3)^T)$, we can choose \begin{align*} \tilde{v}=\left( \begin{array}{ccc} -(CM_{33})^{-1}\left(\begin{array}{ccc} a_{13}\\ a_{23} \end{array}\right)\\ 1 \end{array} \right), \quad \tilde{w}=\left( \begin{array}{ccc} -(CM_{33}^{T})^{-1}\left(\begin{array}{ccc} a_{31}\\ a_{32} \end{array}\right)\\ 1 \end{array} \right), \end{align*} so that $\tilde{v}\in \text{Ker}(M_1(\Theta^*_3))$ and $\tilde{w}\in \text{Im}(M_1(\Theta^*_3))^{\perp}$. Then $\partial_{\Theta_3} A(\Theta_3^*,0)[v] \notin \text{Im}(A(\Theta_3^*,0))$ is equivalent to \[ \partial_{\Theta_3}M_1(\Theta_3^*)\tilde{v} \cdot \tilde{w} \neq 0. \] According to \eqref{derivative matrix}, this is equivalent to \begin{align}\label{trans} -(b_{13}, b_{23}) {CM_{33}^T}^{-1}\left( \begin{array}{ccc} a_{31}\\a_{32} \end{array} \right)-(b_{31}, b_{32}){CM_{33}}^{-1} \left( \begin{array}{ccc} a_{13}\\a_{23} \end{array} \right) +b_{33} \ne 0.
\end{align} Using $\frac{db_3}{d\Theta_3}=-\frac{b_3}{2\Theta_3}$, which comes from $\Theta_3b_3^2+\Theta_2b_2^2+1=0$, we compute \begin{align*} {\partial_{\Theta_3}M_1(\Theta_3^*)}= \left( \begin{array}{ccc} 0&0&\frac{-(m-1)b_3^{m+1}}{4m}\\ 0&0&\frac{-b_3^{m+1}(m-1)}{4mb_2^m}\\ -\frac{b_3^m}{4\Theta^*_3}&\frac{-b_3^m\Theta_2}{4\Theta_3^* b_2^{m-1}}&\frac{b_3}{2}(-\frac{1}{2}+\frac{1}{2m})+\frac{b_3}{4\Theta^*_3}+\frac{\Theta_2 b_3}{4\Theta^*_3} \end{array} \right). \end{align*} Then \begin{align*} (b_{13}, b_{23}){CM_{33}^T}^{-1}\left( \begin{array}{ccc} a_{31}\\a_{32} \end{array} \right)=\frac{1}{\text{det}(CM_{33})}(\frac{-(m-1)}{4m}\frac{1}{2m})b_3^{2m+1}\left(1,\frac{1}{b_2^m}\right) \left( \begin{array}{ccc} \frac{1}{2m}\Theta_2 b_2-\frac{1}{2}b_2+\frac{1}{2b_2}&-\frac{b_2^m}{2m}\\ -\frac{\Theta_2 b_2^{m+1}}{2m}&\frac{1}{2m} \end{array} \right) \left( \begin{array}{ccc} 1\\ \frac{\Theta_2}{b_2^{m-1}} \end{array} \right), \end{align*} \begin{align*} \left(b_{31}, b_{32}\right) {CM_{33}}^{-1}\left( \begin{array}{ccc} a_{13}\\a_{23} \end{array} \right)=\frac{1}{\text{det}(CM_{33})}\frac{\Theta^*_3}{2m}(-\frac{b_3^{2m+1}}{4\Theta^*_3})\left(1,\frac{\Theta_2}{b_2^{m-1}}\right) \left( \begin{array}{ccc} \frac{1}{2m}\Theta_2 b_2-\frac{1}{2}b_2+\frac{1}{2b_2}&-\frac{\Theta_2 b_2^{m+1}}{2m}\\ -\frac{b_2^m}{2m}&\frac{1}{2m} \end{array} \right) \left( \begin{array}{ccc} 1\\ \frac{1}{b_2^{m}} \end{array} \right). \end{align*} So \eqref{trans} is equivalent to \begin{align*} \frac{1}{\text{det}(CM_{33})}(\frac{m-1}{8m^2}+\frac{1}{8m})b_3^{2m+1}\left(1,\frac{1}{b_2^m}\right) \left( \begin{array}{ccc} \frac{1}{2m}\Theta_2 b_2-\frac{1}{2}b_2+\frac{1}{2b_2}&-\frac{b_2^m}{2m}\\ -\frac{\Theta_2 b_2^{m+1}}{2m}&\frac{1}{2m} \end{array} \right) \left( \begin{array}{ccc} 1\\ \frac{\Theta_2}{b_2^{m-1}} \end{array} \right)+b_{33}\neq 0. 
\end{align*} \begin{align*} \Leftrightarrow(\frac{m-1}{8m^2}+\frac{1}{8m})b_3^{2m+1}\left(1,\frac{1}{b_2^m}\right) \left( \begin{array}{ccc} \frac{1}{2m}\Theta_2 b_2-\frac{1}{2}b_2+\frac{1}{2b_2}&-\frac{b_2^m}{2m}\\ -\frac{\Theta_2 b_2^{m+1}}{2m}&\frac{1}{2m} \end{array} \right) \left( \begin{array}{ccc} 1\\ \frac{\Theta_2}{b_2^{m-1}} \end{array} \right)+\text{det}(CM_{33})b_{33}\neq 0. \end{align*} \begin{align*} & \Leftrightarrow \text{det}(CM_{33})\left( (-\frac{1}{4}+\frac{1}{4m})b_3+\frac{b_3^3}{4(-1-\Theta_2 b_2^2)}+\frac{\Theta_2 b_3^3}{4(-1-\Theta_2 b_2^2)}\right) \\ & \ +(\frac{m-1}{8 m^2}+ \frac{1}{8m})b_3^{2m+1}(\frac{1}{2 b_2}-\frac{1}{2}b_2+\frac{\Theta_2}{2m b_2^{2m-1}}-\frac{b_2 \Theta_2}{2m})\neq 0, \end{align*} \begin{align}\label{nonzero_11} \Leftrightarrow (\frac{2m-1}{8m^2})b_3^{2m}(\frac{1}{2b_2}-\frac{1}{2}b_2+\frac{\Theta_2}{2m b_2^{2m-1}}-\frac{b_2\Theta_2}{2m})+\text{det}(CM_{33})\frac{1+\Theta_2}{4(-1-\Theta_2 b_2^2)}b_3^2+\text{det}(CM_{33})(-\frac{1}{4}+\frac{1}{4m})\neq 0. \end{align} To show \eqref{nonzero_11}, let us write \begin{align*} &C_0 := (\frac{2m-1}{8m^2})(\frac{1}{2b_2}-\frac{1}{2}b_2+\frac{\Theta_2}{2m b_2^{2m-1}}-\frac{b_2\Theta_2}{2m})\\ & C_1 :=\text{det}(CM_{33})\frac{1+\Theta_2}{4(-1-\Theta_2 b_2^2)} \\ & C_2 :=\text{det}(CM_{33})(-\frac{1}{4}+\frac{1}{4m}), \end{align*} then, it is enough to show that $g(b_3) : = C_0 b_3^{2m} + C_1 b_3^2 + C_2 \ne 0$. Note that we have \begin{align*} & \text{det}(CM_{33}) > 0, \quad \text{ from \eqref{cm33det}}\\ & (1+\Theta_2) < 0, \quad \text { from \eqref{parameter_3} and $0<b_2<1$}\\ & 1 + \Theta_2b_2^2 < 0, \quad \text{ from \eqref{singular} and $\Theta^*_3 >0$}, \end{align*} thus, $C_1 < 0$. We also have $C_2<0$ since $m>1$.
For $C_0$, we have \begin{align*} (\frac{1}{2 b_2}-\frac{1}{2}b_2+\frac{\Theta_2}{2m}(\frac{1}{b_2^{2m-1}}-b_2))&=\frac{b_2}{2}[\frac{1}{b_2^2}-1+\frac{\Theta_2}{m}(\frac{1}{b_2^{2m}}-1)]\\ &=\frac{b_2}{2}(\frac{1}{b_2^2}-1)[1+\frac{\Theta_2}{m}(1+\frac{1}{b_2^2}+...+\frac{1}{b_2^{2(m-1)}})]\\ &<\frac{b_2}{2}(\frac{1}{b_2^2}-1)(1+\frac{\Theta_2}{m}m)\\ & < 0, \end{align*} since $b_2<1$ and $\Theta_2\leq -\frac{1}{b_2^2}<-1$, which follows from \eqref{parameter_3}. This yields $C_0<0$. Therefore Descartes' rule of signs implies that there is no positive root of $g(x)=0$, which implies $g(b_3) \ne 0$, that is \eqref{nonzero_11}. \end{proof} \subsection{Analytic regularity of the boundary} \begin{theorem}\label{ofc} The solutions $R(s)\in (X^{k,m})^3$ constructed in Theorem~\ref{teoremaestacionarias2} are analytic when $s$ is sufficiently small. \end{theorem} \begin{proof} We use \cite[Theorem 1]{Kinderlehrer-Nirenberg:regularity-free-boundary} and \cite[Theorem 3.1]{Kinderlehrer-Nirenberg-Spruck:regularity-elliptic-free-boundary} to prove the analyticity. Let the stream function be \[\varphi=\sum_{i=1}^{3}\Theta_i1_{D_i}*\frac{1}{2\pi}\log(\cdot). \] We first consider the outermost boundary $\partial D_1$. For any $x_1\in\partial D_1$, we have \[ \Delta{\varphi}=\Theta_1 \quad \text{in } D_1\backslash \overline{D_2}, \] \[ \varphi=c_1 \quad \text{on } \partial D_1. \] From Lemma \ref{maximum}, we have \[ \nabla\varphi=0 \quad \text{on } \partial D_1. \] Moreover, from Theorem \ref{teoremaestacionarias2}, we know $\partial D_1$ is in $C^2$. Thus $\varphi$ is also in $C^2(\overline{D_1}\backslash \overline{D_2})$. Then by \cite[Theorem 1]{Kinderlehrer-Nirenberg:regularity-free-boundary}, we have the analyticity of $\partial D_1$. Now we consider $\partial D_2$. We have \begin{align*} \Delta{\varphi}=\begin{cases}\Theta_2+\Theta_1,& \text{in } D_2\backslash \overline{D_3},\\ \Theta_1,& \text{in } D_1\backslash\overline{D_2}, \end{cases} \end{align*} \[ \varphi=c_2 \quad \text{on } \partial D_2.
\] From Corollary \ref{integral_a0} and \eqref{d2negative}, at the bifurcation point, we have \begin{align*} &\partial_{n}\varphi(0)=\frac{1}{2\pi}\sum_{i=1}^{3}\Theta_i\int_{0}^{2\pi}\cos(x-y)b_i\log(b_i^2+b_2^2-2b_ib_2\cos(x-y))dy\\ &=\frac{1}{2\pi}(-2\pi)(\Theta_1b_1\frac{b_2}{b_1}+\Theta_2b_2+\Theta_3b_3\frac{b_3}{b_2})=-(\Theta_1b_2+\Theta_2b_2+\Theta_3\frac{b_3^2}{b_2})\neq 0. \end{align*} Thus when $s$ is sufficiently small, $\partial_{n}\varphi(x)\neq 0$. From Theorem \ref{teoremaestacionarias2}, we have that $\partial D_2$ is in $C^2$. So $\varphi$ is also in $C^2(\overline{D_2}\backslash D_3)\cap C^2(\overline{D_1}\backslash D_2)\cap C^1(\overline{D_1})$. Then we can use \cite[Theorem 3.1]{Kinderlehrer-Nirenberg-Spruck:regularity-elliptic-free-boundary} to get the result. The analyticity of $\partial D_3$ can be shown in the same way by using \eqref{non-zero2} instead of \eqref{d2negative}. \end{proof}
\section{Introduction} Suppose that our observations come from a Poisson process $X=\left\{X_t: t\geq 0\right\}$ whose arrival rate changes from $\lambda_0$ to $\lambda_1$ at some random time $\Theta$. The \emph{disorder time} $\Theta$ is unobservable but its prior distribution is known. We assume that the prior distribution of $\Theta$ is a phase-type distribution. This is the distribution of the time of death (absorption) of a non-conservative Markov process $M=\left\{M_t: t\geq 0\right\}$, whose state space is finite and includes a single absorbing state. Our problem is to find an alarm time $\tau$ which depends only on the past and the present observations and rings as soon as $\Theta$ occurs. Since $\Theta$ is unobservable, a detection rule $\tau$ will make false alarms or have detection delays. We will find a rule that optimally balances these two. We will choose a Bayesian risk that penalizes the sum of the frequency of false alarms and a multiple of detection delay as in \cite{MR2003i:60071}. So far in the literature of continuous-time Bayesian quickest detection problems, the distribution of the disorder time $\Theta$ is always taken to be the exponential distribution for analytical tractability, see e.g. \cite{galchuk71}, \cite{davis76}, \cite{Sh78}, \cite{MR2001m:62090,MR2003i:60071}, \cite{MR2002b:62088}, \cite{MR1985648}, \cite{BD04,bdk05}, \cite{ds}, \cite{BD03}. The disorder time in the works cited above is modeled as the first arrival time of a Poisson process that we do not observe. We will change the assumption on the nature of the arrivals for broader applicability and we will solve the Poisson disorder problem with a phase-type disorder distribution. This seems to strike a balance between generality and tractability. Indeed, any positive distribution may be approximated arbitrarily closely by phase-type distributions. See \cite{neuts} for this and other properties of this class of distributions.
Let $\{1,\cdots,n,\Delta\}$ denote the state space of $M$ where $\Delta$ is absorbing and the rest of the states are transient. To solve the Poisson disorder problem, we first show that it is equivalent to an optimal stopping problem for an $n+1$ dimensional piece-wise deterministic Markov process $\vec{\Pi}_t=[\Pi^{(1)}_t, \cdots, \Pi^{(n)}_t,\Pi_t]$, $t \geq 0$, whose $i$th coordinate is the posterior probability $\Pi^{(i)}_t=\P\left\{M_t=i\,|\,\mathcal F_t\right\}$ that the Markov chain $M$ is in state $i$ given the past observations $\mathcal F_t=\sigma\{X_s: 0 \leq s \leq t\}$ of $X$. The process $\Pi_t$, $t \geq 0$, is the posterior probability that the disorder has already occurred. All of the coordinates are driven by the same point process. We show that the optimal stopping time (of the filtration $\mathbb{F}=\{\mathcal F_t\}_{t \geq 0}$) is the hitting time of the process $\vec{\Pi}$ to some closed convex set $\Gamma$ with non-empty interior. We describe a numerical algorithm that approximates the optimal Bayes risk within any given positive error margin. Among the outputs of this algorithm are boundary curves that characterize $\varepsilon$-optimal stopping times. Once these curves are determined, the only thing an observer has to do is to ring the alarm as soon as $\vec{\Pi}$, which is completely determined by the observations of $X$, crosses one of these boundaries, continuously or via a jump. To see the efficacy of the numerical algorithm, we use it to approximate the minimum Bayes risk when the prior distribution of the disorder time has an Erlang or hyperexponential distribution with two non-absorbing states. The rest of the paper is organized as follows: In Section~\ref{sec:prob-stat}, we give a precise description of the problem and show that it is equivalent to solving an optimal stopping problem for the process $\vec{\Pi}$.
In Section~\ref{sec:sequential}, we show that the minimum Bayes risk can be uniformly approximated by a sequence of functions that can be constructed via an iterative application of an integral operator to the terminal penalty of the optimal stopping problem described in Section~\ref{sec:prob-stat}. A similar sequential approximation technique was employed by \cite{bdk05} in solving a Poisson disorder problem in which the disorder distribution was exponential and the post-disorder arrival rate was a random variable. The authors formulated the problem under an auxiliary probability measure as an optimal stopping problem for an $\mathbb{R}_+$-valued \emph{odds-ratio} process. If we used a formulation similar to theirs, we would obtain an optimal stopping problem with an unbounded continuation region. Therefore, that formulation is not suitable for numerical implementation. Also, the optimal stopping problem we consider involves a terminal penalty term and a running cost with no discount factor. In this section, we also show that an optimal stopping time exists, and we describe two different types of $\varepsilon$-optimal stopping times. In Section~\ref{sec:solution}, we describe a numerical algorithm that can approximate the optimal Bayes risk to a given level of accuracy. Finally, Section~\ref{sec:examples} provides several examples illustrating our solution. The Appendix contains the longer proofs. \section{Problem Statement}\label{sec:prob-stat} Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space hosting two independent Poisson processes $(X_t^{(0)})_{t \geq 0}$ and $(X_t^{(1)})_{t \geq 0}$ with intensities $\lambda_0$ and $\lambda_1$, respectively, and an independent continuous-time Markov chain $M=(M_t)_{t \geq 0}$ with state space \begin{equation} E \triangleq \{1,2,\cdots,n,\Delta\}. \end{equation} Here, $\Delta$ is an absorbing state and all the other states are transient.
The infinitesimal generator of $M$, which we denote by $\mathcal{A}=(q_{i j})_{i,j \in E}$, is of the form \begin{equation}\label{eq:matrix} \mathcal{A}= \begin{pmatrix} R & r \\ \mathbf{0} & 0 \end{pmatrix}, \end{equation} where the $n \times 1$ vector $r$ is non-negative, the $n \times n$ matrix $R$ is nonsingular, and $\mathbf{0}$ is the $1 \times n$ zero row vector corresponding to the absorbing state $\Delta$. The matrix $R$ has negative diagonal and nonnegative off-diagonal entries. Moreover $R$ and $r$ satisfy $R\cdot \vec{1} + r = \vec{0}$. For a point $\vec{\pi}=[\pi_1, \pi_2, \cdots, \pi_n, \pi] $ in \begin{equation} D\triangleq \{\vec{\pi}\in [0,1]^{n+1}: \sum_{i=1}^n \pi_{i} + \pi =1\}, \end{equation} let $\P$ denote the probability measure $\mathbb{P}$ such that the process $M$ has initial distribution $\vec{\pi}$. That is, \begin{equation} \label{def:P-pi} \P \{ A \} = \pi_1 \, \mathbb{P} \{A | M_0=1\} + \ldots + \pi_n \, \mathbb{P} \{A | M_0=n\} + \pi \mathbb{P} \{A | M_0=\Delta\}, \end{equation} for all $A \in \mathcal{F}$. The absorption time of $M$ is defined as $\Theta \triangleq \inf\{t>0: M_t =\Delta\}$, and its distribution is denoted by \begin{equation} \label{eq:distribution-of-theta-under-P} F_{\vec{\pi}}(t) \triangleq \P\{\Theta \leq t \}=1-[\pi_1, \pi_2, \cdots, \pi_n] \cdot \exp(tR) \cdot \vec{1}, \quad 0 \leq t < \infty. \end{equation} Here, $\Theta$ is said to have a phase-type distribution, see e.g. \cite{neuts}. The processes $X^{(0)}$, $X^{(1)}$ and $M$ are unobservable. Rather we observe \begin{equation} X_t=\int_0^{t}1_{\{s < \Theta\}}dX_s^{(0)}+\int_0^{t}1_{\{s \geq \Theta\}}dX_s^{(1)}, \end{equation} whose natural filtration will be denoted by $\mathbb F = \{\mathcal F_t\}_{t\ge 0}$. Let us define $\mathbb{G}=\{\mathcal{G}_t\}_{t \geq 0}$ as an initial enlargement of $\mathbb{F}$ by setting $\mathcal{G}_t \triangleq \mathcal{F}_t \vee \sigma\{M_s: s\geq 0 \}$. That is, $\mathcal{G}_t$ is the information available to a \emph{genie} at time $t$ who is given the paths of the process $M_t, t \geq 0$.
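To make the phase-type formula \eqref{eq:distribution-of-theta-under-P} concrete (an illustration only; the rate below is an arbitrary sample value), one can check symbolically that a chain with two transient states moving $1\to 2\to\Delta$ at a common rate $\lambda$ has an Erlang$(2,\lambda)$ absorption time:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
lam = sp.Rational(3, 2)               # sample rate; any lam > 0 works the same way

# Sub-generator R over the transient states {1, 2}: the chain moves 1 -> 2 -> Delta,
# each transition at rate lam, so the absorption time Theta is Erlang(2, lam).
R = sp.Matrix([[-lam, lam], [0, -lam]])
alpha = sp.Matrix([[1, 0]])           # initial distribution [pi_1, pi_2]; here pi = 0
one = sp.ones(2, 1)

F = 1 - (alpha * (R * t).exp() * one)[0]          # F_pi(t) = 1 - alpha exp(tR) 1
erlang2_cdf = 1 - sp.exp(-lam * t) * (1 + lam * t)
assert sp.simplify(F - erlang2_cdf) == 0
```

Taking $R$ diagonal with distinct rates and splitting the initial mass between the two states gives a mixture of exponentials instead.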
If the paths of $M_t$, $t \geq 0$, are available at time 0, then the observations come from a process $X$ that is a Poisson process with rate $\lambda_0$ on the time interval $[0,\Theta)$ and with rate $\lambda_1$ on $[\Theta,\infty)$ for known positive constants $\lambda_0$ and $\lambda_1$. Specifically, the observation process $X$ is a counting process such that $X_t - \int^t_{0} \left[\lambda_0 1_{\{s<\Theta\}}+ \lambda_1 1_{\{s\ge \Theta\}} \right]ds$, $t\ge 0$, is a $(\P,\mathbb{G})$-martingale. The crucial feature here is that $\Theta$ is neither known nor observable; only the process $X$ is observable. The problem is then to find a quickest detection rule for the disorder time $\Theta$, which is \emph{adapted to} the history $\mathbb F$ generated by the observed process $X$ only. A detection rule is a stopping time $\tau$ of the filtration $\mathbb{F}$, and we will denote the set of these stopping times by $\mathcal{S}$. Our objective is to find an element of $\mathcal{S}$ minimizing the Bayes risk \begin{equation}\label{eq:bayes-risk} R_\tau(\vec{\pi}) \triangleq \P\{\tau<\Theta\} + c\,\mathbb{E}^{\vec{\pi}}(\tau-\Theta)^+, \end{equation} for some positive constant $c$. Here $a^{+}=\max(a,0)$ for any $a \in \mathbb R$. The first term in (\ref{eq:bayes-risk}) penalizes the frequency of false alarms and the second term penalizes the detection delay. \begin{remark} In order to minimize $R_\tau(\vec{\pi})$ in $\mathcal{S}$, it is enough to consider stopping times with bounded expectation. Indeed, if $\mathbb{E}^{\vec{\pi}}\{\tau\}>1/c+\mathbb{E}^{\vec{\pi}}\{\Theta\}$, then $R_{\tau}(\vec{\pi})\geq c \left(\mathbb{E}^{\vec{\pi}}\{\tau\}-\mathbb{E}^{\vec{\pi}}\{\Theta\} \right)> 1$, which is greater than the cost incurred upon stopping immediately. In the remainder we will use $\mathcal{S}_f$ to denote the class of $\mathbb{F}$-stopping times whose expectations are less than or equal to $1/c+\mathbb{E}^{\vec{\pi}}\{\Theta\}$.
\end{remark} Our objective is then to compute \begin{equation}\label{eq:value-function} V(\vec{\pi}) \triangleq \inf_{\tau \in \mathcal{S}_f }R_{\tau}(\vec{\pi})=R_{\tau^*}(\vec{\pi}), \quad \text{for all} \quad \vec{\pi} \in D, \end{equation} and to identify a rule $\tau^{*}$ (if there exists one) for which this infimum is attained. Note also that we have $0 \leq V(\vec{\pi}) \leq 1$ for all $\vec{\pi} \in D$. \begin{remark}\label{rem:val-func} Let us introduce the posterior probability distribution $\vec{\Pi}_t \triangleq [\Pi^{(1)}_t, \cdots, \Pi^{(n)}_t,\Pi_t]$, $t \geq 0$, where \begin{equation}\label{eq:posterior-prob} \Pi_t \triangleq \P\{\Theta \leq t| \mathcal F_t\}=\P\left\{M_t=\Delta|\mathcal F_t\right\}, \quad \text{and} \quad \Pi^{(i)}_t \triangleq \P\left\{M_t=i|\mathcal F_t\right\},\quad t \geq 0, \end{equation} for $i \in \{1,\cdots,n\}$. Using the identities $\P\{\tau<\Theta\}=\mathbb{E}^{\vec{\pi}}\{1-\Pi_{\tau}\}$ and \begin{equation}\label{eq:peanlty-in-t-Pi} \mathbb{E}^{\vec{\pi}}\{(\tau-\Theta)^+\}=\mathbb{E}^{\vec{\pi}}\left\{\int_0^{\tau}1_{\{\Theta\leq t \}}dt\right\}=\mathbb{E}^{\vec{\pi}}\left\{\int_0^{\infty}1_{\{\Theta\leq t \}} 1_{\{\tau>t\}} dt\right\}=\mathbb{E}^{\vec{\pi}}\left\{\int_0^{\tau}\Pi_t dt\right\}, \end{equation} we can represent the value function in (\ref{eq:value-function}) in terms of the posterior probability distribution as \begin{equation}\label{eq:value-func} V(\vec{\pi})=\inf_{\tau \in \mathcal{S}_f}\mathbb{E}^{\vec{\pi}}\left\{\int_0^{\tau}k(\vec{\Pi}_t)dt+h(\vec{\Pi}_{\tau})\right\}, \end{equation} in which \begin{equation} k(\vec{\pi}) \triangleq c \pi, \quad \text{and} \quad h(\vec{\pi}) \triangleq 1-\pi, \end{equation} for all $\vec{\pi}=[\pi_1, \pi_2, \cdots, \pi_n, \pi] \in D$. \end{remark} \begin{remark} It follows from (\ref{eq:value-func}) that \begin{equation} V(\vec{\pi})\leq h(\vec{\pi})=1-\pi, \end{equation} for all $\vec{\pi}=[\pi_1, \pi_2, \cdots, \pi_n, \pi] \in D$.
\end{remark} \begin{lemma} Let us define the hazard rate of the distribution $F_{\vec{\pi}}$ of $\Theta$ as \begin{equation} \eta(t)=\frac{F_{\vec{\pi}}'(t)}{1-F_{\vec{\pi}}(t)}, \qquad \text{ for $t > 0$}. \end{equation} Then the a-posteriori probability process $(\Pi_t)_{t \geq 0}$ satisfies \begin{equation}\label{eq:pi-h} d\Pi_t=[\eta(t)-(\lambda_1-\lambda_0)\Pi_t](1-\Pi_t)dt+\frac{(\lambda_1-\lambda_0)\Pi_{t-} (1-\Pi_{t-}) }{\lambda_0 (1-\Pi_{t-})+\lambda_1 \Pi_{t-}}dX_t, \end{equation} with $\Pi_0=\pi$. \end{lemma} \begin{proof} We will first introduce a reference probability measure $\P_0$ under which the processes $M$ and $X$ are independent. Moreover, the probability law of $M$ under $\P_0$ will remain unchanged. Let us introduce \begin{align} Z_t \triangleq \exp \left\{\int^t_0 \log \left(\frac{H(s)}{\lambda_0}\right)\, dX_s - \int^t_0 [H(s)-\lambda_0] ds \right\}, \quad t\ge 0, \end{align} in which $H(s) \triangleq \lambda_0 1_{\{s<\Theta\}}+ \lambda_1 1_{\{s\ge \Theta\}}$. Using the process $Z$ we can define a new probability measure $\P_0$ on $(\Omega, \mathbb{G})$ locally in terms of the Radon-Nikodym derivatives \begin{align} \label{eq:Radon-Nikodym-derivative} \left.\frac{d\P_0}{d\P} \right|_{\mathcal G_t} = \frac{1}{Z_t} = 1_{\{\Theta>t\}} + 1_{\{\Theta \le t\}} \frac{L_{\Theta}}{L_t} \end{align} for every $0\le t <\infty$, where \begin{align} \label{eq:likelihood-ratio-when-both-rates-are-known} L_t \triangleq \left(\frac{\lambda_1}{\lambda_0} \right)^{X_t}e^{-(\lambda_1-\lambda_0)t}. \end{align} Under the measure $\P_0$, the process $Z$ is a martingale, $X$ is a Poisson process with intensity $\lambda_0$ and is independent of $M$ (see e.g. Section 2 in \cite{bdk05}, or Appendix A1 in \cite{ds}). Moreover, $\P$ and $\P_0$ coincide on $\mathcal{G}_0=\sigma\{ M_s; s\ge 0 \} $, therefore $\P_0\{\Theta \leq t\}=F_{\vec{\pi}}(t)$. Using the Bayes rule (see e.g.
\cite{MR2001k:60001a}) (this is also known as the Kallianpur-Striebel formula) we obtain \begin{equation}\label{eq:piandbarpi} \Pi_t=\P\{\Theta \leq t|\mathcal F_t\}=\frac{\mathbb{E}^{\vec{\pi}}_0\{Z_t 1_{\{\Theta \leq t \}}| \mathcal F_t \}}{\mathbb{E}^{\vec{\pi}}_0\{Z_t|\mathcal F_t\}}, \quad 1-\Pi_t=\frac{\mathbb{E}^{\vec{\pi}}_0\{1_{\{\Theta>t\}}|\mathcal F_t\}}{\mathbb{E}^{\vec{\pi}}_0\{Z_t|\mathcal F_t\}}=\frac{(1-F_{\vec{\pi}}(t))}{\mathbb{E}^{\vec{\pi}}_0\{Z_t|\mathcal F_t\}}. \end{equation} Here, to derive the second equality in the second equation we used the independence of $\Theta$ and $X$ under $\P_0$. Let us define \emph{the odds ratio} process \begin{equation}\label{eq:odds-ratio} \Phi_t \triangleq \frac{\Pi_t}{1-\Pi_t}, \quad 0 \leq t <\infty. \end{equation} Using (\ref{eq:piandbarpi}) we can obtain a new representation for the odds ratio process \begin{equation}\label{eq:phi-t-L} \Phi_t=\frac{\mathbb{E}^{\vec{\pi}}_0\{Z_t 1_{\{\Theta \leq t \}}| \mathcal F_t \}}{(1-F_{\vec{\pi}}(t))} =\frac{1}{1-F_{\vec{\pi}}(t)}\left[\pi L_t + \int_0^t \frac{L_t}{L_s} F'_{\vec{\pi}}(s) ds\right]. \end{equation} Here, again we used the independence of $\Theta$ and $\mathbb{F}$. The process $L=\{L_t, t\ge 0\}$ in (\ref{eq:likelihood-ratio-when-both-rates-are-known}) is a $(\P_0, \mathbb F)$-martingale and is the unique locally bounded solution of the equation \begin{align*} dL_t = [(\lambda_1/\lambda_0)-1]L_{{t-}} (dX_t-\lambda_0 dt),\qquad L_0=1; \end{align*} see, e.g., \cite{RY99} or \cite{MR2003j:60001}. Applying the chain rule to (\ref{eq:phi-t-L}), we get \begin{equation}\label{eq:dyn-phi-with-h} d \Phi_t= \eta(t)(1+\Phi_t)dt+ \Phi_{t-}\left(\frac{\lambda_1}{\lambda_0}-1\right)d(X_t-\lambda_0 t), \quad \Phi_0=\frac{\pi}{1-\pi}. \end{equation} By another application of the chain rule to (\ref{eq:odds-ratio}) together with (\ref{eq:dyn-phi-with-h}) we obtain (\ref{eq:pi-h}).
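Before closing the proof, here is an independent symbolic sanity check of the drift in (\ref{eq:dyn-phi-with-h}) (illustrative only; the rates and prior below are sample values): in the one-phase case $F_{\vec{\pi}}(t)=1-(1-\pi)e^{-rt}$, so that $\eta(t)=r$, on an interval carrying no arrivals (where $X_t=0$ and $L_t=e^{-(\lambda_1-\lambda_0)t}$) the representation (\ref{eq:phi-t-L}) satisfies $d\Phi_t=[\eta(t)(1+\Phi_t)-(\lambda_1-\lambda_0)\Phi_t]\,dt$:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
lam0, lam1 = 1, 3                     # sample rates lambda_0 < lambda_1
r = sp.Rational(1, 2)                 # sample hazard rate of the exponential phase
pi0 = sp.Rational(1, 5)               # sample atom pi = P{M_0 = Delta}
a = lam1 - lam0

# One transient state: F(t) = 1 - (1 - pi) exp(-r t), F'(s) = (1 - pi) r exp(-r s),
# eta(t) = r.  With no arrivals on [0, t]: X_t = 0 and L_t = exp(-a t).
L = sp.exp(-a * t)
Phi = (pi0 * L
       + sp.integrate(L * sp.exp(a * s) * (1 - pi0) * r * sp.exp(-r * s), (s, 0, t))
      ) * sp.exp(r * t) / (1 - pi0)   # eq. (phi-t-L), divided by 1 - F(t)

# Between jumps, eq. (dyn-phi-with-h) reduces to dPhi/dt = r (1 + Phi) - a Phi.
residual = sp.simplify(sp.diff(Phi, t) - (r * (1 + Phi) - a * Phi))
assert residual == 0
assert sp.simplify(Phi.subs(t, sp.Integer(0)) - pi0 / (1 - pi0)) == 0  # Phi_0 = pi/(1-pi)
```

The same computation goes through for any choice of the constants, since only $L'_t=-aL_t$ and $\eta=F'/(1-F)$ enter the verification.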
\end{proof} \begin{proposition} \label{prop:dynamics-of-pi} The dynamics of the posterior probability distribution $\vec{\Pi}_t=[\Pi^{(1)}_t, \cdots, \Pi^{(n)}_t,\Pi_t]$, $t \geq 0$, which is defined in (\ref{eq:posterior-prob}), is given by \begin{align} d\Pi_t&=\left(\sum_{j=1}^{n}q_{j \Delta}\Pi^{(j)}_t-(\lambda_1-\lambda_0)\Pi_t(1-\Pi_t)\right)dt+\frac{(\lambda_1-\lambda_0)\Pi_{t-}(1-\Pi_{t-})} {\lambda_0(1-\Pi_{t-})+\lambda_1 \Pi_{t-}}dX_t, \label{eq:dyn-pi-t} \\ d\Pi_t^{(i)}&=\left(\sum_{j=1}^{n}q_{j i}\Pi^{(j)}_t+(\lambda_1-\lambda_0)\Pi_t \Pi_t^{(i)}\right)dt- \frac{(\lambda_1-\lambda_0)\Pi_{t-}\Pi^{(i)}_{t-}} {\lambda_0(1-\Pi_{t-})+\lambda_1 \Pi_{t-}}dX_t, \label{eq:dyn-pi-i-t} \end{align} for $i \in \{1,\cdots,n\}$, and with $\vec{\Pi}_0=[\pi_{1}, \cdots, \pi_{n},\pi]$. \end{proposition} \begin{proof} First, observe that the hazard rate function of the distribution $F_{\vec{\pi}}$ can be written as \begin{equation}\label{eq:hr} \eta(t)=\frac{\sum_{i=1}^{n} \P\{M_t=i\} q_{i \Delta}}{1-F_{\vec{\pi}}(t)}. \end{equation} On the other hand, \begin{align}\label{eq:pi-t} \begin{aligned} \Pi_t^{(i)}=\mathbb{E}^{\vec{\pi}}\{1_{\{M_t=i\}}|\mathcal F_t\} &=\frac{\mathbb{E}^{\vec{\pi}}_0\{Z_t 1_{\{M_t=i\}}|\mathcal F_t\}}{\mathbb{E}^{\vec{\pi}}_0\{Z_t|\mathcal F_t\}} \\ &=\frac{\mathbb{E}^{\vec{\pi}}_0\{ 1_{\{M_t=i\}}|\mathcal F_t\}}{\mathbb{E}^{\vec{\pi}}_0\{Z_t|\mathcal F_t\}}=\frac{\mathbb{E}^{\vec{\pi}}_0\{ 1_{\{M_t=i\}}\}}{\mathbb{E}^{\vec{\pi}}_0\{Z_t|\mathcal F_t\}}=\frac{\mathbb{E}^{\vec{\pi}}\{ 1_{\{M_t=i\}}\}}{\mathbb{E}^{\vec{\pi}}_0\{Z_t|\mathcal F_t\}}, \end{aligned} \end{align} in which $\mathbb{E}^{\vec{\pi}}_0$ denotes the expectation under the measure $\P_0$ which we introduced in (\ref{eq:Radon-Nikodym-derivative}).
The second equality in this equation follows from Bayes' formula, the third equality follows from the definition of $Z$ in (\ref{eq:Radon-Nikodym-derivative}), the fourth equality follows from the independence of $M$ and $X$ under the measure $\P_0$, and, finally, the fifth equality follows from the fact that under the measures $\P$ and $\P_0$ the law of $M$ is the same. From (\ref{eq:piandbarpi}) and (\ref{eq:pi-t}) it is immediate that \begin{equation}\label{eq:pi-i-over-1-pi} \frac{\Pi^{(i)}_t}{1-\Pi_t}=\frac{\P\{M_t=i\}}{1-F_{\vec{\pi}}(t)}, \quad i\in\{1, \cdots,n\}. \end{equation} Then, from (\ref{eq:pi-t}) and (\ref{eq:pi-i-over-1-pi}) it follows that \begin{equation}\label{eq:alt-for-h} \eta(t)=\frac{\sum_{i=1}^{n}q_{i \Delta} \Pi_t^{(i)}}{1-\Pi_t}. \end{equation} This equation together with (\ref{eq:pi-h}) yields (\ref{eq:dyn-pi-t}). We will now derive the dynamics of $(\Pi_t^{(i)})_{t \geq 0}$, $i \in \{1,\cdots n \}$. Let $p_{ij}(t) \triangleq \P\{M_t=j|M_0=i\}$ denote the transition probabilities of the process $M$. Recall that $t \rightarrow p_{ij}(t)$, $t \geq 0$, satisfies the forward Kolmogorov equation, i.e., \begin{equation}\label{eq:kolmogorov} \frac{dp_{ij}(t)}{dt}=\sum_{k=1}^{n}q_{k j}p_{ik}(t), \end{equation} and that \begin{equation}\label{eq:markov-chain-prob-m-i} \P\{M_t=i\}=\sum_{j=1}^{n}\pi_j p_{ji}(t). \end{equation} Now, applying the chain rule to (\ref{eq:pi-i-over-1-pi}) we obtain \begin{equation} \begin{split} d\Pi_t^{(i)}&=-\frac{\Pi_t^{(i)}}{1-\Pi_t} d\Pi_t+ (1-\Pi_t)\frac{\sum_{j=1}^{n}\pi_j \sum_{k=1}^{n} q_{ki} p_{jk}(t)+\sum_{j=1}^{n} \pi_j p_{ji}(t) \eta(t)}{1-F_{\vec{\pi}}(t)} dt \\&= -\frac{\Pi_t^{(i)}}{1-\Pi_t} d\Pi_t+(1-\Pi_t) \left( \frac{\sum_{k=1}^{n} \P(M_t=k) q_{ki}}{1-F_{\vec{\pi}}(t)}+ \eta(t)\frac{\P\{M_t=i\}}{1-F_{\vec{\pi}}(t)} \right) dt \\&=-\frac{\Pi_t^{(i)}}{1-\Pi_t} d\Pi_t+ \left( \sum_{k=1}^{n}\Pi_{t}^{(k)}q_{ki}+\eta(t)\Pi^{(i)}_t \right) dt.
\end{split} \end{equation} The first line follows from (\ref{eq:kolmogorov}), and the second follows from (\ref{eq:markov-chain-prob-m-i}). The last line is a result of the identity in (\ref{eq:pi-i-over-1-pi}). This equation, together with (\ref{eq:dyn-pi-t}) and (\ref{eq:alt-for-h}), gives (\ref{eq:dyn-pi-i-t}). \end{proof} \begin{remark} \label{rem:deterministic-paths} Let $\vec{x}(t,\vec{\pi}) \triangleq (x_1(t,\vec{\pi}), \cdots , x_{n}(t,\vec{\pi}), x_{\Delta}(t,\vec{\pi}))$ be the solution of the system of ordinary differential equations \begin{equation}\label{eq:dyn-x} \begin{split} dx_{\Delta}(t, \vec{\pi})&=\left(\sum_{j=1}^{n}q_{j \Delta} x_{j}(t, \vec{\pi})- (\lambda_1-\lambda_0)x_{\Delta}(t, \vec{\pi})(1-x_{\Delta}(t, \vec{\pi})) \right)dt, \quad \text{with }x_{\Delta}(0 ,\vec{\pi})=\pi, \\ dx_{i}(t,\vec{\pi})&=\left(\sum_{j=1}^{n} q_{ji} x_j(t, \vec{\pi})+ (\lambda_1-\lambda_0) x_{\Delta}(t,\vec{\pi}) x_{i}(t,\vec{\pi})\right)dt, \quad \text{with }x_{i}(0 , \vec{\pi})=\pi_i, \end{split} \end{equation} for $i \in \{1, \cdots,n\}$. Due to Kolmogorov's forward equations, the solution of this system of equations can be written as \begin{align} \label{eq:solns-of-system} \begin{aligned} x_{\Delta}(t,\vec{\pi})&= \frac{\pi e^{-(\lambda_1-\lambda_0)t}+ \int_{0}^{t}e^{-(\lambda_1-\lambda_0)(t-s)}F'_{\vec{\pi}}(s)ds}{1-F_{\vec{\pi}}(t)+ \pi e^{-(\lambda_1-\lambda_0)t}+ \int_{0}^{t}e^{-(\lambda_1-\lambda_0)(t-s)}F'_{\vec{\pi}}(s)ds}, \\ x_{i}(t,\vec{\pi})&= \frac{\sum_{j=1}^{n} \pi_j p_{ji}(t)}{1-F_{\vec{\pi}}(t)+ \pi e^{-(\lambda_1-\lambda_0)t}+ \int_{0}^{t}e^{-(\lambda_1-\lambda_0)(t-s)}F'_{\vec{\pi}}(s)ds}, \qquad \text{for }i \in \{1, \cdots n\}, \end{aligned} \end{align} in terms of the transition probabilities $p_{ij}(t) \triangleq \P\{M_t=j|M_0=i\}$, for $i,j \in E$.
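The closed-form expressions in (\ref{eq:solns-of-system}) can be checked numerically. The following sketch treats the simplest case $n=1$ with a single transient state absorbed at rate $q$ (so $F_{\vec{\pi}}$ is exponential with an atom $\pi$ at zero, $F'_{\vec{\pi}}(s)=(1-\pi)qe^{-qs}$ and $p_{11}(t)=e^{-qt}$): it integrates the system (\ref{eq:dyn-x}) with a Runge-Kutta scheme and compares the result with the closed form. All parameter values are illustrative.

```python
import math

# Sanity check of the closed-form solution of (eq:solns-of-system) for the
# case n = 1: one transient state with absorption rate q, so that
# Theta ~ pi0*delta_0 + (1 - pi0)*Exp(q).  Illustrative parameters only.
lam0, lam1, q, pi0 = 1.0, 3.0, 0.5, 0.2
a = lam1 - lam0                      # lambda_1 - lambda_0 (chosen != q)

def x_closed(t):
    """Closed-form (x_Delta, x_1) from the displayed formulas, n = 1."""
    surv = (1.0 - pi0) * math.exp(-q * t)                    # 1 - F(t)
    conv = (1.0 - pi0) * q * (math.exp(-q * t) - math.exp(-a * t)) / (a - q)
    num = pi0 * math.exp(-a * t) + conv        # numerator of x_Delta
    den = surv + num
    return num / den, surv / den

def rhs(xd, x1):
    """Right-hand side of the ODE system for (x_Delta, x_1); q_11 = -q."""
    return q * x1 - a * xd * (1.0 - xd), -q * x1 + a * xd * x1

# Classical RK4 integration up to t = 2, then compare with the closed form.
xd, x1, h = pi0, 1.0 - pi0, 1e-3
for _ in range(2000):
    k1 = rhs(xd, x1)
    k2 = rhs(xd + h / 2 * k1[0], x1 + h / 2 * k1[1])
    k3 = rhs(xd + h / 2 * k2[0], x1 + h / 2 * k2[1])
    k4 = rhs(xd + h * k3[0], x1 + h * k3[1])
    xd += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    x1 += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
cd, c1 = x_closed(2.0)
assert abs(xd - cd) < 1e-8 and abs(x1 - c1) < 1e-8
```

The agreement is exact up to the integrator's discretization error, since the displayed formulas solve the system identically.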
Moreover, the expressions in (\ref{eq:solns-of-system}) are equivalent to \begin{equation}\label{eq:alternative-esp-for-x} \begin{split} x_{\Delta}(t, \vec{\pi})=\frac{\mathbb{E}^{\vec{\pi}}\left\{1_{\{t \geq \theta\}}e^{-(\lambda_1-\lambda_0)(t-\theta)}\right\}}{\mathbb{E}^{\vec{\pi}}\left\{e^{-(\lambda_1-\lambda_0)(t-\theta)^+}\right\}}, \quad \text{and} \quad x_{i}(t,\vec{\pi})=\frac{\P\left\{M_t=i\right\}}{\mathbb{E}^{\vec{\pi}}\left\{e^{-(\lambda_1-\lambda_0)(t-\theta)^{+}}\right\}}, \end{split} \end{equation} for $i \in \{1,\cdots,n\}$. \end{remark} Using the Markov property of $M$ and (\ref{eq:alternative-esp-for-x}), we have \begin{align} \label{eq:probability-for-ts} \begin{aligned} \P \{ M_{t+s} = i\} &= \sum_{ j=1}^n \P\{ M_t =j\} \cdot \P \{ M_{t+s} = i | M_t =j \} \\ &= \mathbb{E}^{\vec{\pi}}\left\{e^{-(\lambda_1-\lambda_0)(t-\theta)^+}\right\} \sum_{ j=1}^n x_j (t, \vec{\pi} ) \cdot \P \{ M_{t+s} = i | M_t =j \} \\& = \mathbb{E}^{\vec{\pi}}\left\{e^{-(\lambda_1-\lambda_0)(t-\theta)^+}\right\} \cdot \mathbb{P}^{\vec{x}(t,\vec{\pi})} \{ M_{s} = i\} \end{aligned} \end{align} for $i \le n$, and \begin{align} \label{eq:expectation-for-ts} \begin{aligned} \mathbb{E}^{\vec{\pi}} &\left\{e^{-(\lambda_1-\lambda_0)(t+s-\theta)^+}\right\} = \mathbb{E}^{\vec{\pi}}\left\{ \mathbb{E}^{\vec{\pi}} \left\{ e^{-(\lambda_1-\lambda_0)(t+s-\theta)^+} \Big| M_s ; s\le t \right\} \right\} \\ &= \mathbb{E}^{\vec{\pi}} \left\{\sum_{j=1}^n 1_{ \{ M_t =j \} } \cdot \mathbb{E}^{\vec{\pi}} \left\{ e^{-(\lambda_1-\lambda_0)(s-\theta)^+} \big| M_0 =j \right\} + 1_{ \{ M_t =\Delta \} } \cdot e^{-(\lambda_1-\lambda_0)(t+s-\theta)} \right\} \\ &= \sum_{j=1}^n \P \{ M_t =j \} \mathbb{E}^{\vec{\pi}} \left\{ e^{-(\lambda_1-\lambda_0)(s-\theta)^+} \big| M_0 =j \right\} + e^{-(\lambda_1-\lambda_0)s} \cdot \mathbb{E}^{\vec{\pi}} \left\{ 1_{ \{ t \ge \Theta \} } \cdot e^{-(\lambda_1-\lambda_0)(t-\theta)} \right\} \\ &= \mathbb{E}^{\vec{\pi}}\left\{e^{-(\lambda_1-\lambda_0)(t-\theta)^+}\right\}
\left[ \sum_{j=1}^n x_{j} (t, \vec{\pi})\cdot \mathbb{E}^{\vec{\pi}} \left\{ e^{-(\lambda_1-\lambda_0)(s-\theta)^+} \big| M_0 =j \right\} + x_{\Delta}(t,\vec{\pi}) \cdot e^{-(\lambda_1-\lambda_0)s} \right] \\ &= \mathbb{E}^{\vec{\pi}}\left\{e^{-(\lambda_1-\lambda_0)(t-\theta)^+}\right\} \cdot \mathbb{E}^{\vec{x}(t,\vec{\pi})} \left\{e^{-(\lambda_1-\lambda_0)(s-\theta)^+}\right\}. \end{aligned} \end{align} Using (\ref{eq:probability-for-ts}) and (\ref{eq:expectation-for-ts}), it is now easy to see that $t \mapsto \vec{x}(t,\vec{\pi})$ has the semi-group property $\vec{x}(t+s, \vec{\pi} ) = \vec{x}( t, \vec{x}(s,\vec{\pi}) ) $. Then, the dynamics in (\ref{eq:dyn-pi-t}), (\ref{eq:dyn-pi-i-t}) and Remark~\ref{rem:deterministic-paths} imply that $\vec{\Pi}$ is a piecewise deterministic process whose natural filtration coincides with $\mathbb{F}$. Between two jumps of $X$, the process $\vec{\Pi}$ follows the curves $t \mapsto \vec{x}(t,\vec{\pi}) $, and at arrival times of $X$, it jumps from one curve to another. More precisely, the paths of $\vec{\Pi}$ have the characterization \begin{equation}\label{eq:rel-pi-x} \begin{split} \vec{\Pi}_t=\vec{x}\left(t-\sigma_m,\vec{\Pi}_{\sigma_m}\right), \quad \quad \sigma_m \leq t< \sigma_{m+1}, \;\; m\in \mathbb{N}_0, \qquad\qquad\qquad\qquad \\ \vec{\Pi}_{\sigma_m}=\left(\frac{\lambda_0 \Pi^{(1)}_{\sigma_m-}}{\lambda_0(1-\Pi_{\sigma_m-})+\lambda_1 \Pi_{\sigma_m-} }, \cdots, \frac{\lambda_0 \Pi^{(n)}_{\sigma_m-}}{\lambda_0(1-\Pi_{\sigma_m-})+\lambda_1 \Pi_{\sigma_m-} }, \frac{\lambda_1 \Pi_{\sigma_m-}}{\lambda_0(1-\Pi_{\sigma_m-})+\lambda_1 \Pi_{\sigma_m-} }\right), \end{split} \end{equation} in which \begin{equation}\label{eq:sigma} \sigma_0\equiv 0, \quad \text{and} \quad \sigma_m \triangleq \inf\{t>\sigma_{m-1}|X_t-X_{t-}>0\}, \qquad m \in \mathbb N.
\end{equation} Moreover, for a bounded function $g(\cdot)$, we have \begin{align} \label{eq:Markov-justification} \begin{aligned} \mathbb{E}^{\vec{\pi}} &\left\{ g( X_{t+s} - X_t ) \big| \mathcal F_t \right\}\\ &= \sum_{j=1}^n \P \{ M_t =j \big| \mathcal F_t \} \cdot \mathbb{E}^{\vec{\pi}} \left\{ g( X_{t+s} - X_t ) \big| \mathcal F_t , M_t =j\right\} \\ &\qquad \qquad \qquad \qquad \qquad \qquad \qquad +\P \{ M_t = \Delta \big| \mathcal F_t \} \mathbb{E}^{\vec{\pi}} \left\{ g( X_{t+s} - X_t ) \big| \mathcal F_t , M_t =\Delta \right\} \\ &= \sum_{j=1}^n \Pi^{(j)}_t \cdot \mathbb{E}^{\vec{\pi}} \left\{ g( X_{s} ) \big| M_0 =j\right\} + \Pi_t \cdot \mathbb{E}^{\vec{\pi}} \left\{ g( X_{s}) \big| M_0 =\Delta \right\} = \mathbb{E}^{\vec{\Pi}_t} \left\{ g( X_{s} )\right\} . \end{aligned} \end{align} Then, the characterization in (\ref{eq:rel-pi-x}) and (\ref{eq:sigma}) implies that $\vec{\Pi}$ is a $(\P, \mathbb F)$-Markov process due to (\ref{eq:Markov-justification}). \section{Sequential Approximation}\label{sec:sequential} Let us define the sequence of functions \begin{equation}\label{eq:auxiliary} V_{m}(\vec{\pi})=\inf_{\tau \in \mathcal{S}_f}\mathbb{E}^{\vec{\pi}}\left\{\int_{0}^{\tau \wedge \sigma_m} k(\vec{\Pi}_t)dt+ h(\vec{\Pi}_{\tau \wedge \sigma_m})\right\}, \end{equation} in which $\sigma_m$, $m \in \mathbb{N}_0$, is defined in (\ref{eq:sigma}). The functions $V_m(\cdot)$, $m \in \mathbb{N}_0$, are non-negative and bounded above by $h(\cdot)$. By definition, the sequence $\{V_m\}_{m \geq 1}$ is decreasing and $V_m \geq V$ for all $m$. Therefore the point-wise limit $\lim_{m \rightarrow \infty} V_m$ exists and is greater than or equal to $V$. In fact, a stronger convergence result holds, as the next proposition shows. \begin{proposition}\label{prop:sequential} As $m \rightarrow \infty$, the sequence $\{V_m(\cdot)\}_{m \geq 1}$ converges to $V(\cdot)$ uniformly on $D$.
In fact, for every $m \in \mathbb{N}$, \begin{equation}\label{eq:uniform-conv} V_m(\vec{\pi})-\sqrt{\left(\frac{1}{c}+\mathbb{E}^{\vec{\pi}}\{\theta\}\right)\frac{\max\{\lambda_0, \lambda_1\}}{m-1}}\leq V(\vec{\pi}) \leq V_m(\vec{\pi}), \quad \text{for all} \quad \vec{\pi} \in [0,1]^{n+1}. \end{equation} \end{proposition} \begin{proof} The second inequality in (\ref{eq:uniform-conv}) follows immediately, since by definition $V_m(\cdot) \geq V(\cdot)$. Let us prove the first inequality. For any $\tau \in \mathcal{S}_f$, the expectation $\mathbb{E}^{\vec{\pi}}\left\{\int_0^{\tau}k(\vec{\Pi}_t)dt+ h(\vec{\Pi}_{\tau})\right\}$ can be written as \begin{equation}\label{eq:ineq} \begin{split} & \mathbb{E}^{\vec{\pi}}\left\{\int_0^{\tau \wedge \sigma_m }k(\vec{\Pi}_t)dt+ h(\vec{\Pi}_{\tau \wedge \sigma_m})\right\} +\mathbb{E}^{\vec{\pi}}\left\{1_{\{\tau>\sigma_m\}} \left[\int_{\sigma_m}^{\tau}k(\vec{\Pi}_t)dt+h(\vec{\Pi}_{\tau})-h(\vec{\Pi}_{\sigma_m})\right]\right\} \\& \geq \mathbb{E}^{\vec{\pi}}\left\{\int_0^{\tau \wedge \sigma_m }k(\vec{\Pi}_t)dt+h(\vec{\Pi}_{\tau \wedge \sigma_m})\right\}-\mathbb{E}^{\vec{\pi}}\left\{1_{\{\tau>\sigma_m\}}\right\}, \end{split} \end{equation} since $0 \le h(\cdot) \le 1$. Note that \begin{equation}\label{eq:cauchy} \mathbb{E}^{\vec{\pi}}\left\{1_{\{\tau>\sigma_m\}}\right\}\leq \mathbb{E}^{\vec{\pi}}\left\{1_{\{\tau>\sigma_m\}}\left(\frac{\tau}{\sigma_m}\right)^{1/2}\right\} \leq \mathbb{E}^{\vec{\pi}}\left\{\left(\frac{\tau}{\sigma_m}\right)^{1/2}\right\} \leq \sqrt{\mathbb{E}^{\vec{\pi}}\{\tau\} \mathbb{E}^{\vec{\pi}}\left\{\frac{1}{\sigma_m}\right\}}, \end{equation} which follows as a result of the Cauchy-Schwarz inequality, and that \begin{equation}\label{eq:inverse-moment} \mathbb{E}^{\vec{\pi}}\left\{\frac{1}{\sigma_m}\right\} \leq \frac{\max\{\lambda_0,\lambda_1\}}{m-1}.
\end{equation} Since $\mathbb{E}^{\vec{\pi}}\{\tau\} \leq 1/c+\mathbb{E}^{\vec{\pi}}\{\theta\}$ for any $\tau \in \mathcal{S}_f$, using (\ref{eq:ineq}), (\ref{eq:cauchy}) and (\ref{eq:inverse-moment}) we obtain \begin{multline} \mathbb{E}^{\vec{\pi}}\left\{\int_0^{\tau}k(\vec{\Pi}_t)dt+ h(\vec{\Pi}_{\tau})\right\} \geq \\ \mathbb{E}^{\vec{\pi}}\left\{\int_0^{\tau \wedge \sigma_m }k(\vec{\Pi}_t)dt+ h(\vec{\Pi}_{\tau \wedge \sigma_m})\right\}-\sqrt{\left(\frac{1}{c}+\mathbb{E}^{\vec{\pi}}\{\theta\}\right)\frac{\max\{\lambda_0, \lambda_1\}}{m-1}}. \end{multline} Now taking the infimum of both sides over the stopping rules in $\mathcal{S}_f$, we obtain the first inequality in (\ref{eq:uniform-conv}). \end{proof} To calculate the functions $V_m(\cdot)$ iteratively, we introduce the following operators acting on bounded functions $w:D \rightarrow \mathbb R$ \begin{equation}\label{eq:defn-J} \begin{split} J w(t, \vec{\pi})&=\mathbb{E}^{\vec{\pi}}\left\{\int_0^{t \wedge \sigma_1}k(\vec{\Pi}_s)ds+1_{\{t<\sigma_1\}}h(\vec{\Pi}_t) +1_{\{t \geq \sigma_1\}}w(\vec{\Pi}_{\sigma_1})\right\} \quad t \in [0,\infty], \\ J_t w(\vec{\pi})&=\inf_{u \in [t,\infty]}J w(u,\vec{\pi}), \quad t \in [0,\infty]. \end{split} \end{equation} The action of the operator $J$ on the function $w$ can be written as \begin{equation}\label{eq:exp-for-J-w} Jw(t,\vec{\pi})=\int_0^{t}\P\{s \leq \sigma_1\}k(\vec{x}(s,\vec{\pi}))ds+\int_0^{t}\P\{\sigma_1 \in ds\} Sw(\vec{x}(s,\vec{\pi}))+h(\vec{x}(t,\vec{\pi}))\P\{t<\sigma_1\}, \end{equation} in which \begin{equation} Sw(\vec{\pi})\triangleq w\left(\frac{\lambda_0 \pi_1}{\lambda_0(1-\pi)+\lambda_1 \pi}, \cdots, \frac{\lambda_0 \pi_n}{\lambda_0(1-\pi)+\lambda_1 \pi}, \frac{\lambda_1 \pi}{\lambda_0(1-\pi)+\lambda_1 \pi}\right). \end{equation} Let us now compute the distribution and the density of $\sigma_1$ under $\P$, since they appear in the expression for $Jw$.
We have \begin{equation}\label{eq:distribution-of-first-jump} \begin{split} \P\{\sigma_1>t\}&=\int_0^{\infty}\P\{\sigma_1>t|\theta \in ds\} \P\{\theta \in ds\} \\&=\int_0^{t}\P\{\sigma_1>t|\theta=s\}\P\{\theta \in ds\}+\int_t^{\infty}\P\{\sigma_1>t|\theta=s\}\P\{\theta \in ds\} \\&=\int_0^{t}e^{-\lambda_0 s} e^{-\lambda_1(t-s)}\P\{\theta \in ds \}+\int_t^{\infty}e^{-\lambda_0 t}\P\{\theta \in ds \} \\&=e^{-\lambda_0 t} \mathbb{E}^{\vec{\pi}}\left\{e^{-(\lambda_1-\lambda_0)(t-\theta)^{+}}\right\}, \end{split} \end{equation} from which it follows that \begin{equation}\label{eq:sigma-density} \begin{split} \P\{\sigma_1 \in dt\}=e^{-\lambda_0 t}\left[\lambda_0 \cdot \mathbb{E}^{\vec{\pi}}\{1_{\{t<\theta\}}\}+\lambda_1\cdot \mathbb{E}^{\vec{\pi}}\left\{1_{\{t \geq \theta\}}\, e^{-(\lambda_1-\lambda_0)(t-\theta)}\right\}\right]dt. \end{split} \end{equation} \begin{remark} \label{rem:inf-J-is-attained} For a bounded function $w(\cdot)$, using equations in (\ref{eq:alternative-esp-for-x}), (\ref{eq:distribution-of-first-jump}) and (\ref{eq:sigma-density}), it can be verified easily that the integrands in (\ref{eq:exp-for-J-w}) are absolutely integrable. Hence \begin{align*} \lim_{t \to \infty } Jw(t, \vec{\pi}) = Jw(\infty,\vec{\pi}) < \infty, \end{align*} and the mapping $t \to Jw(t,\vec{\pi})$ is continuous on $[0,\infty]$. Therefore, the infimum in (\ref{eq:defn-J}) is attained for all $t \in [0,\infty]$. \end{remark} \begin{remark}\label{rem:monotone} \begin{itemize} \item[(i)] $0 \leq J_0 w(\cdot) \leq h(\cdot)$ for all non-negative and bounded functions $w$. \item[(ii)] For two bounded functions $w_1(\cdot) \leq w_2(\cdot)$, we have $J_0 w_1(\cdot) \leq J_0 w_2(\cdot)$. \end{itemize} \end{remark} \begin{lemma}\label{lem:concave} If $w:D\rightarrow \mathbb R$ is positive and concave, then so are the mappings \begin{equation} \vec{\pi} \rightarrow J w(t,\vec{\pi}) \quad \text{and} \quad \vec{\pi} \rightarrow J_0 w(\vec{\pi}).
\end{equation} \end{lemma} \begin{lemma}\label{lem:cont} If the function $w:D \rightarrow \mathbb R_{+}$ is bounded and continuous, then $(t,\vec{\pi}) \rightarrow J w(t,\vec{\pi})$ and $\vec{\pi} \rightarrow J_0 w(\vec{\pi})$ are also continuous functions. \end{lemma} Using the operator $J_0$, let us define a sequence of functions \begin{equation}\label{eq:seq-of-func} v_0(\vec{\pi})\equiv h(\vec{\pi}) \quad \text{and} \quad v_m(\vec{\pi}) \triangleq J_0 v_{m-1}(\vec{\pi}), \, m \geq 1, \quad \text{for all $\vec{\pi} \in D$}. \end{equation} \begin{cor}\label{cor:little-v-n} Each $v_m(\cdot)$ is positive, continuous, and concave on $D$. The sequence $\{v_m(\cdot)\}_{m \geq 1}$ is decreasing, hence the pointwise limit $v(\vec{\pi})=\lim_{m \rightarrow \infty}v_{m}(\vec{\pi})$, $\vec{\pi} \in D$, exists. The function $v(\cdot)$ is again concave. \end{cor} \begin{proof} The proof easily follows from Remark~\ref{rem:monotone}, Lemmata~\ref{lem:concave} and \ref{lem:cont}. To prove the concavity of $v(\cdot)$ we also use the fact that the lower envelope of concave functions is concave. \end{proof} The following lemma, which follows from \cite{bremaud}, Theorem T.33, characterizes the stopping times of piece-wise deterministic Markov processes. Also see \cite{MR96b:90002}, Theorem A2.3. \begin{lemma}\label{lem:bremaud} For every $\tau \in \mathcal{S}$, and for every $m \in \mathbb{N}$, there exists an $\mathcal F_{\sigma_m}$-measurable random variable $R_m$ such that $\tau \wedge \sigma_{m+1}=(\sigma_m+R_m) \wedge \sigma_{m+1}$, $\P$-almost surely on $\{\tau \geq \sigma_m\}$.
\end{lemma} \begin{proposition}\label{prop:V-n-epsilon} For every $\varepsilon \geq 0$, let us define \begin{equation}\label{eq:defn-r-m} r_{m}^{\varepsilon}(\vec{\pi}) \triangleq \inf\{s \in (0,\infty]: J v_{m}(s,\vec{\pi}) \leq J_0 v_{m} (\vec{\pi})+\varepsilon\}, \quad \vec{\pi} \in D, \end{equation} \begin{equation}\label{eq:defn-S-eps} S_1^{\varepsilon} \triangleq r_0^{\varepsilon}(\vec{\Pi}_0) \wedge \sigma_1 \quad \text{and} \quad S_{m+1}^{\varepsilon} \triangleq \begin{cases} r_{m}^{\varepsilon/2}(\vec{\Pi}_0) & \text{if $\sigma_1>r_m^{\varepsilon/2}(\vec{\Pi}_0)$}, \\ \sigma_1+ S_m^{\varepsilon/2}\circ \theta_{\sigma_1} & \text{if $\sigma_1 \leq r_m^{\varepsilon/2}(\vec{\Pi}_0)$}, \end{cases} \end{equation} where $\theta_s$ is the shift operator on $\Omega$, i.e., $X_{t}\circ \theta_s=X_{s+t}$. Then, for every $m \geq 1$ \begin{equation}\label{eq:eps-opt} \mathbb{E}^{\vec{\pi}}\left\{\int_0^{S_m^{\varepsilon}}k(\vec{\Pi}_t)dt+h(\vec{\Pi}_{S_m^{\varepsilon}})\right\} \leq v_{m}(\vec{\pi})+\varepsilon. \end{equation} Moreover, for all $m \in \mathbb{N}$, $v_{m}(\vec{\pi})=V_m(\vec{\pi})$ on $D$. \end{proposition} \begin{proposition}\label{prop:v-V} We have $v(\vec{\pi})=V(\vec{\pi})$ for every $\vec{\pi} \in D$. Moreover, $V$ is the largest solution of $U=J_0 U$ that is smaller than or equal to $h$. \end{proposition} \begin{lemma}\label{lem:dyn-p-J-t-J} For every bounded function $\vec{\pi} \rightarrow w(\vec{\pi})$, $\vec{\pi} \in D$, we have \begin{equation}\label{eq:J-t-J} J_t w(\vec{\pi})=J w (t,\vec{\pi})+\P\{\sigma_1>t\} \cdot \left\{ J_0 w (\vec{x}(t,\vec{\pi}))-h(\vec{x}(t,\vec{\pi}))\right\}, \end{equation} for all $t \geq 0$. \end{lemma} \begin{cor}\label{cor:r-m} Let \begin{equation} r_{m}(\vec{\pi})=\inf\{t \in (0,\infty]:J v_{m}(t,\vec{\pi})=J_0 v_{m}(\vec{\pi})\}. \end{equation} Then \begin{equation} r_{m}(\vec{\pi})=\inf\{t \in (0,\infty]: v_{m+1}(\vec{x}(t,\vec{\pi}))=h(\vec{x}(t,\vec{\pi}))\}.
\end{equation} Here, we use the convention that $\inf \emptyset=\infty$. \end{cor} \begin{remark} Substituting $w=v_m$ in (\ref{eq:J-t-J}) yields the dynamic programming equation for the sequence of functions $\{v_m (\cdot)\}_{m \in \mathbb{N}_0}$; for every $\vec{\pi} \in D$ and $m\in \mathbb{N}_0$ \begin{equation} v_{m+1}(\vec{\pi})=Jv_m(t,\vec{\pi})+\P\left\{\sigma_1>t\right\} \cdot [v_{m+1}(\vec{x}(t,\vec{\pi}))-h(\vec{x}(t,\vec{\pi}))], \quad \; t \in [0, r_m(\vec{\pi})]. \end{equation} Moreover, if we take $w=V$ in (\ref{eq:J-t-J}), then we obtain \begin{equation}\label{eq:dyn-prog-0.5} J_t V(\vec{\pi})=JV(t,\vec{\pi})+\P\left\{\sigma_1>t\right\} \cdot \left[V(\vec{x}(t,\vec{\pi}))-h(\vec{x}(t,\vec{\pi}))\right], \quad t\geq 0. \end{equation} Let us define \begin{equation} r(\vec{\pi})\triangleq \inf\{t \in (0,\infty]:JV(t,\vec{\pi})=J_0 V(\vec{\pi})\}. \end{equation} The same arguments as in the proof of Corollary~\ref{cor:r-m} lead to \begin{equation} r(\vec{\pi})=\inf\{t \in (0,\infty]: V(\vec{x}(t,\vec{\pi}))=h(\vec{x}(t,\vec{\pi}))\}. \end{equation} This equation together with (\ref{eq:dyn-prog-0.5}) yields \begin{equation} V(\vec{\pi})=JV(t,\vec{\pi})+\P\left\{\sigma_1>t\right\} \cdot [V(\vec{x}(t,\vec{\pi}))-h(\vec{x}(t,\vec{\pi}))], \quad t \in [0,r(\vec{\pi})]. \end{equation} \end{remark} \begin{remark}\label{rem:right-cont-of-paths} From Propositions~\ref{prop:sequential}, \ref{prop:V-n-epsilon}, \ref{prop:v-V} and Corollary~\ref{cor:little-v-n} it follows that a sequence of continuous functions converges uniformly to $v=V$. Therefore $V$ is continuous on $D$. Since $t \rightarrow \vec{x}(t,\vec{\pi})$, $t \geq 0$, is continuous for all $\vec{\pi} \in D$, the mapping $t \rightarrow V(\vec{x}(t,\vec{\pi}))$, $t \geq 0$, is also continuous for all $\vec{\pi} \in D$. Moreover, for every $\vec{\pi}$, the path $t \rightarrow \vec{\Pi}_t$, $t \geq 0$, follows the deterministic curves $t \rightarrow \vec{x}(t,\vec{\pi})$ between the jumps.
Hence the process $t \rightarrow V(\vec{\Pi}_t)$ is right-continuous with left limits. \end{remark} Let us define the $\mathbb{F}$-stopping times \begin{equation}\label{eq:defn-U-eps} U_{\varepsilon} \triangleq \inf\{t \geq 0: V(\vec{\Pi}_t)-h(\vec{\Pi}_t) \geq -\varepsilon\}, \quad \varepsilon \geq 0. \end{equation} Remark~\ref{rem:right-cont-of-paths} implies \begin{equation} V(\vec{\Pi}_{U_{\varepsilon}})-h(\vec{\Pi}_{U_{\varepsilon}}) \geq -\varepsilon \quad \text{on the event} \quad \{U_{\varepsilon}<\infty\}. \end{equation} \begin{proposition}\label{prop:L-V} Let \begin{equation} L_t \triangleq \int_0^{t}k(\vec{\Pi}_s)ds+V(\vec{\Pi}_t), \quad t \geq 0. \end{equation} Then for every $m \in \mathbb{N}$, $\varepsilon \geq 0$, $\vec{\pi} \in D$, we have $L_0 =\mathbb{E}^{\vec{\pi}}\left\{L_{U_{\varepsilon}\wedge \sigma_m}\right\}$, that is, \begin{equation}\label{eq:V-V} V(\vec{\pi})=\mathbb{E}^{\vec{\pi}}\left\{\int_0^{U_{\varepsilon}\wedge \sigma_m}k(\vec{\Pi}_s)ds+V(\vec{\Pi}_{U_{\varepsilon}\wedge \sigma_m})\right\}. \end{equation} \end{proposition} \begin{proposition}\label{prop:eqp-opti} The stopping time $U_{\varepsilon}$, which is defined in (\ref{eq:defn-U-eps}), has bounded $\P$-expectation, for every $\vec{\pi} \in D$ and $\varepsilon \geq 0$. More precisely, \begin{equation}\label{eq:U-eps-bounded} \mathbb{E}^{\vec{\pi}}\left\{U_{\varepsilon}\right\}\leq \mathbb{E}^{\vec{\pi}}\left\{\Theta\right\}+\frac{1}{c}, \quad \vec{\pi} \in D, \varepsilon \geq 0. \end{equation} Moreover, $U_{\varepsilon}$ is $\varepsilon$-optimal for the problem in (\ref{eq:value-func}); that is, \begin{equation} \mathbb{E}^{\vec{\pi}}\left\{\int_0^{U_{\varepsilon}}k(\vec{\Pi}_s)ds+h(\vec{\Pi}_{U_{\varepsilon}})\right\}\leq V(\vec{\pi})+\varepsilon, \quad \vec{\pi} \in D.
\end{equation} \end{proposition} \begin{proof} Using Proposition~\ref{prop:L-V} and the fact that $V$ is bounded above by $1$, \begin{equation} \begin{split} &1 \geq V(\vec{\pi})=\mathbb{E}^{\vec{\pi}}\left\{\int_0^{U_{\varepsilon}\wedge \sigma_m}k(\vec{\Pi}_s)ds+V(\vec{\Pi}_{U_{\varepsilon}\wedge \sigma_m})\right\} \\& \geq \mathbb{E}^{\vec{\pi}}\left\{\int_0^{U_{\varepsilon}\wedge \sigma_m}k(\vec{\Pi}_s)ds \right\}= c \mathbb{E}^{\vec{\pi}} \left\{\left(U_{\varepsilon}\wedge \sigma_m-\Theta\right)^{+}\right\}\geq c \mathbb{E}^{\vec{\pi}}\left\{U_{\varepsilon}\wedge \sigma_m-\Theta \right\}, \end{split} \end{equation} where we used (\ref{eq:peanlty-in-t-Pi}) to derive the second equality. Applying the monotone convergence theorem as $m \uparrow \infty$, equation (\ref{eq:U-eps-bounded}) follows. Next, the almost-sure finiteness of $U_{\varepsilon}$ implies \begin{equation} V(\vec{\pi})=\lim_{m \rightarrow \infty}\mathbb{E}^{\vec{\pi}}\left\{\int_0^{U_{\varepsilon}\wedge \sigma_m}k(\vec{\Pi}_s)ds+V(\vec{\Pi}_{U_{\varepsilon}\wedge \sigma_m})\right\}=\mathbb{E}^{\vec{\pi}}\left\{\int_0^{U_{\varepsilon}}k(\vec{\Pi}_s)ds+V(\vec{\Pi}_{U_{\varepsilon}})\right\}, \end{equation} by the monotone and bounded convergence theorems, and Proposition~\ref{prop:L-V}. Since $V(\vec{\Pi}_{U_{\varepsilon}})-h(\vec{\Pi}_{U_{\varepsilon}}) \geq -\varepsilon$, we have \begin{equation} \begin{split} V(\vec{\pi})&=\mathbb{E}^{\vec{\pi}}\left\{\int_0^{U_{\varepsilon}}k(\vec{\Pi}_s)ds+V(\vec{\Pi}_{U_{\varepsilon}})-h(\vec{\Pi}_{U_{\varepsilon}})+h(\vec{\Pi}_{U_{\varepsilon}})\right\} \\&\geq \mathbb{E}^{\vec{\pi}}\left\{\int_0^{U_{\varepsilon}}k(\vec{\Pi}_s)ds+h(\vec{\Pi}_{U_{\varepsilon}})\right\}-\varepsilon. \end{split} \end{equation} This completes the proof.
\end{proof} \section{Approximating the Value Function to a Given Level of Accuracy}\label{sec:solution} In this section, we will describe a numerical procedure that approximates the value function within any given positive margin, say $\varepsilon$, and construct $\varepsilon$-optimal stopping strategies. In the next section, we will give several examples to illustrate the efficacy of the numerical procedure. \subsection{Properties of the Stopping Regions} Let us introduce the stopping and continuation regions for the problem in (\ref{eq:value-func}) \begin{equation} \Gamma \triangleq \left\{\vec{\pi} \in D: V(\vec{\pi})=h(\vec{\pi})\right\}, \quad C =D\setminus \Gamma. \end{equation} Taking $\varepsilon=0$ in Proposition~\ref{prop:eqp-opti} implies that $U_0$ is an optimal stopping time of (\ref{eq:value-func}). From Remark~\ref{rem:val-func}, we see that an admissible rule to minimize the Bayes risk in (\ref{eq:value-function}) is to observe the process $X$ until the process $\vec{\Pi}$ of (\ref{eq:dyn-pi-t}) and (\ref{eq:dyn-pi-i-t}) enters the stopping region $\Gamma$. \begin{remark}\label{rem:con-cont-V} Since $V$ and $h$ are continuous (the continuity of $V$ follows from Remark~\ref{rem:right-cont-of-paths}), $\Gamma$ is closed. Moreover, since $V$ is a concave function (see Corollary~\ref{cor:little-v-n} and Proposition~\ref{prop:v-V}) and $h$ is linear, $\Gamma$ is a convex set. Indeed, if $\vec{\pi}_1, \vec{\pi}_2 \in \Gamma$, then for any $\alpha \in [0,1]$ \begin{equation} V(\alpha \vec{\pi}_1+(1-\alpha) \vec{\pi}_2) \geq \alpha V(\vec{\pi}_1)+(1-\alpha)V(\vec{\pi}_2)=\alpha h(\vec{\pi}_1)+ (1-\alpha)h(\vec{\pi}_2)=h(\alpha \vec{\pi}_1+(1-\alpha)\vec{\pi}_2). \end{equation} Since $V(\vec{\pi}) \leq h(\vec{\pi})$, for all $\vec{\pi} \in D$, this equation implies that $V(\alpha \vec{\pi}_1+(1-\alpha) \vec{\pi}_2) =h(\alpha \vec{\pi}_1+(1-\alpha)\vec{\pi}_2)$. Therefore, $\alpha \vec{\pi}_1+(1-\alpha)\vec{\pi}_2 \in \Gamma$. 
\end{remark} \begin{proposition} The stopping region $\Gamma$ is not empty. In particular, \begin{equation} \Gamma \supseteq \left\{\vec{\pi}=(\pi_1, \cdots, \pi_n, \pi) \in D: \pi \geq \frac{\max\{\lambda_0,\lambda_1\}+B}{c+\max\{\lambda_0,\lambda_1\}+B} \right\}, \end{equation} in which \begin{equation}\label{eq:B} B \triangleq \max_{1 \leq i \leq n} q_{i \Delta}. \end{equation} \end{proposition} \begin{proof} For $w(\cdot) \ge 0$, using (\ref{eq:distribution-of-first-jump}) and (\ref{eq:sigma-density}), we write \begin{align*} J w(t, \vec{\pi}) &\geq \mathbb{E}^{\vec{\pi}}\left\{\int_0^{t \wedge \sigma_1}k(\vec{\Pi}_s)\,ds+1_{\{t<\sigma_1\}}h(\vec{\Pi}_t)\right\} \geq c \int_0^{t}\pi e^{-\lambda_1 s}\,ds \\ & \qquad + c \sum_{i=1}^{n}\pi_i\int_0^{t} e^{-\lambda_0 s} \left( \int_0^{s} e^{-(\lambda_1-\lambda_0)(s-u)}f_i(u)\,du \right) ds + e^{-\lambda_0 t}\sum_{i=1}^{n}\pi_i \int_t^{\infty}f_i(s)\,ds \\ &\geq c \int_0^{t}\pi e^{-\lambda_1 s}\,ds+e^{-\lambda_0 t}\sum_{i=1}^{n}\pi_i \int_t^{\infty}f_i(s)\,ds, \end{align*} where $f_i(\cdot)$ is the probability density function of $\Theta$ given that $M_0=i$, for $i \le n$. If $\lambda_1 > \lambda_0$, then \begin{equation} J w(t, \vec{\pi}) \geq c \int_0^{t}\pi e^{-\lambda_1 s}ds+e^{-\lambda_1 t}\sum_{i=1}^{n}\pi_i \int_t^{\infty}f_i(s)ds=:K(t,\vec{\pi}), \quad t \geq 0, \quad \vec{\pi} \in D. \end{equation} Note that $K(0,\vec{\pi})=h(\vec{\pi})$. The derivative of $K$ with respect to $t$ is \begin{equation} \frac{\partial K}{\partial t}(t,\vec{\pi})=e^{-\lambda_1 t} \left(c \pi -\lambda_1 \sum_{i=1}^{n} \pi_i \int_{t}^{\infty}f_i(s)ds- \sum_{i=1}^{n} \pi_i f_{i}(t) \right). \end{equation} Since $f_{i}(t)=dp_{i \Delta}(t)/dt$, Kolmogorov's forward equation (\ref{eq:kolmogorov}) implies that \begin{equation} f_{i}(t) \leq \max_{1 \leq i \leq n}q_{i \Delta}=B. \end{equation} Therefore, \begin{equation} \frac{\partial K}{\partial t}(t,\vec{\pi}) \geq 0 \quad \text{if} \quad \pi \geq \frac{\lambda_1+B}{c+\lambda_1+B}.
\end{equation} Then for $\pi \geq \frac{\lambda_1+B}{c+\lambda_1+B}$, \begin{equation} K(t,\vec{\pi}) \geq h(\vec{\pi}) \Rightarrow Jw (t,\vec{\pi}) \geq h(\vec{\pi}) \Rightarrow J_0 w(\vec{\pi})=h(\vec{\pi}). \end{equation} Since $V(\cdot) = J_0V(\cdot)$, taking $w=V$ in the last equation, we see that if $\pi \geq \frac{\lambda_1+B}{c+\lambda_1+B}$, then $\vec{\pi}=(\pi_1,\cdots,\pi_n, \pi)\in D$ belongs to $\Gamma$. Similarly, if $\lambda_0>\lambda_1$ it can be shown that if $\pi \geq \frac{\lambda_0+B}{c+\lambda_0+B}$, then $\vec{\pi}=(\pi_1,\cdots,\pi_n, \pi)\in D$ belongs to $\Gamma$. \end{proof} Let us define the optimal stopping and continuation regions for the problems that we introduced in (\ref{eq:auxiliary}) as \begin{equation} \Gamma_m \triangleq \{\vec{\pi} \in D: V_{m}(\vec{\pi})=h(\vec{\pi})\}, \quad \text{and} \quad C_{m}=D\setminus \Gamma_m, \quad m \geq 0. \end{equation} Similar arguments as in Remark~\ref{rem:con-cont-V} imply that $\Gamma_m$ is a closed and convex subset of $D$ for all $m \in \mathbb{N}_0$. In fact, these sets are ordered, i.e., \begin{equation}\label{eq:sets-order} \left\{\vec{\pi} \in D: \pi \geq \frac{\max\{\lambda_0,\lambda_1\}+B}{c+\max\{\lambda_0,\lambda_1\}+B} \right\} \subseteq \Gamma \subseteq \cdots \subseteq \Gamma_{m} \subseteq \cdots \subseteq \Gamma_1 \subseteq \Gamma_0 \equiv D, \end{equation} since $V(\cdot) \leq \cdots \leq V_1(\cdot) \leq V_0(\cdot)=h(\cdot)$. \subsection{Two Computable $\varepsilon$-Optimal Strategies} The value function $V(\cdot)$, which is defined in (\ref{eq:value-function}), can be approximated by the sequence $\left\{V_m(\cdot) \right\}_{m \in \mathbb{N}_0}$, as Proposition~\ref{prop:sequential} suggests. Each element of the sequence $\left\{V_m \right\}_{m \in \mathbb{N}_0}$ can be computed by a successive application of the operator $J_0$, which is defined in (\ref{eq:defn-J}), to the function $h(\cdot)$, see (\ref{eq:seq-of-func}) and Proposition~\ref{prop:V-n-epsilon}. 
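The iteration $v_{m+1}=J_0 v_m$ of (\ref{eq:seq-of-func}) can be implemented directly on a grid in the single-state case $n=1$, where $D$ is parametrised by $\pi \in [0,1]$ with $\pi_1=1-\pi$. The sketch below uses the costs $k(\vec{\pi})=c\pi$ and $h(\vec{\pi})=1-\pi$, consistent with the relations $h(\vec{\pi})=K(0,\vec{\pi})$ and $\int_0^\tau k(\vec{\Pi}_t)dt \leftrightarrow c(\tau-\Theta)^+$ used in the proofs above; the grid sizes, truncation horizon and parameter values are illustrative choices, not prescriptions from the text.

```python
import math

# Value-iteration sketch of v_{m+1} = J_0 v_m for n = 1 (exponential prior
# with rate q and atom pi at zero).  Illustrative parameters and grids.
lam0, lam1, q, c = 1.0, 3.0, 0.5, 1.0
a = lam1 - lam0
NPI, NT, T_MAX = 101, 150, 6.0
grid = [i / (NPI - 1) for i in range(NPI)]

def flow(t, p):
    """Deterministic curve x_Delta(t,p) solving dx = (q - a*x)(1 - x) dt."""
    if p == 1.0:
        return 1.0
    C = (q - a * p) / (1.0 - p) * math.exp((q - a) * t)
    return (C - q) / (C - a)

def surv_dens(t, p):
    """P{sigma_1 > t} and its density, from the formulas for sigma_1."""
    omF = (1.0 - p) * math.exp(-q * t)                       # 1 - F(t)
    N = p * math.exp(-a * t) + (1.0 - p) * q * (
        math.exp(-q * t) - math.exp(-a * t)) / (a - q)
    e0 = math.exp(-lam0 * t)
    return e0 * (omF + N), e0 * (lam0 * omF + lam1 * N)

def interp(w, p):
    """Linear interpolation of the grid values w at p in [0, 1]."""
    x = min(max(p, 0.0), 1.0) * (NPI - 1)
    i = min(int(x), NPI - 2)
    return w[i] + (w[i + 1] - w[i]) * (x - i)

def J0(w):
    """One sweep of the operator J_0, minimising J w(t, .) over a t-grid."""
    out, dt = [], T_MAX / NT
    for p in grid:
        run, best = 0.0, 1.0 - p            # stopping at t = 0 costs h(p)
        for j in range(1, NT + 1):
            s = (j - 0.5) * dt              # midpoint rule in time
            x = flow(s, p)
            S, f = surv_dens(s, p)
            jump_p = lam1 * x / (lam0 * (1.0 - x) + lam1 * x)
            run += dt * (S * c * x + f * interp(w, jump_p))
            S_end, _ = surv_dens(j * dt, p)
            best = min(best, run + S_end * (1.0 - flow(j * dt, p)))
        out.append(best)
    return out

v = [1.0 - p for p in grid]                 # v_0 = h
for _ in range(6):
    v = J0(v)                               # v_m decreases towards V
```

By construction each sweep keeps $0 \le v_m \le h$ pointwise, mirroring Remark~\ref{rem:monotone}, and the truncation at a finite horizon is admissible because the survival factor decays exponentially.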
Moreover, the error in approximating $V(\cdot)$ by $\left\{V_m(\cdot) \right\}_{m \in \mathbb{N}_0}$ can be controlled. Due to Proposition~\ref{prop:sequential}, for every $\varepsilon>0$, if we choose $\mathcal{M}_{\varepsilon}$ as \begin{equation}\label{eq:M-eps} \mathcal{M}_{\varepsilon}=1+\frac{\max\{\lambda_0,\lambda_1\}}{\varepsilon^2}\left(\frac{1}{c}+\mathbb{E}^{\vec{\pi}}\left\{\theta\right\}\right) \Rightarrow \|V_{\mathcal{M}}-V\|_{\infty}=\sup_{\vec{\pi} \in D}|V_{\mathcal{M}}(\vec{\pi})-V(\vec{\pi})| \leq \varepsilon, \quad \mathcal{M} \geq \mathcal{M}_{\varepsilon}. \end{equation} In the next section, we will give a numerical algorithm to compute $V_1, V_2, \ldots$ iteratively. Here, we will describe two $\varepsilon$-optimal strategies using these functions. Recall from Proposition~\ref{prop:V-n-epsilon} that $S_{m}^{\varepsilon}$, $m \geq 1$, are $\varepsilon$-optimal stopping times for the problem in (\ref{eq:auxiliary}). For a fixed $\varepsilon>0$, if we choose $\mathcal{M} \geq \mathcal{M}_{\varepsilon/2}$, we have $\|V_{\mathcal{M}}-V\|_{\infty} \leq \varepsilon/2$. Then $S_{\mathcal{M}}^{\varepsilon/2}$ is $\varepsilon$-optimal for $V(\cdot)$ since \begin{equation} \mathbb{E}^{\vec{\pi}}\left\{\int_0^{S_{\mathcal{M}}^{\varepsilon/2}}k(\vec{\Pi}_t)dt+h(\vec{\Pi}_{S_{\mathcal{M}}^{\varepsilon/2}})\right\} \leq V_{\mathcal{M}}(\vec{\pi})+\frac{\varepsilon}{2} \leq V(\vec{\pi})+\varepsilon, \quad \vec{\pi} \in D. \end{equation} Note that $S_{\mathcal{M}}^{\varepsilon/2}$ is not a hitting time. In (\ref{eq:defn-S-eps}), it prescribes to wait until the minimum of $r_{\mathcal{M}-1}^{\varepsilon/4}(\vec{\pi})$ and the first jump time $\sigma_1$ of the process $X$. If $r_{\mathcal{M}-1}^{\varepsilon/4}(\vec{\pi})<\sigma_1$, then we stop.
Otherwise, the probabilities are updated to $\vec{\Pi}_{\sigma_1}$ and we wait until the minimum of $r_{\mathcal{M}-1}^{\varepsilon/4}(\vec{\Pi}_{\sigma_1})$ and the next jump time $\sigma_2=\sigma_1 \circ \theta_{\sigma_1}$ of the process $X$. If $r_{\mathcal{M}-1}^{\varepsilon/4}(\vec{\Pi}_{\sigma_1})$ comes first, we stop. Otherwise, we continue as before. We finally stop at the $\mathcal{M}$th jump time if we have not stopped before. We can also give an $\varepsilon$-optimal strategy that is a hitting time. Let us define \begin{equation}\label{eq:U-M-eps} U_{\varepsilon/2}^{(\mathcal{M})} \triangleq \inf\left\{t \geq 0: h(\vec{\Pi}_t) \leq V_{\mathcal{M}}(\vec{\Pi}_t)+\frac{\varepsilon}{2} \right\}. \end{equation} Following the same arguments as in the proof of Proposition~\ref{prop:eqp-opti}, this stopping time can be shown to be an $\varepsilon/2$-optimal stopping time for $V_{\mathcal{M}}(\cdot)$, which in turn implies that it is an $\varepsilon$-optimal stopping time for $V(\cdot)$. \subsection{An Algorithm Approximating the Value Function} \label{sec:algorithm} Note that if the hitting time of $t \rightarrow x_{\Delta}(t,\vec{\pi})$ to the region $\Gamma$ is uniformly bounded by some $t^* < \infty$, then the minimization problem in computing $V_{m+1}(\vec{\pi}) =\inf_{t \in [0,\infty]}JV_m(t,\vec{\pi})$ can be restricted to the compact interval $ [0,t^*]$ thanks to Corollary~\ref{cor:r-m}. Remark~\ref{rem:time-bound} constructs a uniform bound $t^*$ when the parameters of the problem satisfy $\tilde{B}-\lambda_1+\lambda_0 \geq 0$, in which $\tilde{B}$ is defined as \begin{equation}\label{eq:Btilde} \tilde{B} \triangleq \min_{1 \leq i \leq n}q_{i \Delta}. \end{equation} \begin{remark}\label{rem:time-bound} The hazard rate of the prior distribution of $\Theta$ satisfies $\eta(t)\geq \tilde{B}$ (see (\ref{eq:alt-for-h})).
Moreover, from (\ref{eq:pi-h}), we have \begin{equation} \frac{d x_{\Delta}(t,\vec{\pi})}{dt}=(\eta(t)-(\lambda_1-\lambda_0)x_{\Delta}(t,\vec{\pi}) )(1-x_{\Delta}(t,\vec{\pi})), \; \quad x_{\Delta}(0, \vec{\pi})=\pi, \end{equation} where $x_{\Delta}$ is defined in (\ref{eq:dyn-x}). Let $\tilde{x}(t)$ be the solution of the differential equation \begin{equation}\label{eq:auxi-diff} \frac{d \tilde{x}(t)}{dt}=(\tilde{B}-(\lambda_1-\lambda_0)\tilde{x}(t))(1-\tilde{x}(t)), \qquad \text{with }\; \tilde{x}(0)=0. \end{equation} A simple comparison argument shows that $x_{\Delta}(t,\vec{\pi}) \geq \tilde{x}(t)$, for all $t \geq 0$, when $\tilde{B}-\lambda_1+\lambda_0 \geq 0$. The solution to (\ref{eq:auxi-diff}) can be written as \begin{equation}\label{eq:explicit-x} \tilde{x}(t)= \begin{cases} \frac{\frac{\tilde{B}}{\lambda_1-\lambda_0-\tilde{B}}\left(1-\exp\left((\tilde{B}-\lambda_1+\lambda_0)t\right)\right)} {1+\frac{\tilde{B}}{\lambda_1-\lambda_0-\tilde{B}}\left(1-\exp\left((\tilde{B}-\lambda_1+\lambda_0)t\right)\right)} & \text{if \,\, $\tilde{B}-\lambda_1+\lambda_0 \ne 0$}, \\ \frac{\tilde{B}t}{1+\tilde{B}t} & \text{if \,\, $\tilde{B}-\lambda_1+\lambda_0=0$}. \end{cases} \end{equation} When $\tilde{B}-\lambda_1+\lambda_0 \geq 0$, let us denote \begin{equation} \hat{x} \triangleq \frac{\max\{\lambda_0,\lambda_1\}+B}{c+\max\{\lambda_0,\lambda_1\}+B}, \end{equation} where $B$ is given in (\ref{eq:B}). Let $t^*(\vec{\pi}) \triangleq \inf \{ t > 0: x_{\Delta}(t,\vec{\pi})=\hat{x} \}$; then using (\ref{eq:explicit-x}), it can be easily verified that \begin{equation} \label{eq:uniform-bound-on-hitting-time} t^*(\vec{\pi}) \leq t^* \triangleq \begin{cases} \frac{1}{\tilde{B}-\lambda_1+\lambda_0} \log\left(\frac{\tilde{B}+ \hat{x} (\lambda_0-\lambda_1)}{\tilde{B}(1-\hat{x})}\right), & \text{if \,\, $\tilde{B}-\lambda_1+\lambda_0>0$,} \\ \frac{\hat{x}}{1-\hat{x}}\frac{1}{\tilde{B}} , & \text{if \,\, $\tilde{B}-\lambda_1+\lambda_0=0$}, \end{cases} \end{equation} for all $\vec{\pi} \in D$.
\end{remark} \begin{remark} If $\tilde{B}-\lambda_1+\lambda_0 < 0$, then $t^{*}(\vec{\pi})$ defined in Remark~\ref{rem:time-bound} may be $\infty$ for some $\vec{\pi} \in D$. \end{remark} When $\tilde{B}-\lambda_1+\lambda_0 < 0$, it is still possible to restrict the minimization problem in $V_{m+1}(\vec{\pi}) = \inf_{t \in [0,\infty]} J V_m (t,\vec{\pi})$ to a compact interval and to control the error arising from this truncation. Note that for any $w(\cdot) \leq 1$, we have \begin{equation} \begin{split} & \sup_{\vec{\pi} \in D}|Jw (t,\vec{\pi})-J w (\infty, \vec{\pi})| \\&\leq c \int_t^{\infty}\P\{s \leq \sigma_1\} ds+\int_t^{\infty}\P\{\sigma_1 \in ds\} Sw(\vec{x}(s,\vec{\pi}))+h(\vec{x}(t,\vec{\pi}))\P\{t<\sigma_1\} \\& \leq c \int_t^{\infty}\P\{s \leq \sigma_1\} ds+ 2 \P\left\{\sigma_1 \geq t \right\} \leq c \int_t^{\infty}e^{-\lambda_0 s}ds+2 e^{-\lambda_0 t} \leq \left(\frac{c}{\lambda_0}+2\right)e^{-\lambda_0 t}, \end{split} \end{equation} where the first inequality follows from (\ref{eq:exp-for-J-w}), the second one from the fact that $w(\cdot) \leq 1$ and $h(\cdot) \leq 1$ (in our applications $w=V_m \leq h \leq 1$), and the third one from $\P\{\sigma_1 \geq t\} \leq e^{-\lambda_0 t}$, which is a direct consequence of (\ref{eq:distribution-of-first-jump}). Then, denoting \begin{equation}\label{eq:t-delta} t(\delta)\triangleq -\frac{1}{\lambda_0} \log \left(\frac{\delta}{4+2c/\lambda_0}\right), \end{equation} we obtain \begin{equation}\label{JV-t1-t2} |Jw (t_1,\vec{\pi})-Jw(t_2,\vec{\pi})|\leq |Jw(t_1,\vec{\pi})-Jw(\infty,\vec{\pi})|+|Jw(t_2,\vec{\pi})-Jw(\infty,\vec{\pi})|\leq \delta, \end{equation} for any $t_1,t_2 \geq t(\delta)$. Letting \begin{equation}\label{eq:J-0-delta} J_{0,t}w(\vec{\pi}) \triangleq \inf_{s \in [0,t]}J w(s,\vec{\pi}), \quad \text{for every bounded $w:D\rightarrow \mathbb R$, and $t \geq 0$, $\vec{\pi} \in D$}, \end{equation} we get $ \sup_{\vec{\pi} \in D}|J_{0,t(\delta)}w(\vec{\pi})-J_0 w(\vec{\pi})| \leq \delta$.
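The quantities above are straightforward to evaluate numerically. The following Python sketch (function and variable names are ours, and the forward-Euler integration is only an independent check of (\ref{eq:explicit-x}), not part of the algorithm) evaluates $\tilde{x}(t)$, the bound $t^*$ of (\ref{eq:uniform-bound-on-hitting-time}), and the truncation horizon $t(\delta)$ of (\ref{eq:t-delta}):

```python
import math

def x_tilde(t, B, lam0, lam1):
    """Closed-form solution of d/dt x = (B - (lam1-lam0)x)(1-x), x(0)=0."""
    K = B - lam1 + lam0
    if K == 0:
        return B * t / (1.0 + B * t)
    g = B / (lam1 - lam0 - B) * (1.0 - math.exp(K * t))
    return g / (1.0 + g)

def x_euler(t, B, lam0, lam1, n=200000):
    """Forward-Euler integration of the same ODE, for comparison only."""
    x, dt = 0.0, t / n
    for _ in range(n):
        x += dt * (B - (lam1 - lam0) * x) * (1.0 - x)
    return x

def t_star(x_hat, B, lam0, lam1):
    """Uniform bound on the hitting time of the level x_hat."""
    K = B - lam1 + lam0
    if K == 0:
        return x_hat / (1.0 - x_hat) / B
    return math.log((B + x_hat * (lam0 - lam1)) / (B * (1.0 - x_hat))) / K

def t_delta(delta, c, lam0):
    """Truncation horizon t(delta)."""
    return -math.log(delta / (4.0 + 2.0 * c / lam0)) / lam0
```

For instance, with illustrative values $\tilde{B}=3$, $\lambda_0=6$, $\lambda_1=5$ and $\hat{x}=0.9$ (chosen only for the check), $\tilde{x}(t^*)$ returns the threshold $\hat{x}$ up to floating-point error.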
Now, let us define a new sequence of functions as \begin{equation} V_{\delta,0}(\vec{\pi}) \triangleq h(\vec{\pi}) \quad \text{and} \quad V_{\delta,m+1}(\vec{\pi}) \triangleq J_{0, t(\delta)}V_{\delta,m}(\vec{\pi}), \quad \vec{\pi} \in D. \end{equation} \begin{proposition}\label{prop:new-approx} For every $\delta>0$, $m \geq 0$, we have \begin{equation}\label{eq:new-bounds} V_m(\vec{\pi}) \leq V_{\delta,m}(\vec{\pi}) \leq m \delta+V_m (\vec{\pi}), \qquad \vec{\pi} \in D. \end{equation} \end{proposition} \begin{proof} For $m=0$ we have $V_{\delta,0}(\cdot)=V_0(\cdot)=h(\cdot)$, by construction. Now, suppose that (\ref{eq:new-bounds}) holds for some $m \geq 0$. Then \begin{equation} V_{m+1}(\vec{\pi})=J_0 V_{m}(\vec{\pi}) \leq J_0 V_{\delta,m}(\vec{\pi}) \leq J_{0, t(\delta)}V_{\delta,m}(\vec{\pi}) = V_{\delta,m+1}(\vec{\pi}), \qquad \vec{\pi} \in D, \end{equation} which proves the first inequality in (\ref{eq:new-bounds}) with $m$ replaced by $m+1$; here the first inequality follows from the induction hypothesis and Remark~\ref{rem:monotone}, and the second one from (\ref{eq:J-0-delta}), since the infimum there is taken over a smaller set. Let us now prove the second inequality in (\ref{eq:new-bounds}) when $m$ is replaced by $m+1$. Observe that $V_{\delta,m}(\vec{\pi}) \leq h(\vec{\pi})$, $\vec{\pi} \in D$. Then \begin{equation} \begin{split} V_{\delta, m+1}(\vec{\pi})&=\inf_{t \in [0,t(\delta)]}J V_{\delta,m}(t,\vec{\pi})\leq \inf_{t \in [0,\infty]}J V_{\delta,m}(t,\vec{\pi})+\delta \\ &\leq \inf_{t \in [0,\infty]}\left[J V_m (t,\vec{\pi})+m \delta \int_0^{t}\P\{\sigma_1 \in ds\}\right]+\delta \\& \leq V_{m+1}(\vec{\pi})+ m \delta \int_0^{\infty}\P\{\sigma_1 \in ds\}+\delta \leq V_{m+1}(\vec{\pi})+(m+1)\delta, \end{split} \end{equation} where the first inequality follows from (\ref{JV-t1-t2}), and the second one from the induction hypothesis and the definition of the operator $J$.
\end{proof} When $\tilde{B}-\lambda_1+\lambda_0 < 0$, using Proposition~\ref{prop:new-approx} we can approximate the value function $V(\cdot)$ with the functions $\left\{V_{\delta,m}(\cdot)\right\}_{\delta>0,m\geq 1}$. There is an extra error, because we truncate at $t(\delta)$, but this can be compensated by increasing the number of iterations. Let us define \begin{equation}\label{eq:tilde-M-eps} \tilde{\mathcal{M}}_{\varepsilon}\triangleq 1+\frac{1}{\varepsilon^2}\left[1+\sqrt{\left(\frac{1}{c}+\mathbb{E}^{\vec{\pi}}\{\Theta\}\right)\max\{\lambda_0,\lambda_1\}}\right], \quad \text{and} \quad \delta_{\varepsilon} \triangleq \frac{1}{\tilde{\mathcal{M}}_{\varepsilon}\sqrt{\tilde{\mathcal{M}}_{\varepsilon}-1}}. \end{equation} Then for every $\mathcal{M} \geq \tilde{\mathcal{M}}_{\varepsilon}$ and $\delta \leq \delta_{\varepsilon}$ we have \begin{equation} \|V_{\delta,\mathcal{M}}-V\|_{\infty} \leq \|V_{\delta,\mathcal{M}}-V_{\mathcal{M}}\|_{\infty}+\|V_{\mathcal{M}}-V\|_{\infty} \leq \mathcal{M} \delta+ \sqrt{\left(\frac{1}{c}+\mathbb{E}^{\vec{\pi}}\{\Theta\}\right)\frac{\max\{\lambda_0, \lambda_1\}}{\mathcal{M}-1}} \leq \varepsilon, \end{equation} where we used Propositions~\ref{prop:sequential} and \ref{prop:new-approx}. In other words, by applying the operator $J_{0,t(\delta_{\varepsilon})}$ to the function $h(\cdot)$ $\tilde{\mathcal{M}}_{\varepsilon}$ times, we obtain an approximation of $V(\cdot)$ within $\varepsilon$-closeness on $D$. Similar arguments as in Section~\ref{sec:sequential} can be repeated to show that each $V_{\delta,m}$, for $m \ge 1$, is continuous and concave on $D$. Moreover, we can still define $\varepsilon$-optimal rules using the function $V_{\delta,\mathcal{M}}$. In particular, let us define the stopping time \begin{equation} U_{\varepsilon/2}^{(\mathcal{M},\delta)} \triangleq \inf\left\{t \geq 0: h(\vec{\Pi}_t) \leq V_{\delta,\mathcal{M}}(\vec{\Pi}_t)+\frac{\varepsilon}{2} \right\}.
\end{equation} When we take $\mathcal{M}=\tilde{\mathcal{M}}_{\varepsilon/2}$ and $\delta=\delta_{\varepsilon/2}$, this stopping time becomes an $\varepsilon$-optimal stopping time for the problem in (\ref{eq:value-function}). This follows using the same arguments as in the proof of Proposition~\ref{prop:eqp-opti}. Finally, we conclude this section with the following numerical algorithm, which summarizes the results presented here in order to approximate $V(\cdot)$. \textbf{Algorithm.} 1) If $\tilde{B}-\lambda_1+\lambda_0 \geq 0$, then choose $\mathcal{M} \geq \mathcal{M}_{\varepsilon}$, in which $\tilde{B}$ is given by (\ref{eq:Btilde}) and $\mathcal{M}_{\varepsilon}$ is given by (\ref{eq:M-eps}). 1') On the other hand, if $\tilde{B}-\lambda_1+\lambda_0 < 0$, then choose $\mathcal{M} \geq \tilde{\mathcal{M}}_{\varepsilon}$ and $\delta \leq \delta_{\varepsilon}$, in which $\tilde{\mathcal{M}}_{\varepsilon}$ and $\delta_{\varepsilon}$ are given by (\ref{eq:tilde-M-eps}). 2) Set $V_0(\cdot)=h(\cdot)$. 2') Set $V_{\delta,0}(\cdot)=h(\cdot)$. 3) Calculate $V_{m+1}(\vec{\pi})=\min_{t \in [0,t^{*}(\vec{\pi})]}J V_m(t,\vec{\pi})$, $\vec{\pi} \in D$, in which $t^{*}(\vec{\pi})$ is given in Remark~\ref{rem:time-bound} (see also (\ref{eq:uniform-bound-on-hitting-time})). 3') Calculate $V_{\delta,m+1}(\vec{\pi})=\inf_{t \in [0,t(\delta)]}J V_{\delta,m}(t,\vec{\pi})$, $\vec{\pi} \in D$, in which $t(\delta)$ is defined in (\ref{eq:t-delta}). 4) Repeat step 3 until $m=\mathcal{M}$. 4') Repeat step 3' until $m=\mathcal{M}$. If $\tilde{B}-\lambda_1+\lambda_0 \geq 0$, our algorithm returns $V_\mathcal{M}$, which satisfies $\|V_\mathcal{M}-V\|_{\infty} \leq \varepsilon$. On the other hand, if $\tilde{B}-\lambda_1+\lambda_0< 0$, the algorithm returns $V_{\delta,\mathcal{M}}$, which satisfies $\|V_{\delta,\mathcal{M}}-V\|_{\infty}\leq \varepsilon$. \section{Examples}\label{sec:examples} In this section, we provide examples illustrating the use of the numerical algorithm presented above for small values of $\varepsilon$.
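The iteration counts used in steps (1) and (1') of the algorithm above follow directly from (\ref{eq:M-eps}) and (\ref{eq:tilde-M-eps}); a minimal Python sketch (the function names are ours, and the prior mean $\mathbb{E}^{\vec{\pi}}\{\Theta\}$ is supplied as a plain number):

```python
import math

def m_eps(eps, lam0, lam1, c, mean_theta):
    """Iteration count M_eps for the case B~ - lam1 + lam0 >= 0."""
    return 1.0 + max(lam0, lam1) / eps**2 * (1.0 / c + mean_theta)

def m_tilde_and_delta(eps, lam0, lam1, c, mean_theta):
    """Iteration count and truncation level (M~_eps, delta_eps)
    for the case B~ - lam1 + lam0 < 0."""
    m = 1.0 + (1.0 + math.sqrt((1.0 / c + mean_theta) * max(lam0, lam1))) / eps**2
    return m, 1.0 / (m * math.sqrt(m - 1.0))
```

For example, with $\lambda_0=6$, $\lambda_1=5$, $c=1$ and $\mathbb{E}^{\vec{\pi}}\{\Theta\}=1$, one obtains $\mathcal{M}_{1}=13$.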
\subsection{Mixed Erlang distribution} In (\ref{eq:matrix}), let us take a particular form for $\mathcal{A}$ in which all entries are zero except $q_{ii} = - \lambda$ and $ q_{i,i+1} = \lambda$, for some rate $\lambda >0$ and for $i=1,\ldots,n$. Then, starting from any non-absorbing state $i$, the process $M$ visits all the states $i+1, i+2, \ldots$ until it eventually hits the absorbing state $\Delta$. In other words, conditioned on any initial non-absorbing state $i$, the disorder time has the Erlang distribution with shape parameter $n-i+1$ and rate $\lambda$. In this case the distribution of $\Theta$ can be explicitly given as \begin{align*} F_{\vec{\pi}}(t) = \P\{ \Theta \le t \} = \pi + \sum_{i \ne \Delta} \pi_i \cdot \int_0^t f_i(u) du, \qquad \text{in terms of} \qquad f_i (t) \triangleq \frac{ \lambda^{n+1-i} t^{n-i} }{ (n-i)! } e^{-\lambda t} , \end{align*} for $i \le n$. Moreover, the components of the deterministic path $\vec{x}(\cdot, \cdot)$ have the explicit forms \begin{align*} x_i(t,\vec{\pi}) = \frac{\sum_{j=1}^i \pi_j e^{-\lambda t} \frac{ (\lambda t)^{i-j} }{ (i-j)! } }{ \left( \sum_{k=1}^{n} \sum_{j=1}^{k} \pi_j e^{-\lambda t} \frac{ (\lambda t)^{k-j}}{ (k-j)! } \right) + e^{-(\lambda_1 - \lambda_0)t} \left( \pi + \sum_{k=1}^{n} \pi_k \int_0^t e^{(\lambda_1 - \lambda_0)u} f_k (u) du \right) }, \end{align*} for $i \le n$, and for the $(n+1)$st component $x_{\Delta}$ we have \begin{align*} x_{\Delta}(t,\vec{\pi}) = \frac{e^{-(\lambda_1 - \lambda_0)t} \left( \pi + \sum_{k=1}^{n} \pi_k \int_0^t e^{(\lambda_1 - \lambda_0)u} f_k (u) du \right) }{ \left( \sum_{k=1}^{n} \sum_{j=1}^{k} \pi_j e^{-\lambda t} \frac{ (\lambda t)^{k-j} }{ (k-j)! } \right) + e^{-(\lambda_1 - \lambda_0)t} \left( \pi + \sum_{k=1}^{n} \pi_k \int_0^t e^{(\lambda_1 - \lambda_0)u} f_k (u) du \right) }. \end{align*} Using these expressions (and assuming $\pi \ne 1$), it can be shown that if $\lambda - \lambda_1+ \lambda_0 \ge 0 $ then $x_{\Delta}(t,\vec{\pi}) \to 1$ as $t \to \infty$.
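The expressions above can be evaluated directly; in the sketch below (Python; the names and the midpoint quadrature for the time integral are our choices, not the paper's) the components of $\vec{x}(t,\vec{\pi})$ are computed for arbitrary $n$:

```python
import math

def erlang_density(u, shape, lam):
    # f_i with shape parameter n - i + 1 and rate lam
    return lam**shape * u**(shape - 1) * math.exp(-lam * u) / math.factorial(shape - 1)

def x_path_mixed_erlang(t, pi_vec, pi_abs, lam, lam0, lam1, grid=2000):
    """Evaluate (x_1, ..., x_n, x_Delta) at time t from the displayed formulas;
    the time integral is approximated by a midpoint rule on `grid` cells."""
    n = len(pi_vec)
    # numerators of the transient components (0-based indexing)
    xs = [sum(pi_vec[j] * math.exp(-lam * t) * (lam * t)**(i - j)
              / math.factorial(i - j) for j in range(i + 1))
          for i in range(n)]
    a = lam1 - lam0
    acc, du = pi_abs, t / grid
    for k in range(n):
        shape = n - k          # shape parameter n - (k+1) + 1
        acc += pi_vec[k] * sum(math.exp(a * (m + 0.5) * du)
                               * erlang_density((m + 0.5) * du, shape, lam)
                               for m in range(grid)) * du
    absd = math.exp(-a * t) * acc   # numerator of x_Delta
    total = sum(xs) + absd
    return [x / total for x in xs] + [absd / total]
```

At $t=0$ the path returns the prior $(\pi_1,\ldots,\pi_n,\pi)$, and for $\lambda_0>\lambda_1$ the last component increases along the path, in line with the discussion of Figure 1 below.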
Otherwise, we have \begin{align} \label{convergence-points-in-mixed-erlang} \lim_{t \to \infty} x_n(t,\vec{\pi}) = \frac{\lambda_1 - \lambda_0 - \lambda}{\lambda_1 - \lambda_0 } \, , \quad \text{ and} \quad \lim_{t \to \infty} x_{\Delta}(t,\vec{\pi}) = \frac{ \lambda }{\lambda_1 - \lambda_0 } . \end{align} Due to the explicit form of the paths $t \mapsto \vec{x}(t, \vec{\pi})$, the steps described in the numerical algorithm above are easy to carry out. Figure 1 below illustrates two different problems, each with two transient states. In Panels (a) and (b) of Figure 1, we see the sample path behavior and the value function of a problem where the parameters are $\lambda_0= 6$, $\lambda_1=5 $, $\lambda=3$, $c=1$. Panel (a) presents the behavior of the paths $t \to \vec{x} (t, \vec{\pi}) $ for a number of different starting points. We also plot a sample path of $\vec{\Pi}_t$ starting from a particular point. Since $\lambda_0 > \lambda_1$, the $(n+1)$st component $x_{\Delta}$ of $\vec{x}$ is increasing. In other words, as long as we do not observe any arrival, we tend to assign more likelihood to the event that the disorder has happened by then. On the other hand, when we observe an arrival, we decrease this likelihood. Moreover, since $\lambda - \lambda_1 + \lambda_0 \geq 0$, we see that the paths of $\vec{x}$ converge asymptotically to the point $(0,0,1)$ as indicated above. In this case, we use steps (1), (2), (3) and (4) of the algorithm that is presented at the end of Section~\ref{sec:algorithm} to approximate the value function to a given order of accuracy. Thanks to the properties of the approximating sequence (see Section~\ref{sec:sequential}), the concavity of the value function and the convexity of the optimal stopping boundary are preserved by our approximation. Panel (b), on the right, illustrates the (approximated) value function defined on the state space $D$ of $\vec{\Pi}$.
As the figure shows, the value function $V(\cdot)$ is non-negative and concave on $D$, and there exists a region in the neighborhood of the point $(0,0,1)$ where it coincides with the terminal reward function $h(\cdot)$. As indicated in Section 4, an ($\varepsilon$-)optimal strategy then implies that one observes the counting process $X$, and updates the process $\vec{\Pi}$ continuously until $\vec{\Pi}$ enters the region $\Gamma$. At this time, we stop and declare that the disorder has happened by then. \begin{center} \begin{tabular*}{\textwidth} {@{\extracolsep{\fill}}cc} \includegraphics[scale=0.4]{F1-p-a.eps}& \includegraphics[scale=0.65]{F1-panel-b.eps} \\ (a) & (b) \\ \text{ } & \text{ } \\ \includegraphics[scale=0.4]{F1-p-c.eps}& \includegraphics[scale=0.65]{F1-panel-d.eps} \\ (c) & (d) \\ \text{ } & \text{ } \\ \end{tabular*} \emph{\textbf{Figure 1}: Examples with mixed Erlang prior distributions. Panels (a) and (b) correspond to a problem with $\lambda_0= 6$, $\lambda_1=5 $, $\lambda=3$, $c=1$. Panel (a) represents the sample path behavior of $t \mapsto \vec{x}(t,\vec{\pi})$ and $t \mapsto \vec{\Pi}_t$. The continuous curves are possible sample paths of $\vec{x}$ starting from different points. The discontinuous path with arrows indicates the behavior of $\vec{\Pi}$. As indicated in Section 2, between two jumps, the process $\vec{\Pi}$ follows the deterministic curves of $\vec{x}$, and at jump times it switches from one curve to another. Panel (b) gives the value function $V(\cdot)$ and the stopping ($\Gamma$) and continuation ($\mathcal{C}$) regions. Similarly, Panels (c) and (d) correspond to another problem where $\lambda_0= 5$, $\lambda_1=10 $, $\lambda=3$, $c=1$.} \end{center} Panels (c) and (d), on the other hand, correspond to another sample problem where $\lambda_0= 5$, $\lambda_1=10 $, $\lambda=3$, $c=1$. In this case, we have $\lambda_1 > \lambda + \lambda_0$.
Therefore, the paths $t \mapsto \vec{x}(t,\vec{\pi})$ converge asymptotically to the point $(0,0.4,0.6)$, as indicated in (\ref{convergence-points-in-mixed-erlang}). Moreover, since $\lambda_1 > \lambda_0 $, the $(n+1)$st component of $\vec{\Pi}$ moves closer to the point $(0,0,1)$ at jump times. In this case we use steps (1'), (2'), (3') and (4') of the algorithm that we presented at the end of Section~\ref{sec:algorithm} to approximate the value function. In Panel (d), we verify our claims in Section~\ref{sec:sequential} again. That is, the value function is a concave function and the stopping region $\Gamma$ is a convex region around the point $(0,0,1)$. \subsection{Hyperexponential distribution} Let us here reconsider the formulation of the classical Poisson disorder problem with exponential prior distribution, and let us assume that the rate of this exponential distribution is not known precisely. Rather, there are $n$ possible rates $\mu_1, \mu_2, \ldots, \mu_n$ with prior likelihoods $(\pi_1, \ldots , \pi_n, \pi)$, and the aim is to detect the change time $\Theta$ by minimizing $R_{\tau}(\vec{\pi})$ in (\ref{eq:bayes-risk}). This problem can be modeled as a special case of the phase-type Poisson disorder problem if we take the column vector $r$ in (\ref{eq:matrix}) in the form $r = [\mu_1, \mu_2, \ldots, \mu_n]'$ with $\mu_i >0$ for $i=1,\ldots, n$, and let the matrix $R$ in (\ref{eq:matrix}) be the diagonal matrix $R= - \mathrm{diag}(\mu_1, \ldots, \mu_n)$. In this case, if the process $M$ starts from a transient state, it is absorbed into the state $\Delta$ at the first transition time, and conditioned on the initial state $i$ the hitting time has the exponential distribution with parameter $\mu_i$.
In this case, by direct computation it can be shown that the deterministic paths $\vec{x}(\cdot, \cdot)$ have the form \begin{align} \label{deterministic-paths-for-hyperexp-case} x_i(t,\vec{\pi}) = \frac{ \pi_i e^{-\mu_i t} }{ \left( \sum_{k=1}^{n} \pi_k e^{-\mu_k t} \right) +e^{-(\lambda_1 - \lambda_0)t} \left( \pi + \sum_{k=1}^{n} \pi_k \int_0^t e^{(\lambda_1 - \lambda_0)u} f_k (u) du \right) }, \end{align} for $1 \le i \le n$ and \begin{align*} x_{\Delta}(t,\vec{\pi}) =\frac{ e^{-(\lambda_1 - \lambda_0)t} \left( \pi + \sum_{k=1}^{n} \pi_k \int_0^t e^{(\lambda_1 - \lambda_0)u} f_k (u) du \right) }{ \left( \sum_{k=1}^{n} \pi_k e^{-\mu_k t} \right) + e^{-(\lambda_1 - \lambda_0)t} \left( \pi + \sum_{k=1}^{n} \pi_k \int_0^t e^{(\lambda_1 - \lambda_0)u} f_k (u) du \right) }, \end{align*} where $f_k (u) = \mu_k e^{- \mu_k u}$, for $1 \le k \le n$. Without loss of generality, let us assume that $\mu_1> \mu_2 > \ldots > \mu_n$. Then, on $\{ \vec{\pi} \in D: \pi_n \ne 0\}$, the path $x_i(t,\vec{\pi})$ goes to $0$ as $t \to \infty$ for $i=1,\ldots, n-1$. If $\mu_n - \lambda_1 + \lambda_0 \ge 0$, then $x_{\Delta}(t,\vec{\pi})$ converges to $1$ asymptotically, otherwise we have \begin{align} \label{convergence-points-in-hyperexponential} \lim_{t \to \infty} x_n(t,\vec{\pi}) = \frac{ \lambda_1 -\mu_n - \lambda_0 }{\lambda_1 - \lambda_0 } , \quad \text{ and} \quad \lim_{t \to \infty} x_{\Delta}(t,\vec{\pi}) = \frac{ \mu_n }{\lambda_1 - \lambda_0 } . \end{align} On the other hand, on the region $\{ \vec{\pi} \in D: \pi_n = 0, \pi_{n-1} \ne 0\}$, the above statements hold by replacing $n$ and $\mu_n$ with $n-1$ and $\mu_{n-1}$ respectively, and so on. If a non-absorbing state has the initial likelihood $\pi_i=0$, then $\Pi^{(i)}_t =0$, for all $t \ge 0$ by (\ref{deterministic-paths-for-hyperexp-case}) and (\ref{eq:rel-pi-x}). Indeed, since the disorder occurs at the first transition time, this state can be eliminated from the problem.
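Because each $f_k$ is exponential, the integral $\int_0^t e^{(\lambda_1 - \lambda_0)u} f_k(u)\,du$ has a closed form, so the paths can be evaluated without quadrature; a Python sketch (function and variable names are ours):

```python
import math

def x_path_hyperexp(t, pi_vec, pi_abs, mus, lam0, lam1):
    """Evaluate (x_1, ..., x_n, x_Delta) for the hyperexponential prior,
    using the closed form of int_0^t e^{(lam1-lam0)u} mu_k e^{-mu_k u} du."""
    a = lam1 - lam0
    trans = [p * math.exp(-mu * t) for p, mu in zip(pi_vec, mus)]
    acc = pi_abs
    for p, mu in zip(pi_vec, mus):
        if abs(a - mu) < 1e-12:
            acc += p * mu * t                      # degenerate case a = mu_k
        else:
            acc += p * mu / (a - mu) * (math.exp((a - mu) * t) - 1.0)
    absd = math.exp(-a * t) * acc
    total = sum(trans) + absd
    return [x / total for x in trans] + [absd / total]
```

For the parameters of Panels (c) and (d) of Figure 2 ($\mu_1=3$, $\mu_2=2$, $\lambda_0=2$, $\lambda_1=6$), the path indeed approaches $(0,0.5,0.5)$, in accordance with (\ref{convergence-points-in-hyperexponential}).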
Figure 2 presents two numerical examples with two transient states. In Panels (a) and (b), we see the value function and the paths $\vec{x}(\cdot, \cdot)$ of a problem where the parameters are $\mu_1=3$, $\mu_2=2$, $\lambda_0=2$, $\lambda_1=1$, $c=1.5$. Between two jumps, the process $\vec{\Pi}$ follows the paths $t \mapsto \vec{x}(t, \vec{\pi})$, which are converging to the point $(0,0,1)$ asymptotically. Moreover, since $\lambda_1 < \lambda_0$, the process $\vec{\Pi}$ jumps away from this point, and we decrease the conditional likelihood of the disorder event at arrival times of $X$. In this case, we use steps (1), (2), (3) and (4) of the algorithm that is presented at the end of Section~\ref{sec:algorithm} to approximate the value function. In Panel (b), we observe that the value function is concave, and the stopping region is a convex region with non-empty interior around the point $(0,0,1)$, as indicated in Section~\ref{sec:sequential}. \begin{center} \begin{tabular*}{\textwidth} {@{\extracolsep{\fill}}cc} \includegraphics[scale=0.4]{F2-p-a.eps}& \includegraphics[scale=0.65]{F2-panel-b.eps} \\ (a) & (b) \\ \text{ } & \text{ } \\ \includegraphics[scale=0.4]{F2-p-c.eps}& \includegraphics[scale=0.65]{F2-panel-d.eps} \\ (c) & (d) \\ \text{ } & \text{ } \\ \end{tabular*} \emph{\textbf{Figure 2}: Examples with hyperexponential prior distributions. In panels (a) and (b) we see the sample path properties and the value function of a problem where $\mu_1=3$, $\mu_2=2$, $\lambda_0=2$, $\lambda_1=1$, $c=1.5$. Continuous paths in Panel (a) are the paths of $t \mapsto \vec{x}(t,\vec{\pi})$ for different starting points. The discontinuous path with arrows is a sample path of $t \mapsto \vec{\Pi}_t$. Panel (b) illustrates the value function $V(\cdot)$ and the stopping ($\Gamma$) and continuation ($\mathcal{C}$) regions.
Similarly, in Panels (c) and (d) we see another problem with $\mu_1=3$, $\mu_2=2$, $\lambda_0=2$, $\lambda_1=6$, $c=1.5$.} \end{center} In Panels (c) and (d) we have another problem whose parameters are $\mu_1=3$, $\mu_2=2$, $\lambda_0=2$, $\lambda_1=6$, $c=1.5$. In this case we have $\mu_n - \lambda_1 + \lambda_0 < 0$. Hence, in accordance with (\ref{convergence-points-in-hyperexponential}), we see that the paths $t \mapsto \vec{x}(t,\vec{\pi})$ converge to the point $(0, 0.5, 0.5)$. Also, since $\lambda_1 > \lambda_0$, at the jump times of $X$, the process $\vec{\Pi}$ jumps towards the point $(0,0,1)$ and the conditional probability of the disorder event is increased. In this case we use steps (1'), (2'), (3') and (4') of the algorithm that we presented at the end of Section~\ref{sec:algorithm} to approximate the value function. In Panel (d), we verify once again the concavity of the value function and the convexity of the stopping region around the point $(0,0,1)$. The non-smooth behavior of the value function on the region where $\pi_1 =0$ is in accordance with Lemma 7.1 of \cite{ds}. On this region, the problem is essentially one with a single non-absorbing state. The point $\vec{\pi}= (\,0,1- \mu_2 / (\lambda_1 - \lambda_0) , \mu_2 / (\lambda_1 - \lambda_0) \,) = (0, 0.5, 0.5)$ falls into the continuation region, and the value function is not differentiable at the boundary point on this line segment.
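In both examples the computation reduces to the same loop: initialize with $h$ and repeatedly minimize $Jw(t,\vec{\pi})$ over a grid of $t$ values. A schematic Python skeleton follows (the operator $J$, the $t$-grid, and the discretization of the state space $D$ are left abstract and must be supplied for the concrete model at hand):

```python
def value_iteration(h, J, t_grid, points, M):
    """Schematic version of steps (2)-(4) / (2')-(4'):
    V_0 = h and V_{m+1}(pi) = min over the t-grid of J(V_m, t, pi).

    `J(V, t, pi)` must evaluate the operator J of the model at hand
    (e.g. by quadrature along the closed-form paths of this section);
    `points` is a finite discretization of the state space D."""
    V = {p: h(p) for p in points}
    for _ in range(M):
        V = {p: min(J(V, t, p) for t in t_grid) for p in points}
    return V
```

With $t=0$ included in the grid, the candidate $Jw(0,\vec{\pi})=h(\vec{\pi})$ of stopping immediately is always among the values compared, so the iterates remain dominated by $h$, as in the exact scheme.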
\section{Appendix} \noindent \textbf{Proof of Lemma~\ref{lem:concave}.} Using (\ref{eq:alternative-esp-for-x}) and (\ref{eq:distribution-of-first-jump}) we write \begin{equation}\label{eq:first-term} \begin{split} \int_0^{t}\P\{s \leq \sigma_1\}&k(\vec{x}(s,\vec{\pi}))ds=\int_0^{t}e^{-\lambda_0 s}\cdot \mathbb{E}^{\vec{\pi}}\left\{e^{-(\lambda_1-\lambda_0)(s-\Theta)^+}\right\}k(\vec{x}(s,\vec{\pi}))ds \\&=c \int_0^{t} e^{-\lambda_0 s}\, \mathbb{E}^{\vec{\pi}}\left\{e^{-(\lambda_1-\lambda_0)(s-\Theta)^+}\right\}\frac{\mathbb{E}^{\vec{\pi}}\{1_{\{s \geq \Theta \}}e^{-(\lambda_1-\lambda_0)(s-\Theta)}\}}{\mathbb{E}^{\vec{\pi}}\{e^{-(\lambda_1-\lambda_0)(s-\Theta)^+}\}}\,ds \\& = \frac{c \pi}{\lambda_1}(1-e^{-\lambda_1 t})+ c \sum_{i=1}^{n}\pi_i\int_0^{t} e^{-\lambda_0 s} \left(\int_0^{s} e^{-(\lambda_1-\lambda_0)(s-u)}f_i(u) du \right)ds, \end{split} \end{equation} where $f_i(\cdot)$ is the probability density function of $\Theta$, given that $M_0=i$. Therefore, the mapping $\vec{\pi} \rightarrow \int_0^{t}\P\{s \leq \sigma_1\}\, k(\vec{x}(s,\vec{\pi}))\,ds$ is linear. Next, we observe that \begin{equation}\label{eq:h-P} \begin{split} h(\vec{x}(t,\vec{\pi})) \, \P\{t<\sigma_1\}&=(1-x_{\Delta}(t,\vec{\pi}))\cdot e^{-\lambda_0 t} \cdot \mathbb{E}^{\vec{\pi}}\left\{e^{-(\lambda_1-\lambda_0)(t-\Theta)^+}\right\} \\&=e^{-\lambda_0 t} \cdot \mathbb{E}^{\vec{\pi}}\left\{1_{\{t<\Theta\}}\right\}=e^{-\lambda_0 t}\sum_{i=1}^{n}\pi_i \int_t^{\infty}f_i(s)ds, \end{split} \end{equation} hence the mapping \begin{equation}\label{eq:second-term} \vec{\pi}\rightarrow h(\vec{x}(t,\vec{\pi}))\P\{t<\sigma_1\} \quad \text{is linear}. \end{equation} Finally, let $w(\cdot)$ be a positive and concave function. Then it can be written as \begin{equation} w(\vec{\pi})=\inf_{k \in K}\left(\beta_0^{(k)}+\beta_1^{(k)}\pi_1+\cdots+\beta_n^{(k)} \pi_n+\beta_{\Delta}^{(k)}\pi \right), \end{equation} for some index set $K$ and constants $\beta_j^{(k)}$.
Using this representation of $w(\cdot)$, (\ref{eq:alternative-esp-for-x}), and (\ref{eq:sigma-density}) we obtain \begin{equation} \label{eq:int-of-Sw-in-explicit-form} \begin{split} &\int_0^{t}\P\{\sigma_1 \in ds\} Sw(\vec{x}(s,\vec{\pi})) \\ &=\int_0^{t}\P\{\sigma_1 \in ds\}w\left(\frac{\lambda_0 x_1(s,\vec{\pi})}{\lambda_0(1-x_{\Delta}(s,\vec{\pi}))+\lambda_1 x_{\Delta}(s,\vec{\pi})},\cdots , \frac{\lambda_1 x_{\Delta}(s,\vec{\pi})}{\lambda_0(1-x_{\Delta}(s,\vec{\pi}))+\lambda_1 x_{\Delta}(s,\vec{\pi})}\right) \\&=\int_0^{t}ds \, e^{-\lambda_0 s}\left[\lambda_0 \mathbb{E}^{\vec{\pi}}\{1_{\{s<\Theta\}}\}+\lambda_1 \mathbb{E}^{\vec{\pi}}\left\{1_{\{s \geq \Theta\}} e^{-(\lambda_1-\lambda_0)(s-\Theta)}\right\}\right] \cdot \\ &\qquad \qquad \qquad \inf_{k \in K}\left[\beta_0^{(k)}+\frac{\beta_1^{(k)}\lambda_0 \mathbb{E}^{\vec{\pi}}\{1_{\{M_s=1\}}\} +\cdots+\beta_{\Delta}^{(k)}\lambda_1\mathbb{E}^{\vec{\pi}}\left\{1_{\{s \geq \Theta\}}e^{-(\lambda_1-\lambda_0)(s-\Theta)}\right\}}{\lambda_0\mathbb{E}^{\vec{\pi}}\left\{ 1_{\{s<\Theta\}}\right\}+\lambda_1 \mathbb{E}^{\vec{\pi}}\left\{1_{\{s \geq \Theta\}} e^{-(\lambda_1-\lambda_0)(s-\Theta)}\right\} } \right] \\&=\int_0^{t}ds \, e^{-\lambda_0 s} \inf_{k \in K}\bigg(\beta_0^{(k)}\left(\lambda_0 \mathbb{E}^{\vec{\pi}}\{1_{\{s<\Theta\}}\}+\lambda_1 \mathbb{E}^{\vec{\pi}}\left\{1_{\{s \geq \Theta\}} e^{-(\lambda_1-\lambda_0)(s-\Theta)}\right\}\right) \\ &\qquad \qquad \qquad \qquad \qquad +\beta_1^{(k)}\lambda_0 \mathbb{E}^{\vec{\pi}}\{1_{\{M_s=1\}}\} +\cdots+\beta_{\Delta}^{(k)}\lambda_1\mathbb{E}^{\vec{\pi}}\left\{1_{\{s \geq \Theta\}}e^{-(\lambda_1-\lambda_0)(s-\Theta)}\right\}\bigg). \end{split} \end{equation} Note that the term inside the parentheses is linear in $\vec{\pi}$ due to (\ref{def:P-pi}). Hence it follows that \begin{equation} \vec{\pi} \rightarrow \int_0^{t}\P\{\sigma_1 \in ds\}Sw(\vec{x}(s,\vec{\pi})) \quad \text{is concave,} \end{equation} since the lower envelope of linear functions is concave.
As a sum of three concave mappings, $\vec{\pi} \rightarrow Jw(t,\vec{\pi})$ is concave for all $t \geq 0$. Also, as the lower envelope of concave functions, the mapping $\vec{\pi} \rightarrow J_0 w(\vec{\pi})=\inf_{t \geq 0}J w(t,\vec{\pi})$ is again concave. \hfill $\square$ \\ \noindent\textbf{Proof of Lemma~\ref{lem:cont}.} Let $w(\cdot)$ be a bounded continuous function. Then as in (\ref{eq:int-of-Sw-in-explicit-form}), we have \begin{equation} \label{eq:int-of-Sw} \begin{split} \int_0^{t}\P\{\sigma_1 \in ds\} Sw(\vec{x}(s,\vec{\pi}))=\int_0^{t} \lambda_0 e^{-\lambda_0 s} \left( \sum_{i=1}^{n} \pi_i \int_s^{\infty}f_i(u)du \right) Sw(\vec{x}(s,\vec{\pi}))ds \, + \\ \int_0^{t}\lambda_1 \left(\pi e^{-\lambda_1 s}+ e^{-\lambda_0 s} \sum_{i=1}^{n}\pi_i\int_0^{s}e^{-(\lambda_1-\lambda_0)(s-u)}f_i(u)\,du \right) S w(\vec{x}(s,\vec{\pi}))ds, \end{split} \end{equation} where $f_i(\cdot)$ is the probability density function of $\Theta$, given that $M_0=i$. Then using (\ref{eq:first-term}), (\ref{eq:h-P}) and (\ref{eq:int-of-Sw}), it can be easily verified that the mapping $(t,\vec{\pi}) \rightarrow Jw(t,\vec{\pi})$ is jointly continuous on $\mathbb R_+ \times D$. The mapping $(t,\vec{\pi}) \rightarrow Jw(t,\vec{\pi})$ is then uniformly continuous on $[0,k]\times D$ for all $k \in \mathbb{N}$. Therefore, the mapping \begin{equation}\label{eq:J-k-cont} \vec{\pi} \rightarrow J_{0,k}w (\vec{\pi}) = \inf_{t \in [0,k]} J w(t,\vec{\pi})\quad \text{is continuous on $D$}.
\end{equation} On the other hand, using (\ref{eq:exp-for-J-w}) and (\ref{eq:h-P}) we can write \begin{equation}\label{eq:bound-J-w} \begin{split} J w(t,\vec{\pi})&=Jw(t \wedge k, \vec{\pi})+\int_{t \wedge k }^{t}\P\{s \leq \sigma_1\} k(\vec{x}(s,\vec{\pi}))ds+ \\& e^{-\lambda_0 t} \sum_{i\neq \Delta} \pi_i \left(\int_t^{\infty}f_i(s)ds- \int_{t \wedge k }^{\infty}f_i(s)ds\right)+\int_{t \wedge k}^{t}\P\{\sigma_1 \in ds\} Sw(\vec{x}(s,\vec{\pi})) \\ &\geq Jw (t \wedge k,\vec{\pi})-e^{-\lambda_0 k} \sum_{i=1}^{n}\pi_i \int_{t \wedge k}^{\infty}f_i(s)ds - (\lambda_0 \vee \lambda_1) \int_{t \wedge k}^t e^{- (\lambda_0 \wedge \lambda_1)s } \cdot ||w|| \, ds \\ &\geq Jw (t \wedge k,\vec{\pi})-e^{-(\lambda_0 \wedge \lambda_1) k} \left( 1 + \frac{\lambda_0 \vee \lambda_1}{\lambda_0 \wedge \lambda_1} \, ||w||\right), \end{split} \end{equation} where $||w|| \triangleq \sup_{\vec{\pi} \in D} |w(\vec{\pi})|$. By taking the infimum over $t$ on both sides of (\ref{eq:bound-J-w}) we get \begin{equation}\label{eq:J-k-uniform} J_{0,k}w (\vec{\pi}) \geq J_0 w(\vec{\pi}) \geq J_{0,k} w (\vec{\pi}) -e^{-(\lambda_0 \wedge \lambda_1) k}\left( 1 + \frac{\lambda_0 \vee \lambda_1}{\lambda_0 \wedge \lambda_1} \, ||w||\right), \end{equation} which implies that $J_{0,k}w(\cdot) \rightarrow J_0 w(\cdot)$ uniformly on $D$. This fact together with (\ref{eq:J-k-cont}) implies that $\vec{\pi} \rightarrow J_0 w (\vec{\pi})$ is continuous on $D$.
\hfill $\square$ \\ \noindent \textbf{Proof of Proposition~\ref{prop:V-n-epsilon}.} First, we will prove (\ref{eq:eps-opt}) by induction on $m \in \mathbb{N}$. For $m=1$ the left-hand side of (\ref{eq:eps-opt}) becomes \begin{equation} \label{prof-for-m-1} \begin{split} \mathbb{E}^{\vec{\pi}}\left\{\int_0^{S_1^{\varepsilon}}k(\vec{\Pi}_t)dt+h(\vec{\Pi}_{S_1^{\varepsilon}})\right\} &= \mathbb{E}^{\vec{\pi}} \left\{\int_0^{r_0^{\varepsilon}(\vec{\pi})\wedge \sigma_1}k(\vec{\Pi}_t)dt+h(\vec{\Pi}_{r^{\varepsilon}_0(\vec{\pi})\wedge \sigma_1 })\right\} \\ & =J v_0 (r_0^{\varepsilon}(\vec{\pi}), \vec{\pi}) \leq J_0 v_0 (\vec{\pi})+\varepsilon=v_1(\vec{\pi})+\varepsilon, \end{split} \end{equation} where we used (\ref{eq:defn-J}), (\ref{eq:seq-of-func}) and (\ref{eq:defn-r-m}), together with Remark~\ref{rem:inf-J-is-attained} for the inequality above. This shows that (\ref{eq:eps-opt}) holds for $m=1$. Now, suppose that (\ref{eq:eps-opt}) holds for every $\varepsilon > 0$ and for some $m \geq 1$. We will prove that it also holds when $m$ is replaced by $m+1$.
Since $S_{m+1}^{\varepsilon} \wedge \sigma_1=r_{m}^{\varepsilon/2}(\vec{\pi}) \wedge \sigma_1$, we have \begin{equation}\label{eq:derovation-f-m} \begin{split} &\mathbb{E}^{\vec{\pi}}\left\{\int_{0}^{S_{m+1}^{\varepsilon}}k(\vec{\Pi}_t)dt+ h(\vec{\Pi}_{S_{m+1}^{\varepsilon}})\right\} \\&=\mathbb{E}^{\vec{\pi}}\bigg\{\int_{0}^{S_{m+1}^{\varepsilon} \wedge \sigma_1 }k(\vec{\Pi}_t)dt+1_{\left\{S_{m+1}^{\varepsilon} \geq \sigma_1 \right\}}\left[\int_{\sigma_1}^{S_{m+1}^{\varepsilon}}k(\vec{\Pi}_t)dt+h(\vec{\Pi}_{S_{m+1}^{\varepsilon}})\right] +h(\vec{\Pi}_{S_{m+1}^{\varepsilon}}) 1_{\{S_{m+1}^{\varepsilon}<\sigma_1\}} \bigg\} \\&=\mathbb{E}^{\vec{\pi}}\bigg\{\int_{0}^{r_{m}^{\varepsilon/2}(\vec{\pi}) \wedge \sigma_1 }k(\vec{\Pi}_t)dt+1_{\left\{r_{m}^{\varepsilon/2}(\vec{\pi}) \geq \sigma_1 \right\}}\left[\int_{\sigma_1}^{\sigma_1+S_m^{\varepsilon/2} \circ \theta_{\sigma_1} }k(\vec{\Pi}_t)dt+h(\vec{\Pi}_{\sigma_1+S_{m}^{\varepsilon/2}\circ \theta_{\sigma_1} })\right] \\&\qquad \qquad \qquad+h(\vec{\Pi}_{r_{m}^{\varepsilon/2}(\vec{\pi}) \wedge \sigma_1}) 1_{\{r_{m}^{\varepsilon/2}(\vec{\pi})<\sigma_1\}} \bigg\} \\&=\mathbb{E}^{\vec{\pi}}\bigg\{\int_{0}^{r_{m}^{\varepsilon/2}(\vec{\pi}) \wedge \sigma_1 }k(\vec{\Pi}_t)dt+h(\vec{\Pi}_{r_{m}^{\varepsilon/2}(\vec{\pi}) \wedge \sigma_1}) 1_{\{r_{m}^{\varepsilon/2}(\vec{\pi})<\sigma_1\}}\bigg\}+\mathbb{E}^{\vec{\pi}}\left\{1_{\left\{r_{m}^{\varepsilon/2}(\vec{\pi}) \geq \sigma_1 \right\}} f_{m}(\vec{\Pi}_{\sigma_1})\right\}, \end{split} \end{equation} in which \begin{equation} f_{m}(\vec{\pi})=\mathbb{E}^{\vec{\pi}}\left\{\int_0^{S_m^{\varepsilon/2}}k(\vec{\Pi}_t)dt+h(\vec{\Pi}_{S_m^{\varepsilon/2}})\right\} \leq v_{m}(\vec{\pi})+\varepsilon/2, \end{equation} where the inequality follows from the induction hypothesis, and the last line of (\ref{eq:derovation-f-m}) follows from the strong Markov property of the process $\vec{\Pi}$.
Then we obtain \begin{multline*} \mathbb{E}^{\vec{\pi}}\left\{\int_{0}^{S_{m+1}^{\varepsilon}}k(\vec{\Pi}_t)dt+ h(\vec{\Pi}_{S_{m+1}^{\varepsilon}})\right\} \leq \mathbb{E}^{\vec{\pi}}\bigg\{\int_{0}^{r_{m}^{\varepsilon/2}(\vec{\pi}) \wedge \sigma_1 }k(\vec{\Pi}_t)dt+h(\vec{\Pi}_{r_{m}^{\varepsilon/2}(\vec{\pi}) \wedge \sigma_1}) 1_{\{r_{m}^{\varepsilon/2}(\vec{\pi})<\sigma_1\}}\bigg\} \\ +\mathbb{E}^{\vec{\pi}}\left\{1_{\left\{r_{m}^{\varepsilon/2}(\vec{\pi}) \geq \sigma_1 \right\}} v_{m}(\vec{\Pi}_{\sigma_1})\right\}+\frac{\varepsilon}{2} =J v_m (r_{m}^{\varepsilon/2}(\vec{\pi}),\vec{\pi})+\frac{\varepsilon}{2} \leq v_{m+1}(\vec{\pi})+\varepsilon, \end{multline*} where the equality follows from the definition of the operator $J$ in (\ref{eq:defn-J}) and the last inequality follows from (\ref{eq:defn-r-m}). This concludes the proof of (\ref{eq:eps-opt}). The inequality $V_n \leq v_n$ follows immediately from (\ref{eq:eps-opt}) since $S_n^{\varepsilon} \leq \sigma_n$ by construction. Let us prove the opposite inequality $V_n \geq v_n$. First, we will establish \begin{equation}\label{eq:V-vs-v} \mathbb{E}^{\vec{\pi}}\left\{\int_0^{\tau \wedge \sigma_m}k(\vec{\Pi}_t)dt+h (\vec{\Pi}_{\tau \wedge \sigma_m})\right\} \geq v_m(\vec{\pi}), \end{equation} for every $m \in \mathbb{N}$, by showing that \begin{equation}\label{eq:V-vs-v-int} \begin{split} &\mathbb{E}^{\vec{\pi}}\left\{\int_0^{\tau \wedge \sigma_m}k(\vec{\Pi}_t)dt+h (\vec{\Pi}_{\tau \wedge \sigma_m})\right\} \\ & \geq \mathbb{E}^{\vec{\pi}}\left\{\int_0^{\tau \wedge \sigma_{m-k+1}}k(\vec{\Pi}_t)dt+1_{\{\tau \geq \sigma_{m-k+1}\}} v_{k-1}(\vec{\Pi}_{\sigma_{m-k+1}})+1_{\{\tau < \sigma_{m-k+1}\}}h(\vec{\Pi}_{\tau}) \right\}=:RHS_{k-1}, \end{split} \end{equation} for $k=1, \ldots, m+1$. Note that (\ref{eq:V-vs-v}) follows from (\ref{eq:V-vs-v-int}) when we take $k=m+1$. For $k=1$, (\ref{eq:V-vs-v-int}) is satisfied as an equality since $v_0(\cdot)=h(\cdot)$. 
Now, let us suppose that (\ref{eq:V-vs-v-int}) holds for some $1 \leq k < m+1$, and let us prove that it also holds for $k+1$. Note that $RHS_{k-1}$ can be decomposed as \begin{equation} RHS_{k-1}=RHS_{k-1}^{(1)}+RHS_{k-1}^{(2)}, \end{equation} in which \begin{equation} \begin{split} RHS_{k-1}^{(1)} & \triangleq \mathbb{E}^{\vec{\pi}}\left\{\int_0^{\tau \wedge \sigma_{m-k}}k(\vec{\Pi}_t)dt+ 1_{\{\tau<\sigma_{m-k}\}}h(\vec{\Pi}_{\tau})\right\}, \\RHS_{k-1}^{(2)} & \triangleq \mathbb{E}^{\vec{\pi}}\Bigg\{ 1_{\{\tau \geq \sigma_{m-k}\}}\Big[\int_{\sigma_{m-k}}^{\tau \wedge \sigma_{m-k+1}}k(\vec{\Pi}_t)dt \\ &\qquad \qquad +1_{\{\tau \geq \sigma_{m-k+1} \}}v_{k-1} (\vec{\Pi}_{\sigma_{m-k+1}})+1_{\{\tau<\sigma_{m-k+1}\}}h(\vec{\Pi}_{\tau})\Big]\Bigg\}. \end{split} \end{equation} By Lemma~\ref{lem:bremaud}, there exists an $\mathcal F_{\sigma_{m-k}}$-measurable random variable $R_{m-k}$ such that \begin{align*} \tau \wedge \sigma_{m-k+1}=(\sigma_{m-k}+R_{m-k}) \wedge \sigma_{m-k+1} \quad \text{ on } \; \{\tau \geq \sigma_{m-k}\}. \end{align*} Then $RHS_{k-1}^{(2)}$ can be written as $RHS_{k-1}^{(2)}=$ \begin{equation}\label{eq:RHS-k-1-2} \begin{split} & \mathbb{E}^{\vec{\pi}}\bigg\{ 1_{\{\tau \geq \sigma_{m-k}\}}\bigg[\int_{\sigma_{m-k}}^{(\sigma_{m-k}+R_{m-k}) \wedge \sigma_{m-k+1}}k(\vec{\Pi}_t)dt+1_{\{\sigma_{m-k}+R_{m-k} \geq \sigma_{m-k+1}\}}v_{k-1} (\vec{\Pi}_{\sigma_{m-k+1}}) \\ &+1_{\{\sigma_{m-k}+R_{m-k}<\sigma_{m-k+1}\}}h(\vec{\Pi}_{\sigma_{m-k}+R_{m-k}})\bigg]\bigg\} =\mathbb{E}^{\vec{\pi}}\left\{1_{\{\tau \geq \sigma_{m-k}\}}g_{m-k}(R_{m-k},\vec{\Pi}_{\sigma_{m-k}})\right\}, \end{split} \end{equation} in which \begin{equation} \begin{split} g_{m-k}(r,\vec{\pi})& \triangleq \mathbb{E}^{\vec{\pi}}\left\{\int_0^{r \wedge \sigma_1}k(\vec{\Pi}_t)dt+1_{\{r \geq \sigma_1\}}v_{k-1}(\vec{\Pi}_{\sigma_1})+1_{\{r < \sigma_1\}}h(\vec{\Pi}_{r}) \right\} \\&=J v_{k-1}(r, \vec{\pi}) \geq J_{0} v_{k-1}(\vec{\pi})=v_{k}(\vec{\pi}). 
\end{split} \end{equation} The second equality in (\ref{eq:RHS-k-1-2}) follows from the strong Markov property of the process $\vec{\Pi}$ and the fact that the jump times of the observation process $X$ and those of $\vec{\Pi}$ are the same. Therefore, the expression for $RHS_{k-1}^{(2)}$ is bounded below as \begin{equation} RHS_{k-1}^{(2)} \geq \mathbb{E}^{\vec{\pi}}\left\{ 1_{\{\tau \geq \sigma_{m-k}\}} v_{k}(\vec{\Pi}_{\sigma_{m-k}})\right\}. \end{equation} Hence, \begin{multline} \mathbb{E}^{\vec{\pi}}\Big\{\int_0^{\tau \wedge \sigma_m}k(\vec{\Pi}_t)dt + h (\vec{\Pi}_{\tau \wedge \sigma_m})\Big\} \\ \geq \mathbb{E}^{\vec{\pi}}\left\{\int_0^{\tau \wedge \sigma_{m-k}} k(\vec{\Pi}_t)dt+1_{\{\tau < \sigma_{m-k}\}}h (\vec{\Pi}_{\tau})+1_{\{\tau \geq \sigma_{m-k}\}}v_{k}(\vec{\Pi}_{\sigma_{m-k}})\right\}=RHS_{k}. \end{multline} This completes the proof of (\ref{eq:V-vs-v-int}) by induction. Equation (\ref{eq:V-vs-v}) follows when we set $k=m+1$. Finally, taking the infimum over $\tau$ on the left-hand side of (\ref{eq:V-vs-v}), we arrive at the desired inequality $V_n \geq v_n$. \hfill $\square$ \\ \noindent \textbf{Proof of Proposition~\ref{prop:v-V}.} Using Proposition~\ref{prop:sequential}, Corollary~\ref{cor:little-v-n} and Proposition~\ref{prop:V-n-epsilon} we obtain \begin{equation} v(\vec{\pi})=\lim_{m \rightarrow \infty}v_{m}(\vec{\pi})=\lim_{m \rightarrow \infty}V_m(\vec{\pi})=V(\vec{\pi}), \quad \vec{\pi} \in D, \end{equation} which proves the first statement of the proposition. 
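The chain $v=\lim_{m\rightarrow\infty} v_m=V$ just established is a successive-approximation (value-iteration) scheme. As a purely illustrative aside, the analogous discrete-time iteration $v_0=h$, $v_{m+1}=\min\{h,\,k+Pv_m\}$ can be sketched as follows; the transition matrix $P$ and the costs $k$, $h$ below are hypothetical choices, not taken from this paper:

```python
import numpy as np

# Discrete-time analogue of the successive approximations v_{m+1} = J_0 v_m:
# at each state, either stop (pay h) or pay the running cost k and let the
# chain take one step.  P, k, h are illustrative choices.
P = np.array([
    [0.5,  0.5,  0.0,  0.0,  0.0],
    [0.25, 0.5,  0.25, 0.0,  0.0],
    [0.0,  0.25, 0.5,  0.25, 0.0],
    [0.0,  0.0,  0.25, 0.5,  0.25],
    [0.0,  0.0,  0.0,  0.5,  0.5],
])
k = np.full(5, 0.05)                       # running cost per step
h = np.array([0.0, 1.0, 1.0, 1.0, 0.0])    # stopping cost

v = h.copy()                               # v_0 = h
for _ in range(300):                       # v_{m+1} = min(h, k + P v_m)
    v_new = np.minimum(h, k + P @ v)
    if np.max(np.abs(v_new - v)) < 1e-12:
        break
    v = v_new

# The decreasing limit solves V = min(h, k + P V), the discrete counterpart
# of the fixed-point equation U = J_0 U, and satisfies V <= h.
assert np.allclose(v, np.minimum(h, k + P @ v))
```

The iterates decrease monotonically from $h$ and converge to the largest fixed point dominated by $h$, mirroring the comparison argument in the proof of the second statement.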
To prove the second statement, we note that the sequence $\{v_{m}\}_{m \geq 1}$ is decreasing and \begin{equation}\label{eq:V-J-0-V} \begin{split} &V(\vec{\pi})=v(\vec{\pi})=\inf_{m \geq 1}v_{m}(\vec{\pi})=\inf_{m \geq 1}\inf_{t \geq 0}J v_{m-1}(t,\vec{\pi})=\inf_{t \geq 0}\inf_{m \geq 1}J v_{m-1}(t,\vec{\pi}) \\ & =\inf_{t \geq 0}\inf_{m\geq 1} \Bigg\{\int_0^{t}\P\left\{s \leq \sigma_1\right\}k(\vec{x}(s,\vec{\pi}))ds \\ &\qquad \qquad \qquad \qquad \qquad + \int_0^{t}\P\left\{\sigma_1 \in ds \right\}S v_{m-1}(\vec{x}(s,\vec{\pi}))+\P\left\{t<\sigma_1\right\}h(\vec{x}(t,\vec{\pi}))\Bigg\} \\&=\inf_{t \geq 0}\Bigg\{\int_0^{t}\P\left\{s \leq \sigma_1\right\}k(\vec{x}(s,\vec{\pi}))ds+\int_0^{t}\P\left\{\sigma_1 \in ds \right\}S v(\vec{x}(s,\vec{\pi}))+\P\left\{t<\sigma_1\right\}h(\vec{x}(t,\vec{\pi}))\Bigg\} \\&=\inf_{t \geq 0}Jv(t,\vec{\pi})=J_0 v (\vec{\pi}). \end{split} \end{equation} This proves that $V$ is a solution of $U=J_0 U$. The replacement of $S v_{m-1}$ by $S v$ under the integral sign in (\ref{eq:V-J-0-V}) is justified by the bounded convergence theorem, since the sequence $\{v_m\}$ decreases to $v$. Next, let $U$ be a solution of $U=J_0 U$ such that $ U \leq h$. Then by Remark~\ref{rem:monotone} we have $U=J_0 U \leq J_0 h =v_1$. Now, suppose $U \leq v_m$ for some $m \geq 0$, then $U=J_0U \leq J_0 v_m=v_{m+1}$. By induction, we conclude that $U \leq v_m$ for all $m \geq 1$, and therefore $U \leq \lim_{m \rightarrow \infty}v_m=v=V$. \hfill $\square$ \\ \noindent \textbf{Proof of Lemma~\ref{lem:dyn-p-J-t-J}.} Let us fix a constant $u \geq t$ and $\vec{\pi} \in D$. Then \begin{multline} \label{eq:Ju} J w (u,\vec{\pi})= \mathbb{E}^{\vec{\pi}}\left\{\int_0^{u \wedge \sigma_1}k(\vec{\Pi}_s)ds+ 1_{\{u<\sigma_1\}}h(\vec{\Pi}_u)+ 1_{\{u \geq \sigma_1\}}w(\vec{\Pi}_{\sigma_1}) \right\} \\ = \mathbb{E}^{\vec{\pi}}\left\{\int_0^{t \wedge \sigma_1}k(\vec{\Pi}_s)ds+ 1_{\{u<\sigma_1\}}h(\vec{\Pi}_u)+ 1_{\{u \geq \sigma_1\}}w(\vec{\Pi}_{\sigma_1}) \right\} + \mathbb{E}^{\vec{\pi}}\left\{1_{\{\sigma_1>t\}}\int_t^{u \wedge \sigma_1}k(\vec{\Pi}_s)ds\right\}. 
\end{multline} On the event $\{\sigma_1>t\}$, we have $u \wedge \sigma_1=t+\left\{(u-t)\wedge (\sigma_1 \circ \theta_t)\right\}$. Therefore, the strong Markov property implies \begin{equation} \begin{split}\label{eq:int-u-t-w-wedge-sigma1} &\mathbb{E}^{\vec{\pi}}\left\{1_{\{\sigma_1>t\}}\int_t^{u \wedge \sigma_1}k(\vec{\Pi}_s)ds\right\}=\mathbb{E}^{\vec{\pi}}\left\{1_{\{\sigma_1>t\}}\mathbb{E}^{\vec{\Pi}_t}\left\{\int_0^{(u-t)\wedge \sigma_1}k(\vec{\Pi}_s)ds\right\}\right\} \\ &=\mathbb{E}^{\vec{\pi}}\left\{1_{\{\sigma_1>t\}}\left[J w (u-t, \vec{\Pi}_t)-\mathbb{E}^{\vec{\Pi}_t}\left\{1_{\{u-t<\sigma_1\}}h(\vec{\Pi}_{u-t})+1_{\{u-t \geq \sigma_1\}}w(\vec{\Pi}_{\sigma_1})\right\}\right]\right\} \\&=\P\{\sigma_1>t\}J w(u-t,\vec{x}(t,\vec{\pi}))-\mathbb{E}^{\vec{\pi}}\left\{1_{\{u<\sigma_1\}}h(\vec{\Pi}_u)\right\}-\mathbb{E}^{\vec{\pi}}\left\{1_{\{\sigma_1>t\}} 1_{\{u \geq \sigma_1\}}w(\vec{\Pi}_{\sigma_1})\right\}, \end{split} \end{equation} where the second equality follows from the definition of the operator $J$, and the third from (\ref{eq:rel-pi-x}) and the strong Markov property. Substituting (\ref{eq:int-u-t-w-wedge-sigma1}) into (\ref{eq:Ju}), after some simplification, yields \begin{equation} J w (u, \vec{\pi})=Jw (t,\vec{\pi})+\P\left\{\sigma_1>t\right\}\left[J w(u-t,\vec{x}(t,\vec{\pi}))-h(\vec{x}(t,\vec{\pi}))\right]. \end{equation} Now, taking the infimum of both sides over $u \in [t,\infty]$ concludes the proof. \hfill $\square$ \\ \noindent \textbf{Proof of Corollary~\ref{cor:r-m}.} Note that by Remark~\ref{rem:inf-J-is-attained}, we have \begin{equation} Jv_{m}(r_m(\vec{\pi}),\vec{\pi})=J_0v_{m}(\vec{\pi})=J_{r_{m}(\vec{\pi})}v_{m}(\vec{\pi}). \end{equation} Let us first assume that $r_m(\vec{\pi})<\infty$. 
Taking $t=r_{m}(\vec{\pi})$ and $w=v_{m}$ in (\ref{eq:J-t-J}) gives \begin{multline*} Jv_{m}(r_m(\vec{\pi}),\vec{\pi})=J_{r_{m}(\vec{\pi})}v_{m}(\vec{\pi}) \\ =Jv_{m}(r_m(\vec{\pi}),\vec{\pi})+\P\left\{\sigma_1>r_{m}(\vec{\pi})\right\} \left[v_{m+1}(\vec{x}(r_m(\vec{\pi}), \vec{\pi}))-h(\vec{x}(r_m(\vec{\pi}),\vec{\pi}))\right]. \end{multline*} Hence, we have $v_{m+1}(\vec{x}(r_m(\vec{\pi}),\vec{\pi}))=h(\vec{x}(r_m(\vec{\pi}),\vec{\pi}))$. If $0<t<r_m(\vec{\pi})$, then \begin{equation}\label{eq:Jvm-ge-J0} Jv_m(t,\vec{\pi})>J_0 v_m(\vec{\pi})=J_{r_m(\vec{\pi})}v_m(\vec{\pi})=J_t v_m(\vec{\pi}). \end{equation} Using (\ref{eq:J-t-J}) one more time, we get \begin{equation} J_0 v_m(\vec{\pi})=J_t v_m(\vec{\pi})=J v_m(t,\vec{\pi})+\P \left\{\sigma_1>t\right\}\left[v_{m+1}(\vec{x}(t,\vec{\pi}))-h(\vec{x}(t,\vec{\pi}))\right]. \end{equation} This equation together with (\ref{eq:Jvm-ge-J0}) implies that $v_{m+1}(\vec{x}(t,\vec{\pi}))<h(\vec{x}(t,\vec{\pi}))$ for $t \in (0,r_m(\vec{\pi}))$. If $r_m(\vec{\pi})=\infty$, then $v_{m+1}(\vec{x}(t,\vec{\pi}))<h(\vec{x}(t,\vec{\pi}))$ for every $t \in (0,\infty)$ by the same argument as in the previous paragraph. The statement of the corollary still holds in this case, since by convention $\inf \emptyset=\infty$. \hfill $\square$ \\ \noindent \textbf{Proof of Proposition~\ref{prop:L-V}.} The proof is based on induction on $m$. For $m=1$, by Lemma~\ref{lem:bremaud} there exists a constant $u \in [0,\infty]$ such that $U_{\varepsilon} \wedge \sigma_1= u\wedge \sigma_1$. 
Then \begin{equation}\label{eq:E-L} \begin{split} &\mathbb{E}^{\vec{\pi}}\left\{L_{U_{\varepsilon}\wedge \sigma_1}\right\}=\mathbb{E}^{\vec{\pi}}\left\{\int_0^{u \wedge \sigma_1}k(\vec{\Pi}_s)ds+V(\vec{\Pi}_{u \wedge \sigma_1})\right\} \\& =\mathbb{E}^{\vec{\pi}}\left\{\int_0^{u \wedge \sigma_1}k(\vec{\Pi}_s)ds+1_{\{u \geq \sigma_1\}}V(\vec{\Pi}_{\sigma_1})+1_{\{u<\sigma_1\}} h(\vec{\Pi}_u)\right\}+ \mathbb{E}^{\vec{\pi}}\left\{1_{\{u<\sigma_1\}}[V(\vec{\Pi}_u)-h(\vec{\Pi}_u)]\right\} \\&=J V(u,\vec{\pi})+\P\left\{u<\sigma_1\right\}[V(\vec{x}(u,\vec{\pi}))-h(\vec{x}(u,\vec{\pi}))]=J_u V(\vec{\pi}), \end{split} \end{equation} where the third equality follows from (\ref{eq:defn-J}) and (\ref{eq:rel-pi-x}). The last equality follows from (\ref{eq:dyn-prog-0.5}). Fix any $t \in [0,u)$. By (\ref{eq:dyn-prog-0.5}) again \begin{equation} \begin{split} &J V(t,\vec{\pi})=J_tV(\vec{\pi})-\P\left\{\sigma_1>t\right\}[V(\vec{x}(t,\vec{\pi}))-h(\vec{x}(t,\vec{\pi}))] \\ &\geq J_0 V(\vec{\pi})-\P\left\{\sigma_1>t\right\}[V(\vec{x}(t,\vec{\pi}))-h(\vec{x}(t,\vec{\pi}))]=J_0 V(\vec{\pi})-\mathbb{E}^{\vec{\pi}}\left\{1_{\{\sigma_1>t\}}[V(\vec{\Pi}_t)-h(\vec{\Pi}_t)]\right\}. \end{split} \end{equation} On the event $\{\sigma_1>t\}$ we have $U_{\varepsilon}>t$ (otherwise, $U_{\varepsilon} \leq t <\sigma_1$ would imply $U_{\varepsilon}=u \leq t$, and this would contradict our initial choice of $t<u$). Thus, $V(\vec{\Pi}_t)-h(\vec{\Pi}_t)<-\varepsilon$ on $\{\sigma_1>t\}$. Hence, \begin{equation} J V(t,\vec{\pi}) \geq J_0 V(\vec{\pi})+\varepsilon \P\left\{\sigma_1>t\right\}> J_{0} V(\vec{\pi}), \quad t \in [0,u). \end{equation} Therefore, $J_0 V(\vec{\pi})=J_u V(\vec{\pi})$ and (\ref{eq:E-L}) implies that \begin{equation} \mathbb{E}^{\vec{\pi}}\left\{L_{U_{\varepsilon}\wedge \sigma_1}\right\}=J_u V(\vec{\pi})=J_0 V (\vec{\pi})=V(\vec{\pi})= L_0, \end{equation} which completes the proof for $m=1$. Assume now that (\ref{eq:V-V}) holds for some $m \geq 1$. 
Note that \begin{equation} \begin{split} &\mathbb{E}^{\vec{\pi}}\left\{L_{U_{\varepsilon}\wedge \sigma_{m+1}}\right\}=\mathbb{E}^{\vec{\pi}}\left\{1_{\{U_{\varepsilon}<\sigma_1\}}L_{U_{\varepsilon}}+1_{\{U_{\varepsilon} \geq \sigma_1 \}}L_{U_{\varepsilon}\wedge \sigma_{m+1}}\right\} =\mathbb{E}^{\vec{\pi}}\left\{1_{\{U_{\varepsilon}<\sigma_1\}}L_{U_{\varepsilon}}\right\} \\&+\mathbb{E}^{\vec{\pi}}\left\{1_{\{U_{\varepsilon} \geq \sigma_1 \}}\left[\int_{\sigma_1}^{U_{\varepsilon}\wedge \sigma_{m+1}}k(\vec{\Pi}_s)ds+V(\vec{\Pi}_{U_{\varepsilon}\wedge \sigma_{m+1}})\right]\right\}+\mathbb{E}^{\vec{\pi}}\left\{1_{\{U_{\varepsilon} \geq \sigma_1 \}} \int_0^{\sigma_1}k(\vec{\Pi}_s)ds\right\}. \end{split} \end{equation} Since $U_{\varepsilon}\wedge \sigma_{m+1}=\sigma_1+[(U_{\varepsilon} \wedge \sigma_m)\circ \theta_{\sigma_1}]$ on the event $\{U_{\varepsilon} \geq \sigma_1 \}$, the strong Markov property of $\vec{\Pi}$ implies that \begin{multline} \mathbb{E}^{\vec{\pi}}\left\{L_{U_{\varepsilon}\wedge \sigma_{m+1}}\right\}=\mathbb{E}^{\vec{\pi}}\left\{1_{\{U_{\varepsilon}<\sigma_1\}}L_{U_{\varepsilon}}\right\} \\+ \mathbb{E}^{\vec{\pi}}\left\{1_{\{U_{\varepsilon} \geq \sigma_1 \}}\mathbb{E}^{\vec{\Pi}_{\sigma_1}}\left\{\int_0^{U_{\varepsilon}\wedge \sigma_m}k(\vec{\Pi}_s)ds+ V(\vec{\Pi}_{U_{\varepsilon}\wedge \sigma_m})\right\}\right\} +\mathbb{E}^{\vec{\pi}}\left\{1_{\{U_{\varepsilon} \geq \sigma_1 \}} \int_0^{\sigma_1}k(\vec{\Pi}_s)ds\right\}. 
\end{multline} By the induction hypothesis we can replace the inner expectation with $V(\vec{\Pi}_{\sigma_1})$ and obtain \begin{equation} \begin{split} \mathbb{E}^{\vec{\pi}}\left\{L_{U_{\varepsilon}\wedge \sigma_{m+1}}\right\}&=\mathbb{E}^{\vec{\pi}}\left\{1_{\{U_{\varepsilon}<\sigma_1\}}L_{U_{\varepsilon}}+1_{\{U_{\varepsilon} \geq \sigma_1 \}}\left[\int_{0}^{\sigma_1}k(\vec{\Pi}_s)ds+V(\vec{\Pi}_{\sigma_1})\right]\right\} \\&=\mathbb{E}^{\vec{\pi}}\left\{1_{\{U_{\varepsilon}<\sigma_1\}}L_{U_{\varepsilon}}+1_{\{U_{\varepsilon} \geq \sigma_1 \}} L_{\sigma_1} \right\} =\mathbb{E}^{\vec{\pi}}\left\{L_{U_{\varepsilon}\wedge \sigma_1}\right\}=L_0, \end{split} \end{equation} where the last equality follows from the case $m=1$ proved above. This completes the proof of the statement. \hfill $\square$ \\ \bibliographystyle{dcu}
\section{Introduction} We consider integrable modules $L^{\mu }$ with the highest weight $\mu $ for an af\/f\/ine Lie algebra $\frak{g}$ and are especially interested in the properties of the string functions related to $L^{\mu }$. String functions and branching coef\/f\/icients of the af\/f\/ine Lie algebras arise in the computation of the local state proba\-bi\-lities for solvable models on a square lattice~\cite{DJKMO}. Irreducible highest weight modules with dominant integral weights appear also in applications of the quantum inverse scattering method~\cite{LD} where solvable spin chains are studied in the framework of the AdS/CFT correspondence conjecture of the super-string theory (see~\cite{KAZA,BE} and references therein). There are dif\/ferent ways to deal with string functions. One can use the BGG resolution~\cite{BGG} (for Kac--Moody algebras the algorithm is described in \cite{Kac,Wak1}), the Schur function series \cite{FauKing}, the BRST cohomology \cite{Hwang}, Kac--Peterson formulas \cite{Kac} or the combinatorial methods applied in \cite{FeigJimbo}. Here we want to develop a new description for string functions by applying the recursive formulas for weight multiplicities and branching coef\/f\/icients obtained in \cite{IKL-1}. It was proved in \cite{Kac} that for a simply laced or twisted af\/f\/ine Lie algebra and an integrable mo\-du\-le~$L^{\mu }$ with the highest weight $\mu $ of level $1$ the string function is unique: \begin{equation*} \sigma \big( e^{-\delta }\big) :=\prod_{n=1}^{\infty }\frac{1}{(1-e^{-n\delta })^{\mathrm{mult}(n\delta )}}, 
\end{equation*} so that the corresponding formal character $\mathrm{ch}\left( L^{\mu }\right) $ can be easily written down provided the set $\max (\mu )$ of maximal weights for $L^{\mu }$ is known: \begin{equation} \mathrm{ch}\left( L^{\mu }\right) =\sigma \big( e^{-\delta }\big) \sum_{\alpha \in M}e^{\mu +\alpha -\big( \frac{\left| \alpha \right| ^{2}}{2}+\left( \mu |\alpha \right) \big) \delta } \label{kac-character-l-1} \end{equation} with \[ M:=\left\{ \begin{array}{c} \sum\limits_{i=1}^{r}\mathbf{Z}\alpha _{i}^{\vee }\text{ }\text{for untwisted algebras or }A_{2r}^{\left( 2\right) } \\ \sum\limits_{i=1}^{r}\mathbf{Z}\alpha _{i}\text{ }\text{for }A_{r}^{\left( u\geq 2\right) }\text{ and }A\neq A_{2r}^{\left( 2\right) } \end{array} \right\} \] (see also Corollary 2.1.6 in \cite{Wak2}). Comparing this expression with the Weyl--Kac formula \[ \mathrm{ch}\left( L^{\mu }\right) =\frac{1}{R}\sum_{w\in W}\epsilon (w)e^{w\circ (\mu +\rho )-\rho }, \] where the character can be treated as generated by the denominator $\frac{1}{R}$ acting on the set of singular vectors $\Psi ^{\left( \mu \right) }=\sum\limits_{w\in W}\epsilon (w)e^{w\circ (\mu +\rho )-\rho }$ of the module $L^{\mu }$, we see that in the relation (\ref{kac-character-l-1}) both factors on the right-hand side are simplif\/ied: singular weights are substituted by the maximal ones and instead of the factor $\frac{1}{R}$ the string function $\sigma \big( e^{-\delta }\big) $ is applied. In this paper we shall demonstrate that similar transformations can be def\/ined when the level~$k\left( \mu \right) $ is arbitrary. To f\/ind these transformations we use the recursion properties of branching coef\/f\/icients $k_{\xi }^{\left( \mu \right) }$ for the reduced module $L_{\frak{g}\downarrow \frak{a}}^{\mu }$ where the subalgebra $\frak{a}$ has the same rank as $\frak{g}$: $r(\frak{a})=r(\frak{g})$. 
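For an untwisted af\/f\/ine algebra of rank $r$ one has $\mathrm{mult}(n\delta )=r$, so the level-$1$ string function above is the power series $\prod_{n\geq 1}(1-q^{n})^{-r}$ in $q=e^{-\delta }$. Its coefficients are easy to generate; the following sketch (illustrative only, with the rank $r$ as a parameter and the function name our own choice) computes them:

```python
def string_function_coeffs(r, n_max):
    """Coefficients of prod_{n>=1} (1-q^n)^(-r) up to order q^n_max."""
    coeffs = [0] * (n_max + 1)
    coeffs[0] = 1
    for n in range(1, n_max + 1):        # multiply in the factor (1-q^n)^(-r)
        for _ in range(r):               # one geometric series 1/(1-q^n) at a time
            for d in range(n, n_max + 1):
                coeffs[d] += coeffs[d - n]
    return coeffs

# For r = 1 the coefficients coincide with the partition numbers p(n):
assert string_function_coeffs(1, 7) == [1, 1, 2, 3, 5, 7, 11, 15]
```

For $r=1$ the coefficients coincide with the classical partition numbers $p(n)$.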
These properties are formulated in \cite{IKL-1} in terms of relations \begin{equation*} k_{\xi }^{\left( \mu \right) }=\sum_{\gamma \in \Gamma _{\frak{a}\subset \frak{g}}}s\left( \gamma \right) k_{\xi +\gamma }^{\left( \mu \right) }+\sum_{w\in W}\epsilon \left( w\right) \delta _{\xi ,\pi _{\frak{a}}\circ \left( w\circ\left( \mu +\rho \right) -\rho \right) }, \end{equation*} where $\pi _{\frak{a}}$\ is the projection to the weight space of $\frak{a}$ and $\Gamma _{\frak{a}\subset \frak{g}}$ is the fan of the injection $\frak{a}\longrightarrow \frak{g}$, that is the set of vectors def\/ined by the relation \[ 1-\prod_{\alpha \in \left( \pi _{\frak{a}}\circ \Delta ^{+}\right) }\left( 1-e^{-\alpha }\right) ^{\mathrm{{mult}\left( \alpha \right) -{mult}}_{\frak{a}}\mathrm{\left( \alpha \right) }}= \sum_{\gamma \in \Gamma _{\frak{a}\subset \frak{g}}}s\left( \gamma \right) e^{-\gamma } \] (with $s\left( \gamma \right)\neq 0$). In particular, when $\frak{a}$ is a Cartan subalgebra $\frak{h}$ of $\frak{g}$ the coef\/f\/icients $k_{\xi }^{\left( \mu \right) }$ are just the multiplicities of the weights of $L^{\mu }$ and the corresponding fan $\Gamma _{\frak{h}\subset \frak{g}}$ coincides with $\widehat{\Psi ^{\left( 0 \right) }}$~-- the set of singular weights $\psi \in P$ for the module $L^{0 }$. In Section~\ref{section3} we demonstrate that this set can be ``folded'' $\widehat{\Psi ^{\left( 0 \right) }}\longrightarrow F\Psi $\ so that the new shifts (the vectors of the folded fan) $f\psi \in F\Psi $ connect only the weights in the closure of the fundamental Weyl chamber while the recursive property survives in a new form. Thus the recursive relations are obtained for the coef\/f\/icients of the string functions for the modules~$L^{\xi _{j}}$ whose highest weights $\xi _{j}$ belong to the same congruence class $\Xi _{k;v}$. 
When these relations are applied simultaneously to the set of string functions located in the main Weyl chamber (Section~\ref{section4}), this results in a system of linear equations for the string function coef\/f\/icients (collected in the vectors $\mathbf{m}_{\left( s, u \right) }^{\left( \mu \right)}$). This system can be written in the compact form $\mathbf{M}_{\left( u\right) }^{ \Xi _{v} } \mathbf{m}_{\left( u\right) }^{ \mu }= \mathbf{\delta }_{\left( u\right) }^{\mu }$ where the ope\-ra\-tor~$\mathbf{M}_{\left( u\right) }^{\Xi _{v} } $ is a matrix whose elements are composed of the multiplicities of weights in the folded fans $F\Psi $. The system is solvable and the solution~-- the vector $\mathbf{m}_{\left( u\right) }^{ \mu }$~-- def\/ines the string functions for $L^{\mu }$\ up to an arbitrary minimal grade $u$. In Section~\ref{section5} some examples are presented where the string functions for modules of $\frak{g}=A_{2}^{\left( 1\right) }$ are explicitly constructed. The set of folded fans provides a compact and ef\/fective method to construct the string functions. \section[Basic definitions and relations]{Basic def\/initions and relations}\label{section2} Consider the af\/f\/ine Lie algebra $\frak{g}$ with the underlying f\/inite-dimensional subalgebra $\overset{\circ }{\frak{g}}$. 
The following notation will be used: $L^{\mu }$ -- the integrable module of $\frak{g}$ with the highest weight $\mu $; $r$ -- the rank of the algebra $\frak{g}$; $\Delta $ -- the root system; $\Delta ^{+}$ -- the positive root system for $% \frak{g}$; $\mathrm{mult}\left( \alpha \right) $ -- the multiplicity of the root $% \alpha $ in $\Delta $; $\overset{\circ }{\Delta }$ -- the f\/inite root system of the subalgebra $% \overset{\circ }{\frak{g}}$; $\mathcal{N}^{\mu }$ -- the weight diagram of $L^{\mu }$; $W$ -- the corresponding Weyl group; $C^{\left( 0\right) }$ -- the fundamental Weyl chamber; $\rho $ -- the Weyl vector; $\epsilon \left( w\right) :=\det \left( w\right) $, $w \in W$; $\alpha _{i}$ -- the $i$-th simple root for $\frak{g}$, $i=0,\ldots ,r$; $\delta $ -- the imaginary root of $\frak{g}$; $\alpha _{i}^{\vee }$ -- the simple coroot for $\frak{g}$, $i=0,\ldots ,r$; $\overset{\circ }{\xi }$ -- the f\/inite (classical) part of the weight $\xi \in P$; $\lambda =\big( \overset{\circ }{\lambda };k;n\big) $ -- the decomposition of an af\/f\/ine weight indicating the f\/inite part $\overset{\circ }{\lambda }$, level $k$ and grade $n$; $\overline{C_{k}^{\left( 0\right) }}$ -- the intersection of the closure of the fundamental Weyl chamber $C^{\left( 0\right) }$\ with the plane with f\/ixed level $k=\mathrm{const}$; $P$ -- the weight lattice; $Q$ -- the root lattice; $M:=\left\{ \begin{array}{c} \sum\limits_{i=1}^{r}\mathbf{Z}\alpha _{i}^{\vee }\text{ for untwisted algebras or }% A_{2r}^{\left( 2\right) }, \\ \sum\limits_{i=1}^{r}\mathbf{Z}\alpha _{i}\text{ for }A_{r}^{\left( u\geq 2\right) }% \text{ and }A\neq A_{2r}^{\left( 2\right) }, \end{array} \right\} $; $\mathcal{E}$ -- the group algebra of the group $P$; $\Theta _{\lambda }:=e^{-\frac{\left| \lambda \right| ^{2}}{2k}\delta }\sum\limits_{\alpha \in M}e^{t_{\alpha }\circ \lambda }$ -- the classical theta-function; $A_{\lambda }:=\sum\limits_{s\in \overset{\circ }{W}}\epsilon (s)\Theta _{s\circ \lambda 
}$; $\Psi ^{\left( \mu \right) }:=e^{\frac{\left| \mu +\rho \right| ^{2}}{2k}\delta \ -\ \rho }A_{\mu +\rho }=e^{\frac{\left| \mu +\rho \right| ^{2}}{2k}\delta \ -\ \rho }\sum\limits_{s\in \overset{\circ }{W}}\epsilon (s)\Theta _{s\circ \left( \mu +\rho \right) }=\sum\limits_{w\in W}\epsilon (w)e^{w\circ(\mu +\rho )-\rho }$ -- the singular weight element for the $\frak{g}$-module $L^{\mu }$; $\widehat{\Psi ^{\left( \mu \right) }}$ -- the set of singular weights $\psi \in P$ for the module $L^{\mu }$ with the coordinates \newline {}\hspace*{10mm} $\big( \overset{\circ }{\psi },k,n,\epsilon \left( w\left( \psi \right) \right) \big) \mid _{\psi =w\left( \psi \right) \circ (\mu +\rho )-\rho }$ (this set is similar to $P_{\mathrm{nice}}^{\prime }\left( \mu \right) $ in \cite{Wak1}); $m_{\xi }^{\left( \mu \right) }$ -- the multiplicity of the weight $\xi \in P $ in the module $L^{\mu }$; $\mathrm{ch}\left( L^{\mu }\right) $ -- the formal character of $L^{\mu }$; $\mathrm{ch}\left( L^{\mu }\right) =\frac{\sum\limits_{w\in W}\epsilon (w)e^{w\circ (\mu +\rho )-\rho }}{\prod\limits_{\alpha \in \Delta ^{+}}\left( 1-e^{-\alpha }\right) ^{\mathrm{{mult}\left( \alpha \right) }}}=\frac{\Psi ^{\left( \mu \right) }}{\Psi ^{\left( 0\right) }}$ -- the Weyl--Kac formula; $R:=\prod\limits_{\alpha \in \Delta ^{+}}\left( 1-e^{-\alpha }\right) ^{\mathrm{{mult}\left( \alpha \right) }}=\Psi ^{\left( 0\right) }$ -- the denominator; $\max (\mu )$ -- the set of maximal weights of $L^{\mu }$; $\sigma _{\xi }^{\mu }\left( q\right) =\sum\limits_{n=0}^{\infty }m_{\left( \xi -n\delta \right) }^{\left( \mu \right) }q^{n}$ -- the string function through the maximal weight $\xi $. 
\section{Folding a fan}\label{section3} The generalized Racah formula for weight multiplicities $m_{\xi }^{\left( \mu \right) }$ (with $\xi \in P$) in integrable highest weight modules $L^{\mu }\left( \frak{g}\right) $ (see \cite{Ful} for a f\/inite-dimensional variant), \begin{equation} m_{\xi }^{\left( \mu \right) }=-\sum_{w\in W\setminus e}\epsilon (w)m_{\xi -\left( w\circ \rho -\rho \right) }^{\left( \mu \right) }+\sum_{w\in W}\epsilon (w)\delta _{\left( w\circ(\mu +\rho )-\rho \right) ,\xi }, \label{gen-mult-form} \end{equation} can be obtained as a special case of the branching algorithm for af\/f\/ine Lie algebras developed in \cite{IKL-1} (see also~\cite{LDu}). To apply formula (\ref{gen-mult-form}) we must determine two sets of singular weights: $\widehat{\Psi ^{\left( \mu \right) }}$ for the module~$L^{\mu }$ and $\widehat{\Psi ^{\left( 0\right) }}$ for $L^{0}$. (As indicated in the Introduction, the set $\widehat{\Psi ^{\left( 0\right) }}$ coincides with the fan $\Gamma _{\frak{h}\subset \frak{g}}$ of the injection $\frak{h}\longrightarrow \frak{g}$ of the Cartan subalgebra $\frak{h}$ in the Lie algebra~$\frak{g}$.) Our main idea is to contract the set $\widehat{\Psi ^{\left( 0\right) }}$ (the fan $\Gamma _{\frak{h}\subset \frak{g}}$) into the closure $\overline{C^{\left( 0\right) }}$ of the fundamental Weyl chamber $C^{\left( 0\right) }$. We shall use the set $\max (\mu )$ of maximal weights of~$L^{\mu }\left( \frak{g}\right) $ instead of~$\widehat{\Psi ^{\left( \mu \right) }}$. As a result we shall be able to solve the relations based on the recurrence properties of weight multiplicities, to obtain explicit expressions for the string functions $\sigma _{\xi \in \max (\mu )}^{\mu }$ and thus to describe the module~$L^{\mu }$. Consider the module $L^{\mu}\left( \frak{g}\right) $ of level $k$: $\mu =\big( \overset{\circ }{\mu };k;0\big) $. 
Let $\overline{C_{k;0}^{\left( 0\right) }}$ be the intersection of $\overline{C_{k}^{\left( 0\right) }}$ with the plane $\delta =0$, that is the ``classical'' part of the closure of the af\/f\/ine Weyl chamber at level $k$. To each $\xi \in P $ attribute a representative $w_{\xi }\in W$ of the class of transformations \[ w_{\xi }\in W/W_{\xi },\qquad W_{\xi }:=\left\{ w\in W\,|\,w\circ \xi =\xi \right\} , \] bringing the weight $\xi $ into the chamber $\overline{C_{k}^{\left( 0\right) }}$ \[ \bigg\{ w_{\xi }\circ \xi \in \overline{C_{k}^{\left( 0\right) }}\mid\xi \in P,w_{\xi }\in W/W_{\xi }\bigg\} . \] Fix such representatives for each shifted vector $\phi \left( \xi ,w\right) =\xi -\left( w\circ \rho -\rho \right) $. The set \[ \bigg\{ w_{\phi \left( \xi ,w\right) } \mid w_{\phi \left( \xi ,w\right) }\circ \phi \left( \xi ,w\right) \in \overline{C_{k}^{\left( 0\right) }} \bigg\}, \] is in one-to-one correspondence with the set $\left\{ \phi \left( \xi ,w\right) \right\} $ of shifted weights. The recursion relation~(\ref{gen-mult-form}) can be written as \begin{eqnarray*} m_{\xi }^{\left( \mu \right) } &=&-\sum_{w\in W\setminus e}\epsilon (w)m_{\phi \left( \xi ,w\right) }^{\left( \mu \right) }+\sum_{w\in W}\epsilon (w)\delta _{\left( w\circ(\mu +\rho )-\rho \right) ,\xi } \\ &=&-\sum_{w\in W\setminus e}\epsilon (w)m_{w_{\phi \left( \xi ,w\right) }\circ \phi \left( \xi ,w\right) }^{\left( \mu \right) }+\sum_{w\in W}\epsilon (w)\delta _{\left( w\circ (\mu +\rho )-\rho \right) ,\xi }. \end{eqnarray*} Consider the restriction to $\overline{C_{k}^{\left( 0\right) }}$: \begin{equation} m_{\xi }^{\left( \mu \right) }\Big| _{\xi \in \overline{C_{k}^{\left( 0\right) }}}=-\sum_{w\in W\setminus e}\epsilon (w)m_{w_{\phi \left( \xi ,w\right) }\circ \phi \left( \xi ,w\right) }^{\left( \mu \right) }+\delta _{\mu, \xi }. \label{intermed-rel} \end{equation} In the r.h.s. 
the function $m_{\xi ^{\prime }}^{\left( \mu \right) }$ has an argument $\xi ^{\prime }=w_{\phi \left( \xi ,w\right) }\circ \phi \left( \xi ,w\right) \in \overline{C_{k}^{\left( 0\right) }}$: \[ m_{\xi ^{\prime }}^{\left( \mu \right) }=m_{w_{\phi \left( \xi ,w\right) }\circ \phi \left( \xi ,w\right) }^{\left( \mu \right) }=m_{\xi +\left( w_{\phi \left( \xi ,w\right) }\circ \phi \left( \xi ,w\right) -\xi \right) }^{\left( \mu \right) }. \] Thus the new (``folded'') shifts are introduced: \begin{gather*} f\psi \left( \xi ,w\right) := \left( \xi ^{\prime }-\xi \right) _{\xi ^{\prime }\neq \xi }=w_{\phi \left( \xi ,w\right) }\circ \left( \xi -\left( w\circ \rho -\rho \right) \right) _{w\neq e}-\xi , \qquad \xi ,\xi^{\prime } \in \overline{C_{k}^{\left( 0\right) }}, \qquad \xi^{\prime } \neq \xi . \end{gather*} When the sum over $W\setminus e$ in the expression (\ref{intermed-rel}) is performed the shifted weight $\xi ^{\prime }$ acquires the (f\/inite) multiplicity $\widehat{\eta }\left( \xi ,\xi^{\prime }\right) $: \begin{equation} \widehat{\eta }\left( \xi ,\xi ^{\prime }\right) =-\sum_{w\in W\setminus e} \epsilon (w), \label{mult-in-ffan} \end{equation} (the sum is over all the elements $w\in W\setminus e $ satisfying the relation $w_{\phi \left( \xi ,w\right) }\circ \left( \xi -\left( w\circ \rho -\rho \right) \right) =\xi ^{\prime }$) such that \begin{equation} m_{\xi }^{\left( \mu \right) }\Big|_{\xi \in \overline{C_{k}^{\left( 0\right) }}}= \sum_{\xi ^{\prime }\in \overline{C_{k}^{\left( 0\right) }},\xi ^{\prime }\neq \xi }\widehat{\eta }\left( \xi ,\xi ^{\prime }\right) m_{\xi ^{\prime }}^{\left( \mu \right) }+\delta _{\xi ,\mu }. \label{intermediate-2} \end{equation} The main property of the multiplicities $\widehat{\eta }\left( \xi ,\xi ^{\prime }\right) $ is that they do not depend directly on $n_{\xi}$. 
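For orientation, the recursion (\ref{gen-mult-form}) can be checked directly in the finite-dimensional $\frak{sl}_{2}$ case, where $W=\{e,s\}$, $\rho =1$ and $s$ acts by negation, so that it collapses to $m_{\xi }^{\left( \mu \right) }=m_{\xi +2}^{\left( \mu \right) }+\delta _{\xi ,\mu }-\delta _{\xi ,-\mu -2}$. A small computational sketch of this special case (illustrative only; the function name and the downward sweep over $\xi $ are our own choices):

```python
# Racah-type recursion in the finite-dimensional sl2 case:
#     m_xi = m_{xi+2} + delta(xi, mu) - delta(xi, -mu-2),
# solved downward from above the highest weight, where m vanishes.
def sl2_multiplicities(mu):
    """Weight multiplicities of the irreducible sl2-module with highest weight mu >= 0."""
    m = {}
    for xi in range(mu, -mu - 3, -1):
        m[xi] = m.get(xi + 2, 0) + (1 if xi == mu else 0) - (1 if xi == -mu - 2 else 0)
    return m

mult = sl2_multiplicities(4)
# known answer: multiplicity 1 on mu, mu-2, ..., -mu and 0 elsewhere
assert [mult[xi] for xi in range(4, -5, -2)] == [1, 1, 1, 1, 1]
assert mult[-6] == 0
```

The recursion reproduces the familiar string of multiplicity-one weights $\mu ,\mu -2,\dots ,-\mu $, with the two Kronecker terms switching the recursion on at $\xi =\mu $ and off below $\xi =-\mu $.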
\begin{lemma}\label{lemma1} Let $\psi =\rho -w\circ \rho $; $\phi \left( \xi ,w\right) = \xi + \psi$; $\xi ^{\prime }:=w_{\phi \left( \xi ,w\right) }\circ \phi \left( \xi ,w\right)$; $\xi ,\xi ^{\prime }\in \overline{C_{k}^{\left( 0\right) }}$. Then the corresponding folded shifts $f\psi \left( \xi ,w\right) = \xi^{\prime} -\xi$ and multiplicities $\widehat{\eta }\left( \xi ,\xi ^{\prime }\right) $ depend only on $k$, $\overset{\circ }{\xi }$, and~$w $. \end{lemma} \begin{proof} Since imaginary roots are $W$-stable, we have: $w_{\phi \left( \xi ,w\right) }\circ \left( \xi +\widetilde{n}\delta \right) =w_{\phi \left( \xi ,w\right) }\circ \xi +\widetilde{n}\delta $. Thus for both $\xi $ and $\widetilde{\xi }=\xi +\widetilde{n}\delta $ the representatives of the classes bringing $\phi \left( \xi ,w\right) $ and $\phi \big( \widetilde{\xi },w\big) $ to the fundamental chamber $\overline{C_{k}^{\left( 0\right) }}$ can be taken equal: $w_{\phi \left( \xi ,w\right) }=w_{\phi \left( \widetilde{\xi },w\right) }\;\mathrm{mod}\;W_{\xi }$. In the shift $f\psi \left( \xi ,w\right) $ decompose the element $w_{\phi \left( \xi ,w\right) }=t_{\phi \left( \xi ,w\right) }\cdot s_{\phi \left( \xi ,w\right) }$ into the product of the classical ref\/lection $s_{\phi \left( \xi ,w\right) }$\ and the translation $t_{\phi \left( \xi ,w\right) }$. Denote by $\theta _{\phi \left( \xi ,w\right) }^{\vee }$ the argument (belonging to $M$) of the translation $t_{\phi \left( \xi ,w\right) }$. 
The direct computation demonstrates that the weight $f\psi \left( \xi ,w\right) $ does not depend on $n_{\xi }$: \begin{gather*} f\psi \left( \xi ,w\right) = \left( \begin{array}{c} s_{\phi \left( \xi ,w\right) }\circ \big( \overset{\circ }{\xi }+\overset{% \circ }{\psi }\big) -\overset{\circ }{\xi }+k\overset{\circ }{\theta }% _{\phi \left( \xi ,w\right) }^{\vee },0, \vspace{1mm}\\ n_{-w\circ \rho }-\frac{k}{2}\big| \theta _{\phi \left( \xi ,w\right) }^{\vee }\big| ^{2}-\big( s_{\phi \left( \xi ,w\right) }\circ \big( \overset{\circ }{\xi }+\overset{\circ }{\psi }\big) ,\theta _{\phi \left( \xi ,w\right) }^{\vee }\big) \end{array} \right) . \end{gather*} Thus the shift $f\psi \left( \xi ,w\right) $ can be considered as depending on $k$, $\overset{\circ }{\xi }$ and $w$: $f\psi =f\psi \big( \overset{% \circ }{\xi },k,w\big) $. The multiplicity $\widehat{\eta }\left( \xi ,\xi ^{\prime }\right) $ (see (\ref{mult-in-ffan})) depends only on the set of ref\/lections $w\in W$ connecting $\xi $ and $\xi ^{\prime }\neq \xi $ and does not depend on $n_{\xi }$ neither: $\widehat{\eta }\left( \xi ,\xi ^{\prime }\right) =\widehat{\eta }\big( \overset{\circ }{\xi },k,\xi ^{\prime }\big) $. \end{proof} Thus we have constructed the set of (nonzero) shifts $f\psi \big( \overset{% \circ }{\xi },k,w\big) $ with the multiplicities $\widehat{% \eta }\big( \overset{\circ }{\xi },k,\xi +f\psi \big( \overset{\circ }{\xi },k,w\big) \big) $ and obtained the possibility to formulate the recursion properties entirely def\/ined in the closure $\overline{% C_{k}^{\left( 0\right) }}$ of the fundamental Weyl chamber. 
Let us return to the relation (\ref{intermediate-2}), \begin{gather*} m_{\xi }^{\left( \mu \right) }\Big|_{\xi \in \overline{C_{k}^{\left( 0\right) }}% }= \sum_{\xi ^{\prime }\in \overline{C_{k}^{\left( 0\right) }},\, \xi ^{\prime }\neq \xi }\widehat{\eta }\big( \overset{\circ }{\xi },k,\xi ^{\prime }\big) m_{\xi +f\psi ( \overset{\circ }{\xi },k,w) }^{\left( \mu \right) }+\delta _{\xi ,\mu } \notag \\ \hphantom{m_{\xi }^{\left( \mu \right) }|_{\xi \in \overline{C_{k}^{\left( 0\right) }}}}{} =\sum_{f\psi ( \overset{\circ }{\xi },k,w ) \neq 0} \widehat{\eta }\big( \overset{\circ }{\xi }% ,k,\xi +f\psi \big( \overset{\circ }{\xi },k,w\big) \big) m_{\xi +f\psi ( \overset{\circ }{\xi },k,w ) }^{\left( \mu \right) }+\delta _{\xi ,\mu }. \end{gather*} For simplicity from now on we shall omit some arguments and write down the shifts as $f\psi \big( \overset{\circ }{\xi }\big) $ and their multiplicities as $\widehat{\eta }\big( \overset{\circ }{\xi },\xi ^{\prime }\big) $ (keeping in mind that we are at the level $k$ and the weight $\xi ^{\prime }$ depends on the initial ref\/lection $w$). The set of vectors: \begin{gather*} \widetilde{F\Psi }\big( \overset{\circ }{\xi }\big) := \left\{\xi^{\prime}- \xi =f\psi \big( \overset{\circ }{\xi }\big) = \big( \overset{\circ }{f\psi }\big( \overset{\circ }{\xi }\big) ;0;n_{f\psi \big( \overset{\circ }{\xi }\big) }\big) \big| \xi^{\prime} - \xi \neq 0 \right\} , \\ \xi ^{\prime } = w_{\phi \left( \xi ,w\right) }\circ \phi \left( \xi ,w\right),\qquad \xi ,\xi ^{\prime } \in \overline{C_{k}^{\left( 0\right) }}, \end{gather*} plays here the role similar to that of the set $\left\{\Psi ^{\left( 0\right) }\setminus 0\right\}$ of nontrivial singular weights for $L^{0}$ in the relation (\ref{gen-mult-form}) and is called the \emph{folded fan} for $% \overset{\circ }{\xi }$. (The initial (unfolded) fan $\Gamma _{\frak{h}% \subset \frak{g}}$ corresponds here to the injection of the Cartan subalgebra.) 
Thus we have proved the following property: \begin{proposition}\label{proposition1} Let $L^{\mu }$ be the integrable highest weight module of $\frak{g}$, $\mu =\big( \overset{\circ }{\mu };k;0\big) $, $\xi =\big( \overset{\circ }{% \xi };k;n_{\xi }\big) \in \mathcal{N}^{\mu }$, $\xi \in \overline{% C_{k}^{\left( 0\right) }}$\ and let $\widetilde{F\Psi }\big( \overset{\circ }{\xi }\big) $ be the folded fan for $\overset{\circ }{\xi }$ then the multiplicity of the weight $\xi $ is subject to the recursion relation \begin{equation} m_{\xi }^{\left( \mu \right) }\Big|_{{\xi \in \overline{C_{k}^{\left( 0\right) }}}}=\sum_{f\psi ( \overset{\circ }{\xi } ) \in \widetilde{F\Psi } ( \overset{\circ }{\xi } ) }\widehat{\eta }% \big( \overset{\circ }{\xi },\xi +f\psi \big( \overset{\circ }{\xi }% \big) \big) m_{\xi +f\psi ( \overset{\circ }{\xi } ) }^{\left( \mu \right) }+\delta _{\xi ,\mu }. \label{recursion-prop-weights} \end{equation} \end{proposition} \section{Folded fans and string functions}\label{section4} For the highest weight module $L^{\mu }\left( \frak{g}\right) $ with $\mu =\big( \overset{\circ }{\mu };k;0\big) $ of level $k$ consider the set of maximal vectors belonging to $\overline{C_{k}^{\left( 0\right) }}$ \[ {\cal Z} _{k}^{\mu}:=\bigg\{\zeta \in \max (\mu )\cap \overline{% C_{k}^{\left( 0\right) }} \bigg\}. \] Let $\pi $ be a projection to the subset of $P$ with level $k$ and grade $% n=0$ and introduce the set: \begin{equation*} \Xi _{k}^{\mu }:=\big\{ \xi =\pi \circ \zeta \mid \zeta \in {\cal Z} _{k}^{\mu} \big\}. \end{equation*} The cardinality \[ p_{\max }^{\left( \mu \right) }:=\#\left( \Xi _{k}^{\mu } \right) \] is f\/inite and we can enumerate the corresponding weights $\xi _{j}$: \[ \Xi _{k}^{\mu } = \left\{ \xi _{j}\mid j=1,\ldots ,p_{\max }^{\left( \mu \right) }\right\}. 
\] The string functions necessary and suf\/f\/icient to construct the diagram $\mathcal{N}% ^{\mu }$\ (and correspondingly the character $\mathrm{ch}\left( L^{\mu }\right) $) are \begin{gather*} \left\{ \sigma _{\zeta }^{\mu, k }\, |\, \zeta \in {\cal Z} _{k}^{\mu} \right\} , \qquad \mathrm{ch}\left( L^{\mu }\right) =\sum_{\xi \in \max (\mu )}\sigma _{\xi }^{\mu }\big( e^{-\delta }\big) e^{\xi }=\sum_{w \in W/ W_{\zeta}, \, \zeta \in {\cal Z} _{k}^{\mu}} \sigma_{ \zeta }^{\mu, k }\big( e^{-\delta }\big) e^{w \circ \zeta }. \end{gather*} Let us consider these string functions as starting from the points $\xi _{j}$ rather than from $\zeta$'s. (For $\zeta = \xi_s -l\delta \in {\cal Z} _{k}^{\mu}$ the expansion $\sigma _{\xi _{s}}^{\mu }\left( q\right) =\sum\limits_{n=0}^{\infty }m_{\left( \xi _{s}-n\delta \right) }^{\left( \mu \right) }q^{n}$ starts with string coef\/f\/icients $m_{\left( \xi _{s}-n\delta \right)}|_{n< l}=0$.) Denote these extended string functions by $\sigma _{j}^{\mu ,k}$ and introduce the set \begin{equation*} \Sigma _{k}^{\mu }:=\left\{ \sigma _{j}^{\mu ,k}\mid \xi _{j}\in \Xi _{k}^{\mu }\right\} . 
\end{equation*} Let us apply the relation (\ref{recursion-prop-weights}) to the weights of the string $\sigma _{j}^{\mu ,k}\in \Sigma _{k}^{\mu }$ and put $\xi =\xi _{j}+n_{j}\delta $, \begin{gather*} m_{ ( \overset{\circ }{\xi _{j}};k;n_{j} ) }^{\left( \mu \right) }= \sum_{f\psi ( \overset{\circ }{\xi _{j}} ) \in \widetilde{F\Psi }% ( \overset{\circ }{\xi _{j}} ) }\widehat{\eta }\big( \overset{% \circ }{\xi _{j}},\big( \overset{\circ }{\xi _{j}}+\overset{\circ }{f\psi }% \big( \overset{\circ }{\xi_{j} }\big) ;k;n_{j}+n_{f\psi ( \overset{% \circ }{\xi _{j}} ) }\big) \big) \\ \hphantom{m_{ ( \overset{\circ }{\xi _{j}};k;n_{j} ) }^{\left( \mu \right) }=}{} \times m_{\big( ( \overset{\circ }{\xi }_{j}+\overset{\circ }{f\psi }% ( \overset{\circ }{\xi_{j} } ) ) ;k; ( n_{j}+n_{f\psi ( \overset{\circ }{\xi_{j} } ) }) \big) }^{\left( \mu \right) }+\delta _{\xi _{j},\mu }. \end{gather*} In the folded fan $\widetilde{F\Psi }\big( \overset{\circ }{\xi _{j}}% \big) $ let us separate the summation over the grades $n_{f\psi }$ and the classical parts $\overset{\circ }{f\psi }$ of the shifts $f\psi \big( \overset{\circ }{\xi _{j}}\big) $. The overcrossing terms vanish because their multiplicities are zero. The f\/irst term in the r.h.s.\ of the recursion relation takes the form \begin{gather*} \sum_{n_{f\psi ( \overset{\circ }{\xi _{j}}) }}\sum _{\substack{ \overset{\circ }{f\psi } ( \overset{\circ }{\xi_{j} }% ) ; \\ f\psi ( \overset{\circ }{\xi _{j}} ) \in \widetilde{% F\Psi } ( \overset{\circ }{\xi _{j}} ) }}\widehat{\eta }\big( \overset{\circ }{\xi _{j}},\big( \overset{\circ }{\xi _{j}}+\overset{\circ }{f\psi }\big( \overset{\circ }{\xi_{j} }\big) ;k;n_{j}+n_{f\psi ( \overset{\circ }{\xi _{j}} ) }\big) \big) m_{\big( ( \overset{\circ }{\xi }_{j}+\overset{\circ }{% f\psi } ( \overset{\circ }{\xi_{j} } ) ) ;k; ( n_{j}+n_{f\psi ( \overset{\circ }{\xi_{j} } ) } ) \big) }^{\left( \mu \right) }.
\end{gather*} For the same reason we can spread the f\/irst summation over all the positive grades. It is suf\/f\/icient to include the vector with zero coordinates into the folded fan and put the multiplicity $\eta \big( \overset{\circ }{\xi },\xi \big) =-1$. Introduce the set \[ F\Psi \big( \overset{\circ }{\xi _{j}}\big) :=\widetilde{F\Psi }\big( \overset{\circ }{\xi _{j}}\big) \cup \left( 0;0;0\right) . \] It is called \emph{the full folded fan} or simply \emph{the folded fan} when from the context it is clear what fan $\widetilde{F\Psi }\big( \overset{% \circ }{\xi _{j}}\big) $ or $F\Psi \big( \overset{\circ }{\xi _{j}}% \big) $ is actually used. The set of multiplicities $\eta \big( \overset{\circ }{\xi },\xi ^{\prime }\big) $ for the shifts in $F\Psi \big( \overset{\circ }{\xi }\big) $ is thus f\/ixed as follows: \begin{equation} \eta \big( \overset{\circ }{\xi },\xi ^{\prime }\big)\big|_{\xi ^{\prime }-\xi \in F\Psi ( \overset{\circ }{\xi }) }:=-\sum_{\substack{ % w\in W, \\ w_{\phi ( \xi ,w) }\circ ( \xi -( w\circ \rho -\rho ) ) =\xi ^{\prime }}}\epsilon (w), \label{full-fan-multiplicity} \end{equation} and the recursion property (\ref{recursion-prop-weights}) is reformulated: \begin{equation*} \sum_{f\psi ( \overset{\circ }{\xi } ) \in F\Psi ( \overset{% \circ }{\xi } ) }\eta \big( \overset{\circ }{\xi },\xi +f\psi \big( \overset{\circ }{\xi }\big) \big) m_{\xi +f\psi ( \overset{\circ }{% \xi } ) }^{\left( \mu \right) }+\delta _{\xi ,\mu }=0, \qquad \xi \in \overline{C_{k}^{\left( 0\right) }}. 
\end{equation*} For the string $\sigma_{j}^{\mu,k }$ we can rewrite this relation separating the summations: \begin{gather*} \sum_{n=0}^{\infty }\sum_{\substack{ \overset{\circ }{f\psi } ( \overset{\circ }{\xi_{j} } ) \\ f\psi ( \overset{\circ }{\xi _{j}}% ) \in F\Psi ( \overset{\circ }{\xi _{j}} ) }}\eta \big( \overset{\circ }{\xi _{j}},\big( \overset{\circ }{\xi _{j}}+\overset{\circ }{f\psi }\big( \overset{\circ }{\xi_{j} }\big) ;k;n_{j}+n\big) \big) m_{ ( ( \overset{\circ }{\xi }_{j}+\overset{\circ }{% f\psi } ( \overset{\circ }{\xi_{j} } ) ) ;k; ( n_{j}+n ) ) }^{\left( \mu \right) }+\delta _{\xi _{j},\mu }=0. \end{gather*} The properties of $\mathcal{N}^{\mu }$ for an integrable modules $L^{\mu }$ guarantee that for any f\/inite $n_{j}$ the f\/irst sum is f\/inite. It extends to $n\leq -n_{j}$ (remember that $n_{j}$ is negative). The second sum can also be augmented so that the vectors $ \big( \overset{\circ }{\xi _{j}}+\overset{\circ }{f\psi }\big( \overset{\circ }{% \xi_{j} }\big);k;0 \big) =\big( \overset{\circ }{\xi _{s}};k;0 \big)$ run over the set $\Xi_{k}^{\mu }$. 
Now taking into account that the folded shifts and multiplicities do not depend on $n_j$ (Lemma~\ref{lemma1}), the notation can be simplif\/ied: \begin{gather*} \eta _{j,s}\left( n\right) : =\eta \big( \overset{\circ }{\xi _{j}}% ,\big( \overset{\circ }{\xi _{s}};k;n_{j}+n\big) \big) , \qquad m_{s,n_{j}+n}^{\left( \mu \right) } : =m_{ ( \overset{\circ }{\xi _{s}}% ;k;n_{j}+n ) }^{\left( \mu \right) }, \end{gather*} and the recursion property for the string functions in $\big\{ \sigma _{j}^{\mu }|\xi _{j}\in \Xi _{k}^{\mu }\big\} $ can be stated: \begin{proposition}\label{proposition2} Let $L^{\mu }$ be the integrable highest weight module of $\frak{g}$, $\mu =\big( \overset{\circ }{\mu };k;0\big) $, $p_{\max }^{\left( \mu \right) }:=\#\left( \Xi_{k}^{\mu }\right) $, $\xi _{j}=\big( \overset{\circ }{\xi _{j}};k;n_{j}\big) \in \Xi _{k}^{\mu } + n_j \delta $ , let $F\Psi \big( \overset{\circ }{\xi _{j}}\big) $ be the full folded fan for $\overset{\circ }{\xi _{j}}$ and $\eta _{j,s}\left( n\right) =-\sum\limits_{ \tilde{w}_{j,s} }\epsilon (\tilde{w}_{j,s})$ where the summation is over the elements $\tilde{w}_{j,s}$ of $W$ satisfying the equation $ w_{\phi \left( \xi ,w\right) }\circ \left( \xi _{j}-\left( \tilde{w}_{j,s}\circ \rho -\rho \right) \right) =\big( \overset{\circ }{\xi _{s}};k;n_{j}+n\big),$ then for the string function coefficients $m_{s,n_{j}+n}^{\left( \mu \right) }$ the following relation holds: \begin{equation} \sum_{s=1}^{p_{\max }^{\left( \mu \right) }}\sum_{n=0}^{-n_{j}}\eta _{j,s}\left( n\right) m_{s,n_{j}+ n}^{\left( \mu \right) }=-\delta _{\xi _{j},\mu }.
\label{recursion-prop-string} \end{equation} \end{proposition} For a f\/ixed $n_{j}\leq 0$ consider the sequence of the string weights \[ \xi _{j;n_{j}}=\big( \overset{\circ }{\xi _{j}};k;n_{j}\big) , \quad \xi _{j;n_{j}+1}=\big( \overset{\circ }{\xi _{j}};k;n_{j}+1\big) ,\quad \ldots ,\quad \xi _{j;0}=\big( \overset{\circ }{\xi _{j}};k;0\big) , \] and write down two $\left( \left| n_{j}\right| +1\right) $-dimensional vectors: the coordinates of the f\/irst one are the coef\/f\/icients of the $s$-th string $\left\{ \sigma _{s}^{\mu }\right\} $, \begin{equation*} \mathbf{m}_{\left( s;n_{j}\right) }^{\left( \mu \right) }:=\left( m_{s,n_{j}}^{\left( \mu \right) },m_{s,n_{j}+1}^{\left( \mu \right) },\ldots ,m_{s,0}^{\left( \mu \right) }\right) , \end{equation*} the second indicates that the $j$-th string $\sigma _{j}^{\mu ,k}$\ is starting at the highest weight $\mu $, \[ \mathbf{\delta }_{\left( j;n_{j}\right) }^{\mu }:=\left( 0,0,\ldots ,-1\right) . \] For the weights with $n\geq n_{j}$ we have the sequence of relations of the type (\ref{recursion-prop-string}): \begin{gather} \sum_{s=1}^{p_{\max }^{\left( \mu \right) }}\sum_{n=0}^{-n_{j}}\eta _{j,s}\left( n\right) m_{s,n_{j}+n}^{\left( \mu \right) } = 0, \notag \\ \sum_{s=1}^{p_{\max }^{\left( \mu \right) }}\sum_{n=0}^{-n_{j}-1}\eta _{j,s}\left( n\right) m_{s,n_{j}+n+1}^{\left( \mu \right) } = 0, \notag \\ \cdots \cdots \cdots \cdots \cdots \cdots \cdots \cdots \cdots\cdots , \notag \\ \sum_{s=1}^{p_{\max }^{\left( \mu \right) }}\eta _{j,s}\left( 0\right) m_{s,0}^{\left( \mu \right) } = -1. 
\label{preliminary-set-equations} \end{gather} Introduce the upper triangular $\left( \left| n_{j}\right| +1\right) \times \left( \left| n_{j}\right| +1\right) $-matrix \begin{equation*} \mathbf{M}_{\left( j,s\right) }^{\Xi \mu}:= \begin{array}{cccc} \eta _{j,s}\left( 0\right) & \eta _{j,s}\left( 1\right) & \cdots & \eta _{j,s}\left( -n_j \right) \\ 0 & \eta _{j,s}\left( 0 \right) & \cdots & \eta _{j,s}\left(-n_j -1\right) \\ \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & \cdots & \eta _{j,s}\left( 0\right) \end{array} . \end{equation*} The set of relations (\ref{preliminary-set-equations}) reads: \begin{equation} \sum_{s=1}^{p_{\max }^{\left( \mu \right) }}\mathbf{M}_{\left( j,s\right) }^{\Xi \mu} \cdot% \mathbf{m}_{\left( s;n_{j}\right) }^{\left( \mu \right) }=\mathbf{\delta }% _{\left( j;n_{j}\right) }^{\mu }. \label{preliminary-matrix-relations} \end{equation} Perform the same procedure for the other weights $\xi _{j}\in \Xi _{k}^{\mu }$ putting the minimal values of grade equal: $n_{j}|_{j=1,\ldots ,p_{\max }^{\left( \mu \right) }}=u$, that is, construct all the folded fans $F\Psi \big( \overset{\circ }{\xi _{j}}\big) $ (till the grade $u$) and the corresponding sets of multiplicities $\eta _{j,s}\left( n\right) $ (def\/ined by relations (\ref{full-fan-multiplicity})). For $j=1,\ldots ,p_{\max }^{\left( \mu \right) }$ compose the equations of the type (\ref{preliminary-matrix-relations}): \begin{gather} \sum_{s=1}^{p_{\max }^{\left( \mu \right) }}\mathbf{M}_{\left( j,s\right) }^{\Xi \mu} \mathbf{m}_{\left( s;n_j \right) }^{\left( \mu \right) } = \mathbf{\delta }% _{\left( j;n_j \right) }^{\mu }, \qquad j = 1,\dots, p_{\max }^{\left( \mu \right)}.
\label{almost-final-set-relations} \end{gather} Form two $\left( \left| u\right| +1\right) \times p_{\max }^{\left( \mu \right) }$-dimensional vectors: the f\/irst with the string coef\/f\/icients, \begin{gather*} \mathbf{m}_{\left( u\right) }^{\left( \mu \right) }:= \bigg( m_{1,u}^{\left( \mu \right) },m_{1,u+1}^{\left( \mu \right) },\ldots ,m_{1,0}^{\left( \mu \right) },m_{2,u}^{\left( \mu \right) },m_{2,u+1}^{\left( \mu \right) },\ldots ,m_{2,0}^{\left( \mu \right) },\ldots \notag \\ \hphantom{\mathbf{m}_{\left( u\right) }^{\left( \mu \right) }:=\bigg(}{} \ldots , m_{p_{\max }^{\left( \mu \right) },u}^{\left( \mu \right) },m_{p_{\max }^{\left( \mu \right) },u+1}^{\left( \mu \right) },\ldots ,m_{p_{\max }^{\left( \mu \right) },0}^{\left( \mu \right) }\bigg) , \end{gather*} the second indicating that the string $\sigma _{j}^{\mu ,k}$ with number $j$ starts at the highest weight $\mu $, \[ \mathbf{\delta }_{\left( u\right) }^{\mu }:=\left( 0,0,\ldots ,0,0,0,\ldots ,0,0,0,\ldots ,-1,0,0,\ldots ,0\right) , \] (here only in the $j$-th subsequence the last ($\left( \left| u\right| +1\right) $-th) coordinate is not zero). Def\/ine the $\left( \left| u\right| +1\right) p_{\max }^{\left( \mu \right) }\times \left( \left| u\right| +1\right) p_{\max }^{\left( \mu \right) }$-matrix -- the block-matrix with the blocks $\mathbf{M}_{\left( j,s\right) }^{\Xi \mu}$: \[ \mathbf{M}^{\Xi \mu}:=\left\| \mathbf{M}_{\left( j,s\right) }^{\Xi \mu}\right\| _{j,s=1,\ldots, p_{\max }^{\left( \mu \right) }}. \] In these terms the relations (\ref{almost-final-set-relations}) can be united into the single matrix equation: \begin{equation} \mathbf{M}^{\Xi \mu}\,\,\mathbf{m}_{\left( u\right) }^{\left( \mu \right) }=\mathbf{% \delta }_{\left( u\right) }^{\mu }. \label{final-equation} \end{equation} The matrix $\mathbf{M}^{\Xi \mu}$ being invertible, the equation (\ref{final-equation}) can be solved.
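Since each block $\mathbf{M}_{\left( j,s\right) }^{\Xi \mu}$ is upper triangular with $\eta _{j,s}\left( 0\right) $ on the diagonal, in the single-string case ($p_{\max }^{\left( \mu \right) }=1$) equation (\ref{final-equation}) reduces to a plain back substitution. The following Python sketch (an illustration of ours, not part of the construction) shows this mechanics; the multiplicity sequence \texttt{eta} is a hypothetical toy choice, the coefficients of $-(1-q)$, and does not come from an actual folded fan:

```python
# Back substitution for the single-string case of the matrix equation
# M m = delta: M[i][k] = eta[k - i] for k >= i (upper-triangular Toeplitz),
# delta = (0, ..., 0, -1), and m = (m_u, ..., m_0) ordered from the deepest
# grade u = -depth up to grade 0.
# NOTE: 'eta' below is a TOY sequence (coefficients of -(1 - q)),
# chosen only to illustrate the structure of the system.

def solve_string(eta, depth):
    """Solve the triangular system row by row, starting from the last row
    eta[0] * m[depth] = -1 and moving upwards."""
    assert eta[0] == -1  # the zero shift always enters with multiplicity -1
    m = [0] * (depth + 1)
    for i in range(depth, -1, -1):
        rhs = -1 if i == depth else 0
        acc = sum(eta[k - i] * m[k]
                  for k in range(i + 1, min(depth, i + len(eta) - 1) + 1))
        # row i reads: eta[0] * m[i] + acc = rhs, with eta[0] = -1
        m[i] = acc - rhs
    return m

print(solve_string([-1, 1], 6))  # -> [1, 1, 1, 1, 1, 1, 1]
```

With $\eta (q)=-(1-q)$ every coefficient equals $1$, i.e.\ the resulting string function is $1/(1-q)$; replacing \texttt{eta} by actual folded fan multiplicities yields the coordinates of $\mathbf{m}_{\left( u\right) }^{\left( \mu \right) }$.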
Thus we have demonstrated that the strings $\sigma _{j}^{\mu ,k}$ are determined by the matrix $\mathbf{M}^{\Xi \mu}$ whose elements are the full folded fan weight multiplicities: \begin{proposition}\label{proposition3} Let $L^{\mu }$ be an integrable highest weight module of $\frak{g}$, $\mu =\big( \overset{\circ }{\mu };k;0\big) $, $p_{\max }^{\left( \mu \right) }:=\#\left( \Xi_{k}^{\mu}\right) $, $\xi _{j}=\big( \overset{\circ }{\xi _{j}};k;n_{j}\big) \in \Xi _{k}^{\mu } + n_j \delta $; let $F\Psi \big( \overset{\circ }{\xi _{j}}\big) $ be the full folded fan for $\overset{\circ }{\xi _{j}}$ and $\mathbf{M}^{\Xi \mu}$~-- the $\left( \left| n_{j}\right| +1\right) p_{\max }^{\left( \mu \right) }\times \left( \left| n_{j}\right| +1\right) p_{\max }^{\left( \mu \right) }$-matrix formed by the blocks $\mathbf{M}_{\left( j,s\right) }^{\Xi \mu}$ \[ \mathbf{M}_{\left( j,s\right) }^{\Xi \mu}:= \begin{array}{cccc} \eta _{j,s}\left( 0 \right) & \eta _{j,s}\left( 1\right) & \cdots & \eta _{j,s}\left( -n_{j} \right) \\ 0 & \eta _{j,s}\left( 0 \right) & \cdots & \eta _{j,s}\left( -n_{j} -1\right) \\ \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & \cdots & \eta _{j,s}\left( 0\right) \end{array} \] where the elements $\eta _{j,s}\left( n\right) $ are the multiplicities of the folded fan weights, \[ \eta _{j,s}\left( n\right) =-\sum_{ \tilde{w}_{j,s} }\epsilon (\tilde{w}_{j,s}) \] with the summation over the elements $\tilde{w}_{j,s} \in W$ satisfying the equation \[ w_{\phi \left( \xi ,w\right) }\circ \left( \xi _{j}-\left( \tilde{w}_{j,s}\circ \rho -\rho \right) \right) =\big( \overset{\circ }{\xi _{s}};k;n_{j}+n\big). \] Let the string function coefficients be the coordinates in the $\left( \left| n_{j}\right| +1\right) $-dimensional subsequences of the vector~$\mathbf{m}_{\left( n_{j}\right) }^{\left( \mu \right) }$.
Then for the coefficients of $\big\{\sigma _{j}^{\mu ,k}\mid j=1,\ldots, p_{\max }^{\left( \mu \right)}\big\}$ the following relation holds: \begin{gather} \mathbf{m}_{\left( n_{j}\right) }^{\left( \mu \right) }= \left(\mathbf{M}^{\Xi \mu}\right)^{-1} \mathbf{\delta }_{\left( n_{j}\right) }^{\mu }. \label{finite-string-solution} \end{gather} \end{proposition} Thus the solution $\mathbf{m}_{\left( n_{j} \right) }^{\left( \mu \right) }$ describes all the string functions relevant to the chosen module $L^{\mu }$ (with the grades no less than the preliminarily f\/ixed $n_{j}=u$). To describe the complete string functions it is suf\/f\/icient to send $u$ to the limit $u\rightarrow -\infty $. \section{Examples}\label{section5} \subsection[$\frak{g}=A_{2}^{\left( 1\right) }$]{$\boldsymbol{\frak{g}=A_{2}^{\left( 1\right) }}$}\label{section5.1} Consider the fan $\Gamma _{\frak{h}\subset \frak{g}}$ (with $n_{\psi ^{\left( 0\right) }}\leq 9$): \begin{gather} \Gamma _{\frak{h}\subset \frak{g}} = \big\{ \left( 0, 1, 0, 0, 1\right) , \left( 2, 1, 0, 0, -1\right) , \left( 1, 0, 0, 0, 1\right) , \left( 1, 2, 0, 0, -1\right) , \notag \\ \hphantom{\Gamma _{\frak{h}\subset \frak{g}} = \big\{}{} \left( 2, 2, 0, 0, 1\right) ,\left( 3, 1, 0, 1, 1\right) , \left( -1, 1, 0, 1, -1\right) ,\left( 1, 3, 0, 1, 1\right) , \notag \\ \hphantom{\Gamma _{\frak{h}\subset \frak{g}} = \big\{}{} \left( 1, -1, 0, 1, -1\right) , \left( 3, 3, 0, 1, -1\right) ,\left( -1, -1, 0, 1, 1\right) ,\left( 3, 4, 0, 2, 1\right) , \notag \\ \hphantom{\Gamma _{\frak{h}\subset \frak{g}} = \big\{}{} \left( 0, -2, 0, 2, 1\right) , \left( 2, 4, 0, 2, -1\right) , \left( -1, -2, 0, 2, -1\right) ,\left( 4, 3, 0, 2, 1\right) , \notag \\ \hphantom{\Gamma _{\frak{h}\subset \frak{g}} = \big\{}{} \left( -2, 0, 0, 2, 1\right) , \left( 4, 2, 0, 2, -1\right) , \left( -2, -1, 0, 2, -1\right) , \left( 0, 3, 0, 2, -1\right) , \notag \\ \hphantom{\Gamma _{\frak{h}\subset \frak{g}} = \big\{}{} \left( 3, 0, 0, 2, -1\right) , \left( -1, 2,
0, 2, 1\right) , \left( 2, -1, 0, 2, 1\right) , \left( 0, 4, 0, 4, 1\right) , \notag \\ \hphantom{\Gamma _{\frak{h}\subset \frak{g}} = \big\{}{} \left( -3, -2, 0, 4, 1\right) ,\left( 5, 4, 0, 4, -1\right) , \left( 2, -2, 0, 4, -1\right) , \left( 4, 0, 0, 4, 1\right) , \notag \\ \hphantom{\Gamma _{\frak{h}\subset \frak{g}} = \big\{}{} \left( -2, -3, 0, 4, 1\right) , \left( 4, 5, 0, 4, -1\right) , \left( -2, 2, 0, 4, -1\right) , \left( -3, 0, 0, 4, -1\right) , \notag \\ \hphantom{\Gamma _{\frak{h}\subset \frak{g}} = \big\{}{} \left( 1, -3, 0, 5, 1\right) , \left( 5, 1, 0, 5, -1\right) , \left( 5, 5, 0, 5, 1\right) , \left( 1, 5, 0, 5, -1\right) , \notag \\ \hphantom{\Gamma _{\frak{h}\subset \frak{g}} = \big\{}{} \left( 0, -3, 0, 4, -1\right) , \left( 2, 5, 0, 4, 1\right) , \left( 5, 2, 0, 4, 1\right) , \left( -3, -3, 0, 5, -1\right) , \notag \\ \hphantom{\Gamma _{\frak{h}\subset \frak{g}} = \big\{}{}\left( -3, 1, 0, 5, 1\right) , \left( 6, 4, 0, 6, 1\right) , \left( 3, -2, 0, 6, 1\right) , \left( -1, 4, 0, 6, -1\right) , \notag \\ \hphantom{\Gamma _{\frak{h}\subset \frak{g}} = \big\{}{} \left( -4, -2, 0, 6, -1\right) ,\left( 4, 6, 0, 6, 1\right) , \left( -2, 3, 0, 6, 1\right) , \left( 4, -1, 0, 6, -1\right) , \notag \\ \hphantom{\Gamma _{\frak{h}\subset \frak{g}} = \big\{}{} \left( -2, -4, 0, 6, -1\right) , \left( 3, 6, 0, 6, -1\right) ,\left( 6, 3, 0, 6, -1\right) , \left( -4, -1, 0, 6, 1\right) , \notag \\ \hphantom{\Gamma _{\frak{h}\subset \frak{g}} = \big\{}{} \left( -1, -4, 0, 6, 1\right) , \left( 6, 1, 0, 8, 1\right) , \left( -4, 1, 0, 8, -1\right) ,\left( 1, 6, 0, 8, 1\right) , \notag \\ \hphantom{\Gamma _{\frak{h}\subset \frak{g}} = \big\{}{} \left( 1, -4, 0, 8, -1\right) , \left( 6, 6, 0, 8, -1\right) , \left( -4, -4, 0, 8, 1\right) , \left( 3, 7, 0, 9, 1\right) , \notag \\ \hphantom{\Gamma _{\frak{h}\subset \frak{g}} = \big\{}{} \left( -3, -5, 0, 9, 1\right) , \left( 5, 7, 0, 9, -1\right) , \left( -1, -5, 0, 9, -1\right) , \left( 7, 3, 0, 9, 1\right) , 
\notag \\ \hphantom{\Gamma _{\frak{h}\subset \frak{g}} = \big\{}{} \left( -5, -3, 0, 9, 1\right) ,\left( 7, 5, 0, 9, -1\right) , \left( -5, -1, 0, 9, -1\right) , \left( -3, 3, 0, 9, -1\right) , \notag \\ \hphantom{\Gamma _{\frak{h}\subset \frak{g}} = \big\{}{} \left( 3, -3, 0, 9, -1\right) , \left( -1, 5, 0, 9, 1\right) , \left( 5, -1, 0, 9, 1\right), \ldots \big\} . \label{star-a2-1} \end{gather} Here the f\/irst two coordinates are classical in the basis of simple roots $% \left\{ \alpha _{1},\alpha _{2}\right\} $, next comes the level $k=0$, the grade $n_{\psi ^{\left( 0\right) }}$ and the multiplicity $m_{\psi ^{\left( 0\right) }}$ of the weight $\psi ^{\left( 0\right) }\in \Gamma _{\frak{h}% \subset \frak{g}}$ (for the injection $\frak{h} \longrightarrow \frak{g}$ we have $m_{\psi ^{\left( 0\right) }}=-\epsilon(w)$). \subsubsection[$k=1$]{$\boldsymbol{k=1}$}\label{section5.1.1} The set $\overline{C_{1;0}^{\left( 0\right) }}$ contains three weights ($% p_{\max }^{\left( \mu \right) }=3$): \begin{gather*} \overline{C_{1;0}^{\left( 0\right) }} = \big\{ \left( 0,0;1;0\right) , ( \overset{\circ }{\omega _{1}};1;0 ) , ( \overset{\circ }{% \omega _{2}};1;0 ) \big\} = \left\{\omega _{0},\omega _{1},\omega _{2} \right\}\\ \hphantom{\overline{C_{1;0}^{\left( 0\right) }} }{} = \big\{ \left( 0,0;1;0\right) ,\left( 2/3,1/3;1;0\right) ,\left( 1/3,2/3;1;0\right) \big\} , \end{gather*} $\omega _{i}$ are the fundamental weights. The classical components $\overset{\circ }{f\psi }$ of the folded fan shifts \[ w_{\phi \left( \xi ,w\right) }\circ \left( \xi -\left( w\circ \rho -\rho \right) \right) -\xi ,\qquad \xi \in \overline{C_{k}^{\left( 0\right) }} \] belong to the classical root lattice $Q\big( \overset{\circ }{\frak{g}}% \big) $. 
For any weight $\xi =\big( \overset{\circ }{\xi };1;0\big) \in \overline{C_{1;0}^{\left( 0\right) }}$ these classical components are equal to zero, thus the folded fan has the form \[ F\Psi \big( \overset{\circ }{\xi _{j}}\big) :=\big\{ \big( 0;0;n_{f\psi ( \overset{\circ }{\xi } ) }\big) \big\} ,\qquad \xi_{j} \in \overline{C_{k}^{\left( 0\right) }},\qquad j=1,2,3. \] It is convenient to indicate the multiplicities \[ \eta _{j,s}\left( n\right) = -\sum_{\substack{ \tilde{w}_{j,s}\in W, \\ w_{\phi\left( \xi ,w\right) }\circ \left( \xi _{j}-\left( \tilde{w}_{j,s}\circ \rho -\rho \right) \right) = ( \overset{\circ }{\xi _{s}};k;n_{j}+n ) }} \epsilon (\tilde{w}_{j,s}) \] as the additional coordinates of the shifts $f\psi $: \[ F\Psi\big( \overset{\circ }{\xi _{j}}\big) :=\big\{ \big( 0,0,n_{f\psi ( \overset{\circ }{\xi } ) },\eta _{j,s}\big( n_{f\psi ( \overset{\circ }{\xi } ) }\big) \big) \big\} . \] Thus any folded fan for the highest weight $\mu $ of level $k=1$ contains only ``one string''. Moreover the fans $F\Psi\big( \overset{% \circ }{\xi _{j}}\big) $ do not depend on the choice of $\xi _{j}=\big( \overset{\circ }{\xi _{j}};1;0\big) \in \overline{C_{1;0}^{\left( 0\right) }}$. The latter results are in full accord with Proposition 12.6 in \cite{Kac}.
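This one-string situation can also be checked numerically. At level $k=1$ the recursion reduces to $\sum_{n}\eta \left( n\right) m_{g-n}=-\delta _{g,0}$ with $\eta \left( 0\right) =-1$, and consistency with a string function equal to $\prod_{n\geq 1}\left( 1-q^{n}\right) ^{-2}$ forces $\sum_{n}\eta \left( n\right) q^{n}=-\prod_{n\geq 1}\left( 1-q^{n}\right) ^{2}$. The following Python sketch (an illustration of ours, not part of the derivation) generates the multiplicities from the squared Euler product and runs the recursion:

```python
# Numeric check of the one-string recursion at level k = 1.
# With eta(q) = -prod_{n>=1}(1-q^n)^2 the recursion
#   m_g = delta_{g,0} + sum_{n>=1} eta(n) m_{g-n}
# must reproduce m(q) = prod_{n>=1}(1-q^n)^{-2}.
N = 20

# coefficients of the Euler product prod_{n=1}^{N} (1 - q^n), truncated at q^N
euler = [1] + [0] * N
for n in range(1, N + 1):
    for g in range(N, n - 1, -1):
        euler[g] -= euler[g - n]

# eta(n) = -(coefficient of q^n in E(q)^2)
eta = [-sum(euler[i] * euler[g - i] for i in range(g + 1))
       for g in range(N + 1)]

# run the recursion (note eta[0] = -1)
m = []
for g in range(N + 1):
    m.append((1 if g == 0 else 0)
             + sum(eta[n] * m[g - n] for n in range(1, g + 1)))

print(eta[:11])  # -> [-1, 2, 1, -2, -1, -2, 2, 0, 2, 2, -1]
print(m[:11])    # -> [1, 2, 5, 10, 20, 36, 65, 110, 185, 300, 481]
```

The coefficients $m_{g}$ are the numbers of two-colored partitions of $g$, i.e.\ the expansion of the square of the inverse Euler function.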
Using the fan $\Gamma _{\frak{h}\subset \frak{g}}$ we obtain the folded fan (only the shifts with nonzero multiplicities $\eta _{j,j}$ are indicated, the maximal grade here is $n=20$): \begin{gather*} F\Psi\big( \overset{\circ }{\xi _{j}}\big) :=\big\{ \left( 0;0;0;-1\right) ,\left( 0;0;1;2\right) ,\left( 0;0;2;1\right) ,\left( 0;0;3;-2\right) ,\left( 0;0;4;-1\right) , \\ \hphantom{F\Psi\big( \overset{\circ }{\xi _{j}}\big) :=\big\{}{} \left(0;0;5;-2\right) ,\left( 0;0;6;2\right) ,\left( 0;0;8;2\right) , \left( 0;0;9;2\right) ,\left( 0;0;10;-1\right) ,\left( 0;0;13;-2\right) , \\ \hphantom{F\Psi\big( \overset{\circ }{\xi _{j}}\big) :=\big\{}{} \left( 0;0;14;-3\right) ,\left( 0;0;15;2\right) ,\left( 0;0;16;-2\right) , \left( 0;0;19;2\right) ,\left( 0;0;20;2\right) ,\ldots \big\} \end{gather*} The multiplicities \begin{gather*} \left\{ \eta _{j,j}\left( n\right) \right\} _{n=0,\ldots ,20} = \big\{ -1,2,1,-2,-1,-2,2,0,2,2,-1,0,0, -2,-3,2,-2,0,0,2,2\big\} \end{gather*} form the unique nonzero matrix $\mathbf{M}_{\left( j,j\right) }$ for $j=1,2,3 $: \[ \mathbf{M}_{\left( j,j\right) }:= \begin{array}{cccc} \eta _{j,j}\left( 0\right) & \eta _{j,j}\left( 1\right) & \cdots & \eta _{j,j}\left( -n_{j}\right) \\ 0 & \eta _{j,j}\left( 0\right) & \cdots & \eta _{j,j}\left( -n_{j} -1\right) \\ \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & \cdots & \eta _{j,j}\left( 0\right) \end{array} .
\] The matrix $\mathbf{M}$ is block-diagonal and the equation (\ref{finite-string-solution}) splits into three equivalent (for $\mu =\left( 0,0;1;0\right) ,\left( 2/3,1/3;1;0\right) ,\left( 1/3,2/3;1;0\right) $) relations $\mathbf{m}_{\left( j;-20\right) }^{\left( \mu \right) }=\mathbf{M}% _{\left( j,j\right) }^{-1} \mathbf{\delta }_{\left( j;-20\right) }^{\mu }$ determining the unique string function with coef\/f\/icients $\mathbf{m}_{\left( j;-20\right) }^{\left( \mu \right) }=\big( m_{j,-20}^{\left( \mu \right) },m_{j,-19}^{\left( \mu \right) },\ldots ,m_{j,0}^{\left( \mu \right) }\big) $, \begin{gather*} \sigma \left( q\right) = 1+2q+5q^{2}+10q^{3}+20q^{4}+36q^{5} +65q^{6}+110q^{7}+185q^{8}+300q^{9}+481q^{10} \\ \hphantom{\sigma \left( q\right) =}{} +752q^{11}+1165q^{12}+1770q^{13}+2665q^{14}+3956q^{15} +5822q^{16}+8470q^{17}\\ \hphantom{\sigma \left( q\right) =}{} +12230q^{18}+17490q^{19}+24842q^{20}+\cdots . \end{gather*} The obtained expression coincides with the expansion of the square of the inverse Euler function (see Proposition~12.13 in~\cite{Kac} and the relation~(12.13.4) there). \subsubsection[$k=2$]{$\boldsymbol{k=2}$}\label{section5.1.2} The set $\overline{C_{2;0}^{\left( 0\right) }}$ contains six weights: \begin{gather*} \overline{C_{2;0}^{\left( 0\right) }} = \left\{ \begin{array}{c} \left( 0,0;2;0\right) , ( \overset{\circ }{\omega _{1}};2;0 ) , ( \overset{\circ }{\omega _{2}};2;0 ) , \\ ( \overset{\circ }{\omega _{1}}+\overset{\circ }{\omega _{2}}% ;2;0 ) , ( 2\overset{\circ }{\omega _{1}};2;0 ) , ( 2% \overset{\circ }{\omega _{2}};2;0 ) \end{array} \right\} \\ \hphantom{\overline{C_{2;0}^{\left( 0\right) }} }{} = \left\{ \begin{array}{c} \left( 0,0;2;0\right) ,\left( 2/3,1/3;2;0\right) ,\left( 1/3,2/3;2;0\right) , \\ \left( 1,1;2;0\right) ,\left( 4/3,2/3;2;0\right) ,\left( 2/3,4/3;2;0\right) \end{array} \right\} . \end{gather*} This set is divided into 3 congruence classes. The fan shifts cannot connect vectors from dif\/ferent classes.
Thus instead of the set $\Xi _{2}$ we can consider three subsets separately: \begin{gather*} \Xi _{\rm 2;I} =\left\{ \left( 0,0;2;0\right) ,\left( 1,1;2;0\right) \right\} , \\ \Xi _{\rm 2;II} =\left\{ \left( 2/3,1/3;2;0\right) ,\left( 2/3,4/3;2;0\right) \right\} , \\ \Xi _{\rm 2;III} =\left\{ \left( 1/3,2/3;2;0\right) ,\left( 4/3,2/3;2;0\right) \right\} . \end{gather*} Let us start with $\overset{\circ }{\xi _{s}}\in \Xi _{\rm 2;I}$ and $\mu =\left( 0,0;2;0\right) $. Here we have two folded fans $F\Psi% \big( \overset{\circ }{\xi _{1}}\big) $ and $F\Psi\big( \overset{\circ }{\xi _{2}}\big) $. Using the fan $\Gamma _{\frak{h}\subset \frak{g}}$ (\ref{star-a2-1}) we obtain the folded fans (the maximal grade here is $n=10$): \begin{gather*} F\Psi\big( \overset{\circ }{\xi _{1}}\big) :=\big\{ \left( 0;0;0;-1\right), \left( 0;0;2;1\right) ,\left( 0;0;4;2\right) ,\left( 0;0;8;-2\right) ,\left( 0;0;10;-2\right) , \\ \hphantom{F\Psi\big( \overset{\circ }{\xi _{1}}\big) :=\big\{}{} \left( 1;1;0;2\right) ,\left( 1;1;1;-1\right) ,\left( 1;1;2;-2\right) ,\left( 1;1;3;-2\right) ,\left( 1;1;4;2\right) , \\ \hphantom{F\Psi\big( \overset{\circ }{\xi _{1}}\big) :=\big\{}{}\left( 1;1;5;1\right) ,\left( 1;1;6;-2\right) ,\left( 1;1;7;2\right) ,\left( 1;1;9;-1\right) ,\ldots \big\}, \\ F\Psi\big( \overset{\circ }{\xi _{2}}\big) :=\big\{ \left( 0;0;1;1\right) ,\left( 0;0;3;-2\right) ,\left( 0;0;7;1\right) ,\left( 0;0;9;-1\right) , \\ \hphantom{F\Psi\big( \overset{\circ }{\xi _{2}}\big) :=\big\{}{} \left( 1;1;0;-1\right) ,\left( 1;1;1;2\right) ,\left( 1;1;2;-2\right) ,\left( 1;1;4;1\right) ,\left( 1;1;5;2\right) ,\left( 1;1;6;2\right) , \\ \hphantom{F\Psi\big( \overset{\circ }{\xi _{2}}\big) :=\big\{}{}\left( 1;1;7;-2\right) ,\left( 1;1;8;-2\right) ,\left( 1;1;9;-2\right) ,\ldots \big\}.
\end{gather*} The multiplicities ($n=0,\ldots ,10$) \begin{gather*} \left\{ \eta _{1,1}\left( n\right) \right\} = \left\{ -1,0,1,0,2,0,0,0,-2,0,-2\right\} , \\ \left\{ \eta _{1,2}\left( n\right) \right\} = \left\{ 2,-1,-2,-2,2,1,-2,2,0,-1,0\right\} , \\ \left\{ \eta _{2,1}\left( n\right) \right\} = \left\{ 0,1,0,-2,0,0,0,1,0,-1,0\right\} , \\ \left\{ \eta _{2,2}\left( n\right) \right\} = \left\{ -1,2,-2,0,1,2,2,-2,-2,-2,0\right\} , \end{gather*} form the matrices $\mathbf{M}_{\left( s,t\right) }^{\Xi 2;{\rm I}}$ for $s,t=1,2$: \[ \mathbf{M}_{\left( s,t\right) }^{\Xi 2;{\rm I}}:= \begin{array}{cccc} \eta _{s,t}\left( 0\right) & \eta _{s,t}\left( 1\right) & \cdots & \eta _{s,t}\left( 10\right) \\ 0 & \eta _{s,t}\left( 0\right) & \cdots & \eta _{s,t}\left( 9\right) \\ \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & \cdots & \eta _{s,t}\left( 0\right) \end{array} . \] The block-matrix $\mathbf{M}^{\Xi 2;{\rm I}}$ is \[ \mathbf{M}^{\Xi 2;{\rm I}}:=\left\| \begin{array}{cc} \mathbf{M}_{\left( 1,1\right) }^{\Xi 2;{\rm I}} & \mathbf{M}% _{\left( 1,2\right) }^{\Xi 2;{\rm I}} \\ \mathbf{M}_{\left( 2,1\right) }^{\Xi 2;{\rm I}} & \mathbf{M}% _{\left( 2,2\right) }^{\Xi 2;{\rm I}} \end{array} \right\| . 
\] The equation \[ \mathbf{m}_{\left( -10\right) }^{\left( 0,0;2;0\right) }=\big( \mathbf{M}% ^{\Xi 2;{\rm I}}\big) ^{-1} \mathbf{\delta }_{\left( -10\right) }^{\left( 0,0;2;0\right) } \] gives two string functions $\sigma _{\left( s;-10\right) }^{\left( 0,0;2;0\right) }$ with the coef\/f\/icients in the subsections of the vector $% \mathbf{m}_{\left( -10\right) }^{\left( 0,0;2;0\right) }$: \begin{gather*} \sigma _{\left( 1;-10\right) }^{\left( 0,0;2;0\right) } = 1+2q+8q^{2}+20q^{3}+52q^{4}+116q^{5} \\ \hphantom{\sigma _{\left( 1;-10\right) }^{\left( 0,0;2;0\right) }=}{} +256q^{6}+522q^{7}+1045q^{8}+1996q^{9}+3736q^{10}+\cdots , \\ \sigma _{\left( 2;-10\right) }^{\left( 0,0;2;0\right) } =q+4q^{2}+12q^{3}+32q^{4}+77q^{5} \\ \hphantom{\sigma _{\left( 2;-10\right) }^{\left( 0,0;2;0\right) } =}{} +172q^{6}+365q^{7}+740q^{8}+1445q^{9}+2736q^{10}+\cdots . \end{gather*} In the second congruence class $\Xi _{\rm 2;{\rm II}}{=}\left\{ \left( 2/3,1/3;2;0\right) ,\left( 2/3,4/3;2;0\right) \right\} $ put $\mu {=}\left( 2/3,1/3;2;0\right) $. Again we have two folded fans $F\Psi% \big( \overset{\circ }{\xi _{1}}\big) $ and $F\Psi\big( \overset{\circ }{\xi _{2}}\big) $. The multiplicities ($n=0,\ldots ,10$): \begin{gather*} \left\{ \eta _{1,1}\left( n\right) \right\} = \left\{ -1,2,-2,0,1,2,2,-2,-2,-2,0\right\} , \\ \left\{ \eta _{1,2}\left( n\right) \right\} = \left\{ 1,0,-2,0,0,0,1,0,-1,0,2\right\} , \\ \left\{ \eta _{2,1}\left( n\right) \right\} = \left\{ 0,2,-1,-2,-2,2,1,-2,2,0,-1\right\} , \\ \left\{ \eta _{2,2}\left( n\right) \right\} = \left\{ -1,0,1,0,2,0,0,0,-2,0,-2\right\} ,
\end{gather*}
These multiplicities form the matrices $\mathbf{M}_{\left( s,t\right) }^{\Xi 2;{\rm II}}$ for $s,t=1,2$ and the $22\times 22$ block-matrix $\mathbf{M}$
\[
\mathbf{M}^{\Xi 2;{\rm II}}:=\left\|
\begin{array}{cc}
\mathbf{M}_{\left( 1,1\right) }^{\Xi 2;{\rm II}} & \mathbf{M}_{\left( 1,2\right) }^{\Xi 2;{\rm II}} \\
\mathbf{M}_{\left( 2,1\right) }^{\Xi 2;{\rm II}} & \mathbf{M}_{\left( 2,2\right) }^{\Xi 2;{\rm II}}
\end{array}
\right\| .
\]
The equation
\[
\mathbf{m}_{\left( -10\right) }^{\left( 2/3,1/3;2;0\right) }=\left( \mathbf{M}^{\Xi 2;{\rm II}}\right) ^{-1} \mathbf{\delta }_{\left( -10\right) }^{\left( 2/3,1/3;2;0\right) }
\]
gives two string functions $\sigma _{\left( s;-10\right) }^{\left( 2/3,1/3;2;0\right) }$ for the module $L^{\left( 2/3,1/3;2;0\right) }$ with the coef\/f\/icients in the subsections of the vector $\mathbf{m}_{\left( -10\right) }^{\left( 2/3,1/3;2;0\right) }$:
\begin{gather*}
\sigma _{\left( 1;-10\right) }^{\left( 2/3,1/3;2;0\right) } =1+4q+13q^{2}+36q^{3}+89q^{4}+204q^{5} \\
\hphantom{\sigma _{\left( 1;-10\right) }^{\left( 2/3,1/3;2;0\right) }=}{} +441q^{6}+908q^{7}+1798q^{8}+3444q^{9}+6410q^{10}+\cdots , \\
\sigma _{\left( 2;-10\right) }^{\left( 2/3,1/3;2;0\right) } = 2q+7q^{2}+22q^{3}+56q^{4}+136q^{5} \\
\hphantom{\sigma _{\left( 2;-10\right) }^{\left( 2/3,1/3;2;0\right) }=}{} +300q^{6}+636q^{7}+1280q^{8}+2498q^{9}+4708q^{10}+\cdots .
\end{gather*}
For the third congruence class $\Xi _{2;{\rm III}}{=}\left\{ \left( 1/3,2/3;2;0\right) ,\left( 4/3,2/3;2;0\right) \right\} $ the folded fans $F\Psi\big( \overset{\circ }{\xi _{1}}\big) $ and $F\Psi \big( \overset{\circ }{\xi _{2}}\big) $ are the same as for the second one. As a result, the string functions also coincide: $\sigma _{\left( s;-10\right) }^{\left( 1/3,2/3;2;0\right) }=\sigma _{\left( s;-10\right) }^{\left( 2/3,1/3;2;0\right) } $ in accord with the $A_{2}$ external automorphism.
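To make the inversion step concrete, the $\Xi _{\rm 2;I}$ computation above can be reproduced numerically. The following sketch is ours, not the authors' code, and rests on two assumptions about the conventions: the lists $\left\{ \eta _{s,t}\right\} $ are read in the order $\eta \left( 0\right) ,\ldots ,\eta \left( 10\right) $, and the right-hand side $\mathbf{\delta }$ carries a single entry $-1$ at the lowest grade of the f\/irst block.

```python
import numpy as np

# Multiplicity lists eta_{s,t} for the congruence class Xi_{2;I},
# copied from the text and read here as eta(0), ..., eta(10).
eta = {
    (1, 1): [-1, 0, 1, 0, 2, 0, 0, 0, -2, 0, -2],
    (1, 2): [2, -1, -2, -2, 2, 1, -2, 2, 0, -1, 0],
    (2, 1): [0, 1, 0, -2, 0, 0, 0, 1, 0, -1, 0],
    (2, 2): [-1, 2, -2, 0, 1, 2, 2, -2, -2, -2, 0],
}
L = 11  # grades n = 0, ..., 10

def toeplitz_block(seq):
    """Upper-triangular Toeplitz block with B[i, j] = eta(j - i)."""
    B = np.zeros((L, L), dtype=int)
    for i in range(L):
        for j in range(i, L):
            B[i, j] = seq[j - i]
    return B

# The 22x22 block matrix M^{(Xi,2;v)}.
M = np.block([[toeplitz_block(eta[(1, 1)]), toeplitz_block(eta[(1, 2)])],
              [toeplitz_block(eta[(2, 1)]), toeplitz_block(eta[(2, 2)])]])

# delta for mu = (0,0;2;0): a single -1 in the first block at the
# lowest grade (assumption: the last row of a block carries q^0).
delta = np.zeros(2 * L)
delta[L - 1] = -1.0

m = np.linalg.solve(M, delta)

# String function coefficients, read off in reverse grade order.
sigma1 = [round(x) for x in m[:L][::-1]]   # sigma_(1;-10)^{(0,0;2;0)}
sigma2 = [round(x) for x in m[L:][::-1]]   # sigma_(2;-10)^{(0,0;2;0)}
```

With these conventions the solve reproduces the leading coef\/f\/icients $1,2,8,20,52,\ldots$ and $0,1,4,12,32,\ldots$ of the series $\sigma _{\left( 1;-10\right) }^{\left( 0,0;2;0\right) }$ and $\sigma _{\left( 2;-10\right) }^{\left( 0,0;2;0\right) }$ displayed above.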
\subsubsection[$k=4$]{$\boldsymbol{k=4}$}\label{section5.1.3} The set $\overline{C_{1;0}^{\left( 0\right) }}$ contains 15 projected maximal weights
\begin{gather*}
\left\{ \xi _{j}\mid \xi _{j}\in \Xi_{4} ;j=1,\ldots ,p_{\max }=15\right\} , \\
\overline{C_{1;0}^{\left( 0\right) }}=\left\{
\begin{array}{c}
4\omega _{0},3\omega _{0}+\omega _{1},3\omega _{0}+\omega _{2},2\omega _{0}+2\omega _{1},2\omega _{0}+2\omega _{2}, \\
2\omega _{0}+\omega _{1}+\omega _{2},\omega _{0}+3\omega _{1},\omega _{0}+3\omega _{2},\omega _{0}+2\omega _{1}+\omega _{2}, \\
\omega _{0}+\omega _{1}+2\omega _{2},3\omega _{1}+\omega _{2},\omega _{1}+3\omega _{2},2\omega _{1}+2\omega _{2},4\omega _{1},4\omega _{2}
\end{array}
\right\} .
\end{gather*}
This set is divided into three congruence classes. Instead of the set $\Xi _{4}$ we can consider three subsets separately:
\begin{gather*}
\Xi _{\rm 4;I} = \left\{ \left( 0,0;4;0\right) ,\left( 1,1;4;0\right) ,\left( 1,2;4;0\right) ,\left( 2,1;4;0\right) ,\left( 2,2;4;0\right) \right\} , \\
\Xi _{\rm 4;II} = \big\{ \left( 2/3,1/3;4;0\right) ,\left( 2/3,4/3;4;0\right) ,\left( 5/3,4/3;4;0\right) , \left( 5/3,7/3;4;0\right) ,\left( 8/3,4/3;4;0\right) \big\} , \\
\Xi _{\rm 4;III} = \big\{ \left( 1/3,2/3;4;0\right) ,\left( 4/3,2/3;4;0\right) ,\left( 4/3,5/3;4;0\right) , \left( 7/3,5/3;4;0\right) ,\left( 4/3,8/3;4;0\right) \big\} .
\end{gather*}
Let us start with $\overset{\circ }{\xi _{s}}\in \Xi _{\rm 4;I}$ and $\mu =\left( 0,0;4;0\right) $. Here we have f\/ive folded fans $F\Psi\big( \overset{\circ }{\xi _{s}}\big) $, $s=1,\ldots ,5$.
Using the fan $\Gamma _{\frak{h}\subset \frak{g}}$ (\ref{star-a2-1}) we construct the folded fans (the maximal grade here is chosen to be $n=9$): \begin{gather*} F\Psi\big( \overset{\circ }{\xi _{1}}\big) :=\big\{ \left( 0,0;0;-1\right) ,\left( 0,0;9;2\right) ,(1,1;0;2),(1,1;1;1),(1,1;3;-1), \\ \hphantom{F\Psi\big( \overset{\circ }{\xi _{1}}\big) :=\big\{}{} (1,1;4;-2),(1,1;5;2),(1,1;6;-2),(1,1;7;-1),(1,1;8;2), \\ \hphantom{F\Psi\big( \overset{\circ }{\xi _{1}}\big) :=\big\{}{}(1,2;0;-1),(1,2;1;-1),(1,2;3;1),(1,2;5;1),(1,2;8;1), \\ \hphantom{F\Psi\big( \overset{\circ }{\xi _{1}}\big) :=\big\{}{} (2,1;0;-1),(2,1;1;-1),(2,1;3;1),(2,1;5;1),(2,1;8;1), \\ \hphantom{F\Psi\big( \overset{\circ }{\xi _{1}}\big) :=\big\{}{} (2,2;0;1),(2,2;2;2),(2,2;4;-2),(2,2;6;-2),(2,2;8;-2), \ldots \big\}, \\ F\Psi\big( \overset{\circ }{\xi _{2}}\big) :=\big\{ \left( 0,0;1;1\right) ,\left( 0;0;5;-1\right) ,\left( 1;1;0;-1\right) ,\left( 1;1;2;-1\right) ,\left( 1;1;4;2\right) , \left( 1;1;5;-2\right) ,\\ \hphantom{F\Psi\big( \overset{\circ }{\xi _{2}}\big) :=\big\{}{} \left( 1;1;8;2\right) ,\left( 1;1;9;2\right) ,\left( 1;1;1;2\right) ,\left( 1;1;2;-2\right) , \left( 1;1;4;1\right) ,\left( 1;1;5;2\right) ,\\ \hphantom{F\Psi\big( \overset{\circ }{\xi _{2}}\big) :=\big\{}{} \left( 1;1;6;2\right) ,\left( 1;2;0;1\right) ,\left( 1;2;1;-1\right) ,\left( 1;2;2;1\right) ,\left( 1;2;4;1\right) ,\left( 1;2;5;-1\right) , \\ \hphantom{F\Psi\big( \overset{\circ }{\xi _{2}}\big) :=\big\{}{} \left(1;2;6;-1\right) ,\left( 1;2;7;-1\right) , \left( 2;1;0;1\right) ,\left( 2;1;1;-1\right) ,\left( 2;1;2;1\right) ,\left(2;1;4;1\right) ,\\ \hphantom{F\Psi\big( \overset{\circ }{\xi _{2}}\big) :=\big\{}{} \left( 2;1;5;-1\right) , \left( 2;1;6;-1\right) ,\left( 2;1;7;-1\right) ,\left( 2;2;0;1\right) ,\left( 2;2;2;2\right) ,\left( 2;2;4;-2\right) , \\ \hphantom{F\Psi\big( \overset{\circ }{\xi _{2}}\big) :=\big\{}{} \left( 2;2;6;-2\right) ,\left( 2;2;8;-2\right) ,\ldots \big\}, \\ F\Psi\big( \overset{\circ }{\xi 
_{3}}\big) :=\big\{ \left( 0,0;2;-1\right) ,\left( 0;0;6;1\right) ,\left( 1;1;1;1\right) ,\left( 1;1;4;2\right) , \left( 1;1;6;-2\right) ,\left( 1;1;7;-2\right) ,\\ \hphantom{F\Psi\big( \overset{\circ }{\xi _{3}}\big) :=\big\{}{} \left( 1;2;0;-1\right) ,\left( 1;2;1;1\right) ,\left( 1;2;4;-1\right) ,\left( 1;1;5;-1\right) ,\left( 1;2;6;1\right) ,\left( 1;2;8;1\right) ,\\ \hphantom{F\Psi\big( \overset{\circ }{\xi _{3}}\big) :=\big\{}{} \left( 1;2;9;2\right) , \left( 2;1;1;-1\right) , \left( 2;1;2;-1\right) ,\left( 2;1;3;2\right) ,\left( 2;1;5;1\right) , \left(2;1;6;-1\right) ,\\ \hphantom{F\Psi\big( \overset{\circ }{\xi _{3}}\big) :=\big\{}{} \left( 2;1;8;-1\right) , \left( 2;2;0;1\right) ,\left( 2;2;2;-1\right) ,\left( 2;2;8;1\right) ,\ldots \big\}. \end{gather*} The fan $F\Psi\big( \overset{\circ }{\xi _{4}}\big) $ is equal to $\big\{ F\Psi\big( \overset{\circ }{\xi _{3}}\big) |\left( 1;2;n;m\right) \rightleftharpoons \left( 2;1;n;m\right) \big\} $ \begin{gather*} F\Psi\big( \overset{\circ }{\xi _{5}}\big) :=\big\{ \left( 0,0;4;1\right) ,\left( 0;0;8;-2\right) ,\left( 1;1;1;1\right) ,\left( 1;1;2;-2\right) , \\ \hphantom{F\Psi\big( \overset{\circ }{\xi _{5}}\big) :=\big\{}{} \left( 1;1;3;-2\right) ,\left( 1;1;4;2\right) ,\left( 1;1;5;1\right) ,\left( 1;1;7;-1\right) ,\left( 1;1;8;2\right) , \\ \hphantom{F\Psi\big( \overset{\circ }{\xi _{5}}\big) :=\big\{}{} \left( 1;2;1;1\right) ,\left( 1;2;2;-1\right) ,\left( 1;2;6;-1\right) ,\left( 1;2;7;1\right) ,\left( 1;2;9;1\right) , \\ \hphantom{F\Psi\big( \overset{\circ }{\xi _{5}}\big) :=\big\{}{} \left( 2;1;1;1\right) ,\left( 2;1;2;-1\right) ,\left( 2;1;6;-1\right) ,\left( 2;1;7;1\right) ,\left( 2;1;9;1\right) , \\ \hphantom{F\Psi\big( \overset{\circ }{\xi _{5}}\big) :=\big\{}{}\left( 2;2;0;-1\right) ,\left( 2;2;2;2\right) ,\left( 2;2;6;-2\right) ,\left( 2;2;8;-1\right) ,\ldots \big\}. 
\end{gather*} Their multiplicities (for $n=0,\ldots ,9$) \begin{gather*} \left\{ \eta _{1,1}\left( -9+n\right) \right\} =\left\{ -1,0,0,0,0,0,0,0,2,0\right\} , \\ \left\{ \eta _{1,2}\left( -9+n\right) \right\} =\left\{ 2,1,0,-1,-2,2,-2,-1,2,0\right\} , \\ \left\{ \eta _{1,3}\left( -9+n\right) \right\} =\left\{ -1,-1,0,1,0,1,0,0,1,0\right\} , \\ \left\{ \eta _{1,4}\left( -9+n\right) \right\} =\left\{ -1,-1,0,1,0,1,0,0,1,0\right\} , \\ \left\{ \eta _{1,5}\left( -9+n\right) \right\} =\left\{ 1,0,2,0,-2,0,-2,0,-2,0\right\} , \\ \left\{ \eta _{2,1}\left( -9+n\right) \right\} =\left\{ 0,1,0,0,0,-1,0,0,0,0\right\} , \\ \left\{ \eta _{2,2}\left( -9+n\right) \right\} =\left\{ -1,0,-1,0,2,-2,0,0,2,2\right\} , \\ \left\{ \eta _{2,3}\left( -9+n\right) \right\} =\left\{ 1,-1,1,0,1,-1,-1,-1,0,0\right\} , \\ \left\{ \eta _{2,4}\left( -9+n\right) \right\} =\left\{ 1,-1,1,0,1,-1,-1,-1,0,0\right\} , \\ \left\{ \eta _{2,5}\left( -9+n\right) \right\} =\left\{ 1,0,2,0,-2,0,-2,0,-2,0\right\} , \\ \left\{ \eta _{3,1}\left( -9+n\right) \right\}=\left\{ 0,0,-1,0,0,0,1,0,0,0\right\} , \\ \left\{ \eta _{3,2}\left( -9+n\right) \right\} =\left\{ 0,1,0,0,2,0,-2,-2,0,0\right\} , \\ \left\{ \eta _{3,3}\left( -9+n\right) \right\} =\left\{ -1,1,0,0,-1,-1,1,0,1,2\right\} , \\ \left\{ \eta _{3,4}\left( -9+n\right) \right\} =\left\{ 0,-1,-1,2,0,1,-1,0,-1,0\right\} , \\ \left\{ \eta _{3,5}\left( -9+n\right) \right\} =\left\{ 1,0,-1,0,0,0,0,0,1,0\right\} , \\ \left\{ \eta _{4,1}\left( -9+n\right) \right\} =\left\{ 0,0,-1,0,0,0,1,0,0,0\right\} , \\ \left\{ \eta _{4,2}\left( -9+n\right) \right\} =\left\{ 0,1,0,0,2,0,-2,-2,0,0\right\} , \\ \left\{ \eta _{4,3}\left( -9+n\right) \right\} =\left\{ 0,-1,-1,2,0,1,-1,0,-1,0\right\} , \\ \left\{ \eta _{4,4}\left( -9+n\right) \right\} =\left\{ -1,1,0,0,-1,-1,1,0,1,2\right\} , \\ \left\{ \eta _{4,5}\left( -9+n\right) \right\} =\left\{ 1,0,-1,0,0,0,0,0,1,0\right\} , \\ \left\{ \eta _{5,1}\left( -9+n\right) \right\} =\left\{ 0,0,0,0,1,0,0,0,-2,0\right\} , \\ \left\{ 
\eta _{5,2}\left( -9+n\right) \right\} =\left\{ 0,1,-2,-2,2,1,0,-1,2,0\right\} , \\ \left\{ \eta _{5,3}\left( -9+n\right) \right\} =\left\{ 0,1,-1,0,0,0,-1,1,0,1\right\} , \\ \left\{ \eta _{5,4}\left( -9+n\right) \right\} =\left\{ 0,1,-1,0,0,0,-1,1,0,1\right\} , \\ \left\{ \eta _{5,5}\left( -9+n\right) \right\} =\left\{ -1,0,2,0,0,0,-2,0,-1,0\right\} . \end{gather*} The matrices $\mathbf{M}_{\left( s,t\right) }^{\Xi 4;{\rm I}}$ for $s,t=1,\ldots ,5 $: \[ \mathbf{M}_{\left( s,t\right) }^{\Xi 4;{\rm I}}:= \begin{array}{cccc} \eta _{s,t}\left( 0\right) & \eta _{s,t}\left( 1\right) & \cdots & \eta _{s,t}\left( 9\right) \\ 0 & \eta _{s,t}\left( 0\right) & \cdots & \eta _{s,t}\left( 8\right) \\ \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & \cdots & \eta _{s,t}\left( 0\right) \end{array} . \] For example, \[ \mathbf{M}_{\left( 2,3\right) }^{\Xi 4;{\rm I}}:= \begin{array}{rrrrrrrrrr} 1 & -1 & 1 & 0 & 1 & -1 & -1 & -1 & 0 & 0 \\ 0 & 1 & -1 & 1 & 0 & 1 & -1 & -1 & -1 & 0 \\ 0 & 0 & 1 & -1 & 1 & 0 & 1 & -1 & -1 & -1 \\ 0 & 0 & 0 & 1 & -1 & 1 & 0 & 1 & -1 & -1 \\ 0 & 0 & 0 & 0 & 1 & -1 & 1 & 0 & 1 & -1 \\ 0 & 0 & 0 & 0 & 0 & 1 & -1 & 1 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & -1 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & -1 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & -1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{array} . \] Matrices $\mathbf{M}_{\left( s,t\right) }^{\Xi 4;{\rm I}}$ form the block-matrix $% \mathbf{M}^{\Xi 4;{\rm I}}\mathbf{=}\left\| \mathbf{M}_{\left( s,t\right) }^{\Xi 4;{\rm I}}\right\| _{s,t=1,\ldots ,5}$. With this matrix we can describe f\/ive modules of the level 4 with the highest weights $\mu _{s}\in \Xi _{4;{\rm I}}=\left\{ \left( 0,0;4;0\right) ,\left( 1,1;4;0\right),\right.$ $\left. \left( 1,2;4;0\right) ,\left( 2,1;4;0\right) ,\left( 2,2;4;0\right) \right\} $ . 
We construct f\/ive sets of string functions $\sigma _{\left( t;-9\right) }^{\left( \mu _{s}\right) }$ in terms of their coef\/f\/icients obtained as ten dimensional subsections of the vector $\mathbf{m}_{\left( -9\right) }^{\left( \mu _{s}\right) }$: \[ \mathbf{m}_{\left( -9\right) }^{\left( \mu _{s}\right) }=\left( \mathbf{M}% ^{\Xi 4;{\rm I}}\right) ^{-1} \mathbf{\delta }_{\left( -9\right) }^{\left( \mu _{s}\right) }. \] The answer is as follows: \begin{gather*} \sigma _{\left( 1;-9\right) }^{\left( 0,0;4;0\right) } =1+2q+8q^{2}+24q^{3}+72q^{4}+190q^{5}+490q^{6} \\ \hphantom{\sigma _{\left( 1;-9\right) }^{\left( 0,0;4;0\right) }=}{} +1176q^{7}+2729q^{8}+6048q^{9}+\cdots , \\ \sigma _{\left( 2;-9\right) }^{\left( 0,0;4;0\right) } = q+4q^{2}+15q^{3}+48q^{4}+138q^{5}+366q^{6} +913q^{7}+2156q^{8}+4874q^{9}+\cdots , \\ \sigma _{\left( 3;-9\right) }^{\left( 0,0;4;0\right) } = q^{2}+6q^{3}+23q^{4}+74q^{5}+2121q^{6}+556q^{7} +1366q^{8}+3184q^{9}+\cdots , \\ \sigma _{\left( 4;-9\right) }^{\left( 0,0;4;0\right) } = q^{2}+6q^{3}+23q^{4}+74q^{5}+2121q^{6}+556q^{7} +1366q^{8}+3184q^{9}+\cdots , \\ \sigma _{\left( 5;-9\right) }^{\left( 0,0;4;0\right) } = q^{2}+4q^{3}+18q^{4}+56q^{5}+167q^{6}+440q^{7} +1103q^{8}+2588q^{9}+\cdots , \\ \sigma _{\left( 1;-9\right) }^{\left( 1,1;4;0\right) } = 2+10q+40q^{2}+133q^{3}+398q^{4}+1084q^{5}+2760q^{6} \\ \hphantom{\sigma _{\left( 1;-9\right) }^{\left( 1,1;4;0\right) }=}{} +6632q^{7}+15214q^{8}+33508q^{9}+\cdots , \\ \sigma _{\left( 2;-9\right) }^{\left( 1,1;4;0\right) } = 1+6q+27q^{2}+96q^{3}+298q^{4}+836q^{5}+2173q^{6} \\ \hphantom{\sigma _{\left( 2;-9\right) }^{\left( 1,1;4;0\right) }=}{} +5310q^{7}+12341q^{8}+27486q^{9}+\cdots , \\ \sigma _{\left( 3;-9\right) }^{\left( 1,1;4;0\right) } = 2q^{2}+12q^{3}+49q^{4}+166q^{5}+494q^{6}+1340q^{7} +3387q^{8}+8086q^{9}+\cdots , \\ \sigma _{\left( 4;-9\right) }^{\left( 1,1;4;0\right) } = 2q^{2}+12q^{3}+49q^{4}+166q^{5}+494q^{6}+1340q^{7} +3387q^{8}+8086q^{9}+\cdots , \\ \sigma _{\left( 
5;-9\right) }^{\left( 1,1;4;0\right) } = q+8q^{2}+35q^{3}+124q^{4}+379q^{5}+1052q^{6}+2700q^{7} +6536q^{8}+15047q^{9}+\cdots , \\ \sigma _{\left( 1;-9\right) }^{\left( 1,2;4;0\right) } = 1+8q+32q^{2}+110q^{3}+322q^{4}+872q^{5}+2183q^{6} \\ \hphantom{\sigma _{\left( 1;-9\right) }^{\left( 1,2;4;0\right) }=}{} +5186q^{7}+11730q^{8}+25552q^{9}+\cdots , \\ \sigma _{\left( 2;-9\right) }^{\left( 1,2;4;0\right) } = 1+6q+25q^{2}+85q^{3}+255q^{4}+695q^{5}+1764q^{6} \\ \hphantom{\sigma _{\left( 2;-9\right) }^{\left( 1,2;4;0\right) }=}{} +4226q^{7}+9653q^{8}+21179q^{9}+\cdots , \\ \sigma _{\left( 3;-9\right) }^{\left( 1,2;4;0\right) } = 1+4q+16q^{2}+54q^{3}+163q^{4}+450q^{5}+1161q^{6}+2824q^{7} \\ \hphantom{\sigma _{\left( 3;-9\right) }^{\left( 1,2;4;0\right) }=}{} +6549q^{8}+14572q^{9}+\cdots , \\ \sigma _{\left( 4;-9\right) }^{\left( 1,2;4;0\right) } = 2q+11q^{2}+44q^{3}+143q^{4}+414q^{5}+1096q^{6}+2714q^{7} \\ \hphantom{\sigma _{\left( 4;-9\right) }^{\left( 1,2;4;0\right) }=}{} +6364q^{8}+14272q^{9}+\cdots , \\ \sigma _{\left( 5;-9\right) }^{\left( 1,2;4;0\right) } = 2q+9q^{2}+36q^{3}+115q^{4}+336q^{5}+890q^{6}+2224q^{7} +5241q^{8}+11840q^{9}+\cdots . 
\end{gather*} The next set of string functions $\sigma _{\left( s;-9\right) }^{\left( 2,1;4;0\right) }$ coincides with the previous one where the third and the fourth strings are interchanged: $\sigma _{\left( 3;-9\right) }^{\left( 2,1;4;0\right) }=\sigma _{\left( 4;-9\right) }^{\left( 1,2;4;0\right) },\sigma _{\left( 4;-9\right) }^{\left( 2,1;4;0\right) }=\sigma _{\left( 3;-9\right) }^{\left( 1,2;4;0\right) }.$ The last set describes the module $L^{\mu _{5}}$ where $\mu _{5}$ is the highest weight in $\Xi _{4;{\rm I}}$: \begin{gather*} \sigma _{\left( 1;-9\right) }^{\left( 2,2;4;0\right) } = 3+14q+58q^{2}+184q^{3}+536q^{4}+1408q^{5}+3492q^{6} \\ \hphantom{\sigma _{\left( 1;-9\right) }^{\left( 2,2;4;0\right) }=}{} +8160q^{7}+18299q^{8}+39428q^{9}+\cdots , \\ \sigma _{\left( 2;-9\right) }^{\left( 2,2;4;0\right) } = 2+11q+44q^{2}+145q^{3}+424q^{4}+1133q^{5}+2830q^{6} \\ \hphantom{\sigma _{\left( 2;-9\right) }^{\left( 2,2;4;0\right) }=}{} +6688q^{7}+15102q^{8}+32805q^{9}+\cdots , \\ \sigma _{\left( 3;-9\right) }^{\left( 2,2;4;0\right) } = 1+6q+25q^{2}+86q^{3}+260q^{4}+716q^{5}+1833q^{6}+4426q^{7} \\ \hphantom{\sigma _{\left( 3;-9\right) }^{\left( 2,2;4;0\right) }=}{} +10183q^{8}+22488q^{9}+\cdots , \\ \sigma _{\left( 4;-9\right) }^{\left( 2,2;4;0\right) } = 1+6q+25q^{2}+86q^{3}+260q^{4}+716q^{5}+1833q^{6}+4426q^{7} \\ \hphantom{\sigma _{\left( 4;-9\right) }^{\left( 2,2;4;0\right) }=}{} +10183q^{8}+22488q^{9}+\cdots , \\ \sigma _{\left( 5;-9\right) }^{\left( 2,2;4;0\right) } = 1+4q+19q^{2}+64q^{3}+202q^{4}+560q^{5}+1464q^{6}+3568q^{7} \\ \hphantom{\sigma _{\left( 5;-9\right) }^{\left( 2,2;4;0\right) }=}{} +8315q^{8}+18512q^{9}+\cdots . \end{gather*} Notice that in the congruence class $\Xi _{4;{\rm I}}$ we have only 17 dif\/ferent string functions. 
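Written out grade by grade, the matrix equation used in these examples is a convolution of the fan multiplicities with the string coef\/f\/icients. Putting $\sigma _{t}=\sum_{n\geq 0}c_{t}\left( n\right) q^{n}$, the relation $\mathbf{M}\,\mathbf{m}=\mathbf{\delta }$ reads (under our reading of the grading and sign conventions, which we have checked against the series for $\Xi _{\rm 2;I}$)
\[
\sum_{t}\sum_{d=0}^{n}\eta _{s,t}\left( d\right) c_{t}\left( n-d\right) =-\delta _{s,s_{\mu }}\delta _{n,0},
\]
where $s_{\mu }$ labels the position of the highest weight $\mu $ in the congruence class. Since the matrix $\left[ \eta _{s,t}\left( 0\right) \right] $ is invertible in all the cases above, the coef\/f\/icients $c_{t}\left( n\right) $ are determined recursively, starting from $c_{t}\left( 0\right) $.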
\section{Conclusions}\label{section6} The folded fans $F\Psi\big( \overset{\circ }{\xi _{j}}\big) $ (for a f\/ixed level $k$ and the congruence class $\Xi _{k;v}$ of weights in $ \overline{C_{k}^{\left( 0\right) }}$) were constructed by transporting to the fundamental Weyl chamber the standard set $\widehat{\Psi ^{\left( 0\right) }}$~-- the set of singular weights of module $L^{0}$ supplied with the anomalous multiplicities. We found that the shifts $f\psi \big( \overset{\circ }{\xi }\big) \in F\Psi \big( \overset{\circ }{\xi }\big) $ (connecting $\xi _{j}\in \Xi _{k;v}$) together with their multiplicities~$\eta _{j,s}$ describe the recursive properties of the weights of modules $L^{\xi _{j}}$ with the highest weights~$\xi _{j}$. Thus the set $\big\{ F\Psi\big( \overset{\circ }{\xi _{j}}\big) |\,\xi _{j}\in \Xi _{k;v}\big\} $ describes the recursive properties of the string functions $\big\{ \sigma _{j}^{\mu ,k}\,|\,\mu ,\xi _{j}\in \Xi _{k;v}\big\} $. When for a f\/ixed module $L^{\mu }$ these properties are simultaneously considered for $\big\{ \sigma _{j}^{\mu ,k}\,|\,\mu ,\xi _{j}\in \Xi _{k;v}\big\} $ they can be written in the form of the equation $\mathbf{M}^{\Xi ,k;v} \mathbf{m}_{\left( u\right) }^{\left( \mu \right) }=\mathbf{\delta }_{\left( u\right) }^{\mu }$. In this equation $\mathbf{M}^{\Xi ,k;v}$ is a matrix formed by the multiplicities $\eta _{j,s}$ of the fan shifts, $\mathbf{\delta }_{\left( u\right) }^{\mu }$ indicates which weight in the set $ \Xi _{k;v}$ is chosen to be the highest weight $\mu$ of the module, and $\mathbf{m}_{\left( u\right) }^{\left( \mu \right) }$ is the vector of string function coef\/f\/icients.
Since $\mathbf{M}^{\Xi ,k;v}$ is invertible, the solution $\mathbf{m}_{\left( u\right) }^{\left( \mu \right) }=\left( \mathbf{M}^{\Xi ,k;v}\right) ^{-1} \mathbf{\delta }_{\left( u\right) }^{\mu }$ can be written explicitly and the full set of string functions $\big\{ \sigma _{j}^{\mu ,k}\,|\, \mu ,\xi _{j}\in \Xi _{k;v}\big\} $ for $L^{\mu }$ is determined by this linear equation (at least for any common f\/inite ``length'' of all the strings). There are two points that we want to stress. The f\/irst is that in this algorithm the singular vectors $\psi \in \widehat{\Psi ^{\left( \mu \right) }}$ of $L^{\mu }$ are not needed (except the highest weight $\mu $). The second point is that the cross-sections $F\Psi\big( \overset{\circ }{\xi _{j}}\big) \cap \overline{C_{k,0}^{\left( 0\right) }}$ form the parts of the classical folded fans for $\overset{\circ }{\frak{g}}$. It can be easily verif\/ied that the string starting vectors $\big\{ \sigma _{j}^{\mu ,k}\,|\, \mu ,\xi _{j}\in \Xi _{k;v};\ n=0\big\} $ and their multiplicities present the diagram $\mathcal{N}^{\overset{\circ }{\mu }}\cap \overline{C_{k}^{\left( 0\right) }}$ of the module $L^{\overset{\circ }{\mu }}\big( \overset{\circ }{\frak{g}}\big) $. In general the cross-sections $F\Psi\big( \overset{\circ }{\xi _{j}}\big) \cap \overline{C^{\left( 0\right) }\big( \overset{\circ }{\frak{g}}\big) }$ do not coincide with the classical folded fans because the chambers $\overline{C^{\left( 0\right) }\big( \overset{\circ }{\frak{g}}\big) }$ are inf\/inite (contrary to~$\overline{C_{k,0}^{\left( 0\right) }}$ for any f\/inite $k$). As demonstrated in the examples, the folded fans provide an ef\/fective tool for studying the string functions of integrable highest weight modules of af\/f\/ine Lie algebras.

\subsection*{Acknowledgements} The authors appreciate helpful remarks made by the Referees. The work was supported in part by RFBR grants N 06-01-00451, N 08-01-00638 and the National Project RNP.2.1.1.1112.
\newpage \pdfbookmark[1]{References}{ref}
\section{Introduction} Dark surface spots are among the most prominent manifestations of solar and stellar activity. While they typically cover less than one percent of the solar surface, their coverage fractions on highly active stars can reach tens of percent \citep[e.g.,][]{ONeal1996}. Therefore, the surfaces of active stars are not always homogeneously bright. Starspots are ultimately caused by the stellar magnetic field, which is sustained by a stellar dynamo. Although stellar surfaces mostly remain spatially unresolved, the photometric and spectral variability induced by their rotation can be used to study the photospheres of active stars with techniques such as Doppler imaging and light curve inversion \citep[e.g.,][]{Rodono1986, Piskunov1990, Karoff2013}.

While ground-based photometric campaigns have long been used to study stellar activity \citep[e.g.,][]{Jarvinen2005, Olah2006}, the recent advent of the space-based observatories CoRoT and \textit{Kepler} \citep{Baglin2006, Jenkins2010} offers photometric data of unprecedented temporal cadence, continuity, and accuracy. Although the main objectives of both missions are the search for extrasolar planets and asteroseismological studies, the data are also extremely interesting in the context of stellar activity.

CoRoT-2A is among the most active planet host stars known to date. The star shows strong \ion{Ca}{ii} H and K emission line cores and X-ray emission as well as \ion{Li}{i} absorption \citep{Bouchy2008}, suggesting an age of $\sim300$\,Myr \citep{Schroeter2011}. The broadband light curve of the CoRoT-2 system is among the most remarkable discoveries made by the CoRoT mission. In addition to deep transits caused by a bloated hot Jupiter \citep{Guillot2011}, the light curve shows photometric variability on at least two distinct timescales \citep[e.g.,][]{Alonso2008, Lanza2009, Czesla2009, Huber2010}. First, there is clear evidence for rotational variability with a period of $\sim 4.5$ days.
Second, the amplitude of the rotation-induced pattern is itself variable, showing modulation reminiscent of a ``beating pattern'' with a period of roughly 50 days. The general pattern remained stable for at least 140\,d, i.e., the duration of the CoRoT observation. In their analyses of the spot configuration, \citet{Lanza2009} and \citet{Huber2010} reconstructed two active longitudes on opposing hemispheres. The prominent photometric beating pattern is related to an alternation in the strength of these active longitudes, in combination with differential rotation.

A similar photometric behavior, albeit on a much longer timescale, has been observed in the active late-type giant \mbox{\object{FK Comae Berenices}} \citep[e.g.,][]{Jetsu1993}. Here, the pattern has also been attributed to a pair of opposing active longitudes of alternating strengths. A change of the center of activity, i.e., of the location of maximum starspot coverage, to the opposing hemisphere is known as a ``flip-flop'' event \citep{Jetsu1993, Olah2006, Hackman2013}. The discovery of such flip-flops in \mbox{\object{FK Com}} was later supplemented by observations of flip-flops in nearly a dozen additional objects such as \mbox{RS CVn}-type stars \citep[e.g.,][]{Berdyugina1998} and young solar analogs \citep{Berdyugina2005}. These observations have inspired a number of theoretical works on the magnetic field configurations capable of reproducing the observed behavior \citep[for an overview see][]{Berdyugina2006}.
\begin{table*} \caption{Parameters of the observed stars provided by the \textit{Exo-Dat} information system.} \label{table:1} \centering \begin{tabular}{ccccccccc} \hline\hline CoRoT-ID & Right Ascension & Declination & B & V & R & I & Spectral & Luminosity \\ & (J2000.0) & (J2000.0) & & & & & type & class \\ \hline 102577568 & 06\,40\,48.3 & $-01\,05\,15.5$ & $12.60$ & $11.81$ & $11.56$ & $11.12$ & G8 & V \\ 102601465 & 06\,41\,28.7 & $+01\,03\,31.3$ & $14.09$ & $13.31$ & $12.93$ & $12.51$ & G8 & V \\ 102606401 & 06\,41\,35.3 & $-01\,27\,26.1$ & $12.66$ & $12.03$ & $11.82$ & $11.37$ & F0 & V \\ 102656730 & 06\,42\,44.8 & $-00\,28\,24.6$ & $13.65$ & $12.91$ & $12.60$ & $12.26$ & G5 & V \\ 102743567 & 06\,44\,37.9 & $-00\,44\,13.4$ & $12.38$ & $11.99$ & $11.85$ & $11.62$ & F0 & V \\ 102763571 & 06\,45\,04.9 & $+00\,59\,09.3$ & $13.94$ & $13.08$ & $12.69$ & $12.21$ & K2 & V \\ 102778303 & 06\,45\,24.7 & $-00\,22\,41.3$ & $13.78$ & $12.64$ & $12.07$ & $11.45$ & K1 & III\\ 102791435 & 06\,45\,42.4 & $-00\,17\,39.0$ & $13.80$ & $12.84$ & $12.39$ & $11.94$ & K0 & V \\ \hline \end{tabular} \end{table*}

In this paper, we present the analysis of the CoRoT photometry and follow-up high-resolution spectra of a sample of eight CoRoT targets. The sample stars were selected for their photometric variability, which shows a variation pattern similar to that of CoRoT-2A. Our main observational objective is to find common spectral characteristics and to establish a link between the structure of the light curves and the stellar parameters. In particular, we are interested in the nature of the observed activity pattern and whether this pattern is powered by differential rotation. We also provide estimates of stellar age and distance in an attempt to thoroughly characterize a larger number of stars with well-observed beating behavior, thereby enlarging the database on which a theory of the underlying dynamo processes can be built. Our paper is organized as follows.
In Sect.~\ref{section:target} we describe the sample selection and the photometric/spectroscopic data. The analysis of the data is presented in Sects.~\ref{section:analysis} and~\ref{section:spectral}. We discuss our findings in Sect.~\ref{section:discussion} and, finally, present our conclusions in Sect.~\ref{section:conclusions}.

\section{Observations} \label{section:target} \subsection{CoRoT photometry and target selection} CoRoT was a space-based 30\,cm telescope dedicated to stellar photometry \citep{Auvergne2009}. The satellite observed a few thousand stars simultaneously with a temporal cadence of either 32\,s or 512\,s. The CoRoT mission was divided into several runs, i.e., continuous observing periods pointed at a specific section of the sky. CoRoT provided simultaneous three-color photometry in a red, a green, and a blue channel \citep{Auvergne2009}. Although the exact passbands of these channels remain essentially unknown and are expected to vary between individual targets and observations, they can still be used to test the plausibility of physical assumptions. The sum of the signals registered in the individual color channels is known as the ``white'' light curve.

We checked the light curves of about 11\,000 stars situated in CoRoT's first two Long Run fields (LRc01 and LRa01) for variability and manually grouped them into variability classes. In the course of this analysis, we discovered about 40 objects with photometric characteristics similar to those of CoRoT-2A. The light curves of these objects show a dominant modulation with a period between 2 and 11 days, which is most likely caused by rotation. Additionally, a beating pattern with $P_{\mathrm{beat}} \gg P_{\mathrm{rot}}$ is found. On the rotational and beating timescales, the photometric variability is typically a few percent. Eight of the brightest targets were selected for spectroscopic follow-up observations.
These stars are listed in Table~\ref{table:1}, along with their available observational stellar parameters from the \textit{Exo-Dat} database\footnote{The values in Table~\ref{table:1} are rounded. The exact numbers including uncertainties can be found at \url{http://cesam.oamp.fr/exodat/}} \citep{Deleuil2009}. This database contains photometric information from several catalog references. To provide uniform photometry, we give B, V, R, and I magnitudes from the \mbox{OBSCAT} catalog. The spectral type and luminosity class were derived using spectral energy distribution (SED) analysis \citep{Deleuil2009}. However, such assigned spectral types and luminosity classes have to be used with some caution.

\begin{table}[!t] \caption{CoRoT light curves of sample targets. \label{tab:CoRoTObs}} \begin{tabular}{c c c c c} \hline\hline CoRoT-ID & LRa01 & Sampling & LRa06 & Sampling \\ & & [s] & & [s] \\ \hline 102577568 & \checkmark & 512 & & \\ 102601465 & \checkmark & 512/32\tablefootmark{a} & & \\ 102606401 & \checkmark & 32 & & \\ 102656730 & \checkmark & 512/32\tablefootmark{a} & \checkmark & 32 \\ 102743567 & \checkmark & 512 & \checkmark & 32 \\ 102763571 & \checkmark & 512 & & \\ 102778303 & \checkmark & 512 & \checkmark & 32 \\ 102791435 & \checkmark & 512 & \checkmark & 32 \\ \hline \end{tabular} \tablefoot{\tablefoottext{a}{Sampling changed during run}} \end{table}

The light curves of our sample stars were observed as part of the first Long Run (LRa01), which lasted for about 131\,d starting in October 2007. Later, a subsample of four targets was reobserved during the sixth Long Run (LRa06) with a duration of 77\,d, which began in January 2012; the observations are summarized in Table~\ref{tab:CoRoTObs}. We used the N2-level pipeline data available from the CoRoT Data Center\footnote{\url{http://idoc-corot.ias.u-psud.fr/}}.
\begin{figure*} \begin{center} \includegraphics[width=0.49\textwidth]{102577568_2007.pdf} \includegraphics[width=0.49\textwidth]{102601465_2007.pdf} \includegraphics[width=0.49\textwidth]{102606401_2007.pdf} \includegraphics[width=0.49\textwidth]{102763571_2007.pdf} \caption{\label{figure:lc1} Light curves and periodograms of \object{CoRoT 102577568}, \object{CoRoT 102601465}, \object{CoRoT 102606401}, and \object{CoRoT 102763571} (LRa01). The origin of the CoRoT Julian day is 1 January 2000 12:00.00.} \end{center} \end{figure*}

\begin{figure*} \begin{center} \includegraphics[width=0.49\textwidth]{102743567_2007.pdf} \includegraphics[width=0.49\textwidth]{102743567_2012.pdf} \includegraphics[width=0.49\textwidth]{102778303_2007.pdf} \includegraphics[width=0.49\textwidth]{102778303_2012.pdf} \caption{\label{figure:lc2} Light curves and periodograms of \object{CoRoT 102743567} and \object{CoRoT 102778303} (left: LRa01, right: LRa06).} \end{center} \end{figure*}

\begin{figure*} \begin{center} \includegraphics[width=0.49\textwidth]{102656730_2007.pdf} \includegraphics[width=0.49\textwidth]{102656730_2012.pdf} \includegraphics[width=0.49\textwidth]{102791435_2007.pdf} \includegraphics[width=0.49\textwidth]{102791435_2012.pdf} \caption{\label{figure:lc3} Light curves and periodograms of \object{CoRoT 102656730} and \object{CoRoT 102791435} (left: LRa01, right: LRa06).} \end{center} \end{figure*}

After acquiring the data, we first removed all data points marked as ``bad quality'' by the CoRoT pipeline. Such points are produced, for example, by the impact of high-energy particles during CoRoT's passages through the South Atlantic Anomaly. Second, we normalized the light curves by dividing by the mean flux. In a final step, we removed instrumental long-term trends by subtracting a linear model. The resulting white light curves are shown in the upper panels of \mbox{Figs.~\ref{figure:lc1} -- \ref{figure:lc3}}.
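The three cleaning steps can be condensed into a short numerical sketch. This is our own illustration on synthetic data, not the actual reduction code; the flag handling, array layout, and function name are assumptions.

```python
import numpy as np

def preprocess(time, flux, bad):
    """Clean, normalize, and detrend a light curve.

    time, flux : arrays of observation times and raw fluxes
    bad        : boolean mask flagging 'bad quality' points
    """
    # 1) Discard points flagged as bad quality (e.g., SAA passages).
    t, f = time[~bad], flux[~bad]
    # 2) Normalize by the mean flux.
    f = f / np.mean(f)
    # 3) Remove instrumental long-term trends with a linear model.
    slope, intercept = np.polyfit(t, f, 1)
    f = f - (slope * t + intercept) + 1.0
    return t, f

# Synthetic example: linear instrumental trend plus rotational modulation.
t = np.linspace(0.0, 131.0, 5000)
raw = 1000.0 * (1.0 + 0.002 * t + 0.02 * np.sin(2 * np.pi * t / 4.5))
bad = np.zeros_like(t, dtype=bool)
bad[::500] = True  # pretend some points are flagged
tc, fc = preprocess(t, raw, bad)
```

After these steps the cleaned flux `fc` scatters around unity, with only the rotational modulation remaining.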
For the purpose of visualization, we rebinned the light curves except that of CoRoT 102743567, which shows a particularly fast pattern of variability. In particular, we binned by a factor of 8 for light curves with a 512\,s sampling rate and by a factor of 16 for those with a 32\,s sampling. For all our light curves we estimated the photometric error, $\sigma$. To this end, we divided the light curves into consecutive 0.5\,d long chunks, fitted them using a second-order polynomial, and determined the standard deviation of the residuals. The mean standard deviation determined in the chunks was used as an error estimate on individual data points. The photometric error was used as an input parameter when we utilized the generalized Lomb-Scargle periodogram, which takes the photometric error of the light curve into account (see Sect.~\ref{subsection_period}). \subsection{High-resolution spectroscopy} The spectroscopic observations of our target stars were carried out between 8 and 12 December, 2011, with the SARG echelle spectrograph, mounted on the 3.58\,m ``Telescopio Nazionale Galileo'' (TNG) on La Palma. We used the yellow cross-disperser, which provides wavelength coverage in the $5200-7800\,\AA$ range with a gap of $150\,\AA$ at around $6200\,\AA$ caused by the separation of the CCD detectors. Our setup yields a spectral resolution of $R\sim 57\,000$. During our observations, weather conditions were unstable with periods of excellent conditions interrupted by rather cloudy phases. Therefore, seeing varied between $0.6\,''$ and $5\,''$. As part of our observational campaign, we also obtained a solar spectrum by observing the light reflected from the asteroid 15\,Eunomia, which is required for a differential abundance analysis. The data reduction was carried out using the REDUCE package developed by \citet{Piskunov2002}. In particular, we performed a bias correction, order definition, extraction of the blaze function, and flat fielding. 
The treatment of scattered light and hot pixels is described by \citet{Piskunov2002}. The wavelength calibration is based on ThAr-lamp reference frames and was carried out using the WAVECAL extension of REDUCE. Finally, a barycentric velocity correction was applied. The majority of the targets was observed several times with individual exposure times between 1800\,s and 3600\,s. To improve the signal-to-noise ratio (S/N), we averaged all spectra taken for an individual target. For every spectrum we then computed a S/N in the $6578-6580\,\AA$ interval, which contains no strong spectral features. The resulting values are listed in Table~\ref{table:snr} along with the total integration time. Here, the S/N refers to the averaged spectrum and although it varies across individual echelle orders and across the entire spectrum, these numbers broadly characterize the data quality. Finally, we manually continuum-normalized the spectra by dividing the flux by linear functions, which were adjusted to have the same gradient as the continuum. \begin{table} \caption{Spectroscopic data obtained with SARG.} \label{table:snr} \centering \begin{tabular}{ccc} \hline\hline CoRoT-ID & S/N & Total exposure time\\ & & [s] \\ \hline 102577568 & 43 & 1800 \\ 102601465 & 37 & 3600 \\ 102606401 & 69 & 1800 \\ 102656730 & 55 & 12600\\ 102743567 & 35 & 9900 \\ 102763571 & 44 & 1800 \\ 102778303 & 46 & 1800 \\ 102791435 & 60 & 2700 \\ \hline \end{tabular} \end{table} \section{CoRoT light curves analysis} \label{section:analysis} In the upper panels of Figs.~\ref{figure:lc1} -- \ref{figure:lc3}, we show the white-band \co\ light curves of our target stars. All light curves show pronounced variability with peak-to-peak amplitudes of several percent (see Table~\ref{table:period}), which we attribute to a rotating and temporally evolving starspot pattern. 
\subsection{Starspot induced color changes} To assess the plausibility of the starspot hypothesis as an explanation for the observed variability, we examined the individual \co\ color channels and their relation. In analogy to sunspots, we postulate that the temperature of the starspots is lower than that of the remaining stellar photosphere \citep[e.g.,][]{Strassmeier2009}. Once a spot appears on the visible hemisphere, the flux in all three color channels decreases. However, because the spot is cooler, the stellar spectrum appears redder. Therefore, the decrease in the blue channel flux is expected to be stronger than in the red channel. The opposite is the case when a spot rotates off the visible hemisphere. In Fig.~\ref{figure:spot} we show an excerpt of the white-band light curve of CoRoT 102577568 along with the ratio of the fluxes recorded in the red and blue channels. Clearly, both curves are anticorrelated. A decrease in the white light flux is accompanied by a reddening of the star, which is consistent with spot-induced photometric modulation. We verified that all investigated light curves show this behavior. Alternatively, pulsations could also cause a similar signature. Yet, typical photometric pulsation amplitudes on solar-like stars fall far below the observed modulation \citep[$<10^{-5}$,][]{White2011}. Therefore, we conclude that the modulation in the light curves is indeed dominated by starspots with only a marginal contribution from pulsations. \begin{figure} \begin{center} \includegraphics[width=0.5\textwidth]{102577568_spotevidence.pdf} \caption{\label{figure:spot} Excerpt of the light curve of CoRoT 102577568 obtained in the white band (solid line) along with the ratio of the red- and blue-channel light curves (dashed, shifted upward by 0.025 for clarity).} \end{center} \end{figure} \subsection{Comparing LRa01 and LRa06} \label{sec:LRa01VsLRa06} For half of our targets, two light curves observed about 4 years apart are available (see Table~\ref{tab:CoRoTObs}).
A visual comparison of the two light curves allows us to assess the temporal stability of the variability pattern. For CoRoT 102743567 and CoRoT 102778303, both the amplitude and appearance of the light curves remained unchanged. For CoRoT 102656730 this may also be the case, although it seems less clear. During the second, shorter observation, the amplitude is smaller and the pattern appears somewhat more chaotic. However, this is compatible with an observation in the low-amplitude phase of the beating pattern clearly visible during the first Long Run. The situation is similar for CoRoT 102791435, whose light curve appears more erratic, but still periodically variable during the later Long Run. \subsection{Period analysis} \label{subsection_period} Clearly, the light variation of our target stars shows some periodic components. To study the variability in the frequency domain, we applied the generalized Lomb-Scargle periodogram \citep{Zechmeister2009} to all light curves and show the results in the upper subfigures in the lower panels of \mbox{Figs.~\ref{figure:lc1} -- \ref{figure:lc3}}. All periodograms show distinct peaks at periods between about one and twelve days. We attribute these peaks and the associated modulation in the light curve to rotating starspots and, therefore, also identify the associated period with the stellar rotation period. Generally, periodograms obtained from stellar light curves observed in LRa01 and their counterparts observed in LRa06 show similar structures. Peaks are generally less well resolved in the LRa06 periodograms, because the observation is only about half as long as the LRa01 data set. \begin{table*} \caption{Measured peak-to-peak variability amplitude, $A_{\mathrm{pp}}$, and fitted rotation periods, $P_{\mathrm{fit,1}}$ and $P_{\mathrm{fit,2}}$.
Calculated beating period as well as absolute and relative horizontal shear derived from $P_{\mathrm{fit,1}}$ and $P_{\mathrm{fit,2}}$.} \label{table:period} \centering \begin{tabular}{ccrr|rcc} \hline\hline\\[-3.5mm] CoRoT-ID & $A_{\mathrm{pp}}$ [\%] & \multicolumn{1}{r}{$P_{\mathrm{fit,1}}$ [d]} & \multicolumn{1}{r|}{$P_{\mathrm{fit,2}}$ [d]} & \multicolumn{1}{r}{$P_{\mathrm{beat}}$} [d] & $\Delta\Omega_{\mathrm{beat}}$ [rad\,d$^{-1}$] & $\alpha$\\\\[-3.5mm] \hline\\[-3mm] \multicolumn{7}{c}{2007} \\ \hline 102577568 & 3.8 & 5.51$\pm$0.11 & 5.82$\pm$0.09 & 103.4 & 0.061 & 0.053 \\ 102601465 & 4.6 & 6.23$\pm$0.13 & 6.45$\pm$0.10 & 182.7 & 0.034 & 0.034 \\ 102606401 & 3.7 & 3.04$\pm$0.04 & 2.97$\pm$0.03 & 129.0 & 0.049 & 0.023 \\ 102656730 & 5.2 & 5.43$\pm$0.07 & 5.83$\pm$0.07 & 79.1 & 0.079 & 0.069 \\ 102743567 & 7.5 & 0.832$\pm$0.002 & 0.809$\pm$0.002 & 29.3 & 0.215 & 0.028 \\ 102763571 & 5.9 & 5.14$\pm$0.10 & 5.36$\pm$0.08 & 125.2 & 0.05 & 0.041 \\ 102778303 & 4.8 & 6.86$\pm$0.16 & 6.59$\pm$0.12 & 167.4 & 0.038 & 0.039 \\ 102791435 & 3.2 & 10.8$\pm$0.4 & 12.0$\pm$0.6 & 108.0 & 0.058 & 0.100 \\ \hline\\[-3mm] \multicolumn{7}{c}{2012} \\ \hline 102656730 & 2.7 & 5.83$\pm$0.15 & 5.41$\pm$0.13 & 75.1 & 0.084 & 0.072 \\ 102743567 & 7.0 & 0.832$\pm$0.003 & 0.811$\pm$0.004 & 32.1 & 0.196 & 0.025 \\ 102778303 & 4.2 & 6.78$\pm$0.25 & 7.61$\pm$0.38 & 62.2 & 0.101 & 0.109 \\ 102791435 & 1.2 & 11.2$\pm$0.9 & 13.0$\pm$0.9 & 80.9 & 0.078 & 0.138 \\ \hline \end{tabular} \end{table*} The maximum-power periodogram peaks, which we call the primary peaks, clearly show a tendency to be accompanied by at least one nearby secondary peak. A particularly clear example for such a double-peak structure is the periodogram of CoRoT 102656730 in Fig.~\ref{figure:lc3}. We adopted the prewhitening procedure described by \citet{Reinhold2013} to extract closely spaced periods from the periodogram. 
The period associated with the primary peak was used as input for a sine, whose amplitude and phase were fitted to the data. After subtracting the resulting sine from the light curve, we computed the periodogram of the residuals. Following \citet{Reinhold2013}, we examined the period space within 30\,\% of the primary period, mainly to avoid alias periods. The resulting periodograms are shown in the lower subfigures in the lower panels of \mbox{Figs.~\ref{figure:lc1} -- \ref{figure:lc3}}. To determine the periods associated with the main and secondary peaks, we fitted the peaks using a Gaussian profile and used the center as an estimate of the peak location. Additionally, we interpret their FWHM as an error estimate on the location. The resulting periods and errors are listed in Table~\ref{table:period}, and the locations of the primary and secondary peak are indicated by black vertical dots in Figs.~\ref{figure:lc1} -- \ref{figure:lc3}. In two out of the four stars observed twice by CoRoT we found that the primary and secondary peaks in the LRa06 periodograms remained at essentially the same location (CoRoT 102743567 and CoRoT 102656730). Another case is CoRoT 102778303, where the secondary peak was already quite weak in the LRa01 periodogram. The structure is mostly washed out in the LRa06 periodogram. The situation is also different for CoRoT 102791435, where the secondary peak changed in position and power, possibly indicating considerable change in the stellar spot configuration. In the case of CoRoT 102601465, CoRoT 102606401, CoRoT 102763571 (all LRa01), and CoRoT 102791435 (LRa06) the power of the secondary peak is larger than that of the primary peak. We were able to reproduce this behavior in simulated light curves generated by the superposition of sines with a continuum of periods. Nevertheless, we interpret the secondary peak and the associated period in terms of differential rotation, but advise some caution in interpreting the result. 
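The period-extraction scheme described above, a generalized Lomb-Scargle (GLS) periodogram followed by prewhitening with the best-fitting sine, can be sketched compactly with NumPy. The GLS power below follows the floating-mean formulae of \citet{Zechmeister2009}; the sine amplitude, phase, and offset are obtained by linear least squares. The toy light curve with periods of 5 and 6\,d is illustrative only, and the Gaussian fits to the periodogram peaks used in the actual analysis are omitted here.

```python
import numpy as np

def gls_power(t, y, dy, freqs):
    """Generalized (floating-mean) Lomb-Scargle power following
    Zechmeister & Kurster (2009); photometric errors act as weights."""
    w = 1.0 / dy**2
    w = w / np.sum(w)
    Y = np.sum(w * y)
    YY = np.sum(w * y * y) - Y * Y
    power = np.empty_like(freqs)
    for i, f in enumerate(freqs):
        arg = 2.0 * np.pi * f * t
        c, s = np.cos(arg), np.sin(arg)
        C, S = np.sum(w * c), np.sum(w * s)
        YC = np.sum(w * y * c) - Y * C
        YS = np.sum(w * y * s) - Y * S
        CC = np.sum(w * c * c) - C * C
        SS = np.sum(w * s * s) - S * S
        CS = np.sum(w * c * s) - C * S
        power[i] = (SS * YC**2 + CC * YS**2 - 2.0 * CS * YC * YS) \
                   / (YY * (CC * SS - CS**2))
    return power

def prewhiten(t, y, period):
    """Subtract the best-fitting sine of the given period; amplitude,
    phase, and offset follow from linear least squares."""
    arg = 2.0 * np.pi * t / period
    A = np.column_stack([np.sin(arg), np.cos(arg), np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return y - A @ coef + coef[2]  # keep the mean flux level

# toy light curve: two close periods mimicking differential rotation
rng = np.random.default_rng(0)
t = np.arange(0.0, 130.0, 0.05)            # roughly the LRa01 duration
y = (1.0 + 0.02 * np.sin(2.0 * np.pi * t / 5.0)
         + 0.01 * np.sin(2.0 * np.pi * t / 6.0)
         + rng.normal(0.0, 0.002, t.size))
dy = np.full_like(t, 0.002)
freqs = np.linspace(1.0 / 20.0, 1.0 / 2.0, 1500)
p1 = 1.0 / freqs[np.argmax(gls_power(t, y, dy, freqs))]
p2 = 1.0 / freqs[np.argmax(gls_power(t, prewhiten(t, y, p1), dy, freqs))]
```

In this synthetic example, `p1` recovers the dominant 5\,d period and, after prewhitening, `p2` recovers the weaker 6\,d component, in analogy to the primary and secondary periodogram peaks discussed above.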
We note that the light curves of CoRoT 102606401 and CoRoT 102601465 were also included in the study presented by \citet{Affer2012}, who provide period measurements of almost two thousand stars observed by \co. For CoRoT 102606401 and CoRoT 102601465, they found rotational periods of 3.039 and 6.375 days, which are compatible with our results. \subsubsection{Differential rotation} The presence of two similar periods in the periodograms of stellar light curves has been attributed to differentially rotating starspots \citep[e.g.,][]{Reinhold2013}. Differential rotation is also a possible explanation for the marked beating pattern in the light curve of \object{CoRoT-2A} \citep[e.g.,][]{Lanza2009, Huber2010} and, therefore, also for those of our target stars. On the Sun the observed latitude-dependent rotation velocity can be parametrized by \begin{equation} \Omega (\Psi) = \Omega_{\mathrm{eq}} - \Delta \Omega \, \sin^2 \Psi \; = \Omega_{\mathrm{eq}} \left(1 - \alpha \sin^2\Psi \right) \; , \label{eq:DR} \end{equation} where $\Psi$ denotes the latitude, $\Omega_{\mathrm{eq}}$ is the equatorial angular velocity, $\Delta \Omega = \Omega_{\mathrm{eq}} - \Omega_{\mathrm{pole}}$ is the ``absolute horizontal shear'', and $\alpha$ is dubbed the ``relative horizontal shear'' \citep[e.g.,][]{Snodgrass1983, Reinhold2013}. Similar to the situation on the Sun, we may expect that the rotation rates of starspots depend not only on latitude but also on their anchoring depth; indeed, helioseismology reveals that the rotation rate of the Sun depends on both latitude and depth \citep{Schou1998}. We therefore caution that there may not be a unique relation between latitude and starspot rotation rate as suggested by Eq.~\ref{eq:DR}. If there are two measurements of $\Omega$ at two latitudes $\Psi_1$ and $\Psi_2$, $\Delta\Omega$ can be written as \begin{equation} \Delta\Omega = \frac{\Omega(\Psi_1) - \Omega(\Psi_2)}{\sin^2(\Psi_2) - \sin^2(\Psi_1)} \; .
\end{equation} Remaining ignorant of $\Psi_1$ and $\Psi_2$, as is often the case, a lower limit on $|\Delta\Omega|$ may be obtained by assigning $\Psi_1=0^{\circ}$ and $\Psi_2=90^{\circ}$, which is of course a rather unsatisfactory choice. Furthermore, attributing the larger rotational velocity measurement to the equator ($\Psi=0^{\circ}$), we ensure $\Delta\Omega>0$ and thus solar-like differential rotation. As the angular velocity is related to the rotation period $P$ via $\Omega = 2\pi P^{-1}$, a lower limit on the absolute and relative latitudinal shear may be obtained by \begin{equation} \Delta\Omega \geqslant 2\pi\left(\frac{1}{P_1} - \frac{1}{P_2} \right) \;\;\;\mbox{and} \;\;\; \alpha \geqslant \frac{P_2 - P_1}{P_2} \; , \label{eq:domega_alpha} \end{equation} where we demand $P_2 > P_1$ to ensure solar-like differential rotation. \citet{Froehlich2009} used a three-spot model to reproduce the light curve of CoRoT-2A. Their modeling showed one slowly rotating and two rapidly rotating spot components. In particular, they found the length of the beating period to be compatible with the overtaking period \begin{equation} \label{eq:P_beat} P_{\mathrm{over}} = P_{\mathrm{beat}} = \left( \frac{1}{P_1} - \frac{1}{P_2}\right)^{-1} \; \end{equation} of the slowest and the fastest differentially rotating spots. Tentatively assigning the primary and secondary peaks identified in the periodograms to differentially rotating spots, we used Eq.~\ref{eq:domega_alpha} to derive lower limits for the absolute and relative horizontal shear and Eq.~\ref{eq:P_beat} to estimate the beating period; the values are listed in Table~\ref{table:period}. In our data, the beating period is not always well defined. Some light curves do not fully cover a beating pattern, as is the case for CoRoT 102601465 (Fig.~\ref{figure:lc1}), and the definition of the start and end of a cycle remains somewhat ambiguous.
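As a numerical check of Eqs.~\ref{eq:domega_alpha} and \ref{eq:P_beat}, the short snippet below reproduces the tabulated shear limits and beating period of CoRoT 102656730 (LRa01) from its two fitted periods:

```python
import math

def shear_limits(p1, p2):
    """Lower limits on the horizontal shear and the beating period
    from two close rotation periods (days); p2 > p1 is required for
    solar-like differential rotation."""
    assert p2 > p1
    delta_omega = 2.0 * math.pi * (1.0 / p1 - 1.0 / p2)  # rad / d
    alpha = (p2 - p1) / p2
    p_beat = 1.0 / (1.0 / p1 - 1.0 / p2)                 # days
    return delta_omega, alpha, p_beat

# CoRoT 102656730 (LRa01): P_fit,1 = 5.43 d, P_fit,2 = 5.83 d
dom, alpha, p_beat = shear_limits(5.43, 5.83)
# dom ~ 0.079 rad/d, alpha ~ 0.069, p_beat ~ 79.1 d, as in Table 3
```

The same three-line computation yields all $P_{\mathrm{beat}}$, $\Delta\Omega_{\mathrm{beat}}$, and $\alpha$ entries of Table~\ref{table:period}.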
Nonetheless, the calculated beating periods are in reasonable agreement with the observed behavior in the light curves. In CoRoT 102743567, a number of beating cycles with different durations were observed and the beating structure persisted over (or reappeared after) four years. If attributable to differential rotation, this indicates varying relative spot velocities. Given a temporally constant and unique mapping between latitude and velocity, this implies changing spot latitudes. \section{Spectral analysis} \label{section:spectral} We determined the stellar parameters using two complementary approaches: first, the curve-of-growth based MOOG package \citep{Sneden1973} and, second, the ``Spectroscopy Made Easy'' (SME) package presented by \citet{Valenti1996}, which relies on direct spectral modeling. A visual inspection of the spectra with regard to double line profiles did not reveal indications of binarity in any of our target stars. \subsection{Stellar parameter determination using MOOG} The stellar atmospheric parameters effective temperature ($T_{\mathrm{eff}}$), surface gravity ($\log g$), metallicity ([Fe/H]), and microturbulence velocity ($\xi_{\mathrm{mic}}$) were derived using the 2014 version of MOOG\footnote{\url{http://www.as.utexas.edu/~chris/moog.html}} and a grid of 1D Kurucz ATLAS9 model atmospheres\footnote{The Kurucz grids of model atmospheres can be found at \url{http://kurucz.harvard.edu/grids.html}} \citep{Kurucz1993}. Additionally, we used PYSPEC, a Python interface to run MOOG \citep{Bubar2010}. The analysis carried out by MOOG relies on a curve-of-growth approach in which the stellar atmospheric equilibrium is adjusted to match equivalent width (EW) measurements (see also \citealt{Gray2005}). To adjust the equilibrium and find the stellar parameters, it is essential to measure the EWs of lines originating from a single element in different ionization states.
For low-mass stars, iron provides a plethora of \ion{Fe}{i} and \ion{Fe}{ii} lines distributed over the optical spectral range; the \ion{Fe}{ii} lines, however, are usually scarcer. Our choice of \ion{Fe}{i} and \ion{Fe}{ii} lines is based on the final line list\footnote{The line list is available online at \url{http://www.astro.up.pt/~sousasag/ares}} provided by \citet{Sousa2008}, who present a total of 263 $\ion{Fe}{i}$ and 36 $\ion{Fe}{ii}$ lines, together with their excitation potentials and oscillator strengths in the wavelength range between $4500\,\AA$ and $6900\,\AA$. We scrutinized their line list and eliminated lines not usable in our analysis, for example, lines suffering from heavy blending or lines for which we could not derive adequate continua to obtain the EW. Additionally, we neglected weak lines ($\lesssim 5$\,m$\AA$) and lines in wavelength intervals with low S/N. The numbers of remaining $\ion{Fe}{i}$ and $\ion{Fe}{ii}$ lines for each star are listed in Table~\ref{tab:specan}. Equivalent widths were measured by fitting the line profiles using a Gaussian. The local continuum was adjusted manually for each line to achieve the optimal normalization. To this end, a solar model spectrum served as a comparison, which we also used for the unambiguous line identification in the observed spectra. For CoRoT 102743567 no EWs could be determined from our spectra owing to extreme rotational broadening ($\approx 64\,$km\,s$^{-1}$, see Sect.~\ref{Stellar_parameter_SME} and Table \ref{tab:specan}). To minimize the impact of uncertainties in the atomic line parameters and the characteristics of our particular instrumental setup on the analysis, we carried out a differential abundance analysis. Abundances are thus defined with respect to the Sun: \begin{equation} [\mathrm{Fe/H}] = \mathrm{log} \left(\frac{N(\mathrm{Fe})}{N(\mathrm{H})}\right)_* - \mathrm{log} \left( \frac{N(\mathrm{Fe})}{N(\mathrm{H})} \right)_{\odot},\quad \log\, N(\mathrm{H}) \equiv 12.
\end{equation} Therefore, we determined the EWs of our sample of spectral lines in a solar spectrum, which we obtained by observing light reflected off the asteroid 15\,Eunomia with the same instrumental setup. Our results are summarized in Table~\ref{tab:specan}. In MOOG, errors were estimated by studying various correlations. In particular, the temperature value was varied until the correlation between excitation potential and the $\ion{Fe}{i}$ abundances reached the $1\sigma$ level. The error on the microturbulence velocity was derived equivalently by adapting it until the correlation between the abundances and the reduced EWs\footnote{The EW divided by the line wavelength} reached $1\sigma$. In MOOG, the surface gravity is determined by minimizing the difference between the abundances obtained from the $\ion{Fe}{i}$ and $\ion{Fe}{ii}$ lines. Yet, the abundances depend on the stellar parameters including the surface gravity, which implies that the error on the surface gravity depends on the surface gravity itself. Therefore, the error had to be derived in an iterative process, which is described in detail by \citet{Bubar2010}. 
\begin{table*} \caption{Results of spectral analysis with MOOG and SME.} \label{tab:specan} \centering \begin{tabular}{clcrcrrc} \hline\hline CoRoT-ID & \multicolumn{1}{c}{$T_{\mathrm{eff}}$ [K]} & log\,$g$ & \multicolumn{1}{c}{[Fe/H]\tablefootmark{a}} & $\xi_{\mathrm{mic}}$ [$\mathrm{km\,s^{-1}}$] & \multicolumn{1}{c}{$\varv_{\mathrm{rad}}$ [$\mathrm{km\,s^{-1}}$]} & \multicolumn{1}{c}{$\varv\,\mathrm{sin}\,i$ [$\mathrm{km\,s^{-1}}$]} & $N(\ion{Fe}{i}, \ion{Fe}{ii})$\tablefootmark{b}\\ \hline \multicolumn{8}{c}{MOOG analysis} \\ 102577568 & $5600\pm110$ & $4.37\pm0.25$ & $-0.19\pm0.07$ & $1.94\pm0.13$ & \multicolumn{1}{c}{--} & \multicolumn{1}{c}{--} & 104, 11\\ 102601465 & $5630\pm110$ & $4.52\pm0.37$ & $0.01\pm0.07$ & $1.68\pm0.14$ & \multicolumn{1}{c}{--} & \multicolumn{1}{c}{--} & 115, 11 \\ 102606401 & $6270\pm140$ & $4.81\pm0.28$ & $0.02\pm0.09$ & $2.68\pm0.21$ & \multicolumn{1}{c}{--} & \multicolumn{1}{c}{--} &79, 11 \\ 102656730 & $5880\pm80$ & $4.84\pm0.19$ & $-0.13\pm0.05$ & $1.80\pm0.13$ & \multicolumn{1}{c}{--} & \multicolumn{1}{c}{--} &119, 12 \\ 102743567 & \multicolumn{1}{c}{--} & -- & \multicolumn{1}{c}{--} & -- & \multicolumn{1}{c}{--} & \multicolumn{1}{c}{--} & \multicolumn{1}{c}{--} \\ 102763571 & $5270\pm70$ & $4.24\pm0.17$ & $-0.22\pm0.04$ & $1.84\pm0.10$ & \multicolumn{1}{c}{--} & \multicolumn{1}{c}{--} &137, 14 \\ 102778303 & $4840\pm100$ & $4.09\pm0.52$ & $-0.54\pm0.05$ & $2.20\pm0.17$ & \multicolumn{1}{c}{--} & \multicolumn{1}{c}{--} &119, 12 \\ 102791435 & $5150\pm60$ & $4.39\pm0.26$ & $-0.03\pm0.03$ & $1.44\pm0.10$ & \multicolumn{1}{c}{--} & \multicolumn{1}{c}{--} &135, 9 \\ \hline \multicolumn{8}{c}{SME analysis} \\ 102577568 & $5690\pm90$ & $4.64\pm0.30$ & $0.06\pm0.07$ & $1.77\pm0.53$ & $0.45\pm0.51$ & $8.7\pm1.4$ & \multicolumn{1}{c}{--} \\ 102601465 & $5700\pm90$ & $4.88\pm0.31$ & $0.25\pm0.09$ & $1.77\pm0.42$ & $4.1\pm0.4$ & $7.4\pm0.9$& \multicolumn{1}{c}{--} \\ 102606401 & $6220\pm130$ & $4.93\pm0.32$ & $0.15\pm0.09$ & 
$1.76\pm0.41$ & $-18.9\pm0.5$ & $13.2\pm1.1$& \multicolumn{1}{c}{--} \\ 102656730 & $5900\pm100$ & $5.04\pm0.41$ & $0.02\pm0.10$ & $1.50\pm0.32$ & $-16.9\pm0.5$ & $10.2\pm1.0$& \multicolumn{1}{c}{--} \\ 102743567 & \multicolumn{1}{c}{--} & -- & \multicolumn{1}{c}{--} & -- & $-46.9\pm0.2$ & $63.9\pm3.4$& \multicolumn{1}{c}{--} \\ 102763571 & $5280\pm60$ & $4.61\pm0.25$ & $-0.15\pm0.09$ & $1.79\pm0.43$ & $-12.1\pm0.4$ & $5.5\pm0.8$& \multicolumn{1}{c}{--} \\ 102778303 & $4680\pm150$ & $4.97\pm0.19$ & $-0.23\pm0.30$ & $1.75\pm0.61$ & $-12.3\pm0.5$ & $5.4\pm1.6$& \multicolumn{1}{c}{--} \\ 102791435 & $5100\pm140$ & $4.69\pm0.25$ & $0.09\pm0.14$ & $1.54\pm0.46$ & $-23.2\pm0.3$ & $3.6\pm0.8$& \multicolumn{1}{c}{--} \\ \hline \end{tabular} \tablefoot{\tablefoottext{a}{For MOOG this is specifically the iron abundance. For SME, it refers to the metallicity pattern according to \citet{Grevesse2007}.} \tablefoottext{b}{Number of \ion{Fe}{i} and \ion{Fe}{ii} lines used in the MOOG analysis.}} \end{table*} \subsection{Stellar parameter determination using SME} \label{Stellar_parameter_SME} Following the analysis with MOOG, we used the software package SME in version~2.1 to derive a complementary set of parameters. In contrast to MOOG, the technique employed by SME is based on fitting synthetic spectra to the observations. As input, a grid of model atmospheres and an atomic line list are required. For the latter we obtained atomic data from VALD3\footnote{VALD3 is available at \url{http://vald.astro.uu.se/}} \citep{Kupka2000} and, again, used ATLAS9 Kurucz atmospheres. In our analysis, we used SME to fit individual echelle orders neglecting the low S/N edges of the orders and other unsuitable spectral regions. The latter comprise sections with spectral lines missing in the line list; regions affected by cosmics, telluric lines, airglow emission lines, and bands lacking any notable stellar absorption lines. 
In our fits, we used the values for $T_{\mathrm{eff}}$, log\,$g$, [Fe/H], and $\xi_{\mathrm{mic}}$ determined with MOOG as initial values for the SME fit. For the macroturbulence parameter, we assumed the solar value of $\xi_{\mathrm{mac}}=3.57\,\mathrm{km\,s^{-1}}$. In a first fitting step, we determined the radial velocity shift $\varv_{\mathrm{rad}}$ for all orders. We averaged the results in the different orders and interpreted the mean as the best estimate of the stellar parameter and its standard deviation as a robust error estimate. Second, we determined the mean rotational broadening parameter, $\varv\,\mathrm{sin}\,i$, in the same manner. In a third step, we fitted $\xi_{\mathrm{mic}}$ and [M/H], and finally, fitted $T_{\mathrm{eff}}$ and log\,$g$. During the fitting process, all parameters not currently fitted remained fixed at their best-fit values. The final values are summarized in Table~\ref{tab:specan}. An example of a spectral interval of CoRoT 102656730 is shown together with the best-fit synthetic spectrum in Fig.~\ref{figure:sme_example}. \begin{figure} \begin{center} \includegraphics[width=0.49\textwidth]{sme_example.pdf} \caption{Segment of the observed (open squares) and synthetic spectrum (solid line) of CoRoT 102656730. The grey shaded regions were used to determine the stellar parameters.} \label{figure:sme_example} \end{center} \end{figure} In the case of CoRoT 102743567 we fitted $\varv_{\mathrm{rad}}$ and $\varv\,\mathrm{sin}\,i$ using only orders covering the strong H$\alpha$ Balmer line and the $\ion{Na}{i}$ doublet at $\approx5890\,\AA$. As initial values, we calculated $T_{\mathrm{eff}}$ photometrically using B--V colors as described in Sect.~\ref{Color, extinction, and distance}. Furthermore, we chose values typical of main-sequence stars: log\,$g=4.3$ based on \citet{Gray2005}, solar metallicity, and $\xi_{\mathrm{mic}}=1.5$\,km\,s$^{-1}$.
\subsection{Stellar parameters obtained using MOOG and SME} In general, we find good agreement between the effective temperatures, surface gravities, metallicities, and microturbulence velocities derived with SME and MOOG. With a few exceptions, the values agree within their respective uncertainties. However, the surface gravities found with SME are systematically higher than those determined with MOOG. In particular, we find differences between 0.12 and 0.37\,dex for all stars but CoRoT 102778303, for which the deviation reaches 0.88\,dex; however, the error is also large. This star is the coolest in our sample, and we speculate that the line EW measurements may be affected by line blends, which could also account for the rather low metallicity of $-0.54\pm0.05$ found in our MOOG analysis. The direct fitting approach implemented in SME should minimize errors caused by blending effects or continuum determination. \citet{Valenti2005} analyzed 1040 F, G, and K stars using SME and obtained statistical errors of 44\,K on $T_{\mathrm{eff}}$, 0.06\,dex on log\,$g$, and 0.03\,dex on the metallicity. In our analysis, we found that the deviations among fits to individual echelle orders were larger than these statistical errors. Thus, our SME errors are likely dominated by systematics. All stars show substantial photometric variability, which we attribute to starspots. Therefore, high starspot coverage fractions are conceivable \citep[e.g.,][]{Huber2010}, although we caution that the data have not been taken simultaneously. A high starspot coverage fraction may actually challenge the assumption of a single-temperature photosphere, which tacitly underlies our spectral analysis. It may also explain at least a fraction of the differences between the MOOG and SME analysis.
\subsection{Color, extinction, and distance} \label{Color, extinction, and distance} \begin{table} \begin{center} \caption{Intrinsic color, color excess, and photometric distance estimate.} \label{table:color_temp} \begin{tabular}{cccc} \hline\hline CoRoT-ID & (B--V)$_0$ & E(B--V) & Dist.\tablefootmark{a} \\ & [mag] & [mag] & [pc] \\ \hline 102577568 & 0.70 & 0.09 & 199 \\ 102601465 & 0.69 & 0.09 & 407 \\ 102606401 & 0.53 & 0.11 & 347 \\ 102656730 & 0.62 & 0.12 & 404 \\ 102743567 & -- & -- & 474 \\ 102763571 & 0.80 & 0.05 & 291 \\ 102778303 & 0.96 & 0.18 & 138 \\ 102791435 & 0.84 & 0.12 & 207 \\ \hline \end{tabular} \end{center} \tablefoot{\tablefoottext{a}{The accuracy is typically 25\,\%, see text.}} \end{table} \citet{Ramirez2005} provide metallicity-dependent relations between color and effective temperature. Given the spectroscopically determined parameters and the observed B--V colors (see Table~\ref{table:1}), we can determine the B--V color excess. The resulting intrinsic colors, (B--V)$_0$, and excesses, E(B--V), are given in Table~\ref{table:color_temp}. In the conversion, we applied the effective temperatures and metallicities derived using MOOG (Table~\ref{tab:specan}) and assumed main-sequence stars. By substituting the upper and lower error boundaries on the effective temperature, we estimated that the temperature-induced error on the color excess is on the order of 0.03\,mag. For the star CoRoT 102743567, for which no spectroscopic parameters could be determined, we used the observed B--V color of 0.39\,mag to obtain an estimate of 7000\,K for its effective temperature, which points to an early F-type star. We note that \mbox{B--V$=0.39$\,mag} requires a slight extrapolation of the \citet{Ramirez2005} calibration, which is formally valid only down to \mbox{B--V$=0.4$\,mag}. This result is consistent with the classification as an A9III type star by \citet{Sebastian2012} based on low-resolution spectroscopy.
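The subsequent conversion of the color excess into an optical extinction and a photometric distance follows the standard extinction-corrected distance modulus with $R=3.1$. A minimal sketch is given below; the apparent and absolute V magnitudes used here are hypothetical placeholder values, not entries from our tables.

```python
def photometric_distance_pc(bv_obs, bv0, v_mag, abs_v, r=3.1):
    """Distance in pc from the extinction-corrected distance modulus:
    E(B-V) = (B-V) - (B-V)_0, A_V = R * E(B-V),
    m - M = 5 log10(d / 10 pc) + A_V."""
    a_v = r * (bv_obs - bv0)        # optical extinction in mag
    mu = v_mag - abs_v - a_v        # extinction-corrected distance modulus
    return 10.0 ** (0.2 * mu + 1.0)

# hypothetical inputs (placeholders, not values from the tables):
# a G-type dwarf with observed B-V = 0.79, (B-V)_0 = 0.70, V = 12.5, M_V = 5.0
d = photometric_distance_pc(0.79, 0.70, 12.5, 5.0)   # ~278 pc
```

An uncertainty of $\pm 0.5$\,mag in the distance modulus translates into the quoted distance accuracy of roughly 25\,\% via the same relation.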
We proceeded by converting the color excess into optical extinction by multiplying by $R$, for which we assumed a value of 3.1 \citep{Predehl1995}. Using Table~15.7 from \citet{allen2000}, we estimated absolute visual magnitudes based on our effective temperature determinations and, finally, calculated a photometric distance estimate for our target stars, taking the extinction into account. Assuming an accuracy of $\pm 0.5$\,mag for the distance modulus, we estimate a typical error of 25\,\% on the distances. \subsection{Age of the sample stars} The $\ion{Li}{i}$ resonance doublet at $6707.76\,\AA$ and $6707.91\,\AA$ can be used as an age estimator in young stars because lithium depletion progresses quickly during the first few hundred Myr \citep[e.g.,][]{Soderblom2010}. In Fig.~\ref{figure:lithiumlines}, we show the spectral region around the $\ion{Li}{i}$ line for our sample stars. Unambiguous detections are present for four out of the eight targets. In a quantitative analysis, we fitted the line profile by a Gaussian after adjusting the local continuum, which is the main source of error. Clear detections of the lithium line were obtained in CoRoT 102577568, CoRoT 102601465, CoRoT 102606401, and CoRoT 102763571. A formally significant line detection is also obtained in CoRoT 102778303. We caution, however, that this result may require confirmation at higher S/N. For CoRoT 102656730 and CoRoT 102791435 we derived upper limits on the line EW. To this end, we generated artificial data sets between $6707.0\,\AA$ and $6708.5\,\AA$ assuming that no line is present, and fitted them using the model including the absorption line. Repeating this experiment $10\,000$ times, we determined the distribution of line EW measurements given that in reality no line exists and adopted the 90\,\% cut-off as the upper limit for a significant detection. The final EWs along with their $90\,\%$ confidence intervals are listed in Table~\ref{table:ew}.
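The Monte Carlo estimate of the EW upper limit can be sketched as follows. For simplicity, the line center and width are held fixed and only the line depth is fitted (by linear least squares), whereas the full analysis fits the complete profile; the wavelength sampling and noise level below are assumed values, not the actual SARG numbers.

```python
import numpy as np

def ew_upper_limit(wave, noise_sigma, center=6707.8, sigma_line=0.10,
                   n_trials=10000, quantile=0.90, seed=7):
    """Distribution of spurious EW measurements in noise-only spectra:
    draw Gaussian noise around a flat continuum, fit the depth of a
    Gaussian absorption line of fixed center and width, and return the
    requested quantile of |EW| (in the units of `wave`)."""
    rng = np.random.default_rng(seed)
    profile = np.exp(-0.5 * ((wave - center) / sigma_line) ** 2)
    ew_unit = sigma_line * np.sqrt(2.0 * np.pi)   # EW of a unit-depth line
    noise = rng.normal(0.0, noise_sigma, (n_trials, wave.size))
    depths = noise @ profile / np.sum(profile**2)  # least-squares depths
    return np.quantile(np.abs(depths) * ew_unit, quantile)

# assumed pixel sampling and a continuum S/N of ~50 (noise_sigma = 0.02)
wave = np.arange(6707.0, 6708.5, 0.05)
limit_mA = 1000.0 * ew_upper_limit(wave, noise_sigma=0.02)
```

With these assumed inputs the 90\,\% cut-off comes out at a few m\AA, the same order as the upper limits quoted for CoRoT 102656730 and CoRoT 102791435.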
The $\ion{Li}{i}$ line is known to be blended by an $\ion{Fe}{i}$ line at $6707.43\,\AA$, which has not been taken into account in our fitting. \citet{Favata1993} studied the contribution of this iron line to the overall EW. From their Fig.~1, we estimate that the contribution for a star with an effective temperature of 5500\,K should be on the order of $10\,m\AA$ and it decreases toward higher temperatures. However, for CoRoT 102778303, at a temperature of about $4700$\,K, the $\ion{Fe}{i}$ line could have an EW of about $20$\,m\AA, which is on the same order as our measurement. While this could indicate that the line may indeed be attributable to iron, this seems unlikely given the subsolar abundance pattern. All other detections should, if anything, be affected on the $10$\,\% level, casting no doubt on the detection of the $\ion{Li}{i}$ line itself. We proceeded by comparing the measured $\ion{Li}{i}$ line EWs with measurements in open cluster members of well-known age. In Fig.~\ref{figure:age} we show our $\ion{Li}{i}$ EW measurements as a function of effective temperature along with results for the open clusters \object{Orion} Ic \citep[10\,Myr,][]{King1993}, \object{NGC 2264} \citep[10\,Myr,][]{Soderblom1999}, \object{Pleiades} \citep[100\,Myr,][]{Soderblom1993a}, \object{Ursa Major} \citep[300\,Myr,][]{Soderblom1993b}, \object{Hyades} \citep[660\,Myr,][]{Soderblom1990}, and \object{Praesepe} \citep[660\,Myr,][]{Soderblom1993c}. Based on their location in the plot, we arrived at the age estimates provided in Table~\ref{table:ew}. Although the stars within an open cluster are formed simultaneously, there is a considerable scatter around a mean value in the measured EWs for a given effective temperature. 
\begin{table} \begin{center} \caption{Equivalent widths of \ion{Li}{i} line and stellar age estimates.} \label{table:ew} \begin{tabular}{cccc} \hline\hline CoRoT-ID & Li\,I EW [m\AA] & Age\tablefootmark{a} [Myr] & Age\tablefootmark{b} [Myr] \\ \hline 102577568 & $89.8^{+6.4}_{-6.4}$ & 100--660 & $178\pm21$ \\ 102601465 & $114.7^{+10.6}_{-10.3}$ & 100--660 & $234\pm29$ \\ 102606401 & $153.4^{+6.5}_{-6.4}$ & 10--100 & $149\pm24$ \\ 102656730 & $\le3.9$ & 300--660 & $247\pm33$ \\ 102743567 & -- & -- & -- \\ 102763571 & $161.0^{+8.2}_{-8.1}$ & 100 & $112\pm12$ \\ 102778303 & $29.4^{+8.0}_{-7.5}$ & 300--660 & $132\pm13$ \\ 102791435 & $\le3.8$ & 300--660 & $419\pm50$ \\ \hline \end{tabular} \end{center} \tablefoot{\tablefoottext{a}{Comparison with open clusters}; \tablefoottext{b}{Gyrochronological age}} \end{table} As a result of the well-established age-activity-rotation paradigm \citep[e.g.,][]{Skumanich1972}, rotation itself can be used as an age indicator. We calculated the stellar ages of our target stars based on gyrochronological models presented by \citet{Barnes2007} (their Eq.~3). In particular, we used the rotation period and the intrinsic color \mbox{(B--V)$_0$} as input parameters. The errors were estimated using their Eq.~16. Again, we present our results in Table~\ref{table:ew}. Almost all of the gyrochronological estimates are consistent with our estimates based on the comparison with open clusters. The single exception is CoRoT 102778303 for which our cluster comparison yields a higher age. However, the star CoRoT 102778303 is the coolest target in our sample, for which we obtained effective temperature estimates of $4840\pm100$\,K with MOOG and $4680\pm150$\,K with SME. If the star is at the lower edge of the indicated temperature range and the \ion{Li}{i} line is not heavily contaminated by iron, its age could also be compatible with that of the Pleiades in Fig.~\ref{figure:age}. 
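The gyrochronological ages in Table~\ref{table:ew} follow from inverting the color--period--age relation of \citet{Barnes2007} (their Eq.~3). A minimal sketch; the coefficient values are quoted from memory of that paper and should be treated as illustrative, not authoritative:

```python
def gyro_age(period_days, b_minus_v, a=0.7725, b=0.601, c=0.40, n=0.5189):
    """Invert the gyrochronology relation P = t**n * a*((B-V) - c)**b
    (Barnes 2007, Eq. 3; P in days, t in Myr, coefficients assumed)."""
    if b_minus_v <= c:
        raise ValueError("relation only valid for (B-V) > c")
    f = a * (b_minus_v - c) ** b           # rotational isochrone at this color
    return (period_days / f) ** (1.0 / n)  # age in Myr
```

As a sanity check, solar values ($B-V = 0.65$, $P \approx 26$\,d) yield an age of roughly 4.4\,Gyr, close to the Sun's actual age; faster rotation at fixed color maps to younger ages, as for our sample stars.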
\begin{figure} \begin{center} \includegraphics[width=0.49\textwidth]{lithiumlines.pdf} \caption{\label{figure:lithiumlines} Normalized spectra of our target stars showing the wavelength region around the \ion{Li}{i} line at 6708\,\AA.} \end{center} \end{figure} \section{Discussion} \label{section:discussion} \subsection{Stellar parameters and age determination} The results of our spectral analysis are broadly consistent with the classification provided by \textit{Exo-Dat} (see Table~\ref{table:1}). All stars show gyrochronological ages between $111$ and $418$\,Myr. Additionally, five stars show a \ion{Li}{i} line supporting a low age, although the detection in CoRoT 102778303 remains somewhat ambiguous. Fast rotation and young age are compatible with high levels of activity and large starspot coverage fractions responsible for the photometric variability. Judging from the derived spectroscopic parameters, our sample consists of main-sequence stars. However, we find that the luminosity class for CoRoT 102778303 provided by \textit{Exo-Dat} is probably inappropriate. The obtained log\,$g$ and $\varv\,\sin\,i$ values, gyrochronology, and rotation period indicate that CoRoT 102778303 should be classified as a dwarf instead of a subgiant. \begin{figure} \begin{center} \includegraphics[width=0.49\textwidth]{age.pdf} \caption{\label{figure:age} $\ion{Li}{i}$ EW vs. effective temperature for several open clusters. The location of our sample stars is indicated by black circles (effective temperatures derived with MOOG).} \end{center} \end{figure} \subsection{Differential rotation, spot evolution, and flip-flop events} \citet{Reinhold2013} studied differential rotation in a sample of $40\,661$ active stars observed by \textit{Kepler}. In this sample, $77.2$\,\% of the targets show a second periodogram peak, which they attribute to a second rotation period. 
Following their interpretation, we show our results for the lower limit of the relative horizontal shear in the context of the sample presented by \citet{Reinhold2013} in Fig.~\ref{figure:reinhold}. Our values are clearly compatible with theirs, although CoRoT 102601465 lies near their detection limit. In particular, we were able to confirm the trend of increasing relative shear, $\alpha$, with increasing rotation period. CoRoT 102778303 and CoRoT 102791435, the coolest targets with the longest rotation periods, both of which were observed twice, show a considerable shift in the value of the relative shear parameter between the two observation epochs roughly four years apart. We attribute the difference to both an uncertainty resulting from the prewhitening technique and the variability of the spot configuration. The results reflect the uncertainty that is expected in individual measurements. The star CoRoT 102743567 is the earliest-type star and fastest rotator in our sample. Combining a canonical radius of $1.5$\,R$_{\odot}$ for the star \citep{allen2000} with the rotation period, we estimate an equatorial rotation velocity, $\varv_{\mathrm{eq}}$, of $90-100$\,km\,s$^{-1}$. Based on our period analysis, we further estimated a relative horizontal shear parameter of $0.052$. \citet{Ammler2012} studied differential rotation specifically in A- and F-type stars using line profile analyses. Although the authors find that the fraction of differential rotators decreases both as a function of increasing temperature and rotation velocity, about $10$\,\% of their sample stars with $\varv\,\sin\,i\approx 100$\,km\,s$^{-1}$ show measurable differential rotation. Among these stars, a relative horizontal shear of $\approx 0.05$ is not exceptional (see their Fig.~10). Indeed, at $7000$\,K an absolute horizontal shear, $\Delta \Omega$, of $0.6$\,rad\,d$^{-1}$ may be expected, which is also quite compatible with our value of $0.4$\,rad\,d$^{-1}$.
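Both quantities above are simple kinematic conversions, sketched below. The $\approx 0.8$\,d period used in the example is inferred from the quoted $\varv_{\mathrm{eq}}$ range and is an assumption for illustration, not a value taken from our tables.

```python
import math

R_SUN_KM = 6.957e5  # solar radius in km

def v_eq(radius_rsun, period_days):
    """Equatorial rotation velocity (km/s) of a rigidly rotating star:
    circumference divided by the rotation period."""
    circumference = 2.0 * math.pi * radius_rsun * R_SUN_KM
    return circumference / (period_days * 86400.0)

def absolute_shear(alpha, period_days):
    """Absolute horizontal shear dOmega = alpha * 2*pi / P, in rad/d."""
    return alpha * 2.0 * math.pi / period_days

# Illustrative values for the fastest rotator: R = 1.5 R_sun, P ~ 0.8 d
print(f"v_eq  = {v_eq(1.5, 0.8):.0f} km/s")          # ~95 km/s
print(f"dOmega = {absolute_shear(0.052, 0.8):.2f}")  # ~0.41 rad/d
```

These numbers reproduce the $90-100$\,km\,s$^{-1}$ and $0.4$\,rad\,d$^{-1}$ figures quoted in the text.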
We note that the spectral type of CoRoT 102743567 is in the range of $\gamma$~Dor-type variables, which show pulsations with periods on the order of one day \citep{Kaye1999, Zwintz2013}. Although it is still hard to unambiguously distinguish between the rotational modulation and a potential pulsational component \citep[cf.,][]{Zwintz2013}, our results are consistent with dominant rotational variation. If the beating pattern is caused by differentially rotating active regions (alone), its length also defines a minimum lifetime for the associated regions, which in our case ranges between $15$\,d and about $150$\,d. Typical sunspot lifetimes are on the order of or less than one month \citep{Solanki2003}, and low- to mid-latitude spots on rapidly rotating, young single main-sequence stars also appear to have lifetimes of about one month \citep{Hussain2002}. Therefore, the longer beating periods seem challenging in terms of spot lifetimes. However, larger spots tend to have longer lifetimes \citep{Solanki2003}. The light curves of our sample stars (Figs.~\ref{figure:lc1} -- \ref{figure:lc3}) show a pattern of variability similar to the light curves of flip-flop stars such as the active giant \mbox{FK Com} or young solar analogs \citep[see][]{Jetsu1993, Berdyugina2005, Olah2006, Hackman2013}. Flip-flops are a specific manifestation of spot evolution, characterized by alternating spot coverage on two long-lived active longitudes on opposing hemispheres, which requires ``coordinated'' spot evolution. While the origin of the phenomenon may be entirely explained by the evolution of quasi-stationary spots, differential rotation and latitudinal spot migration may also play a role or even largely explain the observations \citep[][]{Jetsu1993, Hackman2013}.
The surface reconstructions of \object{CoRoT-2} have revealed spot concentrations on active longitudes on opposing hemispheres \citep{Lanza2009, Huber2010}, which alternate in strength on the beating timescale ($\approx 50$\,d). Owing to the similarity of the light curves studied here, a comparable behavior may be expected. While the behavior of the light curves analyzed here is reminiscent of the flip-flop phenomenon, the flip-flop timescales observed so far are several years \citep[e.g.,][]{Hackman2013}. For instance, \citet{Jetsu1993} detected only three flip-flop events in a photometric data set spanning roughly 25 years. While spot evolution certainly contributes to the morphology of the observed light curves, the absence of differential rotation in our late-type sample stars appears unlikely. \begin{figure} \begin{center} \includegraphics[width=0.49\textwidth]{reinholdt.pdf} \caption{Lower limits of the relative horizontal shear as a function of $P_{\mathrm{min}}$ for our target stars and the sample of \citet{Reinhold2013}. Values determined for stars with two CoRoT light curves obtained during different epochs are connected by a solid line. We adopted the effective temperatures derived with MOOG and used 7000\,K for CoRoT 102743567.} \label{figure:reinhold} \end{center} \end{figure} \section{Summary and conclusion} \label{section:conclusions} We present a photometric and spectroscopic study of eight stars with light curves showing photometric variability similar to that of CoRoT-2A. The sample spans a wide range of spectral types from early F- to mid K-type. The stellar parameters obtained from our spectral analysis with SME and MOOG are generally consistent. For the fastest rotator in our sample, CoRoT 102743567, no detailed spectral analysis could be carried out. For the remaining stars, we obtained surface gravities compatible with those of main-sequence stars.
Combining the spectroscopically derived effective temperatures with the stellar colors, we deduced distances corrected for interstellar reddening. The light curve analysis showed large photometric amplitudes of up to 7.5\,\% and short rotation periods between about $0.8$\,d and $11$\,d. We found the photometric variability to be consistent with rotational modulation by starspots. In the majority of cases, our periodogram analysis revealed two peaks corresponding to similar periods, whose spacing is related to the beating period. Attributing the two periods to differentially rotating spots and combining the results with our spectroscopic measurements, we find results consistent with similar previous analyses of Kepler light curves \citep{Reinhold2013}. However, in the end it remains unclear whether the prominent pattern of variability exhibited by the light curves is dominated by differential rotation or spot evolution. In analogy to findings from photometric campaigns of the active giant FK~Com, we expect both effects to play a role. Gyrochronological models indicate that all stars in our sample are young dwarfs ($100-400$\,Myr). In four stars, we additionally found detectable \ion{Li}{i} absorption, which points toward a low age. This is consistent with the high level of activity evident in the light curves. Our sample shows a wide spread in spectral types including F-, G-, and K-type stars, which all show a similar photometric beating behavior. This suggests that all low-mass stars with outer convection zones may produce a similar CoRoT-2A-like light curve sometime in their early evolution. \begin{acknowledgements} The authors thank Dr. Sebastian Schr\"oter for valuable discussions in preparing the project and support in obtaining the spectra. This work was prepared using PyAstronomy. This research has also made use of the ExoDat Database, operated at LAM-OAMP, Marseille, France, on behalf of the CoRoT/Exoplanet program.
We acknowledge use of observational data obtained with SARG at the TNG, Roque de Los Muchachos, Spain. This work has made use of the VALD database, operated at Uppsala University, the Institute of Astronomy RAS in Moscow, and the University of Vienna. Special thanks to Eric J. Bubar (University of Rochester) for making his MSPAWN and PYSPEC codes available to us. We are grateful to Nikolai Piskunov (University of Uppsala) for providing SME to us. We made use of the stellar spectrum synthesis program SPECTRUM of Richard O. Gray (Appalachian State University). \end{acknowledgements} \bibliographystyle{aa}
\usepackage{amsmath,amssymb,array} \usepackage{algorithm} \newtheorem{definition}{Definition} \begin{document} \title{Planning and Learning: A Review of Methods involving Path-Planning for Autonomous Vehicles} \author{Kevin Osanlou$^{1,2}$, Christophe Guettier$^1$, Tristan Cazenave$^{2}$, Eric Jacopin$^3$} \affiliation{$^1$ Safran, $^2$ Paris-Dauphine University, $^3$ CREC Saint-cyr \\ [email protected], [email protected], [email protected], [email protected]} \begin{abstract} \noindent This short review aims to make the reader familiar with state-of-the-art work on planning, scheduling, and learning.
First, we study state-of-the-art planning algorithms. We give a brief introduction to neural networks. Then we explore in more detail \textit{graph neural networks}, a recent variant of neural networks suited for processing graph-structured inputs. We briefly describe the concept of reinforcement learning and some of the approaches designed to date. Next, we study some successful approaches combining neural networks with planning for path-planning. Lastly, we focus on temporal planning problems with uncertainty. \end{abstract} \keywords{Planning, Learning, Machine Learning, Graph Machine Learning} \maketitle \section{Planning} The aim of planning is to conceive plans in order to achieve a particular goal. Such plans represent sequences of actions executed by an agent that enable the transition from a \textit{start state} of an environment, where goal requirements are not satisfied, to an \textit{end state} where they are. In some planning tasks, states are fully observable; in others, only partially. Actions taken by the agent can be deterministic (\emph{i.e. } lead to a certain future state) or non-deterministic (\emph{i.e. } lead to different future states based on probabilities that are either known or unknown). State variables can be continuous or discrete, resulting in an infinite or finite number of states. Actions can be taken in parallel or only one at a time, and may or may not have a duration. There can be several initial start states or only one, and several agents or only one. Planning environments can be diverse, ranging from simple positioning in a graph to the complex dynamics of a \textit{first person shooter} (FPS) video game. In classical planning, models are restricted in the following aspects: the environment is fully observable, there is a single agent, the state space is finite, there is only one known initial start state, and actions are instantaneous and deterministic, so there are no uncontrollable events. Actions can only be taken one at a time.
Therefore, a sequence of actions from a start state accurately defines the end state, which needs to satisfy the goal requirements. Generally, classical planning can be represented mathematically by a tuple ($S,A,P$) where: \begin{itemize} \item $S$ is the set of states \item $A$ is the set of actions \item $P$ is a state transition function \end{itemize} \noindent The state transition function $P: S \times A \longrightarrow S$ maps a current state $s\in S$ and an action $a \in A$ to the resulting state $s' \in S$. To express and solve planning tasks in computer science, different languages have been proposed. Each language represents components of the planning environment differently. These include the Stanford Research Institute Problem Solver (STRIPS) \citep{fikes1971strips} from SRI International and the popular Planning Domain Definition Language (PDDL) \index{Languages! PDDL} \citep{mcdermott1998pddl}. NASA introduced its own planning language, the Action Notation Modeling Language (ANML) \index{Languages! ANML} \citep{smith2008anml}. \subsection{Applications of Planning in Autonomous Systems} A system is considered autonomous if it is able to generate and execute plans to achieve its assigned goals without human intervention, and if it is able to deal with unexpected events. Planning has benefited autonomous systems greatly in the past 50 years. Early on, Shakey the robot \citep{nilsson1984shakey}, the first general-purpose autonomous mobile robot, was a project that saw the rise of a powerful planning algorithm known as A* (introduced in the next sections), still used nowadays. Space exploration has benefited greatly from planning techniques. Autonomy in satellites or other space vehicles reduces the need for human presence as well as for communication with the ground, which can be especially useful for long-term missions.
Applications include Deep Space 1 \citep{muscettola1998remote}, or more recently the Curiosity rover \citep{rabideau2017prototyping}, which is currently exploring Mars. Aerospace applications include Unmanned Aerial Vehicles (UAVs), which have a wide array of applications. Civil applications include road traffic monitoring, remote sensing, security, goods delivery \citep{motlagh2016low}, and networking and communications \citep{hayat2016survey}. UAVs can also be used for operational situations such as search and rescue tasks \citep{silvagni2017multipurpose}, \citep{doherty2007uav}. Search and rescue operations are typically very demanding both in terms of cost and human resources, and can put humans at risk. UAVs reduce those costs, and their ability to fly autonomously makes it possible to remove human presence from dangerous tasks. Autonomous Unmanned Ground Vehicles (AUGVs) and Autonomous Ground Vehicles (AGVs) are also at the center of automation efforts, where (trajectory) planning plays a crucial role. Among AGVs, self-driving cars have been the main focus for civil applications given the potentially revolutionary impact they can have on society. The most advanced self-driving cars combine the latest sensors and computer vision tools for environment perception and use planning to make relevant decisions. We refer the reader to \citep{badue2020self} for a complete survey on self-driving cars. AUGVs, on the other hand, are intended for other tasks such as typical disaster relief situations, in which they can be required to perform technical actions (\emph{e.g. } observations, measurements, communications, etc.) while navigating mostly in off-road environments across defined trajectories \citep{guettier2016constraint}. Automation allows AUGVs to perform dangerous tasks without human presence, and AGVs to move on their own while passengers can focus on other activities.
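The $(S,A,P)$ formalization above can be made concrete with a minimal sketch: a deterministic transition system together with a breadth-first search that returns a shortest plan. The toy domain (an agent on a line of five cells) is purely illustrative.

```python
from collections import deque

def find_plan(start, goal, actions, transition):
    """Breadth-first plan search in a deterministic model (S, A, P).

    `transition(s, a)` returns the successor state, or None if action `a`
    is inapplicable in state `s`. Returns a shortest action sequence
    leading from `start` to `goal`, or None if no plan exists.
    """
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan
        for a in actions:
            nxt = transition(state, a)
            if nxt is not None and nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, plan + [a]))
    return None  # goal unreachable

# Toy domain: an agent moving on a line of 5 cells (states 0..4).
actions = ["left", "right"]
def transition(s, a):
    if a == "left" and s > 0:
        return s - 1
    if a == "right" and s < 4:
        return s + 1
    return None  # action inapplicable

print(find_plan(0, 3, actions, transition))  # ['right', 'right', 'right']
```

In this deterministic setting, replaying the returned action sequence through $P$ from the start state reaches exactly the goal state, as the classical-planning restrictions above guarantee.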
\section{Motion Planning and Path-Planning} Path-planning consists of finding a path leading to a desired point from a start point. Motion planning consists of determining motion and path decisions for an agent in order to allow it to achieve a specified motion-related task. Motion planning is more general than path-planning in the sense that, in addition to determining a path the agent needs to take to reach an end point from a start point, it also requires motion characteristics for the agent to reach the end point. Such characteristics can be, but are not limited to, a sequence of positions over time, acceleration values to provide in order to reach a potentially required speed or parameters such as directional angles. Figure ~\ref{fig_motion_planning} illustrates the example of a motion planning task in which a robot manipulator is tasked with grabbing an object located at a START position and moving it to the GOAL position \citep{robotmanip}. The robot has 4 joints which can revolve. The last joint is used to grab and release objects. Let $\alpha_1, \alpha_2, \alpha_3, \alpha_4$ be the angles for each joint, starting from the base of the robot. A planning state is defined by a vector $s = (\alpha_1, \alpha_2, \alpha_3, \alpha_4)$ which entirely defines the position of the robot and the potential object it is carrying. The configuration space $S$ is made of all possible combinations of values each $\alpha_i$ can take. Some states are 'legal', \emph{i.e. } the robot can actually be in those states, others 'illegal', \emph{i.e. } the robot cannot be in those states. For instance, supposing the angle axis is horizontal and revolves counterclockwise, any state written as $(\frac{3}{2}\pi, \alpha_2, \alpha_3, \alpha_4)$ will not be valid regardless of the values of $\alpha_2, \alpha_3, \alpha_4$ since the first joint cannot bend the joined arm downward. 
By discretizing values of each angle, the robot can determine a series of consecutive angle changes for each joint, which will be considered as actions, that will allow it to grab and move the object from START to GOAL. \begin{figure}[tbh] \centering \includegraphics[scale=0.7]{intro_chapter/pictures/motion_planning.pdf} \caption[A motion planning task for a robot manipulator] {\textbf{A motion planning task for a robot manipulator.} The robot has to carry an object from the START location to the GOAL position. Source: \citep{robotmanip}} \label{fig_motion_planning} \end{figure} Figure~\ref{fig_path_planning} shows a path-planning problem in which an agent has to move from a start grid to an end grid. In this environment, black grids represent obstacles. At each step, the agent can move to the adjacent top, left, right or bottom grids. A possible path, in red, allows the agent to fulfill its goal, and minimizes the total distance it needs to travel. The environment in Figure~\ref{fig_path_planning} can be represented by a geographical graph $\mathcal{G}=(\mathcal{V},\mathcal{E})$. In this graph, nodes in $\mathcal{V}$ are grids not blocked by obstacles, and edges in $\mathcal{E}$ link adjacent (non-diagonal) grids. Edges are assigned a default weight of $1$ as we suppose adjacent grids to be equidistant from one another. \begin{figure}[tbh] \centering \includegraphics[scale=0.5]{intro_chapter/pictures/path_planning.pdf} \caption[A path-planning task for an agent] {\textbf{A path-planning task for an agent.} Black grids represent obstacles. The agent is located at the START grid and needs to move to the END grid. The red arrows represent a possible path to satisfy that goal.} \label{fig_path_planning} \end{figure} Next, we describe some popular deterministic heuristic-based algorithms for path-planning related problems. These algorithms are well-suited for planning domains with low dimensionality. \subsection{A*} \index{Algorithms! Planning! 
A*} The A* algorithm \citep{hart1968formal} is a popular best-first search approach to compute an optimal path. Note that A* and related algorithms remain applicable in planning domains beyond path-planning, which is what makes them so popular. A* can be considered a specialized form of \textit{Dynamic Programming} (DP) \index{Algorithms! Search! DP} \citep{bellman1966dynamic}. DP essentially breaks down a problem into sub-problems in a recursive fashion and seeks to find the optimal choice to make at each step. This can be expressed as a search tree, which a DP algorithm will explore entirely to return an optimal solution. A* differs in that it guides the search towards the most promising states first in order to potentially save a significant amount of computation. In path-planning problems, states are graph nodes and the transition cost from one state to another is the cost of the edge linking the corresponding nodes in the graph. A* is complete: it will always find a solution if one exists in a finite search space. Depending on requirements, one can use a heuristic which guarantees to find an optimal solution, or a heuristic which simply aims to find a good solution very efficiently, even if possibly sub-optimal. We describe this process next. Let $S$ be the finite set of states A* explores, $s_{start}$ the state where the agent starts and $s_{goal}$ the state the agent wants to transition into to satisfy the goal requirements. In order to guide the search, A* proceeds in a best-first fashion by keeping track, for any state $s$ it explores, of an estimated cost $g(s)$ of reaching that state from the start state $s_{start}$. Algorithm initialization is as follows: $\forall s \in S$, $g(s) \gets \infty$ and $g(s_{start}) \gets 0$. Additionally, A* uses a heuristic $h$ which estimates the remaining best cost from any state $s$ to the goal state $s_{goal}$.
The heuristic can be \textit{admissible} to ensure that A* returns an optimal path: once the goal is reached, the path found is guaranteed to be optimal. Admissibility means that for any state $s$, $h(s)$ is lower than or equal to the actual cost of the optimal path from $s$ to $s_{goal}$. A* maintains a priority queue, the $OPEN$ list, in which it inserts states by their $f$ value. For any state $s \in S$, $f(s) = g(s) + h(s)$. It then proceeds to extract the state $s_{min}$ with the lowest such value in the $OPEN$ list. A* then develops all neighboring states $s' \in S'\subset S$ it can transition into from the state $s_{min}$. For each of those states, costs are updated if possible. More specifically, $\forall s' \in S'$, if $g(s_{min}) + TC(s_{min}, s') < g(s')$ then $g(s') \gets g(s_{min}) + TC(s_{min}, s')$. Here, $TC(s,s')$ returns the transition cost from state $s$ to state $s'$. Additionally, if $g(s')$ is updated, the state $s'$ is added to the $OPEN$ list with its new $f$ value (or its $f$ value is updated if already present in the $OPEN$ list). The best predecessor state for $s'$ is also stored in memory if $g(s')$ is updated, \emph{i.e. } $prev(s') \gets s_{min}$, where the function $prev$ stores a predecessor for each state. The A* algorithm keeps extracting states from the $OPEN$ list until the goal state $s_{goal}$ is extracted, at which point a path has been found (and is optimal if an admissible heuristic is used) from $s_{start}$ to $s_{goal}$. \subsection{Incremental Planning} In some scenarios, the agent might not have accurate information about the graph structure. The agent may acquire more accurate information about the graph structure only once it has started travelling on a computed plan. This is also the case for autonomous vehicle agents if the explored terrain, represented by a graph, is inaccurate at the time of path-planning.
It may also be the case if terrain structure changes are likely to happen frequently. The agent will only be able to take into account corrections as it is exploring the terrain. If the agent computes a path from $s_{start}$ to $s_{goal}$, proceeds on the path, and observes graph changes along the way (\emph{e.g. } edge connection or weight modifications), the computed path may turn out to be in fact sub-optimal after taking into account the new graph structure. In order to compute the new optimal path from the agent's current position $s$ on the path (when the change is observed) to $s_{goal}$, two possibilities exist. The agent can re-plan from scratch in order to compute the shortest path from $s$ to $s_{goal}$. This approach can however cause expensive computations that may be avoidable (\emph{e.g. } if the change in the graph does not change the optimality of the shortest path, or if it's a minor change that can be fixed with a small modification). The other possibility is to leverage information from the previously computed shortest path in order to repair it and make it optimal again. This is the approach taken in \textit{incremental} planning algorithms. Two particularly popular such algorithms are the D* algorithm \citep{stentz1995focussed} and an improved lighter version of D*, the D* Lite algorithm \citep{koenig2002improved}. D* Lite is quite efficient and remains a method of choice even now, with a wide array of applications relying on it \citep{al2011d, liu2014costar, sun2016three, belanova2018path}. Simply put, D* and D* Lite aim to re-expand and develop only parts of the search space relevant to registered graph changes and the potential new current state the agent is in. The following provides an overview of how D* Lite operates. First, it computes a path from $s_{start}$ to $s_{goal}$ using backwards A*. Backwards A* works in the same way as A* except the search is done backwards: from the goal state $s_{goal}$ to the start state $s_{start}$. 
Furthermore, a consistency criterion is used for each state $s$ explored. This criterion compares the cost of the optimal path found from state $s$ to $s_{goal}$ with the minimum, over the neighboring states, of the cost to $s_{goal}$ from the neighbor plus the transition cost to that neighbor. The state is said to be consistent if the two are equal. Otherwise, it is said to be inconsistent (overconsistent if the former is higher, underconsistent if it is lower). When a change is observed in the graph while the agent is proceeding on the computed shortest path, edges are updated, and the resulting inconsistent states are re-processed in a defined priority order. Once the process is over, the path has been repaired and is optimal again. Sometimes, observed changes that have no impact on the optimality of the path still require computations by D* and D* Lite to guarantee optimality. Such is the case if some edge weights, all of which are outside the computed path, increase. The algorithm still needs to reprocess states that become inconsistent due to their connection to the changed edges before guaranteeing optimality, even though the computed path is clearly still optimal. To address this issue, a modified version of D*, Delayed D* \citep{ferguson2005delayed}, has been proposed. To avoid useless computations in such situations, Delayed D* initially ignores underconsistent states and focuses on overconsistent states first. This enables it to potentially save a lot of computation in such cases, making it more suited than D* Lite in some planning domains \citep{ferguson2005delayed}. \index{Algorithms! Incremental Planning! D*} \index{Algorithms! Incremental Planning! D*Lite} \index{Algorithms! Incremental Planning! Delayed D*} Another incremental approach worthy of note is Lifelong Planning A* (LPA*) \index{Algorithms! Incremental Planning! LPA*} \citep{koenig2002incremental}.
LPA* starts by running an A* instance to determine an optimal path from a start state $s_{start}$ to a goal state $s_{goal}$. Once edge changes are observed, it uses previous search information to re-compute an optimal path more efficiently, in a similar way to D*. The main difference from D* is that LPA* does not allow $s_{start}$ and $s_{goal}$ to be modified. In other words, the approach can only be used before the agent starts moving on the path, in case some last-minute changes are learned (presumably remotely). It is thus unsuitable in situations where the agent observes changes as it is already moving on a computed path and needs to adjust the plan from a new position. More recently, \citep{przybylski2017d} proposed the D* Extra Lite \index{Algorithms! Incremental Planning! D* Extra Lite} algorithm. Similarly to D* Lite, D* Extra Lite is based on A* and propagates changes to the previously processed search space in order to re-optimize a path. Unlike D* Lite, the reinitialization of the affected search space is achieved by cutting search tree branches. This allows the algorithm to often outperform D* Lite, with experiments suggesting it can be up to almost twice as fast on typical path-planning problems. The previously described approaches operate on graphs, and are therefore well-suited to, for example, grid environments where agents can move at 45 or 90 degree angles. Such a representation of the environment can cause the optimal path in the graph to actually be sub-optimal in reality. In \textit{any-angle} path-planning, the agent can take any angle to move around in its environment. Some incremental planning works have also emerged for such environments. \citep{ferguson2007field} introduced Field D*, \index{Algorithms! Incremental Planning! Field D*} an adaptation of the D* algorithm for any-angle path-planning, which reportedly returns a solution path often close to the optimal solution. Other works include Theta* \index{Algorithms! Planning!
Theta*} from \citep{nash2007theta}. Based on A*, Theta* is shown to give even shorter paths than Field D*, though not necessarily optimal ones either. However, Theta* lacks Field D*'s fast replanning capabilities. Finally, \citep{harabor2016optimal} introduced ANYA, \index{Algorithms! Incremental Planning! ANYA} which they show to be significantly faster than previous approaches. Moreover, ANYA is also guaranteed to find optimal any-angle paths. \subsection{Anytime Planning} In some situations, a path needs to be computed quickly. Such is the case, for example, for a moving agent that detects a possible obstruction on its planned path. The agent requires a solution as fast as possible to avoid coming to a complete stop and wasting time while a path is re-computed. Computing a new optimal path can quickly become very hard, even for incremental algorithms, if the number of search states that must be re-processed is high. In such a situation, it can be acceptable to first compute, very quickly, a solution which is not guaranteed to be optimal, so that the agent can keep moving. In the remaining time available (\emph{e.g. } the time for the agent to reach decisive points), the previously computed (likely sub-optimal) path can be improved. \textit{Anytime} algorithms, sometimes referred to as \textit{Hierarchical} path-planners, are designed to address that problem. They build a likely sub-optimal path very quickly and improve it in the remaining time available. There have been various works on anytime algorithms. Early works include \citep{zilberstein1995approximate,dean1988analysis}. Then, \citep{likhachev2003ara} introduced the well-known Anytime Repairing A* (ARA*). \index{Algorithms! Anytime Planning! ARA*} This algorithm consists of successive weighted A* searches. In a weighted A* search, the heuristic function $h$ is multiplied by an inflation factor $\epsilon > 1$. In doing so, a substantial speedup is often obtained at the cost of solution optimality.
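The weighted A* search just described can be sketched as follows. This is a minimal illustration on a toy four-connected grid, with names and setup chosen by us rather than taken from the cited works; the solution cost of a weighted A* search is at most $\epsilon$ times the optimal cost.

```python
import heapq

def weighted_astar(start, goal, neighbors, cost, h, eps=1.0):
    """A* with an inflated heuristic f = g + eps*h. With eps > 1 the search
    is typically faster but only eps-suboptimal. Illustrative sketch."""
    open_heap = [(eps * h(start), 0.0, start)]
    best_g = {start: 0.0}
    expanded = 0
    while open_heap:
        _, g_s, s = heapq.heappop(open_heap)
        if s == goal:
            return g_s, expanded
        if g_s > best_g.get(s, float("inf")):
            continue  # stale heap entry
        expanded += 1
        for sp in neighbors(s):
            g_sp = g_s + cost(s, sp)
            if g_sp < best_g.get(sp, float("inf")):
                best_g[sp] = g_sp
                heapq.heappush(open_heap, (g_sp + eps * h(sp), g_sp, sp))
    return float("inf"), expanded

# 6x6 four-connected grid, unit step costs, Manhattan-distance heuristic.
def neighbors(s):
    x, y = s
    steps = ((1, 0), (-1, 0), (0, 1), (0, -1))
    return [(x + dx, y + dy) for dx, dy in steps
            if 0 <= x + dx <= 5 and 0 <= y + dy <= 5]

h = lambda s: abs(5 - s[0]) + abs(5 - s[1])
cost = lambda u, v: 1.0
optimal, n_opt = weighted_astar((0, 0), (5, 5), neighbors, cost, h, eps=1.0)
inflated, n_inf = weighted_astar((0, 0), (5, 5), neighbors, cost, h, eps=2.0)
print(optimal, inflated)  # both 10.0 on this obstacle-free grid
```

On this easy instance both searches find the cost-10 path, but the inflated search expands no more states than the exact one; ARA* exploits exactly this trade-off by decreasing $\epsilon$ over successive searches.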
ARA* executes successive weighted A* searches with a decreasing inflation factor $\epsilon$, each of which reuses information from previous searches and provides a sub-optimality bound. During each weighted A* search, ARA* considers only states whose costs from the previous search may no longer be valid due to the new, lower $\epsilon$ value. Another anytime algorithm is Anytime Weighted A* (AWA*) \index{Algorithms! Anytime Planning! AWA*} \citep{hansen2007anytime}, which is very similar to ARA*. Its authors show that AWA* is seven times faster than ARA* on certain domains, such as the Eight-Puzzle sliding-tile planning problem. From another perspective, \citep{likhachev2005anytime} introduced Anytime Dynamic A* (AD*). \index{Algorithms! Anytime Planning! AD*} Unlike previous approaches, AD* does not differentiate between incremental and anytime approaches. Instead, it provides a framework which combines the benefits of both to efficiently provide solutions to hard dynamic problems. Experiments, carried out on a robotic arm manipulating an end-effector through a dynamic environment, show AD* generating significantly better trajectories than ARA* and D* Lite within the same time budget. \citep{botea2004near} presented Hierarchical Path-Finding A* (HPA*). HPA* \index{Algorithms! Anytime Planning! HPA*} divides the environment into square clusters with connections, producing an abstract search graph which is searched to find a shortest path. Another approach, Partial-Refinement A* (PRA*) \index{Algorithms! Anytime Planning! PRA*} \citep{sturtevant2005partial}, builds cliques of nodes to construct a multi-level search space. The original problem is reduced to finding a set of nodes on the optimal shortest path. However, both HPA* and PRA* address homogeneous agents in homogeneous-terrain environments. An extension of HPA*, Annotated Hierarchical A* (AHA*), \index{Algorithms! Anytime Planning!
AHA*} has been proposed by \citep{harabor2008hierarchical}. It is still one of the most advanced anytime path-planning algorithms to date. AHA* is able to deal with heterogeneous multi-terrain environments by reducing them to simpler single-size, single-terrain search problems. The authors' experiments suggest that the algorithm returns near-optimal solutions for problems in a wide range of environments, with an exponentially lower search effort than A*. \subsection{Probabilistic Methods for Path-Planning} In high-dimensional search spaces, probabilistic approaches can provide a solution quickly, but not necessarily an optimal one. We describe two popular approaches, Probabilistic Roadmaps (PRM) \index{Algorithms! Probabilistic Methods! PRM} \citep{kavraki1996probabilistic} and Rapidly-exploring Random Trees (RRT) \citep{lavalle1998rapidly}. The intuition behind PRMs is to generate random 'points' in the search space, connect these points to nearby points, and repeat the procedure until a path can be computed from the start state $s_{start}$ to the goal state $s_{goal}$ by moving along these points. More specifically, PRM starts by generating random states. It checks whether the generated states are valid, \emph{i.e. } whether they possess no contradictory features (\emph{e.g. } for the robot manipulator in Figure~\ref{fig_motion_planning}, one would need to check that the combination of angles does not leave the robot arm in an impossible position). Invalid states are removed, and the remaining states are named "milestones". Each milestone is connected to its $k$-nearest neighboring states, $k$ being a parameter. The process is repeated until the roadmap (the milestones and their connections) becomes dense enough and a connection between $s_{start}$ and $s_{goal}$ is created. A shortest path on the roadmap is then computed between $s_{start}$ and $s_{goal}$. PRM is \textit{probabilistically complete}, \emph{i.e. 
} as the roadmap building process goes on in time, the probability that the algorithm will find an existing path from $s_{start}$ to $s_{goal}$ tends to 1. Figure~\ref{fig_prm} illustrates a PRM. Notable follow-up works include Hierarchical PRMs \citep{collins2003hprm}, a variant of PRMs refined recursively, which provides better performance at finding narrow passages than uniform sampling. Other works have attempted to improve the efficiency of PRMs by altering the state sample generation process. Recently, \citep{kannan2016robot} built a PRM variant with adaptive sampling. They dynamically assign probabilities to different samplers based on the environment and use the one with the highest probability. \citep{ichter2020learned} proposed to learn to identify 'critical' states with a neural network from local environment features, \emph{i.e. } states that are key to building the wanted path (\emph{e.g. } doorways in an office environment). They draw these critical samples more often and are thus able to build a hierarchical roadmap more efficiently, with reportedly up to three orders of magnitude improvement in computation time. \begin{figure}[tbh] \centering \includegraphics[scale=0.5]{intro_chapter/pictures/PRM.pdf} \caption[A Probabilistic roadmap]{\textbf{A Probabilistic roadmap.} The white space represents feasible states; purple points are milestones. Point 's' is the start state, point 'g' the goal state. The shortest path on the roadmap linking s to g is shown in blue. Credit for the picture goes to Jean-Claude Latombe. Source: \citep{probroadmap}} \label{fig_prm} \end{figure} An RRT, \index{Algorithms! Probabilistic Methods! RRT} on the other hand, starts by rapidly growing a tree from the start state $s_{start}$, which is considered to be its root. To do so, RRT repeatedly draws a randomly sampled state $s_{rand}$ and attempts to connect $s_{rand}$ to the nearest state in the tree via a feasible path.
When successful, RRT expands the tree further with the addition of $s_{rand}$ and the intermediary states found on the path. The sampling of random states is done in a way which expands the tree towards unsearched areas of the search space. Furthermore, for a randomly generated state $s_{rand}$, the length of its connection to the tree is limited by a growth factor. If the total length of the connection is above this distance, $s_{rand}$ is dropped and $s_{rand}'$, the state at the maximally allowed distance from the tree along the connection, is selected instead. In this manner, the position of the randomly generated samples determines towards which areas the tree gets expanded, while the growth factor limits how far the tree is expanded in those directions. A drawback of RRTs is that they often converge to non-optimal solutions. To address this issue, \citep{karaman2010incremental} introduced RRT*, \index{Algorithms! Probabilistic Methods! RRT*} which they showed to almost surely converge towards the optimal path without any significant overhead over RRT. We describe some notable follow-up works which are variants of RRT*. \citep{adiyatov2013rapidly} proposed a variant called RRT* Fixed Nodes (RRT*FN). \index{Algorithms! Probabilistic Methods! RRT*FN} Since there is no limit to the number of nodes RRT* can develop, the algorithm is not suited to embedded systems with limited memory. RRT*FN aims to solve this issue by using a node removal procedure which limits the number of nodes developed without hindering the convergence of the algorithm towards an optimal solution. \citep{gammell2014informed} proposed Informed RRT*, \index{Algorithms! Probabilistic Methods! Informed-RRT*} a variant which uses a heuristic to shrink the planning problem to subsets of the original domain. Informed RRT* reportedly outperforms RRT* in rate of convergence, final solution cost, and ability to find difficult passages.
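The RRT extension step described above (nearest-neighbour lookup followed by growth-factor clipping) can be sketched as follows. This toy version works in an obstacle-free square; the small goal bias is a common practical tweak we add for brevity, not part of the original algorithm.

```python
import math
import random

def steer(nearest, rand_pt, growth):
    """Clip rand_pt so the new edge is at most `growth` long (the growth factor)."""
    d = math.dist(nearest, rand_pt)
    if d <= growth:
        return rand_pt
    t = growth / d
    return (nearest[0] + t * (rand_pt[0] - nearest[0]),
            nearest[1] + t * (rand_pt[1] - nearest[1]))

def rrt(start, goal, growth=0.5, goal_tol=0.7, iters=1500, seed=0):
    """Toy obstacle-free RRT in a 5x5 square (illustrative sketch)."""
    rng = random.Random(seed)
    tree = {start: None}                       # node -> parent
    for i in range(iters):
        # Every 20th sample is the goal itself: a simple goal bias.
        rand_pt = goal if i % 20 == 0 else (rng.uniform(0, 5), rng.uniform(0, 5))
        nearest = min(tree, key=lambda n: math.dist(n, rand_pt))
        new = steer(nearest, rand_pt, growth)
        tree[new] = nearest                    # no obstacles, always collision-free
        if math.dist(new, goal) <= goal_tol:   # goal reached: backtrack the path
            path = [new]
            while tree[path[-1]] is not None:
                path.append(tree[path[-1]])
            return path[::-1]
    return None

path = rrt((0.0, 0.0), (4.0, 4.0))
print(len(path))
```

Every consecutive pair of states on the returned path is at most one growth factor apart, which is exactly the clipping behaviour described in the text.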
More recently, \citep{lai2019balancing} presented Rapidly-exploring Random disjointed-Trees* (RRdT). \index{Algorithms! Probabilistic Methods! RRdT} It is an RRT* variant which explores the search space with locally exploring disjointed trees and actively balances global exploration and local-connectivity exploitation. This is done by casting the problem as a multi-armed bandit problem, and leads to improved performance. \section{Graph Representation Learning with Graph Neural Networks} \subsection{Neural Networks} We start by giving a brief description of neural networks and convolutional neural networks, the architecture types from which graph neural networks originated. Neural Networks (NNs) \index{Neural Networks! MLP} allow abstraction of data by using models with trainable parameters coupled with non-linear transformations of the input data. In spite of the complex structure of an NN, the main mechanism is straightforward. A \emph{feedforward neural network}, or \emph{Multi-Layer Perceptron (MLP)}, with $L$ layers describes a function $f_{{\boldsymbol \theta}}(\mathbf{x}) = f(\mathbf{x}; {\boldsymbol \theta}): \mathbb{R}^{d_{\mathbf{x}}} \rightarrow \mathbb{R}^{d_{\hat{\mathbf{y}}}}$ that maps an input vector $\mathbf{x} \in \mathbb{R}^{d_{\mathbf{x}}}$ to an output vector $\hat{\mathbf{y}} \in \mathbb{R}^{d_{\hat{\mathbf{y}}}}$. Vector $\mathbf{x}$ is the input data that we need to analyze ($\emph{e.g. }$ an image, a signal, a graph, etc.), while $\hat{\mathbf{y}}$ is the expected decision from the NN ($\emph{e.g. }$ a class index, a heatmap, etc.). The function $f$ performs $L$ successive operations over the input $\mathbf{x}$: \begin{align} h^{(l)} = f^{(l)}(h^{(l-1)}; \theta^{(l)}), \qquad l=1,\dots,L \label{intro_eq:layers} \end{align} where $h^{(l)}$ is the hidden state of the network ($\emph{i.e. 
}$ features from intermediate layers, corresponding to intermediary values) and $f^{(l)}(h^{(l-1)}; \theta^{(l)}): \mathbb{R}^{d_{l-1}} \mapsto \mathbb{R}^{d_{l}}$ is the mapping function performed at layer $l$; $h^{(0)}=\mathbf{x}$. In other words: $$f(\mathbf{x})=f^{(L)}(f^{(L-1)}(\dots f^{(1)}(\mathbf{x})\dots))$$ \noindent Each intermediate mapping depends on the output of the previous layer and on a set of trainable parameters $\theta^{(l)}$. We denote by ${{\boldsymbol \theta}=\{\theta^{(1)},\dots,\theta^{(L)}\}}$ the entire set of parameters of the network. The intermediate functions $f^{(l)}(h^{(l-1)}; \theta^{(l)})$ have the form: \begin{align} f^{(l)}(h^{(l-1)}; \theta^{(l)}) = \sigma\left( \theta^{(l)} h^{(l-1)} + b^{(l)} \right) , \label{intro_eq:linear} \end{align} where $\theta^{(l)}\in\mathbb{R}^{d_l\times d_{l-1}}$ and $b^{(l)}\in\mathbb{R}^{d_l}$ are the trainable weights and the bias, while $\sigma(\cdot)$ is an \emph{activation} function, \emph{i.e. } a function applied individually to each element of its input vector to introduce non-linearities. Intermediate layers are thus a combination of linear classifiers followed by an element-wise non-linearity. Layers of this form are termed \emph{fully-connected layers}. NNs are typically trained using labeled training data from a dataset, $\emph{i.e. }$ a set of input-output pairs $(\mathbf{x}_i, \mathbf{y}_i)$, $i=1,\dots,N$, where $N$ is the size of the dataset. During training, we aim to minimize the training loss: \begin{align} \mathcal{L}({\boldsymbol \theta}) = \frac1N\sum_{i=1}^N \ell(\hat{\mathbf{y}}_i,\mathbf{y}_i) , \label{intro_eq:loss} \end{align} where $\hat{\mathbf{y}}_i=f(\mathbf{x}_i; {\boldsymbol \theta})$ is the estimation of $\mathbf{y}_i$ by the NN and ${\ell: \mathbb{R}^{d_L}\times \mathbb{R}^{d_L} \mapsto \mathbb{R}}$ is a loss function which measures the distance between the true label $\mathbf{y}_i$ and the estimated one $\hat{\mathbf{y}}_i$.
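Equations \ref{intro_eq:layers} and \ref{intro_eq:linear} translate directly into a few lines of NumPy; the layer sizes and the ReLU activation below are illustrative choices, not prescribed by the text.

```python
import numpy as np

def relu(z):
    """Element-wise activation sigma."""
    return np.maximum(0.0, z)

def mlp_forward(x, params):
    """Forward pass: h^(l) = sigma(theta^(l) h^(l-1) + b^(l)), l = 1..L."""
    h = x                                  # h^(0) = x
    for theta, b in params:
        h = relu(theta @ h + b)
    return h

rng = np.random.default_rng(0)
dims = [4, 8, 3]                           # d_x = 4, one hidden layer of 8, d_y = 3
params = [(rng.standard_normal((dims[l + 1], dims[l])), np.zeros(dims[l + 1]))
          for l in range(len(dims) - 1)]
y_hat = mlp_forward(rng.standard_normal(4), params)
print(y_hat.shape)  # (3,)
```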
Through \emph{backpropagation}, the information from the loss is transmitted to all parameters of ${\boldsymbol \theta}$, and the gradient of each $\theta^{(l)}$ w.r.t.\ the loss is computed. The optimal values of the parameters ${\boldsymbol \theta}$ are then searched for via Stochastic Gradient Descent (SGD), which updates ${\boldsymbol \theta}$ iteratively towards the minimization of $\mathcal{L}$. The input data is randomly grouped into mini-batches and the parameters are updated after each pass. The entire dataset is passed through the network multiple times, with the parameters being updated after each pass, until a satisfactory optimum is reached. In this manner, all the parameters of the NN are learned jointly, and the pipeline allows the network to learn to extract features and to learn more abstract features on top of the representations from lower layers. In recent years, NNs, in particular Deep Neural Networks (DNNs), have achieved major breakthroughs in various areas. Some examples include image classification \citep{krizhevsky_2012}, \citep{simonyan_2014}, \citep{he_2016}, object detection \citep{ren_2015}, \citep{redmon_2016}, \citep{he_2017}, semantic segmentation \citep{long_2015}, neural machine translation \citep{sutskever_2014}, computer games \citep{silver_2016}, \citep{silver_2017} and many other fields. While the fundamental principles of training neural networks have been known for many years, the recent improvements are due to a mix of the availability of large datasets, advances in GPU-based computation, and increased shared community effort. Similarly to NNs, DNNs enable a high number of levels of abstraction of data by using models with millions of trainable parameters coupled with non-linear transformations of the input data. It is known that a sufficiently large neural network can approximate any continuous function \citep{funahashi_1989}, although the cost of training such a network can be prohibitive.
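The mini-batch SGD loop described above can be illustrated on a model simple enough that the gradient can be written by hand: a single linear layer trained with a mean-squared-error loss. All names, sizes and hyperparameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
true_W = np.array([[2.0, -1.0]])             # the mapping we want to recover
X = rng.standard_normal((256, 2))            # dataset of N = 256 inputs
Y = X @ true_W.T                             # labels from the true mapping

W = np.zeros((1, 2))                         # trainable parameters theta
lr, batch = 0.1, 32
for epoch in range(50):                      # several passes over the dataset
    perm = rng.permutation(len(X))           # random mini-batch grouping
    for i in range(0, len(X), batch):
        idx = perm[i:i + batch]
        err = X[idx] @ W.T - Y[idx]          # y_hat - y on the mini-batch
        grad = 2 * err.T @ X[idx] / len(idx) # hand-derived gradient of the MSE
        W -= lr * grad                       # SGD update step
print(np.round(W, 3))                        # converges to [[ 2. -1.]]
```

In a deep network the gradient of each layer's parameters would instead be obtained by backpropagation, but the update rule applied to them is exactly this one.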
Convolutional Neural Networks (CNNs) \index{Convolutional Neural Networks! CNN} ~\citep{fukushima_1982, lecun_1995} are a generalization of multi-layer perceptrons for 2D data. In convolutional layers, groups of parameters (which can be seen as small fully-connected layers) are slid across an input vector, similarly to filters in image processing. This significantly reduces the number of parameters of the network, since they are now \textit{shared} across locations, whereas in fully-connected layers there is a parameter for each element of the input. Since the convolutional units act locally, the input to the network can have a variable size. A convolutional layer is also a combination of linear classifiers (equation \ref{intro_eq:linear}); the output of such a layer is 2D and is called a \textit{feature map}. CNNs are highly popular in most recent approaches to computer vision problems. Figure~\ref{intro_fig_fig_cnn} shows a CNN architecture. \begin{figure}[tbh] \centering \includegraphics[scale=0.7]{intro_chapter/pictures/cnn.pdf} \caption[A convolutional neural network] {\textbf{A convolutional neural network.} The input of this CNN is an image, the output a prediction of a class among the following set of classes: \{dog,cat,boat,bird\}. Source: \citep{cnnwebsite}} \label{intro_fig_fig_cnn} \end{figure} \subsection{Graph Neural Networks} In this section, we discuss various architectures of Graph Neural Networks (GNNs). GNNs are generalizations of CNNs to non-Euclidean data and aim to learn graph representations. Although no common groups have been precisely defined for GNNs, they tend to belong to three categories: \begin{itemize} \item \textbf{Converging Recurrent Graph Neural Networks (CRGNN)}. These architectures of neural networks mostly include the first works on extending NNs to graphs. \item \textbf{Graph Convolutional Networks (GCN)}. 
These networks are mostly inspired by the application of CNNs to graphs and are well-suited to supervised learning for node classification. \item \textbf{Recurrent Graph Neural Networks (RGNN)}. RGNNs are designed to process input graphs for which a temporal sequence ordering exists. They should not be confused with CRGNNs, which are 'recurrent' in the sense that they apply a process on the input graph repeatedly until convergence. \end{itemize} These networks have different processing architectures but share the following common point. They take as input a graph in which nodes, and possibly edges, have features. They use intermediary layers, each of which produces new features for nodes (and possibly edges, depending on the architecture type). Let $\mathcal{G=(V,E)}$ be an input graph of these network architectures, with its set of nodes $\mathcal{V}=(v_1,v_2,...,v_n)$ and its set of edges $\mathcal{E}$. We denote by $A$ the adjacency matrix of $\mathcal{G}$, by $X= (x_{v_1}, x_{v_2}, ..., x_{v_n})$ the matrix of node feature vectors of the input graph $\mathcal{G}$, and by $H^{(l)}=(h_{v_1}^{(l)}, h_{v_2}^{(l)}, ..., h_{v_n}^{(l)})$ the matrix of node feature vectors after the input graph $\mathcal{G}$ has been processed by $l$ layers. Each $x_{v_i}$ is the feature vector of node $v_i$ of the input graph $\mathcal{G}$, and each $h_{v_i}^{(l)}$ is the feature vector of node $v_i$ after the $l^{th}$ layer. Finally, as the layer architecture types we describe next apply a similar process to each graph node, the same layer can be used on input graphs with any number of nodes, although the number of features per node needs to be fixed. In other words, a GNN with layers of these architecture types can process input graphs with any number of nodes.
\subsubsection{Converging Recurrent Graph Neural Networks} The idea behind CRGNNs was initially introduced in \citep{sperduti1997supervised}, with a contribution termed \textit{generalized recursive neuron}, extending the idea of applying neural networks to structured inputs. Those structures were essentially limited to acyclic graphs because of computational constraints at the time. In follow-up works, \citep{scarselli2008graph} extend this with an architecture capable of processing acyclic, cyclic, directed and undirected graphs. To that end, neighborhood information among graph nodes is exchanged repeatedly until convergence. The following formula describes how information is updated from layer $l-1$ to layer $l$ for node $v$: \begin{equation} \label{intro_eq_CRGNN} h_{v}^{(l)} = \sum_{w \in N(v)} f(x_{v}, x_{e_{vw}}, x_w, h_w^{(l-1)}) \end{equation} \noindent where: \begin{itemize} \item $h_v^{(l)}$ and $h_v^{(l-1)}$ respectively designate the feature vector of node $v$ after layer $l$ and after layer $l-1$; $h_v^{(0)} = x_v$. \item $N(v)$ designates the nodes connected to node $v$ by an edge. \item $x_{e_{vw}}$ is the feature vector of the edge connecting $v$ and $w$. \item $f$ is a parametric function, called the \textit{local transition function} by Scarselli \emph{et al.} \end{itemize} Intuitively, the information update from layer $l-1$ to layer $l$ proceeds in the following manner for each node $v$. For each neighboring node $w$, the parametric function $f$ takes as input the following elements: the input feature vectors of node $v$, of edge $(v,w)$ and of node $w$, as well as the feature vector of node $w$ after layer $l-1$. The sum of the outputs of $f$ over the neighbors of $v$ makes the new feature vector of node $v$ after layer $l$.
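Equation \ref{intro_eq_CRGNN} can be sketched as follows, assuming scalar node and edge features for brevity and a toy transition function $f$ chosen purely for illustration (it ignores some of its arguments, which the general form does not require).

```python
def crgnn_step(h, x, edge_feat, adj, f):
    """One application of the CRGNN update: for every node v,
    h_v <- sum over neighbours w of f(x_v, x_evw, x_w, h_w)."""
    return {v: sum(f(x[v], edge_feat[(v, w)], x[w], h[w]) for w in adj[v])
            for v in adj}

adj = {0: [1], 1: [0, 2], 2: [1]}                 # a 3-node path graph
x = {0: 1.0, 1: 2.0, 2: 3.0}                      # scalar input features
e = {(u, w): 1.0 for u in adj for w in adj[u]}    # scalar edge features
f = lambda xv, evw, xw, hw: 0.3 * hw + 0.1 * xw   # toy transition function
h = dict(x)                                       # h^(0) = x
h = crgnn_step(h, x, e, adj, f)
print(h)   # node 1 aggregates from both neighbours, nodes 0 and 2 from one
```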
Moreover, to ensure convergence when layers are applied repeatedly, the function $f$ needs to be a \textit{contraction map}, which reduces the distance between its inputs, \emph{i.e. } it satisfies the following property: $$ \forall z, ~~ \exists \mu \in \, ]0,1[ ~~ \text{s.t.} ~~ \forall x, y : ~~ \| f(x,z) - f(y,z) \| \leq \mu \|x-y\| $$ where $\| \cdot \|$ denotes a vectorial norm. A convergence criterion also needs to be defined. Layers are applied recursively on each node in parallel until this criterion is satisfied. The converged node feature vectors $h_{v_i}^{*}$ of each node $v_i$ can then be forwarded to an output layer to perform either node classification tasks, edge classification tasks (by using, for example, an MLP which takes as input the converged features $h_{v_i}^{*}$ and $h_{v_j}^{*}$ and outputs a value for edge $(v_i,v_j)$), or graph-level predictions (\emph{e.g. } predicting a class among a portfolio of classes for the input graph, which is typically done by using one or multiple \textit{pooling} operations such as $max$ or $min$ to reduce the converged graph to a fixed size, enabling the use, for example, of a fully-connected output layer). A notable issue with this CRGNN architecture is the number of layers which need to be applied to meet the convergence criterion, and the possibly ensuing complexity. More recently, a framework was proposed in \citep{li2015gated} to address this issue, based on gated recurrent units \citep{cho2014learning}. This allows Li \emph{et al.}~to only require a fixed number of layers to process an input graph, thereby lifting the constraints associated with the convergence criterion. Nevertheless, the approach in \citep{li2015gated} requires Back-Propagation Through Time (BPTT) to compute gradients when using the model in a loss function, which can cause severe overhead.
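The role of the contraction condition can be illustrated with a toy update that is a contraction by construction: repeated application approaches a unique fixed point, and iteration stops once successive feature values change by less than a tolerance. All values below are illustrative.

```python
# Toy convergence loop: h_v <- 0.5 * mean(neighbour h) + x_v on a 3-node
# path graph. The 0.5 factor makes the update a contraction (Lipschitz
# constant 0.5), so the iteration converges geometrically.
adj = {0: [1], 1: [0, 2], 2: [1]}
x = {0: 1.0, 1: 1.0, 2: -1.0}
h = dict(x)                                   # h^(0) = x
for n_iters in range(1, 1000):
    new_h = {v: 0.5 * sum(h[w] for w in adj[v]) / len(adj[v]) + x[v]
             for v in adj}
    delta = max(abs(new_h[v] - h[v]) for v in adj)
    h = new_h
    if delta < 1e-10:                         # convergence criterion
        break
print(n_iters, {v: round(h[v], 4) for v in h})
```

A few dozen iterations suffice here; in a real CRGNN the number of applications needed to converge is exactly the complexity issue raised above.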
CRGNNs are mostly pioneering works which inspired the architectures we describe next, and even the newest CRGNN approach presents computational issues due to BPTT. \subsubsection{Graph Convolutional Networks} Unlike CRGNNs, where a fixed recurrent model is applied repeatedly, GCNs use a fixed number of graph convolutional layers, each of which is different and has its own set of trainable parameters. GCNs are inspired by CNNs. They generalize their operations from grid-structured data (images) to graph-structured data. There are two main categories of GCNs: \textit{spectral-based} and \textit{spatial-based}. Spectral-based approaches use signal processing to define the neighborhood of a node and the ensuing feature update process, while spatial-based approaches rely directly on spatially close neighbors in the graph. \paragraph{Spectral-based GCNs} Spectral-based architectures use the spectral representation of graphs and are thus limited to undirected graphs. They were introduced in \citep{bruna2014}. The following layer propagation rule is used to compute $H^{(l)}$, the matrix of all node feature vectors at layer $l$, from $H^{(l-1)}$: \begin{equation} \label{intro_eq_bruna} H^{(l)} = \sigma (U g_{\theta}(\Lambda) U^{T} H^{(l-1)}) \end{equation} Here, $U$ denotes the matrix of eigenvectors of the normalized graph Laplacian $L = I_N - D^{- \frac{1}{2}} A D^{- \frac{1}{2}}$ ($A$ being the adjacency matrix, $D$ the node degree matrix) and $\Lambda$ the diagonal matrix of its eigenvalues. Function $g_{\theta}(\Lambda)= diag_{\theta}(\Lambda)$ is a filter applied to the eigenvalues with a set of parameters $\theta$. Lastly, $\sigma$ is an activation function. A problem with this approach is that it results in filters that are not spatially localized, making it unable to extract local features independently of graph size. In a follow-up work, \citep{defferrard2016convolutional} introduce \textit{ChebNet}. The filters proposed in ChebNet are localized in space.
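Equation \ref{intro_eq_bruna} can be sketched with an explicit eigendecomposition of the normalized Laplacian; the low-pass filter $g_{\theta}$ below is an arbitrary illustrative choice rather than a learned one, and the activation $\sigma$ is omitted to keep the spectral step visible.

```python
import numpy as np

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)        # 3-node path graph
D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(1)))
L = np.eye(3) - D_inv_sqrt @ A @ D_inv_sqrt   # normalized graph Laplacian
lam, U = np.linalg.eigh(L)                    # eigenvalues and eigenvectors

H = np.array([[1.0], [0.0], [-1.0]])          # one feature per node
g_theta = np.diag(np.exp(-lam))               # illustrative low-pass filter
H_out = U @ g_theta @ U.T @ H                 # filter in the spectral domain
print(H_out.round(3).ravel())
```

Note that the whole eigenbasis $U$ of the graph enters the computation, which is why such filters are not spatially localized.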
Their idea is to replace $g_{\theta}(\Lambda)$ with a truncated expansion in Chebyshev polynomials $T_k$ of the eigenvalues $\Lambda$: $g_{\theta}(\Lambda) = \sum_{k=0}^{K}\theta_k T_k(\Tilde{\Lambda}) $ where $\Tilde{\Lambda} = \frac{2 \Lambda}{\lambda_{max}} - I_n $, the $\theta_k \in \mathbb{R}$ are learnable Chebyshev coefficients, and $\lambda_{max}$ denotes the largest eigenvalue. Chebyshev polynomials are recursively defined by: $T_k(x) = 2 x T_{k - 1}(x) - T_{k - 2}(x)$; $T_0(x) = 1$; $T_1(x) = x$. After further simplifications, the layer propagation rule becomes: \begin{equation} \label{intro_eq_chebnet} H^{(l)} = \sigma (\sum_{k=0}^{K} \theta_k T_k(\Tilde{L}) H^{(l-1)}) \end{equation} \index{Graph Neural Networks! Spectral-based! ChebNet} \noindent where $\Tilde{L}=\frac{2 L}{\lambda_{max}}- I_n$. More recently, in \citep{kipf_2017}, the authors introduce \textit{GCN} by applying a first-order approximation of ChebNet ($K=1$ and $\lambda_{max} = 2$). This enables them to avoid overfitting local neighborhood structures on graphs with unbalanced node degree distributions. Equation \ref{intro_eq_chebnet} becomes: \index{Graph Neural Networks! Spectral-based! GCN} \begin{equation} \label{intro_eq_gcn_unsimp} H^{(l)} = \sigma (\theta_0 H^{(l-1)} - \theta_1 D^{- \frac{1}{2}} A D^{- \frac{1}{2}} H^{(l-1)}) \end{equation} An additional assumption is made in GCN that $\theta = \theta_0 = - \theta_1$ to further reduce overfitting, and the equation becomes: \begin{equation} \label{intro_eq_gcn_simp_onelayer} H^{(l)} = \sigma (\theta ( I_n + D^{- \frac{1}{2}} A D^{- \frac{1}{2}}) H^{(l-1)} ) \end{equation} Finally, a \textit{re-normalization trick} is used to avoid numerical instabilities such as exploding or vanishing gradients: $I_n + D^{- \frac{1}{2}} A D^{- \frac{1}{2}} \xrightarrow{} \Tilde{D}^{- \frac{1}{2}} \Tilde{A} \Tilde{D}^{- \frac{1}{2}}$, with $\Tilde{A} = A + I_n$ and $\Tilde{D}$ being the degree matrix of $\Tilde{A}$.
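A minimal NumPy sketch of one graph convolution with the re-normalization trick, including a trainable weight matrix as in the full GCN layer, can look as follows; sizes and the ReLU activation are illustrative choices.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer with the re-normalization trick:
    sigma(D~^{-1/2} A~ D~^{-1/2} H W), with sigma = ReLU. Illustrative sketch."""
    A_tilde = A + np.eye(len(A))               # add self-loops: A~ = A + I
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_tilde.sum(1)))
    return np.maximum(0.0, D_inv_sqrt @ A_tilde @ D_inv_sqrt @ H @ W)

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)         # 3-node path graph
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])                     # C = 2 input features per node
rng = np.random.default_rng(0)
W = rng.standard_normal((2, 4))                # F = 4 output features per node
print(gcn_layer(A, H, W).shape)  # (3, 4)
```

Unlike the spectral construction, no eigendecomposition is needed: the renormalized propagation is a sparse matrix product over one-hop neighborhoods.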
Kipf and Welling generalize this definition to an input $H^{(l-1)} \in \mathbb{R}^{N \times C}$, where $C$ is the number of features per node at layer $l-1$ and $N$ the number of nodes in the input graph. Moreover, they use a weight matrix $W \in \mathbb{R}^{C \times F}$, where $F$ is the desired number of features per node after the layer has been applied. The equation becomes: \begin{equation} \label{intro_eq_gcn} H^{(l)} = \sigma (\Tilde{D}^{- \frac{1}{2}} \Tilde{A} \Tilde{D}^{- \frac{1}{2}} H^{(l-1)} W ) \end{equation} \noindent On a side note, during the information update for a node $v$, GCN takes a weighted sum of the feature vectors of its neighbors, where the weight for a neighbor $w$ is given by $\frac{1}{\sqrt{deg(v) \times deg(w)}}$, $deg(v)$ referring to the degree of node $v$. Several linear combinations are then applied, to create as many output features as needed for $v$ in the next layer. \begin{figure}[tb] \centering \includegraphics[scale=0.8]{intro_chapter/pictures/gcn.pdf} \caption[A graph convolutional network] {\textbf{A graph convolutional network.} In this illustration, an input graph with node features (and possibly edge features) is processed through multiple graph convolutional layers and $ReLU(\cdot)=max(0,\cdot)$ nonlinearities. An output graph is returned with new, updated node features. Credit goes to Thomas Kipf for the illustration, source: \citep{gcnkipf}} \label{intro_fig_gcn} \end{figure} Lastly, the methods presented thus far rely on the adjacency matrix to define relations between nodes, possibly missing implicit dependencies between nodes. The authors of \citep{li2018adaptive} propose the Adaptive Graph Convolutional Network (AGCN) to address this issue. AGCN learns a \textit{residual} graph adjacency matrix by learning a distance function which takes as input the features of two different nodes in the graph, enabling it to better capture implicit dependencies. \index{Graph Neural Networks! Spectral-based!
AGCN} \paragraph{Spatial-based GCNs} \label{intro_GNN_spatial} Spatial-based approaches rely on spatially close neighbors to define the feature update step for a node. In this sense, spatial-based GCNs are somewhat similar to CRGNNs in that they propagate node information through edges, although they do not retain the idea of convergence, and they stack multiple different layers with different trainable weights. A significant advantage of spatial-based GCNs over spectral-based GCNs is that they can be used on directed graphs. Among early spatial-based architectures, \citep{micheli2009neural} introduces the Neural Network for Graphs (NN4G). \index{Graph Neural Networks! Spatial-based! NN4G} In the NN4G architecture, graph convolutions are performed at each layer (each of which has its own trainable weights). Each convolution basically consists of the sum, for each node, of the feature vectors of neighboring nodes. In this sense, it is somewhat similar to the GCN architecture of \citep{kipf_2017}, which performs a weighted sum based on the spectral graph instead. Additionally, NN4G applies residual skip connections between each layer to 'memorize' information: each new layer is linked not only to the previous one, but also to all preceding layers and the input. The following equation defines NN4G's propagation rule, where $\Theta^{(l)}$ and $W^{(k)}$ are weight matrices: \begin{equation} \label{intro_eq_nn4g} H^{(l)} = \sigma ( X \Theta^{(l)} + \sum_{k=1}^{l-1} A H^{(k)} W^{(k)} ) \end{equation} The Diffusion Convolutional Neural Networks (DCNNs) \index{Graph Neural Networks! Spatial-based! DCNN} proposed in \citep{atwood2016diffusion} bring the concept of diffusion to graph convolutions. A transition probability is defined when information from a node is passed to a neighboring node, causing the passing of information to converge after the process is applied repeatedly. Transition matrices are used to define the neighborhood of a node.
The propagation rule for DCNN is: \begin{equation} \label{intro_eq_dcnn} H^{(l)} = \sigma ( W^{(l)} \odot P^l H^{(l-1)} ) \end{equation} \noindent where $W^{(l)}$ is a weight matrix, $\odot$ denotes the element-wise product, and $P^l$ (not to be confused with $P^{(l)}$) is $P$ to the power of $l$, with $P = D^{-1}A$ the probability transition matrix. Message Passing Neural Networks (MPNN), \index{Graph Neural Networks! Spatial-based! MPNN} on the other hand, are a general framework presented in \citep{gilmer2017neural} which aims to unify different categories of previous works into one single architecture. In MPNNs, during the convolution phase over an input graph, messages are passed between nodes along edges in an aggregation phase, called the \textit{message passing} phase, after which node features are updated in a \textit{message update} phase. Each node $v$ has its feature vector $h_v^{(l-1)}$ updated to $h_v^{(l)}$ based on a message $m_v^{(l)}$: \begin{equation} \label{intro_eq_mpnn_mess_pass} m_v^{(l)} = \sum_{w \in N(v)} M_{l-1}(h_v^{(l-1)}, h_w^{(l-1)}, x_{e_{vw}}) \end{equation} \begin{equation} \label{intro_eq_mpnn_mess_update} h_v^{(l)} = U_{l-1}(h_v^{(l-1)}, m_v^{(l)}) \end{equation} \noindent where $N(v)$ designates the neighborhood of node $v$; $M_{l-1}$ and $U_{l-1}$ are learned differentiable functions; $x_{e_{vw}}$ is the feature vector of the edge connecting $v$ and $w$.
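Equations \ref{intro_eq_mpnn_mess_pass} and \ref{intro_eq_mpnn_mess_update} can be sketched as follows; the particular choices of $M$ and $U$ below are toy illustrations, not those of Gilmer \emph{et al.}

```python
import numpy as np

def mpnn_layer(h, edge_feat, adj, M, U):
    """One message-passing step:
    m_v = sum over neighbours w of M(h_v, h_w, x_evw);  h_v <- U(h_v, m_v)."""
    new_h = {}
    for v in adj:
        m_v = sum(M(h[v], h[w], edge_feat[(v, w)]) for w in adj[v])
        new_h[v] = U(h[v], m_v)
    return new_h

# Toy differentiable choices for M and U (illustrative only):
M = lambda hv, hw, e: e * hw          # scale the neighbour state by the edge weight
U = lambda hv, m: np.tanh(hv + m)     # combine own state with the message
adj = {0: [1], 1: [0, 2], 2: [1]}
h = {0: np.array([1.0]), 1: np.array([0.5]), 2: np.array([-1.0])}
e = {(u, w): 0.5 for u in adj for w in adj[u]}
h = mpnn_layer(h, e, adj, M, U)
print({v: h[v].round(3) for v in h})
```

Swapping in different $M$ and $U$ recovers the other spatial architectures the framework subsumes, which is exactly the unifying point made above.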
A \textit{readout phase} is also introduced (after the last message passing layer $l_{max}$ has been applied), in which a readout function $R$ can optionally compute a feature vector $\hat{y}$ for the whole graph (assuming we want to do some other type of classification than node classification, such as graph-level class prediction): \begin{equation} \label{intro_eq_mpnn_mess_output} \hat{y} = R(\{h_v^{(l_{max})}| v \in \mathcal{V}\}) \end{equation} \noindent $R$ needs to be invariant to the permutation of node states in order for the MPNN to retain invariance to graph isomorphism. Gilmer \emph{et al.}~proceed to express existing GNN architectures from the literature by specifying the corresponding message passing function $M_{l-1}$, message update function $U_{l-1}$ and readout function $R$. In their own work, they use an architecture in which $M(h_v, h_w, x_{e_{vw}}) = MLP(x_{e_{vw}})h_w$~. Here, MLP is a multi-layer perceptron which takes as input the feature vector of edge $(v,w)$ and outputs a matrix of size $out_{c} \times in_{c}$, $out_{c}$ being the desired number of features per node after applying the message passing layer and $in_{c}$ the number of features per node of the input graph provided to the layer. Vector $h_w$ being of size $in_{c} \times 1$, the matrix multiplication results in a matrix of size $out_{c} \times 1$, \emph{i.e. } a vector which has the desired number of new features after the message passing layer is applied. The sum of these vectors over the entire neighborhood defines $m_v$. Gilmer \emph{et al.}~apply this architecture to node classification tasks on a molecular property prediction benchmark and achieve state-of-the-art results. Other recent relevant works include GraphSAGE \citep{hamilton2017inductive} \index{Graph Neural Networks! Spatial-based! GraphSAGE} and Graph Attention Networks (GATs) \index{Graph Neural Networks! Spatial-based! GAT} \citep{velivckovic2018graph}.
GraphSAGE has been conceived to handle graphs where the number of neighbors can vary greatly from one node to another. Since always taking into account the entire neighborhood can prove inefficient and costly, GraphSAGE uses sampling to define neighborhoods and thus keep a fixed number of neighbors for each node. The propagation rule in a GraphSAGE convolution is defined by: \begin{equation} \label{intro_eq_graph_sage} h_v^{(l)}= \sigma [W^{(l)} AGG_l(\{h_v^{(l-1)} \} \cup \{ h_u^{(l-1)}, \forall u \in N_r(v) \})] \end{equation} \noindent where $N_r(v)$ designates a fixed-size uniform draw from the set $\{ u \in \mathcal{V}: (u,v) \in \mathcal{E}\}$ and $AGG_l$ is an aggregation function invariant to the permutations of node orderings (\emph{e.g. } the mean function). GATs, on the other hand, use an \textit{attention} mechanism which defines weights for each connected pair of nodes. Weights are learned by the attention mechanism so as to reflect the importance of each neighbor of a node $v$. The layer propagation rule is defined by: \begin{equation} \label{intro_eq_gat} h_v^{(l)}= \sigma (W^{(l)} \sum_{u \in \{v\} \cup N(v) } \alpha_{uv}^{(l)} h_u^{(l-1)} ) \end{equation} \noindent where $N(v)$ refers to the neighborhood of $v$, and $\alpha_{uv}^{(l)}$ is the attention weight. This attention weight is defined by the following query-key mechanism: \begin{equation} \label{intro_eq_gat_attention} \alpha_{uv}^{(l)} = \frac{ exp( LeakyReLU( a^{T} [ W^{(l)} h_v^{(l-1)} || W^{(l)} h_u^{(l-1)} ] ) ) } {\sum_{q \in N(v)} exp( LeakyReLU( a^{T} [ W^{(l)} h_v^{(l-1)} || W^{(l)} h_q^{(l-1)} ] ) )} \end{equation} \noindent where $LeakyReLU(x) = x$ if $x > 0$ and $\lambda x$ otherwise ($\lambda$ being a small positive slope); $a$ is a vector of learnable weights, $||$ denotes the concatenation operation and $N(v)$ refers to the neighborhood of node $v$. Additionally, GAT can use multi-head attention mechanisms (\emph{i.e.
} have multiple attention heads $\alpha_{uv}^{(l)}$, $\alpha_{uv}^{(l)'}$, $\alpha_{uv}^{(l)''}$, etc.). This enables the model to learn different attention schemes in parallel at each layer, and shows considerable improvement over GraphSAGE on node classification benchmarks. \subsubsection{Recurrent Graph Neural Networks} In many applications, graphs can not only present spatial structure, but also hold temporal dependencies. An example is road network traffic, for which the same graph at different time steps represents the current flow of traffic in the network. RGNNs are inspired by Recurrent Neural Networks (RNNs) and aim to process a sequence of temporal graphs, for example in order to make predictions about future states (\emph{e.g. } what traffic is going to be like at future time steps). For most RGNNs, an RNN-like mechanism is used to memorize and leverage temporal information. Nevertheless, some RGNNs use CNNs to capture temporal information instead. We first describe some RNN-based methods and then some CNN-based approaches. The idea behind RNN-based RGNNs stems from the recurrent units used in RNNs. When an RNN is used on an input at time step $t$, each hidden layer $h^{(l)^t}$ is computed by combining both the input to the layer $h^{(l-1)^t}$ and a 'memory' equal to the output of the same layer at time step $t-1$: $h^{(l)^{t-1}}$. In RNN-based RGNNs, the equation for the layer propagation rule is of the form: \begin{equation} \label{intro_eq_rnn_rgnn} H^{(l)^t} = \sigma ( graphConv(H^{(l-1)^t}, W^{(l)}, \mathcal{G}) + graphConv( H^{(l)^{t-1}}, \hat{W}^{(l)}, \mathcal{G}) + B^{(l)} ) \end{equation} \noindent Here, $graphConv$ denotes a graph convolution operation (either spatial-based or spectral-based), $\hat{W}^{(l)}$ is a weight matrix different from $W^{(l)}$, and $B^{(l)}$ is a bias. The second $graphConv$ operation introduces into the computation the memory retained from the previous step.
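Equation \eqref{intro_eq_rnn_rgnn} can be sketched as follows, using a simple spatial-based convolution $AHW$ as the $graphConv$ operation; the toy graph, the shapes and the random weights are all illustrative:

```python
import numpy as np

# One RNN-based RGNN layer at time t (graph, shapes and weights illustrative).
rng = np.random.default_rng(1)
n, f_in, f_out = 4, 3, 3
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)     # adjacency matrix of a toy graph

def graph_conv(H, W, A):
    # Simple spatial-based convolution: aggregate neighbor features, then project.
    return A @ H @ W

W     = rng.standard_normal((f_in, f_out))    # weights for the current input
W_hat = rng.standard_normal((f_out, f_out))   # weights for the memory term
B     = np.zeros((n, f_out))                  # bias

H_in_t  = rng.standard_normal((n, f_in))      # H^{(l-1)^t}: layer input at time t
H_mem   = np.zeros((n, f_out))                # H^{(l)^{t-1}}: same layer's output at t-1
H_out_t = np.tanh(graph_conv(H_in_t, W, A) + graph_conv(H_mem, W_hat, A) + B)
```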
Notable works include Structural-RNN (S-RNN) \index{Graph Neural Networks! Recurrent! S-RNN} \citep{jain2016structural}. S-RNN uses different RNNs to handle node and edge information, namely a \textit{nodeRNN} and an \textit{edgeRNN}. The Diffusion Convolutional Recurrent Neural Network (DCRNN) \index{Graph Neural Networks! Recurrent! DCRNN} \citep{li2017diffusion} is an encoder-decoder framework which applies gated recurrent units on top of the DCNN architecture. In \citep{seo2018structured}, a Long Short-Term Memory (LSTM) network is combined with the ChebNet graph convolution operator. LSTMs are a popular type of RNN architecture because they are able to maintain a longer memory than standard RNNs. CNN-based RGNNs, on the other hand, abandon the idea of keeping a memory and instead use a CNN jointly with a graph convolution operator to capture temporal and spatial information at the same time. Their advantage over RNN-based RGNNs is that they do not require backpropagation through time for gradient computation. The idea is that for each node $v$ in the input graph, a 1D-CNN is applied and temporal information from previous states of the node is aggregated. Next, a graph convolutional layer is applied on the aggregated temporal information to aggregate spatial information. This process is repeated for each layer. \citep{yu2017spatio} propose the Spatio-Temporal Graph Convolutional Network (STGCN), \index{Graph Neural Networks! Recurrent! STGCN} which uses a 1D-CNN alongside the ChebNet convolutional layer. Graph WaveNet \citep{wu2019graph} \index{Graph Neural Networks! Recurrent! Graph WaveNet} introduces a framework with a self-adaptive adjacency matrix. This allows Graph WaveNet to learn latent structures, which can help discover implicit temporal dependencies between nodes in the graph. Lastly, \citep{guo2019attention} introduce an Attention-based Spatial-Temporal Graph Convolutional Network (ASTGCN) \index{Graph Neural Networks! Recurrent!
ASTGCN} to solve traffic flow forecasting problems. ASTGCN builds on STGCN by introducing attention mechanisms for both spatial and temporal aggregation. This allows ASTGCN to outperform state-of-the-art baselines on real-world datasets from the Caltrans performance measurement system. \section{Reinforcement Learning} Reinforcement Learning (RL) consists of designing an agent capable of learning through trial and error by interacting with an environment. This section only aims to briefly describe Markov Decision Processes (MDPs) \citep{bellman1957markovian} and RL concepts. We refer the reader to \citep{sutton2018reinforcement} for a complete introduction to RL. We temporarily use the following notations here, not to be confused with notations from the previous section: \begin{itemize} \item Set $S$: a set of states. \item Set $A$: a set of actions. \item Set $P$: a set of transition probabilities. The probability $P(s'|s , a) = P_a(s,s')$ refers to the probability of transitioning from state $s \in S$ to state $s' \in S$ after taking action $a \in A$. \item Function $R$: a reward function. The transition from state $s \in S$ to state $s' \in S$ after taking action $a \in A$ results in an immediate reward $R(s'|s,a) = R_a(s,s')$. \end{itemize} In MDPs, the environment is fully observable and actions are instantaneous and non-deterministic. Nevertheless, for every pair $(s,a) \in S \times A$, the distribution $P_a(s,\cdot)$ is uniquely defined. In other words, after taking action $a\in A$ in state $s \in S$, a fixed set of probabilities exists, one for each state $s' \in S$, which defines the likelihood of transitioning into that state. Moreover, an \textit{immediate reward} function $R_a(s,s')$ defines the reward obtained from transitioning to a state $s' \in S$ after taking the action $a \in A$ in state $s \in S$. The aim for the agent is to devise an optimal \textit{policy} $\pi^*$ which specifies which action $\pi^*(s) \in A$ to take in any state $s$ in order to maximize the total cumulative reward.
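As an illustration of the MDP formalism just defined: when the transition probabilities $P$ and rewards $R$ are known, an optimal policy can be computed by repeatedly applying the Bellman optimality backup (value iteration, a standard dynamic programming method); all numbers below are illustrative:

```python
import numpy as np

# Toy MDP with 2 states and 2 actions (all numbers illustrative).
# P[a][s][s'] = P_a(s, s'), R[a][s][s'] = R_a(s, s').
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.1, 0.9], [0.7, 0.3]]])
R = np.array([[[1.0, 0.0], [0.0, 0.0]],
              [[0.0, 2.0], [0.0, 1.0]]])
gamma = 0.9   # discount factor

# Value iteration: repeatedly apply the Bellman optimality backup.
V = np.zeros(2)
for _ in range(500):
    Q = (P * (R + gamma * V)).sum(axis=2)   # Q[a, s]: expected return of (s, a)
    V = Q.max(axis=0)                       # optimal state values
policy = Q.argmax(axis=0)                   # optimal action per state
```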
RL can generally be formulated as a 4-tuple $(S,A,P,R)$, representing an agent interacting with the environment in an MDP. \index{Reinforcement Learning! MDP} The agent interacts with the environment by following a policy $\pi$, and the goal is to find an optimal policy $\pi^*$ by trial and error. Two main RL approaches exist: policy gradient \index{Reinforcement Learning! Policy Gradient} optimization and value function \index{Reinforcement Learning! Value Iteration} optimization. In policy gradient approaches, a parameterizable function $f_\theta$ ($\theta$ being parameters) is used to approximate $\pi$ directly. Through interaction with the environment, the agent is able to learn, given its current policy $f_\theta$, which actions are better suited for given states in $S$. Thus, the agent can modify its policy $f_\theta$ to prioritize these actions. A popular choice for the function $f_\theta$ is a neural network, whose number of layers can be chosen according to the assumed complexity of the function being approximated. On the other hand, value function optimization learns two value functions $Q$ and $V$, which define the policy $\pi$ to follow. The state value function $V$ is defined by $V^{\pi}(s) = \mathbb{E}_{\pi}(\sum_{i=0}^{\infty}\gamma^i r_{i+1} | s_t= s)$, where $s_t$ refers to the state of the agent at the current time step, and $s_{t+1}, s_{t+2}, ...$ at future time steps; $r_{i+1}$ refers to the immediate reward received by the agent at time step $t+i+1$; $\gamma \in ]0,1]$ is a discount factor and $\mathbb{E}$ denotes the expectation. Intuitively, $V^{\pi}(s)$ corresponds to the expected sum of rewards when starting in state $s$ and following policy $\pi$. The action value function is defined by $Q^{\pi}(s,a) = \mathbb{E}_{\pi}(\sum_{i=0}^{\infty}\gamma^i r_{i+1} | s_t= s, a_t = a)$, where $a_t$ refers to the action taken by the agent at the current starting time step.
Intuitively, it corresponds to the expected sum of rewards when starting in state $s$, taking action $a$ and following policy $\pi$ afterwards. In environments with large or continuous state spaces, these functions are usually approximated using neural networks. RL methods also fall into two categories: model-based and model-free. Model-based methods assume knowledge of the transition probabilities in the MDP environment, while model-free methods do not. A popular model-based approach is value iteration, which consists in updating Q-values by taking into account transition probabilities and known knowledge about transition states. Q-learning \index{Reinforcement Learning! Q-Learning} \citep{watkins1992q} is a popular model-free approach. It follows the idea of 'pulling' a given Q-value toward the result obtained from a simulation with the environment, at a given learning rate, so as to account for transition probabilities indirectly. Q-learning follows this update scheme: $$Q(s_t,a_t) \gets Q(s_t,a_t) + \alpha [r_t + \gamma \max_{a_i} Q(s_{t+1}, a_i) - Q(s_t, a_t) ]$$ \noindent where $\alpha$ is the learning rate and $\delta_t = r_t + \gamma \max_{a_i} Q(s_{t+1}, a_i) - Q(s_t, a_t)$ is called the temporal difference. The use of Deep Q Networks (DQN) \index{Reinforcement Learning! DQN} \citep{mnih2013playing,mnih2015human} has allowed RL agents to achieve human-level gameplay on games from the Atari 2600 platform. More recently, AlphaGo \index{Reinforcement Learning! AlphaGo} \citep{silver_2016}, an AI program conceived to play the game of Go by combining deep CNNs with Monte Carlo Tree Search (MCTS), managed to defeat the world champion of Go. AlphaGo uses supervised learning to learn from expert gameplay, and then refines the learned policy with RL (policy gradient) by playing against itself. AlphaGoZero \citep{silver_2017}, a newer version, is trained with RL only and achieves playing performance superior to AlphaGo's.
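The Q-learning update scheme above can be sketched on a toy chain MDP; the environment, the hyperparameters and the $\epsilon$-greedy exploration scheme are illustrative choices:

```python
import numpy as np

# Tabular Q-learning on a toy 3-state chain (environment illustrative):
# action 1 moves right, action 0 stays; reward 1 for reaching the last state.
rng = np.random.default_rng(0)
n_states, n_actions = 3, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.9, 0.2   # learning rate, discount, exploration rate

def step(s, a):
    s_next = min(s + a, n_states - 1)
    return s_next, (1.0 if s_next == n_states - 1 else 0.0)

for _ in range(500):                          # episodes
    s = 0
    for _ in range(50):                       # step cap per episode
        # epsilon-greedy action selection
        a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
        s_next, r = step(s, a)
        # Temporal-difference update from the text:
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next
        if s == n_states - 1:
            break

# The greedy policy should now move right in every non-terminal state.
```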
Model-based RL algorithms are usually more efficient than model-free ones since they can leverage planning using known environment dynamics. A solution for model-free approaches would be to learn the dynamics from interactions with the environment. Although learning dynamics accurate enough for planning has remained a challenge in model-free approaches, the PlaNet approach of \citep{hafner2019learning} \index{Reinforcement Learning! PlaNet} achieves a breakthrough on this subject for image-based domains. PlaNet learns the dynamics model by relying on a sequence of latent states generated by an encoder-decoder architecture, rather than on images directly. PlaNet chooses actions purely by planning in this latent space, which allows it to require far fewer interactions with the environment to optimize its policy than recent model-free RL approaches. Lastly, in some situations, the environment may not be fully observable by the agent. This is the case in Partially Observable Markov Decision Processes (POMDPs). Agents receive sensory information, derive a probability distribution over the states they are likely to be in, and need to adapt their policy accordingly. Popular works dealing with planning in POMDPs include \citep{kurniawati2008sarsop} who introduce a point-based POMDP algorithm for motion-planning, \citep{silver2010monte} who propose an MCTS algorithm for planning in large POMDPs, \citep{somani2013despot} who present a random scenario sampling scheme to alleviate computational limitations and \citep{zhu2017improving} who propose a Deep Recurrent Q-Network to adapt RL tasks to POMDPs. \section{Path-Planning and Neural Networks} A*-based algorithms described previously are fast on small planning domains, but take exponentially longer as domain size and complexity grow.
Probabilistic approaches such as PRMs and RRTs, on the other hand, construct a new graph with random sampling to bypass this complexity, but guaranteeing consistent solution quality would again require exponential sampling \citep{lavalle2004relationship}. The idea of using neural networks for path-planning has therefore long been a problem of interest, although only recent advances in machine learning have made it a viable option. We explore a few such works. \citep{glasius1995neural} is an early work which specifies obstacles in a topologically ordered neural map, and uses a neural network to trace a shortest path. The minimum of a Lyapunov function is used to ensure convergence of the neural activity. \citep{chen2016dynamic}, a more recent work, relies on \textit{Deep Variational Bayes Filtering} (DVBF) \citep{karl2016deep} to embed dynamic movement primitives of high-dimensional humanoid movements in the latent space of a low-dimensional variational autoencoder framework. RL has also been used for such purposes. \citep{levine2013guided} present a guided policy search algorithm that uses trajectory optimization to direct policy learning and avoid poor local optima, where policies are approximated by neural networks. This method is successfully applied to learn policies for planar swimming, hopping, walking and simulated 3D humanoid running. \citep{tamar2016value} introduce a neural network to approximate the value iteration algorithm in order to predict outcomes that involve planning-based reasoning. Their use of CNNs limits their approach to path-planning on 2D grids rather than motion planning in general. Some \textit{imitation learning}-based approaches have also been proposed. Imitation learning consists of having an expert provide demonstrations, in this case of desired trajectories. A neural network can then be used to approximate the behavior of the expert, and hopefully generalize outside of the scope of the provided demonstrations.
Imitation learning has been successful in several areas involving complex dynamical systems \citep{abbeel2010autonomous, calinon2010learning}. OracleNet, an extension of imitation learning for path-planning, has been proposed recently in \citep{bency2019neural}. OracleNet relies on an LSTM to build end-to-end trajectories in an iterative manner. The LSTM needs to be trained on optimal trajectories that span the entire configuration space of the considered environment before being used. Those optimal trajectories can be computed by algorithms such as A*. Although this requirement can be problematic if the framework needs to be deployed quickly in a newly known environment and no training time is available, OracleNet achieves performance which makes up for it. Paths are generated extremely fast, reportedly scaling almost linearly with the number of dimensions. On a benchmark comprised of a point-mass robot with multiple degrees of freedom, OracleNet is compared to A* and RRT*. It achieves solution quality reportedly rivaling A* and far above RRT*, while its execution time remains far below that of the other two. In the context of path-planning under constraints, Osanlou \emph{et al.}~have combined a GNN with a constraint programming solver and a branch \& bound tree search algorithm, observing in each case a significant improvement in the computation of solution paths, outperforming A$^*$-based domain-tailored heuristics \citep{osanlou2021constrained, osanlou2019optimal, osanlou2021learning}. \section{Temporal Planning With Uncertainty} \label{intro_chapter_dtnu} Scheduling in the presence of uncertainty is an area of interest in artificial intelligence. In this section, we present the necessary notions and work leading up to the Disjunctive Temporal Network with Uncertainty (DTNU). Temporal Networks \index{Scheduling!
STN} \citep{dechter1991temporal} are a common formalism to represent temporal constraints over a set of timepoints (\emph{e.g.} the start/end of activities in a scheduling problem). A Simple Temporal Network (STN) $\Gamma$ is defined by a pair: $$\Gamma = (A,C)$$ \noindent where: \begin{itemize} \item $A=(a_1, a_2, ..., a_n) \in \mathbb{R}^n$ is a set of $n$ real controllable timepoint variables. \item $C$ is a set of \textit{free} constraints, each of the form $a_j - a_i \in [x_k, y_k]$, where $a_i, a_j \in A$; $x_k \in \{-\infty\} \cup \mathbb{R}$; $y_k \in \mathbb{R} \cup \{+\infty\}$. \end{itemize} \noindent A solution to STN $\Gamma$ is a complete set of assignments in $\mathbb{R}$ for each $a_i \in A$ which satisfies all constraints in $C$. Simple Temporal Networks with Uncertainty (STNUs) \citep{kn:Ts,kn:ViFa} explicitly incorporate qualitative uncertainty into temporal networks. In STNUs, some events are \textit{uncontrollable}. The only controllable aspect is when they start: how long they take to complete, however, is not known. Although the duration for completion is uncertain, it is often known to be within some bounds. These uncontrollable events are represented by a \textit{contingency link}, \emph{i.e. } a triplet $(a,[x,y],u)$, where $a$ is a controllable timepoint (representing the start of the uncontrollable event), $[x,y]$ is the bounded duration of the uncontrollable event and $u$ is an uncontrollable timepoint which signifies the end of the uncontrollable event. Uncontrollable timepoint $u$ will occur on its own, at the earliest $x$ units of time after the execution of $a$, and $y$ at the latest. Formally, an STNU \index{Scheduling! STNU} $\Gamma$ is defined as: $$\Gamma = (A,U,C,L)$$ \noindent where: \begin{itemize} \item $A=(a_1, a_2, ..., a_n) \in \mathbb{R}^n$ is a set of real controllable timepoint variables, which can be scheduled at any moment in time.
\item $U=(u_1, u_2, ..., u_q) \in \mathbb{R}^q$ is a set of uncontrollable timepoint variables. \item Each uncontrollable timepoint $u_j \in U$ is linked to exactly one controllable timepoint $a_i \in A$ by a contingency link $l \in L$: $l = (a_i, [x,y], u_j)$. \item $C$ is a set of free constraints of the same form as for STNs, except that constraints can also involve uncontrollable timepoints in addition to controllable timepoints. \end{itemize} \noindent We refer to timepoints in general (controllable or uncontrollable) as $V = A \cup U$. Different types of \textit{controllability} exist \citep{kn:ViFa}: \begin{itemize} \item \textit{Strong Controllability} \index{Scheduling! SC} (SC): An STNU $\Gamma = (A,U,C,L)$ is strongly controllable if there exists at least one universal schedule of controllable timepoints $\{a_1 = w_1, a_2 = w_2, ..., a_n = w_n\}$ which satisfies the constraints in $C$ regardless of the values taken by the uncontrollable timepoints $U$. \item \textit{Weak Controllability} \index{Scheduling! WC} (WC): An STNU $\Gamma = (A,U,C,L)$ is weakly controllable if, for every value outcome of the uncontrollable timepoints $U$, there is at least one schedule of controllable timepoints $\{a_1 = w_1, a_2 = w_2, ..., a_n = w_n\}$ which satisfies the constraints in $C$. \item \textit{Dynamic Controllability} \index{Scheduling! DC} (DC): An STNU $\Gamma = (A,U,C,L)$ is dynamically controllable if there is a reactive strategy which guarantees that the constraints in $C$ will be satisfied if the scheduling strategy is followed by a controller agent, while observing possible occurrences of uncontrollable timepoints and using this knowledge to adapt decisions. The problem is said to be DC if and only if it admits a valid dynamic strategy expressed as a map from partial schedules to Real-Time Execution Decisions (RTEDs) \citep{cimatti2016dynamic}. A partial schedule represents the current scheduling state, \emph{i.e.
} the set of timepoints that have been scheduled so far and their timing. RTEDs are popular semantics used to express a DC strategy \citep{hunsberger2009fixing}. RTEDs regroup two possible actions: \textbf{(1)} the wait action, \emph{i.e. } wait for an uncontrollable timepoint to occur; \textbf{(2)} the $(t, \mathcal{X})$ action, \emph{i.e. } if nothing happens before time $t \in \mathbb{R}$, schedule the controllable timepoints in $\mathcal{X}$ at $t$. A strategy is valid if, for every possible occurrence of the uncontrollable timepoints, controllable timepoints get scheduled in a way that all free constraints are satisfied. \end{itemize} \noindent Considerable work has resulted in algorithms to determine whether or not an STNU is DC \citep{kn:MoMu2,kn:Mofast}, leading to $\mathcal{O}(N^3)$ worst-case DC-checking algorithms, where $N$ is the number of timepoints of the STNU. These DC-checking algorithms also synthesize valid DC strategies executable in $\mathcal{O}(N^3)$ \citep{hunsberger2016efficient, kn:Mofast}. Disjunctive Temporal Networks with Uncertainty (DTNUs) \index{Scheduling! DTNU} generalize STNUs by allowing the presence of disjunctions in the constraints $C$ or the contingency links $L$. Formally, each constraint in $C$ is of the form $\lor_{k=1}^{q} v_{k,j} - v_{k,i} \in [x_{k},y_{k}]$. Furthermore, each contingency link $l \in L$ is of the form $(a_i, \lor_{k=1}^{q'} [x_{k},y_{k}] ,u_j)$ where $x_{k} \leq y_{k} \leq x_{k+1} \leq y_{k+1} ~ \forall k = 1, 2, ..., q'-1$. All controllability types for STNUs remain available for DTNUs. The introduction of disjunctions inside $C$ and $L$ renders the $\mathcal{O}(N^3)$ DC-checking algorithms for STNUs unavailable for DTNUs. In fact, the complexity of DC checking for DTNUs is $PSPACE$-complete \citep{kn:BhWi}, making this a highly challenging problem.
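As a side illustration of the STN formalism introduced earlier: checking whether an STN admits a solution (consistency) reduces, by a classical result, to detecting negative cycles in the induced distance graph, e.g. with a Bellman-Ford relaxation. A minimal sketch (finite bounds only):

```python
def stn_consistent(n, constraints):
    """Check STN consistency via negative-cycle detection (Bellman-Ford).

    constraints: list of (i, j, x, y) meaning a_j - a_i in [x, y].
    """
    edges = []
    for i, j, x, y in constraints:
        edges.append((i, j, y))    # encodes a_j - a_i <= y
        edges.append((j, i, -x))   # encodes a_i - a_j <= -x
    dist = [0.0] * n               # implicit source connected to every timepoint
    for _ in range(n):             # n rounds suffice with the implicit source
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # Any remaining improvement reveals a negative cycle, i.e. inconsistency.
    return all(dist[u] + w >= dist[v] for u, v, w in edges)

# a_1 - a_0 in [1, 2] alone is fine; adding a_0 - a_1 in [0, 5] is contradictory.
print(stn_consistent(2, [(0, 1, 1, 2)]))                  # True
print(stn_consistent(2, [(0, 1, 1, 2), (1, 0, 0, 5)]))    # False
```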
The difficulty in proving or disproving DC arises from the need to check all possible combinations of disjuncts in order to handle all possible occurrence outcomes of the uncontrollable timepoints. The only known approach for DC-checking and DC strategy generation for DTNUs is based on expressing DTNUs as timed-game automata (TGAs) \citep{cimatti2014sound}. TGAs can then be solved by the UPPAAL-TIGA software \citep{behrmann2007uppaal}. In \citep{cimatti2016dynamic}, the authors express DTNUs as TGAs in the same way, but use a pruning procedure based on satisfiability modulo theories and achieve superior results to UPPAAL-TIGA. The authors in \citep{osanlou2022solving} design a tree search algorithm that searches in \textit{Restricted Time-based Dynamic Controllability} (R-TDC), a subspace of DC. They show that R-TDC allows higher strategy search efficiency than TGA-based approaches while retaining very high DC coverage, thus almost always finding a strategy when a DC one exists on the considered benchmarks. They also note a significant increase in search performance on harder problems owing to a graph neural network-based heuristic they use for search guidance.
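As a small illustration of the disjunctive constraint syntax above, one can check a fixed candidate schedule against DTNU-style free constraints; note that this verifies a single outcome only and says nothing about controllability (names and values illustrative):

```python
# Checking a candidate schedule against disjunctive free constraints.
# Each constraint is a list of disjuncts (i, j, x, y) meaning v_j - v_i in [x, y];
# the constraint holds if at least one of its disjuncts holds.
def satisfies(schedule, constraints):
    return all(
        any(x <= schedule[j] - schedule[i] <= y for (i, j, x, y) in disjuncts)
        for disjuncts in constraints
    )

schedule = {0: 0.0, 1: 3.0, 2: 7.5}
constraints = [
    [(0, 1, 2, 4)],                   # v_1 - v_0 in [2, 4]
    [(1, 2, 0, 1), (0, 2, 7, 9)],     # v_2 - v_1 in [0, 1]  OR  v_2 - v_0 in [7, 9]
]
print(satisfies(schedule, constraints))   # True (second disjunct saves constraint 2)
```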
\section{Introduction} One of the remarkable outcomes of the $AdS/CFT$ correspondence (see \cite{Beisert:2010jr} for a review) is the relation between the $S$-matrix of a spin chain on the gauge theory side and the $R$-matrix of the one-dimensional Hubbard model \cite{Essler:2005bk,Beisert2007,Rej2006,Martins2007,Mitev2017}. On the other hand, the classical result of Shastry \cite{Shastry1988} is that the construction of the $R$-matrix of the one-dimensional Hubbard model involves only the $R$-matrix of the free fermion model. Thus, the integrability of fermionic two-dimensional models -- an interesting subject in its own right -- becomes especially relevant in relation to the $AdS/CFT$ correspondence. A particularly interesting integrable two-dimensional relativistic purely fermionic model had already appeared on the string theory side as a result of the fermionization \cite{Arutyunov:2004yx}. Although its classical and quantum integrability properties have been investigated from various points of view \cite{Melikyan:2011uf,Melikyan:2012kj,Melikyan:2014yma,Melikyan2019ac,Melikyan:2016gkd,Melikyan:2014mfa}, the main challenge still lies in the quantization of the model, due to the non-ultralocal nature of the algebra of Lax operators \cite{Melikyan:2012kj,Melikyan:2014yma}. While the quantization of such non-ultralocal relativistic fermionic models by means of the standard methods of integrable systems remains an open problem, their essential features can already be captured by considering the free fermion model. The explicit expressions for the Lax operators for both the full and free models can be found in \cite{Melikyan:2012kj,Melikyan:2014yma}. It is not currently known how to formulate non-ultralocal integrable models on a lattice and solve the problem by means of the Bethe Ansatz.
The goal of this paper is to address the inverse problem: starting with a suitable known lattice formulation of an integrable model, one can simply take the continuous limit and trace the appearance of the non-ultralocal terms in the algebra of the Lax operators. This program can be implemented in particular for the free fermion model, since its lattice formulation is well-known. It becomes especially relevant in view of the relation outlined above between the $S$-matrix of a spin chain on the gauge theory side and the $R$-matrix of the one-dimensional Hubbard model, which itself reduces to finding the $R$-matrix of the free fermion model (see \cite{Essler:2005bk} for a detailed exposition). However, the resulting $R$-matrix for the one-dimensional Hubbard model is not, unlike most representations of the Yang-Baxter equation, of difference type in the spectral parameters, as a consequence of the so-called decorated Yang-Baxter equation \cite{Shastry1988}. Thus, it is not obvious how, in principle, to obtain in the continuous limit (which one has to consider in the context of the $AdS/CFT$ correspondence) a $(1+1)$-dimensional relativistic fermionic model such as the one mentioned above, arising from string theory, where the dependence of the physical quantities, such as the $S$-matrix, is of difference form. To address this problem, we consider in this paper a more general three-parameter parametrization of the free fermion model due to Bazhanov and Stroganov \cite{Bazhanov:1984iw,Bazhanov:1984ji,Bazhanov:1984jg}. In addition, towards the goal of obtaining a purely fermionic model, we use the fermionic $R$-operator formalism of \cite{Umeno1998b,Umeno1998,Umeno2000}, which is more convenient for this purpose. The resulting Yang-Baxter and decorated Yang-Baxter relations turn out to be in a form where the dependence of the $R$-matrix is indeed of difference type with respect to one of the spectral parameters \cite{Melikyan2020}.
We then find the Lax connection with the desired dependence of difference type and bosonize the auxiliary space to obtain the Lax connection in the more familiar graded form, suitable for taking the continuous limit. \section{Bazhanov-Stroganov elliptic parametrization for the free fermion model} The free fermion model is defined by the $R$-matrix of an inhomogeneous eight-vertex model of the form: \begin{align} \hat{R}=\begin{pmatrix} a & 0 & 0 & d\\ 0 & b & c & 0 \\ 0 & c' & b' & 0 \\ d' & 0 & 0 & a' \end{pmatrix}, \label{bs:R_matrix_orig} \end{align} together with the following free fermion condition \cite{Fanwu723}: \begin{align} a a' + b b' - c c' - d d'=0. \label{bs:free_fermion_condition} \end{align} A particularly interesting and general parametrization of this model has been given by Bazhanov and Stroganov \cite{Bazhanov:1984iw,Bazhanov:1984ji,Bazhanov:1984jg}, where the coefficients in \eqref{bs:R_matrix_orig} are parameterized by the spectral parameter $u \in \mathbb{C}$ and, in addition, two complex rapidities $\zeta_{1}$ and $\zeta_{2}$:\footnote{Our notations here follow \cite{Melikyan2020}.} \begin{align} &a(u;\zeta_{1},\zeta_{2}) =\rho \left[ 1-\mathrm{e}(u)\mathrm{e}(\zeta_{1})\mathrm{e}(\zeta_{2}) \right], \quad a'(u;\zeta_{1},\zeta_{2}) =\rho \left[ \mathrm{e}(u)-\mathrm{e}(\zeta_{1})\mathrm{e}(\zeta_{2}) \right], \label{bs:a_a_prime} \\ &b(u;\zeta_{1},\zeta_{2}) =\rho \left[ \mathrm{e}(\zeta_{1})-\mathrm{e}(u)\mathrm{e}(\zeta_{2}) \right], \quad b'(u;\zeta_{1},\zeta_{2}) =\rho \left[ \mathrm{e}(\zeta_{2})-\mathrm{e}(u)\mathrm{e}(\zeta_{1}) \right], \label{bs:b_b_prime}\\ &c(u;\zeta_{1},\zeta_{2})=c'(u;\zeta_{1},\zeta_{2}) =\rho \: \mathrm{sn}^{-1}\left(\frac{u}{2}\right)\left[ 1-\mathrm{e}(u)\right]\left[\mathrm{e}(\zeta_{1})\mathrm{e}(\zeta_{2})\mathrm{sn}(\zeta_{1})\mathrm{sn}(\zeta_{2}) \right]^{1/2}, \label{bs:c_c_prime}\\ &d(u;\zeta_{1},\zeta_{2})=d'(u;\zeta_{1},\zeta_{2}) =- \mathrm{i}\, k \rho \: \mathrm{sn} \left(\frac{u}{2} \right)\left[
1+\mathrm{e}(u)\right]\left[\mathrm{e}(\zeta_{1})\mathrm{e}(\zeta_{2})\mathrm{sn}(\zeta_{1})\mathrm{sn}(\zeta_{2}) \right]^{1/2}. \label{bs:d_d_prime} \end{align} Here, the functions $\mathrm{sn}(x)$ and $\mathrm{cn}(x)$ are the Jacobi elliptic functions of modulus $\kappa$ \cite{whittaker_watson_1996}, $\mathrm{e}(x):=\mathrm{cn}(x) + \mathrm{i}\, \mathrm{sn}(x)$ is the elliptic exponential, and $\rho$ is an arbitrary factor. With respect to this parametrization, the $R$-matrix \eqref{bs:R_matrix_orig} satisfies the Yang-Baxter equation: \begin{align} \hat{R}_{12}(\eta_{12};\zeta_{1},\zeta_{2})\hat{R}_{13}(\eta_{13};\zeta_{1},\zeta_{3})\hat{R}_{23}(\eta_{23};\zeta_{2},\zeta_{3})=\hat{R}_{23}(\eta_{23};\zeta_{2},\zeta_{3})\hat{R}_{13}(\eta_{13};\zeta_{1},\zeta_{3})\hat{R}_{12}(\eta_{12};\zeta_{1},\zeta_{2}),\label{bs:YBE} \end{align} which is of difference type with respect to the spectral parameter $u$. In \eqref{bs:YBE}, we have used the shorthand notation $\eta_{jk} \equiv u_{j}-u_{k}$. In order to obtain a purely fermionic model from the Yang-Baxter equation \eqref{bs:YBE}, it is convenient to introduce from the beginning an equivalent fermionic $R$-operator \cite{Umeno1998b,Umeno1998}, corresponding to the $R$-matrix \eqref{bs:R_matrix_orig}. To this end, one has to apply the Jordan-Wigner transformation (see \cite{Essler:2005bk} for an extensive treatment) to the above $R$-matrix, as well as to the Yang-Baxter equation \eqref{bs:YBE}. The essential technical details are explained in \cite{Umeno1998b,Umeno1998}, and are omitted here. 
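Although not part of the original derivation, the parametrization \eqref{bs:a_a_prime}-\eqref{bs:d_d_prime} can be checked numerically against the free fermion condition \eqref{bs:free_fermion_condition} at generic real values of the parameters. The sketch below assumes SciPy is available; its \texttt{ellipj} routine takes the parameter $m=\kappa^{2}$, and the particular numerical values are arbitrary.

```python
# Numerical check of the free fermion condition a a' + b b' - c c' - d d' = 0
# for the Bazhanov-Stroganov weights. scipy.special.ellipj takes m = kappa**2
# and returns (sn, cn, dn, ph).
import numpy as np
from scipy.special import ellipj

kappa = 0.6
m = kappa**2

def sn(x): return ellipj(x, m)[0]
def cn(x): return ellipj(x, m)[1]
def e(x):  return cn(x) + 1j * sn(x)     # elliptic exponential

def weights(u, z1, z2, rho=1.0):
    root = np.sqrt(e(z1) * e(z2) * sn(z1) * sn(z2) + 0j)
    a  = rho * (1 - e(u) * e(z1) * e(z2))
    ap = rho * (e(u) - e(z1) * e(z2))
    b  = rho * (e(z1) - e(u) * e(z2))
    bp = rho * (e(z2) - e(u) * e(z1))
    c  = rho * (1 - e(u)) / sn(u / 2) * root
    d  = -1j * kappa * rho * sn(u / 2) * (1 + e(u)) * root
    return a, ap, b, bp, c, c, d, d      # c = c', d = d'

a, ap, b, bp, c, cp, d, dp = weights(0.7, 0.3, 0.5)
print(abs(a * ap + b * bp - c * cp - d * dp))   # ~ 0 up to rounding
```

Note that the square-root branch drops out of this particular check, since only the products $cc'$ and $dd'$ enter the condition.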
Thus, the fermionic $R$-operator associated to \eqref{bs:R_matrix_orig} takes the form: \begin{align} R_{jk}(u;\zeta_{j},\zeta_{k})&=a(u;\zeta_{j},\zeta_{k})\left[-n_{j} n_{k} \right] +a'(u;\zeta_{j},\zeta_{k})\left[(1-n_{j})(1- n_{k}) \right]+b(u;\zeta_{j},\zeta_{k})\left[n_{j}(1- n_{k}) \right] \nonumber\\ &+b'(u;\zeta_{j},\zeta_{k})\left[n_{k}(1- n_{j})\right] +c(u;\zeta_{j},\zeta_{k})\left[\Delta_{jk}+\Delta_{kj} \right] + d(u;\zeta_{j},\zeta_{k})\left[-\tilde{\Delta}^{(\dagger)}_{jk}-\tilde{\Delta}_{jk}\right]. \label{bs:fermionic_R} \end{align} Here, the (spinless) fermionic variables $c_{k}$, $c^{\dagger}_{k}$ satisfy the usual anticommutation relations $\{c_{k},c^{\dagger}_{j}\}=\delta_{jk}$, and we have denoted $n_{k}=c^{\dagger}_{k}c_{k}$, $\Delta_{jk}=c^{\dagger}_{j}c_{k}$, $\tilde{\Delta}^{(\dagger)}_{jk}=c^{\dagger}_{j}c^{\dagger}_{k}$, and $\tilde{\Delta}_{jk}=c_{j}c_{k}$. Furthermore, it can be shown that the fermionic $R$-operator \eqref{bs:fermionic_R} satisfies the Yang-Baxter equation: \begin{align} R_{12}(\eta_{12};\zeta_{1},\zeta_{2})R_{13}(\eta_{13};\zeta_{1},\zeta_{3})R_{23}(\eta_{23};\zeta_{2},\zeta_{3})=R_{23}(\eta_{23};\zeta_{2},\zeta_{3})R_{13}(\eta_{13};\zeta_{1},\zeta_{3})R_{12}(\eta_{12};\zeta_{1},\zeta_{2}).\label{bs:YBE_fermionic} \end{align} Next, we extend the above construction to account for spin degrees of freedom by considering two copies of the $R^{(s)}$-operator, one for each spin $s=\uparrow,\downarrow$, in order to define: \begin{align} \mathcal{R}_{jk}(u_{j}-u_{k};\zeta_{j},\zeta_{k};\zeta'_{j},\zeta'_{k}):=R^{(\uparrow)}_{jk}(u_{j}-u_{k};\zeta_{j},\zeta_{k})R^{(\downarrow)}_{jk}(u_{j}-u_{k};\zeta'_{j},\zeta'_{k}). \label{bs:fermionic_R_spin} \end{align} In \eqref{bs:fermionic_R_spin} and in the formulas that follow, primed parameters denote the corresponding quantities for spin $s=\downarrow$. 
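All operators entering \eqref{bs:fermionic_R} and \eqref{bs:fermionic_R_spin} are polynomial in the $c_{k}$, $c^{\dagger}_{k}$, so they can be represented concretely by Jordan-Wigner matrices. The following minimal sketch (an illustration only; the matrix conventions are ours, not fixed by the text) constructs such a representation and verifies the canonical anticommutation relations.

```python
# Jordan-Wigner matrices for N spinless fermionic modes: c_j carries a string
# of sigma^z factors on the preceding sites, which enforces anticommutation.
import numpy as np

sz = np.diag([1.0, -1.0])
sm = np.array([[0.0, 1.0], [0.0, 0.0]])   # sigma^- = |0><1|, annihilates a particle
I2 = np.eye(2)

def jw_annihilators(N):
    ops = []
    for j in range(N):
        factors = [sz] * j + [sm] + [I2] * (N - j - 1)
        op = factors[0]
        for f in factors[1:]:
            op = np.kron(op, f)
        ops.append(op)
    return ops

cs = jw_annihilators(3)

# canonical anticommutation relations {c_j, c_k^dagger} = delta_jk, {c_j, c_k} = 0
def acomm(x, y): return x @ y + y @ x
ok = all(np.allclose(acomm(cs[j], cs[k].T.conj()), np.eye(8) * (j == k))
         and np.allclose(acomm(cs[j], cs[k]), 0)
         for j in range(3) for k in range(3))
print(ok)
```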
The fermionic operator $\mathcal{R}_{jk}(u_{j}-u_{k};\zeta_{j},\zeta_{k};\zeta'_{j},\zeta'_{k})$ defined in \eqref{bs:fermionic_R_spin} satisfies the same Yang-Baxter equation \eqref{bs:YBE_fermionic}, and one can construct all relevant quantities following the standard methods.\footnote{See \cite{Umeno1998b} for details on the fermionic $R$-operator corresponding to the $XYZ$ model, and the construction of relevant quantities.} As an application, we use the fermionic Yang-Baxter relation for the $R$-operator \eqref{bs:fermionic_R_spin} with $\zeta_{j} = \zeta_{k} \equiv \zeta$ and $\zeta'_{j} = \zeta_{k}' \equiv \zeta'$ to obtain the Hamiltonian: \begin{align} \hat{\mathcal{H}}=\tau^{-1}(0;\zeta,\zeta')\frac{d}{du}\tau(u;\zeta,\zeta')\vert_{u=0}.\label{bs:Hamiltonian_definition} \end{align} The spinful monodromy operator factorizes as: \begin{align} \tau(u;\zeta,\zeta'):=\tau^{(\uparrow)}(u;\zeta) \tau^{(\downarrow)}(u;\zeta'), \label{bs:monodromy_op_factorization} \end{align} in terms of the monodromy operator $\tau^{(s)}(u;\zeta)$ for spin $s$. 
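The factorization \eqref{bs:monodromy_op_factorization} is consistent because the spin-up and spin-down $R$-operators are even (bilinear) in their respective fermion species and therefore commute. This can be illustrated numerically; in the sketch below the weights are arbitrary placeholder numbers rather than the Bazhanov-Stroganov values, since the commutativity depends only on the operator structure of \eqref{bs:fermionic_R}.

```python
# Spin-up and spin-down fermionic R-operators act on disjoint fermion species
# and are even in the fermions, hence they commute; this underlies the
# factorization tau = tau_up * tau_down. Weights are arbitrary placeholders.
import numpy as np

sz = np.diag([1.0, -1.0]); sm = np.array([[0., 1.], [0., 0.]]); I2 = np.eye(2)

def jw_annihilators(N):
    ops = []
    for j in range(N):
        fs = [sz] * j + [sm] + [I2] * (N - j - 1)
        op = fs[0]
        for f in fs[1:]:
            op = np.kron(op, f)
        ops.append(op)
    return ops

c = jw_annihilators(4)        # modes: (1,up), (2,up), (1,down), (2,down)
Id = np.eye(16)

def R_op(cj, ck, a, ap, b, bp, cc, d):
    # structure of eq. (bs:fermionic_R), with c = c' and d = d'
    nj, nk = cj.T @ cj, ck.T @ ck
    return (-a * nj @ nk + ap * (Id - nj) @ (Id - nk)
            + b * nj @ (Id - nk) + bp * nk @ (Id - nj)
            + cc * (cj.T @ ck + ck.T @ cj)
            - d * (cj.T @ ck.T + cj @ ck))

R_up   = R_op(c[0], c[1], 1.1, 0.4, 0.7, 0.2, 0.9, 0.3)
R_down = R_op(c[2], c[3], 0.5, 1.3, 0.8, 0.6, 0.4, 0.2)
comm = R_up @ R_down - R_down @ R_up
print(np.max(np.abs(comm)))   # 0 up to rounding
```

The Jordan-Wigner strings cancel in the bilinears, so each $R$-operator is supported only on its own two modes, which makes the commutator vanish identically.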
Using the explicit form of the coefficients \eqref{bs:a_a_prime}-\eqref{bs:d_d_prime}, and the relations: \begin{align} R^{(s)}_{jk}(0;\zeta)&=\beta(\rho,\zeta) P^{(s)}_{jk},\label{bs:R_betaP}\\ \tau^{(s)}(0;\zeta)&=\left[\beta(\rho,\zeta)\right]^{N}P^{(s)}_{12}P^{(s)}_{23}\cdots P^{(s)}_{N-1,N},\label{bs:monodromy_op} \end{align} where we denoted $R^{(s)}_{jk}(0;\zeta):=R^{(s)}_{jk}(0;\zeta,\zeta)$, $\beta(\rho,\zeta):=(-2 \mathrm{i}\, \rho) \: \mathrm{e}(\zeta) \: \mathrm{sn}(\zeta)$, and \begin{align} P^{(s)}_{jk}:=1-n_{j,(s)}-n_{k,(s)}+\Delta_{jk,(s)}+\Delta_{kj,(s)} \label{bs:fermionic_permutation_op} \end{align} is the permutation operator corresponding to spin $s$, one finds from \eqref{bs:Hamiltonian_definition}:\footnote{We consider here periodic boundary conditions, identifying the site $j=N+1$ with the site $j=1$.} \begin{align} \hat{\mathcal{H}}=\frac{1}{\beta(\rho,\zeta)}\sum_{j=1}^{N}\Gamma^{(\uparrow)}_{j,j+1}(\zeta)+\frac{1}{\beta(\rho',\zeta')}\sum_{j=1}^{N}\Gamma^{(\downarrow)}_{j,j+1}(\zeta'),\label{bs:Hamiltonian_Gammas} \end{align} with: \begin{align} \Gamma^{(s)}_{jk}(\zeta):=P^{(s)}_{jk}\frac{d}{du}R^{(s)}_{jk}(u;\zeta)\Bigr|_{\substack{u=0}}.\label{bs:Gammas_s_def} \end{align} The explicit calculation of the functions $\Gamma^{(s)}_{jk}(\zeta)$ in \eqref{bs:Hamiltonian_Gammas} leads to the Hamiltonian of two non-interacting fermionic $XY$ models in external fields, parameterized by the rapidities $\zeta$ and $\zeta'$:\footnote{In passing from \eqref{bs:Hamiltonian_Gammas} to \eqref{bs:XY_Hamiltonian} we have ignored an additive constant.} \begin{align} \hat{\mathcal{H}}^{XY}= \sum_{j=1}^{N}\tilde{H}^{(\uparrow)}_{j,j+1}(\zeta)+ \sum_{j=1}^{N}\tilde{H}^{(\downarrow)}_{j,j+1}(\zeta'),\label{bs:XY_Hamiltonian} \end{align} where: \begin{align} \tilde{H}^{(s)}_{j,j+1}(\zeta)&:=\frac{1}{2\, \mathrm{sn}(\zeta)}\left[\left(\Delta_{j,j+1,(s)}+\Delta_{j+1,j,(s)}\right)+ \kappa\,\mathrm{sn}(\zeta) 
\left(\tilde{\Delta}^{(\dagger)}_{j,j+1,(s)}-\tilde{\Delta}_{j,j+1,(s)}\right)+2\mathrm{cn}(\zeta)\left(n_{j,(s)}-\nicefrac{1}{2}\right)\right].\label{bs:XY_Hamiltonian_tilde_s} \end{align} Note that the $\mathcal{R}$-operator \eqref{bs:fermionic_R_spin} is of the difference type in the spectral parameter $u$, unlike the case considered in \cite{Umeno1998b,Umeno1998}. In addition, the procedure in \cite{Umeno1998b,Umeno1998} to obtain the $XY$ model in an external field \eqref{bs:XY_Hamiltonian_tilde_s} is rather involved even in the spinless case, requiring also the decorated Yang-Baxter relation and some nontrivial guesswork, whereas our construction and derivation of the Hamiltonian \eqref{bs:XY_Hamiltonian} is direct and follows the standard steps. We also mention that the decorated Yang-Baxter relation considered in \cite{Umeno1998b} is not of the difference type with respect to the spectral parameter $u$, which is the reason the $R$-matrix of the Hubbard model is not of the difference type either. 
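The initial condition \eqref{bs:R_betaP} can likewise be checked numerically. The sketch below works at the trigonometric point $\kappa=0$ (where $\mathrm{sn}=\sin$, $\mathrm{cn}=\cos$, $\mathrm{e}(x)=e^{\mathrm{i}x}$) to keep the example free of elliptic-function dependencies, and takes $u$ small but nonzero, since the weight $c(u)$ is a $0/0$ limit at $u=0$; it also verifies $P^{2}=1$ and $P c_{j} P = c_{k}$ for the fermionic permutation operator \eqref{bs:fermionic_permutation_op}.

```python
# Check of R(0; zeta, zeta) = beta(rho, zeta) * P at kappa = 0, with rho = 1.
# The d-weight vanishes at kappa = 0; u is small but nonzero because c(u)
# is a 0/0 limit at u = 0.
import numpy as np

sz = np.diag([1.0, -1.0]); sm = np.array([[0., 1.], [0., 0.]]); I2 = np.eye(2)
c1 = np.kron(sm, I2); c2 = np.kron(sz, sm)
Id = np.eye(4)
n1, n2 = c1.T @ c1, c2.T @ c2
P = Id - n1 - n2 + c1.T @ c2 + c2.T @ c1      # fermionic permutation operator

u, zeta = 1e-6, 0.7
e = lambda x: np.exp(1j * x)                  # elliptic exponential at kappa = 0
root = np.sqrt(e(zeta)**2 * np.sin(zeta)**2 + 0j)
a  = 1 - e(u) * e(zeta)**2
ap = e(u) - e(zeta)**2
b  = e(zeta) * (1 - e(u))                     # b = b' at equal rapidities
cw = (1 - e(u)) / np.sin(u / 2) * root
R0 = (-a * n1 @ n2 + ap * (Id - n1) @ (Id - n2)
      + b * (n1 @ (Id - n2) + n2 @ (Id - n1))
      + cw * (c1.T @ c2 + c2.T @ c1))
beta = -2j * e(zeta) * np.sin(zeta)

print(np.max(np.abs(R0 - beta * P)))          # ~ 0 up to O(u) corrections
print(np.allclose(P @ P, Id), np.allclose(P @ c1 @ P, c2))
```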
In contrast, the decorated Yang-Baxter equation corresponding to the fermionic $R$-operator \eqref{bs:fermionic_R} depends on the differences of the spectral parameters $\eta_{jk} \equiv u_{j}-u_{k}$, taking an asymmetrical form only with respect to the other parameters $\zeta_{i}$, and has the following general form \cite{Melikyan2020}: \begin{align} R^{(s)}_{12}(\eta_{12};\zeta_{1},\zeta_{2}-2\mathrm{K}(\kappa);\kappa) \; (2n_{1,s}-1) \; R^{(s)}_{13}(\eta_{13};\zeta_{1},\zeta_{3}-2\mathrm{K}(\kappa);-\kappa) \; R^{(s)}_{23}(\eta_{23};\zeta_{2},\zeta_{3};\kappa) \nonumber \\ =R^{(s)}_{23}(\eta_{23};\zeta_{2},\zeta_{3};\kappa)\; R^{(s)}_{13}(\eta_{13};\zeta_{1},\zeta_{3}-2\mathrm{K}(\kappa);-\kappa)\; (2n_{1,s}-1) \; R^{(s)}_{12}(\eta_{12};\zeta_{1},\zeta_{2}-2\mathrm{K}(\kappa);\kappa), \label{bs:DYBE} \end{align} where we have written the dependence on the modulus $\kappa$ in $R^{(s)}_{jk}(u;\zeta_{j},\zeta_{k};\kappa)$ explicitly, and $\mathrm{K}(\kappa)$ is the complete elliptic integral of the first kind \cite{whittaker_watson_1996}. \section{The Lax connection} We now turn to the question of obtaining the Lax connection starting from the Yang-Baxter equation for the fermionic operator $\mathcal{R}_{jk}(u_{j}-u_{k};\zeta_{j},\zeta_{k};\zeta'_{j},\zeta'_{k})$ defined in \eqref{bs:fermionic_R_spin}. We follow the general derivation of the Lax pair outlined in \cite{Izergin1981,Wadati1987,Olmedilla1987,Shiroshi1996}. As in the previous section, we set here $\zeta_{j} = \zeta_{k} \equiv \zeta$ and $\zeta'_{j} = \zeta_{k}' \equiv \zeta'$ to illustrate the main steps, with the general case being a straightforward generalization of the expressions given below. 
Denoting (cf.\ equation \eqref{bs:fermionic_R_spin}): \begin{align} \mathcal{R}_{jk}(u;\zeta;\zeta')&:=\mathcal{R}_{jk}(u;\zeta,\zeta;\zeta',\zeta')=R^{(\uparrow)}_{jk}(u;\zeta,\zeta)R^{(\downarrow)}_{jk}(u;\zeta',\zeta'),\label{lax:define_reduced_R}\\ \mathcal{P}_{jk}&:=P^{(\uparrow)}_{jk}P^{(\downarrow)}_{jk},\label{lax:Permutation} \end{align} and using the Yang-Baxter equation \eqref{bs:YBE_fermionic} for spin $s=\uparrow, \downarrow$, we find the Yang-Baxter equation for $\mathcal{R}_{jk}(u;\zeta;\zeta')$ in \eqref{lax:define_reduced_R}: \begin{align} \mathcal{R}_{12}(u-v;\zeta;\zeta')\mathcal{R}_{13}(u;\zeta;\zeta')\mathcal{R}_{23}(v;\zeta;\zeta')=\mathcal{R}_{23}(v;\zeta;\zeta')\mathcal{R}_{13}(u;\zeta;\zeta')\mathcal{R}_{12}(u-v;\zeta;\zeta').\label{lax:YBE_fermionic} \end{align} Then, one finds from \eqref{lax:YBE_fermionic}: \begin{align} & \left[ \Gamma_{23}(\zeta;\zeta'),\mathcal{R}_{13}(u;\zeta;\zeta')\mathcal{R}_{12}(u;\zeta;\zeta')\right]\label{lax:commutator_eq} \\ &=\beta(\rho,\zeta)\beta(\rho',\zeta')\left( \frac{d}{dv}\mathcal{R}_{13}(u-v;\zeta;\zeta')\Bigr|_{\substack{v=0}}\mathcal{R}_{12}(u;\zeta;\zeta') -\mathcal{R}_{13}(u;\zeta;\zeta')\frac{d}{dv}\mathcal{R}_{12}(u-v;\zeta;\zeta')\Bigr|_{\substack{v=0}} \right),\notag \end{align} where we have denoted (\rm{cf. 
equation \eqref{bs:Gammas_s_def}}): \begin{align} \Gamma_{jk}(\zeta;\zeta'):=\beta(\rho',\zeta')\Gamma^{(\uparrow)}_{jk}(\zeta)+\beta(\rho,\zeta)\Gamma^{(\downarrow)}_{jk}(\zeta').\label{lax:Gamma_def} \end{align} From \eqref{bs:Hamiltonian_Gammas} and \eqref{bs:Gammas_s_def} we also find: \begin{align} \frac{d}{dt}\mathcal{R}_{jk}(u;\zeta;\zeta') = \frac{\mathrm{i}\,}{\beta(\rho,\zeta)\beta(\rho',\zeta')}\left(\left[\Gamma_{j-1,j}(\zeta;\zeta'),\mathcal{R}_{jk}(u;\zeta;\zeta')\right]+\left[\Gamma_{j,j+1}(\zeta;\zeta'),\mathcal{R}_{jk}(u;\zeta;\zeta')\right] \right).\label{lax:Time_derivative} \end{align} Finally, renaming the indices $1 \to a;\, 2 \to j;\, 3 \to j+1$ in \eqref{lax:commutator_eq}, and using equation \eqref{lax:Time_derivative}, we arrive at the zero-curvature condition for integrable models on the lattice \cite{Faddeev:1987ph,Korepin:1997bk}: \begin{align} \frac{d \mathcal{L}_{j}}{dt}= \mathcal{M}_{j+1}\mathcal{L}_{j} - \mathcal{L}_{j}\mathcal{M}_{j},\label{lax:zcc} \end{align} where the Lax connection has the following explicit form: \begin{align} \mathcal{L}_{j}&=\mathcal{R}_{aj}(u;\zeta;\zeta'),\label{lax:L_operator}\\ \mathcal{M}_{j}&= \frac{\mathrm{i}\,}{\beta(\rho,\zeta)\beta(\rho',\zeta')}\mathcal{R}^{-1}_{aj}(u;\zeta;\zeta')\left(\beta(\rho,\zeta)\beta(\rho',\zeta')\frac{d}{dv}\mathcal{R}_{aj}(u-v;\zeta;\zeta')\Bigr|_{\substack{v=0}} - \left[\Gamma_{j-1,j}(\zeta;\zeta'),\mathcal{R}_{aj}(u;\zeta;\zeta') \right] \right).\label{lax:M_operator} \end{align} Here $\mathcal{R}^{-1}_{aj}(u;\zeta;\zeta')$ denotes the inverse of \eqref{lax:define_reduced_R}, which corresponds to \begin{align} \mathcal{R}^{-1}_{jk}(u;\zeta;\zeta') &= {R^{(\downarrow)}_{jk}}^{-1}(u;\zeta',\zeta'){R^{(\uparrow)}_{jk}}^{-1}(u;\zeta,\zeta), \label{lax:inverse_reduced_R}\\ {R^{(s)}_{jk}}^{-1}(u;\zeta_j,\zeta_k) &= \frac{1}{b b'- c c'} \left[ -a (1 - n_{j,(s)})(1-n_{k,(s)}) + a' n_{j,(s)} n_{k,(s)} + b (1-n_{j,(s)})n_{k,(s)} +b'n_{j,(s)} (1-n_{k,(s)}) \right. 
\label{lax:inverse_spin_R} \\ &- \left.c \Delta_{jk,(s)} -c'\Delta_{kj,(s)} + d \tilde{\Delta}^{(\dagger)}_{jk,(s)} + d'\tilde{\Delta}_{jk,(s)} \right].\nonumber \end{align} For the sake of clarity we omitted in \eqref{lax:inverse_spin_R} the dependence of the Boltzmann weights \eqref{bs:a_a_prime}-\eqref{bs:d_d_prime} on the spectral parameter $u$ and the rapidities $\zeta_{j}$ and $\zeta_{k}$. We have singled out the index $a$ in the above formulas to stress that it corresponds to an extra (auxiliary) space, different from the spaces labeled by $j=1, \ldots ,N$. Thus, we have derived the zero-curvature condition and the corresponding Lax connection \eqref{lax:L_operator} and \eqref{lax:M_operator} starting only from the Yang-Baxter equation \eqref{lax:YBE_fermionic} of the difference type in one of the spectral parameters. We also stress that since both the Yang-Baxter equation \eqref{lax:YBE_fermionic} and the decorated Yang-Baxter equation \eqref{bs:DYBE} are of the difference type in one of the spectral parameters, any quantity that is obtained from these two equations (for example, when constructing an interacting theory in the context of the Hubbard model) will inherit this dependence. \section{Jordan-Wigner transformation} \label{jw} Up to this point the Lax connection is written entirely in terms of the fermionic operators $c_{j,(s)}$ and $c^{\dagger}_{j,(s)}$. To obtain the usual graded Lax connection in matrix form, we must bosonize the auxiliary space denoted by the index $a$. 
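It is worth noting that the explicit inverse \eqref{lax:inverse_spin_R} relies only on the free fermion condition \eqref{bs:free_fermion_condition}, as the following numerical sketch illustrates: generic weights are chosen and $d=d'$ is then fixed by that condition (the numbers are placeholders, not the Bazhanov-Stroganov values).

```python
# Check that the explicit inverse of the spinless fermionic R-operator
# holds for any weights obeying the free fermion condition
# a a' + b b' - c c' - d d' = 0 (here c = c', d = d').
import numpy as np

sz = np.diag([1.0, -1.0]); sm = np.array([[0., 1.], [0., 0.]]); I2 = np.eye(2)
c1 = np.kron(sm, I2); c2 = np.kron(sz, sm); Id = np.eye(4)
n1, n2 = c1.T @ c1, c2.T @ c2

a, ap, b, bp, cc = 1.3, 0.8, 0.9, 0.7, 0.5
d = np.sqrt(a * ap + b * bp - cc**2 + 0j)     # enforce the free fermion condition

R = (-a * n1 @ n2 + ap * (Id - n1) @ (Id - n2) + b * n1 @ (Id - n2)
     + bp * n2 @ (Id - n1) + cc * (c1.T @ c2 + c2.T @ c1)
     - d * (c1.T @ c2.T + c1 @ c2))

Rinv = (-a * (Id - n1) @ (Id - n2) + ap * n1 @ n2 + b * (Id - n1) @ n2
        + bp * n1 @ (Id - n2) - cc * (c1.T @ c2 + c2.T @ c1)
        + d * (c1.T @ c2.T + c1 @ c2)) / (b * bp - cc**2)

print(np.max(np.abs(R @ Rinv - Id)))          # ~ 0 up to rounding
```

In the doubly-occupied/empty sector the product produces the factor $dd'-aa'$, which equals the normalization $bb'-cc'$ precisely because of the free fermion condition.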
To this end, we consider the following Jordan-Wigner transformation \cite{Essler:2005bk}: \begin{align} c_{a,\uparrow} &= c_a \otimes \mathbb{1} \xrightarrow{JW} \bigotimes_{k=1}^{N}\left(-\sigma_k^z\right) \bigotimes_{l=1}^{N}\left(-\tau_l^z\right) \otimes \sigma_a^-, \label{jw:caup} \\ c_{a,\uparrow}^{\dagger} &= c_a^{\dagger} \otimes \mathbb{1} \xrightarrow{JW} \sigma_a^+ \otimes \bigotimes_{l=1}^{N}\left(-\tau_l^z\right) \bigotimes_{k=1}^{N}\left(-\sigma_k^z\right), \label{jw:caupdg} \\ c_{a,\downarrow} &= \mathbb{1} \otimes c_a \xrightarrow{JW} (-\sigma_a^z) \otimes \bigotimes_{k=1}^{N}\left(-\sigma_k^z\right) \bigotimes_{l=1}^{N}\left(-\tau_l^z\right) \otimes \tau_a^-, \label{jw:cadown} \\ c_{a,\downarrow}^{\dagger} &= \mathbb{1} \otimes c_a^{\dagger} \xrightarrow{JW} \tau_a^+ \otimes \bigotimes_{l=1}^{N}\left(-\tau_l^z\right) \otimes \left(-\sigma_a^z\right) \otimes \bigotimes_{k=1}^{N}\left(-\sigma_k^z\right),\label{jw:cadowndg} \end{align} and apply it only to this auxiliary space to obtain the desired matrix structure. Here $\sigma^i_j$ and $\tau^i_j$, $i=x,y,z$ and $j=a,1,2,\ldots,N$ are two copies of the Pauli matrices, corresponding respectively to spin up and spin down components. Moreover, as usual, we introduce: \begin{align} \sigma^{\pm}_j = \frac{1}{2} \left( \sigma_j^x \pm i \sigma_j^y\right), \; \tau^{\pm}_j = \frac{1}{2} \left( \tau_j^x \pm i \tau_j^y\right). \end{align} We also note that the extra copies of $\sigma_a^z$ appearing in \eqref{jw:cadown} and \eqref{jw:cadowndg} are necessary to ensure the correct anticommutation relations. 
Thus, the L-operator \eqref{lax:L_operator} becomes: \begin{align} \mathcal{L}_j = \begin{pmatrix} \xi_{j,\uparrow}^{(1)} \xi_{j,\downarrow}^{(1)} & -\Lambda \: \xi_{j,\uparrow}^{(1)} \chi_{j,\downarrow}^{(1)} & \Lambda \: \chi_{j,\uparrow}^{(1)} \xi_{j,\downarrow}^{(1)} & - \chi_{j,\uparrow}^{(1)} \chi_{j,\downarrow}^{(1)} \\ \Lambda \: \xi_{j,\uparrow}^{(1)} \chi_{j,\downarrow}^{(2)} & \xi_{j,\uparrow}^{(1)} \xi_{j,\downarrow}^{(2)} & \chi_{j,\uparrow}^{(1)} \chi_{j,\downarrow}^{(2)} & \Lambda \: \chi_{j,\uparrow}^{(1)} \xi_{j,\downarrow}^{(2)}\\ -\Lambda \: \chi_{j,\uparrow}^{(2)} \xi_{j,\downarrow}^{(1)} & - \chi_{j,\uparrow}^{(2)} \chi_{j,\downarrow}^{(1)} & \xi_{j,\uparrow}^{(2)}\xi_{j,\downarrow}^{(1)} & \Lambda \:\xi_{j,\uparrow}^{(2)} \chi_{j,\downarrow}^{(1)} \\ \chi_{j,\uparrow}^{(2)} \chi_{j,\downarrow}^{(2)} & - \Lambda \: \chi_{j,\uparrow}^{(2)} \xi_{j,\downarrow}^{(2)} & - \Lambda\: \xi_{j,\uparrow}^{(2)} \chi_{j,\downarrow}^{(2)} & \xi_{j,\uparrow}^{(2)}\xi_{j,\downarrow}^{(2)} \label{jw:L-matrix} \end{pmatrix} \end{align} with \begin{align} \chi_{j,(s)}^{(1)} &= c\left(u;\zeta_{a,(s)},\zeta_{j,(s)} \right) c_{j,(s)} - d\left(u;\zeta_{a,(s)},\zeta_{j,(s)} \right) c^{\dagger}_{j,(s)}, \label{jw:chi_1}\\ \chi_{j,(s)}^{(2)} &= d'\left(u;\zeta_{a,(s)},\zeta_{j,(s)} \right) c_{j,(s)} + c'\left(u;\zeta_{a,(s)},\zeta_{j,(s)} \right) c^{\dagger}_{j,(s)}, \label{jw:chi_2}\\ \xi_{j,(s)}^{(1)} &= b\left(u;\zeta_{a,(s)},\zeta_{j,(s)} \right) - \left[ a\left(u;\zeta_{a,(s)},\zeta_{j,(s)} \right) + b\left(u;\zeta_{a,(s)},\zeta_{j,(s)} \right) \right]n_{j,(s)}, \label{jw:xi_1}\\ \xi_{j,(s)}^{(2)} &= a'\left(u;\zeta_{a,(s)},\zeta_{j,(s)} \right) + \left[ -a'\left(u;\zeta_{a,(s)},\zeta_{j,(s)} \right) + b'\left(u;\zeta_{a,(s)},\zeta_{j,(s)} \right) \right]n_{j,(s)}, \label{jw:xi_2} \end{align} and \begin{align} \Lambda = \bigotimes_{k=1}^{N} \sigma_k^z \bigotimes_{l=1}^{N} \tau_l^z. 
\label{jw:lambda} \end{align} The factor $\Lambda$ \eqref{jw:lambda} results in the non-local form of $L$-matrix \eqref{jw:L-matrix}, as it involves contributions from all the sites of the chain. It clearly is a direct consequence of the non-local character of the Jordan-Wigner transformation \eqref{jw:caup} - \eqref{jw:cadowndg}. To get rid of this non-locality, we consider the following gauge transformation: \begin{align} \mathcal{L}_j \to G \mathcal{L}_j G^{-1}, \; \text{with} \; G = G_{\uparrow}(\beta_1,\beta_2)\otimes_s G_{\downarrow}(\alpha_1,\alpha_2), \; \alpha_i, \beta_i \in \mathbb{C}, \; i=1,2, \label{jw:gauge_transformation} \end{align} where the gauge transformation acting on each spin component is given by: \begin{align} G_{\uparrow}(\beta_1,\beta_2) = \text{diag}\left(\beta_1 \Lambda, \beta_2\right), \; G_{\downarrow}(\alpha_1,\alpha_2) = \text{diag} \left(\alpha_1, \alpha_2 \Lambda \right). \label{jw:gauge_transformation_spin} \end{align} The gauge transformed $L$-matrix is local and can be written in terms of the following supertensor product \cite{Essler:2005bk}: \begin{align} \mathcal{L}_j = L_j^{(\uparrow)}(\beta_1,\beta_2) \otimes_s \tilde{L}_j^{(\downarrow)} (\alpha_1, \alpha_2) \label{jw:local_L-matrix} \end{align} of two copies of the spinless $L$-matrix \begin{align} L_j^{(s)}(\alpha_1,\alpha_2) = \begin{pmatrix} \xi_{j,(s)}^{(1)} & \left(\frac{\alpha_1}{\alpha_2}\right) \chi_{j,(s)}^{(1)}\\ \left(\frac{\alpha_2}{\alpha_1}\right) \chi_{j,(s)}^{(2)} & \xi_{j,(s)}^{(2)} \end{pmatrix}, \; \text{with} \; \tilde{L}_j^{(s)} (\alpha_1, \alpha_2) = \sigma^z L_j^{(s)}(\alpha_1,\alpha_2) \sigma^z. 
\label{jw:spinless_L-matrix} \end{align} The spinless $L$-matrix \eqref{jw:spinless_L-matrix} can be derived by applying a spinless version of the Jordan-Wigner transformation defined by \eqref{jw:caup} and \eqref{jw:caupdg} to the spinless $R$-operator \eqref{bs:fermionic_R}, followed by a gauge transformation similar to $G_{\uparrow}(\beta_1,\beta_2)$ or $G_{\downarrow}(\alpha_1,\alpha_2)$. It also corresponds to the graded $L$-matrix derived within the formalism of \cite{Essler:2005bk} in terms of graded projection operators. Before elaborating on this connection, we derive the graded $M$-operator in matrix form. Using the fact that the gauge transformed $L$-operator \eqref{jw:local_L-matrix} factors into the supertensor product of gauge transformed spinless $L$-matrices \eqref{jw:spinless_L-matrix}, the zero curvature condition \eqref{lax:zcc} fixes the form of the $M$-operator as: \begin{align} \mathcal{M}_j = \left( M_j^{(\uparrow)}(\beta_1,\beta_2) + \partial_t G_{\uparrow}(\beta_1,\beta_2) G_{\uparrow}^{-1}(\beta_1,\beta_2) \right) \otimes_s \mathbb{1} + \mathbb{1} \otimes_s \left( \tilde{M}_j^{(\downarrow)}(\alpha_1, \alpha_2) + \partial_t G_{\downarrow}(\alpha_1, \alpha_2) G_{\downarrow}^{-1}(\alpha_1, \alpha_2) \right). \label{jw:M-matrix} \end{align} Here, the spinless $M$-matrix \begin{align} M_j^{(s)}(\alpha_1,\alpha_2) = \begin{pmatrix} M^{(s)}_{11,j} & \left(\frac{\alpha_1}{\alpha_2}\right) M^{(s)}_{12,j} \\ \left(\frac{\alpha_2}{\alpha_1}\right) M^{(s)}_{21,j} & M^{(s)}_{22,j} \end{pmatrix}, \; \text{with} \; \tilde{M}_j^{(s)} (\alpha_1, \alpha_2) = \sigma^z M_j^{(s)}(\alpha_1,\alpha_2) \sigma^z \label{jw:spinless_M-matrix} \end{align} can similarly be derived by applying the Jordan-Wigner transformation \eqref{jw:caup} and \eqref{jw:caupdg} followed by the gauge transformation $G_{\uparrow}$ or $G_{\downarrow}$ to the spinless version of the $M$-operator \eqref{lax:M_operator}. 
The components of \eqref{jw:spinless_M-matrix} are: \begin{align} M^{(s)}_{11,j} &= \frac{i}{\beta} \frac{1}{b b' - c c'} \Big\{ \beta (b'\dot{b} - c \dot{c}') - (a_0-c_0)c c' n_{j-1,(s)} + \left[ \beta \left(-a' \dot{a} -b'\dot{b} + c \dot{c}'+ d \dot{d}' \right) \right. \Big. \label{jw:M11}\\ &+ \Big. \left. (a_0-c_0)(c c'- d d') n_{j-1,(s)} \right] n_{j,(s)} + \left[ (a b' + b b'-c c') \chi^{(4)}_{j-1,(s)} + c d' \chi^{(3)}_{j-1,(s)}\right]c_{j,(s)} \Big. \nonumber \\ &+ \Big.\left[ (a' b - b b'+c c') \chi^{(3)}_{j-1,(s)} + c' d \chi^{(4)}_{j-1,(s)}\right]c_{j,(s)}^{\dagger} \Big\},\nonumber \\ M^{(s)}_{22,j} &= \frac{i}{\beta} \frac{1}{b b' - c c'} \Big\{ \beta (- a\dot{a}' + d' \dot{d}) + (a_0-c_0)d d' n_{j-1,(s)} + \left[ \beta \left(a \dot{a}' + b\dot{b}' - c' \dot{c} - d' \dot{d} \right) \right. \Big. \label{jw:M22} \\ &+ \Big. \left. (a_0-c_0)(c c'- d d') n_{j-1,(s)} \right] n_{j,(s)} + \left[ (a b' + b b'-c c') \chi^{(4)}_{j-1,(s)} + c d' \chi^{(3)}_{j-1,(s)}\right]c_{j,(s)} \Big.\nonumber \\ &+ \Big.\left[ (a' b - b b'+c c') \chi^{(3)}_{j-1,(s)} + c' d \chi^{(4)}_{j-1,(s)}\right]c_{j,(s)}^{\dagger} \Big\}, \nonumber \\ M^{(s)}_{12,j} &=\frac{i}{\beta} \frac{1}{b b' - c c'} \Big\{ a'c \chi^{(3)}_{j-1,(s)} + b'd \chi^{(4)}_{j-1,(s)} + \left[ \beta (b'\dot{c} - c \dot{b}') - (a_0-c_0) b'c n_{j-1,(s)} \right] c_{j,(s)} \Big. \label{jw:M12} \\ &+ \Big. \left[ \beta (d \dot{a}' - a' \dot{d}) - (a_0-c_0) a'd n_{j-1,(s)} \right] c_{j,(s)}^{\dagger} \Big\}, \nonumber \\ M^{(s)}_{21,j} &=\frac{i}{\beta} \frac{1}{b b' - c c'} \Big\{ b d' \chi^{(3)}_{j-1,(s)} + a c' \chi^{(4)}_{j-1,(s)} + \left[ \beta (-a\dot{d}' + d' \dot{a}) + (a_0-c_0) a d' n_{j-1,(s)} \right] c_{j,(s)} \Big. \label{jw:M21} \\ &+ \Big. \left[ \beta (- c' \dot{b} + b \dot{c}') + (a_0-c_0) b c' n_{j-1,(s)} \right] c_{j,(s)}^{\dagger} \Big\}. 
\nonumber \end{align} To avoid cluttering, we omitted all the arguments of the Boltzmann weights as well as of the quantities derived thereof, such as the nonzero coefficients $a_0, b_0, c_0, d_0$ of $\Gamma_{j-1,j}^{(s)}(\zeta)$ as in \eqref{bs:Gammas_s_def} and the coefficients $\dot{a}, \dot{a}', \dot{b}, \dot{b}'$, $\dot{c}, \dot{c}', \dot{d}, \dot{d}'$ of $\partial_{v} R^{(s)}_{a j}(u-v,\zeta_a, \zeta_j)|_{v=0}$. The parameter $\beta$ also depends on $\rho$ and $\zeta$ as in \eqref{lax:M_operator}. We also introduced the quantities: \begin{align} \chi^{(3)}_{j,(s)} &= b_0(\zeta) c_{j,(s)} - d_0(\zeta) c_{j,(s)}^{\dagger},\\ \chi^{(4)}_{j,(s)} &= - d_0(\zeta) c_{j,(s)} + b_0(\zeta) c_{j,(s)}^{\dagger}. \end{align} The contribution from the derivatives of the gauge matrices $G_{\uparrow}$ and $G_{\downarrow}$ to \eqref{jw:M-matrix} can be easily computed as: \begin{align} \left( \partial_t G_{\uparrow} G_{\uparrow}^{-1} \right) \otimes_s \mathbb{1} + \mathbb{1} \otimes_s \left( \partial_t G_{\downarrow} G_{\downarrow}^{-1} \right) = \text{diag} \left( \partial_t \Lambda \: \Lambda, 2 \partial_t \Lambda \: \Lambda, 0, \partial_t \Lambda \: \Lambda\right), \end{align} where \begin{align} \partial_t \Lambda \: \Lambda &= 2 \sum_{j=1}^{N} \sum_{s=\uparrow,\downarrow} \left[ \partial_t n_{j,(s)}, n_{j,(s)} \right] = 4 i \sum_{j=1}^{N} \sum_{s=\uparrow,\downarrow} \left[ b_0 \left(\Delta_{j,j+1,(s)}+\Delta_{j+1,j,(s)}\right) + d_0 \left(\tilde{\Delta}^{(\dagger)}_{j,j+1,(s)}-\tilde{\Delta}_{j,j+1,(s)}\right) \right] \end{align} corresponds to a multiple of the Hamiltonian of the $XY$-model. Finally, to elaborate on the connection with the formalism of \cite{Essler:2005bk}, we consider the invariance of the Yang-Baxter relations \eqref{bs:YBE} under the simultaneous redefinition of the Boltzmann weights: $a \leftrightarrow -a$, $a' \leftrightarrow -a'$, $c \leftrightarrow -c$, $c' \leftrightarrow -c'$, to define an equivalent representation of \eqref{bs:YBE}. 
Thus, denoting the $R$-matrix \eqref{bs:R_matrix_orig} elements as: \begin{align} \hat{R}^{11}_{11} = -a, \; \hat{R}^{11}_{22} = d, \; \hat{R}^{12}_{12} = b, \; \hat{R}^{12}_{21} = -c, \; \hat{R}^{21}_{12} = - c', \; \hat{R}^{21}_{21} = b', \; \hat{R}^{22}_{11} = d', \; \hat{R}^{22}_{22} = -a', \label{jw:R_matrix_parametrization} \end{align} it is easy to verify that it satisfies the compatibility condition of Kulish and Sklyanin \cite{Kulish:1980ii}: \begin{align} \hat{R}^{\alpha \beta}_{\gamma \delta}(u;\zeta_j,\zeta_k) = (-1)^{p_{\alpha} + p_{\beta} + p_{\gamma} + p_{\delta}} \hat{R}^{\alpha \beta}_{\gamma \delta}(u;\zeta_j,\zeta_k), \label{jw:compatibility_condition} \end{align} which forces some elements of the $R$-matrix to vanish, so that it is compatible with the grading of the underlying vector space. Here, $p$ is a parity function defined on the homogeneous components of a finite dimensional local space of states $V = V_1 \oplus V_2$, so that $p_{\alpha} := p\left(\mathbf{v}_{\alpha}\right), \; \mathbf{v}_\alpha \in V_{\alpha}, \; \alpha=1,2$. For the case under consideration \eqref{jw:R_matrix_parametrization}, we have $p_1=0, \; p_2 =1$. Thus, it is possible to define a graded $L$-matrix at site $j$ as \cite{Essler:2005bk}: \begin{align} {L_j}_{\beta}^{\alpha}(u;\zeta_a,\zeta_j) = (-1)^{p_{\alpha} p_{\gamma}} \hat{R}^{\alpha \gamma}_{\beta \delta}(u;\zeta_a,\zeta_j) \: {e_j}^{\delta}_{\gamma}, \label{jw:L-korepin} \end{align} so that it satisfies the usual bilinear relations: \begin{align} \tilde{R}(\eta_{ab};\zeta_a,\zeta_b) \left( L_j(\eta_{aj};\zeta_a,\zeta_j) \otimes_s L_j (\eta_{bj};\zeta_b,\zeta_j) \right) = \left( L_j(\eta_{bj};\zeta_b,\zeta_j) \otimes_s L_j(\eta_{aj};\zeta_a,\zeta_j)\right) \tilde{R}(\eta_{ab};\zeta_a,\zeta_b), \label{jw:L_bilinear_relations} \end{align} where $\tilde{R}^{\alpha \beta}_{\gamma \delta}(u;\zeta_j,\zeta_k) = \hat{R}^{\beta \alpha}_{\gamma \delta}(u;\zeta_j,\zeta_k)$. 
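The grading constraint \eqref{jw:compatibility_condition} simply states that all entries of odd total parity vanish; with the assignment \eqref{jw:R_matrix_parametrization} and $p_{1}=0$, $p_{2}=1$ this is immediate to check, as the following sketch illustrates (generic nonzero numbers stand in for the Boltzmann weights):

```python
# Kulish-Sklyanin grading check: with p_1 = 0, p_2 = 1, every entry of the
# free fermion R-matrix with odd total parity p_a + p_b + p_c + p_d vanishes.
import numpy as np

a, ap, b, bp, cw, cp, d, dp = 1.1, 0.8, 0.9, 0.7, 0.5, 0.5, 0.3, 0.3
R = np.zeros((2, 2, 2, 2))
# indices (alpha, beta, gamma, delta); value 0 <-> index 1, 1 <-> index 2 of the text
R[0, 0, 0, 0] = -a;  R[0, 0, 1, 1] = d
R[0, 1, 0, 1] = b;   R[0, 1, 1, 0] = -cw
R[1, 0, 0, 1] = -cp; R[1, 0, 1, 0] = bp
R[1, 1, 0, 0] = dp;  R[1, 1, 1, 1] = -ap

p = [0, 1]   # parities of the two basis vectors
ok = all(R[al, be, ga, de] == 0
         for al in range(2) for be in range(2)
         for ga in range(2) for de in range(2)
         if (p[al] + p[be] + p[ga] + p[de]) % 2 == 1)
print(ok)
```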
The graded projection operators ${e_j}^{\beta}_{\alpha}$ appearing in \eqref{jw:L-korepin} can be defined through the graded commutation relations: \begin{align} {e_j}_{\alpha}^{\beta} {e_j}_{\gamma}^{\delta} &= \delta_{\gamma}^{\beta} {e_j}_{\alpha}^{\delta}, \label{jw:graded_projection_operators_algebra}\\ {e_j}_{\alpha}^{\beta} {e_k}_{\gamma}^{\delta} &= (-1)^{\left(p_{\alpha} + p_{\beta}\right) \left(p_{\gamma} + p_{\delta}\right)} {e_k}_{\gamma}^{\delta} {e_j}_{\alpha}^{\beta}. \nonumber \end{align} A possible matrix representation of the algebra \eqref{jw:graded_projection_operators_algebra} in terms of fermionic creation and annihilation operators is \begin{align} e_j = \begin{pmatrix} c^{\dagger}_{j} c_j & c^{\dagger}_j \\ c_j & 1 - c^{\dagger}_{j} c_j \end{pmatrix}. \label{jw:graded_projection_operator_rep} \end{align} Plugging \eqref{jw:graded_projection_operator_rep} into \eqref{jw:L-korepin} with the parametrization \eqref{jw:R_matrix_parametrization} leads to the spinless $L$-matrix \eqref{jw:spinless_L-matrix} for $\alpha_1 = -1$ and $\alpha_2 = 1$. To conclude this section, we note that the $L$-matrix \eqref{jw:spinless_L-matrix} reduces to that of the $XY$-model, which is the usual building block for the construction of the $R$-matrix of the Hubbard model. This special case can be obtained by fixing $\kappa=0$ and $\zeta_1 = \zeta_2=\nicefrac{\pi}{2}$, and normalizing the Boltzmann weights with respect to $c(u)$. The resulting Boltzmann weights are: \begin{align} a(u) = a'(u) = \cos \frac{u}{2}, \; b(u) = b'(u) = \pm \sin \frac{u}{2}, \; c(u) = 1, \; d(u) =0. \label{jw:XY_limit} \end{align} \section{Conclusion} We have constructed the Lax connection for the free fermion model starting from the fermionic form of Bazhanov and Stroganov's solution of the Yang-Baxter equation, which is of difference type in one of the spectral parameters and is most suitable for obtaining a relativistic theory in the continuous limit. 
We have employed Umeno's fermionic $R$-operator formalism, as it immediately yields the fermionic form of the Lax connection, thus making the procedure of taking the continuous limit a rather straightforward calculation using the explicit expressions given in section \ref{jw}. We have therefore made a step towards relating the Lax connection of the continuous fermionic model in \cite{Melikyan:2012kj} to a lattice model, with the larger goal, as discussed in the introduction, of quantizing non-ultralocal models. The results of this investigation will be presented in a future publication. \bibliographystyle{elsarticle-num} 
\begin{vquote} \title{This is a specimen title\tnoteref{t1,t2}} \tnotetext[t1]{This document is the results of the research project funded by the National Science Foundation.} \tnotetext[t2]{The second title footnote which is a longer text matter to fill through the whole text width and overflow into another line in the footnotes area of the first page.} \end{vquote} \begin{vquote} \author[1]{Jos Migchielsen\corref{cor1}% \fnref{fn1}} \ead{[email protected]} \author[2]{CV Radhakrishnan\fnref{fn2}} \ead{[email protected]} \author[3]{CV Rajagopal\fnref{fn1,fn3}} \ead[url]{www.stmdocs.in} \end{vquote} \begin{vquote} \cortext[cor1]{Corresponding author} \fntext[fn1]{This is the first author footnote.} \fntext[fn2]{Another author footnote, this is a very long footnote and it should be a really long footnote. But this footnote is not yet sufficiently long enough to make two lines of footnote text.} \fntext[fn3]{Yet another author footnote.} \address[1]{Elsevier B.V., Radarweg 29, 1043 NX Amsterdam, The Netherlands} \address[2]{Sayahna Foundations, JWRA 34, Jagathy, Trivandrum 695014, India} \address[3]{STM Document Engineering Pvt Ltd., Mepukada, Malayinkil, Trivandrum 695571, India} \end{vquote} The output of the above \TeX{} source is given in Clips~\ref{clip1} and \ref{clip2}. The header portion or title area is given in Clip~\ref{clip1} and the footer area is given in Clip~\ref{clip2}. \deforange{blue!70} \src{Header of the title page.} \includeclip{1}{130 612 477 707}{1psingleauthorgroup.pdf \deforange{orange} \deforange{blue!70} \src{Footer of the title page.} \includeclip{1}{93 135 499 255}{1pseperateaug.pdf \deforange{orange} Most of the commands such as \verb+\title+, \verb+\author+, \verb+\address+ are self explanatory. Various components are linked to each other by a label--reference mechanism; for instance, title footnote is linked to the title with a footnote mark generated by referring to the \verb+\label+ string of the \verb=\tnotetext=. 
We have used similar commands such as \verb=\tnoteref= (to link title note to title); \verb=\corref= (to link corresponding author text to corresponding author); \verb=\fnref= (to link footnote text to the relevant author names). \TeX{} needs two compilations to resolve the footnote marks in the preamble part. Given below are the syntax of various note marks and note texts. \begin{vquote} \tnoteref{<label(s)>} \corref{<label(s)>} \fnref{<label(s)>} \tnotetext[<label>]{<title note text>} \cortext[<label>]{<corresponding author note text>} \fntext[<label>]{<author footnote text>} \end{vquote} \noindent where \verb=<label(s)>= can be either one or more comma delimited label strings. The optional arguments to the \verb=\author= command holds the ref label(s) of the address(es) to which the author is affiliated while each \verb=\address= command can have an optional argument of a label. In the same manner, \verb=\tnotetext=, \verb=\fntext=, \verb=\cortext= will have optional arguments as their respective labels and note text as their mandatory argument. The following example code provides the markup of the second type of author-affiliation. \begin{vquote} \author{Jos Migchielsen\corref{cor1}% \fnref{fn1}} \ead{[email protected]} \address{Elsevier B.V., Radarweg 29, 1043 NX Amsterdam, The Netherlands} \author{CV Radhakrishnan\fnref{fn2}} \ead{[email protected]} \address{Sayahna Foundations, JWRA 34, Jagathy, Trivandrum 695014, India} \author{CV Rajagopal\fnref{fn1,fn3}} \ead[url]{www.stmdocs.in} \address{STM Document Engineering Pvt Ltd., Mepukada, Malayinkil, Trivandrum 695571, India} \end{vquote} \vspace*{-.5pc} \begin{vquote} \cortext[cor1]{Corresponding author} \fntext[fn1]{This is the first author footnote.} \fntext[fn2]{Another author footnote, this is a very long footnote and it should be a really long footnote. 
But this footnote is not yet sufficiently long enough to make two lines of footnote text.} \end{vquote} The output of the above \TeX{} source is given in Clip~\ref{clip3}. \deforange{blue!70} \src{Header of the title page..} \includeclip{1}{119 563 468 709}{1pseperateaug.pdf \deforange{orange} \pagebreak Clip~\ref{clip4} shows the output after giving \verb+doubleblind+ class option. \deforange{blue!70} \src{Double blind article} \includeclip{1}{124 567 477 670}{elstest-1pdoubleblind.pdf \deforange{orange} \vspace*{-.5pc} The frontmatter part has further environments such as abstracts and keywords. These can be marked up in the following manner: \begin{vquote} \begin{abstract} In this work we demonstrate the formation of a new type of polariton on the interface between a .... \end{abstract} \end{vquote} \vspace*{-.5pc} \begin{vquote} \begin{keyword} quadruple exiton \sep polariton \sep WGM \end{keyword} \end{vquote} \noindent Each keyword shall be separated by a \verb+\sep+ command. \textsc{msc} classifications shall be provided in the keyword environment with the commands \verb+\MSC+. \verb+\MSC+ accepts an optional argument to accommodate future revisions. eg., \verb=\MSC[2008]=. The default is 2000.\looseness=-1 \subsection{New page} Sometimes you may need to give a page-break and start a new page after title, author or abstract. Following commands can be used for this purpose. \begin{vquote} \newpageafter{title} \newpageafter{author} \newpageafter{abstract} \end{vquote} \begin{itemize} \leftskip-2pc \item [] {\tt\color{verbcolor} \verb+\newpageafter{title}+} typeset the title alone on one page. \item [] {\tt\color{verbcolor} \verb+\newpageafter{author}+} typeset the title and author details on one page. \item [] {\tt\color{verbcolor} \verb+\newpageafter{abstract}+} typeset the title, author details and abstract \& keywords one one page. 
\end{itemize} \section{Floats} {Figures} may be included using the command, \verb+\includegraphics+ in combination with or without its several options to further control graphic. \verb+\includegraphics+ is provided by \file{graphic[s,x].sty} which is part of any standard \LaTeX{} distribution. \file{graphicx.sty} is loaded by default. \LaTeX{} accepts figures in the postscript format while pdf\LaTeX{} accepts \file{*.pdf}, \file{*.mps} (metapost), \file{*.jpg} and \file{*.png} formats. pdf\LaTeX{} does not accept graphic files in the postscript format. The \verb+table+ environment is handy for marking up tabular material. If users want to use \file{multirow.sty}, \file{array.sty}, etc., to fine control/enhance the tables, they are welcome to load any package of their choice and \file{elsarticle.cls} will work in combination with all loaded packages. \section[Theorem and ...]{Theorem and theorem like environments} \file{elsarticle.cls} provides a few shortcuts to format theorems and theorem-like environments with ease. In all commands the options that are used with the \verb+\newtheorem+ command will work exactly in the same manner. \file{elsarticle.cls} provides three commands to format theorem or theorem-like environments: \begin{vquote} \newtheorem{thm}{Theorem} \newtheorem{lem}[thm]{Lemma} \newdefinition{rmk}{Remark} \newproof{pf}{Proof} \newproof{pot}{Proof of Theorem \ref{thm2}} \end{vquote} The \verb+\newtheorem+ command formats a theorem in \LaTeX's default style with italicized font, bold font for theorem heading and theorem number at the right hand side of the theorem heading. It also optionally accepts an argument which will be printed as an extra heading in parentheses. \begin{vquote} \begin{thm} For system (8), consensus can be achieved with $\|T_{\omega z}$ ... \begin{eqnarray}\label{10} .... 
\end{eqnarray} \end{thm} \end{vquote} Clip~\ref{clip5} will show you how some text enclosed between the above code\goodbreak \noindent looks like: \vspace*{6pt} \deforange{blue!70} \src{{\ttfamily\color{verbcolor}\expandafter\@gobble\string\\ newtheorem}} \includeclip{2}{1 1 453 120}{jfigs.pdf} \deforange{orange} The \verb+\newdefinition+ command is the same in all respects as its\linebreak \verb+\newtheorem+ counterpart except that the font shape is roman instead of italic. Both \verb+\newdefinition+ and \verb+\newtheorem+ commands automatically define counters for the environments defined. \vspace*{6pt} \deforange{blue!70} \src{{\ttfamily\color{verbcolor}\expandafter\@gobble\string\\ newdefinition}} \includeclip{1}{1 1 453 105}{jfigs.pdf} \deforange{orange} The \verb+\newproof+ command defines proof environments with upright font shape. No counters are defined. \vspace*{6pt} \deforange{blue!70} \src{{\ttfamily\color{verbcolor}\expandafter\@gobble\string\\ newproof}} \includeclip{3}{1 1 453 65}{jfigs.pdf} \deforange{orange} Users can also make use of \verb+amsthm.sty+ which will override all the default definitions described above. \section[Enumerated ...]{Enumerated and Itemized Lists} \file{elsarticle.cls} provides an extended list processing macros which makes the usage a bit more user friendly than the default \LaTeX{} list macros. With an optional argument to the \verb+\begin{enumerate}+ command, you can change the list counter type and its attributes. \begin{vquote} \begin{enumerate}[1.] \item The enumerate environment starts with an optional argument `1.', so that the item counter will be suffixed by a period. \item You can use `a)' for alphabetical counter and '(i)' for roman counter. \begin{enumerate}[a)] \item Another level of list with alphabetical counter. \item One more item before we start another. 
\end{vquote} \deforange{blue!70} \src{List -- Enumerate} \includeclip{4}{1 1 453 185}{jfigs.pdf} \deforange{orange} Further, the enhanced list environment allows one to prefix a string like `step' to all the item numbers. \begin{vquote} \begin{enumerate}[Step 1.] \item This is the first step of the example list. \item Obviously this is the second step. \item The final step to wind up this example. \end{enumerate} \end{vquote} \deforange{blue!70} \src{List -- enhanced} \includeclip{5}{1 1 313 83}{jfigs.pdf} \deforange{orange} \section{Cross-references} In electronic publications, articles may be internally hyperlinked. Hyperlinks are generated from proper cross-references in the article. For example, the words \textcolor{black!80}{Fig.~1} will never be more than simple text, whereas the proper cross-reference \verb+\ref{tiger}+ may be turned into a hyperlink to the figure itself: \textcolor{blue}{Fig.~1}. In the same way, the words \textcolor{blue}{Ref.~[1]} will fail to turn into a hyperlink; the proper cross-reference is \verb+\cite{Knuth96}+. Cross-referencing is possible in \LaTeX{} for sections, subsections, formulae, figures, tables, and literature references. \section[Mathematical ...]{Mathematical symbols and formulae} Many physical/mathematical sciences authors require more mathematical symbols than the few that are provided in standard \LaTeX. A useful package for additional symbols is the \file{amssymb} package, developed by the American Mathematical Society. This package includes such oft-used symbols as $\lesssim$ (\verb+\lesssim+), $\gtrsim$ (\verb+\gtrsim+) or $\hbar$ (\verb+\hbar+). Note that your \TeX{} system should have the \file{msam} and \file{msbm} fonts installed. If you need only a few symbols, such as $\Box$ (\verb+\Box+), you might try the package \file{latexsym}. Another point which would require authors' attention is the breaking up of long equations. 
When you use \file{elsarticle.cls} for formatting your submissions in the \verb+preprint+ mode, the document is formatted in single column style with a text width of 384pt or 5.3in. When this document is formatted for final print and if the journal happens to be a double column journal, the text width will be reduced to 224pt at for 3+ double column and 5+ journals respectively. All the nifty fine-tuning in equation breaking done by the author goes to waste in such cases. Therefore, authors are requested to check this problem by typesetting their submissions in final format as well just to see if their equations are broken at appropriate places, by changing appropriate options in the document class loading command, which is explained in section~\ref{sec:usage}, \nameref{sec:usage}. This allows authors to fix any equation breaking problem before submission for publication. \file{elsarticle.cls} supports formatting the author submission in different types of final format. This is further discussed in section \ref{sec:final}, \nameref{sec:final}. \subsection*{Displayed equations and double column journals} Many Elsevier journals print their text in two columns. Since the preprint layout uses a larger line width than such columns, the formulae are too wide for the line width in print. Here is an example of an equation (see equation 6) which is perfect in a single column preprint format: \bigskip \setlength\Sep{6pt} \src{See equation (6)} \deforange{blue!70} \includeclip{4}{105 500 500 700}{1psingleauthorgroup.pdf} \deforange{orange} \noindent When this document is typeset for publication in a model 3+ journal with double columns, the equation will overlap the second column text matter if the equation is not broken at the appropriate location. 
\vspace*{6pt} \deforange{blue!70} \src{See equation (6) overprints into second column} \includeclip{3}{59 421 532 635}{elstest-3pd.pdf} \deforange{orange} \vspace*{6pt} \noindent The typesetter will try to break the equation which need not necessarily be to the liking of the author or as it happens, typesetter's break point may be semantically incorrect. Therefore, authors may check their submissions for the incidence of such long equations and break the equations at the correct places so that the final typeset copy will be as they wish. \section{Bibliography} Three bibliographic style files (\verb+*.bst+) are provided --- \file{elsarticle-num.bst}, \file{elsarticle-num-names.bst} and \file{elsarticle-harv.bst} --- the first one can be used for the numbered scheme, second one for numbered with new options of \file{natbib.sty}. The third one is for the author year scheme. In \LaTeX{} literature, references are listed in the \verb+thebibliography+ environment. Each reference is a \verb+\bibitem+ and each \verb+\bibitem+ is identified by a label, by which it can be cited in the text: \verb+\bibitem[Elson et al.(1996)]{ESG96}+ is cited as \verb+\citet{ESG96}+. \noindent In connection with cross-referencing and possible future hyperlinking it is not a good idea to collect more that one literature item in one \verb+\bibitem+. The so-called Harvard or author-year style of referencing is enabled by the \LaTeX{} package \file{natbib}. With this package the literature can be cited as follows: \begin{enumerate}[\textbullet] \item Parenthetical: \verb+\citep{WB96}+ produces (Wettig \& Brown, 1996). \item Textual: \verb+\citet{ESG96}+ produces Elson et al. (1996). \item An affix and part of a reference: \verb+\citep[e.g.][Ch. 2]{Gea97}+ produces (e.g. Governato et al., 1997, Ch. 2). \end{enumerate} In the numbered scheme of citation, \verb+\cite{<label>}+ is used, since \verb+\citep+ or \verb+\citet+ has no relevance in the numbered scheme. 
\file{natbib} package is loaded by \file{elsarticle} with \verb+numbers+ as default option. You can change this to author-year or harvard scheme by adding option \verb+authoryear+ in the class loading command. If you want to use more options of the \file{natbib} package, you can do so with the \verb+\biboptions+ command, which is described in the section \ref{sec:usage}, \nameref{sec:usage}. For details of various options of the \file{natbib} package, please take a look at the \file{natbib} documentation, which is part of any standard \LaTeX{} installation. In addition to the above standard \verb+.bst+ files, there are 10 journal-specific \verb+.bst+ files also available. Instruction for using these \verb+.bst+ files can be found at \href{http://support.stmdocs.in/wiki/index.php?title=Model-wise_bibliographic_style_files} {http://support.stmdocs.in} \section{Graphical abstract and highlights} A template for adding graphical abstract and highlights are available now. This will appear as the first two pages of the PDF before the article content begins. \pagebreak Please refer below to see how to code them. \begin{vquote} .... .... \end{abstract} \begin{graphicalabstract} \end{graphicalabstract} \begin{highlights} \item Research highlight 1 \item Research highlight 2 \end{highlights} \begin{keyword} .... .... \end{vquote} \section{Final print}\label{sec:final} The authors can format their submission to the page size and margins of their preferred journal. \file{elsarticle} provides four class options for the same. But it does not mean that using these options you can emulate the exact page layout of the final print copy. \lmrgn=3em \begin{description} \item [\texttt{1p}:] $1+$ journals with a text area of 384pt $\times$ 562pt or 13.5cm $\times$ 19.75cm or 5.3in $\times$ 7.78in, single column style only. \item [\texttt{3p}:] $3+$ journals with a text area of 468pt $\times$ 622pt or 16.45cm $\times$ 21.9cm or 6.5in $\times$ 8.6in, single column style. 
\item [\texttt{twocolumn}:] should be used along with 3p option if the journal is $3+$ with the same text area as above, but double column style. \item [\texttt{5p}:] $5+$ with text area of 522pt $\times$ 682pt or 18.35cm $\times$ 24cm or 7.22in $\times$ 9.45in, double column style only. \end{description} Following pages have the clippings of different parts of the title page of different journal models typeset in final format. Model $1+$ and $3+$ will have the same look and feel in the typeset copy when presented in this document. That is also the case with the double column $3+$ and $5+$ journal article pages. The only difference will be wider text width of higher models. Therefore we will look at the different portions of a typical single column journal page and that of a double column article in the final format. \begin{center} \hypertarget{bsc}{} \hyperlink{sc}{ {\bf [Specimen single column article -- Click here]} } \hypertarget{bsc}{} \hyperlink{dc}{ {\bf [Specimen double column article -- Click here]} } \end{center} \src{}\hypertarget{sc}{} \deforange{blue!70} \hyperlink{bsc}{\includeclip{1}{88 120 514 724}{elstest-1p.pdf}} \deforange{orange} \src{}\hypertarget{dc}{} \deforange{blue!70} \hyperlink{bsc}{\includeclip{1}{27 61 562 758}{elstest-5p.pdf}} \deforange{orange} \end{document} \section{Introduction} The Elsevier cas-sc class is based on the standard article class and supports almost all of the functionality of that class. In addition, it features commands and options to format the \begin{itemize} \item document style \item baselineskip \item front matter \item keywords and MSC codes \item theorems, definitions and proofs \item lables of enumerations \item citation style and labeling. 
\end{itemize} This class depends on the following packages for its proper functioning: \begin{enumerate} \itemsep=0pt \item {natbib.sty} for citation processing; \item {geometry.sty} for margin settings; \item {fleqn.clo} for left aligned equations; \item {graphicx.sty} for graphics inclusion; \item {hyperref.sty} optional packages if hyperlinking is required in the document; \end{enumerate} All the above packages are part of any standard \LaTeX{} installation. Therefore, the users need not be bothered about downloading any extra packages. \section{Installation} The package is available at author resources page at Elsevier (\url{http://www.elsevier.com/locate/latex}). The class may be moved or copied to a place, usually, \verb+$TEXMF/tex/latex/elsevier/+, or a folder which will be read by \LaTeX{} during document compilation. The \TeX{} file database needs updation after moving/copying class file. Usually, we use commands like \verb+mktexlsr+ or \verb+texhash+ depending upon the distribution and operating system. \section{Front matter} The author names and affiliations could be formatted in two ways: \begin{enumerate}[(1)] \item Group the authors per affiliation. \item Use footnotes to indicate the affiliations. \end{enumerate} See the front matter of this document for examples. You are recommended to conform your choice to the journal you are submitting to. \section{Bibliography styles} There are various bibliography styles available. You can select the style of your choice in the preamble of this document. These styles are Elsevier styles based on standard styles like Harvard and Vancouver. Please use Bib\TeX\ to generate your bibliography and include DOIs whenever available. Here are two sample references: See \citet{Fortunato2010}. Also refer \citet{Fortunato2010,NewmanGirvan2004}. More citations are here \citep{Fortunato2010,Vehlowetal2013}. 
\section{Floats} {Figures} may be included using the command, \verb+\includegraphics+ in combination with or without its several options to further control graphic. \verb+\includegraphics+ is provided by {graphic[s,x].sty} which is part of any standard \LaTeX{} distribution. {graphicx.sty} is loaded by default. \LaTeX{} accepts figures in the postscript format while pdf\LaTeX{} accepts {*.pdf}, {*.mps} (metapost), {*.jpg} and {*.png} formats. pdf\LaTeX{} does not accept graphic files in the postscript format. \begin{figure} \centering \includegraphics[scale=.75]{figs/Fig1.pdf} \caption{The evanescent light - $1S$ quadrupole coupling ($g_{1,l}$) scaled to the bulk exciton-photon coupling ($g_{1,2}$). The size parameter $kr_{0}$ is denoted as $x$ and the \PMS is placed directly on the cuprous oxide sample ($\delta r=0$, See also Table \protect\ref{tbl1}).} \label{FIG:1} \end{figure} The \verb+table+ environment is handy for marking up tabular material. If users want to use {multirow.sty}, {array.sty}, etc., to fine control/enhance the tables, they are welcome to load any package of their choice and {cas-sc.cls} will work in combination with all loaded packages. \begin{table}[width=.9\linewidth,cols=4,pos=h] \caption{This is a test caption. This is a test caption. This is a test caption. This is a test caption.}\label{tbl1} \begin{tabular*}{\tblwidth}{@{} LLLL@{} } \toprule Col 1 & Col 2 & Col 3 & Col4\\ \midrule 12345 & 12345 & 123 & 12345 \\ 12345 & 12345 & 123 & 12345 \\ 12345 & 12345 & 123 & 12345 \\ 12345 & 12345 & 123 & 12345 \\ 12345 & 12345 & 123 & 12345 \\ \bottomrule \end{tabular*} \end{table} \section[Theorem and ...]{Theorem and theorem like environments} {cas-sc.cls} provides a few shortcuts to format theorems and theorem-like environments with ease. In all commands the options that are used with the \verb+\newtheorem+ command will work exactly in the same manner. 
{cas-sc.cls} provides three commands to format theorem or theorem-like environments: \begin{verbatim} \newtheorem{theorem}{Theorem} \newtheorem{lemma}[theorem]{Lemma} \newdefinition{rmk}{Remark} \newproof{pf}{Proof} \newproof{pot}{Proof of Theorem \ref{thm2}} \end{verbatim} The \verb+\newtheorem+ command formats a theorem in \LaTeX's default style with italicized font, bold font for theorem heading and theorem number at the right hand side of the theorem heading. It also optionally accepts an argument which will be printed as an extra heading in parentheses. \begin{verbatim} \begin{theorem} For system (8), consensus can be achieved with $\|T_{\omega z}$ ... \begin{eqnarray}\label{10} .... \end{eqnarray} \end{theorem} \end{verbatim} \newtheorem{theorem}{Theorem} \begin{theorem} For system (8), consensus can be achieved with $\|T_{\omega z}$ ... \begin{eqnarray}\label{10} .... \end{eqnarray} \end{theorem} The \verb+\newdefinition+ command is the same in all respects as its \verb+\newtheorem+ counterpart except that the font shape is roman instead of italic. Both \verb+\newdefinition+ and \verb+\newtheorem+ commands automatically define counters for the environments defined. The \verb+\newproof+ command defines proof environments with upright font shape. No counters are defined. \section[Enumerated ...]{Enumerated and Itemized Lists} {cas-sc.cls} provides an extended list processing macros which makes the usage a bit more user friendly than the default \LaTeX{} list macros. With an optional argument to the \verb+\begin{enumerate}+ command, you can change the list counter type and its attributes. \begin{verbatim} \begin{enumerate}[1.] \item The enumerate environment starts with an optional argument `1.', so that the item counter will be suffixed by a period. \item You can use `a)' for alphabetical counter and '(i)' for roman counter. \begin{enumerate}[a)] \item Another level of list with alphabetical counter. \item One more item before we start another. 
\section{Introduction} \label{sec:intro} Owing to the cost of machinery, expertise, and time, the molecular testing approaches, namely, the reverse transcription polymerase chain reaction (RT-PCR) test and the rapid antigen test (RAT), cannot be easily scaled up and deployed in a short time \cite{mercer2021testing}. This results in a bottleneck in containing the spread of COVID-19, and has led to research on alternate testing methodologies \cite{kevadiya2021diagnostics}. Some of these include nanomaterial-based bio-sensing of the SARS-CoV-2 virus \cite{chacon2020optimized} and radiographic imaging based on computed tomography (CT) \cite{hosseiny2020radiology}, X-ray (CTX) \cite{ctx_study}, and ultrasound \cite{poggiali2020can} to categorize the health status of the lungs. The primary symptoms of COVID-19 are fever, sore throat, cough, chest pain, muscle pain, and dyspnoea. Further, the pathogenesis of COVID-19 indicates minor to acute infection in the respiratory system during the onset of the disease. This has garnered interest in the speech and audio research community, and several studies \cite{imran2020ai4covid, orlandic2020coughvid, brown2020exploring, laguarta} have gathered insights on the possibility of acoustics-based COVID-19 diagnosis. Such an approach can provide a point-of-care, rapid, easy-to-use, and cost-effective tool to help contain the spread of COVID-19. This paper discusses the second in the series of DiCOVA open challenges, an attempt to benchmark acoustics-based diagnosis research across various groups. \begin{figure*}[t] \centering \input{plot_metadata} \vspace{-0.25in} \caption{Illustration of metadata corresponding to the development dataset.
(a) Health status of Non-COVID subjects broken down into categories of healthy (no symptoms), pre-existing respiratory ailment (asthma, chronic lung disease, pneumonia), and symptoms (cold, cough, fever, loss of taste or smell); (b) COVID-19 status of the subjects; (c) Pooled subject gender and age distribution.} \label{fig:dev_set_metadata} \vspace{-0.25in} \end{figure*} The DiCOVA challenge series is designed with a curated development dataset taken from the crowd-sourced Coswara dataset \cite{sharma2020coswara}. The challenge dataset, with labels and a baseline system, is released, and researchers are invited to develop machine learning systems that perform well on a blind test set. A leader-board style ranking is created for the evaluation on the blind test set. The first DiCOVA Challenge\footnote{\url{http://dicova2021.github.io/}} was launched earlier~\cite{dicova} and garnered participation from $28$ teams from both academia and industry. Among these, $21$ teams surpassed the baseline system performance. A summary of the challenge is provided in \cite{sharma2021towards}. Inspired by the interest of the research community, and the increasing demand for fast, remote, and accurate COVID-19 testing methodologies, we launched the second DiCOVA challenge\footnote{\url{http://dicovachallenge.github.io/}} on 12-Aug-2021. The key considerations, relative to the previous challenge, that motivated us to conduct the second DiCOVA challenge are the following: \begin{itemize} \item Since the closing of the first DiCOVA Challenge, there was a spike in daily global COVID-19 cases (Apr-May, 2021), attributed to new strains of the virus. This enabled us to increase the dataset size for the Second DiCOVA Challenge. \item In addition to the cough sound category, the challenge brings into focus two additional sound categories, namely, breathing and speech.
A leaderboard-style evaluation on the blind test set is built for four tracks: three associated with the individual sound categories (breathing, cough, and speech) and a fourth fusion track allowing experimentation with combinations of the individual sound categories. \item Recently, multiple open datasets have been released to the public by different research groups. These include the COVID-19 Sounds dataset \cite{covid19sounddetector} by the University of Cambridge (UK), the Buenos Aires COVID-19 Cough dataset \cite{badata} by the Cabinet of Ministers (Argentina), the COUGHVID dataset \cite{orlandic2020coughvid} by EPFL (Switzerland), and the COVID-19 Open COUGH dataset \cite{virufy_set} by Virufy (US). The participants were encouraged to use these datasets for enhancing model training and analysis. \end{itemize} In this paper, we present an overview of the Second DiCOVA Challenge, which spanned $51$~days and concluded on 01-Oct-2021. \section{Literature Review} \label{sec:lit_review} In a study by Imran et al.~\cite{imran2020ai4covid}, a four-class classifier was designed to detect healthy, pertussis, bronchitis, and COVID-19 individuals. On a privately collected cough sound dataset from hospitals, they report a sensitivity of $\sim\!94\%$ (and $\sim\!91\%$ specificity) using a convolutional neural network (CNN) architecture with mel-spectrogram feature input. Other studies have focused on the binary task of COVID-19 detection only. Agbley et al. \cite{agbley2020wavelet} reported $81\%$ specificity (at $43\%$ sensitivity) on a subset of the COUGHVID dataset \cite{orlandic2020coughvid}. Laguarta et al. \cite{laguarta} used a privately collected dataset of COVID-19 infected individuals and report an area under the receiver operating characteristic curve (AUC-ROC) performance of $97.0\%$. Andreu-Perez et al. \cite{9361107} created a controlled dataset by collecting cough sound samples from patients visiting hospitals, and report $98.8\%$ AUC-ROC.
A few studies have explored using breathing and voice sounds as well. Brown et al.~\cite{brown2020exploring} created a dataset through crowd-sourcing and analyzed COVID-19 detection. The authors report a performance between $80$-$82\%$ AUC-ROC. Han et al.~\cite{han_voice_symptoms} proposed using voice samples and demonstrated $77\%$ AUC-ROC. Further, using symptom information along with voice provided a $2\%$ improvement in AUC-ROC. While these studies are encouraging, there are several limitations \cite{coppock2021_grains_covid}. Particularly, $(i)$ a different COVID-19 patient population (of varied size) is used in each study, $(ii)$ different performance evaluation methodologies are used across the studies, and $(iii)$ there is a lack of common data and reproducibility. The Second DiCOVA Challenge was aimed at encouraging research groups to design and evaluate their classification systems on a common dataset, using the same performance metrics. This addresses $(i)$ and $(ii)$, and helps us benchmark the designed systems against a baseline system. To address $(iii)$, we encouraged the participants to submit detailed system reports and open-source the developed systems. \section{Dataset} \label{sec:dataset} \noindent The challenge dataset is derived from the Coswara dataset \cite{sharma2020coswara}, a crowd-sourced dataset of sound recordings\footnote{\url{https://coswara.iisc.ac.in/}}. Volunteers from across the globe, spanning diverse age groups and health conditions, were encouraged to record their sound data in a quiet environment using a web-connected device (such as a smartphone or computer). Through the website, the subjects first provide demographic information such as age and gender. This is followed by a short questionnaire to record their health status, including symptoms, pre-existing respiratory ailments, and co-morbidity, if any.
Subsequently, their COVID-19 status is recorded by asking if they are currently COVID-19 positive, recovered, exposed to COVID-19 patients through primary contacts, or healthy. After collecting this information as metadata, the subjects record their acoustic data corresponding to $9$ audio categories, namely, $(a)$ shallow and deep breathing ($2$ types), $(b)$ shallow and heavy cough ($2$ types), $(c)$ sustained phonation of the vowels [\ae] (as in bat), [i] (as in beet), and [u] (as in boot) ($3$ types), and $(d)$ fast and normal pace number counting ($2$ types). The whole process takes $5$-$7$~minutes. The dataset collection protocol was approved by the Human Ethics Committee of the Indian Institute of Science, Bangalore (India). The Second DiCOVA Challenge used a subset of the Coswara dataset, sampled from the data collected between Apr-$2020$ and Jul-$2021$. The sampling included only the age group of $15$-$90$ years. Subjects with a health status of ``recovered'' (that is, already recovered from COVID-19 infection at the time of recording) or ``exposed'' (suspecting exposure to the virus through primary contacts) were not included in the dataset. Further, subjects with an audio recording duration of less than $500$~msec were discarded. Only three sound categories are considered in the challenge. These correspond to breathing-deep, cough-heavy, and counting-normal, and for brevity are referred to as breathing, cough, and speech, respectively. The resulting curated subject pool was divided into the following two groups. \begin{itemize} \item \textbf{Non-COVID}: Subjects self-declared as healthy or having COVID-19-like symptoms (such as cold, cough, fever, muscle pain or fatigue, loss of taste or smell) or pre-existing respiratory ailments (such as asthma, pneumonia, chronic lung disease) but who had not tested positive for COVID-19.
\item \textbf{COVID}: Subjects self-declared as COVID-19 positive (asymptomatic, or symptomatic with mild/moderate infection). \end{itemize} \subsection{Development data} The development dataset release is composed of audio recordings from $965$~($172$~COVID) subjects. This results in $965$~(subjects) $\times 3$ (sound categories) audio recordings. An illustration of the metadata of the subject pool is provided in Figure~\ref{fig:dev_set_metadata}. About $70\%$ of the subjects were male. The majority of the participants lie in the age group of $15$-$45$ years. In the non-COVID subject pool, close to $86\%$ are healthy, without respiratory ailments or COVID-19-like symptoms. In the COVID subject pool, close to $87\%$ are symptomatic. In the development set, $17.2\%$ of the subjects belong to the COVID class. This represents an imbalanced dataset, reflecting the typical real-world scenario in the design of point-of-care tests (POCTs) for COVID-19. The development dataset release also contains a five-fold ($80$-$20\%$) train-validation split of this dataset. This provides the participants an option to explore hyper-parameter tuning in their models. \subsection{Evaluation data} For evaluation, a blind test set consisting of $471$ audio files $\times 3$ sound categories was provided. It contained $71$ COVID-19 positive individuals, while the remaining subjects were Non-COVID. The age and gender distributions of the blind test set were matched with those of the development set. The participating teams were required to upload their probability scores for the blind test set audio files to an online evaluation portal\footnote{\url{https://competitions.codalab.org/competitions/34801}}. The portal was equipped to compute the performance using the evaluation metrics (defined in Section~\ref{sec:metrics}) and rank-order the teams' performance on a common leader-board in real time.
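As a concrete illustration, the portal-style scoring (a decision-threshold sweep with a step of $0.0001$, a trapezoidal AUC, and the sensitivity at $95\%$ specificity, as defined in Section~\ref{sec:metrics}) can be sketched in a few lines of numpy. This is an illustrative re-implementation, not the portal's actual code, and the function and variable names are our own:

```python
import numpy as np

def challenge_metrics(scores, labels, step=1e-4):
    """Sweep the decision threshold in steps of `step`, compute sensitivity
    and specificity at each threshold, and report the trapezoidal AUC and
    the sensitivity at 95% specificity (a sketch of the portal's scoring)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos, neg = labels == 1, labels == 0
    thresholds = np.arange(0.0, 1.0 + step, step)
    # sensitivity (TPR) and 1 - specificity (FPR) at each threshold
    tpr = np.array([(scores >= t)[pos].mean() for t in thresholds])
    fpr = np.array([(scores >= t)[neg].mean() for t in thresholds])
    # sort by FPR (ties broken by TPR) and integrate by the trapezoidal rule
    order = np.lexsort((tpr, fpr))
    tpr_s, fpr_s = tpr[order], fpr[order]
    auc = np.sum(np.diff(fpr_s) * (tpr_s[1:] + tpr_s[:-1]) / 2.0)
    # sensitivity at >= 95% specificity, i.e. FPR <= 0.05
    sens_at_95_spec = tpr[fpr <= 0.05].max() if (fpr <= 0.05).any() else 0.0
    return auc, sens_at_95_spec
```

For a perfectly separating set of scores this yields an AUC of $1.0$; for reversed scores it yields an AUC near $0$.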
Every team received a maximum of $15$ tickets per track for submitting scores to the leader-board. Following the evaluation, the teams were provided with the labels and the metadata corresponding to the blind test set. \subsection{Audio Specifications} \noindent All audio recordings were re-sampled to $44.1$~kHz and compressed in the FLAC (Free Lossless Audio Codec) format for ease of distribution. The total duration of audio corresponded to $4.62~$hrs for Track-1, $1.68~$hrs for Track-2, and $3.93~$hrs for Track-3. \section{Challenge Tasks} \label{sec:tasks} The challenge required designing a binary classifier to detect the COVID/Non-COVID health status of subjects using the sound categories corresponding to each track. Every registered team was provided with the development dataset to facilitate the training and design of classification models. For data augmentation purposes, the teams were free to use any dataset except the publicly available Coswara dataset\footnote{\url{https://github.com/iiscleap/Coswara-Data}}, from which the challenge data was drawn. Alongside the development dataset and blind test set, a baseline software implementation was also shared with the participants. This provided a data analysis code pipeline and a model training recipe on the development dataset with five-fold validation. Post challenge, that is, after $01$-Oct-$2021$, the teams submitted system reports describing their models to the challenge organizers. Every team signed a terms and conditions document stating the data licenses, the fair use of data, and the rules of the challenge. \begin{figure*}[!t] \centering \input{plot_leaderboard} \vspace{-0.25in} \caption{Illustration of the results of different teams, indicated by T-$n$, where $n$ denotes the index after sorting teams by their names. (a) Distribution of teams based on country of origin. (b-e) Best blind test AUC (under the ROC curve, in \%) posted by different teams on the leaderboard, in descending order.
(f) Scatter plot of blind test AUC versus sensitivity (at $95\%$ specificity) for all submissions made to the evaluation portal. Note: Team T-4 (hatched bar) stands for the baseline.} \label{fig:leaderboard} \vspace{-0.25in} \end{figure*} \subsection{Evaluation Metrics}\label{sec:metrics} \noindent As the dataset was imbalanced, we chose not to use accuracy as an evaluation metric. Each team submitted the COVID probability score, with a higher value indicating a higher likelihood of COVID infection, for the list of validation and test audio recordings. The web interface used the scores and the ground truth labels to compute the receiver operating characteristic (ROC) curve. The curve was obtained by varying the decision threshold with a step size of $0.0001$ and obtaining the specificity (true negative rate) and sensitivity (true positive rate) at every threshold value. The area under the resulting ROC curve, AUC-ROC, was used as the primary performance metric. The area was computed using the trapezoidal method. Further, the sensitivity at a specificity of $95\%$ was used as a secondary evaluation metric. For brevity, we will refer to AUC-ROC as AUC in the rest of the paper. \input{table_perfm_val} \section{Baseline System} \label{sec:baseline} The different processing stages in the baseline system are described below. The same baseline system setup was used for all tracks. \\ \textit{\bf{Pre-processing:}} The audio samples were normalized to lie between $\pm 1$. This was followed by discarding low-activity regions from the signal. Using a sample activity detection threshold of $0.01$ and a buffer of $50~$msec on either side of a sample, only audio regions with sample values greater than the threshold were retained. \\ \textit{\bf{Feature extraction:}} The log mel-spectrogram features were extracted using short-time windowed segments of size $1024$ samples ($23.2~$msec), a temporal hop of $441$ samples ($10~$msec), and $64$ mel filters.
This resulted in a $64\times N_k$ dimensional feature matrix for the $k^{th}$ sound file, where $N_k$ represents the number of short-time frames. The mel-spectrogram features were appended with their first and second order temporal derivatives. The resulting $192\times N_k$ dimensional features were file-level mean-variance normalized. \\ \textit{\bf{Classifier:}} Initial experimentation with traditional classifiers such as logistic regression and random forests gave a performance in the range of $60-70\%$ AUC on the validation folds for all the tracks. With the aim of providing a competent baseline, we opted for a deep learning framework with a cascade of two bi-directional long short-term memory (BiLSTM) layers and a fully connected layer. The initial BiLSTM layers with $128$ units were used to model the long-term dependencies in the audio signal. The output of the BiLSTM layers is of dimension $256\times T$. This is fed to a pooling layer which performs averaging along the time dimension to generate a sequence-level embedding of size $256\times1$. This output is fed to a fully connected feedforward layer of $64$ nodes with a $\tanh(\cdot)$ non-linearity. The final layer resembles a two-class logistic regression model with $64$ input nodes. \\ \noindent \textit{\bf{Training:}} For training the classifier, contiguous segments were extracted, with a $10$-frame stride, from the feature matrix to obtain $192\times T$ fixed-dimensional feature representations. We chose $T=51$ in the baseline system. The label of each chunk is the same as that of the audio file. Each mini-batch is composed of $1024$ feature matrices of size $192\times T$, randomly sampled from different audio files such that the proportion of COVID and Non-COVID labels is balanced. This oversampling of the minority class is done to overcome the limitation of class imbalance.
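The chunk extraction and class-balanced mini-batch sampling described above can be sketched as follows (a sketch under our reading of the recipe; the function names are ours):

```python
import numpy as np

def extract_chunks(features, T=51, stride=10):
    """Slice a (192, N_k) feature matrix into fixed-size (192, T) chunks
    taken with a 10-frame stride, as in the baseline training recipe."""
    n_frames = features.shape[1]
    return [features[:, s:s + T] for s in range(0, n_frames - T + 1, stride)]

def balanced_batch(chunks, labels, batch_size=1024, seed=0):
    """Randomly sample a mini-batch with equal proportions of COVID (1) and
    Non-COVID (0) chunks, oversampling the minority class with replacement."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    idx = np.concatenate([
        rng.choice(np.flatnonzero(labels == cls), size=batch_size // 2, replace=True)
        for cls in (0, 1)
    ])
    idx = rng.permutation(idx)
    return [chunks[i] for i in idx], labels[idx]
```

In the actual baseline, each chunk inherits the label of its source audio file, so `labels` here is the per-chunk copy of the file-level labels.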
The binary cross entropy (BCE) loss, the Adam optimizer with an initial learning rate of $0.0001$, and $\ell_2$ regularization set to $0.0001$ were used to train the classifier. The learning rate was reduced by a factor of $10$ with a patience of three epochs. A dropout factor of $0.1$ was applied to the outputs of the first BiLSTM layer and the feedforward layer. \\ \textit{\bf{Inference:}} Given an audio recording, $192\times T$ mel-spectrogram feature matrices (with a stride of $10$ frames) were extracted (similar to the training stage). These were input to the trained classifier and an output probability score was obtained for each chunk. The average of the probability scores over all segments was output as the COVID probability score of the audio file. On the blind test set, for each sound category, we drew inference by averaging the scores obtained using the models trained on each validation fold. \\ \textit{\bf{Fusion:}} Three classifiers are trained separately for the three track-wise sound categories. A final prediction at the subject level is obtained as the arithmetic mean of the COVID probability scores for the different audio sound categories. \section{Results} For the baseline system, the fold-wise AUCs for the sound categories are shown in Table~\ref{table:perfm_val}. The AUC performance is better than chance ($50\%$ AUC) for all the sound categories. An average AUC performance of $77.3\%,~75.2\%$, and $80.2\%$ was obtained for the breathing, cough, and speech categories, respectively. The best average AUC of $81.67\%$ was obtained with the fusion of the categories. Amongst the three sound categories, the best test set AUC of $84.5\%$ was obtained for the breathing sound category. This was followed by speech and cough. Sixteen teams actively competed on the challenge leader-board. These came from different countries (see Figure~\ref{fig:leaderboard}(a)).
An illustration of the leader-board rankings for each sound category is shown in Figure~\ref{fig:leaderboard}(b-e). A total of $10$ teams outperformed the baseline performance in Track-2 (Cough). This number was lower for the other tracks: $4$ in Track-1 (Breathing), $3$ in Track-3 (Speech), and $5$ in Track-4. A best AUC performance of $87.2\%$, $82.0\%$, $85.2\%$, and $88.4\%$ was reported in the breathing, cough, speech, and fusion tracks, respectively. Figure~\ref{fig:leaderboard}(f) depicts a scatter plot of AUC\% versus sensitivity (at $95\%$ specificity) obtained using all the $320$ submissions made to the evaluation portal. The team T-15, the top performer in Track-3 and Track-4, employed a BiLSTM architecture similar to the baseline system but with a novel initialization obtained by averaging model parameters across sound categories. They also used high-level features (that is, wav2vec2.0 \cite{baevski2020wav2vec}) alongside the MFCCs. The team T-14, the top performer in Track-1 and Track-2, employed classification models based on random forests and multi-layer perceptrons, and acoustic features such as RelAtive SpecTrAl-Perceptual Linear Prediction (RASTA-PLP) \cite{hermansky1994rasta}. The system reports of the teams that agreed to make them public are provided on the challenge website. Post challenge, we conducted an additional experiment. For each track, we averaged the normalized probability scores submitted by the top three teams for the blind test set, and considered this a hypothetical ``pooled system''. This system showed a slight improvement over all individual teams. The obtained AUCs were $88.8\%$, $84.5\%$, $86.7\%$, and $90.4\%$ for Track-1, 2, 3, and 4, respectively. \vspace{-0.15in} \section{Conclusion} \label{sec:conclusion} The Second DiCOVA Challenge introduced multiple categories of acoustic signals, namely cough, breathing, and speech, for developing a COVID-19 diagnosis approach.
A curated dataset, a baseline system, and a blind test set were provided to all participants. Out of a total of $63$ registered teams, $16$ competed actively on the final leader-board. Several teams surpassed the strong baseline performance. The results strengthen the hypothesis that an acoustic signature of COVID-19 is present in respiratory sound signals, and encourage the development of acoustics-based point-of-care testing tools. \vspace{-0.15in} \section{Acknowledgement} \noindent The authors would like to express gratitude to Ananya Muguli, Prashant Krishnan, and Rohit Kumar for help with the challenge logistics, and to Anand Mohan, Lance Pinto, Chandra Kiran, and Murali Alagesan for coordination in the audio data collection. \bibliographystyle{IEEEbib}
\section{Introduction} \vspace{-0.6em}\label{sec:intro} In recent years, deep learning-based methods for object detection have achieved great success. Although the performance of current object detectors under normal conditions can be impressive, recent work \cite{Exdark,REG} shows that machine perception under low-light conditions is a complex problem that requires special attention from researchers. Fundamentally, difficulties in perception in low light stem from imaging problems caused by a low photon count. Techniques in photography to address this are to: 1) gather more light by increasing the aperture size or extending the exposure time, and 2) increase the sensitivity by setting higher ISO values. However, these solutions lead to out-of-focus blur, motion blur, and amplification of noise, which prove difficult for machine perception. An appropriate approach for learning-based frameworks is to increase the amount of data presenting low-light conditions. Still, the issue of low light is far from being solved, given that: 1) high-quality large-scale datasets containing a high proportion of low-light data are sparse and difficult to label, and 2) \cite{Exdark} showed that even when trained on equal amounts of low-light and bright data, ConvNets do not learn to normalize deep features with respect to the lighting conditions, \textit{i.e.,} low-light and bright features form two separate clusters of data, and thus require separate modeling. We address these issues along three dimensions by: 1) releasing a novel dataset that can be used to study and develop models targeting images in low-light conditions, 2) analyzing the limitations of the baseline model on our dataset and gaining insight as to what exactly is difficult for machine cognition in low light, and 3) developing a learning-based image enhancement module and novel augmentation techniques targeting low-light conditions.
Our first contribution is the Night Object Detection (NOD) dataset\footnote{Our dataset is publicly available at \href{https://github.com/igor-morawski/NOD}{https://github.com/igor-morawski/NOD}.}: a high-quality large-scale dataset captured in the wild under challenging low-light conditions, annotated with instances of \textit{people}, \textit{bicycles} and \textit{cars}. For a subset of the dataset, we provide instance-level annotations of lighting conditions in novel terms of \textit{extreme} and \textit{non-extreme} low-light conditions, for meaningful evaluation and benchmarking of future object detection and image enhancement methods targeting low-light conditions. Our second contribution is an analysis of how low-light conditions impact the performance of the baseline, where we show that the problem cannot be entirely solved by training on low-light data. We further link the lighting conditions to perceptual difficulty, and identify \textit{non-extreme} low-light conditions that are moderately difficult for current object detectors, and \textit{extreme} low-light conditions that are very difficult for machine perception. Specifically, we define extreme low light as a condition where most object edges and keypoints are not visible due to low illumination only, \textit{e.g.,} not due to occlusion. As we show, training on an appropriate low-light dataset does not remove the performance gap between these two conditions (Fig. \ref{fig:two} (a)). Thus, new methods that particularly target low-light conditions are required. Finally, we propose a method targeting low-light object detection that consists of an image enhancement module for intermediate image enhancement (Fig. \ref{fig:two} (b)) and two data augmentation methods. Accordingly, we present experimental results that show a consistent improvement over the baseline, including improvement under \textit{extreme} low-light conditions.
\vspace{-1.2em}\section{Related Work} \vspace{-0.6em}\label{sec:related} \textbf{Low-light image enhancement}. Learning-based solutions have been applied to numerous low-level vision tasks such as denoising, super-resolution, and image enhancement, including low-light image enhancement. Many works are inspired by Retinex theory, which decomposes images into illumination and reflectance \cite{zhang2019kindling,zhang2021beyond,wang2019underexposed,wei2018deep}. However, the Image Signal Processing (ISP) pipeline, used to produce JPEG images from raw data, breaks down under extreme low-light conditions, and thus another line of work focuses on developing learning-based ISP pipelines \cite{SID,chen2019seeing,schwartz2018deepisp,ratnasingam2019deep}. Deep learning-based methods most often require paired training data, and because of that most datasets are limited to static scenes \cite{SID} or synthetic data \cite{LORE2017650,lim2015robust,wang2019underexposed}. In contrast with these methods, we focus not on improving perceptual quality, but on improving image representation for machine cognition in high-level tasks. \textbf{Low light in high-level vision tasks}. The closest to our work are \cite{Exdark,REG} and \cite{poor_visibility_benchmark}. Besides contributing a dataset of low-light images for image recognition and object detection, based on an extensive investigation, \cite{Exdark} concluded that: 1) increasing the amount of low-light data is necessary for improving low-light image cognition, and 2) learned features extracted from the same object under good and poor lighting conditions belong to different data clusters. In our work, we continue the investigation into machine cognition under low-light conditions, but we link the lighting conditions directly to perceptual difficulty rather than, \textit{e.g.,} the light source as in \cite{Exdark}.
In comparison with their dataset, our dataset contains, on average, more annotated instances per object category, and the resolution of the images is higher. Another low-light dataset, DarkFace, a large-scale dataset for face detection under poor lighting, was released by \cite{poor_visibility_benchmark}. In contrast with the DarkFace \cite{poor_visibility_benchmark} dataset, when an object was occluded in our dataset, we still annotated around the most probable boundary rather than around the visible part only. This is especially important in situations where a part of the object is not visible due to imaging difficulties. Finally, \cite{REG} proposed a detection-with-enhancement framework for low-light face detection based on the generation of multi-exposure images from a single image. Similarly, we propose to incorporate an image enhancement module. However, our image enhancement module produces a single-exposure enhanced image representation. \vspace{-0.6em}\section{NOD: Night Object Detection Dataset} \vspace{-0.6em}\label{sec:dataset} \begin{table}[!h] \vspace{-1.3em} \footnotesize \begin{center} \begin{tabular}{c c c c c c c} \toprule Dataset & Camera & \# classes & \makecell{\# annotated \\ images} & \# instances & \makecell{\# unannotated \\ images} & \makecell{High- \\ Res.}\\ \hline Sony & Sony RX100 VII & 3 & 3.2k & 18.7k & 0.9k & \checkmark \\ Nikon & Nikon D750 & 3 & 4.0k & 28.0k & 0 & \checkmark \\ \hline NOD (ours) & Sony \& Nikon & 3 & 7.2k & 46.7k & 0.9k & \checkmark \\ ExDark \cite{Exdark}& & 12 & 7.3k & 23.7k & 0 & \ding{55} \\ \bottomrule \end{tabular} \vspace{-.8em} \end{center} \caption{Basic statistics of the Night Object Detection (NOD) dataset. We provide high-quality bounding box annotations for \textit{people}, \textit{bicycles} and \textit{cars}.} \label{tab:stats} \vspace{-1.5em} \end{table} We present a high-quality large-scale dataset of outdoor images targeting low-light object detection.
The dataset contains more than 7K images and 46K annotated objects (with bounding boxes) that belong to the classes \textit{person}, \textit{bicycle}, and \textit{car}. The photos were taken on the streets in the evening hours, and thus all images present low-light conditions to a varying degree of severity. We used two DSLR cameras to capture the scenes: Sony RX100 VII and Nikon D750, and throughout the paper, we refer to the sets collected by these cameras as the Sony and Nikon (data)sets. We show the statistics of our dataset in Tab. \ref{tab:stats}. All photos were shot handheld, and most of them were shot in Full Auto mode. Some were shot in Shutter Priority mode, especially when there were fast-moving objects (\textit{e.g.} cars) involved. Thus, the images in our dataset show all common culprits of low-light photography: motion blur, out-of-focus blur, and severe noise. To ensure the high quality of annotation under challenging conditions, we outsourced data labeling to a company that annotated instances on images enhanced by MBLLEN \cite{lv2018mbllen} in their original resolution. \vspace{-0.2em} \begin{figure}[t] \begin{center} \begin{tabular}{ccccc} \includegraphics[height=2cm]{figures/Marks/D.jpg} & \includegraphics[height=2cm]{figures/Marks/s.jpg} & \includegraphics[height=2cm]{figures/Marks/X.jpg} & \includegraphics[height=2cm]{figures/Marks/v.jpg} & \includegraphics[height=2cm]{figures/Marks/t.jpg} \vspace{-1mm} \\ \ding{117} & \begin{footnotesize}\ding{110}\end{footnotesize}& \ding{54} & \ding{116} & \ding{115}\vspace{-0.5em} \end{tabular} \begin{tabular}{cc} \includegraphics[width=5.55cm]{figures/tsne/isextreme_p50_lr200.pdf} & \includegraphics[width=5.55cm]{figures/tsne/cats_p50_lr200.pdf} \vspace{-0.2em} \\ (a) & (b) \vspace{-0.7em} \end{tabular} \end{center} \caption{t-SNE embeddings of the features extracted by the baseline model pre-trained on the COCO \cite{COCO} dataset.
Rather than classifying by the lighting source, we directly link low-light conditions to perceptual difficulty. We define \textit{extreme} low-light conditions as conditions where most of the object edges are not visible due to poor illumination. To illustrate, \ding{117} and \begin{footnotesize}\ding{110}\end{footnotesize} belong to the same data cluster despite a large apparent difference, \textit{i.e.}, \ding{117} is well-illuminated and \begin{footnotesize}\ding{110}\end{footnotesize} is backlit. At the same time, \ding{116} belongs to the \textit{extreme} conditions cluster, even though a part of the object (legs) is relatively well-illuminated and clearly visible.} \label{fig:tsne} \vspace{-1.5em} \end{figure} \begin{figure} \begin{center} \begin{tabular}{cc} \includegraphics[width=5.55cm]{figures/COCO__R50/bbox-allclass-allarea.pdf} & \includegraphics[width=5.55cm]{figures/A_R50/bbox-allclass-allarea.pdf} \\ \vspace{-0.2em} (a) & (b) \vspace{-0.7em} \end{tabular} \end{center} \caption{(a) Precision-Recall curves of an off-the-shelf object detector on our dataset. Misdetections due to \textit{extreme} low-light conditions constitute a large part of the errors of the baseline model. (b) PR curves of an object detector fine-tuned on our dataset. Although the performance is significantly improved by training, errors due to \textit{extreme} low-light conditions still make up a substantial part of the errors. \textit{C50} and \textit{C75} are PR curves at $IoU=0.5$ and $0.75$, respectively. \textit{Extreme Low-Light} is the PR curve after eliminating all misdetections that can be attributed to extreme low-light conditions. \textit{Other} is the PR curve after eliminating all misdetections that cannot be attributed to extreme low-light conditions.} \label{fig:prs-anal} \vspace{-1.5em} \end{figure} We target applications that focus on machine perception and end-task performance rather than applications such as enhancing perceptual quality.
Therefore, in our dataset, we captured dynamic scenes in an uncontrolled environment that represent significant problems in photography under poor lighting. Overall, all images in our dataset present low-light conditions. However, the degree of severity of these conditions varies, and for more detailed evaluation, we provide instance-level annotation of lighting conditions for the test subset of the Sony set. We define extreme low light as a condition where most object edges and keypoints are not visible due to low illumination only, \textit{e.g.,} not due to occlusion. Out of 1827 instances in this set, we manually labeled 810 as presenting extreme low-light conditions. Moreover, for this set, we also indicate if the object is truncated or strongly occluded. More details about the dataset, setup and annotation procedure, as well as sample images and bounding box annotations, can be found in the supplementary material. \vspace{-1.2em}\section{Dataset and Baseline Analysis}\vspace{-0.6em}\label{sec:analysis} To investigate whether differentiating between the extreme and non-extreme low-light conditions in this way is meaningful, we visualized the t-SNE embedding of the features extracted by the backbone pre-trained on the COCO dataset \cite{COCO}, in which less than 0.23\% of images present low-light conditions \cite{Exdark}. We show the result colored using two mappings: by lighting conditions and by object class, in Fig. \ref{fig:tsne} (a) and (b), respectively. Indeed, the features extracted from regions presenting \textit{extreme} and \textit{non-extreme} low-light conditions belong to different data clusters. The observation that ConvNets do not normalize lighting conditions is in line with the observation made by \cite{Exdark} for image recognition networks. However, our classification of lighting types is by perceptual difficulty rather than by, \textit{e.g.}, the light source as in \cite{Exdark}.
Moreover, we observe that there is no sharp boundary between these conditions, which seems to follow the intuition that lighting conditions are a spectrum from bright (easy for perception) through low-light (moderately difficult for perception) to extreme low-light (difficult for perception). Similarly, we visualized t-SNE embeddings for all the models in our paper and found that these findings hold for all of them. Next, we analyzed the performance of an off-the-shelf detector on challenging low-light data. To this end, as for the t-SNE visualization, we used the baseline model trained on the COCO dataset \cite{COCO}. We evaluated the performance on the Sony test set, and used the lighting-condition annotations to investigate the impact of extreme low-light conditions on the detector. We show the results in Fig. \ref{fig:prs-anal} (a), where we observe that errors due to extreme low-light conditions constitute a large part of the errors of the off-the-shelf detector. Similarly, we analyzed the performance of the same model fine-tuned on our low-light dataset, shown in Fig. \ref{fig:prs-anal} (b). Although the detector trained on low-light data performs much better under low-light conditions, the proportion of errors due to extreme low light remains significant, and there is large room for improvement in this aspect. In order to verify that the performance gap between the pre-trained and fine-tuned baseline model is, indeed, due to the lack of low-light data rather than the distribution mismatch only, we compared the Precision-Recall curves separately under \textit{extreme} and \textit{non-extreme} low-light conditions. The results are shown in Fig. \ref{fig:two} (a). In comparison with the baseline fine-tuned on our dataset, the performance of the pre-trained baseline model under \textit{extreme} low-light conditions is disproportionately lower with respect to the performance under \textit{non-extreme} low-light conditions.
In other words, training on an appropriate low-light dataset helps to reduce the gap between the performance under extreme and non-extreme lighting conditions. However, despite the large amounts of extreme low-light data in training, the gap is not entirely removed, which suggests that special attention from researchers is required to solve the problem of low-light conditions in high-level tasks. \vspace{-0.6em}\section{Proposed Method} \vspace{-0.6em}\label{sec:method} Inspired by the observation that ISP pipelines are not designed to work under extreme low-light conditions, we introduce an image enhancement module that compensates for the errors of the ISP pipeline, as shown in Fig. \ref{fig:method}. The image enhancement module compensates for extreme low-light conditions and is trained jointly with the object detector to learn an image representation optimal for machine cognition rather than for the human visual system. In our exploratory study, we experimented with image-to-image fully-convolutional networks and image-to-parameter networks. In the end, we selected U-Net \cite{unet} as an effective architecture for this task. In contrast with image-to-parameter networks, such an architecture is capable of both intensity adjustment and denoising. We also found that training U-Net from scratch jointly with the object detector initialized from a pre-trained checkpoint leads to suboptimal results. Therefore, we propose a simple but effective pre-training procedure that takes advantage of the large amounts of bright images easily available for training. \begin{figure}[t] \begin{center} \includegraphics[width=10cm]{figures/PM.pdf} \vspace{-1.9em} \end{center} \caption{The proposed method consists of an image enhancement module trained under the guidance of the object detector that produces an enhanced intermediate representation optimal for machine cognition.
} \vspace{-1.5em} \label{fig:method} \end{figure} \vspace{-1.2em}\subsection{Pre-Training Image Enhancement Module}\vspace{-0.6em} We propose a pseudo-supervised pre-training procedure that leverages the abundant amount of data collected under normal conditions. Two observations inspired our approach: under extreme low-light conditions, 1) low SNR is one of the critical factors limiting image quality, and 2) because of the relatively low bit-depth of JPEG data, naively applying brightness and contrast adjustment leads to visual artifacts that are similar to the posterization effect. We formulate the pre-training as an image restoration task, with the original bright images as target images. We extract random patches from images and corrupt the well-lit data by first reducing the number of gray levels to $k$ (\textit{i.e.}, we use $k$ levels to represent 256 gray levels, using \textit{e.g.} uniform color quantization), and then adding shot noise on top of the posterized image patches. The number of gray levels and the noise parameters can be used to control the severity of image corruption. Examples of the corrupted images can be found in the supplementary material. We train the image enhancement module from scratch, and measure the distance between the original image $I$ and the reconstructed image $\hat{I}$ using pixel-wise MSE and the Structural Similarity (SSIM) index \cite{ssim}. Moreover, we use VGG loss \cite{vggloss} to encourage the network to focus on high-level image features rather than low-level statistics, which is crucial to the object detection task. The total loss is formulated as below:\vspace{-.7em} \begin{equation} \pazocal{L} = MSE(I,\hat{I}) + \lambda_1 SSIM(I,\hat{I}) + \lambda_2 VGG(I,\hat{I}) \vspace{-.9em} \end{equation} where $\lambda_1$, $\lambda_2$ are hyper-parameters.
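The corruption used for pseudo-supervised pre-training (posterization to $k$ gray levels followed by shot noise) can be sketched in numpy as below; the function name and the photon-count parameterization of the noise are our illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def corrupt(patch, k, photons=50, rng=None):
    """Corrupt a well-lit uint8 patch for pseudo-supervised pre-training:
    1) posterize: uniform quantization of 256 gray levels down to k levels,
    2) shot noise: Poisson noise whose strength is set by a photon count.
    Returns a float image in [0, 1]; the target is the uncorrupted patch.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = patch.astype(np.float32) / 255.0
    # Uniform quantization: k equal-sized segments per intensity axis.
    x = np.clip(np.floor(x * k) / (k - 1), 0.0, 1.0)
    # Shot noise: sample per-pixel photon counts, then renormalize.
    x = rng.poisson(x * photons).astype(np.float32) / photons
    return np.clip(x, 0.0, 1.0)
```

Smaller values of `k` and `photons` yield more severe corruption, mirroring how the number of gray levels and noise parameters control severity in the text.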
\begin{figure} \begin{center} \begin{tabular}{cc} \includegraphics[width=5.55cm]{figures/aug/pa.png} & \includegraphics[width=5.07cm]{figures/aug/sb1.png} \vspace{-0.2em} \\ (a) & (b) \vspace{-0.7em} \end{tabular} \end{center} \caption{Proposed augmentation methods: (a) patch-wise light augmentation to reduce the spatial redundancy, (b) block shuffle augmentation to encourage the detector to look at the object's context. Best viewed on display.} \vspace{-1.5em} \label{fig:aug} \end{figure} \vspace{-1.2em}\subsection{Patch-Wise Light Augmentation}\vspace{-0.6em} In night scenes, local illumination changes are common due to the presence of local light sources and shadows cast by objects. Under- and over-exposed regions are another example of local lighting variations common in low-light photography. Ideally, a good image enhancement module should learn to remove these variations. To facilitate learning to compensate for these local variations, we propose to remove the spatial redundancy in the images by randomly adjusting brightness and contrast in every patch of the input image. We adjust brightness and contrast using the formula: \vspace{-.7em} \begin{equation} I'(x,y)=\alpha(x,y)(I(x,y) + \delta(x,y) ), \vspace{-.9em} \end{equation} where $I(x,y)$ is the original intensity, $\alpha$ controls contrast adjustment, and $\delta$ controls brightness adjustment. For each patch, $\alpha$ and $\delta$ are sampled randomly from a range of possible values. An example of patch-wise light augmentation is shown in Fig. \ref{fig:aug} (a). \vspace{-1.2em}\subsection{Block Shuffle Augmentation}\vspace{-0.6em} Under extreme low-light conditions, object features are severely affected by lighting variations, \textit{e.g.,} edges or object keypoints are not visible. In such a case, humans may approach the problem by paying closer attention to the object's context rather than only the object itself.
In order to encourage the object detector to do the same, we propose to destroy the spatial correlation of the features by ``scrambling'' each object region. Each object region is selected for augmentation with probability $p$; if selected, we divide the region into blocks sized $B\times B$ and randomly permute them. An example of this augmentation is shown in Fig. \ref{fig:aug} (b). In this way, the detector is forced to look outside the scrambled region for more clues about the object. We hypothesize that this augmentation might also boost object detection performance by augmenting features outside their normal context. \vspace{-0.6em}\section{Experimental Results and Discussion} \textbf{Implementation Details}. We implement all models with the Open MMLab Detection Toolbox \cite{mmdetection} on 2 Tesla-V100 32GB GPUs with SyncBN. We use the SGD optimizer, apply a batch size of 8, and set the learning rate to $\expnumber{1}{-4}$. As for the U-Net \cite{unet}, we replace ReLU activations with Mish \cite{mish} and add Batch Norm layers before every activation layer. We extract random patches from the COCO dataset \cite{COCO} and corrupt them by applying posterization, reducing the number of gray levels from 256 to $k\in [2,8]$, and then adding shot noise. We reduced gray levels by uniform color quantization, \textit{i.e.}, we divided each axis of the color space into equal-sized segments. We use Adam \cite{adam}, apply a batch size of 64, set the learning rate to $\expnumber{1}{-4}$, and train using two Tesla K80 12GB GPUs. As for the patch-wise augmentation, we set the $\alpha$, $\delta$ limits to $[-0.3,0.3]$, and vary the patch size from 4\% to 20\% of the image size during training. As for the block shuffle augmentation, we set the block size to $16\times 16$. More details can be found in the supplementary material. \vspace{-1.2em}\subsection{Experiments \& Discussion}\vspace{-0.6em} We first present the ablation study to show the impact of the proposed improvements on the performance.
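Both augmentations can be sketched in a few lines of numpy; the function names are ours, images are assumed to be in $[0,1]$ with region sides divisible by $B$, and we read the $\alpha$ limit of $[-0.3,0.3]$ as a deviation from the identity value of $1$:

```python
import numpy as np

def patchwise_light(img, patch=32, limit=0.3, rng=None):
    """Patch-wise light augmentation: per-patch contrast (alpha) and
    brightness (delta) jitter, I' = alpha * (I + delta)."""
    rng = np.random.default_rng() if rng is None else rng
    out = img.copy()
    for y in range(0, img.shape[0], patch):
        for x in range(0, img.shape[1], patch):
            alpha = 1.0 + rng.uniform(-limit, limit)
            delta = rng.uniform(-limit, limit)
            out[y:y + patch, x:x + patch] = alpha * (out[y:y + patch, x:x + patch] + delta)
    return np.clip(out, 0.0, 1.0)

def block_shuffle(region, B=16, rng=None):
    """Block shuffle augmentation: split an object region into BxB blocks
    and randomly permute them."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = region.shape[:2]
    blocks = [region[y:y + B, x:x + B]
              for y in range(0, h, B) for x in range(0, w, B)]
    order = rng.permutation(len(blocks))
    out = np.empty_like(region)
    for i, b in enumerate(order):
        y, x = (i // (w // B)) * B, (i % (w // B)) * B
        out[y:y + B, x:x + B] = blocks[b]
    return out
```

In training, `patchwise_light` would be applied to the whole input image and `block_shuffle` to each annotated object region selected with probability $p$.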
The effectiveness of the proposed pre-training method is shown in Tab. \ref{tab:pretraining}, the ablation study of the enhancement module is shown in Tab. \ref{tab:rebuttal-camera-abl}, and the effectiveness of the proposed image enhancement module and augmentation methods is shown in Tab. \ref{tab:ablation}. We also present images enhanced by the proposed image enhancement module in Fig. \ref{fig:two} (b) and in the supplementary materials. \begin{table}[ht]\vspace{-0.7em} \footnotesize \begin{minipage}[b]{0.45\linewidth} \centering \begin{tabular}{c c c c} \toprule Pre-training & $AP_{50}$ & $AP_{75}$ & $AP$ \\ \hline \ding{55} & 64.9\% & 41.2\% & 40.3\% \\ \checkmark & \textbf{74.0\%} & \textbf{48.0\%} & \textbf{47.1\%} \\ \toprule \end{tabular} \caption{Effectiveness of our pre-training method.} \label{tab:pretraining} \end{minipage} \hspace{0.5cm} \begin{minipage}[b]{0.45\linewidth} \centering \begin{tabular}{c c c c c} \toprule Test & Enh. & & & \\ dataset & module & $AP_{50}$ & $AP_{75}$ & $AP$ \\ \hline \multirow{2}{*}{Sony} & \ding{55} & 72.8\% & 47.6\% & 45.9\% \\ & Nikon & \textbf{73.3\%} & \textbf{47.8\%} & \textbf{46.3\%} \\ \toprule \end{tabular} \caption{Ablation study.
We train the enhancement module on the Nikon subset, freeze the weights, and append it to the detector trained on the Sony subset.} \label{tab:rebuttal-camera-abl} \end{minipage} \vspace{-1.4em} \end{table} \begin{table}[h] \footnotesize \begin{center} \begin{tabular}{c c c c c c c c c} \toprule & \multicolumn{2}{c}{Augmentation} & \multicolumn{3}{c}{Sony} & \multicolumn{3}{c}{Nikon} \\ Backbone & Light & \makecell{Shuffle} & $AP_{50}$ & $AP_{75}$ & $AP$ & $AP_{50}$ & $AP_{75}$ & $AP$ \\ \hline R(esNet)50 & & & 72.8\% & 47.6\% & 45.9\% & 69.7\% & 43.7\% & 44.0\% \\ \makecell{U-Net + R50} & & & 74.0\% & 48.0\% & 47.1\% & 70.8\% & 43.3\% & 44.4\% \\ \makecell{U-Net + R50} & \checkmark & & 73.9\% & 48.8\% & 46.9\% & 70.9\% & \textbf{45.0\%} & 44.9\% \\ \makecell{U-Net + R50} & & \checkmark & \textbf{74.3\%} & 49.0\% & \textbf{47.5\%} & 71.2\% & 44.7\% & 44.8\% \\ \makecell{U-Net + R50} & \checkmark & \checkmark & 74.1\% & \textbf{49.1\%} & \textbf{47.5\%} & \textbf{71.6\%} & 44.9\% & \textbf{45.4\%} \\ \toprule \end{tabular} \vspace{-1.em} \end{center} \caption{Ablation study of the proposed method.
We use RetinaNet \cite{lin2017focal} as the object detection framework and extend it with an image enhancement module where indicated by U-Net.} \label{tab:ablation} \vspace{-1.5em} \end{table} \begin{table}[t] \footnotesize \begin{center} \begin{tabular}{c c c c c} \toprule Dataset & Proposed Method & $AP_{50}$ & $AP_{75}$ & $AP$ \\ \hline \multirow{2}{*}{Sony} & \ding{55} & 72.8\% & 47.6\% & 45.9\% \\ & \checkmark & \textbf{74.1\%} & \textbf{49.1\%} & \textbf{47.5\%} \\ \hline \multirow{2}{*}{Nikon} & \ding{55} & 69.7\% & 43.7\% & 44.0\% \\ & \checkmark & \textbf{71.6\%} & \textbf{44.9\%} & \textbf{45.4\%} \\ \hline \multirow{2}{*}{NOD (Sony+Nikon)} & \ding{55} & 73.1\% & 47.0\% & 46.3\% \\ & \checkmark & \textbf{74.4\%} & \textbf{48.3\%} & \textbf{47.6\%} \\ \hline \multirow{2}{*}{ExDark \cite{Exdark}} & \ding{55} & 78.3\% & 52.9\% & 48.7\% \\ & \checkmark & \textbf{79.1\%} & \textbf{53.6\%} & \textbf{49.4\%} \\ \hline \toprule \end{tabular} \vspace{-1.em} \end{center} \caption{Results of the proposed method, including proposed augmentation methods, on subsets of our dataset (Sony, Nikon), our dataset (Sony+Nikon) and ExDark \cite{Exdark}. In all experiments, we use RetinaNet \cite{lin2017focal} as the object detection framework.} \label{tab:overall} \vspace{-1.5em} \end{table} We observe that the proposed image enhancement module improves the overall average precision, although it is more effective for the Sony dataset than for Nikon. We also observe that the augmentation methods have a different impact on the performance depending on the dataset. For the Sony dataset, block shuffle augmentation is more effective, and for the Nikon dataset patch-wise light augmentation is more effective. Overall, the proposed method is effective in improving performance under low-light conditions. Next, we collect the overall results of the proposed method in Tab. \ref{tab:overall}.
The proposed method shows a consistent improvement over the baseline model on our dataset as well as the ExDark dataset \cite{Exdark}. However, as we showed in Subsection \ref{sec:analysis}, low light is a spectrum from \textit{non-extreme} conditions that are relatively easy for human and machine perception to \textit{extreme} conditions that are difficult for perception. To validate that the proposed method eliminates errors due to extreme low-light conditions rather than eliminating some other unrelated error, we evaluated the model on the instances in the Sony dataset presenting extreme low-light conditions only. The resulting Precision-Recall curves are shown in Fig. \ref{fig:three}. Our method has higher APs at $IoU=0.5$ and $0.75$, and generally shows a higher precision at the same recall level. \vspace{-1.2em} \begin{figure*}[!h] \begin{center} \begin{tabular}{cc} \includegraphics[width=5.55cm]{figures/three/PT_Baseline-Baseline-Our_Method-high_low-bbox-all-allarea-C50.pdf} & \includegraphics[width=5.55cm]{figures/three/PT_Baseline-Baseline-Our_Method-high_low-bbox-all-allarea-C75.pdf} \vspace{-0.2em} \\ (a) & (b) \vspace{-0.7em} \end{tabular} \end{center} \caption{\label{fig:three} Precision-Recall curves under extreme low-light conditions, evaluated at (a) $IoU=0.5$ and (b) $IoU=0.75$. Under extreme low-light conditions, our method leads to more precise detection at the same recall level. Strongly occluded and truncated annotations were excluded from this evaluation. } \vspace{-1.em} \end{figure*} \begin{table}[!h] \footnotesize \begin{center} \begin{tabular}{c c c c c c c c c} \toprule Enhancement & Requires & Learning- & Model & Model & \multicolumn{2}{c}{Extreme} & \multicolumn{2}{c}{Non-Extreme} \\ method & bright gt.
& based & GFLOPS & parameters & $AP_{50}$ & $AP_{75}$ & $AP_{50}$ & $AP_{75}$\\ \hline Ours & \ding{55} & \checkmark & 12.1 & 8.0M & \textbf{63.7\%} & 35.2\% & 87.7\% & \textbf{71.2\%} \\ KinD++ \cite{zhang2021beyond} & \checkmark & \checkmark & 7.9 & 8.3M & 63.5\% & \textbf{35.3\%} & \textbf{88.5\%} & 70.6\% \\ Zero-DCE \cite{Zero-DCE} & \ding{55} & \checkmark & 5.2 & 79K & 61.3\% & 31.1\% & 87.3\% & 69.3\% \\ LIME \cite{LIME} & \ding{55} & \ding{55} & - & - & 60.8\% & 32.7\% & 87.4\% & 69.3\% \\ Hist. equal. & \ding{55} & \ding{55} & - & - & 60.1\% & 30.2\% & 86.2\% & 67.5\% \\ \toprule \end{tabular} \vspace{-1.5em} \end{center} \caption{Comparison to the related work in low-light image enhancement (training and testing on the Sony subset). The computational complexities are given for an input of size $256\times 256 \times 3$. The computational complexity of the baseline model (RetinaNet) is 13.1 GFLOPS. Strongly occluded and truncated annotations were excluded from this evaluation. } \label{tab:rebuttal-SOTA} \vspace{-1.5em} \end{table} We also show a comparison of our proposed enhancement module with other low-light enhancement modules after fine-tuning the baseline model in Tab. \ref{tab:rebuttal-SOTA}. Although the performance using KinD++ \cite{zhang2021beyond} and our method is comparable, our enhancement model is learned jointly with the object detector using bounding box annotations only, without low-/normal-light image pairs, which is potentially a significant advantage, \textit{e.g.,} in dynamic scenes. Finally, our proposed method can be used with different detectors. \begin{table}[!h] \footnotesize \begin{center} \begin{tabular}{c c c c c c} \toprule & & Proposed Enh. & & & \\ Detector & init.
& Module & $AP_{50}$ & $AP_{75}$ & $AP$ \\ \hline \multirow{2}{*}{RetinaNet \cite{lin2017focal}} & \multirow{2}{*}{COCO \cite{COCO}} & \ding{55} & 72.8\% & 47.6\% & 45.9\% \\ & & \checkmark & \textbf{74.1\%} & \textbf{49.1\%} & \textbf{47.5\%} \\ \hline \multirow{2}{*}{PAA \cite{paa-eccv2020}} & \multirow{2}{*}{COCO \cite{COCO}} & \ding{55} & 71.6\% & 47.5\% & 45.4\% \\ & & \checkmark & \textbf{73.0\%} & \textbf{48.5\%} & \textbf{46.7\%} \\ \hline \multirow{2}{*}{Faster R-CNN \cite{Ren_2017}} & \multirow{2}{*}{ImageNet \cite{deng2009imagenet}} & \ding{55} & 61.6\% & 34.5\% & 33.3\% \\ & & \checkmark & \textbf{64.2\%} & \textbf{35.8\%} & \textbf{35.5\%} \\ \hline \multirow{2}{*}{FCOS \cite{tian2019fcos}} & \multirow{2}{*}{random} & \ding{55} & 56.2\% & \textbf{22.1\%} & 26.4\% \\ & & \checkmark & \textbf{58.1\%} & {21.0\%} & \textbf{26.7\%} \\ \hline \toprule \end{tabular} \vspace{-1.em} \end{center} \caption{Results of the proposed method, including the proposed augmentation methods, with different object detection frameworks and backbone initializations. All models were trained and tested on the Sony subset.} \label{tab:rebuttal-detectors} \vspace{-1.1em} \end{table} In Tab. \ref{tab:rebuttal-detectors}, we show that our proposed method, in addition to a single-stage anchor-based detector (RetinaNet \cite{lin2017focal}), can work for other detection models as well: the two-stage Faster R-CNN \cite{Ren_2017}, the anchor-free FCOS \cite{tian2019fcos}, and a single-stage detector with an alternative anchor assignment scheme, PAA \cite{paa-eccv2020}. \vspace{-1.2em}\section{Conclusion} \vspace{-0.6em}\label{sec:conclusion} In this paper, we presented a high-quality large-scale dataset for object detection under low-light conditions showing outdoor scenes with all common challenges of low-light photography: motion blur, out-of-focus blur, and noise.
Further, we linked perceptual difficulty to low-light conditions and annotated instances in the Sony test set as \textit{extreme} and \textit{non-extreme}, allowing for a more in-depth evaluation of methods targeting low-light conditions in the future. We expect that this dataset will be a valuable resource for researchers in the domains of object detection, low-light image enhancement, and domain adaptation, to name a few. Moreover, we proposed to incorporate an image enhancement module into the object detection framework that, paired with the proposed block shuffle and patch-wise light augmentation, led to improvements over the baseline model on low-light datasets. Performance gains introduced by our method were slight but consistent -- in this paper we show that perception under extreme low-light conditions is a difficult problem that should be addressed on its own, rather than merely as a subtask of object detection. All in all, in our paper, we highlighted that there exists a significant difficulty for object detectors under low-light conditions. In particular, we showed that there is a large performance gap between extreme and non-extreme low-light conditions that cannot be eliminated by including large amounts of extreme examples in the training set or by our proposed enhancement module. Paired with the observation that ConvNets do not learn to normalize features with respect to the lighting type, this suggests that machine cognition under low-light conditions is a non-trivial problem that requires special attention from researchers. \vspace{-1.2em} \section*{Acknowledgement} This work was supported in part by the Ministry of Science and Technology, Taiwan, under Grant MOST 110-2634-F-002-026 and Qualcomm Technologies, Inc. We are grateful to the National Center for High-performance Computing. \clearpage
\section*{CRediT Author Statement} \begin{itemize} \item \textbf{Cezar Sas}: Conceptualization, Methodology, Software, Validation, Investigation, Data Curation, Writing - Original Draft, Visualisation \item \textbf{Andrea Capiluppi}: Conceptualization, Methodology, Validation, Investigation, Writing - Review \& Editing, Supervision \end{itemize} \section*{Biography} \begin{itemize} \item \textbf{Cezar Sas}, M.Sc., is a Ph.D. student at the University of Groningen under the supervision of Prof dr Andrea Capiluppi. His research interests include Natural Language Processing (NLP) and Machine Learning (ML). Currently, his work focuses on applying NLP and ML to Software Engineering (SE) problems, including Software Classification and Domain-Specific Taxonomy Induction. He is part of the SEARCH research group, and the NLPforSE subgroup. \item Prof dr \textbf{Andrea Capiluppi} has worked as an Associate Professor in Software Engineering at the University of Groningen since 2020. He specialises in Open Source software, effort estimation and software maintenance. He is part of the SEARCH research group, and he is the director of the NLPforSE subgroup within SEARCH. Prof Capiluppi has a strong connection to industry and a sustained collaboration with industrial partners, who act as the beneficiaries of software development for Regional and National impact. A list of the industry collaborators is available at \href{http://www.cs.rug.nl/search/Collaborations/Collaborations}{http://www.cs.rug.nl/search/Collaborations/Collaborations}. NLPforSE (Natural Language Processing for Software Engineering) is a newly formed research sub-group within SEARCH, focusing on the application of natural language processing, machine learning and artificial intelligence techniques to the maintenance and evolutionary aspects of software engineering.
\end{itemize} \section{Case Study} \label{sec:dataset} In this section we describe our attempt at creating an \revision{example} classification, \revision{with real-world usages}, that minimises the general issues noted above. This section describes the original source of the classification (\ref{subsec:class_source}) and the manual process that was used to reduce the categories in order to balance the number of examples in each (\ref{subsec:label_mapping}). Finally, in order to evaluate how distinct the categories are from each other, we evaluated the lexical similarity between categories, as described by their projects' content (\ref{subsec:lexsimil}). \subsection{Classification Source} \label{subsec:class_source} As a starting point (i.e., the `seed') for the creation of our \revision{case study} dataset, we picked a pre-existing classification, from a \textit{Java Awesome List} hosted on GitHub. Awesome Lists are curated repositories containing resources that are useful for a particular domain: in our case we use \textit{Awesome-Java}\footnote{\url{https://github.com/akullpp/awesome-java}}, a GitHub project that aggregates over 700 curated Java frameworks, libraries and software projects organised into 69 categories. In an initial phase of cleaning, we removed tutorials and URLs to websites, obtaining 530 examples; we also removed the projects that could not be analyzed \revision{(e.g., those raising errors in the pipeline: encoding issues, no keywords left to create a representation, etc.)}. The total number of projects finally considered for our task was 495. Using GitHub Topics\footnote{\url{https://github.com/topics}} could be an alternative to the selected Java Awesome List: however, the set of categories for the same list of projects is larger in the former (around 1,000 labels) than in the latter (69 labels). Also, we decided to use the Awesome-Java list in order to avoid relying on pre-existing classifications or taxonomies.
Besides the previously mentioned issues, others have sporadically emerged in the past (e.g., in \cite{leclair2018neural}, where many examples of the dataset come from secondary code that is not relevant to the main projects). Moreover, \textit{Awesome-Java} is an annotation of a closed ecosystem (Java development), making it the seed of a small, but realistic, classification. In fact, this process, when improved and automated, could be applied to GitHub's \textit{Topics} annotations to obtain an unlimited source of distantly annotated examples. \revision{Lastly, the \textit{Awesome-Java} repository is a collective effort of more than 300 contributors, with continuous updates to the list, making it the go-to source for more than 31K developers (as measured by stars) when looking for a library.} \revision{GitHub Topics, however, would be a better source for a more general list of categories and a larger-scale source of projects; yet this brings larger challenges, as there are more than 100K GitHub Topics, and tackling them would shift the focus of our study to unrelated issues.} \subsection{Label Mapping} \label{subsec:label_mapping} The \textit{Awesome-Java} classification contains 69 categories: on average, each category contains 8 projects. Also, some of the categories represent either general concepts (`Science') or detailed keywords (e.g., `Bean Mapping'). As a result, the Awesome-Java categories make classification tasks quite challenging: therefore, we decided to manually reduce the original categories, in order to reduce the complexity and avoid duplicates or synonyms. This mapping was performed manually, in a hierarchical fashion, by one of the authors, and resulted in a smaller set of 13 categories (\textit{Reduced AJ}): the \textit{Label} column of Table \ref{tab:dataset} lists the reduced categories that were obtained from the original 69.
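For illustration, such a hierarchical mapping can be expressed as a simple lookup table. The entries below are a hypothetical fragment (the exact original label strings in \textit{Awesome-Java} may differ); the full 69-to-13 mapping is part of our replication package:

```python
# Hypothetical fragment of the manual Awesome-Java -> Reduced AJ mapping.
LABEL_MAPPING = {
    "Natural Language Processing": "STEM",
    "Computer Vision": "STEM",
    "Machine Learning": "STEM",
    "Date/Time": "STEM",
    "Geospatial": "STEM",
    "Command Line Interface": "CLI",
}

def reduce_labels(annotations, mapping=LABEL_MAPPING):
    """Map each original category to its reduced label; unmapped ones are kept."""
    return [mapping.get(category, category) for category in annotations]
```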
\revision{The reductions were evaluated by the second author, and disagreements on terms were resolved by discussion.} \begin{table}[htb!] \small \begin{center} \caption{Distribution of the number of examples for each category in the \textit{Reduced AJ}.} \begin{tabular}{lcc} \toprule \textbf{Label} & \textbf{Projects} & \\ %& \textbf{Samples} & \textbf{Increase (x)} \\
\midrule
Introspection & 32 & \\ %& 744 & 23 \\
CLI & 8 & \\ %& 142 & 17 \\
Data & 49 & \\ %& 1,088 & 22 \\
Development & 100 & \\ %& 2,306 & 23 \\
Graphical & 11 & \\ %& 226 & 22 \\
Miscellaneous & 59 & \\ %& 1,729 & 20 \\
Networking & 25 & \\ %& 503 & 20 \\
Parser & 41 & \\ %& 935 & 22 \\
STEM & 39 & \\ %& 915 & 23 \\
Security & 14 & \\ %& 249 & 18 \\
Server & 37 & \\ %& 727 & 20 \\
Testing & 42 & \\ %& 974 & 23 \\
Web & 38 & \\ %& 964 & 25 \\
\midrule
\textbf{Total} & \textbf{495} \\ % & \textbf{11,502} & \textbf{23} \\
\bottomrule \end{tabular} \label{tab:dataset} \end{center} \end{table} This reduction, in addition to increasing the number of examples per class, also helps with one of the issues that the original \textit{Awesome-Java} presented, that is the lack of a hierarchical relation between categories. Figure~\ref{fig:reduction} shows a visual representation of the reduction, and how this helps the establishment of hierarchical links. Given three labels in the \textit{Awesome-Java} taxonomy: Natural Language Processing (`NLP'), Computer Vision (`CV'), and Machine Learning (`ML'); we reduce those three to an intermediate conceptual label `AI'. Together with two other categories, `Date/Time' and `Geospatial', we assign all these to the `STEM' category. \begin{figure}[htb!] \centering \includegraphics[width=0.7\columnwidth]{Reduction.pdf} \caption{Example of the reduction process. The blue rectangles are actual classes in the initial or final dataset, the grey ones are intermediate logical classes used to aggregate labels.
The dots represent intermediate classes that are not well defined.} \label{fig:reduction} \end{figure} The initial and final annotated labels are stored as a CSV file, which is available in our replication package. The file has the following schema: \begin{itemize} \item \textbf{project.name}: name of the project; \item \textbf{project.desc}: short description of the project from \textit{Awesome-Java}; \item \textbf{project.link}: URL to the GitHub repository; \item \textbf{category}: \textit{Awesome-Java} annotation; \item \textbf{category.desc}: short description of the category from \textit{Awesome-Java}; \item \textbf{label}: mapping of the original category into one of the reduced set. \end{itemize} \subsection{Evaluation} \label{sec:evaluation} We evaluate the quality of our reduction approach on the \textit{Awesome-Java} taxonomy using both qualitative and quantitative measures. We first compare the original and the reduced taxonomies using the introduced perils and pitfalls. Furthermore, we measure the lexical similarity of the classes. \subsubsection{Antipatterns} A summary of the antipatterns found in the original \textit{Awesome-Java} and the \textit{Reduced AJ} is presented in Table~\ref{tab:pathologies_our}. The original \textit{Awesome-Java} presents several of the antipatterns identified in the taxonomies in previous works, from a non-exhaustive label set (NE) to mixed granularity (MG) and mixed taxonomies (MT). Therefore, we can be more confident that the processes we used to reduce these issues can be deployed to different taxonomies as well. Examples of the antipatterns found in \textit{Awesome-Java} include: \begin{itemize}[noitemsep] \item Mixed Taxonomy: an example of this antipattern is the presence of technologies like \dscat{Apache Commons} in the list; \item Mixed Granularity: for example, we find the label \dscat{Science} alongside the label \dscat{Configuration}, or \dscat{Development} alongside \dscat{Compiler Compiler}.
Moreover, there are labels that are in an `\texttt{IS-A}' relationship, like \dscat{Mobile development} and \dscat{Development}. \item Non Exhaustive Categories: one example is the lack of an \dscat{Audio Processing} category, while there are categories for \dscat{Computer Vision} and \dscat{Natural Language Processing}. \item Sink category: the \dscat{Apache Commons} label contains many projects that can be annotated with another label in the set; for example, \dscat{Commons CLI\footnote{\href{https://commons.apache.org/proper/commons-cli/}{https://commons.apache.org/proper/commons-cli/}}} can be annotated with the \dscat{Command Line Interface} label. \end{itemize} The two main benefits of the reduction process are the removal of the Non Exhaustive (NE) label set issue and the removal of the Mixed Taxonomy (MT) issue. Another benefit is a marked decrease in the severity of the Mixed Granularity (MG) issue, although, as seen in Table~\ref{tab:pathologies_our}, it is not completely removed. In our case study, most of these antipatterns have been resolved using the label reduction process. However, better results require tackling the Mixed Taxonomy (MT) issue: its resolution requires manual annotation of the examples belonging to the problematic category (e.g., `Apache Commons'), as the mere reduction would just map everything to a Sink Category. \begin{table}[htb!] \centering \caption{Summary of the antipatterns in the original \textit{Awesome-Java} and our Reduced AJ.} \begin{tabular}{lccccccc} & \rot{MT} & \rot{MG} & \rot{SC} & \rot{NE} & \rot{NRC} & \rot{UJC} & \rot{SKC} \\ \midrule \textbf{Awesome-Java} & \OK & \OK & \OK & \OK & & & \OK \\ \textbf{Reduced AJ} & & \OK & \OK & & & & \OK \\ \bottomrule \end{tabular} \label{tab:pathologies_our} \end{table} Similarly to the other datasets, we also computed the similarity of the labels using fastText; Figure~\ref{fig:label_similarity_our} shows the similarity before and after the reduction.
The lower average similarity is caused by a reduction in the number of terms in a hierarchical relationship, and also by a lower number of terms sharing a common subword. \begin{figure}[htb!] \centering \includegraphics[width=\columnwidth]{raw_label_similarities_fasttext_box_ours.pdf} \caption{Cosine Similarity between labels using fastText embeddings.} \label{fig:label_similarity_our} \end{figure} \color{black} \subsubsection{Lexical Similarity between Categories} \label{subsec:lexsimil} To evaluate the quality of our process, we evaluated the lexical similarity between categories using the content of all projects belonging to each category. This step is of fundamental importance, since it helps to evaluate the quality of the mapping process into categories, and to give an empirical evaluation of how similar categories are, before and after the reduction. In order to lexically represent the categories we used the TFIDF approach; in order to measure the similarity of two categories, we used the cosine similarity. \revision{We did not opt for embedding solutions like fastText or BERT as they are not suited for our task. For example, fastText is not designed for long-document embeddings, as it averages the word embeddings to create the final representation, meaning that all the documents would converge to very similar embeddings, resulting in very high similarities between all documents. With BERT, given the small number of tokens it accepts as input (512), we would have a similar issue, as we would need to combine the embeddings of subsets of the document. Other, more code-oriented solutions, like code2vec or CodeBERT, have issues as well. Code2vec is trained with the objective of encoding the structure and semantics of the code, not the semantics of the words.
CodeBERT suffers from the same issues as BERT.} \paragraph{Extraction of the category documents} For each category, we created the category document using all the \textit{identifiers} contained in the source code files of the projects belonging to that category. For the extraction of the identifiers, we used the \textit{tree-sitter}\footnote{\href{https://github.com/tree-sitter/tree-sitter}{https://github.com/tree-sitter/tree-sitter}} parser generator tool. The identifiers, without keywords, are extracted from the annotated concrete syntax tree created using a grammar for Java code. The identifiers were further processed by (1) separating the camel case strings into words, (2) lower casing every word, and (3) removing common Java terms that do not add much semantically (e.g., `\textit{main}', `\textit{println}', etc.). Lastly, we perform (4) lemmatization, which reduces the number of distinct terms in the vocabulary by removing the morphological differences in words with the same root (e.g., `\textit{networking}' becomes `\textit{network}'). \paragraph{Evaluation of the similarity between categories} These category documents were used as input to TFIDF, a statistical vectorization of words based on the Bag of Words (BoW) model. Documents are considered as a collection of words/terms, and converted to a vector by counting the occurrences of every term. Differently from BoW, in TFIDF the words are weighted by their inverse document frequency in the collection of documents. This results in a list of vectors representing the lexical content of each category. We limit the vocabulary to the top 1,000 terms with a maximum document frequency lower than 0.8, i.e., words that are present in less than 80\% of the category documents, therefore ignoring common words. We adopted the cosine similarity, a measure of similarity between two vectors, in order to measure the similarity between all categories, and to evaluate possible overlaps or large differences between them.
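A minimal, self-contained sketch of this pipeline is given below: camel-case splitting of identifiers, a hand-rolled TFIDF with the common-word filter, and the cosine similarity. In the paper the extraction relies on tree-sitter and a standard TFIDF implementation, so the code below is only illustrative (we also omit the top-1,000-terms cap for brevity):

```python
import math
import re
from collections import Counter

def split_identifier(identifier):
    """Split a camelCase identifier into lower-cased words."""
    parts = re.findall(r"[A-Z]+(?=[A-Z][a-z])|[A-Z]?[a-z]+|[A-Z]+|\d+", identifier)
    return [p.lower() for p in parts]

def tfidf_vectors(category_docs, max_df=0.8):
    """TFIDF vectors over tokenized category documents; terms whose document
    frequency exceeds max_df (very common words) are dropped."""
    n = len(category_docs)
    df = Counter()
    for doc in category_docs:
        df.update(set(doc))
    vocab = sorted(t for t in df if df[t] / n <= max_df)
    idf = {t: math.log(n / df[t]) for t in vocab}
    vectors = []
    for doc in category_docs:
        tf = Counter(doc)
        vectors.append([tf[t] * idf[t] for t in vocab])
    return vectors

def cosine(u, v):
    """Cosine similarity between two vectors (0 for a zero vector)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0
```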
The cosine similarity, when using TFIDF, ranges from 0 (completely different content) to 1 (identical content). We compute the similarities between the categories of the original, finer-grained \textit{Awesome-Java} classification, and of the \textit{Reduced AJ} as well. \paragraph{Results} The results in Figure~\ref{fig:class_sim} show the final similarities for the reduced classification, while Figure~\ref{fig:class_sim3} shows the similarities between the categories of the original \textit{Awesome-Java}. The initial impression is that the overall similarity between categories is very low for both classifications: this is a clear effect of the pre-filtering of the terms that are very frequent in all documents. The second observation is that the mean similarity of the classification with 69 labels is higher and has more variance: in particular, there is an average similarity of $0.0520\pm0.0677$, as compared to $0.0210\pm0.0290$ for the reduced one. This is also visible graphically in the heat map of Figure \ref{fig:class_sim3}: the brighter spots (i.e., higher similarity) are much more frequent there than in the reduced classification in Figure~\ref{fig:class_sim}. \paragraph{Discussion} The higher similarities of Figure~\ref{fig:class_sim3} are caused by a combination of different factors: \begin{enumerate} \item the first cause is the presence \revision{of the Mixed Granularity antipattern}. For example, the similarity between the categories `Development' and `Code Analysis' is 0.45. These two were mapped into the combined `Development' category of the reduced classification. \item the second cause of high similarity \revision{is the Single-Label antipattern}. As a result, some projects are labeled with one category, but their features would require multiple labels.
An example of this would be the high similarity (0.68) between `Database' and `Messaging', which in \textit{Awesome-Java} is described as ``\textit{Tools that help send messages between clients to ensure protocol independency}''. This high similarity can be explained by considering `\textit{Apache Kafka}', a distributed event streaming framework used for various tasks including data integration: it is categorized as `Messaging' while still containing a high number of data management terms like `\textit{query}'. This also remains in the reduced classification (Figure~\ref{fig:class_sim}) for `Database' and `Networking', into which `Messaging' is mapped. \item lastly, we also have to consider noise: given the smaller number of examples per category in the original classification, the documents used might not be very representative of the category. \end{enumerate} \begin{figure*} \centering \includegraphics[width=\columnwidth]{class_similarities_1st_0.8_0.1_final_2.pdf} \caption{Cosine similarities between categories in the \textit{Reduced AJ}. The last two rows are the mean and max similarity per category.} \label{fig:class_sim} \end{figure*} \begin{sidewaysfigure}[!htp] \includegraphics[width=1.15\textwidth]{class_similarities_3rd_0.8_0.1_final_2.pdf} \caption{Cosine similarities between categories in the original \textit{Awesome-Java}. The last two rows are the mean and max similarity per category.} \label{fig:class_sim3} \end{sidewaysfigure} \subsection{Discussion: moving towards a complete taxonomy} The purpose of a classification, like the ones that we have summarised in Table~\ref{tab:summary}, is to organise similar items (in this case, software systems) into categories for future reference. This could be for proactively recommending systems~\cite{nguyen2018crosssim}, in order to generate a list of alternatives from the same category; or for the identification of common patterns in the found categories~\cite{1d3929b4506d48aa949bf6a3ecf16d39}.
On the other hand, the purpose of a taxonomy is to organise categories into levels: for instance, in a taxonomy for biology, `class' (say, ``mammals'') is placed on a different level than `order' (say, ``carnivores''). In the classification works that we analysed, we never observed an attempt to define at which level the categories are (except in~\cite{zhang2019HiGitClass}), or whether those should be considered more or less generic or specific in the more general terms of a taxonomy. As a further analysis, and in order to evaluate how specific, general, or mixed-level the taxonomy is, we asked a group of 10 people \revision{belonging to our research group. The pool of annotators is} composed of PhDs, Post-Docs, and Professors in the Software Engineering field, to indicate whether the categories illustrated in Table~\ref{tab:dataset} should be placed in a higher or lower level of a taxonomy. The questionnaire included \revision{13 questions, one for each topic in the \textit{Reduced AJ}, where the annotators were asked to rate each topic by assigning it to one of} 5 levels, from 1 (very generic) to 5 (very specific). We collected their responses and analysed them to determine if any of the categories that were reduced from the Awesome-Java sample should be considered a `family', or `group', or even a `species' within a software taxonomy. The results of this preliminary qualitative analysis showed that certain categories were placed fairly consistently at either the very generic level (e.g., the `STEM' category) or the very specific level (`Introspection', `CLI'). On the other hand, several other categories were assigned uniformly to levels 2, 3 and 4, therefore being placed at middle-ground levels of the taxonomy, depending on the assessor's point of view. Figure~\ref{fig:taxo} shows the visualisation of what has initially emerged from the answers of our questionnaire.
\revision{The assignment of a topic to a level was performed using majority voting; topics without a majority are not presented.} This is further evidence that defining categories for software systems faces the challenging task of placing them in an overarching map: the `mixed levels' antipattern will always affect a classification effort, unless a more concerted research effort is conducted and shared, in order to build a taxonomy and to place the categories in its levels. \revision{The uniform distribution of some topics can also be partly imputed to our methodology: rating is a more complex task than ranking when assessing subjective characteristics~\cite{ye2014subjective}. Hence, future work will focus on better methods to rank topics.} \begin{figure}[!htb] \centering \includegraphics[width=0.7\columnwidth]{taxonomy.pdf} \caption{Assignment of the categories to levels of a taxonomy} \label{fig:taxo} \end{figure} \section{Threats to Validity} \label{sec:threats} We use the classification of Runeson et al.~\cite{Runeson2012threats} for analyzing the threats to validity in our work. We will present the \emph{construct validity}, \emph{external validity}, and \emph{reliability}. Internal validity was not considered, as we did not examine causal relations \cite{Runeson2012threats}. \subsection{Construct Validity} A construct threat in our work is the choice of the classification for the case study. However, given the wide variety of datasets, and the similarity of the issues in Awesome Java to those of the state-of-the-art classifications, this threat is mitigated. Another threat regards the way the reduction was performed. Having a single annotator performing the reduction can increase the bias in the selection of the resulting categories. We mitigated this threat by having another author evaluate the resulting categories; furthermore, we collected feedback from other colleagues regarding the same resulting categories.
\subsection{External Validity} We reduce threats to external validity by analyzing a large variety of datasets. We analyzed 12 datasets with different origins of the base classification: both bottom-up and top-down classifications have been considered for study. Moreover, these classifications are based on different domains, some more specific (e.g., bio-engineering), others more generic: this should help alleviate this threat to validity. \subsection{Reliability} The analysis of the classifications and taxonomies is inherently subjective, as it involves natural language and prior knowledge about the different application domains. We adopted objective tools, like semantic analysis, to aid with the subjective analysis. \section{Conclusions and Future Work} \label{sec:conclusions} In this work we evaluated the different classifications used for the software classification task. The current classifications have issues that might compromise the generalizability of classification models; moreover, there is no general classification that can be actively used (\textbf{RQ1}). We identified a list of 7 common antipatterns that researchers encounter when creating a software classification for classifying systems into application domains (\textbf{RQ2}). While the ideal case would be to avoid those antipatterns when creating a classification, this is quite difficult, and a refinement stage helps with the reduction (but not the complete removal) of some of these issues. We presented a case study, using a real classification, in which we mitigated some of the antipatterns using a reduction of the categories (\textbf{RQ3}). The reduction was performed manually in a hierarchical fashion. As future work, we plan to compute the similarity between the categories' contents also for the other works in the literature.
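A minimal sketch of this similarity computation, assuming plain term-frequency vectors and cosine similarity (the actual vectorisation may differ, and the category `documents' below are toy examples, not real data):

```python
import math
from collections import Counter

def cosine_similarity(doc_a, doc_b):
    """Cosine similarity between the term-frequency vectors of two documents."""
    va, vb = Counter(doc_a.lower().split()), Counter(doc_b.lower().split())
    dot = sum(va[t] * vb[t] for t in va.keys() & vb.keys())
    norm = math.sqrt(sum(c * c for c in va.values())) * \
           math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

# Toy category "documents": concatenated project descriptions per category.
database = "query table index transaction query storage"
messaging = "message broker queue stream query protocol"
print(round(cosine_similarity(database, messaging), 2))  # → 0.29
```

Applying the same computation to the category contents of the other datasets would make their similarity matrices directly comparable to ours.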
Furthermore, we plan to perform an analysis similar to the work of Sen et al.~\cite{priyanka2020what} for the Question Answering (QA) task in the Natural Language Processing field, where they look at what clues QA models actually use to answer questions. We are interested in checking whether the models learn general terms of specific domains, or whether they pick up dataset-specific clues that are not transferable to others. We are also planning to create a taxonomy induced from all the GitHub \textit{Topics}, given the variety of projects, and therefore application domains, that are hosted on the platform. Given the large number of terms, the hierarchical aggregation process needs to be automated. First, we plan to create a ranking with a larger pool of annotators and, given the high disagreement in our rating case study, use a different methodology: ranking from pairwise comparisons, as it is less complex for annotators~\cite{shah2016estimation}. Lastly, we will use the ranking to create links between levels, to group terms from different levels in the same domain. \section{Introduction} In the context of empirical software engineering research, the main goal of empirical papers is to achieve generality of the results. The most common approach in doing so is to analyse projects from different application domains to decrease threats to the generalizability of the results: as only a few examples, the work in~\cite{mojica2013large} analyses a collection of 200,000 mobile systems,~\cite{wen2019large} examines 1,500 GitHub systems based on their popularity, while~\cite{zhao2017impact} is based on 165,000 GitHub projects that use Travis CI. As a side effect, the domain, context and uniqueness of a software system have not been considered very often by researchers as driving factors for detecting similarities or differences between software systems.
In parallel, there has been a call for `context-driven software engineering research'~\cite{briand2012embracing,briand2017case}: in the testing and verification fields, for example, the set of assumptions is obviously specific to the systems under test, and those assumptions are based on the type of system, the development process, and other factors. Although the diversity and context of software systems have received some attention in the past~\cite{vassallo2018context, easterbrook2008selecting}, contemporary research in the computing field is almost entirely application-independent. This has not always been the case: early in the computing era, `\textit{there were totally separate application domains (for example, scientific and data processing) and the research focus was often application-specific}'~\cite{glass1995contemporary}. From the practitioners' point of view, categories and types of software systems are an important aspect to consider. Well-known collaborative platforms like GitHub, which host very large numbers of software repositories, show an increasing need to search and retrieve repositories based on their semantics. As a solution, GitHub has started to propose a service called \textit{Topics}\footnote{\href{https://github.com/topics}{https://github.com/topics}}, which allows developers to annotate their projects manually, and other users to search software via these topics. GitHub also provides the means to create \textit{Collections}\footnote{\href{https://github.com/collections}{https://github.com/collections}} (previously named Showcases), that is, curated lists of topics where good-quality repositories are grouped and showcased under the same umbrella. However, both these solutions have various issues: for example, \textit{Topics} are only an optional feature of a hosted software project, and GitHub does not suggest or restrict their usage in any way.
As a result, there are plenty of similar (or identical, with a different morphological form) topics, making the search less effective. On the other hand, the \textit{Collections} list is manually curated; therefore, it is not scalable to all topics, reducing the effectiveness of finding repositories, especially those annotated with non-popular topics. \revision{Furthermore, developers tend not to use these tools, or use topics that are not helpful to retrieve their code (e.g., programming languages).} \revision{The call for context-driven software engineering research, the easier retrieval of relevant projects using semantics, and the extra burden put on developers to label their projects with all the correct labels call for a more automated way to label software projects.} From past research and efforts, there have been several approaches to perform software classification, differing in what seed classification has been used as a stepping stone. In some cases, the seed was initiated with a \textit{top-down} approach, i.e., using an external classification~\cite{vasquez2014api,soll2017classifyhub}: researchers would then use the categories (or labels) of the given classification to fit a sample of software projects. In other cases, categories were generated by the researchers~\cite{Borges2016popularity}, and the software projects assigned to the categories using again a top-down approach. Finally, a \textit{bottom-up} approach was used when researchers used, as categories, the labels assigned by developers to their own software~\cite{capiluppi2020towards}. \revision{Moreover, there are various artefacts that can be used to perform the classification, from README files to the source code. These approaches are very different in terms of difficulty: for example, the README might be lacking, or contain information irrelevant to the repository content, such as instructions for building the code.
In the source code, on the other hand, there are hundreds or thousands of files, each containing some relevant semantic information that needs to be aggregated, keeping track not only of the frequency of terms but also of the interactions between the files.} \revision{Misclassification, or the lack of classification, has various implications both for the repository and for the research that makes use of the labels. The developer might struggle to find contributors for their new and less popular repositories, as these contributors are unable to discover and use the code. Furthermore, research that uses poorly labeled projects might infer wrong patterns and give bad advice to practitioners.} In this work, we evaluate several existing software classifications proposed in the literature. The selection criteria for these works comprise \begin{enumerate*}[label=(\roman*)] \item research papers that attempt a classification of application domains, and \item research works that made their data available.\end{enumerate*} While analysing the resulting body of research works, we came across a number of recurring issues that researchers struggled with. These represent the most common \textit{antipatterns} in classifying software systems. Similarly to the work in \cite{kalliamvakou16promiseperils}, which highlighted the pitfalls of mining software repositories, the goal of our work is to analyse existing classifications from past datasets, and to present a list of common antipatterns that researchers encountered when creating them. In this work we focus on the following research questions: \begin{enumerate}[label=\textbf{RQ\arabic*} - , leftmargin=*] \item \revision{What} is the \textit{quality} of existing software classification datasets? \item \revision{What} are the \textit{antipatterns} of creating a software classification dataset or a taxonomy of software application domains? \item How can we \textit{improve} software classifications and move towards a universal taxonomy that can be actively shared and used?
\end{enumerate} This paper presents two main contributions: first, we perform a case study attempting to create a classification for software systems that minimizes the common issues present in current datasets' classifications. Second, using the acquired experience and inductive analysis, we distil a set of 7 common \textit{antipatterns} that researchers have encountered while attempting to classify software systems. These antipatterns might have happened (and are likely to happen again) when researchers \begin{enumerate*}[label=(\roman*)] \item create their own labels, \item use a pre-defined classification of labels, or \item use the labels manually entered by software developers. \end{enumerate*} \revision{A visual representation of our pipeline is presented in Figure~\ref{fig:pipeline}.} For the sake of replication, we have made all our data\footnote{\href{http://doi.org/10.5281/zenodo.5018234}{http://doi.org/10.5281/zenodo.5018234}} and code~\footnote{\href{https://github.com/SasCezar/ComponentSemantics/blob/dev-semantic/componentSemantics/class\_term\_based\_classification.ipynb}{https://github.com/SasCezar/ComponentSemantics/blob/dev-semantic/componentSemantics/class\_term\_based\_classification.ipynb}} publicly available. The rest of this work is structured as follows: in Section~\ref{sec:background} we give an overview of previous work with a focus on the used classifications. Using the evidence obtained from the analysis of the datasets, in Section~\ref{sec:pandp} we summarise the antipatterns when creating a software classification. In Section~\ref{sec:dataset} we present a case study on creating a software classification and resolving some of its issues. In Section~\ref{sec:threats} we discuss the threats to the validity of our work. Finally, we present our conclusions and discuss future developments of our work in Section~\ref{sec:conclusions}.
\begin{figure} \centering \includegraphics[width=\columnwidth]{pipeline.pdf} \caption{Pipeline used to identify the antipatterns in software application domain classification datasets.} \label{fig:pipeline} \end{figure} \section{Antipatterns} \label{sec:pandp} Using the evidence observed during our case study, and an inductive analysis of the state-of-the-art classifications and taxonomies summarized in Table~\ref{tab:summary}, we highlight 7 antipatterns that researchers have faced so far while creating a taxonomy or a classification of software. We also add a discussion of each, and suggested solutions to reduce the effect of each of these antipatterns. The analysis was performed with a particular focus on the following characteristics: \begin{itemize} \item Coverage: each classification has its own domain, which can be more specific or general. With this requirement, we can evaluate if there are missing categories given the domain of the classification, and therefore evaluate its completeness and usability in its domain; \item Cohesion: this also relates to the domain of the classification; however, with this requirement, we try to assess if there are categories that do not belong to the domain; \item \revision{Consistency}: lastly, we check if the classification has any other issue \revision{that affects its consistency}, like duplicate or overlapping categories, categories with a bad surface form (e.g., in \cite{leclair2018neural} there are `contrib/math' and `math'), or any other abnormality. \end{itemize} The result of the analysis is a list of 7 common antipatterns. Below, we present a definition of each, and we discuss them with instances of how each was observed in past literature, giving some suggestions on how they can be fixed from a practical point of view.
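As an illustration of the consistency check, the following sketch (our own heuristic, not a tool used in the surveyed works) flags labels whose normalised surface forms collide, such as `contrib/math' and `math'; the label list is hypothetical.

```python
def normalise(label):
    """Reduce a category label to its lowercased final path segment."""
    return label.lower().strip().split("/")[-1]

def duplicate_groups(labels):
    """Group labels whose normalised surface forms collide."""
    groups = {}
    for lab in labels:
        groups.setdefault(normalise(lab), []).append(lab)
    return {k: v for k, v in groups.items() if len(v) > 1}

labels = ["contrib/math", "math", "Database", "database", "Web"]
print(duplicate_groups(labels))
# → {'math': ['contrib/math', 'math'], 'database': ['Database', 'database']}
```

A richer check could add stemming or embedding similarity, but even this surface-level pass catches the duplicates reported above.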
\begin{description} \item[MT --] \textbf{Mixed Taxonomies}: this issue happens when a label set contains various categories from different taxonomies (e.g., Programming Languages and Application Domains). \item[MG --]\textbf{Mixed Granularity}: this issue emerges when categories are found (or imposed) by researchers on a dataset, although those categories belong to different levels (e.g., family, species, etc.) of a taxonomy. \item[SC --] \textbf{Single Category}: this is a very common issue in software classifications, and it is based on simplifying a complex artifact like a software system and describing it with only one category. \item[NE --] \textbf{Non Exhaustive Categories}: this issue is visible when the categories chosen (i.e., top-down) or extracted (i.e., bottom-up) by the researchers do not cover the entire spectrum of software categories. \item[NRC --]\textbf{Non Relevant Categories}: this issue is visible when researchers choose or extract a subset of categories that is not representative of the domain of the classification. \item[UJC --]\textbf{Unnecessarily Joined Categories}: this issue occurs when researchers arbitrarily join or use several categories as one, although those are a compound of two or more different domains. \item[SKC --] \textbf{Sink Category}: this is another very common issue in software classification, and it manifests itself when researchers use a generic category for all the software systems that do not fit any of the available categories. \end{description} \subsection{Mixed Taxonomies - MT} \begin{description}[leftmargin=0cm] \item\textit{Definition:} The MT antipattern is defined as a label set consisting of a mixture of two or more different taxonomies, each covering a different domain. \item\textit{Possible Root Cause:} We hypothesize that one of the causes of this is the dataset creators' fear of excluding labels, which might make the dataset less appealing to the final users.
\item\textit{Potential Side Effects:} This is problematic, as the model has to perform two or more tasks at the same time with only a single label per project. Having multiple annotations for the same project is not a problem in itself; the issue is having them mutually exclusive, as there might be labels diverging from each other. \item\textit{Concrete Examples:} One very common additional part of the classification is based on having programming languages in the label set. This is common to~\cite{sharma2017cataloging, leclair2018neural, sipio2020naive}, where, on average, $10\%$ of the examples belong to programming language categories. It is also common to have language-specific frameworks or technologies as part of the label set, as for example found in \cite{sipio2020naive}. In some cases, we might find a domain category like `Deep-Learning', and specific framework or technology labels like `Django' or `AWS', as part of the same classification. \item\textit{Potential Solution:} A solution to this issue is to define the different classifications independently, and use them as separate tasks when training models. Having projects annotated with different classifications is useful, as they can be used as auxiliary tasks in a multi-task learning~\cite{caruana1997mtl} model to improve generalization~\cite{ruder2017overview} and boost performance for all the tasks. \end{description} \subsection{Mixed Granularity - MG} \begin{description}[leftmargin=0cm] \item\textit{Definition:} Having a dataset where some labels are very specific to a field and others are more general or, worse, where labels are in an `\texttt{IS-A}' relationship without these relations being explicitly represented. \item\textit{Possible Root Cause:} This issue is caused by the difficulty of creating such relations among all the labels in the categorization.
\item\textit{Potential Side Effects:} The former can make the model catch very specific terms, dependent on the sample in the dataset, to distinguish between categories. The latter causes overlap between classes, making the classification harder, or even impossible when having a single annotation for the projects. \item\textit{Concrete Examples:} As examples of this antipattern in action, we observed that \cite{sipio2020naive} contains the `Cryptocurrency' and `Bitcoin' categories, which have a similarity of $0.84$. Similarly, \cite{vasquez2014api} contains the `Compilers' and `Interpreters' categories, with a similarity of $0.52$. Even in the label sets where we could not detect hierarchical relations among categories (for example in~\cite{vasquez2014api}), we also observed more general categories like `Networking' and `Web' along with very specific ones like `Interpreters' and `Indexing'. \item\textit{Potential Solution:} A solution to this antipattern is to perform a refinement of the categories and try to aggregate them in a hierarchical fashion, as we attempted in our case study. The benefits are visible in Figure~\ref{fig:label_similarity}: we have a lower number of outliers with high similarity. Moreover, a more qualitative way to evaluate the extent of the issue is to assign each category a position in a taxonomic ranking scale, like the ones used in biology, and evaluate the number of different ranks covered by the taxonomy. \end{description} \subsection{Single Category - SC} Software systems do not contain only one large feature or functionality, but are rather composed of many smaller parts, each with its own specific task. Most of the time, however, software systems are labeled with only one category, which limits the extent to which researchers can learn from them.
\begin{description}[leftmargin=0cm] \item\textit{Definition:} We define the \textit{Single Category} antipattern as the annotation of software with only one category, despite it being a mix of different ones. \item\textit{Possible Root Cause:} This is caused by the annotation process being time-consuming, and also by the annotation not being made by the developers, who would only need to annotate their own project. \item\textit{Potential Side Effects:} Different components of a project, each with its own domain, influence what a model learns, making it harder for it to assign a single category to all components, especially when their semantic contribution to the model is different. This antipattern gets more evident when having a Mixed Granularity (and also a Mixed Taxonomy) classification, where one category is contained in another; however, the system is penalized for suggesting the other category. \item\textit{Concrete Examples:} This antipattern was detected in all datasets except for that of Vasquez et al.~\cite{vasquez2014api}, which performs multi-label classification, and \cite{sipio2020naive, izadi2020topic}, which perform recommendation of GitHub \textit{Topics}. \item\textit{Potential Solution:} While the solution for this is obvious (e.g., `annotate each sample with multiple categories'), it is not that easy to achieve, as it requires extra effort from researchers and developers during the annotation phase. A less demanding approach to achieve the goal would be to adopt the annotation approach of GitHub \textit{Topics}; however, those topics are highly noisy, as mentioned previously. Therefore, this antipattern requires more attention in future works. \end{description} \subsection{Non Exhaustive Categories - NE} \begin{description}[leftmargin=0cm] \item\textit{Definition:} A taxonomy where there are terms that have a common parent, but one of them is lacking, is considered to be suffering from this antipattern.
For a classification to be usable, it needs to cover the entire range of its domain. This depends on the actual domain, which also changes over time, and therefore has to be considered when defining the classification. \item\textit{Possible Root Cause:} There are various possible causes: one is that the missing term did not exist, or was very uncommon, at the time the taxonomy was created (e.g., Deep Learning was not very common 20 years ago). Another cause is how the taxonomy was created: if it is a subsampling of another one, then some terms might have been excluded by the process; if the taxonomy is defined top-down, then the authors' knowledge of the domain has a big impact on the presence of this antipattern. \item\textit{Potential Side Effects:} Having a classification that is too small, or with missing relevant categories, is an issue, as the classification model performance can be affected based on what category is missing: a category can be easily differentiated if no similar ones are present. Moreover, it will make the approach less useful for general use. \item\textit{Concrete Examples:} Some examples from previous work are the classifications of \cite{Kawaguchi2006MUDABlue, tian2009lact, altarawy2018lascad}, which are too small and lack many categories, and that of \cite{leclair2018neural}, which has `Interpreters' but not `Compilers', and is also missing a `Security'/`Cryptography'-related category. In~\cite{sipio2020naive}, a GitHub Topics-based classification, we have `NLP' but not `Computer Vision', in spite of it having 134 categories. \item\textit{Potential Solution:} Solutions to this are limited since, as mentioned previously, application domains change over time, with new categories appearing. A possible solution is to have a coarser granularity; however, this might not be a possibility in some cases, and will also reduce the utility of the classification.
\end{description} \subsection{Non Relevant Categories - NRC} \begin{description}[leftmargin=0cm] \item\textit{Definition:} This antipattern is based on assigning very fine-grained categories to a project. This means that researchers have in the past added categories to their taxonomies that are too specific and non-relevant. \item\textit{Possible Root Cause:} The presence of these categories can be caused by the lack of a specific usage for the taxonomy, or by a lack of refinement of the categories when subsampling them from a larger pool. \item\textit{Potential Side Effects:} This antipattern has the effect of making the classification task too simple, since the categories have very few shared terms (or even none) with the others. This can be viewed as a special case of Mixed Taxonomies; however, the affected categories are usually one or two and, differently from Mixed Taxonomies, they are not related to specific technologies or programming languages. \item\textit{Concrete Examples:} Examples of non-relevant categories were found for instance in \cite{Kawaguchi2006MUDABlue}, where categories are very different from each other (e.g., `Boardgame' and `Editor'); and in \cite{soll2017classifyhub}, where there are a `Development' category and a `Homework' one. Another example is from \cite{sipio2020naive}, where we find `Minecraft' (i.e., a popular videogame) as a category. \item\textit{Potential Solution:} A possible workaround for this antipattern would be to simply remove these categories and either discard the examples along with them, or try to reassign the examples to a relevant category that belongs to the domain being modeled. \end{description} \subsection{Unnecessarily Joined Categories - UJC} \begin{description}[leftmargin=0cm] \item\textit{Definition:} This antipattern manifests itself in categories that join several categories using the ``and'' conjunction (e.g., `Gaming and Chat Engines').
While this is a less common antipattern, having a category that is a conjunction of two unrelated categories is something to pay attention to. \item\textit{Possible Root Cause:} One cause for this is the high similarity between the terms, and the low number of examples each of them has: joining them makes the number of examples higher. \item\textit{Potential Side Effects:} The joined categories do not provide as much information to the final user as a single term would: which of the two categories in the conjunction does the project belong to? \item\textit{Concrete Examples:} In~\cite{sharma2017cataloging} there are many examples of these joined categories: while some might be considered acceptable (for example, `{Data Management and Analysis}'), others form a weak combination (for example, `Gaming and Chat Engines' or `Build and Productivity tools'), since they join labels that belong to very different categories. \item\textit{Potential Solution:} An easy solution for this antipattern would be for researchers to avoid using conjunctions, or to use them only when the categories are related. However, if the categories are indeed related, there should be a more general (single) label to group them under, which is a more appropriate solution. \end{description} \subsection{Sink Category - SKC} \paragraph{Definition:} This is a very common antipattern to fall into when dealing with large classifications. It manifests itself with a category, used as a super-label, that is applied to any software that does not fit any other category in the classification, but that still needs an annotation. The most common one is a category named `Others', or some synonym. However, there are other categories that might not be that obvious, like `Frameworks', `Libs' and so on.
\paragraph{Possible Root Cause:} While the `Other' category is needed for the classification, its abuse, and the presence of the other \textit{Sink Categories}, is eased by the difficulty of annotating some projects: labeling them with a \textit{Sink Category} makes the task easier. \paragraph{Potential Side Effects:} This sink category adds extra noise, as it might get applied to the majority of projects contained in the pre-existing classification. This category might also be applied to projects that actually belong to other categories, but that were not originally contained in the classification; it can also be used as a backup for harder-to-classify projects. \paragraph{Concrete Examples:} Examples from previous work include: LeClair et al.~\cite{leclair2018neural}, who have three of these, namely `Libs', `Utils', and `Misc', which total $30\%$ of the dataset size; \cite{Borges2016popularity}, who have a category called `Non-web libraries and Frameworks' containing $25\%$ of their dataset's examples. Lastly, \cite{sharma2017cataloging} has a category `Others' containing $50\%$ of the dataset examples. \paragraph{Potential Solution:} This antipattern is harder to avoid, and it was commonly found in our survey: the works that do not suffer from it were usually dealing with small classifications, or very domain-specific ones. \color{black} \subsection{Summary} In Table~\ref{tab:pathologies} we summarize, for each work, the antipatterns present in their classification. We can notice that the antipatterns least easy to fall into are NRC and UJC, while the most common are NE and SC, which are also the hardest to avoid, as they require extra work in the annotation phase. The most problematic issues, MT and MG, are also quite common, with the former being present in most of the larger and more general taxonomies.
We can also see that there is no perfect taxonomy: if one only considered the number of antipatterns contained in a dataset, they would select the works of Di Sipio et al.~\cite{sipio2020naive}, Zhang et al. (HiGitClass)~\cite{zhang2019HiGitClass}, and Ohashi et al.~\cite{ohashi2019cnn_code}. However, the latter two have very specific and closed domains, which are more straightforward to create, but less useful to other researchers. \begin{table}[htb!] \centering \caption{Summary of the antipatterns in previous works.} \begin{tabular}{lccccccc} & \rot{MT} & \rot{MG} & \rot{SC} & \rot{NE} & \rot{NRC} & \rot{UJC} & \rot{SKC} \\ \midrule MUDABlue~\cite{Kawaguchi2006MUDABlue} & \OK & \OK & \OK & \OK & \OK & & \\ LACT~\cite{tian2009lact} & & \OK & \OK & \OK & & & \\ Vasquez et al.~\cite{vasquez2014api} & & \OK & & & & & \OK \\ Borges et al.~\cite{Borges2016popularity} & & \OK & \OK & \OK & & & \OK \\ LeClair et al.~\cite{leclair2018neural} & \OK & \OK & \OK & & & & \OK \\ LASCAD~\cite{altarawy2018lascad} & & \OK & \OK & \OK & & & \\ Ohashi et al.~\cite{ohashi2019cnn_code} & & & \OK & \OK & & & \\ Sharma et al.~\cite{sharma2017cataloging} & \OK & & \OK & \OK & & \OK & \OK \\ ClassifyHub~\cite{soll2017classifyhub} & \OK & & \OK & \OK & \OK & & \\ HiGitClass~\cite{zhang2019HiGitClass} & \OK & & \OK & & & \OK & \\ Di Sipio et al.~\cite{sipio2020naive} & \OK & \OK & & \OK & & & \\ \bottomrule \end{tabular} \label{tab:pathologies} \end{table} \revision{In Section~\ref{sec:dataset}, we present an attempt at creating a classification using a new, bottom-up taxonomy: we annotate all the steps in doing so, and try to address the limitations of existing classifications presented above.} \section{Related Work and Existing Taxonomies} \label{sec:background} There have been several attempts in the literature focusing on software classification: in our paper we choose to focus only on those performing a classification of application domains.
In general, all those previous works use their own datasets and different classifications. This generates an even broader issue: it becomes hard to apply these approaches in practice, or to agree on a shared benchmark. While this paper is not a systematic literature review, the analyzed works have been selected using a similar approach. We retrieved the past works that (1) focus on the classification of software into application domains, and that (2) \revision{propose a new dataset.} In order to perform this query, we performed an initial search \revision{on computer science bibliography services such as \textit{dblp}, \textit{Google Scholar}, and \textit{Arxiv}. We used the} following terms: `\textit{software categorization}', `\textit{software classification}', `\textit{github repository classification}', and `\textit{software similarity}'. In a first stage, we validated the relevance of each work by filtering the results using titles and abstracts. The works that passed this first filtering were subsequently used for manual forward and backward snowballing to find further relevant papers. \revision{Borderline works were kept if their method or dataset can be used to perform software classification (e.g., \cite{theeten2019import2vec}, \cite{Borges2016popularity}).} The papers resulting from our search are listed in Table~\ref{tab:summary}; they span a window of 15 years (2006 to 2020). The approaches used to perform the classification task of software projects in the retrieved works vary: from project metrics~\cite{liu2018onboarding}, to source code~\cite{vasquez2014api}, to binary data~\cite{escobar2015bytecode}.
In this paper, we focus on the approaches based on: \begin{description} \item (A) source code; and \item (B) other project data (e.g., \textit{README files}), \end{description} as we are interested in performing the classification task using semantic and structural information (both can be extracted from source code). \revision{Table~\ref{tab:works} contains a list of the works divided by their approach.} Below we provide, for each information source, a summary of the representative works with a focus on each work's dataset. A more detailed review of the software classification task landscape is presented in~\cite{auch2020similarityreview}. \begin{table}[] \centering \caption{List of works divided by the different data source used.} \label{tab:works} \begin{tabular}{ll} \toprule Data Source & Works \\ \midrule Source Code & \cite{Kawaguchi2006MUDABlue, tian2009lact, vasquez2014api, leclair2018neural, mcmillan2012clan, linares2016clandroid, altarawy2018lascad, theeten2019import2vec, ohashi2019cnn_code} \\ Other Project Data & \cite{vargas2015automatic, sharma2017cataloging, soll2017classifyhub, nguyen2018crosssim, zhang2019HiGitClass, sipio2020naive, izadi2020topic, Borges2016popularity} \\ \bottomrule \end{tabular} \end{table} \subsection{Source Code Approaches} One of the initial works on software classification is MUDABlue~\cite{Kawaguchi2006MUDABlue}, which applied information retrieval techniques to classify software into 6 SourceForge categories. In particular, the authors used Latent Semantic Analysis (LSA) on the source code identifiers of 41 projects written in C. Following MUDABlue, Tian et al. proposed LACT~\cite{tian2009lact}, an approach based on Latent Dirichlet Allocation (LDA), a generative probabilistic model that retrieves topics from textual datasets, to perform the classification task from the identifiers and comments in the source code. In addition, the authors use a heuristic to cluster similar software.
The authors use a dataset of 43 examples divided into 6 SourceForge categories. The list of projects is available in their paper. A different approach was adopted in~\cite{vasquez2014api}: the authors used API package, class, and method names, and extracted the words using the naming conventions. Following the example of~\cite{Ugurel2002classification}, the authors use information gain to select the best attributes as input to different machine learning methods for the task of classifying 3,286 Java projects into 22 SourceForge categories. Their dataset is no longer available. LeClair et al.~\cite{leclair2018neural} used a neural network approach. The authors use the project name, function names, and function contents as input to a C-LSTM~\cite{zhou2015clstm}, a combined model of convolutional and recurrent neural networks. Their dataset is made of 9,804 software projects, with annotations from the Debian packages repository. The authors only analysed programs containing C/C++ source code, divided into 75 categories: many of these categories have only a few examples, and 19 are duplicates of other categories with a different surface form, more specifically `contrib/X', where X is a category already present in the list. CLAN~\cite{mcmillan2012clan} provides a way to detect similar apps based on the idea that similar apps share some semantic anchors. Given a set of applications, the authors create two term-document matrices: one for structural information using the package and API calls, the other for textual information using the class and API calls. Both matrices are reduced using LSA, and the similarity across all applications is then computed. Lastly, the authors combine the similarities from the packages and classes by summing the entries. The data is not available. In \cite{linares2016clandroid}, the authors propose CLANdroid, an adaptation of CLAN to the Android apps domain, and evaluate the solution on 14,450 Android apps. Their dataset is not available.
Another unsupervised approach was adopted by LASCAD~\cite{altarawy2018lascad}, a language-agnostic classification and similarity tool. As in LACT, the authors used LDA over the source code, and further applied hierarchical clustering with cosine similarity on the topic-term output matrix of LDA to merge similar topics. The authors also proposed two datasets: an annotated one consisting of 103 projects divided into 6 categories (from GitHub Collections) across 16 programming languages (although many languages have only 1 example), and an unlabeled one which is not available. Taking a more Natural Language Processing (NLP) inspired approach, based on the distributional hypothesis: `\textit{A word is characterized by the company it keeps}'~\cite{firth1957studies}, \cite{theeten2019import2vec} proposed a neural network solution to create dense representations (i.e., embeddings) of libraries. The authors used the co-occurrences of import statements of libraries to learn a semantic space where libraries that appear in the same context are close (similar) in the space. The authors do not perform classification, and therefore their dataset is not annotated; however, the learned representation can be used to compute similarity and also to train a classification model. Differently from the previous works, \cite{ohashi2019cnn_code} used the C++ keywords and operators, represented as a binary matrix, as input to a convolutional neural network to assign the correct category out of 6 in the computer science and engineering field. The dataset is made of 40,023 student-written source code submissions for assignments/exams (short, single-file programs). Their dataset is not publicly available. \subsection{Other Approaches} The following is a review of the works that are based on software artifacts other than source code.
Following MUDABlue, Sally~\cite{vargas2015automatic} used an approach based on the bytecode, the external dependencies of the project, and information from Stack Overflow to generate a tag cloud. Their dataset is no longer available. Sharma et al.~\cite{sharma2017cataloging} used a combined solution of topic modeling and genetic algorithms called LDA-GA~\cite{panichella2013ldaga}. The authors apply LDA topic modeling to the README files, and optimize the hyper-parameters using genetic algorithms. While LDA is an unsupervised solution, humans are needed to annotate the topics from the identified keywords. The authors release a list of 10,000 examples annotated by their model into 22 categories, which was evaluated using 400 manually annotated projects. It is interesting to note that half of the projects eventually end up in the `{Other}' category, which means that they are not helpful when training a new model. ClassifyHub~\cite{soll2017classifyhub} used an ensemble of 8 na\"{i}ve classifiers, each using different features (e.g., file extensions, README, GitHub metadata, and more) to perform the classification task. The authors use the InformatiCup 2017\footnote{\href{https://github.com/informatiCup/informatiCup2017}{https://github.com/informatiCup/informatiCup2017}} dataset, which contains 221 projects unevenly divided into 7 categories. Nguyen et al.~\cite{nguyen2018crosssim} proposed CrossSim, an approach that uses the manifest file and the list of contributors of GitHub Java projects: this data is used to create an RDF graph where projects and developers are nodes, and edges represent the use of one project by another, or a developer contributing to a project. The authors used SimRank~\cite{glen2002simrank} to identify similar nodes in the graph. According to SimRank, two objects are considered to be similar if they are referenced by similar objects.
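The intuition behind SimRank can be sketched with a naive implementation on a toy contributor graph; the graph below is a hypothetical stand-in for CrossSim's much richer RDF graphs, and the implementation follows the textbook fixed-point iteration rather than any optimized variant:

```python
import itertools

def simrank(in_neighbors, c=0.8, iters=10):
    """Naive SimRank: two nodes are similar if they are referenced by
    similar nodes.  `in_neighbors` maps each node to the list of nodes
    that point at it; `c` is the decay factor."""
    nodes = list(in_neighbors)
    sim = {(a, b): 1.0 if a == b else 0.0 for a in nodes for b in nodes}
    for _ in range(iters):
        new = {}
        for a, b in itertools.product(nodes, repeat=2):
            if a == b:
                new[(a, b)] = 1.0
            elif not in_neighbors[a] or not in_neighbors[b]:
                new[(a, b)] = 0.0
            else:
                total = sum(sim[(i, j)]
                            for i in in_neighbors[a]
                            for j in in_neighbors[b])
                new[(a, b)] = c * total / (len(in_neighbors[a]) * len(in_neighbors[b]))
        sim = new
    return sim

# Hypothetical toy graph: projects p1..p3 referenced by contributing
# developers d1..d3.
graph = {"p1": ["d1", "d2"], "p2": ["d1", "d2"], "p3": ["d3"],
         "d1": [], "d2": [], "d3": []}
scores = simrank(graph)
# p1 and p2 share both contributors, so they come out more similar than
# p1 and p3, which share none.
```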
HiGitClass~\cite{zhang2019HiGitClass} used an approach for modeling the co-occurrence of multimodal signals in a repository (e.g., user, name of the repository, tags, README, and more). The authors performed the annotation according to a taxonomy (hierarchical classification) that is given as an input with keywords for each leaf node. The authors released a dataset with taxonomies for two domains: an artificial intelligence (AI) taxonomy with 1,600 examples, and a bioinformatics (Bio) one with 876 projects. Di Sipio et al.~\cite{sipio2020naive} used the content of the README files and the source code, represented using TFIDF, as input to a probabilistic model called Multinomial Na\"{i}ve Bayesian Network to recommend possible topics. Given its premises, the work is framed as a multi-label classification. The authors used 120 popular topics from GitHub, and released a dataset of around 10,000 annotated projects in different programming languages. Repologue~\cite{izadi2020topic} also adopted a multimodal approach. The authors used project names, descriptions, READMEs, wiki pages, and file names concatenated together as input to BERT~\cite{devlin-etal-2019-bert}, a neural language model that creates a dense vector representation (i.e., embedding) of the input text. Then, a fully connected neural network was applied to these embeddings to predict multiple categories. Their dataset (currently unavailable) contains 152K repositories in various languages classified using 228 categories from GitHub's \textit{Collections}, which should be similar to those used by Di Sipio et al.~\cite{sipio2020naive}. Finally, Borges et al.~\cite{Borges2016popularity}, albeit not performing a classification of software repositories, made a list of 2,500 projects (annotated in 6 domains/categories) available to other researchers. \subsection{Summary of Related Work} Table~\ref{tab:summary} presents a summary of the datasets used in the literature.
A single file with all the categories and the number of examples for each of the analyzed works is available in the replication package for inspection or further analysis. We use the following attributes to describe and analyze each dataset: \begin{itemize} \item \textbf{Work}: the details of the publication where the dataset is proposed or used for the first time; \item \textbf{Year}: the publication year; \item \textbf{Data Source}: the type of information used to perform the classification task from the software systems. This was further coded into the following: \begin{itemize} \item \textit{Source Code}: when the authors of the research directly used the source code of a system to infer (i.e., bottom up) or assign (i.e., top down) the software categories; \item \textit{README}: when the authors used the textual description of a software project in the README file to infer or assign one or more categories; \item \textit{Imports}: when the authors focused on what external libraries have been imported into a software project to infer or assign a category; \item \textit{Key and Op}: when the authors used the predefined or reserved words of a specific programming language (e.g., C++), along with the operators used in the source code; \item \textit{Multimodal}~\cite{Baltrusaitis2019multimodal}: when the authors used a combination of several sources (e.g., \textit{Source Code} and \textit{Wiki pages}). \end{itemize} \item \textbf{Available}: whether the dataset is available or not; we distinguish whether only the list of annotated projects is available, or both the list and the files used; \item \textbf{Task}: the type of task that can be performed using the dataset.
This attribute takes one of the following values: \begin{itemize} \item \textit{Classification}: assign one of $n$ mutually exclusive categories to the input project; \item \textit{Multi-Label Classification}~\cite{Tsoumakas2007MultiLabelCA}: assign to a project one or more categories from a set of $n$; \item \textit{Hierarchical Classification}~\cite{gordon1987review}: assign $m$ of $n$ categories as in the Multi-Label problem; however, there is a hierarchy among the categories; \item \textit{Similarity}: the task is to retrieve software that is similar to a given input; \item \textit{Representation Learning}~\cite{bengio2013representation}: a more general case of Similarity; here the goal is to create a dense representation (embedding) that preserves the similarities among projects and can also be used for downstream tasks. \end{itemize} \item \textbf{Examples}: the total amount of examples in the dataset; \item \textbf{Categories}: the number of different categories used to classify the software; higher counts are not always better, as we will see later on; \item \textbf{Balance}: the level of class balance in terms of examples. It is computed using the Shannon Diversity Index (or Normalized Class Entropy~\cite{kalousis2004normalizedclassentropy} in Machine Learning), a normalized entropy~\cite{shannon1948mathematical} value: \begin{equation*} \mbox{Balance} = \frac{-\sum\limits_{i = 1}^k \frac{c_i}{n} \log{ \frac{c_i}{n}}} {\log{k}} \end{equation*} where the numerator is the entropy of a dataset of size $n$ with $k$ categories each of size $c_i$, and the denominator is the entropy of the perfect case of a dataset with balanced categories, which normalizes the value. The result ranges between 0 (e.g., a completely unbalanced dataset with only one category containing all the examples) and 1 (e.g., a perfectly balanced dataset whose categories contain the same number of examples).
A low score means that the dataset contains a large number of categories that are not well represented in examples, which makes the classification task more difficult for those categories. \revision{This measure is not suitable for cases where there is a large amount of classes with many examples, and only a few classes with a small number of examples;} \item \textbf{Min}: \revision{the number of examples in the least represented class;} \item \textbf{Max}: \revision{the number of examples in the largest class of the dataset.} \end{itemize} \newcolumntype{g}{l} \begin{landscape} \begin{table*}[htb!] \footnotesize \begin{center} \caption{Summary of the different datasets used in literature} \begin{tabularx}{1.65\textwidth}{llllllllgg} \toprule \multirow{2.5}{*}{Work} & \multirow{2.5}{*}{Year} & \multirow{2.5}{*}{Data Source} & {\multirow{2.5}{*}{Available}} & \multirow{2.5}{*}{Task} & \multicolumn{5}{c}{Dataset Stats} \\ \cmidrule(lr){6-10} & & & & & Examples & Categories & Balance & Min & Max \\ \midrule MUDABlue~\cite{Kawaguchi2006MUDABlue} & 2006 & Source Code & List Only$^\diamond$ & Classification & 41 & 6 & 0.91 & 2 & 13 \\ LACT~\cite{tian2009lact} & 2009 & Source Code & Yes & Classification & 43 & 6 & 0.97 & 4 & 9 \\ CLAN~\cite{mcmillan2012clan} & 2012 & Source Code & No & Similarity & 8,310 & - & - & - & - \\ Vasquez et al.~\cite{vasquez2014api} & 2014 & Source Code & No & Multi-Label Class.
& 3,286 & 22 & 0.96$^{\dagger}$ & 303 & 1115 \\ Borges et al.~\cite{Borges2016popularity} & 2016 & - & List Only & Classification & 2500 & 6 & 0.88 & 103 & 837 \\ CLANdroid~\cite{linares2016clandroid} & 2016 & Multimodal & No & Similarity & 14,450 & - & - & - & - \\ Sharma et al.~\cite{sharma2017cataloging} & 2017 & README & List Only & Classification & 10,000 (5,360$^\ddagger$) & 22 & 0.60 (0.91$^\ddagger$) & 85 & 670$^\ddagger$ \\ ClassifyHub~\cite{soll2017classifyhub}$^\divideontimes$ & 2017 & README & List Only & Classification & 208 & 5 & 0.88 & 95 & 19 \\ LeClair et al.~\cite{leclair2018neural} & 2018 & Source Code & Yes & Classification & 9,804 & 75 & 0.73 & 1 & 3534 \\ LASCAD~\cite{altarawy2018lascad} & 2018 & Source Code & Yes & Classification & 103 & 6 & 0.95 & 7 & 26\\ CrossSim~\cite{nguyen2018crosssim} & 2018 & Multimodal & Yes & Similarity & 582 & - & - & - & - \\ Import2Vec~\cite{theeten2019import2vec} & 2019 & Imports & Embedding & Repr. Learning & - & - & - & - & - \\ Ohashi~\cite{ohashi2019cnn_code} & 2019 & Key and Op & No & Classification & 40,023 & 23 & 0.93 & 4713 & 10769 \\ {HiGitClass~\cite{zhang2019HiGitClass}} - AI & {2019} & {Multimodal} & {Yes} & {Hierarchical Class.} & 1,596 & 3 - 13$^\bigstar$ & 0.58 - 0.87$^\bigstar$ & 48 - 1213$^\bigstar$ & 21 - 361$^\bigstar$ \\ {HiGitClass~\cite{zhang2019HiGitClass}} - Bio & {2019} & {Multimodal} & {Yes} & {Hierarchical Class.} & 876 & 2 - 10$^\bigstar$ & 0.87 - 0.91$^\bigstar$ & 261 - 27$^\bigstar$ & 615 - 210$^\bigstar$ \\ Di Sipio et al.~\cite{sipio2020naive} & 2020 & README & Yes & Multi-Label Class. & 12,060 & 134 & 1 & 100 & 100 \\ Repologue~\cite{izadi2020topic} & 2020 & Multimodal & No & Multi-Label Class. 
& 152,000 & 228 & - & - & - \\ \textbf{Awesome-Java} & 2021 & - & Yes & Classification & 495 & 69 & 0.93 & 1 & 39 \\ \textbf{Reduced AJ} & 2021 & Source Code & Yes & Classification & 495 & 13 & 0.93 & 8 & 100\\ \midrule \multicolumn{8}{l}{$\diamond$ A reproduced dataset is available in LACT~\cite{tian2009lact}} \\ \multicolumn{8}{l}{$\dagger$ Multi-labels, and also numbers in the paper table do not sum to the number of examples used for the measure}\\ \multicolumn{8}{l}{$\ddagger$ After removing the `Others' class that contains almost half of the examples in the dataset}\\ \multicolumn{8}{l}{$\divideontimes$ InformatiCup 2017 Dataset}\\ \multicolumn{8}{l}{$^\bigstar$ Two level hierarchy. First value is for the first level, the other for the second level of the taxonomy.}\\ \bottomrule \end{tabularx} \label{tab:summary} \end{center} \end{table*} \end{landscape} \subsection{Related Work Content Analysis} The quantitative summarization of the previous section is not sufficient to give a complete idea of the datasets. In this section, we present the content of the categorizations in the selected datasets. We give an overview of their intended application, inferred from the labels, and discuss the semantics of the labels in more detail using word embeddings. We use fastText~\cite{bojanowski-etal-2017-enriching}, a neural language model, to extract the vector representation of each category: we chose it because it can handle out-of-vocabulary words; however, we obtained similar results with BERT~\cite{devlin-etal-2019-bert} and StackOverflow~\cite{Efstathiou18SOW2V} embeddings as well. In Figure~\ref{fig:label_similarity}, we show the distribution of similarities among categories for each dataset.
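The similarity computation behind Figure~\ref{fig:label_similarity} can be sketched in a few lines; the vectors below are hypothetical low-dimensional stand-ins for the 300-dimensional fastText embeddings (a real run would load a pretrained model and query it per label):

```python
import itertools
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy 4-dimensional stand-ins for fastText label embeddings (hypothetical
# values, for illustration only).
labels = {
    "Compilers":    np.array([0.9, 0.1, 0.0, 0.2]),
    "Interpreters": np.array([0.8, 0.2, 0.1, 0.3]),
    "Boardgame":    np.array([0.0, 0.9, 0.7, 0.1]),
}

# Pairwise similarities over a dataset's label set, as aggregated per
# dataset in the box plot.
pairs = {
    (a, b): cosine(labels[a], labels[b])
    for a, b in itertools.combinations(sorted(labels), 2)
}
for (a, b), s in pairs.items():
    print(f"{a} / {b}: {s:.2f}")
```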
On the one hand, it is difficult to say anything definitive about the low-similarity outliers, as terms from different domains may have low similarity; on the other hand, the high-similarity outliers mostly correspond to categories that are in a hierarchical relationship or are otherwise highly related. \begin{itemize} \item \textbf{MUDABlue}: it has a very small categorization, with a focus on developers, containing categories like \dscat{Compilers}, \dscat{Editor}, and a very specific one, \dscat{xterm}. However, it also contains labels that are not related to the others, in particular \dscat{Boardgame}. Overall, the terms are not very similar to each other: the only high-similarity outlier is the pair \dscat{xterm} and \dscat{Compilers} at $0.46$, while the lowest similarity is between \dscat{Boardgame} and \dscat{Editor}. Given its small size, we read the high spread as a lack of specificity. \item \textbf{LACT}: it covers a similar domain to MUDABlue, but with more terms. It is also more general, as it contains several terms that are broader and less specific. We can find \dscat{Database} and \dscat{Editor}, as in MUDABlue, and \dscat{Terminal}, which can be considered a more general version of \dscat{xterm}. The other terms are more cohesive compared to MUDABlue, for example \dscat{E-Mail} and \dscat{Chat}. In this case, we do not find any outlier pairs with a high similarity, and the distribution is quite narrow. \item \textbf{Vasquez}: it proposes a much more general taxonomy, as it is a subset of the SourceForge categories. Its labels span multiple fields in the Computer Science domain, some more general (\dscat{Scientific}, \dscat{Networking}, and \dscat{Security}), others more specific (\dscat{Indexing} and \dscat{Compilers}).
Given its well-defined focus and a higher number of topics compared to the previous datasets, we find some label pairs with a high similarity: \dscat{Compilers} and \dscat{Interpreters} have a similarity of $0.52$, while \dscat{Networking} and \dscat{Communication} are at $0.48$. The former are co-hyponyms, i.e., hyponyms that share the same hypernym, while the latter are related, as communication software uses networking technologies. \item \textbf{LASCAD}: smaller than Vasquez et al.'s, it still uses Computer Science labels, but they are less complete, sparser, and less related to each other. The labels are: \dscat{Machine Learning}, \dscat{Data Visualization}, \dscat{Game Engine}, \dscat{Web Framework}, \dscat{Text Editor}, and \dscat{Web Game}. As expected from their surface form, the high-similarity outlier pairs are \dscat{Web Game} and \dscat{Game Engine}, with a similarity of $0.60$, and \dscat{Web Game} and \dscat{Web Framework}, with $0.59$. These pairs, besides sharing a term, are also related in terms of usage. \item \textbf{Ohashi}: this is a very specific categorization, based on the domain of computer science courses. The label set includes: \dscat{Combinatorial Optimization Problems}, \dscat{Number Theory Problems}, and \dscat{Shortest Path Problems}. The overall high similarity between labels is due to the fact that all the labels contain the term \dscat{Problems}. \item \textbf{Sharma}: it is a developer-oriented classification. The terms cover various areas; labels include \dscat{Security}, \dscat{Music}, \dscat{Gaming and Chat Engines}, and \dscat{Blogging}. Furthermore, there are also programming-language labels like \dscat{Lua} and \dscat{Ruby related}. \item \textbf{ClassifyHub}: as a more education-oriented dataset, its focus is not well defined, and it has high-level labels in very loosely related domains: \dscat{Homework}, \dscat{Documents}, \dscat{Development}, \dscat{Education}, and \dscat{Website}.
\item \textbf{HiGitClass}: their two datasets are very specific, one focusing on AI subfields, the other on Bioinformatics. Labels in the AI dataset include \dscat{Computer Vision}, \dscat{NLP}, and \dscat{Speech} at level zero, and \dscat{Image Generation}, \dscat{Super Resolution}, and \dscat{Language Modeling} at the first level. In Figure~\ref{fig:label_similarity}, we can see the similarity among the labels at all levels. The outliers are due to surface similarity among the labels (e.g., \dscat{Text Classification} and \dscat{Image Classification}). As expected, the average similarity is higher, given the very specific domain, which, as mentioned, also means that some words appear in multiple labels, increasing the score. In the Bioinformatics dataset, labels include \dscat{Computational Biology} and \dscat{Data-Analytics} at level zero, and \dscat{Sequence Analysis}, \dscat{Database and Ontology}, and \dscat{System Biology} at level one. This dataset contains some labels that represent two distinct concepts (e.g., \dscat{Database and Ontology}); such labels are less informative, as we cannot tell which of the two concepts an annotated project belongs to. The outliers show characteristics similar to those of the AI dataset. \item \textbf{Di Sipio}: their categorization is the most general, as it is a subset of the most common GitHub Topics. The topics include application domains like \dscat{Machine Learning}, \dscat{Database}, and \dscat{Operating System}. Moreover, we find programming languages like \dscat{Python} and \dscat{Java}, and also companies and services like \dscat{Google} and \dscat{AWS}. Given the large variety of labels, many are highly related to others.
The most similar pair is \dscat{Cryptocurrency} and \dscat{Bitcoin}, with a similarity of $0.84$, followed by a group of database-related labels (\dscat{PostgreSQL}, \dscat{SQL}, \dscat{MySQL}, \dscat{NoSQL}, and \dscat{MongoDB}) with similarities in the range $0.75$--$0.80$; \dscat{Machine Learning} and \dscat{Deep Learning} have a similarity of $0.77$. \end{itemize} A complete list of the statistics and labels for each dataset is available in our data replication package\footnote{\href{https://zenodo.org/record/5018234}{https://zenodo.org/record/5018234}}. \begin{figure}[htb!] \centering \includegraphics[width=0.99\columnwidth]{raw_label_similarities_fasttext_box_sota.pdf} \caption{Cosine Similarity between labels using fastText embeddings.} \label{fig:label_similarity} \end{figure} \subsection{Discussion} We gathered several insights by analysing the results collected in Table~\ref{tab:summary}: first, approximately one in three of the datasets are not publicly available; similarly, for another third of the datasets the authors have only released the list of annotated projects, which in most cases is a sub-sample of a larger classification. In both cases, it is hard (if not impossible) to reproduce the steps that lead to the classification: the unclear pre-processing has a direct effect on the performance of the empirical approach~\cite{UYSAL2014104}. Second, we noticed the large variance in the number of examples and in the resulting classifications: from 41 to 12K or even 150K examples (although the latter dataset is not publicly available), and from 6 to 134 or 228 categories (again, the latter is unavailable). The upper bound of these statistics shows acceptable numbers for both the amount of examples and the number of different categories.
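As a concrete reference, the \textbf{Balance} column of Table~\ref{tab:summary} is a normalized Shannon entropy that can be computed from the per-category counts with a short sketch (our own illustration, not the original authors' script):

```python
import math

def balance(counts):
    """Shannon Diversity Index / Normalized Class Entropy of class sizes.

    `counts` lists the number of examples c_i per category.  Returns 1.0
    for perfectly balanced categories and 0.0 when a single category
    holds all the examples (0 * log 0 is treated as 0).
    """
    n = sum(counts)
    k = len(counts)
    if k < 2 or n == 0:
        return 0.0
    entropy = sum(-(c / n) * math.log(c / n) for c in counts if c > 0)
    return entropy / math.log(k)

print(balance([50, 50]))   # perfectly balanced two-class dataset -> 1.0
print(balance([100, 0]))   # one category holds everything -> 0.0
print(round(balance([90, 10]), 2))
```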
\revision{Furthermore, an inspection of the categories reveals some issues: in particular, they contain some categories that are not relevant to the intended use of the dataset, \textit{software application domain classification}.} From the observations above, it becomes clear that most existing classifications have fundamental issues that prevent them from being adopted by other researchers. While creating a new classification, one should not only be able to reproduce the steps performed by other researchers, but also document the aspects that might represent common antipatterns for others pursuing a similar goal. \revision{Next, in Section~\ref{sec:pandp}, we collect all the practical insights gained from analyzing the datasets and systematically present the issues that we found in the existing classifications.}
\section{Introduction}\label{intro} In recent decades, significant advances have been made in the study of the magnetic and superconducting properties of high-temperature superconductors (HTSC). The competition and mutual influence of magnetism and superconductivity are being intensively studied. For a number of compounds based on copper oxides and iron pnictides (and others), both the coexistence of superconductivity with commensurate or incommensurate magnetic orders and the separation into magnetic and superconducting phases are found~\cite {Sidis01, Yu07,Takeshita09,Lee99, Miller06,Kibune22}. For quasi-two-dimensional superconducting compounds, an instability of the system with respect to the formation of incommensurate and non-collinear spin density wave (SDW) ordering is established. The question of which type of the superconducting order parameter is realized in both copper oxides and iron pnictides is still open. Various symmetries are studied in the literature: singlet $s$-~\cite{Tsuei00, Mazin08} and $ d $-wave states~\cite{Huang90, Maier11}, intermediate states like $ s + $i$d $~\cite {Ruckenstein87, Kotliar88}, $ d + $i$d $~\cite {Laughlin98, Kreisel17}, etc. In the framework of the Hubbard model~\cite{Hubbard63}, which is traditionally applied to compounds of $ 3d $ metals, an interplay between magnetism and superconductivity is revealed. Using the random phase approximation, Scalapino et al. show that the SDW-type magnetic ordering, arising from the Fermi surface nesting, leads to conditions favorable for $ s $- and $ d $-wave superconductivity, and that the system is sensitive to the band structure and its filling~\cite {Scalapino86, Scalapino87}. The spin-fluctuation mechanism of the Cooper pairing is studied by the authors of Ref.~\cite {Romer15} in the weak-coupling limit of the Hubbard model at $ T = 0$.
The phase diagrams constructed in this work include a rich variety of singlet and triplet superconducting states, and the $ d $-wave symmetry of the superconducting order parameter remains the ground state near half-filling. The competition between antiferromagnetic (AF) order and $ d $-wave superconductivity is considered in a number of investigations within the Hartree-Fock approximation (HFA) for the Hubbard model. Both the microscopic coexistence and the macroscopic phase separation (PS) of the states are found~\cite {Ghosh99,Reiss07}. The authors of~\cite {Ghosh99} take into account the $ s $- and $ s + d $-wave pairings, emphasizing that in their model the magnetic state is more stable than the superconducting one due to the Fermi surface nesting. The coexistence and PS between AF and superconducting states are shown within the framework of the Monte Carlo method~\cite{Kobayashi10,Yokoyama13} and the functional renormalization group~\cite {Reiss07, Yamase16}, as well as between $ s $-wave superconductivity and AF (commensurate and incommensurate) ordering in the weak-coupling limit~\cite {Vorontsov09, Fernandes10}. In the slave boson approach (SBA), the extended $s$-wave pairing is shown to be stabilized for a limited doping range~\cite{Kopp88}. A significant difference between the results of the HFA and SBA approaches in the vicinity of half-filling for $U/W\approx 1$ ($W$ is the bandwidth) is found~\cite{Bulka96}. In the weak-coupling limit, the gap in the excitation spectrum at $T = 0$ decreases for SBA in comparison with the value obtained in HFA. The difference between the two approaches is maximal near half-filling and decreases near the band edges. Qualitative and quantitative corrections of SBA to HFA were shown: the energy gap agrees with the HFA only in the small-density limit~\cite{Bak98}. The results of the SBA method show that electronic correlations significantly change the properties of the superconducting phase~\cite{Bulka98}.
The appearance of superconductivity with extended $ s $- and $ d $-wave symmetries of the superconducting order parameter in heavy fermion systems is considered in Ref.~\cite{Sacramento10} as a function of Coulomb repulsion. The calculation results show that, if the attractive interaction is not too weak, superconductivity survives with an increase in $ U $ and prevails for all band fillings. Superconductivity is suppressed at large $ U $ only near half-filling (in particular, for $ d $-wave symmetry). Despite the large number of studies presented in the literature, all of them appear to be limited in one way or another: not all possible superconducting or non-collinear magnetic states are taken into account; the studies themselves are performed in selected regions of the model parameter space; the approximations used do not account for electronic correlations; the possibility of coexistence or PS between magnetism and superconductivity is ignored. A study that systematically considers the competition between superconductivity with a mixed order parameter symmetry and spiral magnetic states on a square lattice, and establishes the role of electronic correlations, has not been performed. The conditions for the formation of spiral magnetic states within the HFA and SBA approximations in the Hubbard model on square and cubic lattices are studied in~\cite {Igoshev09, Igoshev13}. The phase diagrams of the model in terms of the Hubbard repulsion $U$ and the band filling $ n $ include a variety of spiral magnetic phases, as well as PS between them. A comparison of HFA and SBA results shows that electron correlations significantly suppress the magnetic states. In this paper, we present the results of a study of the two-dimensional single-band extended Hubbard model within the HFA and SBA approaches.
\section{Formalism} We study the mutual influence of magnetism and superconductivity using the Hubbard model extended by a term describing the attraction of electrons located at nearest neighbor sites, $\hat{V}$: \begin{equation}\label{coex_ham} \begin{array}{l} \displaystyle \hat{H} =\hat{K} + \hat{U} - \hat{V},\\[15pt] \displaystyle \hat{K} = \sum_{j,j',\sigma} t_{j,j'}c^\dag_{j,\sigma}c_{j',\sigma}-\mu\sum_{j,\sigma}c^\dag_{j,\sigma}c_{j,\sigma},\\[15pt] \displaystyle \hat{U} = U\sum_{j}n_{j,\uparrow}n_{j,\downarrow}= U\sum_{j}c^\dag_{j,\uparrow}c_{j,\uparrow}c^\dag_{j,\downarrow}c_{j,\downarrow},\\[13 pt] \displaystyle \hat{V} = V_0 \sum_{j,j'}n_{j,\uparrow}n_{j',\downarrow}=V_0 \sum_{j,j'}c^\dag_{j,\uparrow}c^\dag_{j',\downarrow}c_{j',\downarrow}c_{j,\uparrow}, \end{array} \end{equation} where $t_{j, j'}$ is the matrix of electron transfer integrals (we take into account the nearest and next-nearest neighbor sites with integrals $-t$ and $t'$, respectively), $c^\dag_{j,\sigma}$ and $c_{j',\sigma}$ are the creation and annihilation operators of electrons at a site $j$ with spin $\sigma$, $U$ is the on-site Coulomb repulsion parameter, $V_0$ is the attraction parameter between nearest neighbor sites, which is responsible for the Cooper pairing, $\mu$ is the chemical potential, and $n_{j, \sigma} = c^\dag_{j,\sigma} c_{j,\sigma}$ is the operator of the electron number at site $j$ with spin $\sigma$. So far, no consensus has been reached on the nature of HTSC. We use the attraction term $\hat V$ to capture the features of the superconducting state; the mechanism of the electron attraction is not specified, but we assume it to be driven by either AF spin fluctuations~\cite{Izyumov99} or the resonating valence bond mechanism~\cite{Kotliar88}.
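As an illustration of the kinetic term (a minimal numerical sketch of our own, not part of the derivation, with an arbitrary lattice size and parameter values), one can build the real-space matrix $t_{j,j'}$ on a small periodic square lattice with nearest-neighbor amplitude $-t$ and next-nearest-neighbor amplitude $t'$ and verify that its eigenvalues reproduce the dispersion $\varepsilon^0_\mathbf{k}=-2t(\cos k_x+\cos k_y)+4t'\cos k_x\cos k_y$ used below:

```python
import numpy as np

def hopping_matrix(L, t=1.0, tp=0.2):
    """Real-space kinetic matrix t_{j,j'} on an L x L periodic square lattice:
    amplitude -t between nearest neighbors, +t' between next-nearest ones."""
    N = L * L
    H = np.zeros((N, N))
    idx = lambda x, y: (x % L) * L + (y % L)
    for x in range(L):
        for y in range(L):
            j = idx(x, y)
            for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:    # nearest neighbors
                H[j, idx(x + dx, y + dy)] += -t
            for dx, dy in [(1, 1), (1, -1), (-1, 1), (-1, -1)]:  # next-nearest
                H[j, idx(x + dx, y + dy)] += tp
    return H

def dispersion(L, t=1.0, tp=0.2):
    """epsilon^0_k = -2t(cos kx + cos ky) + 4t' cos kx cos ky on the allowed k-grid."""
    k = 2 * np.pi * np.arange(L) / L
    kx, ky = np.meshgrid(k, k, indexing="ij")
    return (-2 * t * (np.cos(kx) + np.cos(ky))
            + 4 * tp * np.cos(kx) * np.cos(ky)).ravel()

L = 6
ev = np.linalg.eigvalsh(hopping_matrix(L))
assert np.allclose(np.sort(ev), np.sort(dispersion(L)), atol=1e-10)
```

The agreement of the spectra confirms the sign conventions of the transfer integrals stated above.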
The Hamiltonian with the effective attraction between different site fermions was used to describe the $d$-wave pairing~\cite{Mayr05} and allows one to yield the superconducting state of $s$-wave, $d$-wave and intermediate $s+$i$d$-wave type~\cite{Timirgazin19,Micnas02}. Assuming the inter-site $\hat{V}$ interaction to be weaker than the on-site $\hat{U}$, we make the mean field approximation for the first one: \begin{equation} \displaystyle {\hat V} = \dfrac{1}{2}\sum_{j,j'}\left( \Delta_0\exp{(i\phi_{j,j'})}c^\dag_{j,\uparrow}c^\dag_{j', \downarrow}+h.c. \right)-\dfrac{N|\Delta_0|^2}{V_0}, \end{equation} where $N$ is the number of lattice sites. Here the order parameter is introduced: $V_0\langle c^\dag_{j,\uparrow}c^\dag_{j',\downarrow} \rangle \equiv \Delta_0{\exp(-i\phi_{j,j'})}/2$, where the phase shift $\phi_{j,j'}$ is homogeneous and depends on the mutual arrangement of sites $j$ and $j'$. In order to treat spiral magnetic order, the local spin rotation by the angle $\mathbf{QR}_j$ is applied to the Hamiltonian, where $\mathbf{Q}$ is a spiral wave vector. This maps the system onto an effective ferromagnetic state with non-diagonal hopping and superconducting terms: $t_{j,j'}\rightarrow t^{\sigma,\sigma'}_{j,j'}$, $\Delta_0\rightarrow \Delta^{\sigma,\sigma'}_{j,j'}$~\cite{Igoshev15}. In the Kotliar and Ruckenstein formulation of SBA~\cite{Kotliar86}, bosonic operators $e_j$, $p_{j,\sigma} $ and $d_{j}$ are introduced, corresponding to empty, singly and doubly occupied sites $j$, and constraints are imposed that exclude nonphysical states: \begin{equation} \begin{array}{l} \displaystyle e^\dag_j e_j+\sum_{\sigma}p^\dag_{j,\sigma}p_{j,\sigma}+d^\dag_j d_j=1,\\ \displaystyle p^\dag_{j,\sigma}p_{j,\sigma}+d^\dag_j d_j=c^\dag_{j,\sigma}c_{j,\sigma}.\\ \end{array} \end{equation} The replacement $c_{j,\sigma} \rightarrow z_{j,\sigma} c_ {j,\sigma}$ ensures the coherence of bosonic and fermionic fields.
In the introduced parametrization, the Hamiltonian takes the diagonal form with respect to the bosonic operators: \begin{equation}\label{ham_sba_node} \begin{array}{c} \displaystyle {\cal H} = \sum_{j,j',\sigma,\sigma'}t^{\sigma,\sigma'}_{j,j'} c^\dag_{j,\sigma}c_{j',\sigma'} z^\dag_{j,\sigma}z_{j',\sigma'}+U\sum_j d^\dag_j d_j + \\[13pt] \displaystyle + \dfrac{1}{2}\sum_{j,j',\sigma,\sigma'} \Delta_{j,j'}^{\sigma,\sigma'}c^\dag_{j,\sigma}c^\dag_{j',\sigma'}z^\dag_{j,\sigma}z^\dag_{j',\sigma'}+h.c.-\dfrac{N|\Delta_0|^2} {V_0}. \end{array} \end{equation} Using the static and saddle point approximations, the thermodynamic potential of the grand canonical ensemble of the system can be written as~\cite{Igoshev15}: \begin{equation}\label{ensemble} \begin{array}{c} \displaystyle \Omega=\eta\left(e^2+p_\uparrow^2+p_\downarrow^2+d^2-1\right)+Ud^2-\\[13pt] \displaystyle - \sum_\sigma \lambda_\sigma\left(p_\sigma^2+d^2\right)+\dfrac{\Delta_0^2}{V_0}+\Omega_f , \end{array} \end{equation} where $\lambda_\sigma$ and $\eta$ are Lagrange multipliers. 
The fermionic part $\Omega_f$ of the potential (\ref{ensemble}) after the Fourier transform can be represented in the matrix form: \begin{equation} \Omega_f = \dfrac{1}{2}\sum_\mathbf{k} {\hat \gamma}^\dag_\mathbf{k}{\hat{\cal T}}_\mathbf{k}^f{\hat \gamma}^{}_\mathbf{k}, \end{equation} where ${\hat \gamma^\dag_\mathbf{k}}=\begin{pmatrix} c^\dag_{\mathbf{k-Q}/2,\uparrow} & c_{\mathbf{-k+Q}/2,\downarrow} & c^\dag_{\mathbf{k+Q}/2,\downarrow} & c_{\mathbf{-k-Q}/2,\uparrow} \end{pmatrix}$, and $\hat{\cal T}_\mathbf{k}^f$ is a $4\times 4$ matrix: \begin{equation}\label{matrix} {\hat {\cal T}}_\mathbf{k}^f=\begin{pmatrix} z^2_\uparrow \varepsilon_{\mathbf{k},+}-\mu+\lambda_\uparrow & -z_\uparrow z_\downarrow \Delta_\mathbf{k,+} & z_\uparrow z_\downarrow \varepsilon_\mathbf{k,-} & z^2_\uparrow \Delta_\mathbf{k,-}\\ -z_\uparrow z_\downarrow \Delta_{\mathbf{k},+}^* & -\left(z^2_\downarrow \varepsilon_\mathbf{k,+}-\mu+\lambda_\downarrow\right) & -z_\downarrow^2 \Delta_{\mathbf{k},-}^* & z_\uparrow z_\downarrow \varepsilon_{\mathbf{k},-} \\ z_\uparrow z_\downarrow \varepsilon_\mathbf{k,-} & -z^2_\downarrow \Delta_\mathbf{k,-} & z^2_\downarrow \varepsilon_\mathbf{k,+} -\mu+\lambda_\downarrow & z_\uparrow z_\downarrow \Delta_\mathbf{k,+} \\ z^2_\uparrow \Delta_\mathbf{k,-}^* & z_\uparrow z_\downarrow \varepsilon_\mathbf{k,-} & z_\uparrow z_\downarrow \Delta_\mathbf{k,+}^* & -\left( z^2_\uparrow \varepsilon_\mathbf{k,+}-\mu+\lambda_\uparrow \right) \\ \end{pmatrix} \end{equation} Here \begin{equation} \begin{array}{l} \varepsilon_{\mathbf{k},\pm}=\left(\varepsilon^0_{\mathbf{k}+\mathbf{Q}/2}\pm \varepsilon^0_{\mathbf{k}-\mathbf{Q}/2}\right)/2,\\[13 pt] \varepsilon^0_\mathbf{k}=-2t(\cos{k_x}+\cos{k_y})+4t'\cos{k_x}\cos{k_y}, \\[13 pt] \Delta_{\mathbf{k},\pm}=\left(\Delta_{\mathbf{k}+\mathbf{Q}/2}\pm \Delta_{\mathbf{k}-\mathbf{Q}/2}\right)/2,\\ [13 pt] \Delta_\mathbf{k}=\frac{1}{2}\Delta_0\sum_{j,j'} \exp(i\phi_{j,j'})\exp\left( i\mathbf{k}\left( \mathbf{R}_j-\mathbf{R}_{j'}
\right) \right), \end{array} \end{equation} $\varepsilon_\mathbf{k}^0$ being the square lattice dispersion law. Choosing the phase shift in the form \begin{equation} \phi_{j,j'} = \begin{cases} \displaystyle \;\;\; \pi\alpha,\; \mathbf{R}_j-\mathbf{R}_{j'} = (\pm 1,0),\\ \displaystyle -\pi\alpha,\; \mathbf{R}_j-\mathbf{R}_{j'} = (0,\pm 1), \end{cases} \end{equation} we obtain an intermediate $s+$i$d$-wave superconducting order parameter: \begin{equation}\label{result} \begin{array}{l} \displaystyle \Delta_\mathbf{k} = \Delta^s_\mathbf{k}\cos{\pi\alpha}+i\Delta^d_\mathbf{k}\sin{\pi\alpha}, \end{array} \end{equation} where \begin{equation} \Delta_\mathbf{k}^s=\Delta_0\left(\cos{k_x}+\cos{k_y}\right) \end{equation} is the $s$-wave (extended $s$, or $s_{x^2+y^2}$) component and \begin{equation} \Delta_\mathbf{k}^d=\Delta_0\left(\cos{k_x}-\cos{k_y}\right) \end{equation} is the $d$-wave ($d_{x^2-y^2}$) one. Varying $\alpha$ from $0$ to $1/2$ allows for a continuous transition from the $s$- to the $d$-wave pairing symmetry. The quantum mechanical averaging of the fermionic part $\Omega_f$ of the thermodynamic potential (\ref{ensemble}) over the ground state of the Hamiltonian leads to the following result: \begin{equation} \langle \Omega_f \rangle = \dfrac{1}{2}\sum_\mathbf{k}\langle {\hat \gamma}^\dag_\mathbf{k}{\hat{\cal T}}_\mathbf{k}^f{\hat \gamma}_\mathbf{k} \rangle=\dfrac{1}{2}\sum_{\mathbf{k}}\left(E^{(1)}_{\mathbf{k}}+E^{(2)}_{\mathbf{k}}\right), \end{equation} where $E^{(1)}_{\mathbf{k}}$ and $E^{(2)}_{\mathbf{k}}$ are the negative spectrum branches, which must be determined numerically at each $\mathbf{k}$-point.
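As a quick illustration of the gap function (\ref{result}) (a sketch of our own, not the production code behind the phase diagrams; $\Delta_0$ and the $\mathbf{k}$-points are arbitrary), its limiting cases can be checked directly: $\alpha=0$ gives the pure extended $s$-wave gap, $\alpha=1/2$ the pure $d$-wave gap, which vanishes on the zone diagonal $k_x=k_y$:

```python
import numpy as np

def gap(kx, ky, alpha, Delta0=1.0):
    """Delta_k = Delta^s_k cos(pi alpha) + i Delta^d_k sin(pi alpha)."""
    Ds = Delta0 * (np.cos(kx) + np.cos(ky))   # extended s-wave component
    Dd = Delta0 * (np.cos(kx) - np.cos(ky))   # d_{x^2-y^2} component
    return Ds * np.cos(np.pi * alpha) + 1j * Dd * np.sin(np.pi * alpha)

kx, ky = 0.3, 1.1
# alpha = 0: pure extended s-wave, real gap
assert np.isclose(gap(kx, ky, 0.0), np.cos(kx) + np.cos(ky))
# alpha = 1/2: pure d-wave, purely imaginary gap
assert np.isclose(gap(kx, ky, 0.5), 1j * (np.cos(kx) - np.cos(ky)))
# the d-wave gap has nodes on the zone diagonal kx = ky
assert np.isclose(gap(0.7, 0.7, 0.5), 0.0)
```

Intermediate values of $\alpha$ interpolate continuously between the two symmetries, in line with the text above.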
Note that in HFA, where only the fermionic part is kept, the spectrum can be written explicitly: \begin{equation} \begin{array}{c} \displaystyle E_{\mathbf{k}}=\pm\sqrt{(Um/2)^2+ \varepsilon_\mathbf{k,+}^2+\varepsilon_\mathbf{k,-}^2+\Delta_\mathbf{k,+}^2+\Delta_\mathbf{k,-}^2\pm D_\mathbf{k} },\\[13 pt] D_\mathbf{k}=2\sqrt{(\varepsilon_\mathbf{k,+}\varepsilon_\mathbf{k,-}+\Delta_\mathbf{k,+}\Delta_\mathbf{k,-})^2+(Um/2)^2(\varepsilon_\mathbf{k,+}^2+\Delta_\mathbf{k,+}^2)}. \end{array} \end{equation} The average values $n$, $m$, and $\Delta_0$ are determined from the numerically calculated eigenvectors. Minimizing the thermodynamic potential (\ref{ensemble}) with respect to all the magnetic ($\mathbf{Q}$) and superconducting ($\alpha$) states at fixed parameters $U$, $V_0$, $t'$, $n$, one can construct the ground state phase diagrams of the system. \section{Results} \begin{figure} {a)\includegraphics[width=\linewidth]{1.eps}} {b)\includegraphics[width=\linewidth]{2.eps}} \caption{Phase diagrams for $U = 4t$, $t' = 0.2t$: a) HFA, b) SBA. Thick blue lines --- second order phase transitions, thick red lines --- first order phase transitions (narrow PS areas), thin red lines --- boundaries of the PS areas (shaded). "SC" --- superconducting state, "$+$" --- coexistence of the magnetic and superconducting orders, $(Q_1,Q_2)$ is the wave vector of the magnetic spiral. Thick green line $n=1$ --- AF insulating state.} \label{coex_4} \end{figure} \begin{figure}[h!] {a)\includegraphics[width=\linewidth]{3.eps}} {b)\includegraphics[width=\linewidth]{4.eps}} \caption{Phase diagrams for $U = 6t$, $t'/t = 0.2$.
Notation is the same as in Fig.~\ref{coex_4}, but in panel a) the index "SC" is omitted for all magnetic areas; the coexistence of magnetism and superconductivity is still implied.} \label{coex_6} \end{figure} The results of our study consist of two parts: the ground-state phase diagrams of the model, constructed in the variables of the superconducting attraction $V_0/t$ and the electron concentration $n$ for fixed values of the Coulomb repulsion $U = 4t$ and $U=6t$; and the dependences of the magnetic moment $m$ and the superconducting gap $\Delta_0$ on the electron density $n$ for $V_0=1.5t$ and $U=4t$. The chosen values of $U$ and $t'$ correspond to HTSC based on copper oxides~\cite{Kuchinskii12,Hybertsen92}. \subsection*{1. $(V_0/t,n)$ phase diagrams} To construct the phase diagrams, the ground state of the system is determined on a grid of parameters $\mu$ and $V_0/t$ at fixed Coulomb repulsion $U$. For each parameter set $(\mu,U,V_0,t'/t)$, the energies of all possible magnetic and superconducting states are calculated and compared, and the state with the lowest energy is taken as the ground state. We detect phase transitions of the first and second order, as well as PS areas. The boundaries of a PS area are determined by two values of the electron density, $n_1$ and $n_2$, corresponding to the same value of the Fermi level $\mu$. If the electron concentration falls within this range, two spatially separated phases are realized simultaneously in the system. We find regions of the pure SC state (magnetic order is absent), of coexistence, and of the pure AF insulating state (superconductivity is absent). The magnetic order has a spin-spiral structure with wave vector $(Q_1,Q_2)$. The phase diagrams for $U = 4t$ and $U = 6t$ within HFA and SBA are shown in Figs.~\ref{coex_4} and \ref{coex_6}, respectively.
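The explicit HFA spectrum quoted above admits a simple consistency check (our own illustrative sketch, with arbitrary numbers): in the paramagnetic limit $m=0$, $\mathbf{Q}=0$ one has $\varepsilon_{\mathbf{k},-}=\Delta_{\mathbf{k},-}=0$ and hence $D_\mathbf{k}=0$, so the four branches must collapse to the doubly degenerate BCS form $\pm\sqrt{\varepsilon_\mathbf{k}^2+|\Delta_\mathbf{k}|^2}$:

```python
import numpy as np

def hfa_spectrum(eps_p, eps_m, D_p, D_m, Um):
    """The four HFA branches
    E_k = +/- sqrt((Um/2)^2 + eps_+^2 + eps_-^2 + Delta_+^2 + Delta_-^2 +/- D_k)."""
    Dk = 2.0 * np.sqrt((eps_p * eps_m + D_p * D_m) ** 2
                       + (Um / 2.0) ** 2 * (eps_p ** 2 + D_p ** 2))
    r = (Um / 2.0) ** 2 + eps_p ** 2 + eps_m ** 2 + D_p ** 2 + D_m ** 2
    return np.array([np.sqrt(r + Dk), np.sqrt(r - Dk),
                     -np.sqrt(r - Dk), -np.sqrt(r + Dk)])

# Paramagnetic Q = 0 limit: m = 0, eps_- = Delta_- = 0, D_k = 0.
eps_k, Delta_k = -0.8, 0.35
E = hfa_spectrum(eps_k, 0.0, Delta_k, 0.0, Um=0.0)
E_bcs = np.sqrt(eps_k ** 2 + Delta_k ** 2)
assert np.allclose(np.sort(E), [-E_bcs, -E_bcs, E_bcs, E_bcs])
```

The two negative branches of this spectrum are the ones entering the averaged potential $\langle\Omega_f\rangle$.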
Superconductivity is realized in the entire range of parameters under consideration with the single exception of the half-filling line, where the Fermi level lies in the energy gap and the system becomes an AF insulator. It was found earlier that $s$-wave superconductivity is realized at low concentrations of charge carriers and $d$-wave superconductivity appears when approaching half-filling, the transition between the $s$-wave and $d$-wave states occurring through an intermediate $s+$i$d$-wave region~\cite{Timirgazin19,Micnas02}. The "{SC}" area in the phase diagrams~\ref{coex_4}, \ref{coex_6} contains all of the above mentioned superconducting states, but in the region where magnetic order exists only the $d$-wave state is realized. Increasing $U/t$ from 4 to 6 within HFA initiates the appearance of the $(0,Q)$ and $(Q,Q)$ spin-spiral phases, in agreement with the results of \cite{Igoshev15}. Accounting for electron correlations in the SBA approach leads to the suppression of magnetic ordering, a narrowing of the PS areas, and an expansion of the superconducting region (Fig. \ref{coex_4},b)). It should be noted that magnetic order vanishes with increasing $V_0/t$, because the superconducting gap becomes greater than the AF one even for $n=1$. In general, comparison of the diagrams for the HFA and SBA methods allows one to conclude that taking electronic correlations into account suppresses the range and variety of spiral magnetic states and expands the superconductivity region. \begin{figure}[h] {a)\includegraphics[width=\linewidth]{5.eps}} {b)\includegraphics[width=\linewidth]{6.eps}} \caption{Dependence of the magnetic $m$ (blue solid line) and superconducting $\Delta_0$ (red solid line) order parameters on the electron concentration $n$ for $U = 4t$, $V_0 = 1.5t$ and $t' = 0.2t$.
The notations $s$ (yellow), $d$ (green), and $s+$i$d$ (blue) correspond to the symmetry of the superconducting order parameter in a given region. The shaded area denotes the PS region (PS). The notations AF and $(Q, \pi)$ (spiral) correspond to the magnetic order. Vertical thin black lines are the boundaries of the phase transitions. Thin dashed lines show the dependences of the order parameters in purely magnetic and purely superconducting systems ($\Delta_0=0$ and $m=0$, respectively). The symbol "$+$" means the coexistence of the orders. } \label{coex_dep} \end{figure} \subsection*{2. $n$-dependence of the order parameters} The behavior of the magnetic moment $m$ and the amplitude of the superconducting gap $\Delta_0$ is illustrated in Figs.~\ref{coex_dep}a,b, calculated for $U = 4t$, $V_0 = 1.5t$ and $t' = 0.2t$. At low electron concentrations the ground state is the $s$-wave superconductor. The order parameter behaves non-monotonically, with a maximum at $n \approx 0.16$ for both HFA and SBA. At $n \approx 0.4$, a transition to the $d$-wave superconductor occurs through an intermediate state with the $s+$i$d$ symmetry of the order parameter. At $n \approx 0.44$ for HFA and $n\approx 0.85$ for SBA, a local magnetic moment appears abruptly with an amplitude of $m \approx 0.1$ (this is a first-order phase transition with a negligibly narrow phase separation area, depicted by a single thick red line). From this point on, the superconducting and magnetic orders coexist. In the region of coexistence, the magnetic moment and the amplitude of the superconducting order parameter are smaller than in the pure magnetic and pure superconducting states, whose order parameters are shown in the figure by dashed lines. Thus, superconductivity and spiral magnetization mutually suppress each other.
We see the first order phase transition to the insulating AF state, accompanied by a region of PS. In the separation region, a combination of different states is realized: part of the system is an insulating AF, while the other part has a spiral magnetic order and is, at the same time, a superconductor. The transition from the superconducting to the dielectric state probably has a percolation nature: the conductivity disappears at the concentration at which the spiral magnetic clusters cease to be interconnected. In simple models such a transition occurs at the point at which the fraction of dielectric clusters equals $1/3$, which corresponds to the electron concentration $\approx 0.95$~\cite{Efros}. The HFA and SBA methods differ here: the HFA diagram has a narrow $d$-wave area but wide coexistence and PS regions, whereas the SBA diagram has a wide $d$-wave area and narrower coexistence and PS regions. The electron correlations provide more favorable conditions for superconductivity in its competition with the spiral magnetism, and they affect the $d$-wave superconductivity more strongly than the $s$-wave and $s+$i$d$-wave ones. The amplitude of the superconducting gap $\Delta_0$ behaves non-monotonically in the coexistence region: it grows up to $\Delta^{max}_0$ and then decreases. Thus, within the framework of our model, it is possible to reproduce the dome-shaped form of the $n$-dependence of the superconducting gap amplitude, which is characteristic of HTSC compounds~\cite{Armitage10, Hosono17}. Traditionally, it is believed that the dome shape is associated with the non-monotonic behavior of the pairing interaction strength, which is determined by the nature of the Cooper pairing, for example, an unconventional mechanism such as spin fluctuations~\cite{Izyumov99} or others~\cite{Setty22}.
Since we do not specify the nature of the attraction, and its strength is considered independent of the concentration, we show that the mutual influence of the superconducting and magnetic orders can make a sizable contribution to the formation of the dome-shaped dependence $\Delta_0(n)$. \section{Discussion and conclusions} We investigate the conditions for the coexistence of superconductivity with the intermediate $s+$i$d$ symmetry and spiral magnetic order on a square lattice. The possibility of coexistence and PS between SC and magnetism is studied by the Hartree--Fock and slave boson approaches for the $t$--$U$--$V$ model. The HFA and SBA results are qualitatively similar near $n=1$, but when the electron density is far from half-filling the correlation effects lead to a strong suppression of the variety of magnetic states and of the width of the magnetic region; hence, superconductivity becomes more favorable. It has been shown in \cite{Bulka96} that in the weak-coupling limit the gap in the excitation spectrum obtained in SBA is reduced in comparison to that obtained in HFA. In our investigation $\Delta_0^{HFA}$ is slightly greater than $\Delta_0^{SBA}$ in the pure superconducting regime for the $d$-wave state, and $\Delta_0^{HFA}\approx \Delta_0^{SBA}$ for the $s$-wave and $s+$i$d$-wave states, but in the coexistence regime the reverse situation is observed: $\Delta_0^{SBA}>\Delta_0^{HFA}$. In the coexistence regime the magnetic moment and the $d$-wave superconducting amplitude mutually suppress each other, in agreement with the renormalization group + mean field analysis of the Hubbard model~\cite{Reiss07,Yamase16}, although in our study the coexistence occurs with the $(Q,\pi)$ spiral magnetic state rather than with the AF one. We have analyzed the influence of correlation effects on the stability of the spiral magnetic and superconducting solutions by comparing HFA and SBA, which extends the results obtained in~\cite{Bulka96,Bulka98}.
The Hubbard model phase diagrams accounting for $d$-wave superconductivity and the commensurate AF magnetic state were constructed in \cite{Reiss07,Kobayashi10}. These diagrams are similar to ours and in good agreement with them. At the same time, our study takes into account the full set of possible states: $s$-wave, $d$-wave and $s+$i$d$-wave superconductivity, spiral magnetic order, and the phase transitions between them. \section{Declaration of Competing Interest} The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. \section{Acknowledgements} The work was carried out within the framework of the state assignment of the Ministry of Science and Higher Education of the Russian Federation (topic \textnumero 121030100005-1). The authors are grateful to Dr. P.A. Igoshev for his contribution to the program code.
\section{Introduction} The population average causal effect is by and large the most common form of total effect evaluated in observational data due to the natural connection to scientific queries arising from randomized studies. However, alternate forms of total effect may be of greater interest in observational studies with a harmful exposure, such that one may not want to conceive of a hypothetical intervention that forces a person to be exposed. \cite{hubbard2008population} define the population intervention effect (PIE) of an exposure as the contrast relating the mean of an outcome in the population to that in the same observed population had no one been exposed. Interestingly, the PIE is closely related to the effect of treatment on the treated (ETT) and the attributable fraction (AF), which have also been touted as causal quantities to assess the public health impact of a harmful exposure \citep{geneletti2007defining,hahn1998role, sjolander2010doubly, greenland1993maximum}. The ETT compares the outcome among those exposed to the potential outcome had they not been exposed -- for binary treatment, the PIE is equal to the ETT scaled by the prevalence of treated persons. The AF is the proportion of potential outcome events that would be eliminated from the observed population had contrary to fact no one been exposed -- for binary outcome, the PIE is equal to the AF scaled by the prevalence of the outcome. As such, the PIE is a scale-dependent version of these quantities and may be of greater interest when evaluating the potential impact of programs that eliminate a harmful exposure from a population. Recent causal mediation methods have been developed to decompose such total causal effects into direct and indirect pathways through a mediating variable \citep{pearl2001direct,vansteelandt2012natural, sjolander7mediation}.
Although the natural (pure) direct and indirect effects of the average causal effect (ACE) are the most common form of mediated causal effects, researchers have argued that the direct and indirect components of the ETT and AF are equally of scientific interest and may in fact require weaker conditions for identification \citep{vansteelandt2012natural}. Namely, identification of natural direct and indirect effects requires the stringent assumption that there is no unmeasured confounding of the exposure-outcome, exposure-mediator, and mediator-outcome associations and no exposure induced confounding of the mediator-outcome association, even by measured factors \citep{pearl2001direct, avin2005identifiability}. \cite{vansteelandt2012natural} propose a particular form of direct and indirect effects of the ETT which they show remain identified in the presence of exposure-mediator unmeasured confounding. This is an important result for settings where a randomized experiment is impractical or unethical such that observational data must be used and unmeasured confounding effects of the exposure cannot be ruled out. Unfortunately, \cite{vansteelandt2012natural} are unable to identify the indirect effect whenever the exposure-outcome association is confounded. In this paper, we propose an alternative form of indirect effect and describe sufficient conditions for nonparametric identification in the presence of unmeasured confounding of the exposure-outcome association, therefore complementing the results of \cite{vansteelandt2012natural}. Specifically, we propose a decomposition of the PIE into the population intervention direct effect (PIDE) and population intervention indirect effect (PIIE). The PIIE is interpreted as the contrast between the observed outcome mean for the population and the population outcome mean had contrary to fact the mediator taken the value it would have in the absence of exposure. 
Thus, the PIIE is relative to the current distribution of an exposure and does not require conceiving of an intervention that would force an individual to take a harmful level of exposure in the case of binary exposure \citep{hubbard2008population}. Our approach leads to an alternative effect decomposition of the ETT and AF of \cite{vansteelandt2012natural} and \cite{sjolander7mediation} (up to a scaling factor). Notably, we establish that the PIIE can be identified even when there is an unmeasured common cause of exposure and outcome variables, provided it is not also a cause of the mediator. This estimand may be of interest in a variety of settings where unmeasured confounding of the exposure-outcome relation cannot be ruled out with certainty. For example, in recommender systems, the assignment mechanism for the mediator (e.g. recommendation) is typically known or under control of the researcher, such that unmeasured confounding of the exposure-mediator and mediator-outcome relations are not of concern. The application considered in this paper investigates the indirect effect of a woman's pregnancy risk on monetary savings for delivery mediated by the amount she is recommended to save by a community health worker. Note that the PIDE does not share this identification result and is identified under the same conditions as the natural direct effect. Beyond its inherent scientific interest as quantifying the mediated component of the PIE, the PIIE may also be viewed as an approach to partially identify a total effect of an exposure on an outcome subject to unmeasured confounding in settings where one might be primarily interested in such a total effect. 
Interestingly, the identifying formula we obtain for the PIIE matches Judea Pearl's celebrated front-door formula, a well-known result for identification of the total effect in the presence of unmeasured confounding given that (1) a mediating variable(s) intercepts all directed paths from exposure to outcome so that the indirect effect equals the total effect and (2) there is no unmeasured confounding of the mediator-outcome or exposure-mediator associations \citep{pearl2009causality}. In the setting where an investigator believes they have captured one or more mediating variables that satisfy the front-door criterion, they can use our proposed methodology to estimate either the PIE or the average causal effect. Notably, identification of indirect effects with Pearl's front-door criterion requires a key assumption of no direct effect of the exposure on the outcome not through the mediator in view. In contrast, our generalized front-door criterion allows for the presence of such direct effects. Thus, even if an investigator cannot satisfy criterion (1), they may still be able to capture the un-confounded component of the PIE through one or more mediating variables. Compared to other methods that relax the assumption of no unmeasured confounding to identify causal effects, our approach applies more generally as it does not require a valid instrumental variable, measuring one or more negative control variables, or parametric assumptions for identification \citep{angrist1996identification,imbens2008regression,campbell2015experimental,lipsitch2010negative,miao2017invited,vanderweele2011bias}. We emphasize that while the front-door criterion has long been established, the proposed generalized front-door criterion is entirely new to the literature. In addition to new identification results, we also develop both parametric and semiparametric theory for inference about the PIIE.
To the best of our knowledge, the proposed methodology also delivers the first doubly robust estimator of Pearl's front-door formula in the literature. The rest of the paper is organized as follows. In Section 2, we discuss nonparametric identification of the PIIE and PIDE. In Section 3, we derive both parametric and semiparametric estimators, including a doubly robust semiparametric locally efficient estimator for the PIIE and PIDE. In Section 4, the performance of these estimators is evaluated in a range of settings in extensive simulation studies. In Section 5, the proposed methods are used to measure the effectiveness of monetary savings recommendations for delivery among pregnant women enrolled in a maternal health program in Zanzibar, Tanzania. \section{Nonparametric Identification} In the following, let $Z(a)$ denote the counterfactual mediator variable had the exposure taken value $a$ and $Y(a) = Y(a, Z(a))$ denote the counterfactual outcome had exposure, possibly contrary to fact, taken value $a$. We will also consider the counterfactual outcome $Y(A,Z(a^*)) = Y(Z(a^*))$ had exposure taken its natural level and the mediator variable taken the value it would have under $a^*$. Note that when $a^* = 0$, $Y(Z(0))$ is the counterfactual outcome had exposure taken its natural level and the mediator variable taken the value it would have under no exposure. Additionally, let $C$ be a set of observed pre-exposure covariates known to confound the $A$-$Z$, $A$-$Y$ and $Z$-$Y$ associations. Throughout, $Z$ can be vector-valued. We first consider the standard decomposition of the average causal effect (ACE).
For exposure levels $a$ and $a^*$, \begin{align*} ACE(a,a^*) & = E[Y(a,Z(a)) - Y(a^*,Z(a^*))] \\ & = \underbrace{E[Y(a,Z(a)) - Y(a, Z(a^*))]}_\text{Natural Indirect Effect} + \underbrace{E[Y(a, Z(a^*)) - Y(a^*,Z(a^*))]}_\text{Natural Direct Effect} \end{align*} The natural indirect effect is the difference between the potential outcome under exposure value $a$ and the potential outcome had exposure taken value $a$ but the mediator variable had taken the value it would have under $a^*$; $$NIE(a,a^*) = E[Y(a, Z(a)) - Y(a,Z(a^*))]$$ The natural direct effect is therefore given by $ACE(a,a^*) - NIE(a,a^*)$. The NIE and NDE are well-known to be identified under the following conditions \citep{pearl2012causal,imai2010general}: \begin{align*} \textrm{M1.} \ & \textrm{Consistency assumptions: } \textrm{(1) If $A=a$, then $Z(a) =Z$ w.p.1}, \\ & \hspace{4.8cm} \textrm{(2) If $A=a$, then $Y(a) =Y$ w.p.1}, \\ & \hspace{4.8cm} \textrm{(3) If $A=a$ and $Z=z$, then $Y(a,z) =Y$ w.p.1} \\ \textrm{M2.} \ & Z(a^*) \perp A \mid C=c \ \ \ \forall \ a^*, c \\ \textrm{M3.} \ & Y(a,z) \perp Z(a^*) \mid A=a,C=c \ \ \ \forall \ z,a,a^*,c \\ \textrm{M4.} \ & Y(a,z) \perp A \mid C=c \ \ \ \forall \ z,a, c \end{align*} M1 states the observed outcome is equal to the counterfactual outcome corresponding to the observed treatment. The remaining assumptions essentially state that there is no unmeasured confounding of the exposure and the mediator variable (M2), the mediator variable and the outcome (M3), and the exposure and the outcome (M4). In addition, M3 rules out exposure-induced mediator-outcome confounding. These assumptions could equivalently be formulated under a Nonparametric Structural Equation Model with Independent Errors (NPSEM-IE) interpretation of the diagram in Figure 1a \citep{pearl2009causality}. 
In addition, define the following positivity assumptions, \begin{align*} \textrm{P1.} & \textrm{ There exists } m_1 >0 \textrm{ such that } f(Z | A, C) > m_1 \textrm{ almost surely} \\ \textrm{P2.} & \textrm{ There exists } m_2 >0 \textrm{ such that } f(A | C) > m_2 \textrm{ almost surely} \end{align*} \noindent where $f(Z | A, C)$ and $f(A | C)$ are the probability density functions for $Z|A,C$ and $A|C$, respectively. Under M1-4 and the positivity conditions P1-2, \begin{align} E[Y(a,Z(a^*))] & = \sum_{c,z} E(Y \mid A=a, Z=z, C=c) Pr(Z = z \mid A=a^*, C=c) Pr(C=c) \label{mediation2} \end{align} The NIE and NDE fail to be nonparametrically identified if any of assumptions M1-4 fails to hold, unless additional assumptions are imposed \citep{imai2010identification,shpitser2013counterfactual}. We will now formally define the decomposition of the population intervention effect under exposure value $a^*$, \begin{align} PIE(a^*) & = E[Y(A,Z(A)) - Y(a^*)]\notag \\ & = \underbrace{ E[Y(A,Z(A)) - Y(A,Z(a^*))] }_\text{Population Intervention Indirect Effect} + \underbrace{ E[Y(A,Z(a^*)) - Y(a^*,Z(a^*))] }_\text{Population Intervention Direct Effect} \notag \end{align} The population intervention indirect effect (PIIE) is a novel measure of indirect effect corresponding to the effect of an intervention which changes the mediator from its natural value (i.e. its observed value) to the value it would have had under exposure value $a^*$, \begin{align} PIIE(a^*) = E[Y(A,Z(A)) - Y(A,Z(a^*))] \label{piie} \end{align} The PIIE is indeed an indirect effect as it would only be non-null if changing the exposure from its natural value to $a^*$ results in a change in the value of the mediator which in turn results in a change in the value of the outcome. That is, the PIIE captures an effect propagated along the $A \to Z \to Y$ pathway only, and would be null if $A$ has no effect on $Z$ or $Z$ has no effect on $Y$ for all persons in the population.
Compared to the NIE, the PIIE requires intervention on the exposure level only for the mediator in its second term, and does not require intervention on the exposure level of the potential outcomes for $Y$. Similarly, the Population Intervention Direct Effect (PIDE) is a novel measure of direct effect corresponding to the effect of an intervention which changes the exposure from its natural level to the value under intervention $a^*$, while keeping the mediator variable at the value it would have under intervention $a^*$. This is indeed a direct effect as it would only be non-null if changing the exposure from its natural value to $a^*$, while preventing the mediator variable from changing, results in a change in the value of the outcome. That is, the PIDE captures an effect along the $A \to Y$ pathway only. The first term of the PIIE, $E(Y)$, is nonparametrically identified; however, the second term requires identification conditions. Identification conditions for the PIIE are less stringent than those for the NIE, as seen by comparing Figures 1a and 1c under a NPSEM-IE interpretation of the diagrams \citep{pearl2009causality}. In fact, the following result states that assumption M4 is no longer needed. \begin{lemma}Under assumptions M1-3 and positivity conditions P1-2, the population intervention indirect effect is given by, \begin{align*} PIIE(a^*) = E[Y] - E[Y(Z(a^*))] = E[Y] - \Psi \end{align*} where \vspace{-.5cm} \begin{align} \Psi & = \sum_{z,c} Pr(Z = z \mid A=a^*, C=c) \notag \\ & \hspace{2.5cm} \times \sum_{a} E(Y \mid A=a, Z=z, C=c) Pr(A=a \mid C=c) Pr(C=c) \label{psi} \end{align} \end{lemma} \noindent Further, equation (\ref{psi}) implies nonparametric identification in the sense that conditions M1-3 and P1-2 do not restrict the observed data distribution. The proof for this lemma can be found in Appendix section A1.1. Interestingly, $\Psi$ is closely connected to Judea Pearl's front-door criterion. 
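To make the identifying formula concrete, the functional in Lemma 1 can be computed by direct empirical plug-in when $A$, $Z$, and $C$ are all discrete. The sketch below is purely illustrative; the function name and interface are ours and not part of the accompanying software.

```python
import numpy as np

def psi_plugin(A, Z, C, Y, a_star):
    """Empirical plug-in for the generalized front-door functional:
    sum_{z,c} P(Z=z|A=a*,C=c) sum_a E[Y|a,z,c] P(A=a|C=c) P(C=c).
    Assumes A, Z, C are discrete; empty cells are skipped, so the
    result is only meaningful when positivity holds empirically."""
    A, Z, C, Y = map(np.asarray, (A, Z, C, Y))
    psi = 0.0
    for c in np.unique(C):
        ic = C == c
        for z in np.unique(Z):
            denom = ((A == a_star) & ic).sum()
            if denom == 0:
                continue
            p_z = ((Z == z) & (A == a_star) & ic).sum() / denom  # P(Z=z|a*,c)
            inner = 0.0
            for a in np.unique(A):
                cell = (A == a) & (Z == z) & ic
                if cell.any():
                    # E[Y|a,z,c] * P(A=a|C=c)
                    inner += Y[cell].mean() * ((A == a) & ic).sum() / ic.sum()
            psi += p_z * inner * ic.mean()  # weight by P(C=c)
    return psi
```

With saturated working models, the maximum likelihood plug-in estimator of Section 3 reduces to exactly this computation.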
Pearl's front-door criterion provides conditions for identification of the indirect effect in the presence of unmeasured confounding of the exposure-outcome relation. The criterion requires: (1) $Z$ intercepts all directed paths from the exposure $A$ to the outcome $Y$ so that the indirect effect equals the total effect of $A$ on $Y$, (2) there is no unblocked back-door path from $A$ to $Z$, and (3) all back-door paths from $Z$ to $Y$ are blocked by $A$ and $C$ \citep{pearl2009causality}. More formally, suppose that M1-3 and the following additional assumption hold, \begin{align*} \textrm{F1.} & \ Y(a,z) = Y(a^*,z) = Y(z) \ \ \ \forall \ a, a^*,z \end{align*} F1 crucially states that $Z$ fully mediates the effect of $A$ on $Y$. In other words, the mediator variable(s) $Z$ intercept all directed paths from the exposure to the outcome. Figure 1b encodes one possible graph that satisfies the front-door criterion under a Finest Fully Randomized Causally Interpretable Structured Tree Graph (a submodel of the NPSEM-IE) interpretation of the causal diagram \citep{robins1986new, pearl2009causality, pearl2012causal}. When F1 holds, the term $E(Y(Z(a^*)))$ reduces to $E(Y(a^*))$. The identifying formula for the latter term is known as Pearl's front-door functional and matches equation (\ref{psi}) \citep{pearl2009causality}. See Appendix A2.1 for proof and further discussion. Under the front-door criterion (i.e. M1-3 and F1), the population intervention indirect effect can be expressed as, \begin{align} PIIE(a^*) = E[Y] - E[Y(a^*)] = PIE(a^*) \label{frontdoor} \end{align} \noindent That is, the $PIIE(a^*)$ is equal to the $PIE(a^*)$ when F1 holds. The identifying conditions for the PIIE can be thought of as a generalization of Pearl's front-door criterion, as F1 need not hold, thereby allowing a direct effect of the exposure $A$ on the outcome $Y$ not through the mediator variable(s) $Z$ (i.e. the PIDE may or may not be null). 
Importantly, while the PIIE is nonparametrically identified under M1-3, the PIE and the PIDE are not identified. In the event that M4 also holds, and thus $E[Y(a^*)]$ is identified, the PIE and PIDE are both nonparametrically identified along with the NIE and PIIE. In the special case of binary $A$, the PIE can be written as the effect of treatment on the treated (ETT) scaled by the prevalence of treated persons, \begin{align*} PIE(0)& = \underbrace{E(Y(1)-Y(0)| A=1)}_\text{ETT} \times Pr(A=1) \end{align*} See proof in Appendix section A2.5. Thus, the PIIE and PIDE can respectively be written as the indirect and direct components of the ETT simply upon rescaling by the prevalence of treated persons. This decomposition of the ETT offers an alternative to that of \cite{vansteelandt2012natural}. Further, in the case of binary $Y$, the PIE can be written as the attributable fraction (AF) scaled by the prevalence of the outcome, \begin{align*} PIE(a^*)& = \underbrace{ [ E(Y-Y(a^*))/E(Y) ] }_\text{AF} \times E(Y) \end{align*} Thus, the PIIE and PIDE can also be written as the indirect and direct components of the AF simply upon rescaling by the prevalence of the outcome. This decomposition of the AF offers an alternative to that of \cite{sjolander7mediation}. Further discussion can be found in Appendix section A2.6. \begin{figure}[htbp!] 
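The rescaling identity $PIE(0) = ETT \times Pr(A=1)$ holds exactly in-sample under consistency, since $E[Y - Y(0)] = E[(Y(1)-Y(0))\,I(A=1)]$. A quick numerical sanity check with simulated potential outcomes (purely illustrative; the data generating choices below are arbitrary and deliberately allow confounding between $A$ and the potential outcomes):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
A = rng.binomial(1, 0.3, n)               # binary exposure
Y0 = rng.normal(0.0, 1.0, n) + 0.8 * A    # Y(0), confounded with A
Y1 = Y0 + 2.0                             # Y(1)
Y = np.where(A == 1, Y1, Y0)              # consistency: observed outcome

pie0 = np.mean(Y - Y0)                    # PIE(0) = E[Y - Y(0)]
ett = np.mean((Y1 - Y0)[A == 1])          # E[Y(1) - Y(0) | A = 1]
print(pie0, ett * np.mean(A))             # the two sides agree up to rounding
```

Note that no unconfoundedness was needed for this identity, which is precisely why the PIE-based decomposition remains meaningful when exposure-outcome confounding is present.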
\centering \subfloat[direct effect of $A$-$Y$]{ \begin{tikzpicture}[->,>=stealth',node distance=1cm, auto,] \node[est] (A) {$A$}; \node[est, right = of A] (Z) {$Z$}; \node[est, right = of Z] (Y) {$Y$}; \node[est, below = of Z] (C) {$C$}; \path[pil, densely dashed] (A) edge node {} (Z); \path[pil, densely dashed] (Z) edge node {} (Y); \path[pil] (C) edge node {} (A); \path[pil] (C) edge node {} (Z); \path[pil] (C) edge node {} (Y); \path[pil] (A) edge [bend left=60] node [left] {} (Y); \end{tikzpicture} } \hspace{1.5cm} \subfloat[unmeasured confounding of $A$-$Y$]{ \begin{tikzpicture}[->,>=stealth',node distance=1cm, auto,] \node[est] (A) {$A$}; \node[est, right = of A] (Z) {$Z$}; \node[est, right = of Z] (Y) {$Y$}; \node[est, below = of Z] (C) {$C$}; \node[shade, above = of Z] (U) {$U$}; \path[pil,densely dashed] (A) edge node {} (Z); \path[pil,densely dashed] (Z) edge node {} (Y); \path[pil] (U) edge node {} (A); \path[pil] (U) edge node {} (Y); \path[pil] (C) edge node {} (A); \path[pil] (C) edge node {} (Z); \path[pil] (C) edge node {} (Y); \end{tikzpicture} } \hspace{1.5cm} \subfloat[direct effect and unmeasured confounding of $A$-$Y$]{ \begin{tikzpicture}[->,>=stealth',node distance=1cm, auto,] \node[est] (A) {$A$}; \node[est, right = of A] (Z) {$Z$}; \node[est, right = of Z] (Y) {$Y$}; \node[est, below = of Z] (C) {$C$}; \node[shade, above = of Z] (U) {$U$}; \path[pil,densely dashed] (A) edge node {} (Z); \path[pil,densely dashed] (Z) edge node {} (Y); \path[pil] (U) edge node {} (A); \path[pil] (U) edge node {} (Y); \path[pil] (C) edge node {} (A); \path[pil] (C) edge node {} (Z); \path[pil] (C) edge node {} (Y); \path[pil] (A) edge [bend left] node {} (Y); \end{tikzpicture} } \caption{Causal diagrams with indirect effects as dashed lines. 
The following indirect effects are identified in each diagram under a Nonparametric Structural Equation Model with Independent Errors \citep{pearl2009causality} interpretation of the diagram: (a) natural indirect effect and population intervention indirect effect, (b) natural indirect effect (equal to the total effect) and population intervention indirect effect (equal to the population intervention effect), and (c) population intervention indirect effect. Further, the indirect effects in (b) are identified under a Finest Fully Randomized Causally Interpretable Structured Tree Graph \citep{robins1986new}, which does not encode so-called ``cross-world'' assumptions such as M3.} \label{figure1} \end{figure} \section{Estimation and Inference} \subsection{Parametric estimation} We have considered identification under a nonparametric model for the observed data distribution. Estimation of formula (\ref{psi}) clearly requires estimation of the mean of $Y|A, Z, C$ and the densities for $Z|A,C$, $A|C$, and $C$. In principle, one may wish to estimate these quantities nonparametrically; however, as will typically be the case in practice, the observed set of covariates $C$ may have two or more components that are continuous, so that the curse of dimensionality would rule out the use of nonparametric estimators such as kernel smoothing or series estimation. Thus, we propose four estimators for the population intervention indirect effect that impose parametric models for different parts of the observed data likelihood, allowing other parts to remain unrestricted. Under this setting, each estimator will be consistent and asymptotically normal (CAN) under the assumed semiparametric model. We also propose a doubly robust estimator which is CAN under a semiparametric union model, thereby allowing for robustness to partial model misspecification. 
We only discuss estimation for the second term in the PIIE contrast, $\Psi$, as the first term $E(Y)$ can be consistently estimated nonparametrically by the empirical mean of $Y$. Let $Pr(y|a,z,c; \theta)$ denote a model for the density of $Y|A, Z, C$ evaluated at $y, a, z, c$ and indexed by $\theta$. Likewise, let $Pr(z|a,c; \beta)$ and $Pr(a|c; \alpha)$ denote models for $Z|A,C$ and $A|C$ evaluated at $z,a,c$ and $a,c$ respectively, with corresponding parameters $\beta$ and $\alpha$. These models could in principle be made as flexible as allowed by sample size; to simplify exposition, we will focus on simple parametric models. The first of the four estimators is the maximum likelihood estimator (MLE), $\hat{\Psi}_{mle}$, under a model that specifies parametric models for $A$, $Z$, and $Y$, and a nonparametric model for the distribution of $C$ estimated by its empirical distribution. The MLE is obtained by the plug-in principle \citep{casella2002statistical}: \begin{align*} \hat{\Psi}_{mle} & =\frac{1}{n} \sum_{i=1}^n \bigg\{ \sum_{z} Pr(Z = z \mid A=a^*, C_i ; \hat{\beta}) \times \\ & \hspace{3cm} \sum_{a} E(Y \mid A=a, Z=z, C_i ; \hat{\theta}) Pr(A=a \mid C_i ; \hat{\alpha}) \bigg\} \end{align*} \noindent where $\hat{\theta}$, $\hat{\beta}$, and $\hat{\alpha}$ are the MLEs of $\theta$, $\beta$, and $\alpha$. This estimator is only consistent under correct specification of the three required models, which we define as $\mathcal{M}_{y,z,a}$. For the remainder of the paper, we consider an alternate ML estimator under model $\mathcal{M}_{y,z}$, which specifies parametric models for $Z$ and $Y$, and a nonparametric model for the joint distribution of $A,C$ estimated by its empirical distribution. 
\begin{align*} \hat{\Psi}^{alt}_{mle} & =\frac{1}{n} \sum_{i=1}^n \bigg\{ \sum_{z} Pr(Z = z \mid A=a^*, C_i ; \hat{\beta}) E(Y \mid A_i, Z=z, C_i ; \hat{\theta}) \bigg\} \end{align*} \subsection{Semiparametric estimation} Next, we consider two semiparametric estimators for $\Psi$. The first is under model $\mathcal{M}_z$, which posits a density for the law of $Z|A,C$ but allows the densities of $Y|A,Z,C$, $A | C$, and $C$ to remain unrestricted. The second is under model $\mathcal{M}_{y,a}$, which instead posits a model for the outcome mean of $Y|Z, A, C$ and a density for $A|C$, but allows the densities of $Z|A,C$ and $C$ to be unrestricted. \begin{align*} \hat{\Psi}_1 & = \frac{1}{n} \sum_{i=1}^n Y_i \frac{f(Z_i \mid a^*, C_i; \hat{\beta})}{f(Z_i \mid A_i, C_i; \hat{\beta})} \\ \hat{\Psi}_2 & = \frac{1}{n} \sum_{i=1}^n \frac{I(A_i = a^*)}{f(A_i \mid C_i; \hat{\alpha})} E\big( E \big\{ Y_i \mid A_i,Z_i,C_i ; \hat{\theta} \big\} \mid C_i; \hat{\alpha} \big) \end{align*} \begin{lemma} Under standard regularity conditions and P1, the estimator $\hat{\Psi}_1$ is consistent and asymptotically normal under model $\mathcal{M}_{z}$. \end{lemma} \begin{lemma} Under standard regularity conditions and P2, the estimator $\hat{\Psi}_2$ is consistent and asymptotically normal under model $\mathcal{M}_{y,a}$. \end{lemma} The estimator $\hat{\Psi}_1$ will generally fail to be consistent if the density for $Z | A,C$ is incorrectly specified, even if the rest of the likelihood is correctly specified. Likewise, the estimator $\hat{\Psi}_2$ will generally fail to be consistent if either the mean model for $Y|A,Z,C$ or the density of $A|C$ is incorrectly specified. In order to motivate our doubly robust estimator, the following result gives the efficient influence function for $\Psi$ in the nonparametric model $\mathcal{M}_{np}$, which does not place any model restriction on the observed data distribution. 
The following results are novel and have not previously appeared in the literature. \setcounter{theorem}{0} \begin{theorem} The efficient influence function of $\Psi$ in $\mathcal{M}_{np}$ is: \begin{align} \varphi^{eff}(Y, Z, A, C) & = (Y - E(Y \mid A,Z,C)) \frac{f(Z \mid a^*, C)}{f(Z \mid A,C)} \notag \\ & \hspace{.5cm} + \frac{I(A = a^*)}{f(A \mid C)} \big( \sum_a E[Y \mid a, Z, C] f(a \mid C) \notag \\ & \hspace{4cm} - \sum_{a, \bar{z}} E(Y \mid a, \bar{z}, C) f(\bar{z} \mid A, C) f(a \mid C) \big) \notag \\ &\hspace{.5cm} + \sum_{z} E[Y \mid A, z, C] f(z \mid a^*,C) - \Psi \label{eif} \end{align} \noindent and the semiparametric efficiency bound of $\Psi$ in $\mathcal{M}_{np}$ is given by $var\{ \varphi^{eff} \}$. \end{theorem} The proof for this theorem can be found in the Appendix section A1.4. An implication of this result is that any regular and asymptotically linear (RAL) estimator $\hat{\Psi}$ in model $\mathcal{M}_{np}$ must satisfy $\sqrt{n}( \hat{\Psi} - \Psi) = \frac{1}{\sqrt{n}} \sum_{i=1}^n \varphi^{eff}(Y_i, Z_i, A_i, C_i) + o_p(1)$. In other words, all RAL estimators in this model are asymptotically equivalent and attain the semiparametric efficiency bound of $\Psi$ in $\mathcal{M}_{np}$ \citep{bickel1998efficient}. The result motivates the following estimator of $\Psi$, which we formally establish to be doubly robust. 
\begin{align} \hat{\Psi}_{dr} & = \frac{1}{n} \sum_{i=1}^n [Y_i - E(Y \mid A_i,Z_i,C_i; \hat{\theta})] \frac{f(Z_i \mid a^*, C_i; \hat{\beta})}{f(Z_i \mid A_i,C_i; \hat{\beta})} \notag \\ & \hspace{1cm} + \frac{I(A_i = a^*)}{Pr(A_i =a^* \mid C_i; \hat{\alpha})} \big( \sum_a E[Y \mid a, Z_i, C_i; \hat{\theta}] f(a \mid C_i; \hat{\alpha}) \notag \\ & \hspace{7cm} - \sum_{a, \bar{z}} E(Y \mid a, \bar{z}, C_i; \hat{\theta}) f(\bar{z} \mid A_i, C_i; \hat{\beta}) f(a \mid C_i; \hat{\alpha}) \big) \notag \\ & \hspace{2cm} + \sum_{z} E[Y \mid A_i, z, C_i; \hat{\theta}] f(z \mid a^*,C_i; \hat{\beta}) \label{est_sp} \end{align} \begin{theorem} Under standard regularity conditions and the positivity assumptions given by P1 and P2, the estimator $\hat{\Psi}_{dr}$ is consistent and asymptotically normal provided that one of the following holds: (1) the model for the mean $E(Y | A,Z,C)$ and the exposure density $f(A | C)$ are both correctly specified; or (2) the model for the mediator density $f(Z | A,C)$ is correctly specified. Also, $\hat{\Psi}_{dr}$ attains the semiparametric efficiency bound for the union model $\mathcal{M}_{union} = \mathcal{M}_{y,a} \bigcup \mathcal{M}_{z}$, and therefore for the nonparametric model $\mathcal{M}_{np}$ at the intersection submodel where all models are correctly specified. \end{theorem} The estimator $\hat{\Psi}_{dr}$ offers two genuine opportunities to consistently estimate $\Psi$, and, thus, the PIIE. This is clearly an improvement over the other estimators $\hat{\Psi}_{mle}$, $\hat{\Psi}_1$ and $\hat{\Psi}_2$, which are only guaranteed to be consistent under more stringent parametric restrictions. In addition, the doubly-robust estimator achieves the semiparametric efficiency bound in the union model $\mathcal{M}_{union}$ and will thus yield valid inference provided one of the two strategies holds. Note that the estimator will be less efficient than the MLE in the submodel $\mathcal{M}_{y,z,a}$ where all models are correctly specified. 
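To fix ideas, here is a compact sketch of the doubly robust estimator for binary $A$, a Gaussian mediator model linear in $(A, C)$, and an outcome mean linear in $(A, Z, C)$; linearity of the outcome model in $z$ lets the integrals over the mediator density reduce to evaluation at the fitted mediator mean. All model forms, parameter names, and the interface are illustrative assumptions, not the paper's software.

```python
import numpy as np

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

def psi_dr(Y, Z, A, C, a_star, theta, beta, alpha):
    """Doubly robust estimator sketch. Working models (all assumed forms):
    E(Y|A,Z,C) = t0 + t1*A + t2*Z + t3*C          with theta = (t0,t1,t2,t3)
    Z|A,C ~ N(b0 + b1*A + b2*C, s2)               with beta  = (b0,b1,b2,s2)
    Pr(A=1|C) = expit(al0 + al1*C), A binary      with alpha = (al0,al1)"""
    t0, t1, t2, t3 = theta
    b0, b1, b2, s2 = beta
    al0, al1 = alpha
    m = lambda a, z: t0 + t1 * a + t2 * z + t3 * C        # outcome mean model
    mu = lambda a: b0 + b1 * a + b2 * C                   # mediator mean model
    pdf = lambda z, loc: np.exp(-(z - loc) ** 2 / (2 * s2)) / np.sqrt(2 * np.pi * s2)
    pi1 = expit(al0 + al1 * C)
    pi_astar = pi1 if a_star == 1 else 1.0 - pi1
    # residual term, weighted by the mediator density ratio
    t_res = (Y - m(A, Z)) * pdf(Z, mu(a_star)) / pdf(Z, mu(A))
    # E_a[m(a, z, C)] averaged over the exposure model, at Z and at E(Z|A,C)
    avg_at = lambda z: m(0, z) * (1.0 - pi1) + m(1, z) * pi1
    t_aug = (A == a_star) / pi_astar * (avg_at(Z) - avg_at(mu(A)))
    # plug-in term: integral of m(A, z, C) against f(z | a*, C)
    t_plug = m(A, mu(a_star))
    return np.mean(t_res + t_aug + t_plug)
```

Consistent with the double robustness property stated above, either a correct outcome/exposure pair or a correct mediator model suffices for this construction to recover $\Psi$.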
For inference on $\Psi$, we provide a consistent estimator of the asymptotic variance for the proposed estimators in the Appendix section A2.4. Wald-type confidence intervals for $\Psi$ can then be based on $\hat{\Psi}_{mle}$, $\hat{\Psi}_1$, $\hat{\Psi}_2$, or $\hat{\Psi}_{dr}$ and the corresponding standard error estimator. An important advantage of the doubly-robust estimator is that it can easily accommodate modern machine learning for estimation of high dimensional nuisance parameters, such as $E(Y|A,Z,C)$ or $f(Z|A,C)$ \citep{van2011targeted,newey2017cross,chernozhukov2017double}. However, investigators should exercise caution when implementing these more flexible methods, particularly if nonparametric methods are used to estimate nuisance parameters. This is because such methods typically cannot attain root-$n$ convergence rates, although the doubly robust estimator would in principle provide valid root-$n$ inferences about $\Psi$ provided that estimators of nuisance parameters have a convergence rate faster than $n^{-1/4}$ \citep{newey1990semiparametric,robins2017higher}. A major challenge with using complex machine learning methods such as random forests arises if the corresponding estimator of one nuisance function (say $f(A|C)$) fails to converge at a rate faster than $n^{-1/4}$ even if the other nuisance function (say $f(Z|A,C)$) is estimated at root-$n$ rate; in such a case, it is not entirely clear what the asymptotic distribution of $\hat{\Psi}_{dr}$ is. \section{Simulation Study} \subsection{Data generating mechanism} We now report extensive simulation studies which aim to illustrate: (i) robustness of the PIIE to unmeasured confounding of the exposure-outcome relation, and (ii) the robustness properties of our various semiparametric estimators under model misspecification. 
The data generating mechanism for the simulations was as follows: \begin{align*} C_1 & \sim Ber(.6) \\ C_2 | C_1 & \sim Ber(\textrm{expit}(1 + .5c_1)) \\ C_3 & \sim Ber(.3) \\ A | C_1, C_2, C_3 & \sim Ber(\textrm{expit}(.5 + .2c_1 + .4c_2 + .5c_1 c_2 + .2 c_3)) \\ Z | A, C_1, C_2 & \sim N(1 + a -2c_1 + 2c_2 + 8c_1 c_2, \ 4) \\ Y | A, Z, C_1, C_2, C_3 & \sim N(1 + 2a + 2z - 8az + 3c_1 + c_2 + c_1 c_2 + c_3, \ 1) \end{align*} Therefore, $C_1$, $C_2$, and $C_3$ confound the $A$-$Y$ association while only $C_1$ and $C_2$ confound the $A$-$Z$ and $Z$-$Y$ associations. Simulations were performed 10,000 times with a sample size of 1,000. We evaluated the performance of the proposed estimators under the following settings, \begin{align*} (a) &\ \mathcal{M}_{y,z,a} : \ \stackrel{*}{E}(Y \mid a, z, c_1, c_2, c_3), \ \stackrel{*}{f}(Z \mid a,c_1,c_2), \ \stackrel{*}{f}(A \mid c_1,c_2,c_3) \\ (b) &\ \mathcal{M}_{y,z,a}': \overline{E}(Y \mid a, z, c_1, c_2) \ \textrm{($c_3$ left out)}, \ \overline{f}(A \mid c_1,c_2) \ \textrm{($c_3$ left out)}, \ \stackrel{*}{f}(Z \mid a,c_1,c_2) \\ (c) &\ \mathcal{M}_{z}: \ \ \ \tilde{E}(Y \mid a, z, c_1, c_2) \ \textrm{($az$ left out)}, \ \tilde{f}(A \mid c_1) \ \textrm{($c_2$, $c_1c_2$, $c_3$ left out)}, \ \stackrel{*}{f}(Z \mid a,c_1,c_2) \\ (d) &\ \mathcal{M}_{y,a}: \ \ \stackrel{*}{E}(Y \mid a, z, c_1, c_2), \ \stackrel{*}{f}(A \mid c_1), \ \tilde{f}(Z \mid a,c_1) \ \textrm{($c_2$, $c_1c_2$, $c_3$ left out)} \end{align*} \noindent where $*$ denotes that the model is correctly specified and $\sim$ and $-$ denote that the model is misspecified. Note that the alternate ML estimator, $\hat{\Psi}^{alt}_{mle}$, does not specify a model for $A \mid C$. \subsection{Results} Estimation and inference were performed using the \texttt{piieffect} function implemented in the \texttt{frontdoorpiie} R package \citep{fulchergithub}. 
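For readers who wish to replay the study, the data generating mechanism above translates directly into a short simulation routine. This is a sketch in Python; the reported results were produced with the authors' R code.

```python
import numpy as np

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

def simulate(n, seed=0):
    """One draw of size n from the Section 4.1 data generating mechanism."""
    rng = np.random.default_rng(seed)
    c1 = rng.binomial(1, 0.6, n)
    c2 = rng.binomial(1, expit(1.0 + 0.5 * c1))
    c3 = rng.binomial(1, 0.3, n)
    a = rng.binomial(1, expit(0.5 + 0.2*c1 + 0.4*c2 + 0.5*c1*c2 + 0.2*c3))
    z = rng.normal(1.0 + a - 2.0*c1 + 2.0*c2 + 8.0*c1*c2, 2.0)  # var 4 -> sd 2
    y = rng.normal(1.0 + 2.0*a + 2.0*z - 8.0*a*z + 3.0*c1 + c2 + c1*c2 + c3, 1.0)
    return c1, c2, c3, a, z, y
```

Repeating `simulate(1000, seed=s)` over 10,000 seeds reproduces the Monte Carlo design described above.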
Under simple linear models for the outcome and mediator variables, the variance estimator of the MLE admits a simple closed form expression (see Appendix section A2.3). The variance estimator for the semiparametric estimators is described in Appendix section A2.4. Alternatively, one may use the nonparametric bootstrap for inference. In both Figure 2 and Table 1, the maximum likelihood estimator $\hat{\Psi}^{alt}_{mle}$ was consistent under correct model specification, scenario (a), and remained consistent in scenario (b), in which the omission of $c_3$ mimics unmeasured confounding of the exposure-outcome relationship. This confirms our theoretical result, as the PIIE is in fact empirically identified even if the exposure-outcome relationship is subject to unmeasured confounding. The MLE is not robust to model misspecification of the form in scenarios (c) and (d). On the other hand, the doubly-robust semiparametric estimator $\hat{\Psi}_{dr}$ appears to be consistent under all scenarios (a)-(d). The semiparametric estimator $\hat{\Psi}_{1}$, which only depends on the choice of model for the density for $Z | A,C$, has large bias in scenario (d). The semiparametric estimator $\hat{\Psi}_{2}$, which only depends on a model for the mean of $Y | A,Z,C$ and the density of $A | C$, has large bias in scenario (c). As expected, the maximum likelihood estimator is more efficient than the semiparametric estimators when all parametric models are correctly specified. For correctly specified models, Monte Carlo coverage of 95\% confidence intervals was close to the nominal level. Confidence intervals based on inconsistent estimators had incorrect coverage. 
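To see mechanically why $\hat{\Psi}_{1}$ depends only on the mediator model, here is a sketch of the density-ratio computation under a homoscedastic Gaussian working model $Z \mid A, C \sim N(\beta_0 + \beta_1 A + \beta_2 C, \sigma^2)$; the model form, parameter packing, and function names are our assumptions for illustration.

```python
import numpy as np

def _normal_pdf(z, mu, s2):
    return np.exp(-(z - mu) ** 2 / (2.0 * s2)) / np.sqrt(2.0 * np.pi * s2)

def psi_1(Y, Z, A, C, a_star, beta):
    """Density-ratio estimator (1/n) sum_i Y_i f(Z_i|a*,C_i)/f(Z_i|A_i,C_i)
    under a fitted Gaussian mediator model with beta = (b0, b1, b2, s2)."""
    b0, b1, b2, s2 = beta
    num = _normal_pdf(Z, b0 + b1 * a_star + b2 * C, s2)
    den = _normal_pdf(Z, b0 + b1 * A + b2 * C, s2)
    return np.mean(Y * num / den)
```

When $\beta_1 = 0$ the density ratio is identically one and $\hat{\Psi}_{1}$ collapses to the sample mean of $Y$, giving a null PIIE, exactly as it should when the mediator does not respond to the exposure.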
\setcounter{figure}{1} \begin{figure}[!htbp] \centering \caption{Population intervention indirect effect by estimator and model specifications} \includegraphics[scale=.11] {sim-results-notitle-revisions-bw.png} \end{figure} \begin{table}[!htbp] \centering \caption{Operating characteristics by model specifications and estimator} \label{} \fbox{% \begin{tabular}{lccccc} & $\hat{\Psi}$ & $\widehat{PIIE}$ & Variance & Proportion bias & .95 CI Coverage \\ \hline &&&&& \\ \large $\mathcal{M}_{y,z,a}$ \normalsize &&&&& \\ MLE & -18.19 & -4.61 & 0.50 & $<0.01$ & 0.95 \\ SP 1 & -18.20 & -4.59 & 0.54 & $<0.01$ & 0.95 \\ SP 2 & -18.19 & -4.61 & 0.60 & $<0.01$ & 0.95 \\ SP DR & -18.19 & -4.61 & 0.56 & $<0.01$ & 0.95 \\ &&&&& \\ \large $\mathcal{M}_{y,z,a}'$\normalsize &&&&& \\ MLE & -18.20 & -4.59 & 0.50 & $<0.01$& 0.95 \\ SP 1 & -18.21 & -4.57 & 0.54 & $<0.01$ & 0.95 \\ SP 2 & -18.20 & -4.59 & 0.55 & $<0.01$ & 0.94 \\ SP DR & -18.20 & -4.59 & 0.55 & $<0.01$ & 0.94 \\ &&&&& \\ \large $\mathcal{M}_{z}$\normalsize &&&&& \\ MLE & -19.63 & -3.17 & 0.27 & -0.31 & 0.23 \\ SP 1 & -18.22 & -4.58 & 0.54 & $<0.01$ & 0.95 \\ SP 2 & -16.70 & -6.10 & 1.04 & 0.33 & 0.70 \\ SP DR & -18.22 & -4.58 & 0.52 & $<0.01$ & 0.94 \\ &&&&& \\ \large $\mathcal{M}_{y,a}$ \normalsize &&&&& \\ MLE & -14.16 & -8.64 & 1.61 & 0.88 & 0.12 \\ SP 1 & -12.75 & -10.05 & 3.32 & 1.18 & 0.10 \\ SP 2 & -18.20 & -4.60 & 0.55 & $<0.01$ & 0.95 \\ SP DR & -18.20 & -4.60 & 0.55 & $<0.01$ & 0.95 \\ \end{tabular}} \footnotesize \begin{flushleft} Note: for the $\hat{\Psi}$ column, MLE refers to using the $\hat{\Psi}^{alt}_{mle}$ estimator for the $\widehat{PIIE}$. 
Likewise, SP1 refers to using $\hat{\Psi}_{1}$, SP2 refers to using $\hat{\Psi}_{2}$, and SP DR refers to using $\hat{\Psi}_{dr}$ \\ \end{flushleft} \end{table} \newpage \section{Safer Deliveries Program in Zanzibar, Tanzania} The Safer Deliveries program aimed to reduce the high rates of maternal and neonatal mortality in Zanzibar, Tanzania by increasing the number of pregnant women who deliver in a health care facility and attend prenatal and postnatal check-ups. As of May 2017, the program was active in six (out of 11) districts in Zanzibar on the islands of Unguja and Pemba. The program trains community health workers (CHWs) selected by the Ministry of Health to participate in the program based on their literacy, expressed commitment to the improvement of health, and respectability in their communities. The CHWs work with community leaders and staff at nearby health facilities to identify and register pregnant women and are expected to visit each woman in her home three times during pregnancy to screen for danger signs and provide counseling to help the woman prepare for a facility delivery. During the registration visit, the program's mobile app calculated a woman's risk category (low, medium, or high) based on a combination of obstetric and demographic factors. Women categorized as high risk were instructed to deliver at a referral hospital. The app then calculated a recommended savings amount based on the woman's recommended delivery location. On average, high risk women were recommended to save more money than low or medium risk women, as they were recommended to deliver at referral hospitals, of which there are only four in Zanzibar. This analysis assessed the effectiveness of this tailored savings recommendation by risk category on actual savings. We considered high risk category (vs. low or medium risk) as our binary exposure of interest, although our methods would apply equally to a categorical exposure variable. 
The mediator variable was recommended savings in Tanzanian Shilling (TZS), which was calculated during the first visit. The outcome variable was actual savings achieved by the woman and her family at the time of her delivery. In the analysis, we adjusted for district of residence to account for regional differences in health-seeking behavior and accessibility of health facilities. The population intervention indirect effect was the best estimand for this research question, as we were interested in the mediated effect of savings recommendations under the risk categories observed in the current population. Additionally, there was likely unmeasured confounding of the relationship between the exposure (high risk) and the outcome (actual savings), because most socio-economic and health-seeking factors that may be associated with both risk category and a woman's ability to save were not collected by the program. Furthermore, confounding of the exposure-mediator and mediator-outcome associations was less of a concern, as the app calculated the recommended savings based on the delivery location, which is determined both by risk category and distance to the appropriate health facility. That is, women in a low risk category are recommended to deliver at the facility closest to them, whereas women in the high risk category are recommended to deliver at one of four available referral facilities in Zanzibar. 
\begin{table}[!htbp] \centering \caption{Characteristics of the Safer Deliveries study population (n=4,102)} \label{} \fbox{% \begin{tabular}{lr} \textbf{Variable} & $n$ (\%) \\ \hline \textbf{Risk category} & \\ \ \ Low or medium & 3,364 (82) \\ \ \ High & 738 (18) \\ \textbf{District} & \\ \ \ North A & 977 (24) \\ \ \ North B & 1,392 (34) \\ \ \ Central & 691 (17) \\ \ \ West & 798 (19) \\ \ \ South & 244 (6) \\ \textbf{Recommended savings} & \\ \ \ mean (sd) & 13.12 (6.03) \\ \textbf{Actual savings} & \\ \ \ mean (sd) & 14.09 (12.11) \end{tabular}} \end{table} This study included women enrolled in the Safer Deliveries program who had a live birth by May 31, 2017 (n=4,511). We excluded: 253 women from the newly-added Mkoani district of Pemba Island, 2 women with missing last menstrual period (LMP) dates and estimated delivery dates (EDDs), 31 women with invalid enrollment times, and 123 women with missing risk category, district, or savings information. Our final study population included 4,102 women. Therefore, the following analyses are only valid under an assumption that data are missing completely at random. The observed average savings at time of delivery was \$14.09. Note that for ease of interpretation we converted from Tanzanian Shilling (1 USD = 2,236.60 TZS on May 31, 2017). We estimated the population intervention indirect effect; that is, the difference in average savings between the current population of women and a population in which, possibly contrary to fact, every woman received the savings recommendation of a low or medium risk woman. 
To estimate the population intervention indirect effect we employed our four estimators under the following parametric models: \begin{align*} & highrisk = \alpha_0 + \alpha_2^T district + \varepsilon_a \\ & savings_{rec} = \beta_0 + \beta_1 highrisk + \beta_2^T district + \varepsilon_z \\ & savings_{act} = \theta_0 + \theta_1 highrisk + \theta_2 savings_{rec} + \theta_3^T district + \varepsilon_y \end{align*} Table 2 gives the distribution of variables in this study population. The maximum likelihood estimator, $\hat{\Psi}^{alt}_{mle}$, estimated the average savings for all women, had their recommended savings been set to the amount they would have been recommended to save had they not been high risk, to be \$13.87, resulting in a PIIE of \$0.22 with a 95\% CI of (\$0.15, \$0.30). The semiparametric estimator that only includes models for $A | C$ and $Y | A, Z, C$, $\hat{\Psi}_{2}$, gave almost identical results. The doubly robust semiparametric estimator yielded an estimate of \$13.95, corresponding to a PIIE of \$0.14 with a 95\% CI of (-\$0.03, \$0.32). The semiparametric estimator that only depends on a parametric model for $Z | A, C$, $\hat{\Psi}_{1}$, resulted in very similar inferences to the doubly-robust estimator. To compare these estimators, we conducted a bootstrap test of the null hypothesis that each of the estimators (MLE, SP1, SP2) converges to the same probability limit as the semiparametric doubly-robust estimator. The procedure was motivated by \cite{hausman1978specification} to directly test whether two estimators are consistently estimating the same parameter value. We used 1,000 bootstrap samples and did not find evidence of a difference between any of the three estimators and the SP DR ($P=0.35$ for MLE; $P=0.14$ for SP1; $P=0.36$ for SP2). As such, we concluded that there was evidence of a non-zero PIIE -- revealing that the tailored savings recommendations to high risk women affect their actual savings by the time of their delivery. 
On average, if high risk women had been recommended to save what they would have been had they been low or medium risk, this would slightly decrease the amount of money they saved. \begin{table}[!htbp] \caption{Effect of risk category on actual savings mediated by recommended savings ($n=4,102$) } \centering \fbox{% \begin{tabular}{ccccc} & $\hat{\Psi}$ & $\widehat{PIIE}$ & Standard Error & 95\% CI \\ \hline MLE & 13.87 & 0.22 & 0.04 & (0.15, 0.30) \\ SP 1 & 14.08 & 0.02 & 0.11 & (-0.20, 0.23) \\ SP 2 & 13.87 & 0.22 & 0.05 & (0.13, 0.31) \\ SP DR & 13.95 & 0.14 & 0.09 & (-0.03, 0.32) \end{tabular}} \end{table} \newpage \section{Discussion} In this paper, we have presented a decomposition of the population intervention effect, which we have argued is useful to address policy-related questions at the population-level, especially in the presence of a harmful exposure. In addition, the decomposition offers an alternative to the recently proposed decompositions for the effect of treatment on the treated \citep{vansteelandt2012natural} and the attributable fraction \citep{sjolander7mediation}. Importantly, our resulting population intervention indirect effect is robust to unmeasured confounding of the exposure-outcome relationship, which does not hold for the natural indirect effect, the natural indirect effect on the exposed, or the natural indirect attributable fraction. We note that in a separate manuscript, we recently established that the NIE can in fact be identified if one replaces M4 with the assumption that there is no additive interaction between the mediator and the unmeasured confounder of the $A$-$Y$ association, a strictly stronger requirement than that for the PIIE \citep{fulcher2018estimation}. We developed a doubly-robust estimator for the PIIE, which is consistent and asymptotically normal in a union model where at least one of the following holds: (1) the outcome and exposure models are correctly specified; or (2) the mediator model is correctly specified. 
Our estimator is strictly more robust than the multiply robust estimator for the NIE proposed by \cite{tchetgen2012semiparametric}, which requires that any two of the three models are correctly specified. \cite{sjolander7mediation} proposed a doubly-robust estimator for the natural indirect attributable fraction requiring that either $p(Y | A,M,C)$ or $p(A|M,C)$ is correctly specified \underline{and} either $p(Y|A,C)$ or $p(A|C)$ is correctly specified. As mentioned by \cite{sjolander7mediation}, such a doubly-robust estimator may not be realizable due to the fact that the various submodels of the union model are not variation independent, so that misspecification of one model generally rules out the possibility that another could still be correctly specified. For example, when $M$ is binary, a logistic model for $p(Y|A,M,C)$ would imply a complex form for $p(Y | A,C)$. In a separate strand of work, \cite{lendle2013identification} developed an estimator for the natural indirect effect among the (un)exposed with the same robustness properties as \cite{sjolander7mediation}. We emphasize that the use of the doubly-robust estimator of the PIIE does not obviate concerns about unmeasured confounding of the exposure-mediator or mediator-outcome relations, or exposure-induced mediator-outcome confounding. When such confounding is of concern, a sensitivity analysis should be performed \citep{vanderweele2011bias,tchetgen2012semiparametric,tchetgen2014estimation}. Investigators should exercise caution if they also wish to report the PIDE and PIE, as these effects are not robust to exposure-outcome confounding. If exposure-outcome unmeasured confounding can be ruled out with reasonable certainty, then one can estimate the PIDE using our doubly-robust estimator for $\Psi$ and the well-known doubly-robust estimator for $E(Y(a^*))$ from \cite{robins2000sensitivity}. Likewise, the PIE can be estimated using the doubly-robust estimator developed by \cite{hubbard2008population}. 
Lastly, although the front-door criterion has been available in the literature for several years, this is the first methodology developed for semiparametric estimation and inference of the front-door functional $\Psi$. Therefore, when an investigator believes she has identified one or more mediator variables that satisfy the front-door criterion, she can use our proposed methodology to obtain an estimate of the PIE or the average causal effect that is not only doubly-robust, but also robust to unmeasured confounding of the exposure-outcome relation. \section{Proofs of lemmas and theorems} \subsection{Proof of Lemma 1. Generalized front-door functional derivation} \begin{align*} \Psi & = E[Y(Z(a^*))] \\ & = \sum_{c,a,z} E(Y(a,z)| Z(a^*)=z,A=a,C=c) Pr(Z(a^*) = z | A=a, C=c) Pr(A=a,C=c) \\ & \stackrel{M2}{=} \sum_{c,a,z} E(Y(a,z)| Z(a^*)=z,A=a,C=c) Pr(Z(a^*) = z | A=a^*, C=c) Pr(A=a,C=c) \\ & \stackrel{M1,M3}{=} \sum_{c,a,z} E(Y(a,z)| A=a,C=c) Pr(Z = z | A=a^*, C=c) Pr(A=a,C=c) \\ & \stackrel{M3}{=} \sum_{c,a,z} E(Y(a,z)| Z=z,A=a,C=c) Pr(Z = z | A=a^*, C=c) Pr(A=a,C=c)\\ & \stackrel{M1}{=} \sum_{c,a,z} E(Y| Z=z,A=a,C=c) Pr(Z = z | A=a^*, C=c) Pr(A=a,C=c)\\ & = \sum_{z,c}Pr(Z = z | A=a^*, C=c) \sum_a E(Y| Z=z,A=a,C=c)Pr(A=a | C=c) Pr(C=c) \end{align*} \subsection{Proof of Lemma 2.} \begin{align*} E[Y \frac{f(Z | a^*, C)}{f(Z | A, C)}] & = \sum_{y,a,z,c} y \frac{f(z | a^*, c)}{f(z | a, c)} f(y | a,z,c) f(z | a, c) f(a | c) f(c) \\ & = \sum_{a,z,c} E(Y | a, z,c) f(z | a^*, c) f(a | c) f(c) \\ & = \Psi \end{align*} The proof of asymptotic normality is fairly standard under the usual regularity conditions once unbiasedness of the estimating equation is established (see Theorem 1A in \cite{robins1992estimating}).
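As a quick numerical sanity check on the Lemma 1 identity (not part of the original derivation), the sketch below builds a small, fully hypothetical discrete structural model consistent with Figure 1c (binary $U$, $C$, $A$, $Z$, $Y$, with $U$ unmeasured), computes $E[Y(Z(a^*))]$ directly from the structural model, and verifies it equals the front-door functional computed from the observed law alone. All numeric values are illustrative assumptions.

```python
import itertools

# Hypothetical discrete SCM consistent with Figure 1c:
# U, C exogenous; A <- (C, U); Z <- (A, C); Y <- (Z, A, U, C).
pU1, pC1 = 0.4, 0.3
pA1 = lambda c, u: 0.2 + 0.3 * c + 0.35 * u            # P(A=1 | c, u)
pZ1 = lambda a, c: 0.15 + 0.5 * a + 0.2 * c            # P(Z=1 | a, c)
EY = lambda z, a, u, c: 0.1 + 0.3*z + 0.15*a + 0.25*u + 0.1*c  # E[Y | z,a,u,c]

def bern(p1, v):                                       # P(X = v) from P(X = 1)
    return p1 if v == 1 else 1.0 - p1

def truth(a_star):
    """E[Y(Z(a*))] computed directly from the structural model."""
    return sum(bern(pU1, u) * bern(pC1, c)
               * bern(pA1(c, u), a)                    # actual exposure
               * bern(pZ1(a_star, c), z)               # mediator drawn under A = a*
               * EY(z, a, u, c)
               for u, c, a, z in itertools.product((0, 1), repeat=4))

def front_door(a_star):
    """Psi from the observed law only (U marginalized out)."""
    pA_obs = lambda a, c: sum(bern(pA1(c, u), a) * bern(pU1, u) for u in (0, 1))
    def EY_obs(z, a, c):                               # E[Y | z, a, c]
        return sum(EY(z, a, u, c) * bern(pA1(c, u), a) * bern(pU1, u)
                   for u in (0, 1)) / pA_obs(a, c)
    return sum(bern(pZ1(a_star, c), z) * bern(pC1, c)
               * sum(EY_obs(z, a, c) * pA_obs(a, c) for a in (0, 1))
               for z, c in itertools.product((0, 1), repeat=2))

for a_star in (0, 1):
    assert abs(truth(a_star) - front_door(a_star)) < 1e-12
```

Because $Z$ depends only on $(A, C)$ and the front-door formula marginalizes $U$ out through $p(u \mid a, c)$, the two quantities agree exactly up to floating-point error.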
\subsection{Proof of Lemma 3.} \begin{align*} E\bigg[ \frac{I(A = a^*)}{f(A | C)} E\big( E \big\{ Y | A,Z,C \big\} | C \big) \bigg] & = \sum_{z,\bar{a},c} I(\bar{a}= a^*) f(z | \bar{a}, c) f(c) E\big( E \big\{ Y | A,z,c \big\} | c \big) \\ & = \sum_{z,c} f(z | a^*, c) f(c) \sum_{a} E (Y | a,z,c) f(a | c) \\ & = \Psi \end{align*} The proof of asymptotic normality is fairly standard under the usual regularity conditions once unbiasedness of the estimating equation is established (see Theorem 1A in \cite{robins1992estimating}). \subsection{Proof of Theorem 1. Efficient influence function derivation} \noindent We aim to find an efficient influence function, $\varphi^{eff}(Y,Z,A,C)$, for $\Psi = E[Y(Z(a^*))]$ under the model corresponding to Figure 1c. Our functional is nonparametrically identified under the causal model represented by a complete graph. In other words, the causal model induces no restrictions on the observed data. Thus, there is a unique influence function, $\varphi^{eff}(Y,Z,A,C)$, and it achieves the semiparametric efficiency bound of $\Psi$ in $\mathcal{M}_{np}$. We will use the definition of pathwise differentiability to find the efficient influence function. $$\frac{d}{dt} \Psi(F_t) = E[ \varphi^{eff}(Y,Z,A,C) \times S(Y,A,Z,C) ] $$ \noindent where $S(Y,A,Z,C)$ is the score corresponding to the whole model.
\begin{align} \frac{d}{dt} \Psi(F_t) & = \sum_{z,a,c} \frac{d}{dt} E_t[Y | A = a, Z=z, C=c] f_t(z | a^*, c) f_t(a | c) f_t(c) \notag \\ & \textrm{(from now on, for convenience we write } f \textrm{ instead of } f_t)\notag \\ & = \sum_{z,a,c} \sum_y y \frac{d}{dt} ( f(y | a, z,c) f(z | a^*, c) f(a | c) f(c) ) \notag \\ & = \sum_{z,a,c} \sum_y y S(y | a, z, c) f(y | a,z,c) f(z | a^*, c) f(a | c) f(c) \notag \\ & + \sum_{z,a,c} E[Y | a, z, c] S(z | a^*, c) f(z | a^*, c) f(a | c) f(c) \notag \\ & + \sum_{z,a,c} E[Y | a, z, c] f(z | a^*,c) S(a | c) f(a | c) f(c) \notag \\ & + \sum_{z,a,c} E[Y | a, z, c] f(z | a^*, c) f(a | c) S(c) f(c) \notag \\ & \textrm{...details for each of the four terms are given below...} \notag \\ & = E\big[ (Y - E(Y | A,Z,C)) \frac{f(Z | a^*, C)}{f(Z | A,C)} \times S(Y, A, Z, C) \big] \label{term1} \tag{A1} \\ & + E\bigg[ \bigg( \sum_a E[Y | a, Z, C] f(a | C) \notag\\ & \hspace{2cm} - \sum_{a, \bar{m}} E(Y | a, \bar{m}, C) f(\bar{m} | \bar{A}, C) f(a | C) \bigg) \frac{I(\bar{A} = a^*)}{f(\bar{A} | C)} \times S(Y, A, Z, C) \bigg] \label{term2} \tag{A2} \\ & + E \bigg[ \bigg( \sum_{z} E[Y | A, z, C] f(z | a^*,C) \notag \\ & \hspace{2cm} - \sum_{a,z} E[Y | a,z,C] f(z | a^*,C) f(a | C) \bigg)\times S(Y, A, Z, C) \bigg] \label{term3} \tag{A3} \\ & + E \bigg[ \bigg( \sum_{z,a} E[Y | a,z,C] f(z | a^*, C) f(a | C) - \Psi \bigg) \times S(Y, A, Z, C) \bigg] \label{term4} \tag{A4} \end{align} \noindent Each of the four terms (\ref{term1})-(\ref{term4}) will be handled in turn. The goal is to get them in the form $E[ IF \times S] = \sum_{i=1}^4 E[ IF_{i} \times S(Y,A,Z,C) ]$.
\begin{align*} \textrm{(\ref{term1})} & = \sum_{z,a,c} \sum_y y S(y | a, z, c) f(y | a,z,c) f(z | a^*, c) f(a | c) f(c) \\ & = \sum_{z,a,c,y} y \frac{f(z | a^*, c)}{f(z | a,c)} f(z | a, c) f(y | a,z,c) f(a | c) f(c) S(y | a, z, c) \\ & \stackrel{*}{=} \sum_{z,a,c,y} (y - E[Y | a,z,c]) \frac{f(z | a^*, c)}{f(z | a,c)} f(y | a,z,c) f(z | a, c) f(a | c) f(c) S(y | a, z, c) \\ & \stackrel{**}{=} \sum_{z,a,c,y} (y - E[Y | a,z,c]) \frac{f(z | a^*, c)}{f(z | a,c)} f(y | a,z,c) f(z | a, c) f(a | c) f(c) \\ & \hspace{2cm} \times \big( S(y | a, z, c) + S(z | a,c) + S(a | c) + S(c) \big) \\ & = \sum_{z,a,c,y} (y - E[Y | a,z,c]) \frac{f(z | a^*, c)}{f(z | a,c)} S(y, a, z, c) f(y, a, z, c) \\ & = E\big[ (Y - E(Y | A,Z,C)) \frac{f(Z | a^*, C)}{f(Z | A,C)} \times S(Y, A, Z, C) \big] \end{align*} \noindent * The equality will hold because the added term will evaluate to zero as the expectation of a score is zero (in brackets), \\ $$\sum_{z,a,c} E[Y | a,z,c] \frac{f(z | a^*, c)}{f(z | a,c)} f(z | a, c) f(a | c) f(c) \bigg[ \sum_{y} S(y | a, z, c) f(y | a,z,c) \bigg] = 0$$ \noindent ** Similar to above, the additional terms will all evaluate to zero as the term in the large brackets is zero: \begin{align*} \sum_{z,a,c} \frac{f(z | a^*, c)}{f(z | a,c)} f(z | a, c) f(a | c) f(c) S( z | a, c) \bigg[ \sum_{y} (y - E[Y | a,z,c]) f(y | a,z,c) \bigg] & = 0 \\ \sum_{z,a,c} \frac{f(z | a^*, c)}{f(z | a,c)} f(z | a, c) f(a | c) f(c) S(a | c)\bigg[ \sum_{y} (y - E[Y | a,z,c]) f(y | a,z,c) \bigg] & = 0 \\ \sum_{z,a,c} \frac{f(z | a^*, c)}{f(z | a,c)} f(z | a, c) f(a | c) f(c) S(c) \bigg[ \sum_{y} (y - E[Y | a,z,c]) f(y | a,z,c) \bigg] & = 0 \end{align*} \begin{align*} \textrm{(\ref{term2})} & = \sum_{z,a,c} E[Y | a, z, c] S(z | a^*, c) f(z | a^*, c) f(a | c) f(c) \\ & = \sum_{z,c} \sum_{\bar{a}} \bigg( \sum_a E[Y | a, z, c] f(a | c) \bigg) I(\bar{a} = a^*)S(z | \bar{a}, c) f(z | \bar{a}, c) f(c) \\ & = \sum_{z,c,\bar{a}} \bigg( \sum_a E[Y | a, z, c] f(a | c) \bigg) \frac{I(\bar{a} = 
a^*)}{f(\bar{a} | c)} S(z | \bar{a}, c) f(z | \bar{a}, c) f(\bar{a} | c) f(c) \\ & \stackrel{*}{=} \sum_{z,c,\bar{a}} \bigg( \sum_a E[Y | a, z, c] f(a | c) - \sum_{a, \bar{m}} E(Y | a, \bar{m}, c) f(\bar{m} | \bar{a}, c) f(a | c) \bigg) \\ & \hspace{2cm} \times \frac{I(\bar{a} = a^*)}{f(\bar{a} | c)} S(z | \bar{a}, c) f(z | \bar{a}, c) f(\bar{a} | c) f(c) \\ & \stackrel{**}{=} \sum_{z,c,\bar{a}} \bigg( \sum_a E[Y | a, z, c] f(a | c) - \sum_{a, \bar{m}} E(Y | a, \bar{m}, c) f(\bar{m} | \bar{a}, c) f(a | c) \bigg) \\ & \hspace{2cm} \times \frac{I(\bar{a} = a^*)}{f(\bar{a} | c)} [S(z | \bar{a}, c) + S(\bar{a} | c) + S(c) ] f(z | \bar{a}, c) f(\bar{a} | c) f(c) \\ & = \sum_{z,c, \bar{a}, y}\bigg( \sum_a E[Y | a, z, c] f(a | c) - \sum_{a, \bar{m}} E(Y | a, \bar{m}, c) f(\bar{m} | \bar{a}, c) f(a | c) \bigg) \\ & \hspace{2cm} \times \frac{I(\bar{a} = a^*)}{f(\bar{a} | c)} [S(y | z, \bar{a}, c) + S(z | \bar{a}, c) + S(\bar{a} | c) + S(c) ] f(y | z, \bar{a}, c) f(z | \bar{a}, c) f(\bar{a} | c) f(c) \\ & = \sum_{z,c, \bar{a}, y} \bigg( \sum_a E[Y | a, z, c] f(a | c) - \sum_{a, \bar{m}} E(Y | a, \bar{m}, c) f(\bar{m} | \bar{a}, c) f(a | c) \bigg) \\ & \hspace{2cm} \times \frac{I(\bar{a} = a^*)}{f(\bar{a} | c)} f(y, \bar{a}, z, c) S(y, \bar{a}, z, c) \\ & = E\bigg[ \bigg( \sum_a E[Y | a, Z, C] f(a | C) - \sum_{a, \bar{m}} E(Y | a, \bar{m}, C) f(\bar{m} | \bar{A}, C) f(a | C) \bigg) \frac{I(\bar{A} = a^*)}{f(\bar{A} | C)} \times S(Y, A, Z, C) \bigg] \end{align*} \noindent *The reasoning here is identical to that for the first term. \\ \noindent **The reasoning here is identical to that for the first term.
\\ \begin{align*} \textrm{(\ref{term3})} & = \sum_{z,a,c} E[Y | a, z, c] f(z | a^*,c) S(a | c) f(a | c) f(c) \\ & \stackrel{*}{=}\sum_{c,a} \big( \sum_{z} E[Y | a, z, c] f(z | a^*,c) - \sum_{a,z} E[Y | a,z,c] f(z | a^*,c) f(a | c) \big) S(a | c) f(a | c) f(c) \\ & \stackrel{**}{=} \sum_{c,a} \sum_{\bar{m},y} \big( \sum_{z} E[Y | a, z, c] f(z | a^*,c) - \sum_{a,z} E[Y | a,z,c] f(z | a^*,c) f(a | c) \big) \\ & \hspace{2cm} \times f(y | a, \bar{m}, c) f(\bar{m} | a, c) f(a | c) f(c) [S(y | a, \bar{m}, c) + S(\bar{m} | a, c) + S(a | c) + S(c) ] \\ & = E \bigg[ \bigg( \sum_{z} E[Y | A, z, C] f(z | a^*,C) - \sum_{a,z} E[Y | a,z,C] f(z | a^*,C) f(a | C) \bigg)\times S(Y, A, Z, C) \bigg] \end{align*} \noindent *The reasoning here is identical to that for the first term. \\ \noindent **The reasoning here is identical to that for the first term. \begin{align*} \textrm{(\ref{term4})} & = \sum_{z,a,c} E[Y | a, z, c] f(z | a^*, c) f(a | c) S(c) f(c) \\ & \stackrel{*}{=} \sum_{c} \bigg( \sum_{z,a} E[Y | a,z,c] f(z | a^*, c) f(a | c) - \sum_{z,a,c} E[Y | a,z,c] f(z | a^*, c) f(a | c) f(c) \bigg) S(c) f(c) \\ & \stackrel{**}{=} \sum_{c} \sum_{y,\bar{a},\bar{m}} \bigg( \sum_{z,a} E[Y | a,z,c] f(z | a^*, c) f(a | c) - \sum_{z,a,c} E[Y | a,z,c] f(z | a^*, c) f(a | c) f(c) \bigg) \\ & \hspace{2cm} \times f(y | \bar{a}, \bar{m}, c) f(\bar{m} | \bar{a}, c) f(\bar{a} | c) f(c) [S(y | \bar{a}, \bar{m}, c) + S(\bar{m} | \bar{a}, c) + S(\bar{a} | c) + S(c) ] \\ & = E \bigg[ \bigg( \sum_{z,a} E[Y | a,z,C] f(z | a^*, C) f(a | C) - \sum_{z,a,c} E[Y | a,z,c] f(z | a^*, c) f(a | c) f(c) \bigg) \times S(Y, A, Z, C) \bigg] \\ & = E \bigg[ \bigg( \sum_{z,a} E[Y | a,z,C] f(z | a^*, C) f(a | C) - \Psi \bigg) \times S(Y, A, Z, C) \bigg] \end{align*} \noindent *The reasoning here is identical to that for the first term. \\ \noindent **The reasoning here is identical to that for the first term.
\\ \noindent Thus, the efficient influence function under the nonparametric model is as follows: \begin{align*} \varphi^{eff}(Y,Z,A,C) & = (Y - E(Y | A,Z,C)) \frac{f(Z | a^*, C)}{f(Z | A,C)} \\ & \hspace{1cm} + \frac{I(A = a^*)}{f(A | C)} \big( \sum_a E[Y | a, Z, C] f(a | C) - \sum_{a, \bar{z}} E(Y | a, \bar{z}, C) f(\bar{z} | A, C) f(a | C) \big) \\ & \hspace{2cm} + \sum_{z} E[Y | A, z, C] f(z | a^*,C) - \Psi \end{align*} \subsection{Proof of Theorem 2.} We first show that the influence function derived in Theorem 1 has expectation 0 if one of the following scenarios holds: \begin{enumerate} \item $E(Y | a,z,c)$ and $f(a | c)$ are correct \item $f(z | a,c)$ is correct \end{enumerate} \begin{align*} & \hspace{-1.4cm} \textrm{\underline{1. $E(Y | a,z,c) \textrm{ \& } f(a | c)$ correctly specified and $\tilde{f}(z | a, c)$ misspecified}} \\ E[\varphi^{eff}] & = E[(Y - E(Y | A,Z,C)) \frac{\tilde{f}(Z | a^*, C)}{\tilde{f}(Z | A,C)}] \\ & \hspace{1cm} + E[\frac{I(A = a^*)}{f(A | C)} \big( \sum_a E[Y | a, Z, C] f(a | C) - \sum_{a, z} E(Y | a, z, C) \tilde{f}(z | A, C) f(a | C) \big)] \\ & \hspace{2cm}+ E[\sum_{z} E[Y | A, z, C] \tilde{f}(z | a^*,C) - \Psi] \\ & = 0 + \sum_{a',z,c} \frac{I(a' = a^*)}{f(a' | c)} \bigg( \sum_a E[Y | a, z, c] f(a | c)- \sum_{a, z} E(Y | a, z, c) \tilde{f}(z | a', c) f(a | c) \bigg) f(z,a',c) \\ & \hspace{2cm} + E[\sum_{z} E[Y | A, z, C] \tilde{f}(z | a^*,C) - \Psi] \\ & = \sum_{z,c} \sum_a E[Y | a, z, c] f(a | c) f(z | a^*,c) f(c) - \sum_{a, z,c} E(Y | a, z, c) \tilde{f}(z | a^*, c) f(a | c) f(c) \\ & \hspace{1cm} + E[\sum_{z} E[Y | A, z, C] \tilde{f}(z | a^*,C) - \Psi] \\ & = \Psi - \sum_{a, z,c} E(Y | a, z, c) \tilde{f}(z | a^*, c) f(a | c) f(c) + \sum_{a,c} \sum_{z} E[Y | a, z, c] \tilde{f}(z | a^*,c) f(a | c) f(c) - \Psi \\ & = 0 \end{align*} \begin{align*} & \hspace{-1.4cm} \textrm{\underline{2.
$f(z | a, c)$ correctly specified and $\tilde{E}(Y | a,z,c) \textrm{ \& } \tilde{f}(a | c)$ misspecified}} \\ E[\varphi^{eff}] & = E[(Y - \tilde{E}(Y | A,Z,C)) \frac{f(Z | a^*, C)}{f(Z | A,C)}] \\ & \hspace{1cm} + E[\frac{I(A = a^*)}{\tilde{f}(A | C)} \big( \sum_a \tilde{E}[Y | a, Z, C] \tilde{f}(a | C) - \sum_{a, z} \tilde{E}(Y | a, z, C) f(z | A, C) \tilde{f}(a | C) \big)] \\ & \hspace{2cm} + E[\sum_{z} \tilde{E}[Y | A, z, C] f(z | a^*,C) - \Psi] \\ & = \sum_{c,a,z} (E(Y | a,z,c) - \tilde{E}(Y | a,z,c)) f(z | a^*, c) f(a | c) f(c) \\ & \hspace{1cm} + \sum_{c,a,z} \frac{1}{\tilde{f}(a | c)} \bigg( \tilde{E}[Y | a, z, c] \tilde{f}(a | c) f(z | a^*,c) f(a^* | c)f(c) \\ &\hspace{3cm} - \tilde{E}[Y | a, z, c] \tilde{f}(a | c) f(z | a^*,c) f(a^* | c)f(c) \bigg) \\ & \hspace{2cm} + \sum_{c,a,z} \tilde{E}[Y | a, z, c] f(z | a^*,c) f(a | c) f(c) - \Psi \\ & = \Psi - \sum_{c,a,z} \tilde{E}(Y | a,z,c) f(z | a^*, c) f(a | c) f(c) + \sum_{c,a,z} \tilde{E}[Y | a, z, c] f(z | a^*,c) f(a | c) f(c) - \Psi \\ & = 0 \end{align*} \noindent Assuming the regularity conditions of Theorem 1A in \cite{robins1992estimating} hold for $\varphi^{eff}(Y,Z,A,C)$, the expression follows by standard Taylor expansion arguments: $$\sqrt{n} (\hat{\Psi}_{dr} - \Psi) = \frac{1}{\sqrt{n}} \sum_{i=1}^{n} \varphi^{eff}(Y_i, Z_i, A_i, C_i)+ o_p(1) $$ \noindent The asymptotic distribution of the left hand side under $\mathcal{M}_{union}$ follows from the previous equation by the Central Limit Theorem and Slutsky's. 
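Theorem 2's double-robustness claim can also be checked numerically at the population level. The sketch below is a hypothetical toy example (not from the paper): it fixes a discrete observed law for binary $(C, A, Z, Y)$, evaluates $E[\varphi^{eff}] + \Psi$ under the true law with deliberately misspecified nuisance models, and confirms that the result still equals $\Psi$ in each of the two scenarios of the proof, but not when all three nuisances are wrong.

```python
import itertools

A_STAR = 0
bern = lambda p1, v: p1 if v == 1 else 1.0 - p1

# True observed law (all numbers hypothetical), binary C, A, Z, Y.
fC1 = 0.3
fA1 = lambda c: 0.25 + 0.40 * c                       # true f(A=1 | c)
fZ1 = lambda a, c: 0.20 + 0.45 * a + 0.15 * c         # true f(Z=1 | a, c)
EYt = lambda a, z, c: 0.15 + 0.3*a + 0.25*z + 0.1*c   # true E[Y | a, z, c]

def psi():
    return sum(bern(fC1, c) * bern(fZ1(A_STAR, c), z)
               * sum(EYt(a, z, c) * bern(fA1(c), a) for a in (0, 1))
               for z, c in itertools.product((0, 1), repeat=2))

def dr_population_value(EY, fZ, fA):
    """E[phi^eff + Psi] under the TRUE law, with (possibly wrong) nuisances."""
    tot = 0.0
    for c, a, z in itertools.product((0, 1), repeat=3):
        w = bern(fC1, c) * bern(fA1(c), a) * bern(fZ1(a, c), z)
        t1 = (EYt(a, z, c) - EY(a, z, c)) * bern(fZ(A_STAR, c), z) / bern(fZ(a, c), z)
        t2 = (a == A_STAR) / bern(fA(c), a) * (
            sum(EY(ap, z, c) * bern(fA(c), ap) for ap in (0, 1))
            - sum(EY(ap, zp, c) * bern(fZ(a, c), zp) * bern(fA(c), ap)
                  for ap, zp in itertools.product((0, 1), repeat=2)))
        t3 = sum(EY(a, zp, c) * bern(fZ(A_STAR, c), zp) for zp in (0, 1))
        tot += w * (t1 + t2 + t3)
    return tot

EY_bad = lambda a, z, c: 0.5
fZ_bad = lambda a, c: 0.5
fA_bad = lambda c: 0.5

# Scenario 1: outcome & exposure models right, mediator model wrong.
assert abs(dr_population_value(EYt, fZ_bad, fA1) - psi()) < 1e-12
# Scenario 2: mediator model right, outcome & exposure models wrong.
assert abs(dr_population_value(EY_bad, fZ1, fA_bad) - psi()) < 1e-12
# All three wrong: the bias no longer vanishes.
assert abs(dr_population_value(EY_bad, fZ_bad, fA_bad) - psi()) > 1e-3
```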
\section{Additional materials} \subsection{Judea Pearl's front-door criterion} \begin{align*} E(Y(a^*)) & = \sum_{z,c} E(Y(a^*) | Z(a^*) = z, C=c) Pr(Z(a^*) =z | C=c) Pr(C=c) \\ & \stackrel{M2}{=} \sum_{z,c} E(Y(a^*) | Z(a^*) = z, C=c) Pr(Z(a^*) =z | C=c, A=a^*) Pr(C=c) \\ & \stackrel{M1}{=} \sum_{z,c} E(Y(a^*,z) | Z(a^*) = z, C=c) Pr(Z = z | C=c, A=a^*) Pr(C=c) \\ & \stackrel{F1}{=} \sum_{z,c} E(Y(z) | Z(a^*) = z, C=c) Pr(Z = z | C=c, A=a^*) Pr(C=c) \\ & = \sum_{z,c} \bigg[ \sum_a E(Y(z) | Z(a^*) = z, C=c, A=a) Pr(A=a | C=c) \bigg] Pr(Z = z | C=c, A=a^*) Pr(C=c) \\ & \stackrel{M3}{=} \sum_{z,c} \bigg[ \sum_a E(Y(z) | A=a, C=c) Pr(A=a | C=c) \bigg] Pr(Z = z | C=c, A=a^*) Pr(C=c) \\ & = \sum_{z,c} Pr(Z = z | C=c, A=a^*) \sum_a E(Y(z) | A=a, C=c) Pr(A=a | C=c) Pr(C=c)\\ & \stackrel{M3}{=} \sum_{z,c} Pr(Z = z | C=c, A=a^*) \sum_a E(Y(z) | A=a, Z=z, C=c) Pr(A=a | C=c) Pr(C=c)\\ & \stackrel{M1}{=} \sum_{z,c} Pr(Z = z | C=c, A=a^*) \sum_a E(Y | A=a, Z=z, C=c) Pr(A=a | C=c) Pr(C=c)\\ & = \Psi \end{align*} \noindent M3 encodes a so-called ``cross-world'' assumption as it posits that $Y(a,z)$ is independent of $Z(a^*)$ given $A$ and $C$, which occur in different worlds (e.g. they cannot be represented in a single world intervention graph). Note that under F1, assumption M3 is no longer a cross-world assumption as it becomes $Y(z) \perp Z(a^*) \mid A=a,C=c \ \forall \ z,a,a^*,c$ \citep{pearl2009causality}. Additionally, evaluating this reduced independence at $a^* = a$ yields, by consistency, $Y(z) \perp Z \mid A=a,C=c \ \forall \ z,a,c$. Thus, identification occurs under the Finest Fully Randomized Causally Interpretable Structured Tree Graph interpretation of Figure 1b. We also refer the reader to \cite{shpitser2016causal}, which discusses a general result giving conditions under which identifying functionals for indirect effects (which generalize the PIIE) and total effects (which generalize the front-door causal effect) coincide (see Lemma 7.3).
\subsection{NPSEM-IE Interpretation of the causal diagram} \noindent We can formalize the conditions for identification of $\Psi$ under Figure 1c or assumptions M1-M4 using a system of equations known as a ``Nonparametric Structural Equation Model''. We assign an equation to each variable as below: \begin{center} \begin{align*} U & = g_U(\varepsilon_U) \\ C & = g_C(\varepsilon_C) \\ A & = g_A(C,U,\varepsilon_A) \\ Z & = g_Z(A,C,\varepsilon_Z) \\ Y & = g_Y(Z,A,U,C,\varepsilon_Y) \end{align*} \end{center} \noindent Each of the five random variables on this graph is associated with a distinct, arbitrary function, denoted $g$, and a distinct random disturbance, denoted $\varepsilon$, each with a subscript corresponding to its respective random variable. Each variable is generated by its corresponding function, which depends only on the variables that affect it directly. These equations provide a nonparametric algebraic interpretation of Figure 1c, and are helpful in defining potential outcomes. The identification conditions given above can be formalized in terms of independence conditions on the errors; specifically, we require all the errors to be independent. \subsection{Parametric derivation for PIIE} \noindent Here, we derive a parametric expression for $E[Y(Z(a^*))]$ where we include parametric models for both $Y$ and $Z$ and leave the joint distribution of $A,C$ unspecified (e.g. estimated by its empirical distribution as described for $\hat{\Psi}^{alt}_{mle}$). We will compare the parametric estimator $\hat{\Psi}^{alt}_{mle}$ to the semiparametric estimators $\hat{\Psi}_{1},\hat{\Psi}_{2},\hat{\Psi}_{dr}$ in our simulation study.
The two models are as follows: $$ E[Y | A=a, Z=z, C=c] = \theta_0 + \theta_1 a + \theta_2 z + \theta_3 az + \theta_4^T c $$ $$ E[Z | A=a^*, C=c] = \beta_0 + \beta_1a^* + \beta_2^Tc $$ \begin{align*} E[Y(Z(a^*))] & = \sum_{c} \sum_{z} Pr(Z = z | A=a^*, C=c) \sum_{a} E(Y | A=a, Z=z, C=c) Pr(A=a, C=c) \\ & = \sum_{c} \sum_{z} Pr(Z = z | A=a^*, C=c) \sum_{a} (\theta_0 + \theta_1 a + \theta_2 z + \theta_3 az + \theta_4^T c) Pr(A=a,C=c)\\ & = \sum_{c} Pr(C=c) \sum_{z} Pr(Z = z | A=a^*, C=c) \times \\ & \hspace{3.5cm}\big(\theta_0 + \theta_1 E[A | C=c] + \theta_2 z + \theta_3z E[A | C=c] + \theta_4^T c \big) \\ & = \theta_0 + \theta_1 \sum_{c} E[A | C=c] Pr(C=c) + \theta_2 \sum_{c} Pr(C=c) \sum_{z} z Pr(Z = z | A=a^*, C=c) \\ & \hspace{.7cm} + \theta_3 \sum_{c} Pr(C=c) \sum_{z} z E[A | C=c] Pr(Z = z | A=a^*, C=c) + \theta_4^T E[C] \\ & = \theta_0 + \theta_1 E[A] + \theta_2 \sum_{c} Pr(C=c) E[Z | A=a^*, C=c] \\ & \hspace{2cm} + \theta_3 \sum_{c} Pr(C=c) E[A | C=c] E[Z | A=a^*, C=c] + \theta_4^T E[C] \\ & = \theta_0 + \theta_1 E[A] + \theta_2 \sum_{c} Pr(C=c) (\beta_0 + \beta_1a^* + \beta_2^Tc) \\ & \hspace{2cm} + \theta_3 \sum_{c} Pr(C=c) E[A | C=c] (\beta_0 + \beta_1a^* + \beta_2^Tc) + \theta_4^T E[C] \\ & = \theta_0 + \theta_1 E[A] + \theta_2 \beta_0 + \theta_2 \beta_1 a^* + \theta_2 \beta_2^T E[C] + \theta_3 \beta_0 E[A] + \theta_3 \beta_1 a^* E[A] \\ & \hspace{2cm} + \theta_3 \beta_2^T \sum_{c} c Pr(C=c) E[A | C=c]+ \theta_4^T E[C] \\ & = \theta_0 + \theta_1 E[A] + \theta_2 \beta_0 + \theta_2 \beta_1 a^* + \theta_2 \beta_2^T E[C] + \theta_3 \beta_0 E[A] + \theta_3 \beta_1 a^* E[A] \\ & \hspace{2cm} + \theta_3 \beta_2^T E[C E[A | C]]+ \theta_4^T E[C] \\ & = \theta_0 + \theta_2 \beta_0 + \theta_2 \beta_1 a^* + (\theta_1 + \theta_3 \beta_0 + \theta_3 \beta_1 a^*)E[A] + (\theta_2 \beta_2^T + \theta_4^T) E[C] + \theta_3 \beta_2^T E[AC] \end{align*} \noindent For estimation, the empirical mean can be used for $E[A]$, $E[C]$, and $E[AC]$.
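The closed-form expression above can be checked against the plug-in double sum on a small discrete example. The sketch below uses hypothetical coefficient values and a scalar binary $C$ (so $\beta_2$ and $\theta_4$ are scalars); it illustrates the algebra only and is not the paper's data analysis.

```python
import itertools

# Hypothetical coefficients for the two working models (scalar C):
#   E[Y | a, z, c] = th0 + th1*a + th2*z + th3*a*z + th4*c
#   E[Z | a, c]    = b0 + b1*a + b2*c
th0, th1, th2, th3, th4 = 0.2, 0.5, -0.3, 0.7, 0.4
b0, b1, b2 = 0.1, 0.3, 0.2
a_star = 0

# A discrete joint law for (A, C), both binary; Z | a, c ~ Bernoulli(EZ(a, c)).
pC1 = 0.4
pA1 = lambda c: 0.3 + 0.35 * c
bern = lambda p1, v: p1 if v == 1 else 1.0 - p1

EY = lambda a, z, c: th0 + th1*a + th2*z + th3*a*z + th4*c
EZ = lambda a, c: b0 + b1*a + b2*c

def plug_in():
    """Direct evaluation of sum_c sum_z Pr(z|a*,c) sum_a E(Y|a,z,c) Pr(a,c)."""
    return sum(bern(pC1, c) * bern(EZ(a_star, c), z)
               * sum(EY(a, z, c) * bern(pA1(c), a) for a in (0, 1))
               for z, c in itertools.product((0, 1), repeat=2))

def closed_form():
    """The final line of the derivation above."""
    EA = sum(bern(pC1, c) * pA1(c) for c in (0, 1))
    EC = pC1
    EAC = pC1 * pA1(1)        # E[AC] = P(C=1) P(A=1 | C=1) for binary A, C
    return (th0 + th2*b0 + th2*b1*a_star
            + (th1 + th3*b0 + th3*b1*a_star) * EA
            + (th2*b2 + th4) * EC
            + th3*b2 * EAC)

assert abs(plug_in() - closed_form()) < 1e-12
```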
In this setting, there is a closed form expression for the variance. In settings where the outcome or mediator variable is binary, the variance can be computed using the sandwich variance or via the nonparametric bootstrap. \begin{align*} Var(PIIE) & = \beta_1 \theta_2(\beta_1 \theta_3 Cov(A,A^2) + \beta_1 \theta_2Var(A)) + \beta_1 \theta_3 (Var(A^2) \beta_1 \theta_3 + \beta_1 \theta_2 Cov(A,A^2)) \\ & \hspace{1cm} + ( E(A) \theta_2 + E(A^2) \theta_3)^2 Var(\beta_1) + E(A) \beta_1 ( E(A) \beta_1 Var(\theta_2) + E(A^2) \beta_1 Cov(\theta_2,\theta_3)) \\ & \hspace{2cm} + E(A^2) \beta_1 ( E(A) \beta_1 Cov(\theta_2,\theta_3) + E(A^2) \beta_1 Var(\theta_3) ) \end{align*} \noindent For estimation with binary $A$, $E(A) = E(A^2) = \overline{A}$, $Var(A) = Var(A^2) = Cov(A,A^2) = S_A^2$ (sample variance), and all the parameters are estimated via their MLE in R. \subsection{Sandwich variance} \noindent Let $\theta$ denote the vector of all $K$ parameters and $U(\theta) = [U_1^T, ... , U_K^T]^T$ denote the score vector, where the $K$th score corresponds to the score for $\Psi$. A consistent estimator for the asymptotic variance of $\hat{\theta}$ is: $$ \widehat{Var(\theta)} = [\sum_{i=1}^n \frac{dU(\theta)}{d\theta} |_{\theta = \hat{\theta}} ]^{-1} U(\hat{\theta})^T U(\hat{\theta}) [\sum_{i=1}^n \frac{dU(\theta)}{d\theta} |_{\theta = \hat{\theta}} ]^{-{\rm T}} $$ \noindent Further, a consistent estimator for the asymptotic variance of $\hat{\Psi}$ corresponds to the $\widehat{Var(\theta)}_{K,K}$ element.
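A minimal numerical sketch of this sandwich construction follows. The two stacked estimating functions below are toy stand-ins (not the paper's actual scores): the bread is the Jacobian of the summed scores at $\hat\theta$, the meat is $U(\hat\theta)^{\rm T}U(\hat\theta)$, and the last diagonal entry of the sandwich estimates the variance of the final parameter.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(1.0, 1.0, size=5000)

# Toy stacked estimating functions U(theta) = [x - mu, x^2 + mu - psi],
# so the last component targets psi = E[X^2] + E[X] (a stand-in for the Psi score).
def U(theta, x):
    mu, psi = theta
    return np.stack([x - mu, x**2 + mu - psi], axis=1)  # n x K matrix of scores

theta_hat = np.array([x.mean(), (x**2).mean() + x.mean()])
n, K = len(x), 2

# Bread: numerical Jacobian of sum_i U_i evaluated at theta_hat.
eps = 1e-6
bread = np.empty((K, K))
for k in range(K):
    d = np.zeros(K); d[k] = eps
    bread[:, k] = (U(theta_hat + d, x).sum(0) - U(theta_hat - d, x).sum(0)) / (2 * eps)

meat = U(theta_hat, x).T @ U(theta_hat, x)              # sum_i U_i U_i^T
binv = np.linalg.inv(bread)
var_hat = binv @ meat @ binv.T                          # sandwich estimator

# The (K, K) entry estimates Var(psi_hat); for this toy model the influence
# function for psi is x^2 + x - psi, so it should match Var(X^2 + X)/n.
se_psi = np.sqrt(var_hat[K - 1, K - 1])
direct = np.sqrt(np.var(x**2 + x, ddof=1) / n)
assert abs(se_psi - direct) / direct < 0.05
```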
\\ \subsection{Population intervention effect and the total effect among exposed} For binary $A$ and $a^* = 0$, \begin{align*} PIE(0) & = E(Y - Y(0)) \\ & = E(AY + (1-A)Y -Y(0)) \\ & = E(AY(1) + (1-A)Y(0) - Y(0)) \textrm{ by Consistency} \\ & = E(A(Y(1) - Y(0)) + Y(0) - Y(0) ) \\ & = E(A(Y(1) - Y(0)) ) \\ & = E( E(A (Y(1) - Y(0)) | A) ) \\ & = E( Y(1) - Y(0) | A=1) Pr(A=1) \\ & = ETT \ Pr(A=1) \end{align*} \subsection{Alternative population intervention effect decomposition} We could have used an alternative decomposition of the population intervention effect, \begin{align} PIE(a^*) & = E[Y - Y(a^*,Z(a^*))] = \underbrace{E[Y(a^*,Z) - Y(a^*)]}_\text{ indirect effect} + \underbrace{E[ Y - Y(a^*,Z) ]}_\text{ direct effect} \label{alt_decomp} \tag{A5} \end{align} The use of this alternative decomposition would not guarantee robustness to exposure-outcome confounding, as the indirect effect includes the term $E(Y(a^*))$, which requires no unmeasured confounding of the exposure-outcome relation for identification (similar to our PIDE). Additionally, identification of the term $E(Y(a^*,Z))$ requires a different set of assumptions that will not lead to a connection with the front-door formula. Under certain conditions, the indirect and direct effects from (\ref{alt_decomp}) connect to work by \cite{vansteelandt2012natural} and \cite{sjolander7mediation}. We discuss both below.
\subsubsection{Connection to Vansteelandt and Vanderweele (2012)} If we were to condition on the exposed, the indirect and direct effects from (\ref{alt_decomp}) align with the effect decomposition of the effect of treatment on the treated (ETT), also known as the total effect on the exposed, described by \cite{vansteelandt2012natural}, $$ ETT = E[ Y(1) - Y(0) | A=1 ] = \underbrace{E[Y(0,Z) - Y(0) | A = 1]}_\text{natural indirect effect on the exposed} + \underbrace{E[ Y - Y(0,Z) | A = 1]}_\text{natural direct effect on the exposed} $$ \noindent The identification conditions needed for \cite{vansteelandt2012natural}'s indirect effect differ from those needed for the PIIE. These are listed in their paper (Section 4), but we also state them using our notation below: \begin{align*} \textrm{M1.} \ & \textrm{Consistency assumptions: } \textrm{(1) If $A=a$, then $Z(a) =Z$ w.p.1}, \\ & \hspace{4.5cm} \textrm{(2) If $A=a$, then $Y(a) =Y$ w.p.1}, \\ & \hspace{4.5cm} \textrm{(3) If $A=a$ and $Z=z$, then $Y(a,z) =Y$ w.p.1} \\ \textrm{M3.} \ & Y(a,z) \perp Z | A=a,C=c \ \ \forall \ z,a,c \\ \textrm{M4.} \ & Y(a,z) \perp A | C=c \ \ \ \forall \ z,a, c \\ \end{align*} \noindent These assumptions could be formulated under a Nonparametric Structural Equation Model with Independent Errors (NPSEM-IE) interpretation of the diagram in Figure 1a. Note that they do not follow from an FFRCISTG interpretation of the diagram.
\subsubsection{Connection to Sj{\"o}lander (2018)} If we were to scale by the proportion of persons with the outcome, the indirect and direct effects from (\ref{alt_decomp}) align with the effect decomposition of the attributable fraction (AF) described by \cite{sjolander7mediation}, $$ AF = E[Y-Y(a^*)]/E[Y] = \underbrace{E[ Y(a^*,Z)-Y(a^*) ]/E[Y]}_\text{natural indirect attributable fraction} + \underbrace{E[Y-Y(a^*,Z)]/E[Y]}_\text{natural direct attributable fraction} $$ \noindent The identification conditions needed for Sj{\"o}lander (2018)'s indirect effect are the same as those listed for \cite{vansteelandt2012natural}'s indirect effect. However, in addition to a consistency assumption, they state the necessary assumption for identification as $Y(a,Z) \perp A | Z,C$, which is implied by M3 and M4.
\section{Introduction} In this paper we investigate the subspace Restricted Isometry Property (RIP) of random projections and try to capture the root of subspace RIP. In more intuitive language, given two linear subspaces in an ambient space, we ask for which types of random projections the ``distance'' between these two subspaces, when defined properly, is almost invariant after projection. The precise meaning of these terms will be presented later in this section. Before that, we ground our results with some preliminaries. \subsection{Background} High-dimensional signals can be computationally expensive, or even intractable, to analyze. Fortunately, many real-world high-dimensional signals are of low-dimensional nature. In this vein, numerous low dimensional models have been proposed and have attracted remarkable interest from researchers in signal processing \cite{Bruckstein2009Sparse,Baraniuk2009Random,Elad2010Sparse}. Union of Subspaces (UoS) is a powerful low dimensional model which subsumes many classical models, including sparse representation, and has been used extensively in the recent decade \cite{Eldar2009Robust}. Briefly speaking, the UoS model assumes that in a dataset with high ambient dimension, the data points actually lie on a few low dimensional linear subspaces, and these subspaces characterize the intrinsic structure of the dataset. Subspace clustering \cite{Soltanolkotabi2012Geometric,Elhamifar2013Sparse,Soltanolkotabi2014Robust,Heckel2015Rsobust} is one of the various successful applications of the UoS model and has achieved impressive performance in tasks such as motion segmentation, face clustering, and anomaly detection. Moreover, the performance of subspace clustering is theoretically guaranteed under fairly general conditions, a fact proved in \cite{Soltanolkotabi2012Geometric} based on the concept of affinity, cf. Definition \ref{def:affinity}.
However, for traditional subspace clustering algorithms there is a high computational cost in building the so-called similarity representation when the dataset is of high dimension. This defect can be overcome by random compression, as is done in Compressed Subspace Clustering (CSC) \cite{Mao2014Compressed, Meng2018CSC}. While random compression can significantly reduce the computational burden, it raises a new concern: the affinity between two subspaces may not be preserved after random compression, hence it is not clear whether there is a theoretical guarantee for CSC. Part of the above concern was addressed in \cite{Heckel2014Subspace, Heckel2017Dimensionality, Wang2019Theoretical}, which provided theoretical analyses for several popular CSC algorithms. However, these analyses are done per algorithm and do not focus on the concept of affinity. A theorem on the ``invariance property'' of affinity under random projections would constitute a more universal framework to analyze the performance of CSC. Such a theorem was given in \cite{Li2018Restricted, Li2019Rigorous}, which basically states that the change of affinity between two subspaces under a Gaussian random projection is small with high probability. Since affinity is closely related to the notion of projection Frobenius-norm distance between subspaces, this implies that the projection Frobenius-norm distance between subspaces is approximately preserved by a Gaussian random projection, a property termed the \emph{subspace Restricted Isometry Property} (subspace RIP), resembling the classical RIP for sparse vectors. This paper is devoted to a thorough investigation of subspace RIP. Our first aim is to answer the question: what should be the proper abstract setting to study subspace RIP, or more precisely, what is the essential property of a matrix that leads to subspace RIP? We will prove that this essential property is that the matrix acts as a near-isometry on any low-dimensional subspace.
This is not obvious and requires involved analysis. In fact, a naive argument using near-isometry will lose a factor of the dimension of the subspaces, hence will be far from optimal. This fundamental result will be used to prove the subspace RIP for a large variety of random matrices, including subgaussian matrices and other random matrices with the exponential Johnson-Lindenstrauss property, partial Fourier/Hadamard matrices and other randomly sampled Bounded Orthonormal Systems (BOS) \cite{Foucart2017Mathematical}, partial circulant/Toeplitz matrices \cite{Rauhut2012Restricted}, and also some typical heavy-tailed matrices, e.g. those with independent strongly regular rows \cite{Srivastava2013Covariance} or log-concave ensembles \cite{Adamczak2010Quantitative, Adamczak2011Sharp}. These results provide a universal framework to analyze the subspace RIP of random matrices and their effects on subspace-related tasks, which requires rather weak assumptions on the random matrix but yields universal performance guarantees that are not constrained to specific algorithms. {\color{black} \subsection{Our Contribution} In this paper we prove that the essential property of a matrix that leads to subspace RIP is that the matrix acts as a near-isometry on any low-dimensional subspace. This accounts for the root of subspace RIP and provides the proper abstract setting, or a unified approach, to discuss subspace RIP. Both the statement and the proof of this result are deterministic, and thus differ from previous work on subspace RIP \cite{Li2018Restricted, Li2019Rigorous}, which relied heavily on delicate probabilistic analysis of Gaussian matrices and cannot be decoupled into deterministic and probabilistic parts in an obvious way; it is not even clear how the proof there generalizes to subgaussian matrices. More discussion of this difference is carried out after sufficient technical preparation, in Section \ref{sec:discussions}.
With this result, we are able to provide an easy proof of subspace RIP for random matrices with the exponential Johnson-Lindenstrauss property, e.g. subgaussian matrices, which generalizes the conclusion of \cite{Li2018Restricted, Li2019Rigorous}. Moreover, we will also prove that randomly sampled BOS, e.g. partial Fourier/Hadamard matrices, and partial circulant/Toeplitz matrices possess subspace RIP. These are matrices with fast matrix-vector multiplication algorithms that permit wide application and could significantly accelerate computation in practice, and our results validate their use in subspace-related tasks. Note that in \cite{Heckel2017Dimensionality} it was claimed that randomly sampled BOS could be used for random compression in subspace clustering while preserving the clustering performance, but the proof was based on the assertion that randomly sampled BOS satisfies the exponential Johnson-Lindenstrauss property, which was not provided with a legitimate proof there. The proof strategy in \cite{Heckel2017Dimensionality} appears feasible only for showing that randomly sampled BOS satisfies the exponential Johnson-Lindenstrauss property with unreasonably small constants, which is not helpful in practice. As such, our results constitute a more effective guarantee for the performance of partial Fourier/Hadamard matrices and partial circulant/Toeplitz matrices in subspace clustering. Recently, there has been rising interest in heavy-tailed random matrices. We will deal with two typical types of such random matrices, namely those with finite $4+\epsilon$ moments and log-concave ensembles, and show how a combination of our characterization of subspace RIP and well-known results in covariance estimation easily implies the subspace RIP of these random matrices.
From a practical point of view, our result holds for much more general random matrices compared with the subspace RIP for Gaussian random matrices in \cite{Li2019Rigorous}, hence allows the application of random matrices that are more useful in practice, for instance those matrices which are easier to generate and store on hardware, e.g. Bernoulli matrices, or those which arise naturally in the physical world and are more efficient to compute, e.g. partial Fourier/Hadamard matrices and partial circulant/Toeplitz matrices. Most of these matrices are inaccessible within the proof strategy of previous works on subspace RIP \cite{Li2018Restricted, Li2019Rigorous}. As pointed out in \cite{Hinojosa2018Coded}, in applications such as compressive spectral imaging, typical projection matrices are not Gaussian. Instead, Bernoulli matrices can be used \cite{Martin2016Hyperspectral}. Our results demonstrate more practical scenarios where techniques of random projections and, in particular, CSC algorithms may apply. } \subsection{Notations and Conventions} Throughout this paper, $c$ and $C$ denote two positive universal constants that may vary upon each appearance, while $\tilde c$ is the constant appearing in the definition of the exponential Johnson-Lindenstrauss property, cf. Definition \ref{def:jl-property}. Bold upper case letters, e.g. ${\vect A}$, are used to denote a matrix, while bold lower case letters, e.g. $\u$, are used to denote a vector. $\bm\Phi$ will always be a random matrix. If $\mathcal X$ is a linear subspace of $\mathbb R^N$, $\mathcal X^\perp$ denotes its orthogonal complement. The orthogonal projection onto a subspace $\mathcal X$ will be denoted by $\mathcal P_{\mathcal X}$. The maximal and minimal singular values of a matrix ${\vect A}$ will be denoted by $s_{\max}({\vect A})$ and $s_{\min}({\vect A})$. $\|\v\|$ is the Euclidean norm of the vector $\v$, and $\|{\vect A}\|_{\rm F}$ is the Frobenius norm of the matrix ${\vect A}$.
The $(n-1)$-dimensional unit sphere in $\mathbb R^n$ is denoted by $\mathbb S^{n-1}$, i.e. $\mathbb S^{n-1}=\{{\vect x}\in\mathbb R^n:\|{\vect x}\|=1\}$. The affinity between subspaces $\mathcal X_1$, $\mathcal X_2$, defined in Definition \ref{def:affinity}, will be denoted by ${\operatorname{aff}}(\mathcal X_1,\mathcal X_2)$. Occasionally we will write $\aff_{\set X}$ (resp. $\aff_{\set Y}$) as an abbreviation of ${\operatorname{aff}}(\mathcal X_1, \mathcal X_2)$ (resp. ${\operatorname{aff}}(\mathcal Y_1,\mathcal Y_2)$). The probability of an event is denoted by ${\mathbb P}(\cdot)$. The expectation of a random variable/vector/matrix is denoted by $\mathbb E(\cdot)$. We will be a bit blurry when using ``infinitesimal'' $\varepsilon$. That is, we will implicitly shrink the value of $\varepsilon$ by a constant ratio when needed. For example, we will assert ${\mathbb P}(X>\varepsilon)<{\rm e}^{-c\varepsilon^2n}$ while we actually proved ${\mathbb P}(X>2\varepsilon)<{\rm e}^{-c\varepsilon^2n}$. Such gaps are usually easy to fill and harmless to skip. In fact, the former statement can be easily derived from the latter by replacing $\varepsilon$ with $\varepsilon/2$ and replacing $c$ with $4c$. \subsection{Organization} {\color{black} The rest of this paper is organized as follows. In Section \ref{sec:preliminaries}, definitions and basic properties of affinity are provided. In Section \ref{sec:main-results}, we state our main theorem that a matrix acting as a near-isometry on a subspace $\mathcal X$ preserves the affinity and projection Frobenius-norm distance between any pair of subspaces in $\mathcal X$. Using this theorem, we analyze several important classes of random matrices in Section \ref{sec:examples} and prove their subspace RIP. Section \ref{sec:proof} is devoted to the proof of the main theorem. Section \ref{sec:discussions} provides some further comments on proof strategies and comparison with related works. 
Section \ref{sec:applications} briefly introduces some examples among the various potential applications of our theory. Section \ref{sec:simulations} verifies our results on a real-world dataset. Finally, in Section \ref{sec:conclusion} we conclude the paper. } \section{Preliminaries}\label{sec:preliminaries} A key ingredient in the statement of our results is the \emph{affinity} between two subspaces, defined as follows \cite{Soltanolkotabi2012Geometric,Heckel2017Dimensionality, Li2019Rigorous}: \begin{definition}\label{def:affinity} Let $\mathcal X_1$, $\mathcal X_2$ be subspaces of dimension $d_1$, $d_2$ in $\mathbb R^n$. Denote by ${\mathcal P}_{\mathcal X_1}$, ${\mathcal P}_{\mathcal X_2}$ the matrices of orthogonal projection onto $\mathcal X_1$ and $\mathcal X_2$, respectively. The affinity between $\mathcal X_1$ and $\mathcal X_2$ is \begin{equation*} {\operatorname{aff}}(\mathcal X_1,\mathcal X_2)=\sqrt{\operatorname{tr}({\mathcal P}_{\mathcal X_1}{\mathcal P}_{\mathcal X_2})}. \end{equation*} \end{definition} There are several alternative ways to compute the affinity, which will be used interchangeably. They are summarized in the following lemma. \begin{lemma}\label{lem:affinity-compute} Let $\mathcal X_1$, $\mathcal X_2$ be subspaces of dimension $d_1$, $d_2$ in $\mathbb R^n$. Denote by ${\mathcal P}_{\mathcal X_1}$, ${\mathcal P}_{\mathcal X_2}$ the orthogonal projections onto $\mathcal X_1$ and $\mathcal X_2$. \begin{enumerate}[label=\roman*)] \item If the columns of ${\vect U}_1$, ${\vect U}_2$ form orthonormal bases of $\mathcal X_1$, $\mathcal X_2$, then \begin{equation*} {\operatorname{aff}}(\mathcal X_1,\mathcal X_2)=\|{\vect U}_1^{\rm T}{\vect U}_2\|_{\rm F}, \end{equation*} where $\|\cdot\|_{\rm F}$ is the Frobenius norm. \item If the columns of ${\vect U}_2$ form an orthonormal basis of $\mathcal X_2$, then \begin{equation*} {\operatorname{aff}}(\mathcal X_1,\mathcal X_2)=\|{\mathcal P}_{\mathcal X_1}{\vect U}_2\|_{\rm F}. 
\end{equation*} \item There exist orthonormal bases ${\vect U}_1$, ${\vect U}_2$ of $\mathcal X_1$, $\mathcal X_2$ and nonnegative real numbers $\lambda_1\ge\lambda_2\ge\ldots\ge\lambda_{\min(d_1,d_2)}$, such that \begin{equation*} \langle\u_{1,i},\u_{2,j}\rangle=\begin{cases} \lambda_i,\quad &i=j;\\ 0,\quad &i\ne j, \end{cases} \end{equation*} where $\u_{1,i}$, $\u_{2,j}$ denote the $i$-th column of ${\vect U}_1$ and the $j$-th column of ${\vect U}_2$, respectively. Such ${\vect U}_1$, ${\vect U}_2$ are called \emph{principal orthonormal bases} of $\mathcal X_1$, $\mathcal X_2$. Furthermore, \begin{equation*} {\operatorname{aff}}^2(\mathcal X_1,\mathcal X_2)=\sum_{i=1}^{\min(d_1,d_2)}\lambda_i^2. \end{equation*} \end{enumerate} \end{lemma} As its name suggests, affinity measures how close two subspaces are to each other. A relevant notion is the \emph{projection Frobenius-norm distance} of two subspaces \cite{Li2019Rigorous}. \begin{definition}\label{def:frob-dist} The projection Frobenius-norm distance of two subspaces $\mathcal X_1$, $\mathcal X_2$ is defined as \[ D(\mathcal X_1,\mathcal X_2)=\frac{1}{\sqrt2}\|{\mathcal P}_{\mathcal X_1}-{\mathcal P}_{\mathcal X_2}\|_{\rm F}, \] where ${\mathcal P}_{\mathcal X_i}$ is the matrix of orthogonal projection onto $\mathcal X_i$, $i=1,2$. \end{definition} Affinity and projection Frobenius-norm distance are related by \begin{equation}\label{eqn:affinity-and-frobenius-dist} D^2(\mathcal X_1,\mathcal X_2)=\frac{d_1+d_2}2-{\operatorname{aff}}^2(\mathcal X_1,\mathcal X_2). \end{equation} Intuitively, this means that the closer (in affinity) two subspaces are to each other, the less distant (in projection Frobenius norm) they are to each other, as one would expect. Our main results will be stated based on affinity, but they can be easily translated to statements on projection Frobenius-norm distance by (\ref{eqn:affinity-and-frobenius-dist}). We are now in a position to state the main result of \cite{Li2019Rigorous}. 
\begin{theorem}\label{thm:li} Assume $\bm\Phi$ is an $n\times N$ Gaussian matrix with i.i.d. entries sampled from $\mathcal N(0,1/n)$. For any two subspaces $\mathcal X_1$, $\mathcal X_2$ of dimension $d_1$, $d_2$ in $\mathbb R^N$, assuming $d_1\le d_2$, denote by $\mathcal Y_1$, $\mathcal Y_2$ the images of $\mathcal X_1$, $\mathcal X_2$ under $\bm\Phi$. Then for any $0<\varepsilon<1/2$ there exist positive constants $c_1(\varepsilon)$, $c_2(\varepsilon)$, such that for $n>c_1(\varepsilon)d_2$, the following is true with probability exceeding $1-{\rm e}^{-c_2(\varepsilon)n}$. \begin{equation}\label{eqn:li-affinity} \left|{\operatorname{aff}}^2(\mathcal Y_1,\mathcal Y_2)-{\operatorname{aff}}^2(\mathcal X_1,\mathcal X_2)\right|\le\left(d_1-{\operatorname{aff}}^2(\mathcal X_1,\mathcal X_2)\right)\varepsilon. \end{equation} \end{theorem} \begin{remark} In terms of the projection Frobenius-norm distance, (\ref{eqn:li-affinity}) has the following corollary in an easy-to-remember form: \begin{equation}\label{eqn:li-frob-dist} \left|D^2(\mathcal Y_1,\mathcal Y_2)-D^2(\mathcal X_1,\mathcal X_2)\right|\le\varepsilon D^2(\mathcal X_1,\mathcal X_2). \end{equation} In other words, with overwhelming probability the distance between two subspaces changes only by a small fraction after random projection. We thus call the ``affinity-preserving'' property in (\ref{eqn:li-affinity}) the \emph{subspace Restricted Isometry Property} (subspace RIP), a term resembling the classical Restricted Isometry Property for sparse vectors \cite{Candes2008Restricted}. \end{remark} The aim of this paper is to illuminate the root of subspace RIP and to extend Theorem \ref{thm:li} to a much wider range of random matrices that are more useful in practice. \section{Main Theorem}\label{sec:main-results} {\color{black} Lying at the center of our theory is the following theorem: \begin{theorem}\label{thm:cov-est-to-subspace-rip} Let $\mathcal X$ be a $d$-dimensional subspace in $\mathbb R^N$. 
Let $\mathcal X_1$, $\mathcal X_2$ be subspaces in $\mathcal X$ whose dimensions are respectively $d_1$ and $d_2$, and (without loss of generality) assume that $d_1\le d_2$. Denote by ${\vect U}$ a matrix whose columns constitute an orthonormal basis of $\mathcal X$. Suppose $\bm\Phi$ is an $n\times N$ matrix satisfying for some $\delta\in(0,1/4)$ that \begin{equation}\label{eqn:orthonormal-perturbation-bound} 1-\delta<s^2_{\min}(\bm\Phi{\vect U})\le s^2_{\max}(\bm\Phi{\vect U})<1+\delta. \end{equation} Then with $\mathcal Y_i=\bm\Phi\mathcal X_i$, $\aff_{\set Y}={\operatorname{aff}}(\mathcal Y_1,\mathcal Y_2)$, $\aff_{\set X}={\operatorname{aff}}(\mathcal X_1,\mathcal X_2)$, we have \begin{equation}\label{eqn:affinity-preserving} \left|\aff_{\set Y}^2-\aff_{\set X}^2\right|\le C(d_1-\aff_{\set X}^2)\delta, \end{equation} where $C>0$ is some universal constant. \end{theorem} \begin{remark} Consequently, the projection Frobenius-norm distance of $\mathcal X_1,\mathcal X_2$ is preserved by $\bm\Phi$: \begin{equation}\label{eqn:subspace-rip} \left|D^2(\mathcal Y_1,\mathcal Y_2)-D^2(\mathcal X_1,\mathcal X_2)\right|\le C\delta D^2(\mathcal X_1,\mathcal X_2), \end{equation} which means that $\bm\Phi$ possesses \emph{subspace RIP} for subspaces of $\mathcal X$. \end{remark} \begin{remark} The assumption \eqref{eqn:orthonormal-perturbation-bound} has an intimate connection with the concept of \emph{subspace embedding} in numerical linear algebra \cite{Clarkson2017Low}. The main difference is that subspace embedding in \cite{Clarkson2017Low} requires \eqref{eqn:orthonormal-perturbation-bound} to hold with probability at least $1-\varepsilon$, and hence is a probabilistic assumption, while our assumption \eqref{eqn:orthonormal-perturbation-bound} is deterministic and removes the need for a probabilistic argument; we believe our assumption better captures the essence of the matter. 
\end{remark} The assumption \eqref{eqn:orthonormal-perturbation-bound} is equivalent to saying that $\bm\Phi$ acts as a near-isometry on $\mathcal X$, i.e. $(1-\delta)\|\u\|^2<\|\bm\Phi\u\|^2<(1+\delta)\|\u\|^2$ for any $\u\in\mathcal X$. Thus the essence of Theorem \ref{thm:cov-est-to-subspace-rip} is that a near-isometry on a subspace $\mathcal X$ preserves the pairwise distance of subspaces of $\mathcal X$. This is not an obvious fact, since affinity, hence subspace distance, is defined in a subtle way that involves orthonormal bases of both subspaces, and orthonormality is not preserved by a near-isometry. The overall effect of this structural degeneration puts the desired factor $(d_1-\aff_{\set X}^2)$, which is crucial in establishing \eqref{eqn:subspace-rip}, out of immediate reach. One has to perform some careful analysis to obtain \eqref{eqn:affinity-preserving} and \eqref{eqn:subspace-rip}. Before we present the proof (in Section \ref{sec:proof}), it is of interest to explain how Theorem \ref{thm:cov-est-to-subspace-rip} easily leads to a series of corollaries on subspace RIP for a wide variety of random matrices, which we will do in the next section. } {\color{black}\section{Random Matrices and Near-Isometry on Subspaces}\label{sec:examples} This section discusses in detail the near-isometry condition \eqref{eqn:orthonormal-perturbation-bound} and its connection with random matrices. Furthermore, this section examines various random matrices encountered in practice and shows that they satisfy the near-isometry condition, hence possess subspace RIP, which validates their application in subspace-related tasks to accelerate computation. The near-isometry condition \eqref{eqn:orthonormal-perturbation-bound} and Theorem \ref{thm:cov-est-to-subspace-rip} are best understood in the context of random matrices. 
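As a quick numerical sanity check of Theorem \ref{thm:cov-est-to-subspace-rip}, the sketch below (in Python with NumPy) draws a Gaussian $\bm\Phi$, measures the near-isometry defect $\delta$ on a subspace $\mathcal X$, and compares the affinity distortion against the bound $C(d_1-\aff_{\set X}^2)\delta$. The Gaussian choice, the particular dimensions, and the value $10$ standing in for the unspecified universal constant $C$ are all illustrative assumptions of ours, not part of the theorem.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, d1, d2, n = 1000, 6, 2, 3, 600

# Orthonormal basis U of a d-dimensional subspace X of R^N.
U = np.linalg.qr(rng.standard_normal((N, d)))[0]
# X1, X2: random subspaces of X, given by orthonormal bases U1, U2.
U1 = U @ np.linalg.qr(rng.standard_normal((d, d1)))[0]
U2 = U @ np.linalg.qr(rng.standard_normal((d, d2)))[0]

Phi = rng.standard_normal((n, N)) / np.sqrt(n)  # i.i.d. N(0, 1/n) entries

# delta: how far Phi is from an exact isometry on X.
s = np.linalg.svd(Phi @ U, compute_uv=False)
delta = max(s.max() ** 2 - 1, 1 - s.min() ** 2)

def aff(B1, B2):
    # aff(X1, X2) = ||B1^T B2||_F for orthonormal bases B1, B2.
    return np.linalg.norm(B1.T @ B2)

V1 = np.linalg.qr(Phi @ U1)[0]  # orthonormal basis of Y1 = Phi X1
V2 = np.linalg.qr(Phi @ U2)[0]  # orthonormal basis of Y2 = Phi X2

lhs = abs(aff(V1, V2) ** 2 - aff(U1, U2) ** 2)
rhs = (d1 - aff(U1, U2) ** 2) * delta
print(delta, lhs, rhs)
```

On typical draws the measured distortion `lhs` sits well below a modest multiple of `rhs`.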
In practice, it is of little use to discuss a pair of low-dimensional subspaces $\mathcal X_1,\mathcal X_2$ contained in a \emph{specific} subspace $\mathcal X$; one often needs \eqref{eqn:affinity-preserving} and \eqref{eqn:subspace-rip} for \emph{any} pair of such subspaces. For any pair of subspaces $\mathcal X_1$, $\mathcal X_2$, we consider their sum \[\mathcal X=\{{\vect x}_1+{\vect x}_2:{\vect x}_1\in\mathcal X_1,{\vect x}_2\in\mathcal X_2\}.\] This is a subspace of $\mathbb R^N$ of dimension at most $2d_2$ that contains both $\mathcal X_1$ and $\mathcal X_2$. If $\bm\Phi$ acts as a near-isometry on $\mathcal X$, then $\bm\Phi$ preserves the affinity and the distance between $\mathcal X_1$, $\mathcal X_2$. In order that $\bm\Phi$ preserve the affinity and the distance between any pair of $\mathcal X_1$, $\mathcal X_2$, one may impose that $\bm\Phi$ act as a near-isometry on any subspace of dimension $2d_2$ in $\mathbb R^N$. This is, however, impossible for a deterministic $\bm\Phi$ when $n<N$ (any such $\bm\Phi$ has a nontrivial kernel), and a standard way to resolve this is to use a random matrix $\bm\Phi$ instead. It is clear from the above argument that, if $\bm\Phi$ is a random matrix which acts as a near-isometry with high probability on any subspace of dimension $2d_2$, then $\bm\Phi$ preserves the affinity and the distance between $\mathcal X_1$, $\mathcal X_2$ with high probability for any pair of $\mathcal X_1$, $\mathcal X_2$ of dimensions respectively $d_1$, $d_2$, where $d_1\le d_2$. As a consequence, the analysis of subspace RIP now boils down to the analysis of the singular values of $\bm\Phi{\vect U}$, where ${\vect U}$ is a matrix whose columns constitute an orthonormal basis for some subspace in $\mathbb R^N$. This will be carried out in the rest of this section. Throughout this section, $\mathcal X_1,\ldots,\mathcal X_L$ always denote subspaces in $\mathbb R^N$ of dimension $d_1,\ldots,d_L$, and $d_*=\max\{d_1,\ldots,d_L\}$. 
The image of $\mathcal X_i$ under the random projection $\bm\Phi$ will be denoted by $\mathcal Y_i$. The subspace RIP of $\bm\Phi$ will be characterized by the maximum discrepancy \begin{equation*} \Delta=\max_{1\le i<j\le L}\frac{|{\operatorname{aff}}^2(\mathcal Y_i,\mathcal Y_j)-{\operatorname{aff}}^2(\mathcal X_i,\mathcal X_j)|}{\max\{d_i,d_j\}-{\operatorname{aff}}^2(\mathcal X_i,\mathcal X_j)}. \end{equation*} Note that \begin{equation*} \Delta\ge\max_{1\le i<j\le L}\frac{|D^2(\mathcal Y_i,\mathcal Y_j)-D^2(\mathcal X_i,\mathcal X_j)|}{D^2(\mathcal X_i,\mathcal X_j)}. \end{equation*} } {\color{black} \subsection{Example: Exponential Johnson-Lindenstrauss Property}} A class of random projections that deserves particular emphasis is that of matrices with the exponential Johnson-Lindenstrauss property\footnote{In the literature the same property usually appears under the name ``Johnson-Lindenstrauss property'', without ``exponential''; see for instance \cite{Foucart2017Mathematical}, Section 9.5.}, defined as follows. \begin{definition}\label{def:jl-property} A random matrix ${\vect A}\in\mathbb R^{n\times N}$ is said to satisfy the \emph{exponential Johnson-Lindenstrauss property} if there exists some constant $\tilde c>0$, such that for any $0<\varepsilon<1$ and for any ${\vect x}\in\mathbb R^N$, \begin{equation*} {\mathbb P}(\left|\|{\vect A}{\vect x}\|^2-\|{\vect x}\|^2\right|>\varepsilon\|{\vect x}\|^2)\le2{\rm e}^{-\tilde c\varepsilon^2n}. \end{equation*} \end{definition} Examples of random matrices with the exponential Johnson-Lindenstrauss property are pervasive in both theory and practice. Section \ref{apd:jl-property} provides a non-comprehensive list of such examples (Gaussian matrices with independent columns, subgaussian matrices with independent rows, and partial Fourier/Hadamard matrices), as well as a related theorem asserting that classical RIP for sparse vectors with sufficiently small restricted isometry constants implies the exponential Johnson-Lindenstrauss property. 
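As a concrete illustration of Definition \ref{def:jl-property}, the following sketch (our construction, with a $\pm1/\sqrt n$ Bernoulli matrix, which is subgaussian; the parameters are arbitrary) estimates how often the squared norm is distorted by more than $\varepsilon$:

```python
import numpy as np

rng = np.random.default_rng(1)
N, n, trials, eps = 500, 100, 200, 0.5

failures = 0
for _ in range(trials):
    # Bernoulli matrix with entries +-1/sqrt(n): E ||A x||^2 = ||x||^2 for every x.
    A = rng.choice([-1.0, 1.0], size=(n, N)) / np.sqrt(n)
    x = rng.standard_normal(N)
    ratio = np.linalg.norm(A @ x) ** 2 / np.linalg.norm(x) ** 2
    failures += abs(ratio - 1) > eps
print(failures / trials)  # empirical failure probability; should be near 0
```

The exponential tail $2{\rm e}^{-\tilde c\varepsilon^2 n}$ manifests as an empirical failure rate that is essentially zero already at moderate $n$.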
Taking the route discussed at the beginning of this section, we have \begin{lemma}\label{lem:JL-orthonormal-preserving} Let ${\vect U}$ be a matrix whose columns constitute an orthonormal basis for a $d$-dimensional subspace in $\mathbb R^N$. Assume the random matrix $\bm\Phi$ satisfies the exponential Johnson-Lindenstrauss property. Then for any $0<\varepsilon<1$, we have \begin{equation*} {\mathbb P}(1-\varepsilon<s_{\min}^2(\bm\Phi{\vect U})\le s_{\max}^2(\bm\Phi{\vect U})<1+\varepsilon)\ge 1-{\rm e}^{-\tilde c\varepsilon^2 n+3d}. \end{equation*} \end{lemma} The proof is by a standard covering argument and is deferred to Section \ref{apd:jl-property}. As a corollary, we have the following result on subspace RIP of random matrices with the exponential Johnson-Lindenstrauss property, which generalizes the main result in \cite{Li2019Rigorous}. \begin{corollary}\label{cor:JL-implies-SRIP} Assume the random matrix $\bm\Phi$ satisfies the exponential Johnson-Lindenstrauss property. Then for some universal constant $c>0$ and for any $0<\varepsilon<1$, we have \[ \Delta\le\varepsilon \] with probability at least $1-L^2{\rm e}^{-c\tilde c\varepsilon^2n+6d_*}$. In particular, whenever $n>24c^{-1}\tilde c^{-1}\varepsilon^{-2}\max\{d_*,\log L\}$, the probability is at least $1-\mathrm e^{-c\tilde c\varepsilon^2n/2}$. \end{corollary} \begin{proof} By the argument at the beginning of this section, this follows from Lemma \ref{lem:JL-orthonormal-preserving}, Theorem \ref{thm:cov-est-to-subspace-rip} and the union bound. \end{proof} Note how this simple proof supersedes, in both effectiveness and generality, the complicated probabilistic analysis spanning tens of pages in \cite{Li2019Rigorous}, thanks to Theorem \ref{thm:cov-est-to-subspace-rip}. We will discuss this difference in more detail in Section \ref{sec:discussions}. Corollary \ref{cor:JL-implies-SRIP} permits the application of various matrices used in practice (e.g. Bernoulli or partial Fourier matrices) to subspace-related tasks. 
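The chain from Lemma \ref{lem:JL-orthonormal-preserving} to Corollary \ref{cor:JL-implies-SRIP} can also be observed numerically. The following sketch (our construction: an illustrative Bernoulli $\bm\Phi$ applied to a few random subspaces of equal dimension) computes the maximum discrepancy $\Delta$ directly from its definition:

```python
import numpy as np

rng = np.random.default_rng(2)
N, n, L, dim = 400, 200, 6, 5

# L random subspaces of R^N, each given by an orthonormal basis.
bases = [np.linalg.qr(rng.standard_normal((N, dim)))[0] for _ in range(L)]
Phi = rng.choice([-1.0, 1.0], size=(n, N)) / np.sqrt(n)  # Bernoulli matrix
images = [np.linalg.qr(Phi @ B)[0] for B in bases]       # bases of the images

def aff2(B1, B2):
    # Squared affinity: ||B1^T B2||_F^2 for orthonormal bases B1, B2.
    return np.linalg.norm(B1.T @ B2) ** 2

# Maximum discrepancy Delta over all pairs, as defined in the text
# (here all subspaces have the same dimension `dim`).
Delta = max(
    abs(aff2(images[i], images[j]) - aff2(bases[i], bases[j]))
    / (dim - aff2(bases[i], bases[j]))
    for i in range(L) for j in range(i + 1, L)
)
print(Delta)
```

Even at this modest compression ratio the observed $\Delta$ is small, consistent with the corollary.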
For subgaussian matrices, the constant $\tilde c$ in the exponential Johnson-Lindenstrauss property depends only on the subgaussian norm and is inversely proportional to the square of the subgaussian norm, which is quite satisfactory. For partial Fourier matrices, however, the above analysis is rather rough, as the constant $\tilde c$ obtained this way scales like $1/\sqrt N$. This is problematic when $N$ is large\footnote{In \cite{Heckel2017Dimensionality} it is claimed that $\tilde c=\Omega(\log^4 N)$, which was not legitimately proved there and is likely wrong. In fact, they argue that $\tilde c=\Omega(\log^4 N)$ follows from Theorem \ref{thm:apd:rip-to-jl} and Theorem \ref{thm:apd:rip-of-bos} presented in our Section \ref{apd:jl-property}, but to achieve the $\mathrm e^{-\Omega(n)}$ probability bound one has to take $s=\Omega(n)$ and $\zeta=\mathrm e^{-\Omega(n)}$, which requires that $n=\Omega(n^2)$ and is absurd. }. Moreover, this approach leaves out the commonly used partial circulant and partial Toeplitz matrices, which do not satisfy the exponential Johnson-Lindenstrauss property with a reasonable $\tilde c$. These matrices are endowed with fast matrix-vector multiplication algorithms that significantly accelerate the random compression procedure, hence are worth a more refined treatment, as shown in the next example. {\color{black} \subsection{Example: Some Random Matrices With Fast Algorithms}} Some random matrices are particularly attractive for practical use due to their advantages in computational efficiency and their natural emergence in signal processing tasks. Such examples include partial Fourier matrices and other randomly sampled Bounded Orthonormal Systems (BOS) \cite{Foucart2017Mathematical}, which correspond to random subsampling in the frequency domain and other feature domains. Another important example is the partial circulant/Toeplitz matrix, which corresponds to subsampling after a random convolution. 
These matrices allow for $O(N\log N)$-time matrix-vector multiplication algorithms by virtue of the Fast Fourier Transform (FFT), the Fast Walsh-Hadamard Transform (FWHT), etc. Proving subspace RIP of these matrices would legitimize their use in subspace-related tasks and hence significantly improve the efficiency of handling such tasks. In fact, we will show in Section \ref{sec:simulations} how random compression by these matrices boosts subspace clustering on a real-world dataset. This motivates the following results. \begin{lemma}\label{lem:BOS-orthonormal-preserving} Let ${\vect U}$ be a matrix whose columns constitute an orthonormal basis for a $d$-dimensional subspace in $\mathbb R^N$. Let ${\vect A}\in\mathbb C^{n\times N}$ be the random sampling associated with a BOS\footnote{Partial Fourier matrices and partial Hadamard matrices are both random samplings associated with a BOS with constant $O(1)$.} with constant $K\ge 1$. Let ${\vect D}_{\epsilon}$ be a diagonal matrix with i.i.d. Rademacher random variables on its diagonal. Let $\bm\Phi={\vect A}{\vect D}_\epsilon$. Then there exists some constant $C>1$ such that for any $\varepsilon\in(0,1)$ and for any $n>CK^3\varepsilon^{-3}\max\{d\log^3d,\log^3N\}$, we have \[1-\varepsilon<s_{\min}^2(\bm\Phi{\vect U})\le s_{\max}^2(\bm\Phi{\vect U})<1+\varepsilon\] with probability at least \[1-\exp\left(-C^{-1}\big(\sqrt{d^2+K^{-2}\varepsilon^2n}-d\big)\right).\] \end{lemma} The proof is by a careful application of well-known properties of randomly sampled BOS and is deferred to Section \ref{apd:fast-matrices}. Note that the exponent $3$ in $K^3$, $\varepsilon^{-3}$, $\log^3 d$ and $\log^3N$ can be replaced by $2+\epsilon$ for any $\epsilon>0$; we chose $3$ only for typographical convenience. \begin{corollary}\label{cor:BOS-implies-SRIP} Let $\bm\Phi$ be as in Lemma \ref{lem:BOS-orthonormal-preserving}. 
Then there exists some constant $C>1$ such that for any $\varepsilon\in(0,1)$ and any $n>CK^3\varepsilon^{-3}\max\{d_*(\log^3 d_*+\log L),\log^2L,\log^3N\}$, we have \[ \Delta\le\varepsilon \] with probability at least \[1-\exp\left(-C^{-1}\big(\sqrt{d_*^2+K^{-2}\varepsilon^2n}-d_*\big)\right).\] \end{corollary} \begin{proof} By the argument at the beginning of this section, this follows from Lemma \ref{lem:BOS-orthonormal-preserving}, Theorem \ref{thm:cov-est-to-subspace-rip} and the union bound. \end{proof} While Lemma \ref{lem:BOS-orthonormal-preserving} follows from standard results on randomly sampled BOS, for partial circulant/Toeplitz matrices the situation is more subtle. The following lemma will be proved in Section \ref{apd:fast-matrices} via some modifications of the proof strategy in \cite{Vybiral2011Variant}. \begin{lemma}\label{lem:circulant-orthonormal-preserving} Let ${\vect U}$ be a matrix whose columns constitute an orthonormal basis for a $d$-dimensional subspace in $\mathbb R^N$. Let $\vect a$ be a random vector in $\mathbb C^N$ with i.i.d. standard complex circular Gaussian entries. Let $\mathcal C(\vect a)$ be the circulant matrix generated by $\vect a$, i.e. the one whose first row is $\vect a$. Choose $n$ rows of $\mathcal C(\vect a)$ arbitrarily and form from these rows a new matrix ${\vect A}\in\mathbb C^{n\times N}$. Let $\bm\Phi=\frac{1}{\sqrt n}{\vect A}$. Then there exists some constant $C>1$ such that for any $\varepsilon\in(0,1)$ and any $n>C\varepsilon^{-2}\max\{d,\log N\}^2$, we have \[1-\varepsilon<s_{\min}^2(\bm\Phi{\vect U})\le s_{\max}^2(\bm\Phi{\vect U})<1+\varepsilon\] with probability at least $1-\mathrm e^{-c\varepsilon\sqrt n}$. \end{lemma} \begin{remark}\label{rem:subspace-embedding} The above lemma requires $n$ to scale like $d_*^2$. With a substantial amount of work (using some modern results on generic chaining bounds for suprema of order-$2$ chaos processes, e.g. 
\cite{Dirksen2015Tail}), it is possible to attain a sub-optimal scaling similar to the one in Corollary \ref{cor:BOS-implies-SRIP}, as well as a probability bound similar to that in Corollary \ref{cor:BOS-implies-SRIP}. An easier way to improve the scaling is to utilize the result in \cite{Rauhut2012Restricted}, which allows one to obtain $n=O(d_*^{3/2})$ at the cost of a probability bound that degrades to $\mathrm e^{-O(n^{1/3})}$; this turns out to be a special case of the aforementioned ``harder'' treatment. We will not pursue these directions here to avoid unnecessary technicality. \end{remark} \begin{corollary}\label{cor:circulant-implies-SRIP} Let $\bm\Phi$ be as in Lemma \ref{lem:circulant-orthonormal-preserving}. Then there exists some constant $C>1$ such that for any $\varepsilon\in(0,1)$ and any $n>C\varepsilon^{-2}\max\{d_*,\log N,\log L\}^2$, we have \[ \Delta\le\varepsilon \] with probability at least $1-\mathrm e^{-c\varepsilon\sqrt n}$. \end{corollary} \begin{proof} By the argument at the beginning of this section, this follows from Lemma \ref{lem:circulant-orthonormal-preserving}, Theorem \ref{thm:cov-est-to-subspace-rip} and the union bound. \end{proof} \begin{remark} The same holds for Toeplitz matrices, since a Toeplitz matrix can be embedded into a circulant matrix of twice the dimension. \end{remark} \begin{remark} In fact, we will prove Lemma \ref{lem:circulant-orthonormal-preserving} for a random vector $\vect a$ with independent complex uniformly-subgaussian entries (see Section \ref{apd:jl-property}, Definition \ref{def:subgaussian}). This involves modifying and generalizing the proof in \cite{Vybiral2011Variant}. \end{remark} {\color{black} \subsection{Example: Heavy-Tailed Distributions}} In practice one may also have to deal with heavy-tailed random matrices. Subspace RIP of heavy-tailed random matrices is now within easy reach, by combining Theorem \ref{thm:cov-est-to-subspace-rip} with standard results in covariance estimation. 
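As a quick numerical sketch of what follows, rows with i.i.d. Student-$t$ entries (five degrees of freedom, rescaled to unit variance) are one convenient heavy-tailed example with finite $4+\epsilon$ moments, and they already act as a near-isometry on a fixed low-dimensional subspace at moderate sample sizes. The construction and parameters below are ours, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
N, d, n = 300, 5, 3000

U = np.linalg.qr(rng.standard_normal((N, d)))[0]  # orthonormal basis of X

# Heavy-tailed rows: Student-t(5) entries have variance 5/3, so rescale by
# sqrt(3/5) to make each row centered and isotropic; 4th moments are finite.
X = rng.standard_t(df=5, size=(n, N)) * np.sqrt(3 / 5)
Phi = X / np.sqrt(n)

s = np.linalg.svd(Phi @ U, compute_uv=False)
print(s.min() ** 2, s.max() ** 2)  # both should be close to 1
```

The squared singular values of $\bm\Phi{\vect U}$ concentrate around $1$ despite the heavy tails, which is exactly the covariance-estimation phenomenon exploited below.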
Before presenting these results, we need to set up some customary assumptions on the rows of $\bm\Phi$. Denote the rows of $\bm\Phi$ by $\frac{1}{\sqrt n}\mathbf x_1^{\rm T},\ldots,\frac{1}{\sqrt n}\mathbf x_n^{\rm T}$. \begin{enumerate} \item ${\vect x}_i$'s are centered, i.e. $\mathbb E{\vect x}_i=\mathbf 0$. \item ${\vect x}_i$'s are isotropic, i.e. $\mathbb E{\vect x}_i{\vect x}_i^{\mathrm T}=\mathbf I$. \item ${\vect x}_i$'s are independent. \end{enumerate} These assumptions (centered, isotropic and independent rows) are quite natural and often serve as the default setting in compressed sensing and non-asymptotic random matrix theory, especially in the context of the Bai-Yin law; see \cite{Vershynin2010Introduction} for examples and further discussions. Here we briefly mention that a row of $\bm\Phi$ can be regarded as a linear functional that observes the data vector, and independence of rows means that different observations are independent, which is reasonable in many applications. On the other hand, centeredness and isotropy can be ensured by a preprocessing step before observing the data, and thus are not really restrictive. Some results in random matrix theory are also valid for weakly-independent rows, but such results usually bear considerable technicality imposed by the difficulty of quantitatively defining weak dependence, hence are not discussed here. We will deal with two important types of heavy-tailed distributions: those with finite moments, and log-concave ensembles. \paragraph{Finite moments} Let $\eta>1$, $C'\ge1$ be constants and ${\vect x}$ be an $N$-dimensional random vector which is centered and isotropic. The random vector ${\vect x}$ is said to satisfy the strong regularity condition \cite{Srivastava2013Covariance} if \begin{equation}\label{eqn:strong-regularity} {\mathbb P}(\|{\mathcal P}{\vect x}\|^2>t)\le C't^{-\eta},\quad\text{for $t>C'\operatorname{rank}{\mathcal P}$}. 
\end{equation} for every orthogonal projection ${\mathcal P}$ of rank at most $d$ in $\mathbb R^N$. This condition is satisfied, for example, by those ${\vect x}$ whose entries are independent and have uniformly bounded $(4+\epsilon)$-th moments. We will discuss the meaning of this condition later. \begin{lemma}\label{lem:strong-regularity-orthonormal-preserving} Let ${\vect U}$ be a matrix whose columns constitute an orthonormal basis for a $d$-dimensional subspace in $\mathbb R^N$. Assume ${\vect x}_1,\ldots,{\vect x}_n$ are independent centered isotropic random vectors in $\mathbb R^N$ that satisfy the strong regularity condition \eqref{eqn:strong-regularity}. Let $\bm\Phi$ be an $n\times N$ random matrix whose rows are $\frac1{\sqrt n}{\vect x}_1^{\mathrm T},\ldots,\frac1{\sqrt n}{\vect x}_n^{\mathrm T}$. Then there exists a polynomial function $\mathrm{poly}(\cdot)$ whose coefficients depend only on $\eta$ and $C'$, such that whenever $\varepsilon\in(0,1)$ and $n>\mathrm{poly}(\varepsilon^{-1})d$, we have \[1-\varepsilon<s_{\min}^2(\bm\Phi{\vect U})\le s_{\max}^2(\bm\Phi{\vect U})<1+\varepsilon\] with probability at least $1-\varepsilon$. \end{lemma} \begin{proof}[Sketch] Set ${\vect y}_i={\vect U}^{\mathrm T}{\vect x}_i$; then it is easy to check that the ${\vect y}_i$'s are independent, centered and isotropic, and \[\|{\vect U}^{\mathrm T}\bm\Phi^{\mathrm T}\bm\Phi{\vect U}-\vect I\|=\left\|\frac1n\sum_{i=1}^n{\vect y}_i{\vect y}_i^{\mathrm T}-\vect I\right\|.\] One may verify that ${\vect y}_i$ satisfies the strong regularity condition whenever ${\vect x}_i$ does, and the main theorem in \cite{Srivastava2013Covariance} yields the desired results. Details are postponed to Section \ref{apd:heavy-tails}. \end{proof} \begin{corollary}\label{cor:strong-regularity-SRIP} Let $\bm\Phi$ be as in Lemma \ref{lem:strong-regularity-orthonormal-preserving}. 
Then there exists a polynomial function $\mathrm{poly}(\cdot)$ whose coefficients depend only on $\eta$ and $C'$, such that whenever $\varepsilon\in(0,1)$ and $n>\mathrm{poly}(\varepsilon^{-1})Ld_*$, we have \begin{equation*} \Delta\le\varepsilon \end{equation*} with probability at least $1-\varepsilon$. \end{corollary} \begin{proof} Note that this does \emph{not} follow from the union bound. Instead, we take $\mathcal X$ to be the sum of all $\mathcal X_i$, so that $\dim\mathcal X\le Ld_*$. Now the conclusion follows from Lemma \ref{lem:strong-regularity-orthonormal-preserving} (applied to an orthonormal basis of $\mathcal X$) and Theorem \ref{thm:cov-est-to-subspace-rip}. \end{proof} \begin{remark} A recent result \cite{Xu2019Convergence} of some of the authors shows that the strong regularity condition can be further relaxed: it suffices that \eqref{eqn:strong-regularity} hold for any ${\mathcal P}$ of rank $\lceil Cd_*\rceil$. Though this assumption is logically weaker, it does not seem to yield a noticeable improvement in our case. \end{remark} An intriguing feature of these results is that the strong regularity condition, which in its original context seems to be an artifact of the spectral sparsifier method adopted in \cite{Srivastava2013Covariance}, arises very naturally in our case. In fact, to control the singular values of $\bm\Phi{\vect U}$ in Lemma \ref{lem:strong-regularity-orthonormal-preserving}, it is necessary to have a reasonably fast tail decay for $\|{\vect y}_i\|$ (see \cite{Bai1988Note, Srivastava2013Covariance, Tikhomirov2017Sample} for discussions), but $\|{\vect y}_i\|$ is just the norm of the orthogonal projection of ${\vect x}_i$ onto the (arbitrary) $d$-dimensional subspace spanned by the columns of ${\vect U}$; thus it is necessary to assume that the tails of all orthogonal projections of ${\vect x}_i$ decay fast enough, which is exactly what the strong regularity condition requires. 
If ${\vect x}$ has independent entries, \eqref{eqn:strong-regularity} is satisfied when its entries have uniformly bounded $4\eta$-th moments (see, for instance, Proposition 1.3 in \cite{Srivastava2013Covariance}). Hence one may regard the strong regularity condition as a finite $4+\epsilon$ moment assumption. Note that it has long been known that finite fourth moments are necessary for covariance estimation \cite{Bai1988Note}. \paragraph{Log-concave ensembles} If the row vectors of $\bm\Phi$ have better tail behavior, stronger probability bounds can be obtained. A class of distributions with heavy, but not too heavy, tails that plays a role in geometric functional analysis is that of log-concave distributions. A probability distribution ${\mathbb P}$ on $\mathbb R^N$ is said to be log-concave if \[{\mathbb P}(\theta A+(1-\theta)B)\ge{\mathbb P}(A)^\theta{\mathbb P}(B)^{1-\theta}\] for any measurable sets $A$, $B$ in $\mathbb R^N$ and any $\theta\in[0,1]$. It follows from the definition that the marginal of a log-concave distribution is again log-concave. Typical examples of log-concave distributions include the uniform distribution on a convex body (e.g. a Euclidean ball) or, more generally, the distribution with density $C\exp(-f({\vect x}))$, where $f$ is a convex function\footnote{By setting $f$ to be the indicator function of a convex body we recover the case of the uniform distribution on a convex body.}. This subsumes the Laplace, Gaussian, Gamma, Beta, Weibull and Logistic distributions (in suitable parameter regimes). \begin{lemma}\label{lem:log-concave-orthonormal-preserving} Let ${\vect U}$ be a matrix whose columns constitute an orthonormal basis for a $d$-dimensional subspace in $\mathbb R^N$. Assume ${\vect x}_1,\ldots,{\vect x}_n$ are independent random vectors in $\mathbb R^N$ that are centered, isotropic and log-concave. Let $\bm\Phi$ be an $n\times N$ random matrix whose rows are $\frac1{\sqrt n}{\vect x}_1^{\mathrm T},\ldots,\frac1{\sqrt n}{\vect x}_n^{\mathrm T}$. 
Then there exist universal constants $c>0$ and $C>1$ such that for any $\varepsilon\in(0,1)$ and any $n>C\varepsilon^{-2}d$, we have \[1-\varepsilon<s_{\min}^2(\bm\Phi{\vect U})\le s_{\max}^2(\bm\Phi{\vect U})<1+\varepsilon\] with probability at least $1-\mathrm e^{-c\varepsilon\sqrt n}$. \end{lemma} This follows from a somewhat tricky application of a celebrated theorem from \cite{Adamczak2010Quantitative,Adamczak2011Sharp}. Details are postponed to Section \ref{apd:heavy-tails}. \begin{corollary}\label{cor:log-concave-SRIP} Let $\bm\Phi$ be as in Lemma \ref{lem:log-concave-orthonormal-preserving}. Then there exist universal constants $c>0$ and $C>0$ such that whenever $\varepsilon\in(0,1/2)$ and $n>C\varepsilon^{-2}\max\{d_*, \log^2L\}$, we have \begin{equation} \Delta\le\varepsilon \end{equation} with probability at least $1-\mathrm e^{-c\varepsilon\sqrt n}$. \end{corollary} \begin{proof} By the argument at the beginning of this section, this follows from Lemma \ref{lem:log-concave-orthonormal-preserving}, Theorem \ref{thm:cov-est-to-subspace-rip} and a union bound. \end{proof} \begin{remark} The $e^{-\Omega(\sqrt n)}$ probability bound is optimal, due to the thin-shell behavior of log-concave ensembles; see \cite{Guedon2014Concentration}. \end{remark} \section{Proof of Theorem \ref{thm:cov-est-to-subspace-rip}}\label{sec:proof} \subsection{Some First Consequences of \eqref{eqn:orthonormal-perturbation-bound}} Recall that \eqref{eqn:orthonormal-perturbation-bound} means that $\bm\Phi$ acts as a near-isometry on $\mathcal X$. An immediate consequence of this assumption is: \begin{proposition}\label{prop:1d-orthogonal-preserving} Let $\mathcal X$ be a subspace of $\mathbb R^N$ and ${\vect U}$ be a matrix whose columns constitute an orthonormal basis of $\mathcal X$. Let $\bm\Phi$ be an $n\times N$ matrix such that \eqref{eqn:orthonormal-perturbation-bound} holds for some $\delta\in(0,1)$.
Then we have \begin{equation*} \sqrt{1-\delta}\|\u\|\le\|\bm\Phi\u\|\le\sqrt{1+\delta}\|\u\| \end{equation*} for any $\u\in\mathcal X$. In particular, if $\delta\in(0,1/4)$ we have $\sqrt{\frac34}\|\u\|\le\|\bm\Phi\u\|\le\sqrt{\frac54}\|\u\|$. Moreover, assume $\mathcal X_1$, $\mathcal X_2$ are two subspaces of $\mathcal X$ which are orthogonal to each other, and let ${\vect U}_1$ (resp. ${\vect U}_2$) be a matrix whose columns constitute an orthonormal basis of $\mathcal X_1$ (resp. $\mathcal X_2$); then \begin{equation*} \|{\vect U}_2^{\mathrm T}\bm\Phi^{\mathrm T}\bm\Phi{\vect U}_1\|\le\delta. \end{equation*} \end{proposition} \begin{proof} For any $\u\in\mathcal X$, one may find a vector ${\vect x}$ in $\mathbb R^{\dim\mathcal X}$ such that $\u={\vect U}{\vect x}$, hence $\|\u\|=\|{\vect x}\|$. Thus $\|\bm\Phi\u\|=\|\bm\Phi{\vect U}{\vect x}\|$ is at least $s_{\min}(\bm\Phi{\vect U})\|\u\|$ and at most $s_{\max}(\bm\Phi{\vect U})\|\u\|$. This proves the first part of the proposition. For the second part, note that \begin{align*} \|{\vect U}_2^{\mathrm T}\bm\Phi^{\mathrm T}\bm\Phi{\vect U}_1\| &=\sup_{\substack{{\vect x}_1\in\mathbb R^{\dim\mathcal X_1},\,\|{\vect x}_1\|=1\\ {\vect x}_2\in\mathbb R^{\dim\mathcal X_2},\,\|{\vect x}_2\|=1}}{\vect x}_2^{\mathrm T}{\vect U}_2^{\mathrm T}\bm\Phi^{\mathrm T}\bm\Phi{\vect U}_1{\vect x}_1\\ &=\sup_{\substack{\u_1\in\mathcal X_1, \|\u_1\|=1\\ \u_2\in\mathcal X_2,\|\u_2\|=1}}\u_2^{\mathrm T}\bm\Phi^{\mathrm T}\bm\Phi\u_1. \end{align*} But \begin{equation*} \u_2^{\mathrm T}\bm\Phi^{\mathrm T}\bm\Phi\u_1=\frac14(\|\bm\Phi(\u_1+\u_2)\|^2-\|\bm\Phi(\u_1-\u_2)\|^2). \end{equation*} The conclusion follows from the first part and the fact that $\|\u_1+\u_2\|^2=\|\u_1-\u_2\|^2=2$ (since $\mathcal X_1$, $\mathcal X_2$ are orthogonal to each other). \end{proof} \begin{lemma}\label{lem:line-aff-orthogonal} Under the same setting as in Theorem \ref{thm:cov-est-to-subspace-rip}, assume further that $\mathcal X_1$, $\mathcal X_2$ are orthogonal to each other.
Then we have \begin{equation}\label{eqn:in-lem-line-aff-orthogonal1} \|\projP_{\set Y_2}\bm\Phi\u\|\le\sqrt{\frac43}\|\u\|\delta<\frac43\|\bm\Phi\u\|\delta \end{equation} for any $\u\in\mathcal X_1$. \end{lemma} \begin{proof} Let ${\vect U}_1$ (resp. ${\vect U}_2$) be a matrix whose columns constitute an orthonormal basis for $\mathcal X_1$ (resp. $\mathcal X_2$); thus ${\vect U}_1$ (resp. ${\vect U}_2$) is an $N\times d_1$ (resp. $N\times d_2$) matrix with orthonormal columns. Set ${\vect V}_1=\bm\Phi{\vect U}_1$, ${\vect V}_2=\bm\Phi{\vect U}_2$. By \eqref{eqn:orthonormal-perturbation-bound}, ${\vect V}_2$ is of full rank and all of its singular values lie in $(\sqrt{1-\delta}, \sqrt{1+\delta})$. In this case we have \begin{equation*} \projP_{\set Y_2}={\vect V}_2({\vect V}_2^{\rm T}{\vect V}_2)^{-1}{\vect V}_2^{\rm T}. \end{equation*} Thus \begin{align*} \|\projP_{\set Y_2}{\vect V}_1\|^2&=\|{\vect V}_2({\vect V}_2^{\rm T}{\vect V}_2)^{-1}{\vect V}_2^{\rm T}{\vect V}_1\|^2\\ &\le s_{\min}^{-2}({\vect V}_2)\|{\vect V}_2^{\rm T}{\vect V}_1\|^2\\ &\le (1-\delta)^{-1}\|{\vect U}_2^{\rm T}\bm\Phi^{\rm T}\bm\Phi{\vect U}_1\|^2\\ &\le (1-\delta)^{-1}\delta^2, \end{align*} where the last inequality follows from Proposition \ref{prop:1d-orthogonal-preserving}. For any $\u\in\mathcal X_1$, it is possible to find some ${\vect x}\in\mathbb R^{d_1}$ such that $\u={\vect U}_1{\vect x}$, hence $\|\u\|=\|{\vect x}\|$; we thus have \begin{equation*} \|\projP_{\set Y_2}\bm\Phi\u\|=\|\projP_{\set Y_2}\bm\Phi{\vect U}_1{\vect x}\|=\|\projP_{\set Y_2}{\vect V}_1{\vect x}\|\le\|\projP_{\set Y_2}{\vect V}_1\|\|\u\|. \end{equation*} Hence $\|\projP_{\set Y_2}\bm\Phi\u\|\le(1-\delta)^{-1/2}\delta\|\u\|\le\sqrt{\frac43}\|\u\|\delta$, where the last inequality uses $\delta\le1/4$. Along with Proposition \ref{prop:1d-orthogonal-preserving}, this proves \eqref{eqn:in-lem-line-aff-orthogonal1}.
\end{proof} \subsection{One-dimensional Subspaces} \begin{lemma}\label{lem:uniform-line-SRIP} Under the same setting as in Theorem \ref{thm:cov-est-to-subspace-rip}, we have \begin{equation}\label{eqn:in-lem-uniform-line-SRIP} \left|{\operatorname{aff}}^2(\mathcal Y_2,\bm\Phi\u)-{\operatorname{aff}}^2(\mathcal X_2,\u)\right|\le C\left(1-{\operatorname{aff}}^2(\mathcal X_2,\u)\right)\delta \end{equation} for all $\u\in\mathcal X_1$, $\u\ne\mathbf0$. \end{lemma} \begin{proof} The proof is by straightforward computation. It suffices to prove \eqref{eqn:in-lem-uniform-line-SRIP} for unit vectors $\u_1\in\mathcal X_1$. For any such unit vector, there exists some unit vector $\u_{2}\in\mathcal X_2$ such that $\lambda\overset{\triangle}{=}\langle\u_1,\u_{2}\rangle={\operatorname{aff}}(\mathcal X_2,\u_1)$, by Lemma \ref{lem:affinity-compute}. In fact, $\u_{2}$ is the direction vector of the projection of $\u_1$ onto $\mathcal X_2$. We thus have \begin{equation}\label{line_decomposition} \u_1=\lambda\u_{2}+\sqrt{1-\lambda^2}\u_0, \end{equation} where $\u_0\in\mathcal X$ is some unit vector orthogonal to $\mathcal X_2$. Recall that the squared affinity between $\mathcal Y_2$ and the line spanned by $\bm\Phi\u_1$ is \begin{equation*} \aff_{\set Y}^2=\frac{1}{\|\bm\Phi\u_1\|^2}\|\projP_{\set Y_2}\bm\Phi\u_1\|^2. \end{equation*} In light of (\ref{line_decomposition}), we have \begin{align} \|\bm\Phi\u_1\|^2= &~\lambda^2\|\bm\Phi\u_{2}\|^2 + (1-\lambda^2)\|\bm\Phi\u_0\|^2 \nonumber\\ & + 2\lambda\sqrt{1-\lambda^2}\langle\bm\Phi\u_{2},\bm\Phi\u_0\rangle,\nonumber\\ \|\projP_{\set Y_2}\bm\Phi\u_1\|^2 = &~\lambda^2\|\projP_{\set Y_2}\bm\Phi\u_{2}\|^2 + (1-\lambda^2)\|\projP_{\set Y_2}\bm\Phi\u_0\|^2\nonumber \\ & + 2\lambda\sqrt{1-\lambda^2}\langle\projP_{\set Y_2}\bm\Phi\u_{2},\projP_{\set Y_2}\bm\Phi\u_0\rangle.\label{proj_norm_expansion} \end{align} Note that $\projP_{\set Y_2}\bm\Phi\u_{2}=\bm\Phi\u_{2}$ since $\bm\Phi\u_{2}\in\bm\Phi\mathcal X_2=\mathcal Y_2$.
Furthermore, \begin{align*} \langle\projP_{\set Y_2}\bm\Phi\u_{2},\projP_{\set Y_2}\bm\Phi\u_0\rangle&=\langle\projP_{\set Y_2}\bm\Phi\u_{2},\bm\Phi\u_0\rangle\\ &=\langle\bm\Phi\u_{2},\bm\Phi\u_0\rangle, \end{align*} where the first equality holds because $\projP_{\set Y_2}$ is self-adjoint and idempotent, and the second because $\projP_{\set Y_2}\bm\Phi\u_2=\bm\Phi\u_2$. Thus \begin{align*} \|\projP_{\set Y_2}\bm\Phi\u_1\|^2 = &~\lambda^2\|\bm\Phi\u_{2}\|^2 + (1-\lambda^2)\|\projP_{\set Y_2}\bm\Phi\u_0\|^2 \\ & + 2\lambda\sqrt{1-\lambda^2}\langle\bm\Phi\u_{2},\bm\Phi\u_0\rangle. \end{align*} Combining these equations, we have \begin{align*} \left|\aff_{\set Y}^2 - \lambda^2\right|=\frac{(1-\lambda^2)}{\|\bm\Phi\u_1\|^2} \Big|&\lambda^2(\|\bm\Phi\u_{2}\|^2-\|\bm\Phi\u_0\|^2) \\ &+\|\projP_{\set Y_2}\bm\Phi\u_0\|^2 \\ &+2\lambda\sqrt{1-\lambda^2}\langle\bm\Phi\u_{2},\bm\Phi\u_0\rangle\Big|. \end{align*} Since $\|\u_1\|=\|\u_2\|=\|\u_0\|=1$ and $\u_0$ is perpendicular to $\mathcal X_2$ (hence to $\u_2$), the above quantity is bounded by $C(1-\lambda^2)\delta$ by Proposition \ref{prop:1d-orthogonal-preserving} and Lemma \ref{lem:line-aff-orthogonal}. This completes the proof. \end{proof} \subsection{The General Case} Now we prove the full version of Theorem \ref{thm:cov-est-to-subspace-rip}. Choose principal orthonormal bases (Lemma \ref{lem:affinity-compute}) ${\vect U}_1$, ${\vect U}_2$ for $\mathcal X_1, \mathcal X_2$. In this proof we also borrow the notation $\lambda_k$ from Lemma \ref{lem:affinity-compute}. Denote ${\vect V}_1=\bm\Phi{\vect U}_1$, ${\vect V}_2=\bm\Phi{\vect U}_2$. The $k$-th columns of ${\vect U}_1$, ${\vect U}_2$ and ${\vect V}_1$ are denoted by $\u_{1,k}$, $\u_{2,k}$ and $\v_{1,k}$, respectively. Note that $\v_{1,k}=\bm\Phi\u_{1,k}$ by definition. By \eqref{eqn:orthonormal-perturbation-bound}, ${\vect V}_1$, ${\vect V}_2$ are of full rank and all of their singular values lie in $(\sqrt{1-\delta}, \sqrt{1+\delta})$. We shall need two auxiliary matrices derived from ${\vect V}_1$.
The first one is the column-normalized version of ${\vect V}_1$, defined as $\hat{{\vect V}}_1=[\frac{\v_{1,1}}{\|\v_{1,1}\|},\ldots,\frac{\v_{1,d_1}}{\|\v_{1,d_1}\|}]$. The second one is the matrix with orthonormal columns obtained from the Gram-Schmidt orthogonalization of the columns of ${\vect V}_1$, which we denote by ${\vect Q}_1$. The $k$-th columns of $\hat{{\vect V}}_1$ and ${\vect Q}_1$ are denoted by $\hat{\v}_{1,k}$ and ${\vect q}_{1,k}$, respectively. Let $\projP_{\set Y_2}$ be the orthogonal projection onto $\mathcal Y_2$, i.e., the column space of ${\vect V}_2$. We have \begin{align} \aff_{\set Y}^2-\aff_{\set X}^2=&\|\projP_{\set Y_2}{\vect Q}_1\|_{\rm F}^2-\|{\vect U}_2^{\rm T}{\vect U}_1\|_{\rm F}^2\nonumber\\ =&(\|\projP_{\set Y_2}{\vect Q}_1\|_{\rm F}^2-\|\projP_{\set Y_2}\hat{{\vect V}}_1\|_{\rm F}^2)\nonumber\\ &+(\|\projP_{\set Y_2}\hat{{\vect V}}_1\|_{\rm F}^2-\|{\vect U}_2^{\rm T}{\vect U}_1\|_{\rm F}^2).\label{eqn:space_aff_telescope} \end{align} We estimate the two differences in (\ref{eqn:space_aff_telescope}) separately. \begin{proposition} We have \begin{equation*} \left|\|\projP_{\set Y_2}\hat{{\vect V}}_1\|_{\rm F}^2-\|{\vect U}_2^{\rm T}{\vect U}_1\|_{\rm F}^2\right|\le C(d_1-\aff_{\set X}^2)\delta. \end{equation*} \end{proposition} \begin{proof} Note that \begin{equation}\label{eqn:decomposition1} \|\projP_{\set Y_2}\hat{{\vect V}}_1\|_{\rm F}^2-\|{\vect U}_2^{\rm T}{\vect U}_1\|_{\rm F}^2=\sum_{k=1}^{d_1}\left(\|\projP_{\set Y_2}\hat{\v}_{1,k}\|^2-\|{\vect U}_2^{\rm T}\u_{1,k}\|^2\right). \end{equation} Observe that $\|{\vect U}_2^{\rm T}\u_{1,k}\|$ is the affinity between $\mathcal X_2$ and the one-dimensional subspace spanned by $\u_{1,k}$, while $\|\projP_{\set Y_2}\hat{\v}_{1,k}\|$ is the affinity between $\mathcal Y_2$ and the one-dimensional subspace spanned by $\bm\Phi\u_{1,k}$. Furthermore, $\|{\vect U}_2^{\rm T}\u_{1,k}\|=\lambda_k$ since ${\vect U}_1$, ${\vect U}_2$ are principal orthonormal bases.
By Lemma \ref{lem:uniform-line-SRIP} (applied to $\mathcal X_2$ and the one-dimensional subspace spanned by $\u_{1,k}$), we have \begin{equation}\label{eqn:error-of-normalized-col} \left|\|\projP_{\set Y_2}\hat{\v}_{1,k}\|^2-\|{\vect U}_2^{\rm T}\u_{1,k}\|^2\right|\le C(1-\lambda_k^2)\delta. \end{equation} Summing over $k$, we obtain \begin{align*} \left|\|\projP_{\set Y_2}\hat{{\vect V}}_1\|_{\rm F}^2-\|{\vect U}_2^{\rm T}{\vect U}_1\|_{\rm F}^2\right|&\le C(d_1-\sum_{k=1}^{d_1}\lambda_k^2)\delta\\ &=C(d_1-\aff_{\set X}^2)\delta, \end{align*} as desired. \end{proof} \begin{proposition} We have \begin{equation*} \left|\|\projP_{\set Y_2}{\vect Q}_1\|_{\rm F}^2-\|\projP_{\set Y_2}\hat{{\vect V}}_1\|_{\rm F}^2\right|\le C(d_1-\aff_{\set X}^2)\delta. \end{equation*} \end{proposition} \begin{proof} Similar to (\ref{eqn:decomposition1}), we have \begin{equation*} \|\projP_{\set Y_2}{\vect Q}_1\|_{\rm F}^2-\|\projP_{\set Y_2}\hat{{\vect V}}_1\|_{\rm F}^2=\sum_{k=1}^{d_1}(\|\projP_{\set Y_2}{\vect q}_{1,k}\|^2-\|\projP_{\set Y_2}\hat{\v}_{1,k}\|^2). \end{equation*} Denote by $\mathcal Z_k$ the space spanned by $\v_{1,1},\ldots,\v_{1,k}$. Then \begin{equation}\label{eqn:gram-schmidt} {\vect q}_{1,k}=\frac{\hat{\v}_{1,k}-\pz{k-1}\hat{\v}_{1,k}}{\|\hat{\v}_{1,k}-\pz{k-1}\hat{\v}_{1,k}\|}. \end{equation} Note that ${\vect q}_{1,k}$, $\hat{\v}_{1,k}$ are unit vectors, hence by the Pythagorean theorem we have \begin{equation}\label{eqn:pytha1} \|\hat{\v}_{1,k}-\pz{k-1}\hat{\v}_{1,k}\|^2=1-\|\pz{k-1}\hat{\v}_{1,k}\|^2, \end{equation} and \begin{align} &\|\projP_{\set Y_2}{\vect q}_{1,k}\|^2-\|\projP_{\set Y_2}\hat{\v}_{1,k}\|^2\nonumber\\ =&(1-\|\projP_{\set Y_2^\perp}{\vect q}_{1,k}\|^2)-(1-\|\projP_{\set Y_2^\perp}\hat{\v}_{1,k}\|^2)\nonumber\\ =&\|\projP_{\set Y_2^\perp}\hat{\v}_{1,k}\|^2-\|\projP_{\set Y_2^\perp}{\vect q}_{1,k}\|^2,\label{eqn:pytha2} \end{align} where $\mathcal Y_2^\perp$ denotes the orthogonal complement of $\mathcal Y_2$.
Combining (\ref{eqn:gram-schmidt}) and (\ref{eqn:pytha1}), we obtain \begin{align*} \|\projP_{\set Y_2^\perp}{\vect q}_{1,k}\|^2=&\frac{\|\projP_{\set Y_2^\perp}\hat{\v}_{1,k}-\projP_{\set Y_2^\perp}\pz{k-1}\hat{\v}_{1,k}\|^2}{1-\|\pz{k-1}\hat{\v}_{1,k}\|^2}\\ =&\phantom{+}\frac{\|\projP_{\set Y_2^\perp}\hat{\v}_{1,k}\|^2}{1-\|\pz{k-1}\hat{\v}_{1,k}\|^2}\\ &-\frac{2\langle\projP_{\set Y_2^\perp}\hat{\v}_{1,k},\projP_{\set Y_2^\perp}\pz{k-1}\hat{\v}_{1,k}\rangle}{1-\|\pz{k-1}\hat{\v}_{1,k}\|^2}\\ &+\frac{\|\projP_{\set Y_2^\perp}\pz{k-1}\hat{\v}_{1,k}\|^2}{1-\|\pz{k-1}\hat{\v}_{1,k}\|^2}. \end{align*} This together with (\ref{eqn:pytha2}) yields \begin{align} &\|\projP_{\set Y_2}{\vect q}_{1,k}\|^2-\|\projP_{\set Y_2}\hat{\v}_{1,k}\|^2\nonumber\\ =&-\frac{\|\projP_{\set Y_2^\perp}\hat{\v}_{1,k}\|^2\|\pz{k-1}\hat{\v}_{1,k}\|^2}{1-\|\pz{k-1}\hat{\v}_{1,k}\|^2}\nonumber\\ &+\frac{2\langle\projP_{\set Y_2^\perp}\hat{\v}_{1,k},\projP_{\set Y_2^\perp}\pz{k-1}\hat{\v}_{1,k}\rangle}{1-\|\pz{k-1}\hat{\v}_{1,k}\|^2}\nonumber\\ &-\frac{\|\projP_{\set Y_2^\perp}\pz{k-1}\hat{\v}_{1,k}\|^2}{1-\|\pz{k-1}\hat{\v}_{1,k}\|^2}.\label{eqn:decomposition2} \end{align} Since $\u_{1,k}$ is perpendicular to the subspace spanned by $\u_{1,1},\ldots,\u_{1,k-1}$, Lemma \ref{lem:line-aff-orthogonal} gives $\|\pz{k-1}\hat{\v}_{1,k}\|^2\le\frac{16}9\delta^2<\frac49\delta<1/2$ (recall that $\delta<1/4$).
The proof would be complete once we show \begin{align} \|\projP_{\set Y_2^\perp}\pz{k-1}\hat{\v}_{1,k}\|^2&\le C(1-\lambda_k^2)\delta^2,\label{eqn:component1}\\ \|\projP_{\set Y_2^\perp}\hat{\v}_{1,k}\|^2&\le C(1-\lambda_k^2).\label{eqn:component2} \end{align} By (\ref{eqn:error-of-normalized-col}) and the discussion prior to it (which says that $\|{\vect U}_2^{\rm T}\u_{1,k}\|=\lambda_k$), the following holds: \begin{align*} \|\projP_{\set Y_2^\perp}\hat{\v}_{1,k}\|^2&=1-\|\projP_{\set Y_2}\hat{\v}_{1,k}\|^2\\ &=(1-\lambda_k^2)-(\|\projP_{\set Y_2}\hat{\v}_{1,k}\|^2-\|{\vect U}_2^{\rm T}\u_{1,k}\|^2)\\ &\le(1-\lambda_k^2)(1+C\delta)\\ &\le C(1-\lambda_k^2), \end{align*} where the first inequality follows from Lemma \ref{lem:uniform-line-SRIP} applied to $\mathcal X_2$ and the $1$-dimensional subspace spanned by $\u_{1,k}$, noting that $\|\projP_{\set Y_2}\hat{\v}_{1,k}\|$ is the affinity between $\mathcal Y_2$ and $\bm\Phi\u_{1,k}$. This establishes \eqref{eqn:component2}. For \eqref{eqn:component1}, however, some more work is required. Let ${\vect Z}_{k-1}$ be a matrix whose columns constitute an orthonormal basis of $\mathcal Z_{k-1}$. Then $\pz{k-1}={\vect Z}_{k-1}{\vect Z}_{k-1}^{\rm T}$, which implies \begin{align*} \|\projP_{\set Y_2^\perp}\pz{k-1}\hat{\v}_{1,k}\|^2&=\|\projP_{\set Y_2^\perp}{\vect Z}_{k-1}{\vect Z}_{k-1}^{\rm T}\hat{\v}_{1,k}\|^2\\ &\le s_{\max}^2(\projP_{\set Y_2^\perp}{\vect Z}_{k-1})\|{\vect Z}_{k-1}^{\rm T}\hat{\v}_{1,k}\|^2. \end{align*} One recognizes at once that $\|{\vect Z}_{k-1}^{\rm T}\hat{\v}_{1,k}\|^2=\|\pz{k-1}\hat{\v}_{1,k}\|^2$, which is bounded by $C\delta^2$ according to Lemma \ref{lem:line-aff-orthogonal}. It remains to prove \begin{equation}\label{eqn:last-step} s_{\max}^2(\projP_{\set Y_2^\perp}{\vect Z}_{k-1})\le C(1-\lambda_k^2).
\end{equation} Since ${\vect Z}_{k-1}$ has orthonormal columns, it follows that \begin{align} s_{\max}^2(\projP_{\set Y_2^\perp}{\vect Z}_{k-1})&=\sup_{{\vect x}\in\mathbb S^{k-2}}\|\projP_{\set Y_2^\perp}{\vect Z}_{k-1}{\vect x}\|^2\nonumber\\ &=\sup_{\substack{\|{\vect x}\|=1\\ {\vect x}\in\mathcal Z_{k-1}}}\|\projP_{\set Y_2^\perp}{\vect x}\|^2.\label{eqn:s_max-conversion} \end{align} By definition, $\mathcal Z_{k-1}$ is spanned by the first $(k-1)$ columns of ${\vect V}_1=\bm\Phi{\vect U}_1$. Denote by ${\vect U}_{1,1:k-1}$ the first $(k-1)$ columns of ${\vect U}_1$. We have \begin{align*} \sup_{\substack{\|{\vect x}\|=1\\ {\vect x}\in\mathcal Z_{k-1}}}\|\projP_{\set Y_2^\perp}{\vect x}\|^2 &=1-\inf_{\substack{\|{\vect x}\|=1\\ {\vect x}\in\mathcal Z_{k-1}}}\|\projP_{\set Y_2}{\vect x}\|^2\\ &=1-\inf_{{\vect x}\in\mathbb S^{k-2}}\frac{\|\projP_{\set Y_2}\bm\Phi{\vect U}_{1,1:k-1}{\vect x}\|^2}{\|\bm\Phi{\vect U}_{1,1:k-1}{\vect x}\|^2}. \end{align*} Note that \[\frac{\|\projP_{\set Y_2}\bm\Phi{\vect U}_{1,1:k-1}{\vect x}\|^2}{\|\bm\Phi{\vect U}_{1,1:k-1}{\vect x}\|^2}={\operatorname{aff}}^2(\mathcal Y_2,\bm\Phi{\vect U}_{1,1:k-1}{\vect x}).\] Applying Lemma \ref{lem:uniform-line-SRIP} to $\mathcal X_2$ and the $1$-dimensional subspace spanned by ${\vect U}_{1,1:k-1}{\vect x}$, we obtain \begin{align} \sup_{\substack{\|{\vect x}\|=1\\ {\vect x}\in\mathcal Z_{k-1}}}\|\projP_{\set Y_2^\perp}{\vect x}\|^2 &\le \sup_{{\vect x}\in\mathbb S^{k-2}}(1-\|{\mathcal P}_{\mathcal X_2}{\vect U}_{1,1:k-1}{\vect x}\|^2)(1+C\delta)\nonumber\\ &\le C\sup_{{\vect x}\in\mathbb S^{k-2}}(1-\|{\mathcal P}_{\mathcal X_2}{\vect U}_{1,1:k-1}{\vect x}\|^2).\label{eqn:compression-last-step} \end{align} Finally we need to bound $\|{\mathcal P}_{\mathcal X_2}{\vect U}_{1,1:k-1}{\vect x}\|^2$ from below.
Since ${\mathcal P}_{\mathcal X_2}\u_{1,i}=\lambda_i\u_{2,i}$ by the choice of principal orthonormal bases, we have \begin{align} \|{\mathcal P}_{\mathcal X_2}{\vect U}_{1,1:k-1}{\vect x}\|^2=\sum_{i=1}^{k-1}\lambda_i^2x_i^2&\ge\sum_{i=1}^{k-1}\lambda_k^2x_i^2\nonumber\\ &=\lambda_k^2,\label{eqn:eig-estimation-for-last-step} \end{align} where we used $\lambda_1\ge\cdots\ge\lambda_{k-1}\ge\lambda_k$ and $\|{\vect x}\|=1$. Combining \eqref{eqn:s_max-conversion}, \eqref{eqn:compression-last-step} and \eqref{eqn:eig-estimation-for-last-step}, we obtain \eqref{eqn:last-step} as desired. \end{proof} \section{Discussions}\label{sec:discussions} This section discusses the proofs of Theorem \ref{thm:cov-est-to-subspace-rip} and other results in this paper, and their differences from previous works. {\color{black} A notable feature of Theorem \ref{thm:cov-est-to-subspace-rip} is that it is purely deterministic, i.e., it does not involve the randomness of $\bm\Phi$. This clears up the fog created by the mixture of probabilistic and deterministic arguments in previous works on dimensionality-reduced subspace clustering \cite{Heckel2017Dimensionality, Wang2019Theoretical} and on subspace RIP \cite{Li2019Rigorous}. Note that the concept of subspace embedding (see Remark \ref{rem:subspace-embedding}), which is somewhat similar to \eqref{eqn:orthonormal-perturbation-bound}, appears in the analysis in \cite{Wang2019Theoretical}. Theorem \ref{thm:cov-est-to-subspace-rip} is distinguished from theirs in at least two aspects: our analysis is deterministic and does not involve randomness, while the assumption in \cite{Wang2019Theoretical} is a probabilistic inequality; moreover, we proceed to analyze the subspace RIP of $\bm\Phi$, which is capable of handling various subspace-related tasks and algorithms, while \cite{Wang2019Theoretical} considered only SSC, a specific algorithm for the particular problem of subspace clustering.
Also note that the result in \cite{Wang2019Theoretical} requires $n=\Omega(d^{9/2})$ for subgaussian matrices to ensure the success of CSC, which is worse than the optimal scaling $n=O(d)$ that can be obtained from our theory, for instance, using the framework in \cite{Meng2018CSC}. Readers may find some similarity between the proof of Theorem \ref{thm:cov-est-to-subspace-rip} and the proof in \cite{Li2019Rigorous}. Indeed, the routine computations in both proofs are the same, e.g., \eqref{proj_norm_expansion}, \eqref{eqn:space_aff_telescope}--\eqref{eqn:decomposition2}. However, these routine computations are merely a non-substantial part of the proof, and the core difficulty is how to bound the quantities involved in these equations. To this end, our proof deviates significantly from that in \cite{Li2019Rigorous}. In fact, \cite{Li2019Rigorous} relied heavily on the fact that the Gaussian projections of two orthogonal vectors are independent to bound terms such as $\|\projP_{\set Y_2}{\vect V}_1\|$ in Lemma \ref{lem:line-aff-orthogonal} and $\|\projP_{\set Y_2}\bm\Phi\u_0\|$ in \eqref{proj_norm_expansion}; this argument cannot be generalized even to subgaussian matrices, let alone partial Fourier matrices, partial circulant matrices or log-concave ensembles. The situation gets more involved when the quantity of interest is complicated: it took over four pages in \cite{Li2019Rigorous} (see Appendix 8.8 there) to obtain the conditional independence needed to bound $\|\projP_{\set Y_2^\perp}\pz{k-1}\hat\v_{1,k}\|$ in \eqref{eqn:component1}. In our proof, we propose the novel Lemma \ref{lem:uniform-line-SRIP} (which was proved in \cite{Li2019Rigorous}, but only for one-dimensional $\mathcal X_1$) as a key intermediate step, and all the bounds needed follow easily from either assumption \eqref{eqn:orthonormal-perturbation-bound} or Lemma \ref{lem:uniform-line-SRIP}. This demonstrates the power of our abstract setting \eqref{eqn:orthonormal-perturbation-bound}.
} \section{Applications}\label{sec:applications} In this section we briefly mention some applications of our theory. As mentioned before, our theory provides a universal framework to analyze the effects of random compression on subspace-related tasks, and providing an exhaustive list of such tasks would be out of the scope of this paper. However, it is possible to describe the universal framework, which works for any subspace-related algorithm that admits a theoretical guarantee via affinity, as follows.\\ \textbf{Framework. }\emph{Assume that we have a theoretical guarantee for an algorithm $A$ that succeeds on a collection of $L$ subspaces $\mathcal X_1,\ldots,\mathcal X_L$ with probability at least $1-\delta$. Let $\bm\Phi$ be the random matrix described in Corollary \ref{cor:JL-implies-SRIP}, \ref{cor:BOS-implies-SRIP}, \ref{cor:circulant-implies-SRIP}, \ref{cor:strong-regularity-SRIP} or \ref{cor:log-concave-SRIP}. Let $\varepsilon$ be a sufficiently small positive number that depends only on the relative position of the subspaces. Then for $n$ satisfying the restriction in the corresponding corollary, algorithm $A$ succeeds on the data projected by $\bm\Phi$ with probability at least $1-\delta_{\text{proj}}-\delta$, where $\delta_{\text{proj}}$ is the error probability given in the corresponding corollary. } As examples, we roughly describe some useful consequences of our theory for two tasks: subspace clustering and active subspace detection \cite{Lodhi2018Detection}. \begin{theorem}[Compressed subspace clustering]\label{thm:sc} Let $\bm\Phi$ be a partial Fourier matrix.
Under some technical assumptions on the algorithm parameters (which are independent of $\bm\Phi$; see \cite{Meng2018CSC}), the Threshold-based Subspace Clustering (TSC) algorithm succeeds (see \cite{Meng2018CSC}) on the dataset compressed by $\bm\Phi$ with probability at least \[1-\frac{10}M-\sum_{l=1}^L\mathrm e^{-c(N_l+1)}-\mathrm e^{-c(\sqrt{d_*^2+K^{-2}\varepsilon^2n}-d_*)}\] given $n>C\varepsilon^{-3}\max\{d_*(\log^3d_*+\log L),\log^2L,\log^3N\}$ and \begin{align*} &\max _{k \ne l} \ \sqrt{ \frac{{{\operatorname{aff}}}^2(\mathcal X_k, \mathcal X_l)(1-\varepsilon) + d_*\varepsilon }{d_k \wedge d_l} } \\ &+ 6\sqrt{\frac{d_*}{n}}\left(1+\sqrt{\frac{6\log {M}}{d_*}}\right)^2 \\ \leq & \frac{1}{\sqrt{6M_{\max}\log {M}}}, \end{align*} where $c>0$ is a constant, $M$ denotes the number of data points and $M_{\max}$ denotes the maximal number of data points lying in the same subspace. \end{theorem} This follows from Corollary \ref{cor:BOS-implies-SRIP} and the analysis scheme proposed in \cite{Meng2018CSC}. We have chosen the partial Fourier matrix and the TSC algorithm only for simplicity of presentation; similar results hold for subgaussian matrices, partial circulant matrices, etc., and for the SSC and SSC-OMP algorithms, etc. It is perhaps worth mentioning that Theorem \ref{thm:sc} is, to the best of our knowledge, the first effective performance analysis for TSC compressed by partial Fourier matrices with the scaling $n=O(d_*\,\mathrm{polylog}(d_*))$, optimal up to polylogarithmic factors, since the analysis in \cite{Heckel2017Dimensionality} appears to be flawed, as pointed out in a footnote of this paper. We now turn to the task of active subspace detection. \begin{theorem}[Compressed active subspace detection] Let $\bm\Phi$ be a partial Fourier matrix.
The compressed maximum-likelihood detector (see \cite{Jiao2019Compressed}) for noiseless active subspace detection succeeds, under a Gaussian assumption on the data distribution, with probability at least \[ 1-4\sum_{i\ne j}\mathrm e^{-K({\operatorname{aff}}_*(1-\varepsilon)+d\varepsilon)d_*}-\mathrm e^{-c(\sqrt{d_*^2+K^{-2}\varepsilon^2n}-d_*)}, \] where \[ K(x):=\frac18\frac{(1-x/d_*-8/d_*)^2}{4+(1-x/d_*-8/d_*)} \] and ${\operatorname{aff}}_*$ denotes $\max_{k\ne l}{\operatorname{aff}}(\mathcal X_k,\mathcal X_l)$, given $\varepsilon\in(0,1)$ and $n>C\varepsilon^{-3}\max\{d_*(\log^3d_*+\log L),\log^2L,\log^3N\}$. A similar conclusion holds in the noisy case. \end{theorem} This follows from Corollary \ref{cor:BOS-implies-SRIP} and the analysis scheme proposed in \cite{Jiao2019Compressed}. Note that in \cite{Jiao2019Compressed} the above theorem is proved for random matrices with the exponential Johnson-Lindenstrauss property (Theorem 5 there), using results from the first version of this paper. Again, the choice of partial Fourier matrices is arbitrary and can be replaced by any other random matrices discussed in this paper. \section{Simulations}\label{sec:simulations} We verify our results on the Yale Face Database B \cite{Georghiades2001Few} and test the performance of Sparse Subspace Clustering (SSC) after random projection by a Gaussian matrix, a partial Fourier/Hadamard matrix, a partial circulant matrix, and a matrix with i.i.d. Student-t distributed ($\nu=5$) entries. The Yale Face Database B has ambient dimension $N=32256$ and contains the face images of $10$ human subjects. For convenience we randomly select $4$ subjects whose face images are subsequently clustered.
The matrices we chose are representatives of the three classes of random matrices that we have inspected: the Gaussian matrix represents matrices with the exponential Johnson-Lindenstrauss property; the partial Fourier/Hadamard matrix and the partial circulant matrix represent matrices with fast multiplication algorithms; and the i.i.d. Student-t matrix, whose entries have infinite fifth moments, represents matrices with heavy tails. Performance is evaluated in terms of the clustering error rate \cite{Elhamifar2013Sparse}, i.e., the fraction of randomly compressed images that the SSC algorithm assigns to the wrong subject; see Fig. \ref{fig:err-rate}. We are also concerned with the speed-up in computation afforded by fast matrix-vector multiplication algorithms for partial Fourier/Hadamard matrices and partial circulant matrices, which will be evaluated in terms of average running time, i.e., the time it takes to compute the random projection of a high-dimensional vector; see Fig. \ref{fig:running-time}. \paragraph{Computational Complexity} For unstructured $n\times N$ random matrices such as Gaussian matrices and Student-t matrices, it takes ${\mathcal{O}}(nN)$ time to compute the random projection of a vector. For partial Fourier/Hadamard matrices and partial circulant matrices, ${\mathcal{O}}(N\log N)$-time algorithms exist, thanks to the Fast Fourier Transform (FFT) and the Fast Walsh-Hadamard Transform (FWHT). More precisely, one may compute the sign-randomized\footnote{ This means multiplying each entry of ${\vect x}$ by a Rademacher random variable. See Theorem \ref{thm:apd:rip-to-jl} for details. } version of a vector ${\vect x}\in\mathbb R^N$ in ${\mathcal{O}}(N)$ time, and then compute its fast Fourier/Walsh-Hadamard transform $\hat{{\vect x}}$ in ${\mathcal{O}}(N\log N)$ time. By randomly sampling $n$ entries from $\hat{{\vect x}}$, which takes ${\mathcal{O}}(n)$ time, one finally obtains the randomly projected version of ${\vect x}$.
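The ${\mathcal{O}}(N\log N)$ pipeline just described (sign randomization, FFT, random subsampling) can be sketched in a few lines. The following is an illustrative NumPy implementation with hypothetical dimensions, not the exact code used in our experiments:

```python
import numpy as np

rng = np.random.default_rng(1)
N, n = 4096, 256  # ambient and compressed dimensions (toy values)

# One draw of the random matrix: a fixed Rademacher sign vector
# and a fixed random subset of n frequencies to keep.
signs = rng.choice([-1.0, 1.0], size=N)
rows = rng.choice(N, size=n, replace=False)

def partial_fourier_project(x):
    """O(N log N) projection: sign-randomize, FFT, subsample, rescale."""
    xhat = np.fft.fft(signs * x) / np.sqrt(N)  # unitary-normalized FFT
    return np.sqrt(N / n) * xhat[rows]         # keep n random frequencies

x = rng.standard_normal(N)
y = partial_fourier_project(x)
print(y.shape)  # an n-dimensional complex vector
```

The rescaling by $\sqrt{N/n}$ makes the squared norm of the output unbiased for $\|{\vect x}\|^2$, since the unitary FFT and the sign flips both preserve norms.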
For typical scenarios in practice we have $n\gg\log N$, hence random projections by partial Fourier/Hadamard matrices and partial circulant matrices are much more efficient than random projections by unstructured matrices. \paragraph{Discussions} As one may see in Fig. \ref{fig:err-rate}, the error rates of all types of random projections converge to the baseline, i.e., the error rate of SSC without random projection, as $n$ tends to $N$. This is consistent with our theory that all these random matrices preserve affinities between subspaces. The running time of random projection shown in Fig. \ref{fig:running-time} coincides with our analysis above. Note that the running time is plotted in logarithmic scale. For the partial Fourier/Hadamard matrix and the partial circulant matrix, the running time is almost constant in $n$. The running times of the Student-t matrix and the Gaussian matrix both grow linearly with respect to $n$; this is because they both involve an ${\mathcal{O}}(nN)$-time matrix-by-vector multiplication. Note that the Student-t matrix takes a somewhat longer time than the Gaussian matrix, possibly due to the higher complexity of its implementation, i.e., of generating Student-t distributed variables. The running time of the partial Hadamard matrix is longer than that of the partial Fourier matrix and the partial circulant matrix, which may be caused by a less efficient implementation of the FWHT than that of the FFT. Except for very small $n$, the partial Fourier/Hadamard matrix and the partial circulant matrix are significantly faster than the Gaussian matrix and the Student-t matrix. For small $n$ the running time of the Gaussian/Student-t matrix is shorter than that of the partial Hadamard matrix. However, as $n$ grows large, for instance when $n>10000$, the partial Hadamard matrix becomes the better choice. Our analysis indicates that this advantage would become even more obvious when the ambient dimension $N$ is larger and $n\gg\log N$.
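The affinity-preservation phenomenon underlying these experiments can also be checked on synthetic data. The following sketch (our own illustration, with arbitrary sizes $N=2000$, $d=5$, $n=400$) builds two $5$-dimensional subspaces sharing a common $2$-dimensional part and compares their affinity before and after a Gaussian random projection:

```python
import numpy as np

rng = np.random.default_rng(2)
N, d, n = 2000, 5, 400

def orth(A):
    """Orthonormal basis for the column space of A (via reduced QR)."""
    Q, _ = np.linalg.qr(A)
    return Q

def affinity(A, B):
    """aff(X1, X2): Frobenius norm of U1^T U2 for orthonormal bases U1, U2."""
    return np.linalg.norm(orth(A).T @ orth(B))

# Two d-dimensional subspaces sharing a 2-dimensional intersection.
shared = rng.standard_normal((N, 2))
X1 = np.hstack([shared, rng.standard_normal((N, d - 2))])
X2 = np.hstack([shared, rng.standard_normal((N, d - 2))])

Phi = rng.standard_normal((n, N)) / np.sqrt(n)  # Gaussian projection

aff_before = affinity(X1, X2)
aff_after = affinity(Phi @ X1, Phi @ X2)
print(aff_before, aff_after)  # nearly equal
```

Consistently with Theorem \ref{thm:cov-est-to-subspace-rip}, the shared directions keep their zero principal angles exactly (they are mapped to the same vectors by $\bm\Phi$), while the nearly orthogonal directions stay nearly orthogonal after projection.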
\begin{figure}\centering \includegraphics[width=0.7\linewidth]{err_rate.eps} \caption{Clustering error rate vs. compressed dimension $n$ for the Yale Face Database B. The error rate of SSC without random compression is approximately 2.3\%. }\label{fig:err-rate} \end{figure} \begin{figure}\centering \includegraphics[width=0.7\linewidth]{runtime.eps} \caption{Average running time of randomly projecting $256$ vectors vs. compressed dimension $n$. Note that for the partial Fourier matrix and the partial Hadamard matrix the running time is almost independent of $n$. Both the $x$-axis and the $y$-axis are drawn in logarithmic scale, so the ${\mathcal{O}}(nN)$ running times of the Gaussian matrix and the Student-t matrix appear as straight lines in the figure.} \label{fig:running-time} \end{figure} \section{Conclusion}\label{sec:conclusion} In this paper we provided a deterministic characterization of subspace RIP in terms of near-isometry on subspaces. This result makes it possible to analyze the subspace RIP of matrices in a unified manner. As examples, we proved with this result that a large variety of random matrices possess subspace RIP, including subgaussian matrices, partial Fourier/Hadamard matrices and partial circulant/Toeplitz matrices, random matrices with independent strongly regular rows, and log-concave ensembles. This significantly enlarges the collection of random matrices known to possess subspace RIP in the literature, demonstrating the wide applicability of subspace RIP. Subspace RIP, or in plain language, the near-invariance of affinity under random projections, has played an important role in the analysis of Compressed Subspace Clustering algorithms. Hence our result identifies more scenarios where random projection and CSC may apply, and has the potential to give better performance guarantees for CSC algorithms. Furthermore, since subspace RIP is a universal concept that does not depend on any specific algorithm, our result may find applications in various subspace-based machine learning algorithms, which we leave to future research.
\section{Appendix: Exponential Johnson-Lindenstrauss Property}\label{apd:jl-property} This appendix deals with random matrices with the exponential Johnson-Lindenstrauss property and their subspace RIP. The appendix is divided into two parts. In the first part, we prove that subgaussian matrices and partial Fourier/Hadamard matrices satisfy the exponential Johnson-Lindenstrauss property, to illustrate the wide applicability of this concept. In the second part, we introduce the standard tool of covering arguments and use it to prove Lemma \ref{lem:JL-orthonormal-preserving}. \subsection{Examples of Random Matrices with Exponential JL Property} Here we provide a non-comprehensive list of common random matrices that fulfill the exponential Johnson-Lindenstrauss property. Such matrices can be roughly divided into two categories, whose exponential Johnson-Lindenstrauss property stems from the subgaussian concentration property and from the Restricted Isometry Property for sparse vectors, respectively. The most important example in the first category is subgaussian random matrices, and that in the second category is randomly sampled Bounded Orthonormal Systems (BOS), a class of random matrices including partial Fourier matrices and partial Hadamard matrices. We discuss these two categories in turn. \paragraph{Subgaussian concentration} We begin by defining subgaussian random variables and subgaussian random vectors. \begin{definition}[Section 7.1, \cite{Foucart2017Mathematical}]\label{def:subgaussian} Fix a positive constant $K$. A \emph{$K$-subgaussian random variable} is a random variable $X$ satisfying \begin{equation*} \mathbb E{\rm e}^{tX}\le {\rm e}^{K^2t^2/2} \end{equation*} for any $t\in\mathbb R$. A \emph{subgaussian matrix} is a random matrix with each entry a $K$-subgaussian random variable. \end{definition} Gaussian variables might be the most common examples of subgaussian random variables. 
Other examples include variables with the Rademacher distribution or the uniform distribution on $[-1,1]$. In fact, any centered bounded random variable is subgaussian. It is possible to generalize the above definition to a multi-dimensional setting. \begin{definition} Let $\bm\Gamma$ be a positive semidefinite matrix. A \emph{$\bm\Gamma$-subgaussian random vector} is a random vector $\u$ taking values in $\mathbb R^n$ such that for any ${\vect x}\in\mathbb R^n$, \begin{equation*} \mathbb E{\rm e}^{\langle\u,{\vect x}\rangle}\le {\rm e}^{\langle\bm\Gamma{\vect x},{\vect x}\rangle/2}. \end{equation*} Such a random vector is said to satisfy the \emph{Bernstein condition} if \begin{equation*} \mathbb E\left|\|\u\|^2-\mathbb E\|\u\|^2\right|^k\le Ck!\|\bm\Gamma\|_{\rm op}^{k-2}\|\mathbb E(\u\u^{\rm T})\|_{\rm F}^2 \end{equation*} for some constant $C>0$ and all integers $k\ge2$, where $\|\cdot\|_{\rm op}$ denotes the operator norm. \end{definition} \begin{theorem}[\cite{Chen2018Hanson}, Theorem 2.10] Let ${\vect A}$ be a random matrix with independent $\bm\Gamma$-subgaussian columns satisfying the Bernstein condition, and assume $\mathbb E({\vect A}^{\rm T}{\vect A})={\bf I}$. Then ${\vect A}$ satisfies the exponential Johnson-Lindenstrauss property. \end{theorem} \begin{corollary} Assume ${\vect A}$ is a random matrix satisfying $\mathbb E({\vect A}^{\rm T}{\vect A})={\bf I}$. If in addition ${\vect A}$ is of one of the following forms, then ${\vect A}$ satisfies the exponential Johnson-Lindenstrauss property: \begin{enumerate}[label=\alph*)] \item A random matrix with independent subgaussian rows; \item A Gaussian matrix with independent columns; \item A product of a positive semidefinite matrix and a subgaussian matrix with independent entries. \end{enumerate} \end{corollary} \begin{proof} a) is classical and can be found in \cite{Vershynin2010Introduction}; b) and c) are proved in \cite{Chen2018Hanson}. 
\end{proof} \paragraph{Restricted Isometry Property} The restricted isometry property for sparse vectors \cite{Candes2008Restricted} has been a very powerful tool in the analysis of compressed sensing and related algorithms. A vector is called $s$-sparse if at most $s$ of its entries are non-zero. The RIP for sparse vectors is defined in the following way: \begin{definition} A matrix ${\vect A}$ is said to possess the RIP if there exists a function $\delta(s)\ge0$, such that for any positive integer $s$ and any $s$-sparse vector ${\vect x}$, \begin{equation*} (1-\delta(s))\|{\vect x}\|^2\le\|{\vect A}{\vect x}\|^2\le(1+\delta(s))\|{\vect x}\|^2. \end{equation*} The function $\delta(s)$ is called the \emph{restricted isometry constant} of ${\vect A}$. \end{definition} It is easy to see that the exponential Johnson-Lindenstrauss property implies the RIP; see for instance Theorem 5.2 in \cite{Baraniuk2008Simple}. The converse is also true in some sense, as the following theorem shows. \begin{theorem}[\cite{Rauhut2010Compressive,Krahmer2011New}]\label{thm:apd:rip-to-jl} Assume ${\vect A}$ is an $n\times N$ matrix with the RIP and restricted isometry constant $\delta(s)$. Fix some $\varepsilon\in(0,1/2)$, and assume further that for some $s>0$ we have $\delta(s)<\varepsilon/4$. Let ${\vect D}_{\epsilon}$ be a diagonal matrix with i.i.d. Rademacher random variables on its diagonal. Then for any ${\vect x}\in\mathbb R^N$, we have \begin{equation*} {\mathbb P}(\left|\|{\vect A}{\vect D}_{\epsilon}{\vect x}\|^2-\|{\vect x}\|^2\right|>\varepsilon\|{\vect x}\|^2)\le 2{\rm e}^{-\tilde c s} \end{equation*} for some universal constant $\tilde c>0$. \end{theorem} Partial Fourier matrices and partial Hadamard matrices are both examples of a more general class of random matrices, namely randomly sampled Bounded Orthonormal Systems (BOS). 
For such matrices it was shown that the restricted isometry constants are sufficiently small: \begin{theorem}[\cite{Rauhut2010Compressive, Foucart2017Mathematical}]\label{thm:apd:rip-of-bos} Let ${\vect A}\in\mathbb C^{n\times N}$ be the random sampling associated to a BOS with constant $K\ge 1$. For $\zeta,\eta_1,\eta_2\in(0,1)$, if \begin{align*} \frac{n}{\log(9n)}&\ge C_1\eta_1^{-2}K^2s\log^2(4s)\log(8N),\\ n&\ge C_2\eta_2^{-2}K^2s\log(\zeta^{-1}), \end{align*} then with probability at least $1-\zeta$ the restricted isometry constant $\delta(s)$ of $\frac{1}{\sqrt n}{\vect A}$ satisfies $\delta(s)\le\eta_1+\eta_1^2+\eta_2$. (Here $C_1$, $C_2$ are universal positive constants.) \end{theorem} One may combine Theorems \ref{thm:apd:rip-to-jl} and \ref{thm:apd:rip-of-bos} to obtain several modified versions of the exponential Johnson-Lindenstrauss property that randomly sampled BOS satisfy. For example, taking $\eta_1=\eta_2=\varepsilon/4$, $s=\lceil C\varepsilon^2n/\sqrt N\rceil$ and $\zeta={\rm e}^{-C's}$, one obtains \begin{equation*} {\mathbb P}(\left|\|{\vect A}{\vect D}_{\epsilon}{\vect x}\|^2-\|{\vect x}\|^2\right|>\varepsilon\|{\vect x}\|^2)\le 2{\rm e}^{- c\varepsilon^2n/\sqrt N}. \end{equation*} \subsection{Proof of Lemma \ref{lem:JL-orthonormal-preserving}} We will use standard covering arguments, e.g. \cite{Baraniuk2008Simple}, to prove \eqref{eqn:orthonormal-perturbation-bound} for random matrices with the exponential Johnson-Lindenstrauss property. \begin{definition} An \emph{$\varepsilon$-net} of a subset $X$ of a Euclidean space is a finite subset $\mathcal N$ of $X$ such that for any $x\in X$ we have \begin{equation*} \min_{z\in\mathcal N}\|x-z\|<\varepsilon. \end{equation*} The \emph{metric entropy} of $X$ is the function $N(X,\varepsilon)$ defined as the minimum cardinality of an $\varepsilon$-net of $X$. \end{definition} For subsets of Euclidean space, the metric entropy can be easily bounded by a volume packing argument. 
For the Euclidean unit ball the corresponding result reads as follows: \begin{lemma}[Proposition C.3, \cite{Foucart2017Mathematical}]\label{lem:metric-entropy-bound} Let $B_n$ be the unit ball in $\mathbb R^n$. Then \begin{equation*} N(B_n, \varepsilon)\le\left(1+\frac{2}{\varepsilon}\right)^n. \end{equation*} \end{lemma} The usage of covering arguments is demonstrated by the following lemma: \begin{lemma}[\cite{Vershynin2010Introduction}, Lemma 5.3]\label{lem:covering-approximation} Suppose $\mathcal N$ is a $\frac12$-net of $\mathbb S^{n-1}$. Let ${\vect A}$ be an $n\times n$ matrix. Then \begin{equation*} \|{\vect A}\|\le 2\sup_{{\vect x}\in\mathcal N}\|{\vect A}{\vect x}\|. \end{equation*} \end{lemma} Now we are ready to finish the proof of Lemma \ref{lem:JL-orthonormal-preserving}. Note that it suffices to show \begin{equation}\label{eqn:in-cor-orthonormal-preserving1} {\mathbb P}\left(\max_{{\vect x}\in\mathbb S^{d-1}}\left|\|\bm\Phi{\vect U}{\vect x}\|^2-1\right|>\varepsilon\right)\le {\rm e}^{-\tilde c\varepsilon^2 n+3d}. \end{equation} For any ${\vect x}\in\mathbb S^{d-1}$, the exponential Johnson-Lindenstrauss property implies \begin{equation}\label{eqn:in-cor-orthonormal-preserving2} {\mathbb P}(\left|\|\bm\Phi{\vect U}{\vect x}\|^2-1\right|>\varepsilon)\le 2{\rm e}^{-\tilde c\varepsilon^2 n}. \end{equation} The desired inequality (\ref{eqn:in-cor-orthonormal-preserving1}) follows from (\ref{eqn:in-cor-orthonormal-preserving2}) and a standard covering argument. By Lemma \ref{lem:metric-entropy-bound}, one may find a set $\mathcal N\subseteq\mathbb S^{d-1}$ with cardinality $5^d$ such that \begin{equation*} \max_{{\vect x}\in\mathbb S^{d-1}}\min_{{\bf z}\in\mathcal N}\|{\vect x}-{\bf z}\|\le \frac12. 
\end{equation*} Then by Lemma \ref{lem:covering-approximation} \begin{equation}\label{eqn:cor-orthonormal-preserving-pointwise} \max_{{\vect x}\in\mathbb S^{d-1}}\left|\|\bm\Phi{\vect U}{\vect x}\|^2-1\right|\le 4\max_{{\vect x}\in\mathcal N}\left|\|\bm\Phi{\vect U}{\vect x}\|^2-1\right|. \end{equation} By (\ref{eqn:in-cor-orthonormal-preserving2}), (\ref{eqn:cor-orthonormal-preserving-pointwise}) and the union bound, \begin{equation*} {\mathbb P}(\max_{{\vect x}\in\mathcal N}\left|\|\bm\Phi{\vect U}{\vect x}\|^2-1\right|>\varepsilon)\le 2\cdot 5^d {\rm e}^{-\tilde c\varepsilon^2 n}. \end{equation*} The proof is completed once we note that $2\cdot 5^d\le e^{3d}$ for $d\ge1$. \section{Appendix: Randomly Sampled BOS and Partial Circulant Matrices}\label{apd:fast-matrices} The purpose of this appendix is to prove Lemma \ref{lem:BOS-orthonormal-preserving} and Lemma \ref{lem:circulant-orthonormal-preserving}. We will need Theorems \ref{thm:apd:rip-to-jl} and \ref{thm:apd:rip-of-bos} as stated in Section \ref{apd:jl-property}, as well as the covering argument adapted there. \subsection{Randomly Sampled BOS} \begin{proof}[Proof of Lemma \ref{lem:BOS-orthonormal-preserving}] By Theorem \ref{thm:apd:rip-of-bos}, the restricted isometry constants of $\bm\Phi$ satisfy \begin{equation}\label{eqn:bos-small-rip} {\mathbb P}\left(\delta(s)\le\frac{\varepsilon}{4}\right)\ge1-\exp\left(-C^{-1}\varepsilon^2K^{-2}\frac ns\right), \end{equation} given \begin{equation}\label{eqn:large-n} n\ge CK^2\varepsilon^{-2}s\log^2s\log(K^2\varepsilon^{-2}s\log N)\log N. \end{equation} If $\delta(s)\le\varepsilon/4$ holds, then by Theorem \ref{thm:apd:rip-to-jl} and a standard covering argument (cf.\ the proof of Lemma \ref{lem:JL-orthonormal-preserving}), $1-\varepsilon<s_{\min}(\bm\Phi{\vect U})\le s_{\max}(\bm\Phi{\vect U})<1+\varepsilon$ holds with probability at least $1-\mathrm e^{-cs+3d}$. 
Thus by the union bound, \[1-\varepsilon<s_{\min}(\bm\Phi{\vect U})\le s_{\max}(\bm\Phi{\vect U})<1+\varepsilon\] holds with probability at least \[ 1-\exp(-cs+3d)-\exp\left(-C^{-1}\varepsilon^2K^{-2}\frac ns\right) \] if \eqref{eqn:large-n} holds. Set \begin{equation*} s=\left\lceil\frac{3d+\sqrt{9d^2+4cC^{-1}\varepsilon^2K^{-2}n}}{2c}\right\rceil. \end{equation*} Then for $n>C'\varepsilon^{-2}K^2$ the probability above is at least \[ 1-2\exp\left(\frac34d-\frac14\sqrt{9d^2+C'^{-1}\varepsilon^2K^{-2}n}\right), \] as desired. It remains to check that \eqref{eqn:large-n} holds. Note that $s\ge C'^{-1}\varepsilon K^{-1}\sqrt n$. For $n\ge C'\log N$ we have $K^2\varepsilon^{-2}\log N\le s^2$, thus $\log(K^2\varepsilon^{-2}s\log N)\le 3\log s$. It then suffices to show \begin{equation}\label{eqn:large-n-final-form} n\ge CK^2\varepsilon^{-2}s\log^3s\log N. \end{equation} But $s\le C'\max\{d, \varepsilon K^{-1}\sqrt n\}$, which implies \eqref{eqn:large-n-final-form} when $n\ge C'K^3\varepsilon^{-3}\max\{d\log^3d,\log^3N\}$. \end{proof} \subsection{Partial Circulant/Toeplitz Matrices} \begin{proof}[Proof of Lemma \ref{lem:circulant-orthonormal-preserving}] The equations (3.3)-(3.7) in \cite{Vybiral2011Variant}, with (3.8) there replaced by the Hanson-Wright inequality to control $\|\vect\Sigma{\vect V}^*\vect a\|^2$, imply that for any ${\vect x}\in\mathbb S^{N-1}$, we have $|\|\bm\Phi{\vect x}\|^2-1|<\varepsilon$ with probability at least \[1-4N\mathrm e^{-\frac{t}4}-2\mathrm e^{-\frac{cn\varepsilon^2}{t}}\] for any $t>0$. Taking $t=\varepsilon\sqrt n$, the above probability is at least $1-4N\mathrm e^{-c\varepsilon\sqrt n}$. By a standard covering argument (cf. 
Proof of Corollary \ref{cor:JL-implies-SRIP}), $1-\varepsilon<s_{\min}(\bm\Phi{\vect U})\le s_{\max}(\bm\Phi{\vect U})<1+\varepsilon$ holds with probability at least $1-4N\cdot 5^{d}\cdot\mathrm e^{-c\varepsilon\sqrt n}$, which is greater than $1-\mathrm e^{-c\varepsilon\sqrt n/2}$ when $n>C\varepsilon^{-2}(d+\log N)^2$. \end{proof} \section{Appendix: Random Matrices With Heavy-Tailed Distributions}\label{apd:heavy-tails} The purpose of this appendix is to provide some material on heavy-tailed distributions, in particular, distributions with finite moments characterized by a strong regularity condition and log-concave ensembles, and to provide proofs of Lemma \ref{lem:strong-regularity-orthonormal-preserving} and Lemma \ref{lem:log-concave-orthonormal-preserving}. \subsection{Finite Moments} Our proof depends on a theorem from \cite{Srivastava2013Covariance}, which reads as follows. \begin{theorem}[\cite{Srivastava2013Covariance}]\label{thm:s&v} Consider independent isotropic random vectors ${\vect x}_i$ taking values in $\mathbb R^d$. Assume that ${\vect x}_i$ satisfies the strong regularity assumption: for some $C',\eta>1$, one has \begin{equation*} {\mathbb P}(\|{\mathcal P}{\vect x}_i\|^2>t)\le C't^{-\eta},\quad\text{for $t>C'\operatorname{rank}{\mathcal P}$} \end{equation*} for every orthogonal projection ${\mathcal P}$ in $\mathbb R^d$. Then there exists a polynomial function $\operatorname{poly}(\cdot)$ whose coefficients depend only on $C'$ and $\eta$, such that for any $\varepsilon\in(0,1)$ and for $n>\operatorname{poly}(\varepsilon^{-1})d$, we have \begin{equation*} \mathbb E\left\|\frac1n\sum_{i=1}^n{\vect x}_i{\vect x}_i^{\mathrm T}-\mathbf I\right\|\le\varepsilon. \end{equation*} \end{theorem} \begin{proof}[Proof of Lemma \ref{lem:strong-regularity-orthonormal-preserving}] Let $d=\lceil\alpha n\rceil$, where $\alpha\in(0,1)$ is to be determined later (we shall choose some $\alpha$ that does not depend on $n$). 
Fix a $d$-dimensional subspace of $\mathbb R^N$ and denote by ${\vect U}$ any of its orthonormal bases; it follows that ${\vect U}$ is an $N\times d$ matrix. We shall prove \eqref{eqn:orthonormal-perturbation-bound} for some $\delta$ and $\varepsilon$ using Theorem \ref{thm:s&v} and the strong regularity condition \eqref{eqn:strong-regularity}. Denote the rows of $\bm\Phi$ by $\frac{1}{\sqrt n}{\vect x}_1^{\mathrm T},\ldots,\frac{1}{\sqrt n}{\vect x}_n^{\mathrm T}$, and let ${\vect y}_i={\vect U}^{\mathrm T}{\vect x}_i$. Then the ${\vect y}_i$'s are independent, centered, and isotropic. Furthermore, we have \begin{equation*} \|{\vect U}^{\mathrm T}\bm\Phi^{\mathrm T}\bm\Phi{\vect U}-\mathbf I\|=\left\|\frac{1}n\sum_{i=1}^n{\vect y}_i{\vect y}_i^{\mathrm T}-\mathbf I\right\|. \end{equation*} Before applying Theorem \ref{thm:s&v}, we need to show that ${\vect y}_i$ fulfills the strong regularity condition. For any orthogonal projection ${\mathcal P}$ of rank $k$ in $\mathbb R^d$, we consider the tail of $\|{\mathcal P}{\vect y}_i\|=\|{\mathcal P}{\vect U}^{\mathrm T}{\vect x}_i\|$. First we note that there exists some $d\times k$ matrix ${\vect V}$ with orthonormal columns such that ${\mathcal P}={\vect V}\V^{\mathrm T}$. Thus $\|{\mathcal P}{\vect U}^{\mathrm T}{\vect x}_i\|=\|{\vect U}{\mathcal P}{\vect U}^{\mathrm T}{\vect x}_i\|=\|({\vect U}{\vect V})({\vect U}{\vect V})^{\mathrm T}{\vect x}_i\|$. But ${\vect U}{\vect V}$ is a matrix of rank $\le k$ with orthonormal columns, since $\operatorname{rank}({\vect U}{\vect V})\le\operatorname{rank}{\vect V}$ and $({\vect U}{\vect V})^{\mathrm T}({\vect U}{\vect V})={\vect V}^{\mathrm T}{\vect U}^{\mathrm T}{\vect U}{\vect V}=\mathbf I$. By \eqref{eqn:strong-regularity} we have \begin{equation*} {\mathbb P}(\|{\mathcal P}{\vect y}_i\|^2>t)\le C't^{-\eta},\quad\text{for $t>C'\operatorname{rank}({\vect U}{\vect V})$,} \end{equation*} hence for $t>C'k$. This shows that ${\vect y}_i$ satisfies the strong regularity condition. 
The lemma then follows from Theorem \ref{thm:s&v} and Chebyshev's inequality. \end{proof} \subsection{Log-Concave Ensembles} We will need the following well-known results\footnote{ Sharper results are known in the literature, e.g. \cite{Mendelson2014Singular}, but this does not yield significant improvement in our case. } on covariance estimation with log-concave ensembles. \begin{theorem}[\cite{Adamczak2010Quantitative,Adamczak2011Sharp}]\label{thm:adamczak} Let ${\vect x}_1,\ldots,{\vect x}_n$ be independent centered isotropic random vectors in $\mathbb R^d$ with log-concave distributions. Then there exist universal constants $c\in(0,1)$, $C>0$ such that \begin{equation*} \left\|\frac1n\sum_{i=1}^n{\vect x}_i{\vect x}_i^{\mathrm T}-\vect I\right\|\le C\sqrt{\frac dn} \end{equation*} with probability at least $1-2\exp(-c\sqrt d)$. \end{theorem} By definition, it is easy to check that any low-dimensional marginal of a log-concave distribution is log-concave. \begin{proof}[Proof of Lemma \ref{lem:log-concave-orthonormal-preserving}] Let ${\vect U}$ be an orthonormal basis for a $d$-dimensional subspace of $\mathbb R^N$, where $d=\lfloor C^{-2}\varepsilon^{2}n\rfloor$. For $n>2C^2\varepsilon^{-2}d_2$ we have $d\ge 2d_2$, which suffices for our purpose. Set ${\vect y}_i={\vect U}^{\mathrm T}{\vect x}_i$. Arguing as in the proof of Lemma \ref{lem:strong-regularity-orthonormal-preserving}, we obtain that the ${\vect y}_i$ are independent, centered, and isotropic, and that \begin{equation}\label{eqn:log-concave-midstep} \|{\vect U}^{\mathrm T}\bm\Phi^{\mathrm T}\bm\Phi{\vect U}-\vect I\|=\left\|\frac1n\sum_{i=1}^n{\vect y}_i{\vect y}_i^{\mathrm T}-\vect I\right\|. \end{equation} The distribution of ${\vect y}_i$ is a $d$-dimensional marginal of that of ${\vect x}_i$, hence log-concave. 
It follows from Theorem \ref{thm:adamczak} and \eqref{eqn:log-concave-midstep} that \begin{equation*} 1-\varepsilon<s_{\min}(\bm\Phi{\vect U})\le s_{\max}(\bm\Phi{\vect U})<1+\varepsilon \end{equation*} with probability at least $1-2\exp(-c\sqrt d)$. When $n>10c^{-2}C^2\varepsilon^{-2}d_2$, we have $d>\frac12C^{-2}\varepsilon^2n+3c^{-2}$, thus the probability above is at least $1-\exp(-c'\varepsilon\sqrt n)$, as desired. \end{proof} \bibliographystyle{IEEEtran}
\section{Introduction} \input{intro} \section{Setup, Notation, and Statement of Results} \input{notation} \subsection{Geometry of M}\label{SectionGeomOfM} \input{geomofm} \subsection{The Operator $\square_b$} \input{theopboxb} \subsection{Statement of Results}\label{SectionResults} \input{statementofresults} \section{Background} \input{backgroundintro} \input{nisops} \section{On Diagonal Bounds}\label{SectionOnDiag} \input{ondiag} \section{Finite Speed of Propagation}\label{SectionFiniteSpeed} \input{finitespeed} \section{Off Diagonal Bounds}\label{SectionOffDiag} \input{offdiag} \section{Multipliers}\label{SectionMultipliers} \input{multipliers} \section{Other Examples}\label{SectionOtherExamples} \input{otherexintro} \subsection{Example: A Generalization of Theorem 4 of \cite{SikoraRieszTransformGaussianBoundsAndTheMethodOfWaveEquation}}\label{ExampleSik} \input{exsikora} \subsection{Example: Pseudoconvex CR Manifolds of Finite Type}\label{ExamplePseudoConvex} \input{expseudoconvex} \subsection{Example: Polynomial Model Domains}\label{ExamplePolyModel} \input{expolymodel} \subsection{Example: Operators on a Compact Manifold, Defined by Vector Fields}\label{ExampleCptMfld} \input{excptmfld} \subsection{Example: Quasi-homogeneous Vector Fields}\label{ExampleQuasiHomog} \input{exquasihomog} \bibliographystyle{amsalpha}
\section{Introduction} The clustering of galaxies is an increasingly important cosmological probe of the low-redshift Universe. Building on the success of recent surveys like SDSS BOSS/eBOSS \cite{Alam:2016hwk,Alam:2020sor}, multiple experiments will increase the volume and the number of observed galaxies in the near future, including DESI \cite{Aghamousa:2016zmz}, Subaru HSC/PFS \cite{2014PASJ...66R...1T,Hikage:2018qbn}, Euclid \cite{Amendola:2016saw}, Vera Rubin Observatory/LSST \cite{Abell:2009aa}, SPHEREx \cite{Dore:2014cca}, Roman Telescope/WFIRST \cite{2019arXiv190205569A}, and others. With their lower cosmic variance and shot noise, these experiments demand accurate theoretical modeling at the percent level or better to allow for unbiased cosmological parameter inference (e.g., \cite{Nishimichi:2020tvu}). Numerical simulations can provide such modeling for the clustering of dark matter, and while they are numerically expensive, interpolation schemes may be employed to overcome the computational challenge of this approach. However, the clustering of galaxies is biased with respect to that of the dark matter, in an a priori unknown way that depends on the type of galaxies observed, for example on their mass or the environment in which they formed. While numerical schemes can be implemented to place galaxies in collapsed dark matter halos in simulations, it is not known what family of assignment schemes is correct for a given galaxy sample, and how to make this fully general is a topic of ongoing research; see e.g.~\cite{Hadzhiyska:2019xnf,Hadzhiyska:2020iri} for two recent studies. \vskip 4pt An alternative approach, which we follow here, is to describe the galaxy density field perturbatively, including all possible terms in the bias expansion allowed by the symmetries of the problem~\cite{Desjacques:2016bnm}. 
In this case, one has an analytical model for the clustering of galaxies in redshift space, which can readily be used for cosmological parameter inference using standard MCMC techniques~\cite{Nishimichi:2020tvu,Chudaykin:2020aoj,DAmico:2020kxu}. Apart from gravitational and biasing nonlinearities, another important ingredient of this model is stochastic noise. Using the correct parametric form of the power spectrum of this noise is critical for cosmological parameter analyses from galaxy surveys: the noise power spectrum contributes to the total model power spectrum, so any missing ingredient in the noise model could lead to biased cosmological parameter inferences. \vskip 4pt The simplest approach is to assume a white noise power spectrum, roughly of the size of the shot noise. However, it can be argued based on symmetries that the stochastic noise of the galaxy overdensity in redshift space should have corrections to the white power spectrum, scaling as $k^2$ and $k^2\mu^2$, where $\mu$ is the cosine with respect to the line of sight and $k$ is the wavenumber of the galaxy overdensity~\cite{Perko:2016puo}. This prediction can be tested using simulated galaxy catalogs. When sample variance is present and a simulated galaxy power spectrum is fitted with multiple parameters (e.g., galaxy bias parameters, counterterms, and stochasticity parameters), it is challenging to identify the exact parametric form of the noise power spectrum. A more powerful technique to characterize the form of the noise is to measure it directly, subtracting the model prediction from the simulated galaxy overdensity field. This avoids sample variance if model prediction and simulation are computed for the same initial random seed. 
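Schematically, this field-level measurement subtracts the model field from the simulated field mode by mode and estimates the power spectrum of the residual. The following is a minimal NumPy sketch (with toy stand-in fields and grid sizes of our own choosing, not the pipeline used in this paper):

```python
import numpy as np

rng = np.random.default_rng(42)
ngrid, boxsize = 64, 500.0  # cells per side, box size in Mpc/h (illustrative)
cellvol = (boxsize / ngrid) ** 3

# Stand-ins for the simulated and model density fields; in practice both are
# computed from the SAME initial conditions, so the large-scale modes cancel
# in the difference and the residual power is free of sample variance.
delta_sim = rng.standard_normal((ngrid,) * 3)
delta_model = delta_sim - 0.1 * rng.standard_normal((ngrid,) * 3)  # toy 10% error

def residual_power(delta_a, delta_b, boxsize):
    """Spherically averaged power spectrum of the residual delta_a - delta_b."""
    n = delta_a.shape[0]
    fk = np.fft.rfftn(delta_a - delta_b) * (boxsize / n) ** 3
    kf = 2 * np.pi / boxsize                     # fundamental wavenumber
    kx = np.fft.fftfreq(n, d=1.0 / n)            # integer mode numbers
    kz = np.fft.rfftfreq(n, d=1.0 / n)
    kmag = kf * np.sqrt(kx[:, None, None] ** 2 + kx[None, :, None] ** 2
                        + kz[None, None, :] ** 2)
    pk = np.abs(fk) ** 2 / boxsize ** 3
    edges = kf * np.arange(1, n // 2 + 1)        # linear k bins of width kf
    idx = np.digitize(kmag.ravel(), edges)
    counts = np.bincount(idx, minlength=len(edges) + 1)[1:len(edges)]
    sums = np.bincount(idx, weights=pk.ravel(), minlength=len(edges) + 1)[1:len(edges)]
    return edges[:-1], sums / counts

k, p_err = residual_power(delta_sim, delta_model, boxsize)
# For uncorrelated cell-level noise the residual power is white:
# P_err ~ sigma^2 * cell volume = 0.1^2 * (boxsize/ngrid)^3.
```

A redshift-space analysis would additionally bin in $\mu$ to capture the predicted $k^2\mu^2$ anisotropy; the isotropic binning above is only the simplest version of the estimator.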
While this noise has been measured for simulations of dark matter \cite{Baldauf:2015zga,Taruya:2018jtk}, dark matter halos in real space \cite{Marcel1811}, and 21cm radiation \cite{McQuinn:2018zwa}, it has not been investigated for halos or galaxies in redshift space. This is the subject of this paper. A primary goal is to use field-level methods to conduct a sample-variance-free test of the anisotropic noise prediction. At the same time, such a measurement is also a test of the perturbative model for galaxy clustering in redshift space. Such investigations of the noise properties, combined with efforts to model the galaxy power spectrum at the subpercent level, provide a solid theoretical foundation for analyzing future galaxy surveys. \vskip 4pt The paper is outlined as follows. We first model and measure the velocities of a simulated sample of galaxies. Based on the redshift space distortions generated by these velocities, we introduce a model for the galaxy density in redshift space. We then measure the error of this model by comparing against simulated galaxies in redshift space. We characterize its scale dependence and compare it against theoretical expectations, before concluding. \vskip 4pt An accompanying Python software package, \textsc{perr}, is available online \href{https://github.com/mschmittfull/perr}{\faGithub} \footnote{\url{https://github.com/mschmittfull/perr}}. It is based on \textsc{nbodykit} \cite{Hand:2017pqn} \href{https://github.com/bccp/nbodykit}{\faGithub} \footnote{\url{https://github.com/bccp/nbodykit}} and can be used to generate the 3D models for the velocity and galaxy density and compare them with simulations. \vskip 4pt \section{Galaxy velocities and redshift space displacements} We start by discussing the velocity of galaxies in real space, which determines the redshift space displacement, i.e.~the line-of-sight displacement that must be applied to galaxies' real space positions to obtain their observed positions in redshift space. 
We review a perturbative model for this redshift space displacement and compare it against simulations. Later, we will build on this to obtain a model of the galaxy overdensity in redshift space and test that against simulations. \vskip 4pt \subsection{Velocity of Lagrangian particles} We can model velocities following Matsubara \cite{2008PhRvD..77f3530M} (also see, e.g., \cite{2013MNRAS.429.1674C}). In redshift space $\vec s$, the location of an object at $\vec x$ is mis-identified due to the peculiar velocity $\vec v=a\dot\vec x$ along the line of sight $\hat \vz$, \begin{align} \label{eq:1} \vec s = \vec x + \frac{\hat \vz\cdot \vec v(\vec x)}{aH}\hat \vz\;, \end{align} where $a$ is the scale factor and $H$ the Hubble parameter. Without redshift space distortions (RSD), the relationship between Lagrangian coordinates $\vq$ and Eulerian coordinates $\vec x$ is given by the nonlinear displacement $\vec \psi$, \begin{align} \label{eq:2} \vec x = \vq+\vec \psi(\vq)\;. \end{align} Including RSD, this becomes \begin{align} \label{eq:3} \vec s = \vq+\vec \psi(\vq) + \frac{\hat \vz\cdot \vec v(\vec x)}{aH}\hat \vz. \end{align} In perturbation theory, the velocity field is related to the time derivative of the displacement field, \begin{align} \label{eq:EulVel} \vec v(\vec x) &= a\dot\vec x =a\dot\vec \psi=a\sum_{n=1}^\infty nfH\vec \psi_n(\vq)\;, \end{align} where we have used the perturbative expansion $\vec \psi=\sum_n\vec \psi_n$. Note that $\vec \psi_n\propto D^n(z)$, so that $\dot\vec \psi_n=nfH\vec \psi_n$, where $D(z)$ is the linear growth factor and $f\equiv d \log D/ d \log a$. Therefore, we can write \begin{align} \label{eq:5} \vec s = \vq+\vec \psi^{s}(\vq) \;. 
\end{align} The redshift space displacement $\vec \psi^{s}(\vq)$ can be written in compact form as follows, \begin{align} \label{eq:6} \vec \psi^{s}(\vq) = \sum_{n=1}^\infty R^{[n]} (\hat {\boldsymbol{z}}) \cdot \vec \psi_n ({\boldsymbol{q}}) \; , \end{align} where the matrices $R^{[n]} (\hat{\boldsymbol{z}})$ are defined as \begin{equation} R^{[n]}_{ij} (\hat{\boldsymbol{z}}) \equiv \delta_{ij} + nf\hat z_i \hat z_j \;. \end{equation} This standard result describes the mapping of Lagrangian particles at $\vq$ to their Eulerian redshift-space coordinates $\vec s$, using the velocity predicted by Lagrangian perturbation theory. \vskip 4pt \subsection{Continuous velocity field} Before using the above mapping of Lagrangian particles to Eulerian redshift space, let us go one step back and consider the velocity field itself, so we can compare it between the model and simulations. To compute the velocity as a continuous field in Eulerian space one can proceed as follows. First, the continuity equation in Eulerian space gives \begin{align} \label{eq:Continuity} a\dot\delta(\vec x) + \nabla\cdot[(1+\delta(\vec x))\vec v(\vec x)] = 0 \;. \end{align} Second, the Lagrangian Perturbation Theory (LPT) expression for the time derivative of the Eulerian density in Fourier space is \begin{align} \label{eq:LPTDeltaDot} a\dot\delta(\vk) &= a\frac{\partial}{\partial t}\int d^3\vq\, e^{i\vk\cdot(\vq+\vec \psi(\vq,t))} = ai\vk\cdot\tilde{\dot\vec \psi}(\vk)\;. \end{align} Here, we defined the shifted field (in Fourier space) \begin{align} \tilde{\dot\vec \psi}(\vk) \equiv \int d^3\vq \,\dot\vec \psi(\vq) e^{i\vk\cdot(\vq+\vec \psi(\vq))}\;. \end{align} This is similar to the Zel'dovich approximation, moving Lagrangian particles from $\vq$ to $\vq+\vec \psi$, but weighting each particle by $\dot\vec \psi$; this operation is analogous to the shifted bias operators in \cite{Marcel1811}. 
Combining Eqs.~\eq{Continuity} and \eq{LPTDeltaDot} shows that the curl-free part of the momentum density is \begin{align} (1+\delta(\vec x))\vec v(\vec x)=-a\tilde{\dot\vec \psi}(\vec x) \;. \end{align} The resulting curl-free velocity or RSD displacement is given by \begin{align} \label{eq:vmodel} \frac{\vec v(\vec x)}{aH}=-f\sum_{n=1}^\infty \frac{n\widetilde{\vec \psi_n}(\vec x)}{1+\delta(\vec x)} \equiv -f\sum_{n=1}^\infty n\widehat{\vec \psi_n}(\vec x) \;. \end{align} Note that we have used \begin{align} \dot\vec \psi(\vq) = \sum_{n=1}^\infty nfH \vec \psi_n(\vq)\;. \end{align} The equations above are exact if $\vec \psi$ is the true displacement field. In practice, we use the linear displacement for shifting, and we truncate the sum over $n$, keeping only the $n=1$ term. \vskip 4pt \subsection{Evaluating the continuous velocity model in a 3D box} To obtain the shifted $n$th order displacement $\widetilde{\vec \psi_n}(\vec x)$ in \eqq{vmodel}, Lagrangian particles are weighted by $\vec \psi_n(\vq)$ and then shifted from their Lagrangian position $\vq$ by the real-space displacement $\vec \psi_1(\vq)$. When painting the particles to a regular grid, particles are summed up, including their weights. This is the same procedure as for the shifted bias operators defined in \cite{Marcel1811}. Instead of dividing by $1+\delta$ in \eqq{vmodel}, one can use a modified painting scheme that divides by the number of particles contributing to each cell. This is denoted with a hat on the right-hand side of \eqq{vmodel}, and this is what we will use in the following. In detail, we implement the modified painting scheme for evaluating the above model for the continuous velocity field in a 3D box as follows. \begin{itemize} \item Place particles on a regular grid in Lagrangian space. Call their positions $\vq_i$. \item Compute $\dot\vec \psi_{1,i}=\dot\vec \psi_1(\vq_i)=fH \vec \psi_1(\vq_i)$ for each particle. \item Shift each particle to $\vec x_i=\vq_i+\vec \psi_1(\vq_i)$. 
\item Paint the shifted catalog to a grid, weighting each particle by $\dot\vec \psi_{1,i}$. For each cell, divide by the number of particles contributing to that cell. \item The resulting field $\widehat{\dot\vec \psi}(\vec x)$ as a function of Eulerian coordinates is our model for the Eulerian velocity. \end{itemize} Due to the averaging operation in the painting step, the field value does not increase if more $\vq$ particles with the same velocity end up in the same region; this ensures that we indeed obtain the velocity and not the momentum field. The procedure is similar to the one used to generate shifted bias operators \cite{Marcel1811}, except that we are now shifting $\dot\vec \psi(\vq_i)$ instead of bias operators $\mathcal{O}(\vq_i)$, and we modify the painting scheme such that particles contributing to a cell are averaged instead of summed. \vskip 4pt A 2D slice of the continuous Eulerian velocity field generated with this method is shown in the two right panels in \fig{VelModelSlices}. This shows that the model predicts large-scale flows that are coherent over tens of megaparsecs, moving matter towards large-scale overdense regions by a few megaparsecs. \vskip 4pt \subsection{Comparison with simulated velocity} \label{se:VelSimComparison} To test the above model for the velocity field, we compare it against velocities of different objects measured in an N-body simulation (in real space). In general it is not known exactly which objects in an N-body simulation correspond to galaxies observed by a specific galaxy survey. 
We produce a galaxy sample that approximately reproduces observed properties of SDSS BOSS CMASS galaxies following the procedure of \cite{Nishimichi:2020tvu}.\footnote{The \textsc{Rockstar} \cite{Rockstar} \href{https://bitbucket.org/gfcstanford/rockstar}{\faBitbucket} phase-space halo finder is used to identify halos and subhalos in the snapshot of a dark-matter-only N-body simulation with $1536^3$ particles in an $L=500\ h^{-1}\text{Mpc}$ periodic box at redshift $z=0.6$, run with \textsc{MP-Gadget} \cite{yu_feng_2018_1451799} \href{https://github.com/MP-Gadget/MP-Gadget}{\faGithub}. These halos and subhalos are then populated with galaxies with a probability that depends on the virial mass of the object; see Eq.~(1) in \cite{Nishimichi:2020tvu}. We choose their CMASS1 parameters, i.e.~$\log_{10}M_\mathrm{min}[h^{-1}\text{M}_\odot]=12.97$ and $\sigma_{\log_{10}M}=0.35$.} Note that this does not explicitly populate halos with centrals and satellites using a halo occupation distribution (HOD), but uses subhalos found with the phase-space halo finder \textsc{Rockstar} \cite{Rockstar} and selects them with a soft mass cut to represent galaxies. This accounts for the velocity offsets between the halo center of mass and central subhalos \cite{Rockstar}, where galaxies are expected to form. Also, satellites are based on actual subhalo positions and velocities within larger halos rather than assigning satellites with some manual prescription (e.g., using the velocity of random dark matter particles in a halo or an NFW profile). The velocities of these mock galaxies, converted to the corresponding RSD displacement, are shown in the left panel of \fig{PTChallVel}. The middle panel shows the analytical prediction using the model from the last section, reading out the continuous velocity field at the location of the simulated galaxies, and the right panel shows the residual displacement between simulation and model.
\vskip 4pt \begin{figure}[tbp] \centering \includegraphics[width=1.0\textwidth]{{plots/slice_painted_model_RSD_disp_R0.0_043d5cb}.pdf} \caption{2D slice of the linear density, Zel'dovich density, and $x$- and $y$-component of the continuous velocity field predicted by \eqq{vmodel} for $n_\mathrm{max}=1$. The predicted velocity field is coherent over tens of Megaparsecs, with most regions flowing towards the cluster and filament in the center of the slice. To generate the Zel'dovich density and the velocity prediction, $1536^3$ particles in a Lagrangian space box with $L=500\ h^{-1}\text{Mpc}$ were shifted by the first-order displacement. All fields are evaluated at redshift $z=0.6$. } \label{fig:VelModelSlices} \end{figure} This shows that the analytical model describes the large-scale velocity field of galaxies rather well. But it fails in some highly clustered regions, where the residual displacement can be tens of Mpc and is pointing in random directions. These galaxies are subhalos whose velocity is close to the virial velocity of their parent halo rather than the large-scale velocity field in their vicinity. Given the random direction of these velocity offsets, it seems challenging to model these velocities with any deterministic model. For the galaxies in redshift space, this can be regarded as the well-known Fingers of God effect, which corresponds to random motions of satellite galaxies along the line of sight, leading to a relative suppression of clustering along the line of sight. \vskip 4pt \begin{figure}[tbp] \centering \includegraphics[width=0.99\textwidth]{plots/slice_vectors_gal_ptchall_PsiDot1_9fc36a6.pdf} \caption{RSD displacements in Mpc/h in the $x-y$ plane, for simulated galaxies (left), the model prediction from \eqq{vmodel} evaluated at galaxy positions (center), and their residual (right). The model captures the large-scale bulk flows rather well. 
The largest mistakes happen in clustered regions where the velocity vector in the simulation is large and goes in random directions at nearly the same location; these are likely satellites moving at the virial velocity of their host halo. Since the velocity goes in opposite directions in nearly the same location, there is little hope to model this deterministically, so it should be regarded as an unpredictable noise contribution. } \label{fig:PTChallVel} \end{figure} From \fig{PTChallVel} it is clear that only a minority of galaxies have these large velocities and corresponding large RSD displacements, which are very discrepant with the analytical model prediction, while the prediction is rather accurate for the majority of galaxies. Indeed, we find that the model predicts the simulated RSD displacement with an error of less than $2\ h^{-1}\text{Mpc}$ for 80\% of the galaxies and with an error of less than $3\ h^{-1}\text{Mpc}$ for 90\% of the galaxies; see the left panel of \fig{fractions}. This means that only 10--20\% of galaxies have velocities that are grossly wrong. Removing such galaxies could enable perturbation theory to reach smaller scales when modeling the power spectrum in redshift space, at the expense of a small increase in shot noise. Removing the 13\% worst-modeled galaxies reduces the rms displacement error by a factor of 2, from $2\ h^{-1}\text{Mpc}$ to $1\ h^{-1}\text{Mpc}$.
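The quoted fractions and trimmed rms can be computed with a few lines of NumPy. The sketch below uses toy Gaussian residuals in place of the actual simulation-minus-model displacements:

```python
import numpy as np

# Hypothetical residual RSD displacements (Mpc/h) per galaxy; in the paper
# these come from simulation minus model. Here we just draw toy numbers.
rng = np.random.default_rng(0)
residuals = np.abs(rng.normal(scale=1.5, size=10_000))

def fraction_below(res, d):
    """Fraction of galaxies whose residual displacement is below d."""
    return np.mean(res < d)

def rms_after_removing_worst(res, frac_removed):
    """Rms residual after discarding the worst-modeled fraction."""
    keep = np.sort(res)[: int(len(res) * (1.0 - frac_removed))]
    return np.sqrt(np.mean(keep**2))

f2 = fraction_below(residuals, 2.0)
rms_all = np.sqrt(np.mean(residuals**2))
rms_trim = rms_after_removing_worst(residuals, 0.13)
```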
In other words, about half the rms displacement error ($1\ h^{-1}\text{Mpc}$) comes from the worst 13\% of galaxies (these have a residual displacement $>2.5\ h^{-1}\text{Mpc}$), while the other half of the rms ($1\ h^{-1}\text{Mpc}$) comes from the best 87\% of galaxies (these have residual displacement $<2.5\ h^{-1}\text{Mpc}$).\footnote{For comparison, the rms RSD displacement is $3.9\ h^{-1}\text{Mpc}$ for the simulated galaxies and $3.0\ h^{-1}\text{Mpc}$ for the model prediction evaluated at galaxy positions.} Identifying these satellite galaxies observationally in redshift space is a challenging task; though see e.g.~\cite{Rodriguez:2020fos} for recent progress and application to SDSS data.\footnote{Typical approaches seek to identify clustered groups in the observed data. Nonlocal selections like this can induce additional scale dependence in the power spectrum. Another approach could be to use summary statistics in Fourier space. For example, one could optimize weights of galaxies such that the small-scale power spectrum quadrupole is maximized, corresponding to a suppression of the Fingers of God, while keeping the small-scale power spectrum monopole (shot noise) small to ensure that most galaxies are still included. A concrete way would be to maximize the ratio of the power spectrum quadrupole over monopole at high $k$. The optimal weights can be found by solving an eigenvalue problem. Going further, one can impose a prior on the weights, e.g.~to set some fraction of them to 0 and the rest to 1, or to follow the halo mass function to implement halo mass weighting and suppress shot noise. To enforce non-negative weights one can use a sigmoid function. The resulting cost function can be optimized using gradient descent, noting that derivatives of power spectrum multipoles with respect to galaxy weights can be evaluated using FFTs. 
As long as the weights are parameterized as functions of observed galaxy properties, the resulting weighted galaxy sample is still a biased tracer and can be described by the same galaxy bias model as in conventional galaxy clustering analyses. We leave it to future work to explore this idea.} \vskip 4pt \begin{figure}[tbp] \centering \includegraphics[width=0.45\textwidth]{plots/fraction_of_galaxies_with_small_residual_4b9d530} \includegraphics[width=0.45\textwidth]{plots/rms_displ_residual_with_worst_gals_removed_554a1e8} \caption{\emph{Left panel}: Fraction of galaxies for which the residual RSD displacement (PT challenge galaxies minus shifted $\dot\psi$ model) is smaller than some value $d$. \emph{Right panel}: Rms residual RSD displacement after removing different fractions of galaxies with the worst residual. Removing the worst 13\% of galaxies reduces the rms residual displacement by a factor of 2. } \label{fig:fractions} \end{figure} Of course the results above are specific to the mock galaxies generated, and they will differ for other tracers and models. We briefly discuss this in \app{OtherVelocityPlots}. We also note that it is challenging to quantify the velocity error in more detail, for example using power spectra, because it is difficult to compute a continuous velocity field from a discrete tracer as it is unclear how to define the velocity field at locations with no objects. While we will use the model prediction for the velocity field to model the galaxy clustering in redshift space, it may also be useful in its own right for modeling the kSZ effect or other cosmological probes of the velocity field. \vskip 4pt \MyFloatBarrier \section{Galaxy overdensity in redshift space} Having shown that the velocity field predicted by Lagrangian perturbation theory adequately traces that in simulations, we now proceed to model the galaxy overdensity in redshift space and compare it against simulations to characterize the quality and error of the model. 
\vskip 4pt \subsection{Model of the galaxy overdensity in redshift space} A perturbative model for the galaxy overdensity field in redshift space can be derived following the same procedure as for the real-space modeling~\cite{Marcel1811}. One of the key ingredients needed for a successful model is the large displacements induced by the long-wavelength density fluctuations. If they are not accounted for properly, the model will fail on scales smaller than $\mathcal O(10)\; {\rm Mpc}$, since the positions of the over- or underdensities will be wrong. It is important to stress that the resulting decorrelation with the nonlinear field is much larger than the naive expectation from the one-loop calculation~\cite{Marcel1811,Taruya:2018jtk}. Indeed, while the effects of the bulk flows are guaranteed to cancel in any $n$-point correlation function (in real or redshift space) due to the Equivalence Principle~\cite{Creminelli:2013nua,Creminelli:2013poa,Kehagias:2013yd}, their impact on the level of realizations of density fields is much more dramatic. For this reason it is natural to use Lagrangian perturbation theory as a description for the nonlinear galaxy density field, since it properly captures the effect of large displacements by design. However, since our measurements in simulations are in Eulerian coordinates, it is useful to rewrite the model to resemble the perturbative expansion in Eulerian perturbation theory. We give the details of this derivation in this section.
\vskip 4pt The galaxy density field realization in Eulerian space, including RSD, can be modeled as follows \cite{Matsubara:2008wx} \begin{align} \delta_g^s({\boldsymbol{k}},\hat{\boldsymbol{z}}) = \int d^3 {\boldsymbol{q}}\, (1+\delta_g^{\rm L}({\boldsymbol{q}})) e^{-i{\boldsymbol{k}}\cdot\left({\boldsymbol{q}}+\vec \psi^s({\boldsymbol{q}}) \right) } \;, \label{eq:ModelStart} \end{align} where the bias expansion in Lagrangian coordinates up to cubic order is given by~\cite{Desjacques:2016bnm} \begin{align} \delta_g^{{\rm L}}({\boldsymbol{q}})\ =\ b_1^{{\rm L}}\,\delta_1({\boldsymbol{q}})&\,+\,b_2^{{\rm L}}\,[\delta_2({\boldsymbol{q}})-\sigma_1^2]\, +\, b_{{\cal G}_2}^{{\rm L}}{\cal G}_2({\boldsymbol{q}})\nonumber\\[4pt] &\, +\,b_3^{{\rm L}}\,\delta_3({\boldsymbol{q}})\,+\, b_{{\cal G}_2\delta}^{{\rm L}}\,[{{\cal G}_2\delta}]({\boldsymbol{q}})\, +\, b_{{\cal G}_3}^{{\rm L}}\, {\cal G}_3({\boldsymbol{q}})\,+\, b_{\Gamma_3}^{{\rm L}}\,\Gamma_3({\boldsymbol{q}}) \; . \end{align} In our notation $\delta_n({\boldsymbol{q}})\equiv \delta_1^n({\boldsymbol{q}})$ and $\delta_1({\boldsymbol{q}})$ is the linear density field. The explicit form of all bias operators as well as their relation to other bases used in the literature can be found in~\cite{Desjacques:2016bnm}. We can also use the perturbative expansion of the nonlinear displacement \begin{equation} \vec \psi^s({\boldsymbol{q}}) = \sum_{n=1}^\infty R^{[n]} (\hat {\boldsymbol{z}}) \cdot \vec \psi_n ({\boldsymbol{q}}) \; . \end{equation} Since the linear displacement is the largest contribution to $\vec \psi$, we can expand all higher order terms from the exponent in \eqq{ModelStart} and treat them as additional nonlinearities in the bias expansion for the galaxy density field. The explicit derivation is given in Appendix~\ref{app:CubicPTmodel}. 
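For reference, the first-order displacement entering this expansion is obtained from the linear density by an inverse-gradient operation: using $\delta_1=-\nabla\cdot\vec \psi_1$, one has $\vec \psi_1(\vk)=i\vk\,\delta_1(\vk)/k^2$. A minimal 1D NumPy sketch (toy grid, illustrative only, not the analysis code):

```python
import numpy as np

def zeldovich_displacement(delta1, box_size):
    """First-order (Zel'dovich) displacement psi_1 from the linear
    density delta_1 on a periodic 1D grid: psi_1(k) = i delta_1(k) / k,
    the 1D analogue of i k delta_1(k) / k^2."""
    n = delta1.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    dk = np.fft.fft(delta1)
    psik = np.zeros_like(dk)
    nz = k != 0
    psik[nz] = 1j * dk[nz] / k[nz]
    return np.real(np.fft.ifft(psik))
```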
Here we report only the final result \begin{align} \label{eq:main_cubic_model} \delta_g^s({\boldsymbol{k}},\hat{\boldsymbol{z}}) = \int d^3 {\boldsymbol{q}}\, \Big[ 1+ \delta_g^{\rm L} & -\frac 3{14} {\cal G}_2 - \frac 3{14} (1+b_1^{\rm L}) \delta_1{\cal G}_2 + \frac 16 \Gamma_3 + \frac 19 {\cal G}_3 \nonumber \\ & - \frac 37 f {\cal G}_2^{\parallel} - \frac 37 f b_1^{\rm L} \delta_1 {\cal G}_2^{\parallel} - \frac 58 f \Gamma_3^{\parallel} + \frac 13 f {\cal G}_3^{\parallel} - \frac 9{14} f \mathcal K_3 - \frac 3{14} f^2 \delta_1^{\parallel} {\cal G}_2^{\parallel} \nonumber \\ & - R^{[2]}_{ij}\psi_2^i \partial_j \big( (1+b_1^L)\delta_1 + f\delta_1^{\parallel} \big)\Big] e^{-i{\boldsymbol{k}}\cdot({\boldsymbol{q}}+R^{[1]} \vec \psi_1)} \;. \end{align} Note that the explicit Lagrangian coordinates are suppressed to avoid clutter. The additional nonlinear terms that appear only in redshift space are defined as \begin{align} \mathcal O^{\parallel}({\boldsymbol{q}},\hat {\boldsymbol{z}}) & \equiv \hat z^i \hat z^j \frac{\partial_i\partial_j}{\nabla^2} \mathcal O({\boldsymbol{q}}) \;, \\ \mathcal K_3({\boldsymbol{q}},\hat {\boldsymbol{z}}) & \equiv \hat z_i \hat z_j \frac{\partial_i\partial_m}{\nabla^2} \delta_1({\boldsymbol{q}}) \frac{\partial_m\partial_j}{\nabla^2} {\cal G}_2({\boldsymbol{q}}) \;. \end{align} \vskip 4pt Let us make a couple of comments about this model for the galaxy density field in redshift space. On top of $\delta_g^{\rm L}$, all additional nonlinear operators come from expanding the second and third order displacement from the exponent. Most of these nonlinear terms can be written in the form of bias operators in redshift space, with the exception of the second and third line in~\eqref{eq:main_cubic_model}, which represent the second-order shift acting on the linear density field. It is worth noting that all new terms have fixed coefficients as expected. 
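Numerically, the projected operators $\mathcal O^{\parallel}$ are cheap to evaluate: in Fourier space the operator $\hat z^i \hat z^j \partial_i\partial_j/\nabla^2$ is simply multiplication by $k_z^2/k^2$. A small NumPy sketch on a toy periodic grid (illustrative only, not the analysis code):

```python
import numpy as np

def parallel_part(field, los_axis=2):
    """Apply the line-of-sight projector (z.grad)(z.grad)/laplacian to a
    periodic 3D field: multiply by kz^2 / k^2 in Fourier space."""
    n = field.shape[0]
    k1 = np.fft.fftfreq(n)  # any k units cancel in the ratio kz^2 / k^2
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    kpar = (kx, ky, kz)[los_axis]
    k2_safe = np.where(k2 == 0.0, 1.0, k2)
    proj = kpar**2 / k2_safe  # kpar = 0 at the zero mode, so proj = 0 there
    return np.real(np.fft.ifftn(proj * np.fft.fftn(field)))
```

A field varying only along the line of sight is returned unchanged, while a field varying only transverse to it is projected to zero.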
While this model can appear a bit cumbersome at first sight, it is equivalent (up to two-loop terms that we neglected) to the more familiar results either in Lagrangian~\cite{2013MNRAS.429.1674C,Vlah:2016bcl,Chen:2020fxs} or IR-resummed Eulerian perturbation theory~\cite{Senatore:2014via,Senatore:2014vja,Baldauf:2015xfa,Perko:2016puo,Blas:2016sfa,Ivanov:2018gjr}. Finally, for the purposes of comparing the theory and simulations, all third-order terms in square brackets can be absorbed into a transfer function multiplying $\delta_1$. We can therefore write \begin{align} \delta_g^s({\boldsymbol{k}},\hat {\boldsymbol{z}}) = & \int d^3 {\boldsymbol{q}}\, \Big[ 1 - \frac 37 f {\cal G}_2^{\parallel}(\vq) \nonumber \\ & + \beta_1(k,\mu)\,\delta_1({\boldsymbol{q}})\,+\,b_2^{{\rm L}}\,[\delta_2({\boldsymbol{q}})-\sigma_1^2]\, +\, \left(b_{{\cal G}_2}^{{\rm L}}-\tfrac{3}{14}\right){\cal G}_2({\boldsymbol{q}}) \Big] e^{-i{\boldsymbol{k}}\cdot({\boldsymbol{q}}+R^{[1]} \vec \psi_1({\boldsymbol{q}}))} \;, \label{eq:deltagModelLong} \end{align} where $\mu$ is the cosine of the angle between the Fourier mode ${\boldsymbol{k}}$ and the line-of-sight $\hat {\boldsymbol{z}}$: $\mu\equiv\vk\cdot\hat\vz/k$. We have finally arrived at the point where we can write the simplified model for the galaxy density field in redshift space directly in Eulerian coordinates.
Following~\cite{Marcel1811} and defining redshift-space shifted operators as \begin{align} \tilde{\mathcal{O}}(\vk,\hat {\boldsymbol{z}}) = \int d^3 {\boldsymbol{q}}\, \mathcal{O}(\vq) e^{-i{\boldsymbol{k}}\cdot({\boldsymbol{q}}+R^{[1]} \vec \psi_1({\boldsymbol{q}}))}\;, \end{align} the model is given by \begin{align} \delta_g^s({\boldsymbol{k}},\hat {\boldsymbol{z}}) =\; & \delta_Z^s(\vk,\hat {\boldsymbol{z}}) - \frac{3}{7}f\tilde{\mathcal G}_2^\parallel(\vk,\hat {\boldsymbol{z}}) \nonumber \\ & \quad + \beta_1(k,\mu)\tilde\delta_1(\vk,\hat {\boldsymbol{z}}) + b_2^{\rm L} \tilde\delta_2^\perp(\vk,\hat {\boldsymbol{z}}) + \left(b_{{\cal G}_2}^{{\rm L}}-\tfrac{3}{14}\right) \tilde{\mathcal G}_2^\perp(\vk,\hat {\boldsymbol{z}})\;. \end{align} Note that the transfer function $\beta_1(k,\mu)$ is defined as \begin{equation} \beta_1(k,\mu) \equiv b_1^{\rm L} + \sum_a c_a \frac{\langle \tilde \delta_1({\boldsymbol{k}},\hat {\boldsymbol{z}}) \tilde {\mathcal O}^{[3]}_a (-{\boldsymbol{k}},\hat{\boldsymbol{z}}) \rangle}{\langle \tilde \delta_1({\boldsymbol{k}},\hat {\boldsymbol{z}}) \tilde \delta_1(-{\boldsymbol{k}},\hat {\boldsymbol{z}}) \rangle} \;, \end{equation} where the sum runs over all cubic terms in equation~\eqref{eq:main_cubic_model} and the coefficients $c_a$ can be either Lagrangian biases or deterministic constants. Crucially, using the transfer function comes at no price, since the new model has exactly the same power spectrum up to one-loop order as the original galaxy field given in \eqq{main_cubic_model}. Note that this transfer function also contains all higher-derivative counterterms and higher-derivative bias operators, such as \begin{equation} \delta_g^{\rm L} ({\boldsymbol{q}}) \supset \left( R_1^2 \nabla^2 + R_2^2 (\hat {\boldsymbol{z}} \cdot \nabla)^2 + R_3^4 (\hat {\boldsymbol{z}} \cdot \nabla)^4 \right) \delta_1({\boldsymbol{q}}) + \cdots \;, \end{equation} where $R_i$ are the corresponding length scales.
Even though the last term seems to be of higher order in perturbation theory, it has been shown that it is very significant, particularly if the Fingers of God effect is more pronounced~\cite{Chudaykin:2020hbf}. The contribution of these operators to the transfer function is a simple polynomial in $k$ and $\mu$: \begin{equation} \beta_1(k,\mu) \supset R_1^2 k^2 + R_2^2 k^2 \mu^2 + R_3^4 k^4 \mu^4 + \cdots. \end{equation} \vskip 4pt To obtain the best-possible fit to the data involving second-order fields only, and at the same time test the perturbative model, we can also promote the second-order biases to transfer functions. We will find later that they can indeed be set to constants without affecting the model error much, in agreement with the perturbation theory prediction. With this in mind, the most general model we can write is \begin{align} \label{eq:deltagModel} \delta_g^s({\boldsymbol{k}},\hat {\boldsymbol{z}}) = \;& \delta_Z^s(\vk,\hat {\boldsymbol{z}}) - \frac{3}{7}f\tilde{\mathcal G}_2^\parallel(\vk,\hat {\boldsymbol{z}}) \nonumber \\ & \quad + \beta_1(k,\mu)\tilde\delta_1(\vk,\hat {\boldsymbol{z}}) + \beta_2(k,\mu)\tilde\delta_2^\perp(\vk,\hat {\boldsymbol{z}}) + \beta_{\mathcal G_2}(k,\mu)\tilde{\mathcal G}_2^\perp(\vk,\hat {\boldsymbol{z}})\;. \end{align} The $\beta_n(k,\mu)$ are transfer functions that can absorb a part of the higher-order nonlinearities as well as counterterms. The field $\delta_Z^s$ refers to the redshift-space Zel'dovich density, \begin{align} \delta_Z^s(\vk,\hat {\boldsymbol{z}}) = \int d^3 {\boldsymbol{q}}\, e^{-i{\boldsymbol{k}}\cdot({\boldsymbol{q}}+R^{[1]} \vec \psi_1({\boldsymbol{q}}))}\;. \end{align} For easier interpretation of the transfer functions, we orthogonalize $\tilde\delta_2$ with respect to $\tilde\delta_1$, and $\tilde{\mathcal G}_2$ with respect to $\tilde\delta_1$ and $\tilde\delta_2$, in every $(k,\mu)$ bin using Gram-Schmidt as in \cite{Marcel1811}.
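For a single orthogonalized operator, the per-bin least-squares determination of a transfer function reduces to $\beta=\langle\mathrm{Re}[\delta_\text{sim}\,\tilde{\mathcal O}^*]\rangle/\langle|\tilde{\mathcal O}|^2\rangle$ in each $(k,\mu)$ bin; after the Gram-Schmidt orthogonalization above, each operator can be projected out independently. A schematic NumPy version with toy data (not the analysis code):

```python
import numpy as np

def fit_transfer(delta_sim_k, op_k, bin_index, n_bins):
    """Per-bin least-squares transfer function for a single
    (orthogonalized) operator: beta = <Re[delta_sim O*]> / <|O|^2>,
    which minimizes <|delta_sim - beta O|^2> over real beta in each bin."""
    num = np.bincount(bin_index,
                      weights=np.real(delta_sim_k * np.conj(op_k)),
                      minlength=n_bins)
    den = np.bincount(bin_index, weights=np.abs(op_k) ** 2,
                      minlength=n_bins)
    return np.where(den > 0, num / np.where(den > 0, den, 1.0), 0.0)

# Toy check: if delta_sim = 2 * O mode by mode, every bin recovers beta = 2.
rng = np.random.default_rng(1)
op = rng.normal(size=200) + 1j * rng.normal(size=200)
bins = np.arange(200) % 5  # flattened (k, mu) bin index per Fourier mode
beta = fit_transfer(2.0 * op, op, bins, 5)
```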
\vskip 4pt The model \eq{deltagModel} is rather similar to the real-space model of \cite{Marcel1811}. The only differences that are not absorbed by transfer functions are the RSD displacement $R^{[1]}\vec \psi_1$ in the exponent and the additional term proportional to $\mathcal G_2^\parallel$. Additionally, transfer functions now depend on $k$ as well as $\mu$. In the same way as in real space \cite{Marcel1811}, we can decide to rewrite the Zel'dovich RSD density $\delta_Z^s$ in terms of the shifted bias operators. For simplicity, however, we will keep the full Zel'dovich density without rewriting it in this way. We will include the shifted cubic operator $\delta^3(\vq)$ in the model below, because it is simple to add; results are very similar without the cubic term. \vskip 4pt \subsection{Evaluating the galaxy overdensity model in a 3D box} To evaluate the model \eq{deltagModel} in a 3D box we proceed similarly to \cite{Marcel1811}. We first draw a realization of the linear density in the 3D box. We then shift the uniform density $1$, the linear Lagrangian-space density $\delta_1(\vq)$, and the second-order fields $\delta^2(\vq)-\sigma^2$, $\mathcal G_2(\vq)$ and $\mathcal G_2^\parallel(\vq)$ by the linear RSD displacement $R^{[1]}\vec \psi_1$, and paint the result to a regular grid in Eulerian space. Given a simulated redshift-space galaxy density $\delta_\text{sim}$, the transfer functions $\beta_n(k,\mu)$ are then computed using ordinary linear least-squares regression in every $(k,\mu)$ bin. This minimizes the squared model error $P_\text{err}(k,\mu)\propto\langle|\delta_\text{sim}(\vk)-\delta_g^s(\vk)|^2\rangle$.\footnote{Having a model with small model error is useful because it has larger signal-to-noise than models with larger model error. 
Additionally, it is important to obtain a model for which the parametric form of the model error power spectrum is known so that the total power spectrum, composed of model and noise power, can be predicted; we will test this in the next subsection.} These transfer functions are smooth functions of $k$ and $\mu$. We will replace them with a simple 7-parameter fit as described at the end of the next subsection and in the appendix. \vskip 4pt \subsection{Comparison with simulations} To test the bias model \eq{deltagModel} for the galaxy density we compare it against the same N-body simulation galaxies described in \secref{VelSimComparison} above, serving as a proxy for SDSS BOSS CMASS galaxies. We implement RSD by moving galaxies along the line-of-sight according to the subhalo velocity computed with \textsf{Rockstar}. \vskip 4pt \fig{deltaSlice} shows a 2D slice of the resulting simulated galaxy density, the bias model \eq{deltagModel}, and the residual between the two. The bias model captures the galaxy density well on large scales, but tends to underpredict it in highly overdense regions where the bias model is not applicable. \vskip 4pt \begin{figure}[ht] \centering \includegraphics[width=0.99\textwidth]{plots/slices_v2_9e7c668.pdf} \caption{Galaxy overdensity $\delta_g$ in a 2D slice around the largest halo in the N-body simulation (red blob in the center; $\log M[h^{-1}\mathrm{M}_\odot]=15.2$). From left to right, the panels show the real-space simulation, real-space model, redshift-space simulation and redshift-space model. In the center there is a Finger-of-God effect, elongating the cluster along the line-of-sight (magenta arrow); this is not captured by the model. The structure above the cluster (magenta box) moves towards the cluster by about $8\ h^{-1}\text{Mpc}$; this large-scale flow is captured by the model. 
More typical redshift-space displacements are 3-4$\ h^{-1}\text{Mpc}$ and difficult to see by eye, but the model matches the simulation well on large scales. In each panel, the density is smoothed with a $2\ h^{-1}\text{Mpc}$ 3D Gaussian, and the dimension of each slice is $200\times 1 \times 500\;\ h^{-1}\text{Mpc}$. } \label{fig:deltaSlice} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=0.49\textwidth]{{plots/Ptot_Perr_RSD_v2_fit_delta_gPTC_fittedTk1_RunB1c_80c43a5}} \includegraphics[width=0.49\textwidth]{{plots/Perr_RSD_v2_fit_delta_gPTC_fittedTk1_RunB1c_152d2c0}} \caption{\emph{Left panel:} Power spectrum of the simulated galaxy sample (dashed), the bias model (dotted), and the residual error (solid), and error power spectrum minus a constant fit at low $k$ (thin solid). The power spectra are measured in 5 bins in the cosine $\mu$ with respect to the line of sight, $\mu=0-0.2, 0.2-0.4$, etc (colors). \emph{Right panel:} Zoom-in of the error power spectrum, $P_\text{err}(k,\mu)\propto\langle|\delta_\text{sim}(\vk)-\delta_g^s(\vk)|^2\rangle$. This error power spectrum is well fit by \eqq{PerrFit} (black lines). In both panels, the bias model \eq{deltagModel} uses transfer functions fitted with seven parameters as shown in \fig{BOSSTkFit} below. The simulated galaxies are generated by populating \textsf{Rockstar} subhalos in six independent N-body simulations with $1536^3$ DM particles in $L=1500\ h^{-1}\text{Mpc}$ cubic boxes evolved to redshift $z=0.6$; the subhalos are populated with galaxies to represent SDSS BOSS CMASS galaxies following \cite{Nishimichi:2020tvu}, with a soft lower mass cutoff of $\log_{10}M_\mathrm{min}[h^{-1}\text{M}_\odot]=12.97$. } \label{fig:BOSSPerrFittedTk} \end{figure} To investigate the model performance more quantitatively, we go to Fourier space and compute the squared model error $P_\text{err}(k,\mu)\propto\langle|\delta_\text{sim}(\vk)-\delta_g^s(\vk)|^2\rangle$. This is shown in \fig{BOSSPerrFittedTk}. 
To include larger scales and reduce scatter in the plots, we increased the volume of the simulation to $L=1500\ h^{-1}\text{Mpc}$ per side, still using $1536^3$ DM particles, and averaged over six realizations. The resulting error power spectrum is constant on large scales, and exhibits a $k^2\mu^2$ correction that becomes important at $k\simeq 0.1\; h\text{Mpc}^{-1}$. This is consistent with the stochastic noise power spectrum derived in \cite{Perko:2016puo}. Indeed, we find that at $k\leq 0.3\; h\text{Mpc}^{-1}$ the error power spectrum is well approximated by \begin{align} \label{eq:PerrFit} P_\text{err}(k,\mu) = \frac{1}{\bar n_g} \left(c_{\epsilon,1} + c_{\epsilon,3}f \mu^2\left(\frac{k}{k_\text{M}}\right)^2\right) \;, \end{align} where \begin{align} c_{\epsilon,1} &\;=\; 0.599\;, \nonumber\\ c_{\epsilon,3} &\;=\; 2.45\, \left(\frac{k_\text{M}}{1\,h\text{Mpc}^{-1}}\right)^2\;. \end{align} The number density of simulated galaxies is $\bar n_g=4.25\times 10^{-4}\;h^3\text{Mpc}^{-3}$ and the logarithmic growth rate is $f=0.786$ at redshift $z=0.6$. The amplitude of the noise is compatible with real space results for similar halo number density~\cite{Marcel1811}. The amplitude of the scale dependent part of the noise is related to the stochastic velocity dispersion and it is consistent with measurements in the previous section and values of the counter terms measured in large-volume simulations~\cite{Nishimichi:2020tvu} and real data~\cite{DAmico:2019fhj,Ivanov:2019pdj,Philcox:2020vvt}. From \fig{BOSSPerrFittedTk} it is clear that both stochastic noise terms are detected with high significance in the error power spectrum. This demonstrates that the field-level model for the galaxy density in redshift space is accurate with errors as expected theoretically up to $k\simeq 0.3\; h\text{Mpc}^{-1}$, for the galaxies extracted from our simulations. 
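For reference, evaluating the fitted noise model with the quoted best-fit numbers is straightforward ($k$ in $h\,\text{Mpc}^{-1}$, $k_\text{M}=1\,h\,\text{Mpc}^{-1}$; values copied from the text above):

```python
import numpy as np

# Best-fit values quoted in the text for the CMASS-like sample at z = 0.6.
nbar = 4.25e-4    # galaxy number density in h^3 Mpc^-3
f_growth = 0.786  # logarithmic growth rate
c_eps1 = 0.599
c_eps3 = 2.45     # for k_M = 1 h/Mpc

def p_err(k, mu):
    """Fitted error power spectrum: white noise plus a k^2 mu^2 term."""
    return (c_eps1 + c_eps3 * f_growth * mu**2 * k**2) / nbar

# The noise is white for mu = 0; the k^2 mu^2 correction becomes
# important around k ~ 0.1 h/Mpc along the line of sight.
white = c_eps1 / nbar
```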
Since agreement at the field level is a much more stringent test than comparing power spectra only, we can see these results as yet another nontrivial check that the one-loop power spectrum in redshift space is indeed an adequate model to describe galaxy clustering on large scales. \vskip 4pt We do not find evidence for an isotropic $k^2$ correction to the error power spectrum, although this term can be present theoretically \cite{Perko:2016puo}. This is consistent with the noise of halos in real space, and can be understood by considering the scale corresponding to the typical halo size \cite{Marcel1811}. Such a $k^2$ correction may be present for other tracers, especially if they probe larger halos, and on smaller scales. \vskip 4pt In \app{BOSSFreeTk} we show the transfer functions obtained by minimizing the error power spectrum in every $(k,\mu)$ bin, and fits to them using a 7-parameter fitting function (4 parameters to describe the scale dependence of $\beta_1$, and 3 parameters to fit the other transfer functions with constants). The error power spectrum shown in \fig{BOSSPerrFittedTk} assumes these fitted transfer functions, as they are smoother and therefore more realistic (although the error power spectrum for fully free transfer functions looks similar, see \fig{PerrBOSSFreeTk} in the appendix). Instead of the fitting functions, one could use perturbation theory to predict the functional form of the transfer functions and then fit for the bias parameters, as done in real space in \cite{Marcel1811}; we leave this to future work. \vskip 4pt In \app{DESI} we show results when including lower-mass subhalos, which will be observed by DESI. The error power spectrum is again well described by \eqq{PerrFit}, of course with different values for the fitting parameters.
\vskip 4pt \MyFloatBarrier \section{Conclusions} Previous work modeled the overdensity realization of dark matter particles and dark matter halos in real space \cite{Baldauf:2015zga,Taruya:2018jtk,Marcel1811}. Here, we generalized this approach to galaxies in redshift space. To model the redshift space distortions caused by the peculiar velocities of galaxies, we shift galaxy bias operators by an additional displacement along the line of sight predicted by the first-order Lagrangian-space velocity. We then calibrate transfer functions to obtain the best-possible deterministic large-scale model using these shifted galaxy bias operators. The resulting model captures the redshift-space galaxy overdensity in N-body simulations well on perturbative scales. \vskip 4pt We computed the stochastic noise of the model by subtracting the model prediction from the simulated galaxy overdensity. The power spectrum of this noise is white on large scales, $k\lesssim 0.1\; h\text{Mpc}^{-1}$, for the BOSS CMASS-like and the lower-mass mock galaxies we considered at redshift $z=0.6$. On smaller scales, the noise becomes anisotropic and scale-dependent. It increases along the line of sight and towards smaller scales. This is expected from the noise of galaxy velocities that enters the redshift space distortions along the line of sight. We find that for mildly nonlinear scales, $k\lesssim 0.3\; h\text{Mpc}^{-1}$, the anisotropic and scale-dependent correction to the white noise power spectrum is well fit by a $k^2\mu^2$ term, where $\mu$ is the cosine with the line of sight; see \eqq{PerrFit}. This parametric form of the stochastic noise power spectrum agrees with the theoretical expectation \cite{Perko:2016puo}. We do not find evidence for an additional $k^2$ term, but this may be specific to the mock galaxies that we used and may be present for other galaxy samples.
These results provide new important evidence that the one-loop power spectrum (including all relevant counter terms) with the anisotropic scale-dependent noise is a good model for galaxy clustering in redshift space on large scales and yet another justification for using it in analyzing the real data~\cite{DAmico:2019fhj,Ivanov:2019pdj,Philcox:2020vvt}. Furthermore, the field level measurements of the difference between simulations and the model provide the most realistic estimates of the so-called theoretical error, and they can be used instead of templates based on perturbation theory or calibration to simulations at the power spectrum level~\cite{Chudaykin:2020hbf}. \vskip 4pt We also studied the continuous velocity field as predicted by perturbation theory and compared it against simulated galaxy velocities. We found that roughly 90\% of the simulated galaxies have a velocity that matches the large-scale flows predicted by perturbation theory, while the remaining 10\% of galaxies have larger velocity errors that are responsible for half of the rms velocity error. Identifying the galaxies with large velocity errors and removing them in observational settings could be beneficial for parameter inference because it could allow modeling smaller scales with a modest increase in shot noise; however it is challenging to identify these galaxies with large velocity errors observationally. \vskip 4pt One aspect of our work that could be improved is that we fitted transfer functions that enter the bias model with simple smooth functions in $k$ and $\mu$ rather than modeling their parametric form from first principles. Doing the latter would be an important next step to further check the properties of the noise. Another limitation is that galaxies in galaxy surveys may have different clustering properties and velocities than the simulated mock galaxies that we considered (subhalos in dark matter-only N-body simulations). 
It is also possible to extend the galaxy bias model to higher order in perturbation theory, both in terms of bias operators and in terms of the order used for the displacement field. \vskip 8pt \subsection*{Acknowledgements} It is a pleasure to thank G.~Cabass, E.~Castorina, A.~Moradinezhad Dizgah, T.~Nishimichi, M.~Takada and Z.~Vlah for helpful discussions. M.S.~thanks IPMU at the University of Tokyo for the hospitality and useful discussions. Simulations and numerical analyses were performed on the Helios cluster at IAS and used the public software packages \textsc{MP-Gadget} \cite{yu_feng_2018_1451799}, \textsc{nbodykit} \cite{Hand:2017pqn} and \textsc{Rockstar} \cite{Rockstar}. M.S.~acknowledges support from the Corning Glass Works Fellowship and the National Science Foundation. M.I.~is partially supported by the Simons Foundation's \textit{Origins of the Universe} program. O.P.~acknowledges funding from the WFIRST program through NNG26PJ30C and NNN12AA01C. \vskip 4pt \MyFloatBarrier \bibliographystyle{utphys}
\section{COMSOL implementation} \label{S:6 (COMSOL impl)} \subsection{Overview} \label{S:3-1 overview} COMSOL Multiphysics is a general-purpose simulation software for multi-field problems that is based on the finite element method. In this software, multiple physics can be combined by employing the available built-in interfaces or by implementing user-defined physics \cite{multiphysics2019introduction}. The Solid Mechanics module offers the general formulation for the analysis of solid bodies based on the principles of continuum mechanics \cite{multiphysics2019structural}. Thus, in this approach, the discontinuities in the solution domain must be explicitly modeled and the generated mesh must conform to the interface boundaries. To overcome this restriction, the XFEM implementation proposed in this study takes advantage of six distinct Solid Mechanics modules, in accordance with the weak form equations developed in section \ref{S:2-1 govenrning&discretise}; i.e., one for the standard part of the displacement field $\mathbf{u}^{\text{cont}}$, one for the discontinuous enriched component $\mathbf{u}^{\text{disc}}$, and four other modules to account for the asymptotic tip enrichment part $\mathbf{u}^{\text{tip}}$. This is made possible by exploiting COMSOL's ability to provide access to the definitions of the field variables (e.g., the stress and strain fields). For the sake of brevity, the Solid Mechanics module that deals with the standard displacement field is referred to as SMstd, the discontinuous enriched module is denoted by SMenr, and the crack tip enriched modules are indicated by SMtip in the rest of the work. This capability of COMSOL can be exploited in the same manner for XFEM developments in a wide range of problems involving several enrichment functions and complex coupled physics, such as thermo-hydro-mechanical coupling analysis. The XFEM implementation of multiple physics in COMSOL could be the subject of future studies. 
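The three-part displacement decomposition that motivates the use of six modules can be illustrated with a short sketch. The following Python analog (illustrative only, not the COMSOL implementation) evaluates the enriched displacement approximation using a sign-type Heaviside function and the standard set of four crack-tip branch functions commonly used in LEFM; the shifting of the Heaviside term, introduced later in the formulation, is omitted here for brevity.

```python
import numpy as np

def heaviside(phi):
    """Sign-type Heaviside of the signed distance: +1 above the crack, -1 below."""
    return np.where(phi >= 0.0, 1.0, -1.0)

def tip_branch_functions(r, theta):
    """The four standard LEFM crack-tip branch functions F_1..F_4
    evaluated in crack-tip polar coordinates (r, theta)."""
    sr = np.sqrt(r)
    return np.array([sr * np.sin(theta / 2.0),
                     sr * np.cos(theta / 2.0),
                     sr * np.sin(theta / 2.0) * np.sin(theta),
                     sr * np.cos(theta / 2.0) * np.sin(theta)])

def xfem_displacement(u_cont, u_disc, u_tip, phi, r, theta):
    """u = u_cont + H(phi) * u_disc + sum_i F_i(r, theta) * u_tip_i  (2D vectors).
    u_tip holds the four tip degrees of freedom as a (4, 2) array."""
    F = tip_branch_functions(r, theta)                 # shape (4,)
    return u_cont + heaviside(phi) * u_disc + F @ u_tip
```

In the COMSOL setting, each of the three terms above is carried by its own Solid Mechanics module (one for the continuous part, one for the Heaviside-enriched part, and four for the tip degrees of freedom).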
COMSOL Multiphysics features several internal variables and functions (e.g., path/domain integration tools) which are critical to the successful execution of the proposed XFEM analysis. These options are primarily utilized to realize the enrichment concept as well as to perform the pre-processing (e.g., level set calculation) and post-processing (e.g., SIF calculation and crack propagation) functions. In this respect, the LiveLink for MATLAB feature offers the developer excellent flexibility to implement the required subroutines from the ground up \cite{multiphysics2018matlab}. \subsection{Pre-processing of the crack geometry} \label{S:3-2 Pre-process} The pre-processing is performed for the identification of the crack geometry prior to the XFEM analysis. To this end, a conventional finite element mesh is generated in COMSOL. The mesh is exported as a text file using the \say{*.mphtxt} format, in which the nodal coordinates and element connectivities are reported. The file is then imported into a MATLAB script, named \say{preprocess.m}, where the level set function is defined. In this code, a geometric search is carried out on all elements' nodes and edges to determine their position with respect to the existing crack interfaces. The output of the pre-processing phase is the list of all nodes and elements whose support domain is bisected by the crack interface or contains the crack tips. This is utilized to initialize the XFEM analysis as well as to determine the enriched zone $\Omega_{h}$, over which the SMenr and SMtip modules are activated. The pre-processing procedure is repeated after each geometrical update of the crack interface. This is achieved via two MATLAB functions named \say{phi.m} and \say{interpol.m}. The former calculates the Heaviside function values for any arbitrary point of interest (e.g., a Gauss point), which are used later to modify the strain field in the SMenr module. 
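Although the source of \say{phi.m} is not listed here, its role (returning the level set value and the corresponding Heaviside sign for an arbitrary query point) can be sketched in a few lines. The following Python analog is an illustrative assumption: it represents the crack as a straight segment and computes the signed distance of a point to that segment together with the resulting Heaviside value.

```python
import numpy as np

def signed_distance(x, a, b):
    """Level set value at point x for a straight crack segment from a to b:
    magnitude is the distance to the segment, and the sign is given by
    the side of the crack line (segment normal)."""
    x, a, b = np.asarray(x, float), np.asarray(a, float), np.asarray(b, float)
    t = b - a                                          # tangent vector
    n = np.array([-t[1], t[0]]) / np.linalg.norm(t)    # unit normal
    s = np.clip(np.dot(x - a, t) / np.dot(t, t), 0.0, 1.0)
    dist = np.linalg.norm(x - (a + s * t))             # distance to the segment
    side = 1.0 if np.dot(x - a, n) >= 0.0 else -1.0    # which side of the crack
    return side * dist

def heaviside_value(phi):
    """Sign-type Heaviside of the level set, as used to modify the enriched strain field."""
    return 1.0 if phi >= 0.0 else -1.0
```

For a polyline crack, the same computation would be repeated over all segments and the minimum-magnitude value retained; the actual \say{phi.m} may differ in these details.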
The \say{interpol.m} function is utilized to detect the enriched zone by employing an interpolation function. Since COMSOL does not provide direct access to the elemental and nodal data of the model at any stage of the analysis, the interpolation function is required to infer such data geometrically from the original mesh. This function is set to be zero across the domain except at enriched nodes, for which it is equal to unity. For any point of interest, the output variable, called $\psi$, is interpolated using the MATLAB built-in function \textit{scatteredInterpolant} inside the \say{interpol.m} function. Fig. \ref{fig:psi} schematically represents the definition of $\psi$ over the domain. At the end of the pre-processing phase, all nodal signs, element connectivities, crack tip coordinates and interpolation values of nodal points are saved in separate MAT-files. This facilitates access to the data at any time during the course of the analysis, for which a MATLAB function is called. It is worth noting that the number of defined MATLAB functions must match the number of variables required throughout the analysis. Furthermore, for each function being called, all input and output vectors must have identical sizes \cite{multiphysics2018matlab}. \subsection{Module setup: enrichment} \label{S:3-2 module} The six Solid Mechanics modules that represent the standard and enriched fields (i.e., SMstd, SMenr and SMtip) are established based on the weak formulation presented in section \ref{S:2-1 govenrning&discretise}. In SMstd, the continuous part of the strain field is predefined as $\mathbf{\varepsilon }^{\text{cont}} = \nabla^{\text{s}}\mathbf{u }^{\text{cont}}$, with no modification required. 
In contrast, to implement the strong discontinuities in the SMenr module, the default definitions of the displacement gradient and the associated strain field are modified by incorporating the enrichment function $H_{\Gamma _{d}}(\varphi (\mathbf{x}))$, which is obtained from the MATLAB function \say{phi.m}, as \begin{equation} \label{eq:epsenrimplement} \mathbf{\varepsilon }^{\text{disc}}=H_{\Gamma _{d}}(\varphi (\mathbf{x}))\,\nabla^{\text{s}}\mathbf{u }^{\text{disc}} \end{equation} where $\mathbf{\varepsilon }^{\text{disc}}$ represents the discontinuous enriched part of the strain tensor. This modification is performed by enabling the \say{Equation View} option in the \say{Model Builder} panel of COMSOL, which provides access to the definition of variables associated with the corresponding module. Similarly, the strain contribution from the asymptotic tip enrichments can be expressed as \begin{equation} \label{eq:epstipimplement} \mathbf{\varepsilon }^{\text{tip}}=\nabla^{\text{s}}(\sum_{i=1}^{4}F_{\rm{i}}(\mathbf{x})\mathbf{u}_{\rm{i}}^{\text{tip}}) \end{equation} In addition to the strain field, the definition of the stress field in all Solid Mechanics modules must be modified so as to ensure that a unique stress field is reproduced over the whole solution domain. Consequently, the stress field $\mathbf{\sigma }^{\text{total}}$ (i.e., the second Piola-Kirchhoff stress in COMSOL) in SMstd is modified as \begin{equation} \label{eq:sigmacomsol} \mathbf{\sigma }^{\text{total}}=\mathbf{D}:({\mathbf{\varepsilon }^{\text{cont}}}+{\mathbf{\varepsilon }^{\text{disc}}}+{\mathbf{\varepsilon }^{\text{tip}}}) \end{equation} Note that the stress fields in the SMenr and SMtip modules are also set identically according to the relation in (\ref{eq:sigmacomsol}). As described in section \ref{S:3-2 Pre-process}, SMenr and SMtip must be defined exclusively on the enriched zone of the domain, i.e., $\Omega_h$. However, COMSOL restricts access to the nodal data. 
To circumvent this difficulty, in the approach presented herein, $\Omega_h$ is defined as a subset of $\Omega$ over which the predefined enriched degrees of freedom are not restrained; the SMenr and SMtip modules are initialized based on the same geometry and background mesh as those of SMstd. This is effected by selecting \say{Prescribed Displacement} from the \say{Domain Constraint} option of the module, which enables the imposition of a predefined displacement field on the domain. For this purpose, the field variable $\psi$, introduced in section \ref{S:3-2 Pre-process} (i.e., the output of the \say{interpol.m} MATLAB function), is utilized to prescribe the displacements as \begin{equation} \label{eq:interpolconstraint} {{\bf{u}}^{{\rm{disc}}}} = \left\{ {\begin{array}{*{20}{l}} {{{\bf{u}}^{{\rm{disc}}}}}&{{\rm{ where\: } } \psi {\rm{ = 1}}}\\ 0&{{\rm{otherwise}}} \end{array}} \right. \end{equation} \begin{figure}[!t] \centering\includegraphics[width=0.55\linewidth]{Fig21.pdf} \caption{Detection of fractured elements in COMSOL using interpolation of the $\psi$ field variable.} \label{fig:psi} \end{figure} The constraint in (\ref{eq:interpolconstraint}) eliminates the extra DOFs that lie outside of the enriched zone in the SMenr module. In this way, not only are the computational costs reduced significantly, but the stiffness matrix singularity due to the presence of zero values in SMenr and SMtip is also avoided; i.e., the singularity due to the \say{zero} extension of the enrichment function over the un-enriched zones of the domain. \subsection{Stress intensity factors} \label{S:3-3 sifs} In this work, the stress intensity factors are calculated by employing COMSOL's internal variables in conjunction with the built-in mathematical operators and functions. 
The list of required internal variables is defined in a COMSOL script called \say{interaction integral} in the \say{Definitions} section, which includes the crack-tip coordinates, normal vector components, displacement, stress and strain derivatives, the interaction strain energy density function, the interaction integral and the SIF values. The equivalent domain form of the interaction integral is typically used for the calculation of the SIFs in the literature \cite{anderson2017fracture}; however, the path integral form (i.e., Eq. \ref{eq:I12}) is preferred here for the sake of simplicity and the availability of the built-in circular path integral operator in COMSOL, called \textit{circint}. In this respect, the \textit{at2} operator is also used to set the center of the circular path to the crack tips, around which the interaction integral is calculated. Alternatively, the \textit{diskint} operator could be used for the calculation of the domain form of the interaction integral over a circular area surrounding the crack-tip. \subsection{Numerical integration} \label{S:3-4 (integration)} In the classical FEM, piecewise continuous polynomials are used to discretize the displacement field, and these are integrated accurately by relatively low-order Gauss integration rules. However, in XFEM, due to the existence of singularities and/or discontinuities in the displacement field and its derivatives, a more precise integration strategy is required for the enriched part of the displacement field. In this respect, raising the order of integration by increasing the number of Gauss points, triangular/rectangular partitioning of the elements, and rectangular sub-gridding are among the most frequently used methods in the literature \cite{mohammadi2008extended, khoei2014extended}. The last two methods are not available in COMSOL and therefore, the first approach is adopted in this study for the SMenr and SMtip modules. 
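The effect of raising the integration order for a discontinuous integrand can be demonstrated with a one-dimensional sketch. The snippet below is illustrative only (the jump position and the integrand are not taken from this work): it integrates a Heaviside-enriched function over the reference element with Gauss-Legendre rules of increasing order, showing that a low-order rule misses the jump badly while a higher-order rule reduces the error.

```python
import numpy as np

def gauss_integrate(f, n_points):
    """Integrate f over the reference element [-1, 1] with an n-point Gauss-Legendre rule."""
    x, w = np.polynomial.legendre.leggauss(n_points)
    return float(np.dot(w, f(x)))

# Heaviside-type enriched integrand with a jump at x = 0.3 (illustrative crack position)
f = lambda x: np.where(x < 0.3, 1.0, -1.0) * (1.0 + x**2)

# Exact value: integral of (1 + x^2) on [-1, 0.3] minus the integral on [0.3, 1] = 0.618
exact = 0.618

err_low = abs(gauss_integrate(f, 2) - exact)    # 2-point rule: cannot resolve the jump
err_high = abs(gauss_integrate(f, 50) - exact)  # many more points: error shrinks
```

Since the partitioning and sub-gridding options are unavailable in COMSOL, raising the number of Gauss points in this fashion is the practical route; convergence in the presence of a jump is slow, which is why a substantially elevated order is needed in the enriched modules.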
\textbf{Remark 1.} In order to avoid ill-conditioned and/or singular stiffness matrices in XFEM, it is necessary to ensure that there exists at least a minimum number of Gauss points on either side of the interface in the cracked elements. Hence, it is required to apply a criterion on the size of the support domain of each nodal point corresponding to a particular integration order (Fig. \ref{fig:effective area}a). In this respect, the nodes for which the relative support domain, i.e., the ratio $A^{+}/(A^{+}+A^{-})$ or $A^{-}/(A^{+}+A^{-})$, is smaller than a predefined tolerance $\delta$ are not enriched \cite{mohammadi2008extended} (see Fig. \ref{fig:effective area}b). \begin{figure}[!t] \centering\includegraphics[width=0.7\linewidth]{Fig3-1modified.pdf} \caption{ \textbf{a)} Definition of the support domain of a node for the enrichment criterion, \textbf{b)} enrichment modification: the criterion is met and some of the nodes are not enriched; this can happen when a crack is very close to a node point.} \label{fig:effective area} \end{figure} \subsection{Analysis and sequencing of processes} \label{S:3-5 (sequencing)} The numerical solution of the discretized form of the governing equations, presented in the preceding sections, is obtained by using the \say{Study} node in COMSOL. There are several study options in the software, such as \say{Stationary} and \say{Time Dependent}, which correspond to quasi-static and dynamic analysis strategies, respectively. In the quasi-static analysis of XFEM problems, the prescribed load (or displacement) is applied incrementally; at each step of loading, the crack propagation criterion is checked, and a predefined increment is added to the crack interface if the criterion is met. In LEFM, the problem is linear both geometrically and from the material behaviour point of view. 
In order to avoid the issues related to data transfer following each crack propagation step, the problem is typically solved from the beginning, albeit with an updated configuration of the crack geometry (e.g., see \cite{khoei2014extended,BROUMAND201397}). This process can best be handled by employing the \say{Auxiliary Sweep} feature in the \say{Extended Study} section of the \say{Stationary} node. This feature recasts the problem as a sequence of solutions over a selection of values of the load (or displacement), which is taken as the sweep parameter. In the case of a \say{Time Dependent} study, the problem is inherently history dependent and the sweep option is not applicable. In order to retain the robustness of the solution during the crack propagation process, the crack increment is kept as small as possible such that the stress redistribution due to the generation of new crack surfaces can be handled by the nonlinear Newton-Raphson solver of the software. \textbf{Remark 2.} As a result of crack propagation, for both solution strategies, the enrichment zone evolves and new nodes need to be enriched. Subsequently, a series of modifications must be applied to the SMenr and SMtip modules, their variables and zones of influence. However, this task is not performed automatically in COMSOL; instead, the initial geometry is adopted throughout the analysis, disregarding any changes in the domain configuration due to crack propagation. In order to make COMSOL update the geometry as well as the enriched region, the sweep parameter $sp$ is used in the definition of the variables and constraints that alter due to crack evolution, including the displacement constraint that is imposed by \say{Prescribed Displacement} (Eq. \ref{eq:interpolconstraint}) and the modified strain definitions (Eq. \ref{eq:epsenrimplement} and Eq. \ref{eq:epstipimplement}) in the SMenr and SMtip modules. 
This is simply achieved by adding a fictitious term $\lambda \cdot sp$ to these expressions, where $\lambda$ is assigned a very small value (i.e., $\simeq 0$) such that it does not introduce any notable error into the solution. \textbf{Remark 3.} XFEM modeling of cracks is, in essence, a three-stage sequential analysis which consists of pre-processing and level-set update, solution of the governing equations, and post-processing and crack propagation stages. This requires that, in addition to the pre-processing task that is needed to initialize the problem, certain processes must be executed following each crack increment. This includes retrieving the updated crack details (e.g., crack tip locations and crack body orientations) and the corresponding enriched zones. COMSOL does not handle such sequencing automatically and therefore, this needs to be effected by the developer. To this end, the \say{Global Variable Probe} tool is used after each step of the analysis to monitor the state of the field quantities of the domain, and to store the history variables that were updated in the previous step. Several MATLAB functions are called after each step of the analysis, which include: (i) the \say{readcrack.m} and \say{lastangle.m} functions, which provide the previous crack-tip locations and crack increment angles used in the calculation of the SIFs, and (ii) the \say{crackupdate.m} function, which updates the crack configuration according to the calculated SIFs in conjunction with the crack propagation criteria, and modifies the field variable $\psi$ that is used to determine the enriched zone for the next step of the solution. The overall procedure of the XFEM implementation in COMSOL is presented in Algorithm \ref{alg:localmins}. \begin{algorithm}[t] \caption{Step by step implementation of XFEM in COMSOL.} \label{alg:localmins} \begin{algorithmic} \STATE 1. 
Global Definitions \STATE \hspace{10mm} Define all constants (material, load, etc.) \STATE \hspace{10mm} Define MATLAB functions (\say{phi.m}, \say{interpol.m}, etc.) \STATE 2. Create Geometry (2D/3D) \STATE 3. Local Variables definition \STATE \hspace{10mm} Define interaction integral equations \STATE \hspace{10mm} Assign Global Variable Probe; call MATLAB functions for crack update, crack tip and angle \STATE 4. Select physical model (Modules) \STATE \hspace{10mm} Standard Solid Mechanics (SMstd) \STATE \hspace{17mm} Select material model \STATE \hspace{17mm} Select shape function \STATE \hspace{17mm} Modify stress definitions (Eq. \ref{eq:sigmacomsol}) \STATE \hspace{10mm} Discontinuous enriched Solid Mechanics (SMenr) \STATE \hspace{17mm} Select material model \STATE \hspace{17mm} Select shape function \STATE \hspace{17mm} Modify strain definitions (Eq. \ref{eq:epsenrimplement}) \STATE \hspace{17mm} Modify stress definitions (Eq. \ref{eq:sigmacomsol}) \STATE \hspace{17mm} Apply field variable $\psi$ as a constraint using the domain Prescribed Displacement option \STATE \hspace{10mm} Crack tip enriched Solid Mechanics (SMtip) \STATE \hspace{17mm} Select material model \STATE \hspace{17mm} Select shape function \STATE \hspace{17mm} Modify strain definitions (Eq. \ref{eq:epstipimplement}) \STATE \hspace{17mm} Modify stress definitions (Eq. \ref{eq:sigmacomsol}) \STATE 5. Assign initial and boundary conditions \STATE 6. Discretization and mesh generation \STATE 7. Specify Study type \STATE \hspace{10mm} Select Parametric Sweep analysis \STATE 8. Post-processing and visualization \end{algorithmic} \end{algorithm} \section{Highlights} \begin{itemize} \item XFEM implementation in COMSOL Multiphysics is presented for the first time.\\ \item Single and multiple stationary and propagating cracks are studied in 2D/3D solids.\\ \item The proposed straightforward implementation procedure is extendable to multi-field problems. 
\end{itemize} \section{XFEM formulation} \label{S:2 (Formulation)} In essence, XFEM decouples interfaces, such as cracks or material discontinuities, from the background mesh by enriching the finite element space with special enrichment functions, based on the partition of unity method \cite{PUM1997}. Therefore, it eliminates the remeshing step which is required in the classical finite element modeling of moving interfaces. In this method, the handling of the crack interface topology and its evolution is performed by using nodal distances to the corresponding projection points on the interface \cite{moes1999finite}. Alternatively, the Level Set Method (LSM) can be employed, for which the extension to higher dimensions and the coupling with XFEM are straightforward \cite{khoei2014extended}. The special treatment of the Galerkin finite element formulation, which is elaborated in the following section, facilitates the separation of the weak forms of the standard and enriched parts of the governing equations. The proposed formulation is inspired by the work of Borja et al. \cite{borja2008assumed}, which is amenable to the modeling structure of COMSOL Multiphysics. \subsection{Governing equations and XFEM discretization} \label{S:2-1 govenrning&discretise} As shown in Fig. \ref{fig:potato}, consider a cracked body $\Omega$ that is bounded by $\Gamma =\Gamma _{u}\cup \Gamma _{t}$ and the crack surfaces $\Gamma_d$, with $\Gamma _{u}\cap \Gamma _{t}=\varnothing $. 
The equation of motion of the domain can be expressed as \begin{figure}[!t] \centering\includegraphics[width=0.65\linewidth]{Fig1-1.pdf} \caption{Schematics of the problem domain and boundaries of fractured media.} \label{fig:potato} \end{figure} \begin{equation} \label{eq:equilibrium} \begin{matrix} \nabla \cdot \mathbf{\sigma} - \rho \ddot{\mathbf{u}} +\rho \mathbf{b}=0 & \text{in }\Omega \end{matrix} \end{equation} subjected to the following boundary and initial conditions, \begin{equation} \label{eq:equilibrium BC} \begin{matrix} \mathbf{u}=\mathbf{\bar{u}} & \text{on }\Gamma _{u} \\ \mathbf{\sigma}\cdot \mathbf{n}_{\Gamma }=\mathbf{\bar{t}}& \text{on }\Gamma _{t} \\ \mathbf{\sigma}\cdot \mathbf{n}_{\Gamma_{\text{d}} }=\mathbf{\bar{t}}_{\text{d}}& \text{on }\Gamma _{\text{d}}\\ \mathbf{u}=\mathbf{u}_0 & \text{in }\Omega \\ \dot{\mathbf{u}}=\dot{\mathbf{u}}_0 & \text{in }\Omega \\ \end{matrix} \end{equation} where $\rho$ is the density, and $\mathbf{b}$, $\mathbf{u}$, $\dot{\mathbf{u}}$ and $\ddot{\mathbf{u}}$ are the body force, displacement, velocity and acceleration vectors, respectively. In this equation, $\mathbf{\bar{u}}$ and $\mathbf{\bar{t}}$ denote the prescribed displacement and traction vectors on the boundary of the domain, and $\mathbf{u}_0$ and $\dot{\mathbf{u}}_0$ are the initial displacement and velocity vectors of the domain, respectively. $\mathbf{n}_{\Gamma }$ and $\mathbf{n}_{\Gamma _{\text{d}}}$ are the unit normal vectors to the external boundary and the crack surfaces. $\mathbf{\sigma}$ is the Cauchy stress tensor, which is related to the strain tensor $\mathbf{\varepsilon}$ through Hooke's law for isotropic elastic materials as $\mathbf{\sigma} = \mathbf{D}:\mathbf{\varepsilon}$, where $\mathbf{D}$ is the fourth-order elasticity tensor. As shown in Fig. 
\ref{fig:potato}, the displacement field is discontinuous across $\Gamma _{d}$, while the stress field is singular at the crack-tips; hence, in the XFEM formulation it can be expressed as \begin{equation} \label{eq:stdenr} \mathbf{u}=\mathbf{u}^{\text{cont}}+ M_{\Gamma _{\text{d}}}(\mathbf{x}) \mathbf{u}^{\text{disc}}+\sum_{i=1}^{4}F_{\rm{i}}(\mathbf{x})\mathbf{u}_{\rm{i}}^{\text{tip}} \end{equation} where $\mathbf{u}^{\text{cont}}$, $M_{\Gamma _{\text{d}}}(\mathbf{x}) \mathbf{u}^{\text{disc}}$ and $\sum_{i=1}^{4}F_{\rm{i}}(\mathbf{x})\mathbf{u}_{\rm{i}}^{\text{tip}}$ are the continuous, discontinuous and crack tip terms of the displacement field, respectively. $M_{\Gamma _{\text{d}}}(\mathbf{x})$ is the shifted Heaviside enrichment function that generates a discontinuity across $\Gamma _{d}$ via $M_{\Gamma _{\text{d}}}(\mathbf{x}) = H_{\Gamma _{d}}(\varphi (\mathbf{x}))=\mathbb{H}_{\Gamma _{d}}(\varphi (\mathbf{x}))-\mathbb{H}_{\Gamma _{d}}(\varphi (\mathbf{x^{I}}))$ \cite{liu2008contact}, where \begin{equation} \label{eq:Heavisied} \mathbb{H}_{\Gamma _{d}}(\varphi (\mathbf{x}))=\left\{\begin{matrix} 1 & \varphi (\mathbf{x})\geq 0\\ -1 & \varphi (\mathbf{x})< 0 \end{matrix}\right. \end{equation} In the above relation, $\varphi (\mathbf{x})$ is the signed distance function corresponding to the discontinuity $\Gamma _{d}$, which is used to determine the enriched nodes and the associated elements (see Fig. \ref{fig:potato}). Also, $F(\mathbf{x})=\left \{ F_{1},F_{2},F_{3},F_{4} \right \}$ is the set of asymptotic crack tip enrichment functions, which are adopted from the analytical solutions of the crack tip process zone. Considering Eq. 
\ref{eq:stdenr} for the discrete form of the displacement field, the infinitesimal strain tensor can be expressed as \begin{equation} \label{eq:epsdefine} \mathbf{\varepsilon }=\nabla^{\text{s}}\mathbf{u}=\nabla^{\text{s}}\mathbf{u}^{\text{cont}}+ H_{\Gamma _{d}}(\varphi (\mathbf{x}))\nabla^{\text{s}}\mathbf{u}^{\text{disc}}+ \delta _{\Gamma _{\text{d}}} (\mathbf{u}^{\text{disc}}\otimes \mathbf{n_{\Gamma _{\text{d}}}} )^{\text{s}}+\nabla^{\text{s}}(\sum_{i=1}^{4}F_{\rm{i}}(\mathbf{x})\mathbf{u}_{\rm{i}}^{\text{tip}}) \end{equation} where $\nabla^{\text{s}}$ and $(\cdot )^{\text{s}}$ denote the symmetric parts of the spatial gradient operator and tensor, respectively, and $\delta _{\Gamma _{\text{d}}}$ is the Dirac delta function on $\Gamma_d$. In order to derive the weak form of Eq. \ref{eq:equilibrium}, a custom-tailored test function $\mathbf{\eta }$ which is consistent with the displacement field is adopted as $\mathbf{\eta}=\mathbf{\eta}^{\text{cont}}+ M_{\Gamma _{\text{d}}}(\mathbf{x}) \mathbf{\eta}^{\text{disc}}+\sum_{i=1}^{4}F_{\rm{i}}(\mathbf{x})\mathbf{\eta}_{\rm{i}}^{\text{tip}}$. Following the standard approach in the calculus of variations, the weak form of the equation of motion is obtained as \begin{equation} \label{eq:vartotal} \int_{\Omega }^{}\nabla^{\text{s}}\mathbf{\eta}:\mathbf{\sigma}\text{d}\Omega = \int_{\Omega }^{}\mathbf{\eta}\cdot \rho \mathbf{b}\text{d}\Omega - \int_{\Omega }^{}\mathbf{\eta}\cdot \rho \ddot{\mathbf{u}}\text{d}\Omega +\int_{\Gamma _{\text{t}}}^{}\mathbf{\eta}\cdot \mathbf{\bar{t}}\text{d}\Gamma \end{equation} Substituting $\mathbf{\eta }$ in the form of the independent weight functions $\mathbf{\eta}^{\text{cont}}$, $\mathbf{\eta}^{\text{disc}}$ and $\mathbf{\eta}_{\rm{i}}^{\text{tip}}$ into Eq. 
\ref{eq:vartotal}, the weak form of the continuous part of the governing equations is obtained as \begin{equation} \label{eq:varstd} \int_{\Omega }^{}\nabla^{\text{s}}\mathbf{\eta}^{\text{cont}}:\mathbf{\sigma}\text{d}\Omega = \int_{\Omega }^{}\mathbf{\eta}^{\text{cont}} \cdot \rho \mathbf{b}\text{d}\Omega - \int_{\Omega }^{}\mathbf{\eta}^{\text{cont}}\cdot \rho \ddot{\mathbf{u}}\text{d}\Omega +\int_{\Gamma _{\text{t}}}^{}\mathbf{\eta}^{\text{cont}} \cdot \mathbf{\bar{t}}\text{d}\Gamma \end{equation} and the discontinuous and singular tip-enrichment parts can be expressed as \begin{equation} \label{eq:varenr} \begin{split} \int_{\Omega }^{}[H_{\Gamma _{d}}(\varphi (\mathbf{x}))\nabla^{\text{s}}\mathbf{\eta}^{\text{disc}}]:\mathbf{\sigma}\text{d}\Omega + \int_{\Gamma _{\text{d} }}^{}\mathbf{\eta}^{\text{disc}} \cdot \mathbf{\sigma}\cdot \mathbf{n}_{\Gamma _{\text{d}}}\text{d}\Gamma= &\int_{\Omega }^{}H_{\Gamma _{d}}(\varphi (\mathbf{x}))\mathbf{\eta}^{\text{disc}} \cdot \rho \mathbf{b}\text{d}\Omega\\ & - \int_{\Omega }^{}H_{\Gamma _{d}}(\varphi (\mathbf{x}))\mathbf{\eta}^{\text{disc}} \cdot \rho \ddot{\mathbf{u}}\text{d}\Omega +\int_{\Gamma _{\text{t}}}^{}H_{\Gamma _{d}}(\varphi (\mathbf{x}))\mathbf{\eta}^{\text{disc}} \cdot \mathbf{\bar{t}}\text{d}\Gamma \end{split} \end{equation} \begin{equation} \label{eq:vartip} \int_{\Omega }^{}\nabla^{\text{s}}(\sum_{i=1}^{4}F_{\rm{i}}(\mathbf{x})\mathbf{\eta}_{\rm{i}}^{\text{tip}}):\mathbf{\sigma}\text{d}\Omega = \int_{\Omega }^{}\sum_{i=1}^{4}F_{\rm{i}}(\mathbf{x})\mathbf{\eta}_{\rm{i}}^{\text{tip}} \cdot \rho \mathbf{b}\text{d}\Omega - \int_{\Omega }^{}\sum_{i=1}^{4}F_{\rm{i}}(\mathbf{x})\mathbf{\eta}_{\rm{i}}^{\text{tip}}\cdot \rho \ddot{\mathbf{u}}\text{d}\Omega \end{equation} The integration domains of Eq. \ref{eq:varenr} and Eq. 
\ref{eq:vartip} are limited to the supports of $M_{\Gamma _{\text{d}}}(\mathbf{x})$ and $F(\mathbf{x})$, namely the enriched zone $\Omega _{\text{h}}$ detected by the signed distance function and the asymptotic crack tip zone $\Omega _{\rm{tip}}$, respectively. Since cracks are stipulated as traction-free in this study, the second term in Eq. \ref{eq:varenr} vanishes. Adopting a Galerkin formulation, the trial and test functions are discretized by $C^{0}$ continuous shape functions $N_{\text{\rm{i}}} (\mathbf{x})$, which are associated with the vectors of nodal displacements for the standard part $\hat{\mathbf{u}}$ and the enriched parts, including the discontinuous $\tilde{\mathbf{u}}$ and crack tip $\bar{\mathbf{u}}$ contributions, as \begin{equation} \label{eq:descretise} \left\{ {\begin{array}{*{20}{l}} {{{\bf{u}}^{{\rm{cont}}}}({\bf{x}}) = \sum\nolimits_{{\rm{i}} \in {m_{{\rm{std}}}}} {{N_{\rm{i}}}({\bf{x}}){\bf{\hat u}_{\rm{i}}}} }&{{\rm{in}}\,\Omega }\\ {{{\bf{u}}^{{\rm{disc}}}}({\bf{x}}) = \sum\nolimits_{{\rm{i}} \in {m_{{\rm{disc}}}}} {{N_{\rm{i}}}({\bf{x}}){H_{{\Gamma _{\rm{d}}}}}(\varphi ({\bf{x}})){\bf{\tilde u}_{\rm{i}}}} }&{{\rm{in}}\,{\Omega _{\rm{h}}}}\\ {{{\bf{u}}^{{\rm{tip}}}}({\bf{x}}) = \sum\nolimits_{{\rm{i}} \in {m_{{\rm{tip}}}}} {{N_{\rm{i}}}({\bf{x}})\sum_{{\rm{j}}=1}^{4}F_{\rm{j}}(\mathbf{x}){\bf{\bar u}_{\rm{ij}}}} }&{{\rm{in}}\,{\Omega _{\rm{tip}}}} \end{array}} \right. \end{equation} where $m_\text{std}$, $m_\text{disc}$ and $m_\text{tip}$ are the sets of standard, discontinuous and tip enrichment nodes, respectively. \subsection{Fracture criteria and crack propagation} \label{S:3-2 SIF} The interaction integral method is an effective energy approach based on the $J$-integral concept, and it is widely used in the calculation of mixed-mode stress intensity factors (SIFs) \cite{anderson2017fracture}. This method takes advantage of auxiliary fields, available from analytical solutions, that are superimposed on the calculated fields. 
Typically, the boundary or domain form of the interaction integral is used to evaluate the stress intensity factors as a post-process. The energy release rate of a solid body in two dimensions is expressed as \begin{equation} \label{eq:J1} J=\frac{K_{I}^{2}+K_{II}^{2}}{{E}'} \end{equation} where ${E}'$ is defined as $E/(1-\upsilon ^2)$ and $E$ for plane strain and plane stress problems, respectively. The contour form of the $J$-integral is represented as \begin{equation} \label{eq:J2} J=\int_{\Gamma _{J}}^{}\left [ w\, n_{{x}'}-(\mathbf{\sigma} \cdot \nabla_{{x}'}\mathbf{u})\cdot \mathbf{n} \right ]d\Gamma \end{equation} where $w$ is the strain energy density function, and $\mathbf{n}$ and $n_{{x}'}$ are the unit normal vector of the closed curved path $\Gamma _{J}$ encompassing the crack-tip and its horizontal component (with respect to the local crack coordinates ${x}'-{y}'$), respectively. $\nabla_{{x}'}$ is the directional gradient operator in the local horizontal direction. Applying Eq. \ref{eq:J2} to the actual and auxiliary fields, the interaction integral takes the form \begin{equation} \label{eq:I12} I^{(1+2)}=\int_{\Gamma _{J}}^{}\left [ W^{(1,2)}\, n_{{x}'}-(\mathbf{\sigma }^{{(1)}}\cdot \nabla_{{x}'}\mathbf{u}^{(2)}+\mathbf{\sigma }^{{(2)}}\cdot \nabla_{{x}'}\mathbf{u}^{(1)})\cdot \mathbf{n} \right ]d\Gamma \end{equation} in which the superscripts $(1)$ and $(2)$ represent the actual and auxiliary states, respectively, and $W^{(1,2)} = \mathbf{\sigma }^{(1)}: \mathbf{\varepsilon }^{(2)}=\mathbf{\sigma }^{(2)}: \mathbf{\varepsilon }^{(1)}$ is the interaction strain energy. Combining Eqs. 
\ref{eq:J1}, \ref{eq:J2} and \ref{eq:I12}, it can be concluded that \begin{equation} \label{eq:I12K} I^{(1+2)}=\frac{2}{{E}'}(K_{\text{I}}^{(1)}K_{\text{I}}^{(2)}+ K_{\text{II}}^{(1)}K_{\text{II}}^{(2)}) \end{equation} By an appropriate selection of auxiliary fields for pure mode I (i.e., $K_{I}^{(2)}=1$, $K_{II}^{(2)}=0$) and mode II (i.e., $K_{I}^{(2)}=0$, $K_{II}^{(2)}=1$), the stress intensity factors of mixed-mode problems can be calculated. In addition, a domain form of Eq. \ref{eq:I12} can be obtained by application of the Gauss divergence theorem and the use of special weighting functions (see Anderson \cite{anderson2017fracture}). In order to estimate the fracture propagation direction, the maximum hoop stress criterion \cite{mohammadi2008extended,giner2009abaqus} is employed. Based on the calculated values of the SIFs, the propagation angle $\theta _{c}$ is obtained as \begin{equation} \label{eq:thetac} \theta _{c}= \text{cos}^{-1}\left ( \frac{3K_{\text{II}}^{2} + \sqrt{K_{\text{I}}^{4}+8K_{\text{I}}^{2}K_{\text{II}}^{2}}}{K_{\text{I}}^{2}+9K_{\text{II}}^{2}} \right ) \end{equation} where $\theta _{c}$ is measured with respect to the current local coordinate system of the associated crack tip. Using the propagation angle $\theta _{c}$, an arbitrary crack increment is added to the existing crack configuration, and the solution continues. For more information on the XFEM implementation of fractures and issues related to blending elements, refer to \cite{khoei2014extended, mohammadi2008extended}. \section{Conclusions} \label{S:5 (Conclusions)} In this study, an XFEM implementation in COMSOL Multiphysics is presented and applied to crack analysis in 2D and 3D solid domains. By employing a special weak form of the governing equations, the enrichment strategy is implemented within the framework of the COMSOL Multiphysics software. Distinct Solid Mechanics modules are adopted to incorporate the standard and enriched parts of the displacement field in the context of XFEM. 
The stress intensity factor calculations, pre-processing of the model, level set updating and the crack propagation analysis are performed by means of the built-in features of the software in conjunction with external MATLAB functions. The implementational aspects and available remedies for modeling issues are explained in detail. In the first example, the accuracy of the SIF analysis of the proposed implementation is validated against benchmark analytical solutions in 2D settings. The second example is devoted to highlighting the capability of the proposed strategy in dealing with heavily fractured domains. Next, two crack growth studies, involving single and multiple crack propagation, are presented to demonstrate the capabilities of the extended framework in cases where the geometry is subject to changes. In the final example, an extension of the proposed procedure for modeling single/multiple cracks in 3D domains is carried out. In all numerical examples, the results obtained indicate excellent agreement with the existing analytical/computational solutions or experimental measurements. This implies the soundness of the proposed implementation strategy and its considerable potential for modeling fractures. Future developments could be aimed at complex multi-field problems, which are the cornerstone of the COMSOL Multiphysics package. \section{Introduction} \label{S:1 (introduction)} Since its inception in 1999 by Belytschko and collaborators \cite{belytschko1999elastic,moes1999finite,daux2000arbitrary,dolbow2000discontinuous}, the eXtended Finite Element Method (XFEM) has emerged as a versatile and rigorous computational tool for tackling \textit{weak/strong} discontinuities as well as high gradients (i.e., \textit{singularities}). In XFEM, the special characteristics of the solution field are incorporated into the approximation space by means of the so-called enrichment functions.
With the aid of the partition of unity concept, special enrichment functions are added to the standard approximation space associated with the classical finite element description \cite{khoei2014extended}. The mathematical foundation of the enrichment strategy to enhance the solution field traces back to the partition of unity finite element method (PUFEM) and the generalized finite element method (GFEM) contributions (e.g., see Melenk and Babuška \cite{melenk1996partition}, Strouboulis et al. \cite{strouboulis2000design}), in which enrichment functions are employed at a global level, contrary to XFEM where enrichments are utilized locally. Early contributions in XFEM were focused on the crack growth problem to demonstrate its performance in circumventing the need for remeshing, mesh refinements and data transfer (see Mohammadi \cite{mohammadi2008extended}). XFEM is now regarded as a proven technique in dealing with a broad range of applications, including linear elastic fracture mechanics (LEFM) (Moës et al. \cite{moes1999finite}, Sukumar et al. \cite{sukumar2000extended}, Chen et al. \cite{chen2012extended}), cohesive fractures (Zi and Belytschko \cite{zi2003new}, de Borst et al. \cite{de2006mesh}), composite materials (Sukumar et al. \cite{sukumar2004partition}, Gracie and Belytschko \cite{gracie2009concurrently}, Akhondzadeh et al. \cite{akhondzadeh2017efficient}, Karimi et al. \cite{karimi2019adapting}, Pike and Oskay \cite{pike2015xfem}), shear band localization (Mikaeili and Schrefler \cite{mikaeili2018xfem}), contact mechanics (Liu et al. \cite{liu2008contact}, Broumand et al. \cite{BROUMAND201397}, Hirmand et al. \cite{hirmand2015augmented}), fluid–structure interaction (Legay et al. \cite{legay2006eulerian}), fractured porous media (de Borst et al. \cite{de2006numerical}, Khoei et al. \cite{khoei2014mesh,khoei2018enriched}, Mohammadnejad and Khoei \cite{mohammadnejad2013extended}, Jafari et al.
\cite{jafari2021fully}), and thermo-hydro-mechanical coupling processes (Khoei et al. \cite{khoei2012thermo}, Salimzadeh and Khalili \cite{salimzadeh2016fully}, Parchei and Gracie \cite{parchei2019undrained}), to name a few. The increasing interest shown by both the computational mechanics community and engineering end users has led to a variety of developments dedicated to open-access XFEM simulators. Notable examples include the open-source XFEM implementations by Sukumar and Prévost \cite{sukumar2003modeling}, who developed a Fortran implementation, and Dunant et al. \cite{dunant2007architecture}, who established an object-oriented programming library for XFEM. Nonetheless, these and other similar in-house simulators often lack computational efficiency, a key ingredient in real-world engineering applications which commonly involve complex geometries, three dimensional settings, and extensive heterogeneities. As a remedy, there is a growing trend for the implementation of XFEM in general-purpose FE software packages featuring efficient built-in solvers and advanced meshing tools, such as ABAQUS, which permit new developments through the addition of user-defined subroutines. The substructuring approach to XFEM implementation in commercial packages, with no need for modification of the kernel, was first suggested by Wyart et al. \cite{wyart2008substructuring}. Giner et al. \cite{giner2009abaqus} employed the user subroutine feature in ABAQUS (i.e., UEL) to simulate elastic fracture growth. Further improvements in relation to the ABAQUS implementation of XFEM have been due to contributions by Cruz et al. \cite{cruz2019xfem} and Dehghan et al. \cite{dehghan20173d}, for intersecting fractures, by Xu and Yuan \cite{xu2009damage} and Haddad and Sepehrnoori \cite{haddad2016xfem}, for cohesive fractures, and by Ooi et al. \cite{ooi2018investigating}, regarding contact mechanics.
In recent years, there has been an overwhelming demand for the extension of XFEM to multi-physics problems involving chemo-thermo-hydro-mechanical coupling analysis (e.g., see Khoei et al. \cite{khoei2012thermo}, Vahab et al. \cite{vahab2019x}, de Borst et al. \cite{de2006numerical}). The inclusive capabilities of COMSOL Multiphysics in dealing with the simulation of multi-field problems and its attraction among researchers and engineers have been the incentive in this work to pursue the first XFEM implementation in COMSOL. A straightforward procedure is presented for the proposed implementation of XFEM by exploiting COMSOL's built-in features endowed with the necessary external MATLAB functions, which can be clustered into the following tasks:\\ i) Adopt a compatible XFEM formulation according to the structure of COMSOL;\\ ii) Regenerate and modify the generic Solid Mechanics module of COMSOL to account for the presence of crack interfaces;\\ iii) Conduct the level set analysis, for tracking the interfaces, via external MATLAB functions to overcome the software's restriction in accessing data at the nodal/elemental level prior to (i.e., at the pre-processing stage) and during the analysis; and,\\ iv) Perform SIF evaluation by taking advantage of the internal functions and variables at the post-processing stage.\\ The proposed procedure is robust and enables the handling of complex scenarios in cracked media in 2D/3D domains. While it is formulated for solid mechanics simulations, it is eminently amenable to extension to multi-physics problems. The paper is organized as follows: In section \ref{S:2 (Formulation)}, the governing equations for the XFEM formulation of fracture growth in an elastic domain are briefly described in conjunction with the weak forms and fracture growth criteria.
Section \ref{S:6 (COMSOL impl)} is dedicated to the implementation of XFEM in COMSOL, which involves detailed algorithms employed for the identification of the enriched elements, module setup, evaluation of the stress intensity factors, and numerical integration. In section \ref{S:4 (results)}, the performance of the proposed framework is investigated using a selection of benchmark examples, in 2D and 3D settings. Concluding remarks are presented in section \ref{S:5 (Conclusions)}. Transfer of knowledge to academia and industry is a cornerstone of this paper, hence the proposed model is made available at \href{https://github.com/ahmadjafari93/xfem-comsol.git}{https://github.com/ahmadjafari93/xfem-comsol.git}. \section*{References} \bibliographystyle{model1-num-names} \section{Numerical simulations} \label{S:4 (results)} In this section, the accuracy and robustness of the proposed XFEM implementation in COMSOL are thoroughly investigated by several numerical simulations. In the first example, the performance of the proposed solution strategy is investigated in the case of stationary cracks. A convergence study is conducted and the SIF values of an inclined crack for pure mode I and mixed-mode cases are acquired, and compared to the available analytical solutions in the literature. The flexibility of the proposed implementation to handle heavily fractured domains is demonstrated in another 2D example. In the subsequent two examples, mixed-mode crack propagation in complex geometries is studied comprehensively. Finally, a selection of three-dimensional fracture analyses is carried out to illustrate the capability of the proposed implementation in dealing with more complex geometric settings. In all the examples, a linear elastic material with Young's modulus of $E=200\text{ GPa}$ and Poisson's ratio of $\nu=0.3$ is assumed, unless specified otherwise. A quasi-static formulation is used for crack propagation analysis under displacement-controlled boundary conditions.
The 2D analyses are performed by assuming plane strain state, and bi-linear quadrilateral elements are used to discretize the solution domain. Tetrahedral and brick elements are employed for 3D analysis. The integration in the enriched modules SMenr and SMtip is carried out by using 35-point and 40-point Gaussian quadrature, respectively, while $\delta$ is set to 0.002 to ensure the existence of sufficient number of Gauss integration points at either sides of the crack interfaces, within the enriched elements. \subsection{Center crack in an infinite domain; model verification and SIF analysis} \label{S:4-1 (middlecrackSIF)} In this example, the simulation results associated with the proposed XFEM implementation are compared to a series of benchmark analytic solutions in 2D settings \cite{anderson2017fracture}. As depicted in Fig. \ref{fig:ex1-SIF}, a square plate is considered with side length of $w=5\text{ m}$ that contains an inclined center crack of size $2a=0.2\text{ m}$. The ratio $w/a$ is chosen as $50$ to emulate a crack in an infinite domain. The plate is subjected to uniaxial far-field tension of $1$ MPa at the top edge, while the bottom edge is fixed. \begin{figure}[!t] \centering\includegraphics[width=0.45\linewidth]{Fig5.pdf} \caption{Geometry and boundary conditions of a center crack in an infinite domain.} \label{fig:ex1-SIF} \end{figure} In the first part of this example, a convergence study is performed on the stress intensity factors and the significance of the crack tip enrichment for the case of a horizontal center crack ($\beta=0$). To this end, the mesh in the vicinity of each crack-tip (i.e., over a square of length $w_{\text{s}}=1\text{m}$) is refined using the normalized element sizes of $a/s = 2,5,6.7,9.1$ and $12.5$, where $s$ is the element size in the refinement zone. The radius of the circular integration path employed for the interaction-integral calculations is set to $a$ (see section \ref{S:3-2 module}). 
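Given the two interaction-integral evaluations (one per auxiliary state), the SIFs follow from Eq. \ref{eq:I12K} by a simple inversion. The snippet below is a minimal sketch of this post-processing step under a plane strain assumption; the function name and the numerical values are illustrative only, not the actual COMSOL/MATLAB routine.

```python
def sifs_from_interaction_integrals(i_mode1, i_mode2, E, nu, plane_strain=True):
    """Recover K_I and K_II from the interaction integrals evaluated with
    pure mode I (K_I^(2)=1) and pure mode II (K_II^(2)=1) auxiliary fields,
    using I^(1+2) = (2/E') K  (Eq. I12K)."""
    E_eff = E / (1.0 - nu ** 2) if plane_strain else E  # E' for plane strain / stress
    return 0.5 * E_eff * i_mode1, 0.5 * E_eff * i_mode2

# Round-trip check with a hypothetical K_I (MPa*sqrt(m)) and E in MPa
E, nu = 200e3, 0.3
E_eff = E / (1.0 - nu ** 2)
i1, i2 = 2.0 * 0.5605 / E_eff, 0.0   # forward relation for the two auxiliary states
K1, K2 = sifs_from_interaction_integrals(i1, i2, E, nu)
```

A pure mode II crack would be treated identically with the roles of the two auxiliary states swapped.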
In Table \ref{t:meshsens}, the simulation results of the developed model are compared against the exact solutions expressed as \begin{equation} \label{eq:sifanal} \begin{matrix} K_{\text{I}}=\sigma \sqrt{\pi a}\text{ cos}^{2}\beta \\ K_{\text{II}}=\sigma \sqrt{\pi a}\text{ sin}\beta \text{ cos}\beta \end{matrix} \end{equation} \begin{table}[ht] \caption{Relative errors of the stress intensity factors for a horizontal crack in an infinite plate with (w) and without (w/o) crack tip enrichment (SIFs are in MPa$\sqrt{\text{m}}$).} \centering \begin{tabular}{c c c c c c c} \hline \hline $\bar{a}^{*}(\text{m})$ & $a/s$ & $K_{\text{I}}^\text{exact}$ & $K_{\text{I}}^{\text{w/o tip}^{\textcolor{white}{A}}}$ & $K_{\text{I}}^{\text{w tip}^{\textcolor{white}{A}}}$ & $\text{error}^{\text{w/o tip}}(\%)$ & $\text{error}^{\text{w tip}}(\%)$ \\ [0.5ex] \hline 0.1 & 2 & 0.5605 & 0.5893 & 0.5658 & 4.88 & 0.94 \\ 0.1 & 5 & 0.5605 & 0.5731 & 0.5632 & 2.20 & 0.48 \\ 0.108 & 6.7 & 0.5825 & 0.5742 & 0.5588 & 1.42 & 0.29\\ 0.103 & 9.1 & 0.5675 & 0.5643 & 0.5601 & 0.55 & 0.07\\ 0.102 & 12.5 & 0.5647 & 0.5650 & 0.5605 & 0.05 & 0.00\\ [1ex] \hline \end{tabular}\\ \small * $\bar{a}$ represents the crack length in the COMSOL model w/o tip enrichment; this can be slightly different from the nominal crack length, since the enriched elements are considered fully fractured up to the element edges. \label{t:meshsens} \end{table} As can be seen from Table \ref{t:meshsens}, the proposed procedure evaluates the SIFs correctly and with high accuracy. It is also observed that employing the crack tip enrichment functions in the model can minimize the errors in SIFs even for relatively coarse discretizations; however, this is achieved at the expense of increased computational cost, since four additional Solid Mechanics modules with maximized integration order are required to accommodate the asymptotic crack tip functions.
On the other hand, the results for the cases where the crack tip enrichment functions are excluded show satisfactory accuracy for the range of $a/s>7$. Hence, in favour of computational efficiency and simplicity of implementation, only the discontinuous Heaviside enrichment is considered in the following examples. In Fig. \ref{fig:errornorm}, the convergence in the energy error norm of the proposed formulation versus element size is studied. The energy error norm $\left \| e \right \|_{E}$ is defined as \cite{liu2013smoothed} \begin{equation} \label{eq:errornorm} \left \| e \right \|_{E}=\frac{1}{\Omega }\sqrt{\int_{\Omega }^{}(\mathbf{\varepsilon}^{\text{XFEM}}-\mathbf{\varepsilon}^{\text{exact}})^{\mathrm{T}}\mathbf{D}(\mathbf{\varepsilon}^{\text{XFEM}}-\mathbf{\varepsilon}^{\text{exact}})d\Omega } \end{equation} where $\mathbf{\varepsilon}^{\text{XFEM}}$ is the strain field associated with the XFEM simulation, whereas $\mathbf{\varepsilon}^{\text{exact}}$ is the high-fidelity solution obtained from a FEM analysis using an extremely fine mesh. The rate at which the error norm decreases with mesh refinement is used to assess the validity of the numerical analysis. Fig. \ref{fig:errornorm} demonstrates that the optimal convergence rate of almost 1 is achieved by the proposed XFEM approach \cite{khoei2014extended}.
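The fitted convergence rate quoted in the legend of Fig. \ref{fig:errornorm} can be recovered from the four plotted $(\log h,\, \log \left \| e \right \|_{E})$ pairs with an ordinary least-squares fit. The short script below is a post-processing sketch only; the data values are those plotted in the figure.

```python
# (log element size, log energy error norm) pairs of the convergence study
log_h = [-2.0, -1.602059991, -1.301029996, -1.0]
log_e = [-1.843875053, -1.528855035, -1.194274916, -0.895273956]

# ordinary least-squares slope = convergence rate m
n = len(log_h)
mx, my = sum(log_h) / n, sum(log_e) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(log_h, log_e))
         / sum((x - mx) ** 2 for x in log_h))
print(round(slope, 2))  # -> 0.96
```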
\begin{figure} \centering \begin{tikzpicture} \begin{axis}[ xlabel={ Log (mesh size (m))}, ylabel={Log ($\left \| e \right \|_{E}$)}, xmin=-2.2, xmax=-0.7, ymin=-2.2, ymax=-0.7, xtick={-2,-1.5,-1,-0.5}, ytick={-2,-1.5,-1,-0.5}, legend pos=south east, ymajorgrids=false, grid style=dashed, ] \addplot[ color=blue, dashed, mark=none, ] coordinates { (-2,-1.8681)(-1,-0.9094) }; \addplot[ only marks, color=blue, mark=square, ] coordinates { (-2,-1.843875053)(-1.602059991, -1.528855035)(-1.301029996, -1.194274916 )(-1, -0.895273956)}; \addlegendentry{$m=0.96$ ($R^{2}=0.99$)} \end{axis} \end{tikzpicture} \caption{Energy error norm for the horizontal crack problem using the proposed XFEM implementation.} \label{fig:errornorm} \end{figure} In the remainder of this example, the effectiveness of the proposed procedure in handling mixed-mode fracturing is demonstrated through examining the stress intensity factors for the case of inclined cracks. In this case, values for $K_{\text{I}}$ and $K_{\text{II}}$ are obtained by means of a locally refined mesh, using a normalized element size of $a/s=12$, over a square zone of length $w_{\text{s}}$ that encompasses the crack. A circular path with a radius of $0.9a$ is adopted for the calculation of the interaction integral. All other assumptions are similar to the horizontal crack problem definition. The calculated SIFs for the mixed-mode crack analysis are depicted in Fig. \ref{fig:sifcompare}, which are in excellent agreement with the exact values given by Eq. \ref{eq:sifanal}. Notably, the maximum error in the calculation of the crack propagation angle by means of the obtained SIFs does not exceed $0.5$ degrees (see Eq. \ref{eq:thetac}), which further highlights the accuracy of the proposed approach.
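The exact curves in Fig. \ref{fig:sifcompare} and the corresponding propagation angles follow directly from Eqs. \ref{eq:sifanal} and \ref{eq:thetac}. A small self-contained check is sketched below, with $\sigma=1$ MPa and $a=0.1$ m as in this example; the function names are illustrative only.

```python
import math

def sif_inclined_crack(sigma, a, beta):
    """Exact SIFs for an inclined center crack in an infinite plate (Eq. sifanal)."""
    k0 = sigma * math.sqrt(math.pi * a)
    return k0 * math.cos(beta) ** 2, k0 * math.sin(beta) * math.cos(beta)

def propagation_angle(k1, k2):
    """Maximum hoop stress propagation angle of Eq. thetac (magnitude, radians)."""
    num = 3.0 * k2 ** 2 + math.sqrt(k1 ** 4 + 8.0 * k1 ** 2 * k2 ** 2)
    return math.acos(num / (k1 ** 2 + 9.0 * k2 ** 2))

K1, K2 = sif_inclined_crack(1.0, 0.1, math.radians(30.0))
print(round(K1, 3), round(K2, 3))   # -> 0.42 0.243 (MPa*sqrt(m))
assert propagation_angle(1.0, 0.0) == 0.0  # pure mode I grows straight ahead
```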
\begin{figure} \centering \begin{tikzpicture} \begin{axis}[ xlabel={ $\beta$ (deg)}, ylabel={$K_{\text{I}}$,$K_{\text{II}}$ (MPa$\sqrt{\text{m}}$)}, xmin=0, xmax=90, ymin=0, ymax=0.6, xtick={0,10,20,30,40,50,60,70,80,90}, ytick={0.0,0.1,0.2,0.3,0.4,0.5,0.6}, legend pos=north east, ymajorgrids=false, grid style=dashed, ] \addplot[ color=blue, mark=square, ] coordinates { (10,0.546)(20,0.498)(30,0.422)(40,0.331)(50,0.233)(60,0.141)(70,0.066)(80,0.017) }; \addlegendentry{$K_{\text{I}}$ COMSOL} \addplot[ color=blue, mark=triangle, ] coordinates { (10,0.094)(20,0.184)(30,0.251)(40,0.286)(50,0.284)(60,0.251)(70,0.184)(80,0.095) }; \addlegendentry{$K_{\text{II}}$ COMSOL} \addplot[ color=red, dashed, mark=square, ] coordinates { (10,0.544)(20,0.495)(30,0.420)(40,0.329)(50,0.232)(60,0.140)(70,0.066)(80,0.017) }; \addlegendentry{$K_{\text{I}}$ Analytical} \addplot[ color=red, dashed, mark=triangle, ] coordinates { (10,0.096)(20,0.180)(30,0.243)(40,0.276)(50,0.276)(60,0.243)(70,0.180)(80,0.096) }; \addlegendentry{$K_{\text{II}}$ Analytical} \end{axis} \end{tikzpicture} \caption{Comparison of $K_{\text{I}}$ and $K_{\text{II}}$ values for the inclined crack problem; COMSOL results vs analytical exact solutions.} \label{fig:sifcompare} \end{figure} \subsection{Square plate with multiple randomly distributed cracks} \label{S:4-2 heavilyfractured} The heterogeneity caused by the presence of pre-existing cracks is a crucial subject in a wide range of research fields, including the micro-mechanical behaviour of concrete \cite{kurumatani2019simulations}, micro-cracks in biological organs \cite{hammond2019mechanics}, and natural fractures in geological formations \cite{vahab2021numerical,hirmand2019robust}, to name a few. This example studies the robustness and flexibility of the proposed implementation in handling domains containing randomly distributed cracks. As Fig.
\ref{fig:ex-3 geometry} shows, the problem consists of 17 equally-sized cracks, with the length of $0.2$ m, which are randomly distributed in a square plate of side length $L=1$ m. The plate is subjected to a tensile traction of $\bar{\mathbf{t}}=1$ MPa at the top edge, while the bottom edge is supposed to be fixed. The material properties are identical to those of example \ref{S:4-1 (middlecrackSIF)}. In lieu of an exact solution, a high-fidelity FEM model using the same configuration is employed. The domain meshes consist of 14,641 and 15,000 quadrilateral elements, respectively, for the XFEM implementation and the FEM model, in which an average element size of $8$ mm is adopted. \begin{figure}[!t] \centering\includegraphics[width=0.45\linewidth]{Fig13-0.pdf} \caption{Randomly distributed cracks problem; geometry and boundary conditions. The marked crack is chosen to study the crack opening displacement. } \label{fig:ex-3 geometry} \end{figure} Fig. \ref{fig:ex3-Uy} shows the contours of the vertical displacement $u_y$ for both XFEM and FEM simulations, where an excellent agreement is observed between the results. In addition, the profile of crack opening displacement (COD) is presented for one of the cracks in Fig. \ref{fig:ex3opening2D} (marked by an ellipse in Fig. \ref{fig:ex-3 geometry}). It is observed that the maximum difference between the two opening profiles is less than $2\%$. Note that the minor discrepancies here are attributed to the representation of crack surfaces by finite-width bodies of width $0.01$ m in the FEM model. This also necessitates extracting the fracture profile from the displacement field on either side of the crack in the FE model; in the XFEM implementation, this task is readily performed by using the enriched component of the displacement field (i.e., $\mathbf{u}^{\text{disc}}$ in SMenr). Finally, contours of the vertical stress $\sigma_{yy}$ are presented in Fig. \ref{fig:ex3=Syy} for both simulations.
Both contours match reasonably well, in particular at locations where the crack-tip stress fields interact with each other. \begin{figure} \centering \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.95\linewidth]{Fig13-1.jpeg} \caption{XFEM} \label{fig:ex3-UyXFEM} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=0.95\linewidth]{Fig13-2.jpeg} \caption{FEM} \label{fig:ex3-UyFEM} \end{subfigure} \caption{Contours of vertical displacement distribution $u_{\text{y}}$ in the domain containing randomly distributed cracks; XFEM vs FEM results.} \label{fig:ex3-Uy} \end{figure} \begin{figure} \centering \begin{tikzpicture} \begin{axis}[ xlabel={ ${x}'$ (m)}, ylabel={opening ($\times10^{-3}$mm)}, xmin=0, xmax=0.22, ymin=0.0, ymax=2, xtick={0,0.05,0.10,0.15,0.2}, ytick={0.0,0.5,1.0,1.5,2.0}, legend pos=north east, ymajorgrids=false, grid style=dashed, ] \addplot[ color=red, mark=none, ] file[] {openingXFEM.dat}; \addlegendentry{XFEM} \addplot[ color=blue, mark=none, ] file[]{openingFEM.dat}; \addlegendentry{FEM} \end{axis} \end{tikzpicture} \caption{Crack opening displacement profile of an arbitrary crack in a domain with randomly distributed cracks; XFEM vs FEM results.} \label{fig:ex3opening2D} \end{figure} \begin{figure} \centering \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.95\linewidth]{Fig14-1.JPG} \caption{XFEM} \label{fig:ex3-SyyXFEM} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=0.95\linewidth]{Fig14-2.jpeg} \caption{FEM} \label{fig:ex3-SyyFEM} \end{subfigure} \caption{Contours of vertical stresses $\sigma _{\text{yy}}$ in a domain with randomly distributed cracks.} \label{fig:ex3=Syy} \end{figure} \subsection{Mixed-mode crack propagation} \label{S:4-2 (crackpropagate)} The following two numerical examples are presented to show the applicability of the proposed implementation in dealing with mixed-mode crack propagation in complex geometries.
In both cases, quasi-static loading conditions are considered, while the propagation angle of the cracks is determined based on the calculated SIFs during the course of the analysis (see Eq. \ref{eq:thetac}). \subsubsection{Crack propagation in a rectangular plate with a hole} \label{S:4-2-1 crackhole} This example is adopted from Giner et al. \cite{giner2009abaqus}, and aims to investigate the effects of a hole in a rectangular plate on the crack propagation pattern. Fig. \ref{fig:ex2-1geo} illustrates the geometry and boundary conditions of the plate, which is made of an aluminum alloy with $E=71.7\text{ GPa}$ and $\nu=0.33$. Consistent with the reference, the initial crack length and crack growth increment are set to $a_{0}=10\text{ mm}$ and $\Delta a=3\text{ mm}$, respectively. An incrementally increasing traction is applied to the top edge of the plate with a maximum of 15 kN/m. The domain is discretized with 7,601 quadrilateral elements with an average element size of $0.67\text{ mm}$. \begin{figure} \centering \includegraphics[width=.35\linewidth]{Fig6.pdf} \caption{Crack in a plate with hole: Geometry and boundary conditions.} \label{fig:ex2-1geo} \end{figure} Figs. \ref{fig:ex2-1path} and \ref{fig:ex2-1mises} respectively show the crack trajectory and the von-Mises stress contour at the end of the analysis. The former is deduced by using the field variable $\psi$, which is equal to unity for fractured elements. The numerical results are in excellent agreement with the experimental observations as depicted in Fig. \ref{fig:ex2-1exp}.
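The growth step itself reduces to elementary geometry: rotate the propagation angle from the local tip frame into the global frame and extend the crack by $\Delta a$. The helper below is a hypothetical sketch of this update (names are illustrative; the COMSOL/MATLAB specifics are omitted).

```python
import math

def advance_crack_tip(tip, crack_angle, theta_c, da):
    """Advance a 2D crack tip by the growth increment da.

    tip         -- (x, y) current tip coordinates
    crack_angle -- current crack direction in the global frame (rad)
    theta_c     -- propagation angle in the local tip frame (rad), Eq. thetac
    da          -- crack growth increment (3 mm in this example)
    Returns the new tip coordinates and the updated crack direction."""
    new_angle = crack_angle + theta_c
    x, y = tip
    return (x + da * math.cos(new_angle), y + da * math.sin(new_angle)), new_angle

# e.g. a horizontal crack kinking upward by 30 degrees
new_tip, new_angle = advance_crack_tip((10.0, 0.0), 0.0, math.radians(30.0), 3.0)
```

The updated segment is then appended to the crack polyline, and the level sets and enrichment flags are refreshed before the next solution step.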
\begin{figure} \centering \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=.70\linewidth]{Fig8.JPG} \caption{} \label{fig:ex2-1path} \end{subfigure}% \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=0.90\linewidth]{Fig9.JPG} \caption{} \label{fig:ex2-1mises} \end{subfigure} \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=.8\linewidth]{Fig7.JPG} \caption{} \label{fig:ex2-1exp} \end{subfigure} \caption{Crack propagation in a rectangular plate with hole; a) crack trajectory based on $\psi$ field, b) von-Mises stress $\sigma_{\text{v}}$ contour at the end of the analysis, and c) experimental observations by \cite{giner2009abaqus}.} \label{fig:ex2-1pathmises} \end{figure} \subsubsection{Multiple crack propagation in a plate with double holes} \label{S:4-2-2 crackhole} In the second example, mixed-mode crack propagation in a plate involving two holes is investigated by means of the proposed model (Fig. \ref{fig:ex2-2geomtry}). Note that this problem was originally introduced by Bouchard et al. \cite{bouchard2003numerical}. The aim of the simulation is to further demonstrate the capability of the proposed implementation in dealing with multiple crack propagation in more complex geometries. Due to the ideal antisymmetry incorporated in the definition of the geometry, FE mesh and boundary conditions, both of the pre-cracks are expected to propagate identically. The initial length of both cracks is $a_{0}=1\text{ mm}$, and the critical fracture toughness is taken as $K_{\text{IC}}=47.4\text{ MPa}\sqrt{\text{m}}$. Two sets of variables, corresponding to each crack tip, are introduced in order to calculate and store the SIFs during the solution. To retain the antisymmetry of the solution, the plate is subjected to a prescribed vertical displacement $\delta=0.05\text{ mm}$ at both the top and bottom edges. The simulation is performed by means of a FE mesh with 12,149 quadrilateral elements.
To ensure the best outcome, the mesh is designed to be relatively structured and symmetrical. As Fig. \ref{fig:ex2-2pathcomsol} shows, the cracks initially deviate towards the adjacent holes and then gradually realign with their initial path. This perfectly matches the crack trajectory obtained by Khoei et al. \cite{khoei2008modeling} using the adaptive finite element method, as depicted in Fig. \ref{fig:ex2-2pathadaptive}. The corresponding von-Mises stress distribution contour at the final crack increment is presented in Fig. \ref{fig:ex2-2mises}. \begin{figure}[!t] \centering\includegraphics[width=0.7\linewidth]{Fig10.pdf} \caption{A plate with two holes and multiple cracks; problem geometry and boundary conditions.} \label{fig:ex2-2geomtry} \end{figure} \begin{figure} \centering \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.93\linewidth]{Fig11-1.JPG} \caption{} \label{fig:ex2-2pathcomsol} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=0.95\linewidth]{Fig11-2.JPG} \caption{} \label{fig:ex2-2pathadaptive} \end{subfigure} \caption{Crack propagation trajectory in a plate with two holes: a) COMSOL simulation and b) adaptive finite element method \cite{khoei2008modeling}.} \label{fig:ex2-2path} \end{figure} \begin{figure}[!t] \centering\includegraphics[width=0.55\linewidth]{Fig12.jpg} \caption{Final von-Mises stress contour in a plate with two holes and multiple cracks.} \label{fig:ex2-2mises} \end{figure} \subsection{Penny-shaped crack in three-dimensional media} \label{S:4-4 planar3D} The final example is presented to illustrate the extensibility of the proposed XFEM implementation to three-dimensional problems. As depicted in Fig. \ref{fig:ex-4 geometry}, the problem consists of a cube with side length of $1$ m which contains a penny-shaped crack of radius $0.25$ m at its center. The top surface is subjected to a tensile traction of $10$ MPa, and the bottom surface is fixed.
The domain is discretized by a cluster of 50,280 brick elements with an average size of $13$ mm in the vicinity of the crack zone, in conjunction with an additional 100,843 tetrahedral elements in the remainder of the domain. For the sake of simplicity, here the penny-shaped geometry is explicitly introduced in COMSOL by mathematical relations, instead of the more general yet complicated procedure through MATLAB functions (i.e., \say{phi.m} and \say{interpol.m}). For comparison purposes, an FEM analysis of the same problem is performed in which the penny-shaped crack is modeled by a narrow cylindrical void. Fig. \ref{fig:ex4isodurface} illustrates iso-surfaces of the vertical displacement $u_{\text{z}}$ on either side of the crack. It is observed that the XFEM implementation of the strong discontinuity is in perfect agreement with the FEM results. The crack opening displacements associated with both models, along their diameter, are presented in Fig. \ref{fig:ex4-opening3D}. Note that the accuracy of the proposed XFEM results lies within 3\% of the FE analysis. Again, this slight discrepancy primarily pertains to the relatively negligible void thickness in the finite element simulation. Contours of the vertical stress $\sigma_\text{zz}$ for both methods are presented in Fig. \ref{fig:ex4Szz}, which further validates the accuracy of the proposed implementation procedure.
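The \say{mathematical relations} mentioned above correspond to the usual two-level-set description of a planar 3D crack. The sketch below is an illustrative version for a horizontal penny-shaped crack centered in the unit cube of this example (radius $0.25$ m); the actual COMSOL expressions and the \say{phi.m}/\say{interpol.m} functions may differ.

```python
import math

def penny_crack_level_sets(p, center=(0.5, 0.5, 0.5), radius=0.25):
    """Two level sets describing a horizontal penny-shaped crack.

    phi -- signed distance to the crack plane (normal level set)
    psi -- in-plane signed distance to the crack front (tangential level set)
    A point lies on the crack surface when phi == 0 and psi < 0."""
    x, y, z = p
    cx, cy, cz = center
    phi = z - cz                              # distance to the crack plane
    psi = math.hypot(x - cx, y - cy) - radius  # distance to the circular front
    return phi, psi

# a point on the crack plane, inside the crack front
phi, psi = penny_crack_level_sets((0.6, 0.5, 0.5))
```

Elements whose nodal `phi` values change sign while `psi` is negative are the ones flagged for Heaviside enrichment.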
\begin{figure}[!t] \centering\includegraphics[width=0.5\linewidth]{Fig15.pdf} \caption{A cube with penny-shaped crack problem; geometry and boundary conditions.} \label{fig:ex-4 geometry} \end{figure} \begin{figure} \centering \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.95\linewidth]{Fig16-3.jpg} \caption{XFEM} \label{fig:ex4-isodurfaceXFEM} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=0.95\linewidth]{Fig16-4.jpg} \caption{FEM} \label{fig:ex4-isodurfaceFEM} \end{subfigure} \caption{Comparison of iso-surfaces of vertical displacement $u_{\text{z}}$ in a cube with penny-shaped crack problem. } \label{fig:ex4isodurface} \end{figure} \begin{figure} \centering \begin{tikzpicture} \begin{axis}[ xlabel={ ${x}'$ (m)}, ylabel={opening ($\times10^{-5}$mm)}, xmin=0, xmax=0.53, ymin=0.0, ymax=3.5, xtick={0.0,0.1,0.2,0.3,0.4,0.5}, ytick={0.0,0.5,1.0,1.5,2.0,2.5,3.0,3.5}, legend pos=north east, ymajorgrids=false, grid style=dashed, ] \addplot[ color=red, mark=none, ] file[] {openingXFEM3D.dat}; \addlegendentry{XFEM} \addplot[ color=blue, mark=none, ] file[]{openingFEM3D.dat}; \addlegendentry{FEM} \end{axis} \end{tikzpicture} \caption{Opening profile along the diameter of the planar penny-shaped crack; XFEM vs FEM results.} \label{fig:ex4-opening3D} \end{figure} \begin{figure} \centering \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.95\linewidth]{Fig17-1.jpg} \caption{XFEM} \label{fig:ex4xfemstress} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=0.95\linewidth]{Fig17-2.jpg} \caption{FEM} \label{fig:ex4femstress} \end{subfigure} \caption{Comparison of vertical stress $\sigma _{\text{zz}}$ contours in y-z plane in the cube with a penny-shaped crack problem. } \label{fig:ex4Szz} \end{figure} At the end of this example, the above-mentioned three-dimensional crack tool is employed to simulate multiple penny-shaped cracks, as depicted in Fig.
\ref{fig:ex-4 geometrymultiple}. The same problem definition as in the previous case is adopted, except for the presence of six penny-shaped cracks of radius $0.15$ m inside the domain. The cracks are located parallel to the cube faces with $e_{\text{x}}=e_{\text{y}}=e_{\text{z}}=0.15$ m lateral distance from the boundary surfaces. Three faces of the cube are fixed in the normal direction, and the remaining three are subjected to tensile tractions with a magnitude of $10$ MPa. Contours of the displacement field as well as the orthogonal components of the stress field are respectively depicted in Figs. \ref{fig:ex4-umultiple} and \ref{fig:ex4-Smultiple}. The results confirm the flexibility of the proposed framework in handling more complex scenarios in 3D crack analysis problems. \begin{figure}[!t] \centering\includegraphics[width=0.5\linewidth]{Fig22.pdf} \caption{A cube with multiple penny-shaped cracks; geometry and boundary conditions.} \label{fig:ex-4 geometrymultiple} \end{figure} \begin{figure} \centering \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.99\linewidth]{Fig23-1.jpg} \caption{XFEM} \label{fig:ex4-uXFEMmultiple} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=0.99\linewidth]{Fig23-2.jpg} \caption{FEM} \label{fig:ex4-uFEMmultiple} \end{subfigure} \caption{Comparison of the displacement distribution contours for a cube with multiple penny-shaped cracks. } \label{fig:ex4-umultiple} \end{figure} \begin{figure} \centering \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.95\linewidth]{Fig24-1.jpg} \caption{XFEM} \label{fig:ex4-SXFEMmultiple} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=0.95\linewidth]{Fig24-2.jpg} \caption{FEM} \label{fig:ex4-SFEMmultiple} \end{subfigure} \caption{Comparison of the distribution contours of normal components of the stress field for a cube with multiple penny-shaped cracks.
} \label{fig:ex4-Smultiple} \end{figure}
\section{Introduction} The atomic emission lines of elements evaporated from the meteoroid provide insights into the meteoritic elemental composition and the physical conditions during meteor flights in the atmosphere \citep{Borovicka1994c}. The lines and bands of oxygen and nitrogen are a regular part of the meteor spectrum and can have either a meteoroid or an atmospheric origin. Oxygen is abundant in the minerals of stony meteoroids, but most of its radiation usually originates from the atmospheric elements and molecules excited during the meteoroid flight. The meteoritic lines in meteor spectra can have their origin in different parts of the meteor. There are low-temperature lines originating in the meteor head plasma ($\approx 4500$ K) and high-temperature lines ($\approx 10\,000$ K) that originate in the shock wave \citep{Borovicka1994b}. The lines of the meteor wake are naturally part of the spectrum \citep{Halliday1958}. They originate in the low-excitation-energy intercombination region behind the meteor and last for a fraction of a second. The intensity of the spectral lines of different elements can depend on the overall abundances of these elements in the meteoroid (i.e., on the chemical composition). The plasma temperature also affects the intensity of spectral lines and bands due to the different excitation and ionization potentials of meteoritic and atmospheric species. Other factors, such as the meteoroid mass, the altitude, and the optical thickness of the formed plasma, can also affect the shape of the meteor spectrum. Lastly, the velocity of the meteor in the atmosphere significantly changes the spectrum, as the mass and the relative luminosity of the high-temperature component rise with the meteor velocity \citep{Borovicka1994b}. The relative intensity of the atmospheric lines depends on the velocity of the meteor in the atmosphere \citep{Vojacek2015, Segon2018} and on other parameters, such as the altitude. The atmospheric lines are part of every meteor spectrum.
In slow meteors, the excitation of atmospheric elements can be low, and the brightness of the atmospheric lines is often below the observation limit. On the other hand, in the spectra of some small and fast meteors, only the atmospheric lines have been detected \citep{Vojacek2015}. The excited atmospheric atoms can also interact with elements of the meteoritic material and form metallic oxides. Iron oxide is then observed in the spectrum in the form of molecular bands. These bands can be detected in the spectra of very slow meteoroids with a high content of iron \citep{Vojacek2020}. Current lightning detection instruments on weather satellites observe in a narrow spectral band centered on the oxygen triplet at $777$ nm and can continuously detect bright bolides in the atmosphere as a byproduct of their original purpose. These instruments can constantly observe a significant part of the Earth's globe. Observations from the Geostationary Lightning Mapper (GLM) are now filtered for fireball events and are regularly used in meteor research. Due to the observation in a narrow spectral band, calibrating the lightning detector data is not trivial. \cite{Jenniskens2018} successfully calibrated the radiation of slow fireballs observed by GLM assuming continuum radiation at $777$ nm. \cite{Brown2019} calibrated the GLM data of the Hamburg 2018 fireball using the results of \cite{Jenniskens2018}, whereby the threshold signal for GLM corresponds to an absolute magnitude of $-14$. \cite{Vida2022} derived a simple calibration for fast fireballs by comparing the GLM observation and ground observations of a fireball over Alberta in 2020. A better understanding of the behavior of oxygen lines in meteor spectra is thus more important than ever before. However, considering all the aforementioned effects influencing the oxygen brightness, establishing fireball properties using only a very narrow spectral window can be challenging.
In this work, we used precise spectral measurements to calibrate narrowband spectral satellite observations of bolides in the full range of their velocities. All the optical data in this work were obtained by the newly modernized Czech part of the European Fireball Network (EN) \citep{Spurny2017}. This network consists of a battery of meteor observing instruments. The main instrument is the Digital Automated Fireball Observatory (DAFO). As of 2021, there were $20$ DAFOs placed in the Czech Republic (15), Slovakia (4), and Austria (1). The Spectral Digital Automated Fireball Observatories (SDAFO) are a new part of the European Fireball Network \citep{Borovicka2019IMC}. The first SDAFO was installed in 2015, and in 2021, eight cameras were running in the Czech Republic, one in Slovakia, and one in Germany. SDAFOs are located at the stations next to the DAFOs, with the exception of the SDAFO at the Tautenburg Observatory, which is the only DAFO-type instrument in Germany. Although we focus mainly on the intensity of the oxygen line in this work, as this is the only region observed by GLM, we also analyzed two other spectral regions in the spectra obtained by SDAFO. The spectral regions dominated by magnesium and sodium were measured, and the results were compared to other works as a test of our calibration methods. Most meteor spectra research projects, such as those of the All-Sky Meteor Orbit System (AMOS, \cite{Matlovic2019}) and the Canary Island Long-Baseline Observatory (CILBO, \cite{RUDAWSKA2020}), as well as the observations of \cite{Abe2020}, are focused on middle-sized meteoroids and their spectra; bright fireballs are usually saturated in these systems. As a result, their observations are not well suited for the calibration of bright fireballs.
\section{Observations of fireballs using optical and lightning detection instruments} \subsection{Satellite observations of fireballs using lightning detection instruments} To adequately observe lightning on the daylight hemisphere, the weather satellites on geostationary orbits use a very narrow spectral band centered on the near-infrared oxygen line at $777.4$ nm. The Geostationary Lightning Mappers on board GOES-16 and GOES-17 carry out observations in a $1.1$ nm wide spectral band with a time resolution of $2$ ms and a spatial resolution of about $10$ km \citep{GOODMAN2013}. The GOES-16 satellite was initially placed in a non-operational test position at $89.5^{\circ}$W. In December 2017, the satellite was moved to its operational, geostationary position at $75.2^{\circ}$W. The GOES-17 satellite was launched on March 1, 2018, orbits at $137.2^{\circ}$W longitude, and has been in full operation since $2019$. The GLM detectors are equipped with $134$ mm lenses with a field of view of $\pm8^{\circ}$. They cover the globe roughly from $16^{\circ}$ W to $165^{\circ}$ W longitude and from $55^{\circ}$ N to $55^{\circ}$ S latitude, including both the Americas and most of the Atlantic and Pacific Oceans \citep{Edgington2019}. These observations are freely accessible, and they can be filtered for events lasting longer than lightning flashes. These potential fireball data are published by NASA \footnote{ \url{https://neo-bolide.ndc.nasa.gov/} \label{footnote_1} }. According to \cite{Jenniskens2018}, the GLM detectors can detect fireballs of absolute magnitude $\approx-14$ and brighter. This corresponds to sizes from several decimeters to bodies that are several meters in length. To block the sunlight, the GLM sensor uses a sun-blocking filter on top of the narrow $1.1$ nm filter, which changes the observed bandwidth and also the center wavelength according to the angle of incidence.
The correction to the flux is less than $20 \%$ for observations less than $7^{\circ}$ from the nadir, and it increases with increasing angle from the nadir. The Chinese satellite of the Fengyun series, FY-4A, launched in December 2016, is China's second-generation geostationary meteorological satellite. It carries the Lightning Mapping Imager (LMI), whose parameters are the same as those of the GLM detector: a $1.1$ nm narrow band centered at $777.4$ nm and a time resolution of $2$ ms. The satellite orbits at $86.5^{\circ}$E longitude, and the LMI covers the surface of China \citep{Cao2021}. The data are provided by the National Satellite Meteorological Center, China Meteorological Administration. The Lightning Imager (LI) on EUMETSAT's (European Organisation for the Exploitation of Meteorological Satellites) Meteosat Third Generation (MTG-I) satellite will provide real-time data on the location and intensity of lightning, covering Europe and Africa. It will observe at $777.4$ nm in a $1.9$ nm wide band with a time resolution of $1$ ms. The first MTG-I satellite is planned to be launched in the autumn of 2022 \citep{Holmlund2021}. The narrowband observation required for lightning detection on the daylight side makes these instruments challenging to use for meteor observations. To estimate parameters such as the absolute magnitude, some assumptions and simplifications are inevitable, which can lead to uncertainties and discrepancies. This work compares the meteor radiation at $777.4$ nm with other spectral regions and with the radiation of the spectrum as a whole. Because the velocity of a meteoroid in the atmosphere can significantly affect the intensity of the oxygen line \citep{Vojacek2015}, a representative sample of meteor velocities was used. Fireball observations from the European Fireball Network were used in the current analysis.
These results can be useful for interpreting satellite observations of fireballs from the lightning imaging instruments that observe in this narrowband spectral region. \subsection{Optical observations using the European Fireball Network} The spectral camera is a modification of the normal non-spectral cameras (DAFO). Two Canon 6D digital cameras were used. The IR cut filter was removed to allow observation of a broader spectral range. The cameras are equipped with $15$ mm lenses, which gives them almost all-sky coverage. Exposures of $30$ s are taken throughout the night. For spectroscopy, there is a spectral grating with $1000$ grooves/mm in front of the lens. The system covers the spectral range between approximately $360$ nm and $950$ nm (see Figure \ref{Fig:sensitivity}). The sensitivity curve in Figure \ref{Fig:sensitivity} was obtained using laboratory measurements and the spectrum of solar light reflected by the Moon and Venus. The spectral resolution of about $1.2$ nm lies between that of low-resolution video spectra and high-resolution analog film spectra. The system can produce non-saturated spectra for fireballs between magnitudes of $-6$ and $-15$. \begin{figure}[ht]\centering { \includegraphics[width=\hsize]{figures/sensitivity_6D.pdf}} \caption{Spectral sensitivity of the system with the atmospheric absorption included. The curve was normalized at the maximum ($538$ nm).} \label{Fig:sensitivity} \end{figure} \section{Data reduction} \subsection{Observations and calibrations of optical spectra} The spectra in this work were observed by the EN between December 2015 and April 2021. We selected $43$ meteors with representative speeds to cover the whole range of entry velocities. The absolute brightness of these selected meteors was in the range between $-8$ and $-15$ mag. Since multiple spectral cameras are in operation, some spectra were captured with multiple cameras from different observatory sites.
In these cases, we chose the spectrum with the best combination of brightness and resolution for further measurements. Typical spectra from SDAFOs can be seen in Figure \ref{Fig:examplePic}. In this figure, we selected three spectra with slow, medium, and high entry velocities. In particular, we note the difference in the oxygen intensity. The names of the fireballs given in Figure \ref{Fig:examplePic} are in the date-time format SSS\_YYYY--MM--DD\_HHMM, where SSS is the EN station number of the SDAFO camera, YYYY is the year, MM and DD are the month and the day, respectively, and HHMM is the time in UT of the start of the $30$ s exposure. Observations from the non-spectral cameras were used to determine the trajectory in the atmosphere for all $43$ meteors. The important parameter for this work is the velocity of the meteoroid in the atmosphere. For further analysis, we used the average velocity on the trajectory. We note that the velocity of the meteoroid at the altitude where the spectrum is captured is well represented by this average velocity. The images from the SDAFOs were processed by our self-developed software. All RAW RGB images were converted into grayscale images using the weighted method $I = 0.299R + 0.587G + 0.114B$. The images were dark-frame-subtracted and flat-fielded, and star photometry was also performed for each image. The background image created from the previous photograph, taken just $30$ seconds before, was subtracted to remove stars and sky radiation from the image. For long meteors, the spectrum was scanned only in its brightest part. For short meteors, the entire length of the spectrum was scanned. The distortion of the $15$ mm lens was taken into account when scanning the curved spectrum. The photometry of stars was used to calculate the energy emitted by the spectrum. All spectra were calibrated for wavelength using the known wavelengths of lines identified in the spectrum.
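The weighted grayscale conversion used in the image processing above can be sketched as follows (a minimal NumPy illustration; the function and array names are hypothetical and not taken from the actual reduction software):

```python
import numpy as np

# Weights of the luminosity method used to convert RAW RGB frames
# to grayscale: I = 0.299*R + 0.587*G + 0.114*B
RGB_WEIGHTS = np.array([0.299, 0.587, 0.114])

def rgb_to_gray(rgb_image):
    """Convert an (H, W, 3) RGB array to an (H, W) grayscale array."""
    return rgb_image @ RGB_WEIGHTS
```

The weights sum to one, so a white pixel (1, 1, 1) maps to a grayscale value of exactly 1.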
All spectra were then calibrated for the spectral sensitivity of our system using the normalized curve of spectral sensitivity. Examples of three calibrated spectra are shown in Figure \ref{Fig:exampleSpec}. These three spectra are the same as those shown in Figure \ref{Fig:examplePic}. The names of the calibrated spectra are given in the format SPYYYYMMDD\_HHMM, where SP indicates the spectrum. \begin{figure}[ht]\centering { \includegraphics[width=\hsize]{figures/compoAlignedVelocityDirection.jpg}} \caption{Examples of images of spectra of meteors from low to high velocity. The arrows indicate the direction of the meteor flight. Individual spectra were cropped from the original images. The spectral lines in the images were aligned in the same direction and scaled for better comparison.} \label{Fig:examplePic} \end{figure} \begin{figure}[ht]\centering { \includegraphics[width=\hsize]{figures/spectra_examples_popisky.pdf}} \caption{Examples of calibrated meteor spectra from Figure \ref{Fig:examplePic}. The prominent lines are marked. $v_{avg}$ is the average velocity on the trajectory and mag is the fireball magnitude computed from the DAFO photometry.} \label{Fig:exampleSpec} \end{figure} \subsection{Measuring the energy radiated in spectrum} \label{section:energyMeasure} Knowing the distance of the meteor from the observatory that captured the given spectrum, we measured the intensity in units of energy radiated by the given meteor per unit wavelength, i.e., the spectral intensity $I_{e,\Omega, \lambda}$ in W.ster$^{-1}$.nm$^{-1}$. As the main region of interest, we measured the region at $777$ nm, where the oxygen line triplet O I -- 1 can be observed. To compare the oxygen line region with meteoritic lines, we also measured the spectra at $517$ nm, where the lines of magnesium Mg I -- 2 are observed (different multiplets of iron also overlap here), and the region at $589$ nm, where the sodium doublet Na I -- 1 dominates. These regions are marked in Figure \ref{Fig:exampleSpec}.
\begin{figure}[htb] \centering \begin{subfigure}{0.5\textwidth} \includegraphics[width=\hsize]{figures/line_measure2.pdf} \caption{ } \label{Fig:LineMeasure_b} \end{subfigure} \begin{subfigure}{0.5\textwidth} \includegraphics[width=\hsize]{figures/line_measure1.pdf} \caption{} \label{Fig:LineMeasure_a} \end{subfigure} \caption{Explanation of the method of measurement of the radiant intensity. (a) Measurement of the background as an estimate of the local continuum and measurement of the spectral line as an integral. (b) The case when only noise is observed. } \label{Fig:LineMeasure} \end{figure} To measure the energy radiated by each region in the EN meteor spectra, we simulated a narrowband filter observation. We measured the radiated energy in two steps. First, we measured the level of the background radiation in kW/nm/ster. This measured spectral intensity $I_{e,\Omega, \lambda}$ was then simply multiplied by the $1.1$ nm spectral range of the narrowband filter to simulate the lightning imaging instrument measurement, and we obtained an estimate of the radiant intensity $I_{e,\Omega}$ of the given $1.1$ nm spectral band. Then we measured the radiation from the spectral line. The observed lines were instrumentally broadened due to the low spectral resolution of our cameras and were thus wider than $1.1$ nm. Therefore, we integrated the whole broadened spectral line, since the broadening happened in the camera and a camera with high spectral resolution would measure the same radiant intensity. An illustration of the measured line integral and the level of the background can be seen in Figure \ref{Fig:LineMeasure_b}. The level of the continuum and the integrated line intensity were then summed to obtain the radiation in the given region.
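The two-step region measurement (continuum level scaled to the $1.1$ nm band, plus the integrated broadened line) can be sketched as follows. This is an illustrative sketch with hypothetical names; a uniform wavelength grid is assumed, and the line is integrated above the local continuum:

```python
import numpy as np

BAND_WIDTH_NM = 1.1  # width of the simulated narrowband filter

def region_radiant_intensity(wavelength, spectral_intensity,
                             line_lo, line_hi, continuum_level):
    """Estimate the radiant intensity of one spectral region.

    wavelength         : 1D array of wavelengths in nm (uniform grid)
    spectral_intensity : 1D array in kW/nm/ster
    line_lo, line_hi   : limits of the (instrumentally broadened) line, nm
    continuum_level    : background level near the line, kW/nm/ster
    """
    step = wavelength[1] - wavelength[0]
    mask = (wavelength >= line_lo) & (wavelength <= line_hi)
    # Integrate the whole broadened line above the local continuum.
    line_integral = np.sum(spectral_intensity[mask] - continuum_level) * step
    # Continuum contribution scaled to the 1.1 nm filter band.
    return max(line_integral, 0.0) + continuum_level * BAND_WIDTH_NM
```

When the region contains no line, the line integral vanishes and the function reduces to the continuum-only case of the text.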
If there was only a continuum and no visible spectral line in the given region, we only measured the level of the continuum in the same manner as the level of the background: measuring the level of radiation in kW/nm/ster and multiplying it by the $1.1$ nm spectral band (see Figure \ref{Fig:LineMeasure_a}). These cases are marked with crosses in Figure \ref{Fig:Energy}. In the case when no radiation was observed and only noise was detected, we measured the level of the noise as the upper value of the noise in the given spectral region (see Figure \ref{Fig:LineMeasure_a}). In other words, it is the upper limit of the possible radiation that the SDAFO was able to detect. Measurements of the noise are shown as triangles in the subsequent figures. For further analysis, we measured the energy emitted in the spectral range between $380$ nm and $850$ nm (the limits were chosen mainly due to the low sensitivity below and above them); hereafter, this energy is marked as $I_{total}$. We also simulated measurements in the standard UBV system using the filter passbands of \cite{Bessell1990}. Specifically, we measured the absolute magnitude in the V filter (between $\approx 480$ nm and $\approx 650$ nm) and the B filter (between $\approx 380$ nm and $\approx 550$ nm). To estimate the uncertainty of the energy measurement, we manually estimated the level of the noise near the measured region. This value was then used to estimate the uncertainty of the integrated energy radiated in a given region. \section{Results} \subsection{Oxygen at $777$ nm} The key analysis of this work concerns the behavior of the oxygen line O I at $777$ nm. We computed the intensity of the given region relative to the total radiant intensity $I_{total}$. We then plotted this relative intensity as a function of the average velocity of the meteor in the atmosphere. The results can be seen in Figure \ref{Fig:Energy}.
The dependence of the intensities of atmospheric lines in meteor spectra on velocity is well known \citep{Millman&Halliday1961, Vojacek2015}. As expected, we observed this dependence in our data (see Figure \ref{Fig:Energy}). In seven cases, we detected only a continuum. In four cases, there was only noise without any detectable radiation. In Figure \ref{Fig:Energy}, we show the meteors colored according to the absolute magnitude in the V filter computed from their spectra. \begin{figure}[htb] \centering \includegraphics[width=\hsize]{figures/EnergyOLog-velocity_noSecOrder_MC-fit_Uncertainty.pdf} \caption{Radiation at O I -- 1 triplet region (777 nm). Relative radiant intensities as a function of meteor velocity. Symbol colors mark the meteor absolute magnitude. Symbol shapes mark the presence or absence of the oxygen line.} \label{Fig:Energy} \end{figure} \begin{figure}[htb] \centering { \includegraphics[width=\hsize]{figures/EnergyOLog-velocity_noSecOrderFlare.pdf}} \caption{Radiation at O I -- 1 triplet region (777 nm). Relative radiant intensities as a function of meteor velocity. Symbol colors mark meteors with flares or fluctuations and showing fits for meteors with or without flare. Symbol shapes mark the presence or absence of the oxygen line.} \label{Fig:O_Flare} \end{figure} The least-squares fit of all meteors, except those with only noise detected, gives us the dependence of the ratio between the radiation at $777$ nm and the radiation from the whole observed spectrum, $I_{777}/I_{total}$, on the velocity $v$ in km/s as follows: \begin{equation} \label{eqnO_vel} \log(I_{777}/I_{total})=0.026(\pm0.001) \times v - 3.294(\pm0.077) .\end{equation} To obtain an uncertainty estimate for the least-squares fits, we used a Monte Carlo approach and generated $10\,000$ clones of each meteor point with a normal distribution. The measurement error of the given point was used as the standard deviation of the normal distribution.
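This clone-and-fit procedure can be sketched as follows (an illustrative Python sketch, not the actual analysis code; a linear model $\log(I_{777}/I_{total}) = a\,v + b$ is assumed):

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_fit_uncertainty(v, log_ratio, sigma, n_clones=10000):
    """Clone each measured point with a normal distribution whose
    standard deviation equals its measurement error, fit every clone
    set with linear least squares, and return the mean and standard
    deviation of the fitted (slope, intercept)."""
    params = np.empty((n_clones, 2))
    for i in range(n_clones):
        clones = rng.normal(log_ratio, sigma)
        params[i] = np.polyfit(v, clones, 1)  # [slope, intercept]
    return params.mean(axis=0), params.std(axis=0)
```

The spread of the fitted parameters over all clone sets directly provides the quoted uncertainties of the slope and the intercept.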
The obtained sets of Monte Carlo clones were then fitted using the least-squares method. The standard deviation of the parameters of all least-squares fits was then used as the uncertainty of the slope and the y-intercept of the final fit. We note that meteors faster than $\approx 40$ km/s with a flare on the light curve show lower relative radiation at $777$ nm. This is displayed in Figure \ref{Fig:O_Flare}, where meteors with and without a flare in their light curves are fitted separately. A possible explanation is that a large amount of meteoric material is released during the flare, and thus the meteoric lines brighten more than the atmospheric lines. In Figure \ref{Fig:O_Flare}, we also marked meteors that showed fluctuations in their light curves. The light curves of these meteors showed periodic brightness changes that were not as intense as flares; they occurred only in very slow meteors. For slow meteors, we did not observe any significant difference in the relative radiation at $777$ nm between meteors with or without flares or fluctuations. \subsection{Estimating GLM detector response to meteors}\label{Caption_EstimateGLMtoMeteors} With the known dependence of the radiation at $777$ nm on velocity, it is possible to estimate the absolute magnitude of a meteor observed by the GLM detectors if the meteor velocity is known. From the GLM data, the radiant intensity $I_{777}$ at $777$ nm in W.ster$^{-1}$ can be computed. We measured the same quantity for the sample of EN fireballs, and we also computed the meteor V-band magnitude, $m_V$ (see Section \ref{section:energyMeasure}). In the simplest case, the relation would be expressed as: \begin{equation} \label{eqnMAGsimple} m_V = -2.5 \times \log_{10}(I_{777}) + b ,\end{equation} where $b$ is a constant.
However, since we know that $I_{777}$ depends strongly on velocity, we can expect the following dependency: \begin{equation} \label{eqnMAG_first} m_V = -2.5 \times \log_{10}(I_{777}) + a \times v + b .\end{equation} To find the constants $a$ and $b$, we computed the sum of the absolute magnitude of the meteor, $m_V$, and the measured radiation, $I_{777}$, at $777$ nm as $m_V + 2.5 \times \log_{10}(I_{777})$. The dependence of this quantity on the velocity is shown in Figure \ref{Fig:O_Mag}. Using a least-squares fit of the data, we obtained the parameters $a$ and $b$ as follows: \begin{equation} \label{eqnMAG} m_V = - 2.5 \times \log_{10}(I_{777}) + 0.0948(\pm0.002) \times v - 3.45(\pm0.1), \end{equation} where $v$ is in km.s$^{-1}$. To be able to use Eq. (\ref{eqnMAG}) for the GLM data, we need to convert the energy measured by the GLM detectors, $E_{GLM}$, reported in joules, to the radiant intensity $I_{777}$ emitted per unit solid angle in W.ster$^{-1}$. The light curves of fireballs are reported by NASA as energy measured directly at the satellite's detector. On the other hand, the radiant intensity $I_{777}$ is computed at the source. The conversion must then be expressed as: \begin{equation} \label{eqnE_GLM} I_{777} = \frac{E_{GLM} \times R^2 } {\Delta t \times A}, \end{equation} where $\Delta t$ is the exposure time of the GLM detector, $0.002$ s, $A$ is the effective lens aperture, $0.0098$ m$^2$ \citep{Jenniskens2018}, and $R$ is the distance in meters between the fireball and the GOES satellite. The distance $R$ can be estimated for any geostationary GOES satellite. We know the positions of the geostationary GOES satellites, and the longitude and latitude of the fireball are provided along with the measured energy on the NASA website. The provided coordinates assume a ``lightning ellipsoid'' with an imaginary surface at $16$ km altitude at the equator and $6$ km at the poles \citep{Jenniskens2018}.
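Combining Eqs. (\ref{eqnMAG}) and (\ref{eqnE_GLM}), the conversion of a single GLM energy sample to an absolute V magnitude can be sketched as follows (a simplified illustration of the procedure, not the published script):

```python
import math

DELTA_T = 0.002    # GLM integration time, s
APERTURE = 0.0098  # effective lens aperture, m^2

def glm_magnitude(energy_joules, distance_m, velocity_kms):
    """Absolute V magnitude from one GLM energy sample:
    I_777 = E_GLM * R^2 / (dt * A)
    m_V   = -2.5*log10(I_777) + 0.0948*v - 3.45
    """
    i777 = energy_joules * distance_m**2 / (DELTA_T * APERTURE)
    return -2.5 * math.log10(i777) + 0.0948 * velocity_kms - 3.45
```

Applied sample by sample to the reported energies, this yields the GLM light curve in absolute magnitudes; a tenfold increase in energy brightens the estimate by exactly $2.5$ mag.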
The actual fireball altitude is, of course, several times higher, which creates a noticeable parallax and adds some uncertainty to the magnitude estimation; this uncertainty is, however, small in comparison with the other uncertainties. \begin{figure}[htb] \centering { \includegraphics[width=\hsize]{figures/Mag_Iof_v_itV3_colorbar_MagFit.pdf}} \caption{Magnitude in the V filter and radiation at $777$ nm compared to the velocity of meteoroids in the atmosphere. The least-squares fit of all meteors and the Monte Carlo uncertainty of the fit are shown.} \label{Fig:O_Mag} \end{figure} \subsection{Sodium and magnesium}\label{NaMg} The spectral regions of sodium at $589$ nm and magnesium at $517$ nm contain relatively well-studied spectral lines for meteors. To test our spectral measurements, we analyzed them and compared the results with the known behavior of the lines in these regions and with previous works. Figure \ref{Fig:EnergyNaMg} shows the radiation in the regions at $589$ nm and $517$ nm, where sodium and magnesium, respectively, dominate, relative to the broad spectral radiation $I_{total}$. Three least-squares fits for three brightness bins (meteors brighter than $-12$ mag, meteors with a brightness between $-12$ mag and $-10$ mag, and meteors weaker than $-10$ mag) are also shown. For magnesium, only meteors faster than $25$ km/s were used for the brightness bin fits. In Figure \ref{Fig:NaMg}, the ratio of the intensities in the magnesium and sodium regions is shown and compared to the previous findings of \cite{borovicka2005} and \cite{Matlovic2019}.
\begin{figure}[htb] \centering \begin{subfigure}{0.5\textwidth} \includegraphics[width=\hsize]{figures/plot_EnergyNaNoSecondOrder-MagLOG.pdf} \caption{} \label{Fig:Energy_Na} \end{subfigure} \begin{subfigure}{0.5\textwidth} \includegraphics[width=\hsize]{figures/plot_EnergyMgNoSecondOrder-MagFItLOG.pdf} \caption{} \label{Fig:Energy_Mg} \end{subfigure} \caption{Relative radiant intensities in the regions of magnesium and sodium as a function of meteor velocity. Symbol colors mark the meteor's absolute magnitude. Fits for three different magnitude intervals are shown. Figure \ref{Fig:Energy_Na}: Radiation at the Na I -- 1 region (589 nm). Figure \ref{Fig:Energy_Mg}: Radiation at the Mg I -- 2 region (517 nm).} \label{Fig:EnergyNaMg} \end{figure} \begin{figure}[ht]\centering { \includegraphics[width=\hsize]{figures/Na-Mg-mag_PS.pdf}} \caption{Dependence of the ratio of the radiant intensity at $589$ nm (Na I -- 1) and the radiant intensity at $517$ nm (Mg I -- 2) on the velocity. All meteors are marked according to their absolute magnitudes. Monte Carlo fits of meteors slower than $25$ km/s (red dashed line) and faster than $25$ km/s (blue dashed line) are shown. Uncertainties of these fits are shown as red and blue colored regions. } \label{Fig:NaMg} \end{figure} \section{Application of empirical parameters on real GLM data and comparison with known fireballs} To test the derived parameters in Eq. (\ref{eqnMAG}), we chose several fireball events observed both by the GLM sensor and by ground-based cameras. Since our European Fireball Network does not overlap with the coverage of the GOES satellites, we had to use other available ground-based observations. We developed a Python script that directly calculates the absolute magnitude from the GLM data using the method described in Section \ref{Caption_EstimateGLMtoMeteors}. The GLM detector parameters are already included in the script.
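As an illustration of the geometry entering Eq. (\ref{eqnE_GLM}), the fireball-to-satellite distance $R$ can be estimated as follows; a spherical Earth of radius $6371$ km and a geostationary orbital radius of $42164$ km are assumed here for simplicity:

```python
import math

R_EARTH_M = 6371e3  # mean Earth radius (spherical approximation)
R_GEO_M = 42164e3   # geostationary orbital radius from Earth's center

def fireball_satellite_distance(lat_deg, lon_deg, alt_m, sat_lon_deg):
    """Straight-line distance (m) between a fireball at a given
    latitude, longitude, and altitude and a geostationary satellite
    at longitude sat_lon_deg in the equatorial plane."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    r = R_EARTH_M + alt_m
    fireball = (r * math.cos(lat) * math.cos(lon),
                r * math.cos(lat) * math.sin(lon),
                r * math.sin(lat))
    sat_lon = math.radians(sat_lon_deg)
    satellite = (R_GEO_M * math.cos(sat_lon),
                 R_GEO_M * math.sin(sat_lon),
                 0.0)
    return math.dist(fireball, satellite)
```

For a fireball directly below the satellite, the distance reduces to the difference of the two radii, about $35\,793$ km; it grows as the fireball moves away from the sub-satellite point.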
To calculate the distance between a fireball and a geostationary satellite, it is necessary to know the latitude and the longitude of the fireball. If the altitude is known, it can be added to the script. If the event was recorded simultaneously by GLM-16 and GLM-17, NASA's website provides a corrected approximate altitude calculated from this stereo observation. Alternatively, some typical fireball altitude can be used, because the altitude is negligible compared to the total distance between the fireball and the geostationary satellite. The longitude and latitude of the meteor are provided along with the measured energy on the NASA website in a CSV file. The script does not take into account the parallax of the lightning ellipsoid and uses the longitude and latitude as they are. As a result, a figure and an output table file with the light curve are created. If necessary, the light curve can be corrected for frame gaps caused by data overflow using interpolation. The script is available on the GitHub server \footnote{ \url{https://github.com/vojacekasu/GLM_Fireball_Magnitude} \label{footnote_2} }. \subsection{Comparison with ground-based observations}\label{FireballsCompare} \subsubsection{British Columbia fireball, September 5, 2017} The British Columbia fireball and meteorite fall that occurred on September 5, 2017, at Crawford Bay in British Columbia, Canada, is a well-documented event. It was recorded by many observers, ground-based video cameras, and United States Government (USG) sensors \citep{Hildebrand2018}. The GLM sensors detected the four brightest flares. The magnitudes in the peaks derived from the ground-based videos were in the range from $\approx -15$ to $-18$. We estimated the absolute magnitude using Equations (\ref{eqnMAG}) and (\ref{eqnE_GLM}). The values for the velocity ($\approx 16.5$ km/s) and the altitude ($\approx 35$ km) were used as reported in \cite{Hildebrand2018}.
The estimated light curve, with a peak magnitude of $\approx -20.75$, can be seen in Figure \ref{Fig:GLM_British}. Our result is in good agreement with the light curve of \cite{Jenniskens2018}, obtained also from the GLM using their calibration and assuming radiation from the oxygen triplet at $777$ nm (gray color in Figure \ref{Fig:GLM_British}). With that assumption, the light curve was corrected for the Sun-blocking filter (the bandpass does not cover the O I triplet completely), and the peak magnitude was $\approx -20.5$. The difference was mostly less than $1$ mag. The blue-colored light curve in Figure \ref{Fig:GLM_British} is also from \cite{Jenniskens2018}, but without the correction for the Sun-blocking filter, that is, assuming only continuum radiation at $777$ nm. This light curve differs a bit more from our estimate, namely, by more than one magnitude in brightness. The peak magnitude computed in this case was $\approx -19.5$. The difference between the GLM light curves and the ground-based observations is assumed to be due to saturation in the video data. \begin{figure*} \sidecaption \includegraphics[width = 12cm]{figures/GLM_LC_British_Columbia.pdf} \caption{British Columbia fireball, with the GLM-computed light curves shown as: red (this work), gray (\cite{Jenniskens2018} radiation of oxygen lines at $777$ nm), and blue (\cite{Jenniskens2018} only blackbody radiation at $777$ nm). The black, green, and pink are light curves from the ground-based video observations in terms of visual magnitude (source \cite{Jenniskens2018}).} \label{Fig:GLM_British} \end{figure*} \subsubsection{Arizona fireball, November 15, 2017} The Arizona fireball was observed on November 15, 2017. It entered the atmosphere with a velocity of $26$ km/s. It was recorded by the SkySentinel video network and by LO-CAMS, the Lowell part of the Cameras for Allsky Meteor Surveillance (CAMS) network. The peak magnitude was between $-16$ and $-17$ (for details, see \cite{Jenniskens2018}).
Only two terminal flares were recorded by GLM. The time resolution of the video observations is low compared to GLM. Our light curve estimate is about one magnitude fainter than the light curve from the video observations, with a peak magnitude of $-16$. A comparison is shown in Figure \ref{Fig:GLM_Arizona}. \begin{figure*}[htb] \sidecaption { \includegraphics[width=12cm]{figures/Arizona_compare_V2_Merge.pdf}} \caption{Light curve of the Arizona fireball: comparison of the estimate from our method, the GLM light curve from \cite{Jenniskens2018}, and observations from LO-CAMS and SkySentinel (source: \cite{Jenniskens2018}). The lower part of the figure shows a detail of the flare part of the light curve. } \label{Fig:GLM_Arizona} \end{figure*} \subsubsection{Hamburg meteorite fall, January 17, 2018} The Hamburg fireball from January 17, 2018, resulted in a meteorite fall in the area of Ann Arbor, Michigan. The fireball was observed by several security video cameras from the ground, and the GLM--16 satellite recorded the two brightest peaks. The initial velocity was $15.83 \pm 0.05$ km/s and the two main flares occurred at altitudes of $24.1$ km and $21$ km \citep{Brown2019}. The light curve from the ground-based observations presented in \cite{Brown2019} is shown in Figure \ref{Fig:GLM_Hamburg}. In \cite{Brown2019}, the observed spectral energy density from GLM was converted into the visual absolute magnitude using assumptions from \cite{Jenniskens2018}: the limiting sensitivity of GLM was taken to correspond to a peak visual absolute magnitude near $-14$, and this limiting magnitude was connected with the floor of the observed spectral energy density. This GLM-converted light curve is also shown in Figure \ref{Fig:GLM_Hamburg}. The agreement between these two light curves is very good. We compared these light curves with the light curve of the two brightest peaks calculated from GLM data using our method.
In this case, the GLM estimate is about two orders of magnitude brighter than the video observations, as well as the GLM-converted magnitude in \cite{Brown2019}. This brightest part of the light curve reconstructed in \cite{Brown2019} from the scattered light was calibrated to unsaturated parts of the light curve computed from direct fireball measurements in the video. Still, our GLM-derived light curve suggests that the peak magnitude was underestimated by \cite{Brown2019}. \begin{figure}[htb] \centering { \includegraphics[width=\hsize]{figures/HamburgCorrAlt.pdf}} \caption{Light curve of the Hamburg fireball, computed from GLM data (red points) and video observations (black points). Time t=33s corresponds to Jan 17, 01:08:33 UT. } \label{Fig:GLM_Hamburg} \end{figure} \subsubsection{Alberta event, February 22, 2021} The Alberta fireball occurred on February 22, 2021. It had a high velocity of $62.1$ km/s. It is the first directly observed decimeter-sized rocky meteoroid on a long-period comet orbit \citep{Vida2022}. Using a manual calibration of ground-based observations and GLM observations of three fast bolides, the empirical equation between the absolute magnitude, $m,$ and the energy, $E,$ in femtojoules observed by GLM, \begin{math}m = -9.2 - 2.5\log_{10}(E)\end{math}, was derived in \cite{Vida2022}. This equation can be applied only to fireballs with velocities around $60$ km/s. Using Equations (\ref{eqnMAG}) and (\ref{eqnE_GLM}), and substituting the energy $E$ in femtojoules, we get the following relation: \begin{math}m = -9.8 - 2.5\log_{10}(E)\end{math}. This gives a difference of $0.6$ magnitude between our calibration and that of \cite{Vida2022}, which we consider to be a reasonable agreement. \subsection{Comparison with USG sensors} To test our calibration of GLM data, we also used observations from space-based US Government (USG) sensors published by NASA \footnote{\url{https://cneos.jpl.nasa.gov/fireballs/}}. We compared $27$ bolides for which both GLM and USG observations are available.
Velocities from $11$ km/s to $42$ km/s were reported for them by NASA. From the GLM narrowband energy radiated at $777$ nm, we computed the broadband energy radiated from the fireball in Joules using Eq. (\ref{eqnE_GLM}), integrating the reported light curve, and computing the emission $I_{total}$ using Eq. (\ref{eqnO_vel}). When necessary, we corrected the light curve for gaps due to the overflow of the lightning detector. Although GLM can observe only the brightest part of the bolide in most cases, we assume that most of the energy is emitted in these bright parts, but some underestimation may be present, especially for meteors at the threshold detection level. The energy reported by USG is the energy radiated over the whole spectral range, assuming blackbody radiation at a temperature of $\approx 6000$ K. The radiation $I_{total}$ computed from the radiation at $777$ nm using Eq. (\ref{eqnO_vel}) is in fact only the radiation in the range of $380$ nm -- $850$ nm. To compare these two quantities, we divided the USG energy by a factor of $1.85,$ since the radiation in the whole spectral range of the black body is $1.85 \times$ the radiation of the black body in the spectral range between $380$ nm and $850$ nm. We can see the result of this comparison in Figure \ref{Fig:USGa}. Some fireballs were detected by both GOES satellites. We computed the radiation for both detections. These points are connected by a green line. We also computed the radiated energy from GLM data assuming a $6000$ K blackbody spectrum. Following \cite{Jenniskens2018}, the GLM reported energy was multiplied by a factor of $1018$ to obtain the energy in the whole spectral range of the black body. Then it was also divided by the factor of $1.85$ to obtain the radiation of the black body in the spectral range $380$ nm -- $850$ nm. These points are marked in gray in Figure \ref{Fig:USGa}.
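The factor of $1.85$ can be checked by directly integrating Planck's law. The short Python sketch below is our illustration (it is not the script used for the conversion in this work): it numerically computes the fraction of the total radiance of a $6000$ K black body that falls between $380$ nm and $850$ nm. The resulting factor depends slightly on the adopted temperature and band edges and comes out close to the value of $1.85$ used here.

```python
import math

H = 6.62607015e-34       # Planck constant [J s]
C = 2.99792458e8         # speed of light [m/s]
KB = 1.380649e-23        # Boltzmann constant [J/K]
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant [W m^-2 K^-4]

def planck(lam, temp):
    """Spectral radiance B_lambda [W m^-3 sr^-1] at wavelength lam [m]."""
    return (2.0 * H * C**2 / lam**5) / math.expm1(H * C / (lam * KB * temp))

def band_fraction(lam1, lam2, temp, n=20000):
    """Fraction of the total blackbody radiance emitted between lam1 and lam2.

    The band integral uses the trapezoidal rule; the total over all
    wavelengths follows from the Stefan-Boltzmann law, sigma*T^4/pi."""
    step = (lam2 - lam1) / n
    s = 0.5 * (planck(lam1, temp) + planck(lam2, temp))
    for i in range(1, n):
        s += planck(lam1 + i * step, temp)
    band = s * step
    total = SIGMA * temp**4 / math.pi
    return band / total

frac = band_fraction(380e-9, 850e-9, 6000.0)  # fraction in the 380-850 nm band
factor = 1.0 / frac                           # whole-spectrum / band energy ratio
```

For a $6000$ K black body, the integral places roughly half of the total radiance in the $380$--$850$ nm band, i.e., a conversion factor close to the $1.85$ adopted above.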
In Figure \ref{Fig:USGb} we show the difference between the computed GLM energy and the energy reported by USG as a function of the velocity reported by USG. The difference is given in orders of magnitude; a value of $1.0$ on the vertical axis means that the GLM energy was exactly one order of magnitude larger than the USG energy. Here, again, the color-coded points are computed using our calibration from Eq. (\ref{eqnO_vel}), and the gray points are energies computed assuming only blackbody radiation. According to Eq. (\ref{eqnO_vel}), for a velocity of $22.6$ km/s, both methods give identical results. While for velocities below $\approx 22.6$ km/s the blackbody assumption gives an energy closer to the energy reported by USG, for velocities higher than $22.6$ km/s the energy computed assuming the oxygen triplet radiation is closer to the energy reported by USG. However, there are only two meteors with a velocity above $22.6$ km/s and, thus, the scatter of points at lower velocities is large. \begin{figure} \centering \begin{subfigure}{0.5\textwidth} \includegraphics[width=\textwidth]{figures/USG-GLM_noVelCorr.pdf} \caption{} \label{Fig:USGa} \end{subfigure} \hfill \begin{subfigure}{0.5\textwidth} \includegraphics[width=\textwidth]{figures/USG-GLM_residua.pdf} \caption{} \label{Fig:USGb} \end{subfigure} \caption{GLM radiated energy estimate and USG reported energy for selected fireballs in the spectral range from $380$ nm to $850$ nm. Gray points are computed assuming the blackbody spectrum. Colored points are computed assuming O I -- 1 radiation with the correction on velocity. Figure \ref{Fig:USGa}: GLM and USG comparison of observations of the same fireballs. Fireballs observed simultaneously with GLM 16 and GLM 17 are connected with the green line.
Figure \ref{Fig:USGb}: Residuals ($J$, $\log_{10}$ scale) from the GLM--USG matching curve.} \label{Fig:USG} \end{figure} \section{Extrapolation of the calibration} \subsection{Brightness correction} Since our sample of EN fireballs contains a limited range of meteor magnitudes, we examined how the velocity calibration in Equations \ref{eqnO_vel} and \ref{eqnMAG} for the oxygen region radiation can be extrapolated to fireballs with magnitudes outside of the range of our sample. We investigated how the relative radiation in the oxygen region changes with meteor brightness. To eliminate the influence of velocity on this relative radiation, we stacked the EN fireballs into groups according to similar velocities. In this way, we can study how the relative radiation at $777$ nm, $I_{777}/I_{total}$, depends on the absolute radiation $I_{777}$ at $777$ nm in a given velocity group, using a modified form of Eq. \ref{eqnO_vel}: \begin{multline}\label{eqnO_vel_extrapol} \log_{10}(I_{777}/I_{total}) - 0.026(\pm0.001) \times v + 3.294(\pm0.077) = \\ c(v) \times \log_{10}(I_{777}) + d(v), \end{multline} where the constants $c(v)$ and $d(v)$ depend on velocity and are thus different for different velocity groups. The dependence for each velocity group is shown in Figure \ref{AP:Fig:Fit_I777} in the appendix. To compute the uncertainties of each fit, we used the Monte Carlo method with $10000$ points generated with a normal distribution within the errors of each point in Figure \ref{AP:Fig:Fit_I777}. The errors of each point were computed using the measured noise level for each spectrum. For each clone set, the least-squares fit was performed, and then the final fit and its uncertainty were obtained by computing the mean and the standard deviation of these fits. Then, $c(v)$ is the slope of the fit and $d(v)$ is the intercept. To obtain how the parameters $c(v)$ and $d(v)$ depend on velocity, we computed the average velocity for each velocity group and plotted the dependence of $c(v)$ and $d(v)$ on this velocity.
This is shown in Figure \ref{AP:Fig:parameterFit_I777} of the Appendix. The errors of the average velocity of each velocity group were computed using the uncertainties of the velocity measurements of each fireball. As the errors of $c(v)$ and $d(v),$ the above-mentioned standard deviations of the least-squares fits were used. To obtain the fit of these dependencies, we used the least-squares method. The uncertainty of the fit was obtained by once again using the Monte Carlo method, generating $10000$ clones with a normal distribution and using the means and standard deviations of the fits of these clones. The results of the fits were: \begin{equation}\label{cv} c(v) = -0.0053(\pm0.0041) \times v + 0.261(\pm0.224) \end{equation} and \begin{equation}\label{dv} d(v) = 0.021(\pm0.013) \times v - 1.00(\pm0.71). \end{equation} With these parameters, we can compute the corrected radiation $I_{total}$ (i.e., the radiation between $380$ nm and $850$ nm) from the known radiation at $777$ nm, $I_{777}$, and the known fireball velocity, $v$. When we assume that this correction can simply be extrapolated to fireballs outside of the magnitude range of the EN fireball sample in this work (within a reasonable range), then it can be applied to the radiation calibration of the GLM observations of bolides brighter than those in the EN sample. We applied this correction to the GLM and USG comparison in Figure \ref{Fig:USGCorr}. We can see that after this correction, the GLM reported energies are in somewhat better agreement with the USG reported energies, compared to the energies computed without this correction in Figure \ref{Fig:USG}. In addition, within the uncertainty of this correction, they are in agreement with the energies computed from GLM assuming only blackbody radiation. A similar correction as for the total intensities can be applied to the magnitudes computed from GLM observations; Eq.
\ref{eqnMAG} is modified to: \begin{multline}\label{eqnMag_vel_extrapol} m_V + 2.5 \times \log_{10}(I_{777}) - 0.0948 \times v + 3.45 = \\ cm(v) \times \log_{10}(I_{777}) + dm(v). \end{multline} The results of the fit for each velocity group can be seen in Figure \ref{AP:Fig:Fit_mag} in the appendix, and the dependence of $cm(v)$ and $dm(v)$ on velocity is shown in Figure \ref{AP:Fig:parameterFit_mag}. The result of the least-squares fitting of these parameters is: \begin{equation}\label{cmv} cm(v) = -0.022(\pm0.005) \times v + 0.79 (\pm0.24) \end{equation} and \begin{equation}\label{dmv} dm(v) = 0.102(\pm0.016) \times v - 3.31 (\pm0.77). \end{equation} \begin{figure} \centering \begin{subfigure}{0.5\textwidth} \includegraphics[width=\textwidth]{figures/USG-GLM_plot_Figure-I-forReviewCorr.pdf} \caption{} \label{Fig:USGaCorr} \end{subfigure} \hfill \begin{subfigure}{0.5\textwidth} \includegraphics[width=\textwidth]{figures/USG-GLM_plot_Figure-II-forReviewCorr.pdf} \caption{} \label{Fig:USGbCorr} \end{subfigure} \caption{GLM radiated energy estimate and USG reported energy for selected fireballs in the spectral range from $380$ nm to $850$ nm. Points are computed assuming O I -- 1 radiation with the correction on velocity and after the correction for fireball brightness. Figure \ref{Fig:USGaCorr}: GLM and USG comparison. Figure \ref{Fig:USGbCorr}: Residuals ($J$, $\log_{10}$ scale) from the GLM--USG matching curve.} \label{Fig:USGCorr} \end{figure} If we again assume that this correction can be linearly extrapolated, we can apply it to the fireballs in Section \ref{FireballsCompare}. For the British Columbia fireball, we can see in Figure \ref{Fig:GLM_British} that the corrected magnitude is, within the uncertainty of the correction, in agreement with the light curve of \cite{Jenniskens2018} when they assumed only radiation in the continuum.
The correction for the Arizona fireball was only minor and the corrected and uncorrected light curves were within the uncertainty of the correction (see Figure \ref{Fig:GLM_Arizona}). Also, the corrected light curve of the Hamburg fireball in Figure \ref{Fig:GLM_Hamburg} was only slightly different from the uncorrected light curve. This shows that the dependence of the relative radiation in the oxygen region on the brightness of the meteor is minor, and that our method can be extrapolated to fireballs outside the magnitude range of our sample. \subsection{Dependence on altitude for meteors with and without flare} We observed that deviations of the meteor altitude from the altitude typical for the given meteoroid velocity can affect the radiation at $777$ nm and, thus, the derived estimate of the meteor magnitude. To examine this, we used the middle altitude of the spectrum scan and its dependence on the meteor velocity (see Figure \ref{Fig:VelAlti}). Naturally, the scan was correlated with the altitude of maximal brightness to maximize the S/N ratio of the spectrum. Meteors without a flare showed a steeper dependence of altitude on velocity than those with a flare. Fast meteors from both groups had more or less similar altitudes; slow meteors were lower in the atmosphere when they did not show any flare in their light curve. The least-squares fits of these two groups were used to determine the typical altitude for a given velocity. For meteors without a flare in the spectrum, the typical altitude $H(v)_{N}$ in km was: \begin{equation} H(v)_{N} = 0.82 \times v + 34.0. \label{eqn:HvNo} \end{equation} For meteors with a flare in the spectrum, the typical altitude $H(v)_{F}$ was: \begin{equation} H(v)_{F} = 0.37 \times v + 61.4. \label{eqn:HvF} \end{equation} Here, $v$ is the velocity of the meteor in km/s. The difference between this altitude and the actual altitude of the scanned spectrum ($H_{obs}$ - $H(v)$) was computed to determine the altitude correction for both groups.
We refer to Figure \ref{Fig:magCorrHeight}, where the dependence of the difference between the actual visual and estimated magnitude ($m_V - m_1$) on the altitude difference ($H_{obs}$ - $H(v)$) is shown. The slope of the least-squares fit can be used as the parameter of the magnitude correction for altitude. When no flare is present in the light curve, the correction is: \begin{equation} m_H = m_1 + 0.14(H_{obs} - H(v)). \label{eqn:corr3} \end{equation} Here, $m_H$ is the magnitude corrected for the meteor altitude and $H_{obs}$ is the middle altitude at which the spectrum was scanned to estimate the magnitude $m_1$. If there is a flare, we then have: \begin{equation} m_H = m_1 + 0.10(H_{obs} - H(v)). \label{eqn:corr3Flare} \end{equation} \begin{figure}[htb] \centering { \includegraphics[width=\hsize]{figures/HMaxRex_velocityAll_Flare_NewFit_V2II.pdf}} \caption{Altitude of the scanned spectrum and velocities of meteors. Meteors with fluctuations, with flares, or without flares are distinguished by color.} \label{Fig:VelAlti} \end{figure} \begin{figure}[htb] \centering \includegraphics[width=\hsize]{figures/magCorrection2_V2_correction_on_height.pdf} \caption{Difference between the visual magnitude, $m_V$, and the magnitude, $m_1$, estimated from radiation at $777$ nm and its dependence on the deviation of the scanned spectrum altitude, $H_{obs}$, from the altitude typical for a given velocity, $H(v)$.} \label{Fig:magCorrHeight} \end{figure} In Figure \ref{Fig:magCorrHeight}, we marked meteor 20170227, with a magnitude estimate, $m_1$, very close to the visual magnitude, $m_V$, even though its spectrum scan altitude, $H_{obs}$, was lower than for any other meteor. This meteor is also marked in Figure \ref{Fig:Energy}, showing a low abundance of sodium and a normal magnesium abundance for a given velocity. This indicates the high strength of the meteoroid material \citep{borovicka2005}.
This is also in agreement with the parameter, $P_E$, computed from the terminal altitude of the luminous trajectory, the initial velocity, the initial mass, and the slope of the atmospheric trajectory \citep{CeplechaMcCrosky1976}. This $P_E$ can be used for the classification of meteoroid material \citep{Ceplecha1988}. For this fireball, we computed $P_E = -3.96$ and it was classified as type I (ordinary chondrite), with an origin in asteroids according to the Ceplecha classification. This can explain why, for a given velocity, the fireball penetrated so deep into the atmosphere; on the other hand (as can be seen in Figure \ref{Fig:Energy}), the normal relative radiation at $777$ nm for its velocity resulted in an accurate magnitude estimate computed from this radiation. We excluded this meteor from the estimation of the altitude correction and did not apply the correction to this fireball. \subsection{Applying both corrections} The brightness correction of the calibration was applied to our EN fireball sample in Figure \ref{Fig:MagCorr2b}, where the observed visual magnitudes, $m_V$, are compared with magnitudes computed using the calibration in Eq. \ref{eqnMag_vel_extrapol}. When comparing the root mean square (rms) of this plot with the rms of the data in Figure \ref{Fig:MagCorr2a}, where no brightness correction was applied, we can see a small improvement. The dependence of the relative oxygen radiation on brightness is small but observable. The altitude correction of the calibration was also applied to the EN fireball sample. The result can be seen in Figure \ref{Fig:MagCorr2c}, where the observed visual magnitudes, $m_V$, are compared with magnitudes computed using the calibration in Eq. \ref{eqnMAG} and then corrected for the altitude using Equations \ref{eqn:corr3} and \ref{eqn:corr3Flare}. The rms of the sample improved overall when deviating fireballs with fast velocities and with velocities between $40$ km/s and $50$ km/s were corrected.
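The altitude correction applied above can be sketched in a few lines of Python. This is our illustrative transcription of Equations (\ref{eqn:HvNo})--(\ref{eqn:corr3Flare}); the function names are ours, and the relations are only meant for the velocity and brightness range of the EN sample:

```python
def typical_altitude(v_kms, flare):
    """Typical altitude [km] of the scanned spectrum for a given velocity:
    H(v)_F = 0.37*v + 61.4 for meteors with a flare,
    H(v)_N = 0.82*v + 34.0 for meteors without a flare."""
    if flare:
        return 0.37 * v_kms + 61.4
    return 0.82 * v_kms + 34.0

def altitude_corrected_magnitude(m1, h_obs_km, v_kms, flare):
    """Correct the magnitude estimate m1 for the deviation of the observed
    scan altitude from the typical one; the slope is 0.10 mag/km for
    spectra with a flare and 0.14 mag/km without one."""
    slope = 0.10 if flare else 0.14
    return m1 + slope * (h_obs_km - typical_altitude(v_kms, flare))
```

For example, a fireball without a flare at $v = 20$ km/s scanned at $55$ km has a typical altitude $H(v)_{N} = 50.4$ km, so its magnitude estimate is shifted by $\approx 0.6$ mag.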
The altitude correction was also applied to magnitudes that had already been corrected for brightness. This result can be seen in Figure \ref{Fig:MagCorr2d}. \begin{figure*} \centering \begin{subfigure}[b]{0.475\textwidth} \centering \includegraphics[width=\textwidth]{figures/magCorrection_velocity_mv-m1.pdf} \caption{} \label{Fig:MagCorr2a} \end{subfigure} \hfill \begin{subfigure}[b]{0.475\textwidth} \centering \includegraphics[width=\textwidth]{figures/magCorrection_velocity_mv-mBcorr.pdf} \caption{} \label{Fig:MagCorr2b} \end{subfigure} \vskip\baselineskip \begin{subfigure}[b]{0.475\textwidth} \centering \includegraphics[width=\textwidth]{figures/magCorrection_velocity_mv-mH.pdf} \caption{} \label{Fig:MagCorr2c} \end{subfigure} \begin{subfigure}[b]{0.475\textwidth} \centering \includegraphics[width=\textwidth]{figures/magCorrection_velocity_mv-mHB.pdf} \caption{} \label{Fig:MagCorr2d} \end{subfigure} \caption[ Difference between the visual magnitude $m_V$ and the magnitude $m_1$ estimated from radiation at $777$ nm and magnitudes corrected for the brightness and altitude of a meteor. ] {\small Difference between the visual magnitude, $m_V$, and the magnitude, $m_1$, estimated from radiation at $777$ nm (Fig. \ref{Fig:MagCorr2a}), the magnitude, $m_B$, corrected for brightness (Fig. \ref{Fig:MagCorr2b}), the magnitude, $m_H$, corrected for the altitude of a meteor (Fig. \ref{Fig:MagCorr2c}), and the magnitude, $m_{HB}$, corrected both for the brightness and the altitude of a meteor (Fig. \ref{Fig:MagCorr2d}).} \label{Fig:MagCorr2} \end{figure*} \section{Discussion} Using fireball observations of the European fireball network (EN), we developed a simple method to calibrate observations of bolides in the oxygen region at $777$ nm by the GLM detectors on board the GOES satellites. This method uses the dependence of the oxygen radiation on the velocity of fireballs.
The difference in radiation between slow and fast meteors in this region for similarly bright fireballs can be up to two orders of magnitude (see Figure \ref{Fig:Energy}). The European fireball network does not overlap with the coverage of the GLM detectors; thus, no simultaneous observations can be used. To test our method, we compared the visual magnitudes of meteors observed by EN with the magnitudes of the same meteors computed from the radiation of the oxygen region using our calibration method and the spectral cameras of EN. The difference between these two brightnesses was usually less than $2$ mag and most of the meteors showed a difference of less than $1$ mag. We also applied our method to fireballs that were simultaneously observed by GLM and from the ground. For slow meteors, our method was more or less in agreement with the calibration assuming only blackbody radiation in the whole spectrum (\cite{Jenniskens2018} or \cite{Brown2019}). The calibration was also in good agreement with the calibration of the fast Alberta fireball in the work of \cite{Vida2022}. What is crucial for our method is the knowledge of the fireball velocity, which is not always available for GLM observations. We also compared our calibration of GLM observations with simultaneous USG observations. The method was in good agreement with the calibration that assumed only blackbody radiation, since most of the fireballs were slower than $20$ km/s (see Figures \ref{Fig:USGa} and \ref{Fig:USGb}). In the future, when the Lightning Imager (LI) on EUMETSAT's Meteosat Third Generation is deployed, the overlap with our fireball network will allow us to directly compare the same fireballs observed by lightning detectors and by the spectral cameras of our network (under the condition that it will be possible to filter the fireball data and that those observations will be available). Other influences that can affect the oxygen radiation were also studied.
As can be seen in Figure \ref{AP:Fig:Fit_I777}, the radiation at $777$ nm relative to the broadband radiation, $I_{total}$, increases with the absolute radiation at $777$ nm (i.e., with the brightness of the meteor) for slow meteors. In slow meteors, we observed only weak or no oxygen lines. Due to the low excitation of atmospheric atoms and molecules in slow meteors, the oxygen line is not present in the spectrum and only continuous radiation contributes to the $777$ nm band. As the meteor brightness increases, the radiation becomes optically thick (approaching the blackbody spectrum) and the continuum becomes more important in relation to the spectral lines in other parts of the spectrum. For meteors faster than $\approx 30$ km/s, the relative brightness at $777$ nm was constant with increasing meteor brightness, and for the fastest meteors it was decreasing. In fast and bright meteors, the oxygen line can already be optically thick and thus its relative brightness can be smaller than in fainter meteors with a similar velocity. When we applied this correction to our sample with a limited magnitude range, the result was quite negligible. However, when we linearly extrapolated this correction to brighter fireballs (in Section \ref{FireballsCompare}), the results in general improved. The altitude calibration showed that spectra observed at an altitude lower than the altitude typical for a given velocity had a lower relative brightness of radiation in the oxygen region compared to spectra observed at a higher altitude. As the meteor penetrates lower in the atmosphere, the ablation rate increases, and the relative brightness of oxygen is smaller. This behavior was similar for spectra observed during the flare, but the typical altitude for a given velocity was different when observed during the flare compared to observations without the flare.
When we applied the altitude correction to the sample of EN fireballs and compared the corrected magnitudes, $m_H$, with the visual magnitudes, $m_V$, we were able to correct some of the most deviating meteors (see Figure \ref{Fig:MagCorr2c}). Without the altitude correction, some meteors with velocities of $\approx 40$ km/s and higher altitudes than expected had the oxygen line brighter than expected for their velocity. Two very fast meteors were at lower altitudes with fainter oxygen lines. The meteor that deviated most from the typical height had a high $P_E$ parameter (see Figure \ref{Fig:magCorrHeight}). Thus, we investigated whether there is some relation between the meteoroid strength (represented by the $P_E$ parameter) and the difference $m_V - m_1$, where $m_1$ is the first, uncorrected, estimate of the magnitude computed from the radiation at $777$ nm. As can be seen in Figure \ref{AP:Fig:Hscan_Pe_vel}, there is no clear relation between the $P_E$ parameter, the altitude of the spectrum, and the velocity of the fireball. Most of the meteors deviated less than $5$ km from the expected altitude for a given velocity. Those meteors for which the spectra were scanned at a lower altitude than expected had a wide range of velocities and different values of the $P_E$ parameter. Only three meteors had the spectrum scanned higher than $5$ km above the expected altitude. They had velocities between $40$ and $50$ km/s with similar $P_E$ values between $-5.0$ and $-5.6,$ corresponding to classes II and IIIA according to the classification of \cite{CeplechaMcCrosky1976} as carbonaceous chondrites and regular cometary material, respectively. It seems that apart from the velocity of the meteor, the altitude where the spectrum is observed can affect the relative radiation of oxygen.
In other words, the altitude and flare corrections show that at higher altitudes and outside flares, the oxygen line is relatively brighter, since the ablation rate, which feeds the meteoritic lines, is lower than during a flare or at lower meteor altitudes. When we applied the altitude correction to magnitudes that had already been corrected for brightness, the overall rms was slightly worse than the rms of the magnitudes corrected only for altitude (without the previous brightness correction). This shows that the brightness correction is minor for the magnitude range of the fireballs in our sample. Moreover, we can see in Figure \ref{Fig:MagCorr2} that meteors with medium and fast velocities showed the best improvements when both the brightness and altitude corrections were applied. On the other hand, the slowest meteors in general did not improve. Using the corrections for altitude and brightness, we were able to bring the difference between the magnitude estimated from the radiation at $777$ nm and the magnitude in the visible spectral range, $m_V$, to within one magnitude in brightness for most fireballs observed by SDAFO. The application of the altitude-flare correction to the observations from GLM is not trivial. The altitude calibration is based on a sample of cm-sized meteoroids; larger meteoroids can, in general, penetrate deeper into the atmosphere and the typical altitude for a given velocity is lower. Thus, we assume that this calibration is limited to meteoroids of brightness between $\approx -8$ and $-15$ mag. As an example, we used the altitude correction for the Hamburg fireball. We used the correction for meteors with flares (Equations \ref{eqn:HvF} and \ref{eqn:corr3Flare}). As the observed altitude, we used the altitude reported for the first flare, $H_{obs} = 24.1$ km. The typical altitude for the reported velocity is $H(v)_{F} = 67.3$ km.
This large difference between $H_{obs}$ and $H(v)_{F}$ caused a probable over-correction of the magnitude in Figure \ref{Fig:GLM_Hamburg}. For the given velocity, we did not observe any EN fireball with a flare at such a low altitude. Unfortunately, the altitude is estimated only for a small fraction of GLM fireballs, and if it is not known from ground observations, it can be unreliable. \subsection{Threshold magnitude for GLM observations} Based on the comparison of GLM events observed with simultaneous ground observations, \cite{Jenniskens2018} estimated the threshold magnitude for slow fireballs observed by GLM detectors to be $-14$. Since we observed that the radiation in the oxygen region is $\approx 1/1000$ of the overall radiation in the visible spectrum for slow meteors and $> 1/100$ for meteors faster than $\approx 50$ km/s, the threshold brightness for fast meteors observed by GLM has to be lower. Using the relation in Eq. \ref{eqnMAG}, we can estimate the threshold brightness for fast fireballs. If, according to \cite{Jenniskens2018}, some threshold radiation in the oxygen region, $I_{777}$, corresponds to a magnitude of $m_V = -14$ for slow meteors, we can use Eq. \ref{eqnMAG} with, for example, $v = 15$ km/s as the given slow velocity. The same threshold value for the radiation in the oxygen region, the term $-2.5 \times \log_{10}(I_{777})$, will correspond to a different threshold magnitude if we use different fireball velocities. For example, for a velocity of $70$ km/s, the threshold magnitude, $m_V$, is $\approx -8.8$. Although only three meteors in our sample are brighter than $-14$ magnitude, if we account for the velocity dependence of the oxygen radiation, then $11$ fireballs from our sample were above the threshold sensitivity of the GLM detectors. Our GLM data calibration method is independent of this threshold value. When using only one threshold value for any given meteor velocity, inaccuracy can arise.
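The velocity scaling of the detection threshold can be sketched as follows. This is our illustration, assuming the velocity coefficient of $0.0948$ mag per km/s implied by Eq. (\ref{eqnMAG}) and taking the $-14$ mag threshold of \cite{Jenniskens2018} to refer to an assumed slow reference velocity of $15$ km/s:

```python
REF_VELOCITY = 15.0    # km/s, assumed reference velocity of a "slow" fireball
REF_THRESHOLD = -14.0  # mag, GLM threshold for slow fireballs (Jenniskens et al. 2018)
VEL_COEFF = 0.0948     # mag per km/s, velocity term of Eq. (eqnMAG)

def glm_threshold_magnitude(v_kms):
    """Absolute magnitude corresponding to the GLM threshold radiation I_777
    for a fireball of velocity v_kms: faster fireballs emit relatively more
    at 777 nm, so GLM can detect fainter (numerically larger) magnitudes."""
    return REF_THRESHOLD + VEL_COEFF * (v_kms - REF_VELOCITY)
```

For $v = 70$ km/s, this gives a threshold of $\approx -8.8$ mag, as quoted above.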
For example, the threshold value of $-14$ mag used by \cite{Brown2019} for the absolute calibration of the Hamburg fireball GLM observation corresponds to the fireball's velocity of $15.83$ km/s in this case; but if the same threshold value were used for faster meteors, it would overestimate the derived magnitudes. \subsection{Accuracy of the GLM observations} When comparing the calibrated radiation observed by GLM with the observations reported by USG in Figure \ref{Fig:USGa}, we can notice that fireballs observed simultaneously by two GOES satellites can differ in calibrated radiation by up to one order of magnitude. These stereo bolide detections are connected by a green line in Figure \ref{Fig:USGa}. Since there were only three stereo cases, we compared a larger number of stereo GLM observations in Figure \ref{Fig:stereo}. The problem that the velocity is unknown for most of these meteors was solved by assuming a velocity of $15$ km/s for all of them, since we are not interested in the absolute values of the radiation, but are comparing the relative radiation of the same fireball observed by two detectors. Thus, the absolute values in Figure \ref{Fig:stereo} are artificial, but the ratio between the calibrated radiation from both detectors is velocity-independent. We can see that for some cases, the difference was more than one order of magnitude, and the dispersion is in agreement with the one observed in Figure \ref{Fig:USGa}. This is another uncertainty in the GLM data that has to be taken into account. The source of this discrepancy is not in our calibration, but in the reported data, and it is thus unknown to the authors. Considering this uncertainty, the difference between the radiation estimated with the velocity calibration and the radiation estimated assuming only blackbody radiation at $777$ nm is well within this uncertainty.
\begin{figure}[htb] \centering { \includegraphics[width=\hsize]{figures/STEREO.pdf}} \caption{Comparison of radiation for stereo GLM meteors observed by both GLM 16 and GLM 17 detectors for an assumed velocity of $15$ km/s.} \label{Fig:stereo} \end{figure} \subsection{Sodium and magnesium in cm-sized meteoroids} As an additional part of the study, the radiation of sodium and magnesium in cm-sized meteors was studied. As expected, the relative sodium contribution to the spectrum was greater in slower meteors than in faster meteors (Figure \ref{Fig:Energy_Na}). Slow meteors with colder plasma are more favorable for the radiation of sodium, with its low excitation potential. Moreover, meteors slower than $\approx 30$ km/s showed a large scatter in Na radiation in cases with a weak relative intensity of Na. This can be explained by variations in the actual amount of sodium in each meteoroid. The variation of sodium abundance in meteoroids is well known \citep{borovicka2005, Vojacek2015, Vojacek2019, Matlovic2019} and can be explained either by close approaches to the Sun or by space weathering. For sodium, the three brightness fits implied that for bright meteors the slope of the relative brightness of Na with velocity was smaller than for fainter meteors. This suggests that the high-temperature component is not as dominant in fast bright meteors as it is in fainter meteors of the same velocity. Moreover, the sodium line can be optically thick in bright slow meteors, and therefore not as dominant as would correspond to its actual abundance. Due to the relatively high excitation potential of magnesium Mg I -- 2, the relative brightness of magnesium (Figure \ref{Fig:Energy_Mg}) increased with increasing meteor velocity, but for meteors faster than $\approx 25$ km/s, the high-temperature spectral components started to contribute to the spectrum, thus lowering the relative magnesium contribution.
Moreover, above $25$ km/s, the temperature probably does not increase anymore, as evidenced by the nearly constant Na/Mg ratio shown in Figure \ref{Fig:NaMg}. Similarly to sodium, we observed different slopes of the three brightness bins for magnesium. The decrease is slowest for the brightest meteors. We offer the same explanation as in the case of sodium: the reduced dominance of the high-temperature component in bright fireballs is due to their higher optical thickness. The ratio of Na/Mg radiation is in agreement with the work of \cite{borovicka2005} and \cite{Vojacek2019} for millimeter-sized meteoroids. It increases for slow meteors, while the ratio is more or less constant for velocities above $25$ km/s. We did not observe a shift in the absolute values of the ratio $I_{589} { / } I_{517}$ for larger meteoroids, as reported in \cite{Matlovic2019}. These authors explained the shift by the presence of larger bodies in their sample and thus a reduced level of space weathering of the volatile sodium in the meteoroid body. The sample of \cite{Matlovic2019} overlapped with our work in terms of the absolute brightness of meteors ($-1$ to $-14$ mag, compared to the range from $-8$ to $-15$ mag in our work) and, thus, also the sizes of meteoroids. Therefore, we cannot confirm this shifted absolute Na/Mg ratio for meteoroids of this size. \section{Conclusions} We studied the spectra of bright meteors with a focus on the oxygen triplet O I -- 1 at $777$ nm. The intensity of the oxygen line steeply increases with meteor velocity. The line is invisible in slow meteors, but it is one of the brightest lines in the spectra of fast meteors. The radiant intensity in the narrow $1.1$ nm spectral window around $777$ nm used by the GOES satellites amounts to only $1/1000$ of the radiant intensity of the $380$ - $850$ nm window in meteors of $\approx 11$ km.s$^{-1}$. The radiation is mostly due to a continuum.
However, this share increases to $1/30$ at $70$ km.s$^{-1}$, when the oxygen line dominates. As a consequence, the GLM limiting magnitude depends on meteor velocity. For slow meteors, it is $\approx -14$ mag, but according to our spectral observations of photographic fireballs, the dependence of the relative radiation at $777$ nm on the fireball velocity suggests that meteors as faint as $-9$ mag with speeds up to $70$ km/s can be observed by the GLM. We investigated the discrepancy between GLM--16 and GLM--17 on multiple stereo observations and found that the two detectors can differ by about one order of magnitude. We have provided an empirical formula for converting the energy observed in the $777$ nm band into a meteor V-band magnitude. The formula can be used if the meteor velocity is known. Our data also suggest that the oxygen line intensity depends on the meteor altitude as well as on whether the observation is carried out during a meteor flare. The brightness of the fireball also influences the relative radiation in the oxygen region. Second-order refinements are therefore provided to the conversion formula. The altitude and flare refinements reflect the fact that for a given velocity, the oxygen line is more important when the ablation rate is low, namely, outside flares and at higher altitudes. The brightness correction reflects the fact that the dependence of the $777$ nm band intensity on meteor velocity is less steep for very bright meteors, where the radiation is optically thicker. The typical altitude for a given velocity used in the altitude correction formula was derived from the sample of fireballs observed by the European Fireball Network with a magnitude range from $-8$ to $-15$ mag.
This correction was determined using this range of meteor brightnesses only, and it is likely that it cannot be applied to brighter events penetrating deeper into the atmosphere. We note that, as a byproduct of this work, we also studied the spectral regions at $517$ nm and $589$ nm, where magnesium and sodium (respectively) dominate. We did not observe the shift in absolute values of the ratio $I_{589} { / } I_{517}$ for larger meteoroids reported in \cite{Matlovic2019}. \section*{Acknowledgment} This work was supported by the grant 19-26232X of the Grant Agency of the Czech Republic, GA \v CR, and by the Praemium Academiae of the CAS. We would like to thank E. Sansom for her valuable comments in the review that helped to significantly improve the manuscript.
\subsection{Extensions of the basic NORST-miss approach} \label{sec:ext} In this section, we describe three simple heuristics that help improve the performance of the basic NORST-miss idea described earlier. The first heuristic allows us to tolerate a much higher fraction of missing entries once a good enough estimate of the subspace is obtained. The second and third heuristics help to practically improve the performance of Algorithm \ref{algo:NORST-st-basic}. The same ideas can in fact also be used to improve the performance of Algorithm \ref{algo:auto-dyn-rmc}, and of the original NORST algorithm for RST \cite{rrpcp_icml}. {\color{blue} \subsubsection{Sample-Efficient-NORST-miss} We explain here a simple modification of NORST-miss that reduces its sample complexity under the i.i.d. Bernoulli model. The reason that NORST-miss needs many more observed entries is the projected LS step, which solves for the missing entries vector, $\bm{z}_t$, after projecting $\bm{y}_t$ orthogonal to $\hat{\bm{P}}_{(t-1)}$. This step computes the pseudo-inverse of $(\bm{I} - \hat{\bm{P}}_{(t-1)} \hat{\bm{P}}_{(t-1)}{}')_{{\mathcal{T}_{t}}}$. Our bound on $\small{\text{max-miss-frac-col}}$ helps ensure that this matrix is well conditioned for any set ${\mathcal{T}_{t}}$ of size at most $\small{\text{max-miss-frac-col}} \cdot n$. Notice, however, that we prove that NORST-miss recovers $\P_j$ to $\epsilon$ accuracy with a delay of just $(K+2) \alpha = C r \log n \log(1/\epsilon)$. Once the subspace has been recovered to $\zz$ accuracy, there is no need to use projected LS to recover $\bm{z}_t$. One just needs to recover $\bm{a}_t$ given a nearly perfect subspace estimate and the observed entries.
This can be done more easily as follows (this borrows the PETRELS idea): let $\hat{\bm{P}}_{(t)} \leftarrow \hat{\bm{P}}_{(t-1)}$, solve for $\bm{a}_t$ as $\hat\a_t:= (\bm{I}_{\Omega_t}{}' \hat{\bm{P}}_{(t)})^{\dagger} \bm{I}_{\Omega_t}{}'\bm{y}_t$, and set $\l_t \leftarrow \hat{\bm{P}}_{(t)} \hat\a_t$. Recall here that $\Omega_t = {\mathcal{T}_{t}}^c$. If the set of observed or missing entries is i.i.d. Bernoulli for just the later time instants, this approach only needs $\Omega (r \log r \log^2 n)$ samples at each time $t$, whp. This follows from \cite[Lemma 3]{laura_subspace_match}. Thus, with this approach, if $d \le n$, the number of observed entries needed is only $n(1-1/r) K \alpha + C r \log r \log^2 n (d - K \alpha) = C[ n(1-1/r) r \log n \log(1/\epsilon) + d r \log r \log^2 n ] = \Omega ( n r \log^3 n \log(1/\epsilon) )$ as long as the observed entries follow the i.i.d. Bernoulli model for the time instants after the first $K \alpha$ time instants after a subspace change. In other words, we need the observed entries to be i.i.d. Bernoulli($1 - c/r$) for the first $K \alpha$ frames and i.i.d. Bernoulli($r\log r (\log n)^2 / n$) afterwards. } \subsubsection{NORST-sliding-window} In the basic NORST approach, we use a different set of estimates $\l_t$ for each subspace update step. So, for example, the first subspace estimate is computed at ${\hat{t}}_j + \alpha-1$ using $[\l_{{\hat{t}}_j}, \l_{{\hat{t}}_j+1}, \dots, \l_{{\hat{t}}_j+\alpha-1}]$; the second is computed at ${\hat{t}}_j+2 \alpha-1$ using $[\l_{{\hat{t}}_j+\alpha}, \l_{{\hat{t}}_j+ \alpha+1}, \dots, \l_{{\hat{t}}_j+2\alpha-1}]$; and so on. This is done primarily to ensure mutual independence of the set of $\bm{\ell}_t$'s in each interval because this is what makes the proof easier (it allows the use of matrix Bernstein, for example). However, in practice, we can get faster convergence to an $\epsilon$-accurate estimate of $\P_j$ by removing this restriction.
This approach is of course motivated by the sliding window idea that is ubiquitous in signal processing. For any sliding-window method, there is the window length, which we keep as $\alpha$, and the hop-length, which we denote by $\beta$. Thus, NORST-sliding-window ($\beta$) is Algorithm \ref{algo:NORST-st-basic} with the following change: compute $\hat{\bm{P}}_{j,1}$ using $[\l_{{\hat{t}}_j}, \l_{{\hat{t}}_j+1}, \dots, \l_{{\hat{t}}_j+\alpha-1}]$; compute $\hat{\bm{P}}_{j,2}$ using $[\l_{{\hat{t}}_j + \beta}, \l_{{\hat{t}}_j+ \beta+1}, \dots, \l_{{\hat{t}}_j+\beta + \alpha-1}]$; compute $\hat{\bm{P}}_{j,3}$ using $[\l_{{\hat{t}}_j +2 \beta}, \l_{{\hat{t}}_j+ 2\beta+1}, \dots, \l_{{\hat{t}}_j+2\beta + \alpha-1}]$; and so on. Clearly $\beta \le \alpha$, and $\beta=\alpha$ returns the basic NORST-miss. \subsubsection{NORST-buffer} If we are concerned only with practical performance, another question is whether re-using the same $\alpha$ data samples $\bm{y}_t$ in the following way helps: {\color{blue} At $t = {\hat{t}}_j + k\alpha -1$, the $k$-th estimate is improved $R$ times as follows. First, we obtain $\hat{\bm{L}}_{t;\alpha}:=[\l_{t-\alpha+1}, \l_{t-\alpha+2}, \dots \l_t]$, which is used to compute $\hat{\bm{P}}_{j,k}$ via $r$-SVD. Let us denote this by $\hat{\bm{P}}_{j,k}^{(0)}$. Now, we use this estimate to obtain a second, slightly more refined, estimate of the same $\bm{L}_{t;\alpha}$. We denote it by $\hat{\bm{L}}_{t;\alpha}^{(1)}$ and use it to get $\hat{\bm{P}}_{j,k}^{(1)}$.} This process is repeated a total of $R + 1$ (reuse) times. We found that $R=4$ suffices in most synthetic data experiments, while for real data, $R=0$ (which reduces to the basic NORST algorithm) suffices. This variant has the same memory requirement as NORST-original. The time complexity, however, increases by a factor of $R + 1$ since there are $R + 1$ times more subspace estimation steps.
In other words, the computational complexity increases to $\mathcal{O}(n d r (R + 1) \log(1/ \epsilon))$. \section{Proof of Theorem \ref{thm:stmiss} and Corollary \ref{cor:noisy}\label{sec:proof_outline}} This appendix can be shortened/removed after review. Much of the proof is a simplification of the proof for NORST for RST \cite[Sections 4, 5 and Appendix A]{rrpcp_icml}. The analysis of subspace change detection is exactly the same as done there (see Lemma 4.8 and Appendix A of \cite{rrpcp_icml}) and hence we do not repeat it here. We explain the main ideas of the rest of the proof. To understand it simply, assume that ${\hat{t}}_j=t_j$, i.e., that $t_j$ is known. We use the following simplification of \cite[Remark 2.3]{pca_dd_isit} to analyze the subspace update step. \begin{corollary}[PCA in sparse data-dependent noise (Remark 2.3 of \cite{pca_dd_isit})]\label{cor:pca_dd} For $t = 1, \cdots, \alpha$, suppose that $\bm{y}_t = \bm{\ell}_t + \bm{w}_t + \v_t$ with $\bm{w}_t= \bm{I}_{\mathcal{T}_t}\bm{M}_{s,t}\bm{\ell}_t$ being sparse noise with support $\mathcal{T}_t$, and $\bm{\ell}_t = \P \bm{a}_t$ where $\P$ is a $n\times r$ basis matrix and the $\bm{a}_t$'s satisfy the statistical right-incoherence assumption given in the theorem. Let $\hat{\bm{P}}$ be the matrix of top $r$ eigenvectors of $\frac{1}{\alpha} \sum_t \bm{y}_t \bm{y}_t{}'$. Assume that $\max_t \|\bm{M}_{s,t} \P\| \leq q$ for a $q \le 3$ and that the fraction of non-zeros in any row of the matrix $[\bm{w}_1, \cdots, \bm{w}_{\alpha}]$ is bounded by $b_0$. Pick an $\epsilon_{\mathrm{SE}} >0$. If $6 \sqrt{b_0} q f + \lambda_v^+ / \lambda^- < 0.4 \epsilon_{\mathrm{SE}}$ and if $\alpha \ge \alpha^*$ where \[ \alpha^* := C \max\left( \frac{q^2 f^2}{\epsilon_{\mathrm{SE}}^2} r \log n, \frac{\frac{\lambda_v^+}{\lambda^-} f}{\epsilon_{\mathrm{SE}}^2} r_v \log n\right), \] then, w.p. at least $1- 10n^{-10}$, $\sin\theta_{\max}(\hat{\bm{P}}, \P) \le \epsilon_{\mathrm{SE}}$.
\end{corollary} First assume that $\v_t=0$ so that $\lambda_v^+ = 0$ and $r_v=0$. Also, let $b_0:= \frac{c_2}{f^2}$ denote the bound on $\small{\text{max-miss-frac-row}}_\alpha$ assumed in the Theorem. Using the expression for $\hat{\bm{z}}_t$ given in \eqref{eq:zhatt}, it is easy to see that the error $\bm{e}_t := \bm{\ell}_t - \l_t$ satisfies \begin{align}\label{eq:etdef} \bm{e}_t = \bm{I}_{{\mathcal{T}_{t}}} \left( \bm{\Psi}_{{\mathcal{T}_{t}}}{}'\bm{\Psi}_{{\mathcal{T}_{t}}} \right)^{-1} \bm{I}_{{\mathcal{T}_{t}}}{}' \bm{\Psi} \bm{\ell}_t, \end{align} with $\bm{\Psi} = \bm{I}-\hat{\bm{P}}_{(t-1)} \hat{\bm{P}}_{(t-1)}{}'$. For the first $\alpha$ frames, $\hat{\bm{P}}_{(t-1)} = \bm{0}$ (zero initialization) and so, during this time, $\bm{\Psi} = \bm{I}$. We need to analyze the subspace update steps one at a time. We first explain the main ideas of how we do this for $j>0$ and then explain the different approach needed for $j=0$ (because of zero initialization). Consider a general $j>0$ and $k=1$, i.e., the first subspace update interval for estimating $\P_j$. In this interval, $\bm{\Psi} = \bm{I} - \hat{\bm{P}}_{j-1} \hat{\bm{P}}_{j-1}{}'$ and recall that $\hat{\bm{P}}_{j-1} = \hat{\bm{P}}_{j-1,K}$. Assume that $\sin\theta_{\max}(\hat{\bm{P}}_{j-1}, \P_{j-1}) \le \zz$. Using the $\mu$-incoherence assumption, the bound on $\small{\text{max-miss-frac-col}}:= \max_t |{\mathcal{T}_{t}}|/n$, $\sin\theta_{\max}(\hat{\bm{P}}_{j-1}, \P_{j-1}) \le \zz$ (assumed above), and recalling from the algorithm that $\hat{\bm{P}}_j : = \hat{\bm{P}}_{j,K}$, it is not hard to see that\footnote{Use the RIP-denseness lemma from \cite{rrpcp_perf} and some simple linear algebra which includes a triangle inequality type bound for $\sin\theta_{\max}$.
See the proof of item 1 of Lemma 4.7 of \cite{rrpcp_icml}}, for all $j$, \\ $\sin\theta_{\max}(\hat{\bm{P}}_{j-1}, \P_j) \le \sin\theta_{\max}(\hat{\bm{P}}_{j-1}, \P_{j-1}) + \sin\theta_{\max}(\P_{j-1}, \P_j)$,\\ $\| \bm{I}_{\mathcal{T}_{t}}{}' \P_j\| \le 0.1$, \\ $\| \bm{I}_{\mathcal{T}_{t}}{}' \hat{\bm{P}}_{j,k}\| \le \sin\theta_{\max}(\hat{\bm{P}}_{j,k}, \P_j) + 0.1$, \\ $\| \bm{I}_{\mathcal{T}_{t}}{}' \hat{\bm{P}}_{j-1}\| \le \zz + 0.1$, \\ $\| \left( \bm{\Psi}_{{\mathcal{T}_{t}}}{}' \bm{\Psi}_{{\mathcal{T}_{t}}}\right)^{-1} \| \le 1.2$ with $\bm{\Psi} = \bm{I} - \hat{\bm{P}}_{j,k} \hat{\bm{P}}_{j,k}{}'$. Next, we apply Corollary \ref{cor:pca_dd} to the $\l_t$'s. This bounds the subspace recovery error for PCA in sparse data-dependent noise. Since $\l_t = \bm{\ell}_t + \bm{e}_t$ with $\bm{e}_t$ satisfying \eqref{eq:etdef}, clearly, $\bm{e}_t$ is sparse and dependent on $\bm{\ell}_t$ (the true data). In the notation of Corollary \ref{cor:pca_dd}, $\bm{y}_t \equiv \l_t$, $\bm{w}_t \equiv \bm{e}_t$, $\v_t = 0$, $\mathcal{T}_t \equiv {\mathcal{T}_{t}}$, $\bm{\ell}_t \equiv \bm{\ell}_t$, $\hat{\bm{P}} = \hat{\bm{P}}_{j,1}$, $\P = \P_j$, and $\bm{M}_{s,t} = -\left( \bm{\Psi}_{{\mathcal{T}_{t}}}{}' \bm{\Psi}_{{\mathcal{T}_{t}}}\right)^{-1} \bm{\Psi}_{{\mathcal{T}_{t}}}{}'$ with $\bm{\Psi} = \bm{I} - \hat{\bm{P}}_{j-1} \hat{\bm{P}}_{j-1}{}'$. Thus, using the bounds from above, $\norm{\bm{M}_{s,t} \P} = \| \left( \bm{\Psi}_{{\mathcal{T}_{t}}}{}' \bm{\Psi}_{{\mathcal{T}_{t}}}\right)^{-1} \bm{I}_{\mathcal{T}_{t}}{}' \bm{\Psi} \P_j\| \leq \| \left( \bm{\Psi}_{{\mathcal{T}_{t}}}{}' \bm{\Psi}_{{\mathcal{T}_{t}}}\right)^{-1} \| \| \bm{I}_{\mathcal{T}_{t}}{}' \| \| \bm{\Psi} \P_j\| \le 1.2 (\zz + \sin\theta_{\max}(\P_{j-1}, \P_j)) \equiv q$.
Also, $b \equiv b_0 := \frac{c_2}{f^2}$ ($c_2 = 0.001$), which is the upper bound on $\small{\text{max-miss-frac-row}}_{\alpha}$. Furthermore, $1.2 (\zz + \sin\theta_{\max}(\P_{j-1}, \P_j)) < 1.2(0.01+\Delta) < 1.3$ since $\Delta \le 1$. Thus $q < 3$. We apply Corollary \ref{cor:pca_dd} with $\varepsilon_{\mathrm{SE}} = q/4$. All its assumptions hold because we have set $\alpha = C f^2 r \log n$ and because we have let $b_0=0.001/f^2$, so the required condition $3\sqrt{b_0} f q \le 0.9 \varepsilon_{\mathrm{SE}} / (1 + \varepsilon_{\mathrm{SE}})$ holds. We conclude that $\sin\theta_{\max}(\hat{\bm{P}}_{j,1}, \P_j) \le 1.2 (0.01 + \Delta) / 4 = 0.3(0.01+\Delta):= q_1$ whp. The above is the base case for an induction proof. For the $k$-th subspace update interval, with $k > 1$, we use a similar approach to the one above. Assume that at the end of the $(k-1)$-th interval, we have $\sin\theta_{\max}(\hat{\bm{P}}_{j,k-1}, \P_j) \le q_{k-1}:= 0.3^{k-1} (0.01 + \Delta)$ whp. In this interval, $\norm{\bm{M}_{s,t} \P} \leq 1.2 \| \bm{I}_{\mathcal{T}_{t}}{}' \| \| \bm{\Psi} \P_j\| \le 1.2 \sin\theta_{\max}(\hat{\bm{P}}_{j,k-1}, \P_j) \le 1.2\, q_{k-1} = 1.2 \cdot 0.3^{k-1} (0.01 + \Delta) \equiv q$. We apply Corollary \ref{cor:pca_dd} with $\varepsilon_{\mathrm{SE}} = q/4$. This is possible because we have let $b_0=0.001/f^2$, so the required condition $3\sqrt{b_0} f q \le 0.9 (q/4) / (1 + q/4)$ holds. Thus, we can conclude that $\sin\theta_{\max}(\hat{\bm{P}}_{j,k}, \P_j) \le 1.2 \cdot 0.3^{k-1} (0.01 + \Delta) / 4 = 0.3^{k} (0.01 + \Delta):=q_k$ whp. Thus, starting from $\sin\theta_{\max}(\hat{\bm{P}}_{j,k-1}, \P_j) \le q_{k-1}:= 0.3^{k-1} (0.01 + \Delta)$, we have shown that $\sin\theta_{\max}(\hat{\bm{P}}_{j,k}, \P_j) \le 0.3^{k} (0.01 + \Delta)$. This, along with the base case, implies that $\sin\theta_{\max}(\hat{\bm{P}}_{j,k}, \P_j) \le 0.3^{k} (0.01 + \Delta)$ for all $k=1,2,\dots,K$.
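The geometric decay $q_k = 0.3^k(0.01+\Delta)$ determines how many subspace update intervals are needed to reach a target accuracy $\epsilon$; a two-line numerical check (illustrative only, the helper name is ours):

```python
import math

def K_needed(Delta, eps):
    """Smallest K with 0.3**K * (0.01 + Delta) <= eps, i.e. the number of
    subspace update intervals for the q_k decay to reach accuracy eps."""
    return math.ceil(math.log((0.01 + Delta) / eps) / math.log(1 / 0.3))

K = K_needed(1.0, 1e-5)
print(K, 0.3**K * 1.01)   # decayed error is at or below eps
```

This makes the $K = C\log(1/\epsilon)$ scaling explicit: doubling the number of accuracy digits roughly doubles $K$.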
The choice of $K$ thus implies that $\sin\theta_{\max}(\hat{\bm{P}}_{j}, \P_j) =\sin\theta_{\max}(\hat{\bm{P}}_{j,K}, \P_j) \le \zz$. For $j=0$ and the first subspace interval ($k=1$), the proof is a little different from that of \cite{rrpcp_icml} summarized above. The reason is that we use zero initialization. Thus, in the first update interval for estimating $\P_0$, we have $\bm{\Psi} = \bm{I}$. In applying the PCA in sparse data-dependent noise result of Corollary \ref{cor:pca_dd}, everything is the same as above except that we now have $\bm{M}_{s,t} = \bm{I}_{\mathcal{T}_{t}}{}'$ and so we get $\norm{\bm{M}_{s,t} \P} \leq 0.1$. Thus, in this case, $q =0.1 < 3$. The rest of the argument is the same as above. Now consider $\v_t \neq 0$. Recall that the effective noise dimension of $\v_t$ is $r_v = \max_t \|\v_t\|^2/\lambda_v^+$ where $\lambda_v^+ = \|\mathbb{E}[\v_t \v_t{}']\|$. Furthermore, recall that $\epsilon_{\mathrm{SE}} = q/4$. Thus, in order to obtain a $\zz$-accurate estimate in the noisy case, we will require that $\alpha = \mathcal{O}\left( \max\left( f^2 r \log n, \frac{\frac{\lambda_v^+}{\lambda^-} f r_v \log n}{\epsilon_{\mathrm{SE}}^2} \right)\right)$. Thus, we set $\epsilon_{\mathrm{SE}} = c \sqrt{{\lambda_v^+}/{\lambda^-}}$ to ensure that the dependence on $\zz$ is only logarithmic (coming from the expression for $K$). The above provides the basic proof idea in a condensed fashion, but does not define the events that one conditions on for each interval, and also does not specify the probabilities. For all these details, please refer to Sections IV and V and Appendix A of \cite{rrpcp_icml}. \section{Proof of Corollary \ref{cor:dyn_rmc}}\label{sec:proof_rmc} This proof is also similar to that of NORST for RST \cite{rrpcp_icml}. The difference is that NORST-miss-robust uses noisy modified CS \cite{modcsjournal,stab_jinchun_jp} to replace $l_1$ minimization. In comparison to the ST-miss proof summarized above, we also have to deal with arbitrary outliers, in addition to missing data.
This requires sparse support recovery with partial subspace knowledge, which is solved by modified-CS followed by thresholding-based support recovery. To bound the modified-CS error, we apply Lemma 2.7 of \cite{stab_jinchun_jp}. This uses a bound on $\|\bm{b}_t\| = \|\bm{\Psi} \bm{\ell}_t\|$ and a bound on the $(\small{\text{max-miss-frac-col}} \cdot n + 2 \small{\text{max-outlier-frac-col}} \cdot n)$-RIC of $\bm{\Psi}$. We obtain both of these exactly as done for \cite[Lemma 4.7, Item 1]{rrpcp_icml}: the former uses the slow subspace change bound and the boundedness of $\bm{a}_t$; for the latter, we use the $\mu$-incoherence/denseness assumption, the bounds on $\small{\text{max-outlier-frac-col}}$ and $\small{\text{max-miss-frac-col}}$, and the RIP-denseness lemma of \cite{rrpcp_perf}. With the modified-CS error bound, we prove exact support recovery using the lower bound on $x_{\min}$ and the algorithm parameter values of $\xi$ and $\omega_{supp}$. \section{Experimental Comparisons} \label{sec:sims_main} We present the results of numerical experiments on synthetic and real data\footnote{We downloaded the PETRELS and GROUSE code from the authors' website and all other algorithms from \url{https://github.com/andrewssobral/lrslibrary}.}. All the codes for our experiments are available at \url{https://github.com/vdaneshpajooh/NORST-rmc}. {\em In this section, we refer to NORST-miss as just NORST.} All time comparisons were performed on a desktop computer with an Intel Xeon E3-1200 CPU and 8GB RAM. \subsection{Parameter Setting for NORST} \label{sec:sims_param} The algorithm parameters required for NORST are $r$, $K$, $\alpha$ and $\omega_{evals}$. For our theory, we assume $r$, $\lambda^+$, $\lambda^-$ are known, and we pick a desired accuracy, $\epsilon$. We set $K = C \log(1/\epsilon)$, $\alpha = Cf^2 r \log n$, and $\omega_{evals} = 2 \epsilon^2 \lambda^-$ with $C$ a numerical constant greater than one.
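The stated parameter rules can be collected into a small helper. This is only a sketch: the constant $C$ is unspecified in the text, so the default below is an arbitrary placeholder, and the function name is ours.

```python
import math

def norst_params(r, n, f, lam_minus, eps, C=10.0):
    """Parameter choices following the rules in the text:
    K = C log(1/eps), alpha = C f^2 r log n, omega_evals = 2 eps^2 lam_minus.
    C > 1 is a numerical constant; C = 10 here is an arbitrary placeholder."""
    K = math.ceil(C * math.log(1.0 / eps))
    alpha = math.ceil(C * f**2 * r * math.log(n))
    omega_evals = 2.0 * eps**2 * lam_minus
    return K, alpha, omega_evals

# example with the synthetic-experiment values used later (r=30, n=1000, f=100)
K, alpha, omega = norst_params(r=30, n=1000, f=100, lam_minus=1.0, eps=1e-4)
print(K, alpha, omega)
```

Note that the theoretical $\alpha$ is far larger than the practical choice $\alpha = 2r$ discussed next; the constants in the analysis are not optimized.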
In practice, the value of $r$ needs to be set from model knowledge; however, slightly overestimating it does not significantly affect the results. In most of our experiments, we set $\alpha = 2r$ (ideally it should grow as $r \log n$, but since $\log n$ is very small for practical values of $n$, it can be ignored). $\alpha$ should be a larger multiple of $r$ when the data is quite noisy or when few entries are observed. We set $K$ based on how accurately we would like to estimate the subspace. The parameter $\omega_{evals}$ needs to be set as a small fraction of the minimum signal space eigenvalue. In all synthetic data experiments, we set $\omega_{evals} = 0.0008 \lambda^-$. Another way to set $\omega_{evals}$ is as follows. After $K\alpha$ frames, we can estimate $\hat{\lambda}^-$ as the $r$-th eigenvalue of $\sum_{\tau = t-\alpha+1}^t \l_\tau \l_\tau{}' / \alpha$ and set $\omega_{evals} = c \hat{\lambda}^-$ as mentioned before. We use the Conjugate Gradient Least Squares (CGLS) method \cite{cgls} for the LS step with the tolerance set to $10^{-16}$ and the maximum number of iterations set to $20$. For the video experiments, we estimated $r$ using training data from a few videos and fixed it as $r=30$. We let $\lambda^-$ be the $r$-th eigenvalue of the training dataset. We used $\omega_{evals} = 1.6 \times 10^{-6} \lambda^- = 0.002$, $\alpha = 2r$ and $K = 3$ for the video data. The reason that we use a smaller fraction of $\lambda^-$ as $\omega_{evals}$ is that videos are only approximately low-rank. } \subsection{Fixed Subspace, Noise-free Data} \label{sec:sims_fixed} We generated the data according to \eqref{orpca_eq} and set $\v_t = 0$. We assume a fixed subspace, i.e., $J=1$.
We generate the subspace basis matrix $\P \in \mathbb{R}^{n \times r}$ by ortho-normalizing the columns of a random Gaussian matrix with $n = 1000$ and $r = 30$. The $\bm{a}_t$'s (for $t = 1, \cdots, d$ and $d = 4000$) are generated independently as $(\a_t)_i \stackrel{\text{i.i.d}}{\sim} \text{unif}[-q_i, q_i]$ where $q_i = \sqrt{f} - \sqrt{f}(i-1)/2r \quad \text{for} \quad i = 1, 2, \cdots, r-1$ and $q_r = 1$. Thus, the condition number of $\bm{\Lambda}$ is $f$, and we set $f = 100$. For our first experiment, the observed entries' set was i.i.d. Bernoulli with fraction of observed entries $\rho=0.7$. We compared all NORST extensions and PETRELS. We set the algorithm parameters for NORST and its extensions as mentioned before and used $K = 33$ to see how low the NORST error can go. For PETRELS, we set \texttt{max$\_$cycles} $=1$ and the forgetting parameter $\lambda = 0.98$, as specified in the paper. We display the results in Table \ref{tab:convergence} (top). Notice that NORST-miss and its extensions are significantly faster than PETRELS. Also, the $\beta=10$, $R=1$ variant is the best of all the NORST extensions and is as good as PETRELS. \begin{figure*}[t!]
\centering \begin{tikzpicture} \begin{groupplot}[ group style={ group size=3 by 1, horizontal sep=.4cm, vertical sep=1cm, x descriptions at=edge bottom, y descriptions at=edge left, }, enlargelimits=false, width = .37\linewidth, height=6cm, ymin=1e-10, ymax=1e0, grid=both, grid style={line width=.1pt, draw=gray!10}, major grid style={line width=.2pt,draw=gray!50}, minor tick num=5, ] \nextgroupplot[ legend entries={ NORST-miss ($3.1$ms), NORST-sliding ($5.8$ms), PETRELS ($35$ms), GROUSE ($2.9$ms) }, legend style={at={(2.8, 1.2)}}, legend columns = 4, legend style={font=\footnotesize}, ymode=log, xlabel={\small{Number of Samples ($t$)}}, ylabel={{$\sin\theta_{\max}(\hat{\bm{P}}_{(t)},\P_{(t)})$}}, title style={at={(0.5,-.3)},anchor=north,yshift=1}, title={\small{(a) Piecewise Constant (Noisy)}}, xticklabel style= {font=\footnotesize, yshift=-1ex}, yticklabel style= {font=\footnotesize}, ] \addplot [black, line width=1.2pt, mark=Mercedes star,mark size=6pt, mark repeat=1, select coords between index={0}{15}] table[x index = {0}, y index = {1}]{\changenoise}; \addplot [red, line width=1.2pt, mark=square,mark size=3pt, mark repeat=1] table[x index = {2}, y index = {3}, select coords between index={0}{15}]{\changenoise}; \addplot [olive, line width=1.2pt, mark=o,mark size=4pt, mark repeat=1] table[x index = {4}, y index = {5}, select coords between index={0}{15}]{\changenoise}; \addplot [teal, line width=1.2pt, mark=diamond,mark size=5pt, mark repeat=1] table[x index = {6}, y index = {7}, select coords between index={0}{15}]{\changenoise}; \nextgroupplot[ legend style={at={(1.2, 1.6)}}, legend columns = 3, legend style={font=\footnotesize}, ymode=log, xlabel={\small{Number of Samples ($t$)}}, xticklabel style= {font=\footnotesize, yshift=-1ex}, yticklabel style= {font=\footnotesize}, title style={at={(0.5,-.3)},anchor=north,yshift=1}, title={\small{(b) Piecewise Constant (Noise-Free)}}, ] \addplot [black, line width=1.2pt, mark=Mercedes star,mark size=6pt, mark 
repeat=1] table[x index = {0}, y index = {1}, select coords between index={0}{24}]{\pwjustall}; \addplot [olive, line width=1.2pt, mark=o,mark size=4pt, mark repeat=1] table[x index = {6}, y index = {7}, select coords between index={0}{24}]{\pwjustall}; \addplot [teal, line width=1.2pt, mark=diamond,mark size=5pt, mark repeat=1] table[x index = {12}, y index = {13}, select coords between index={0}{24}]{\pwjustall}; \nextgroupplot[ legend style={at={(.8, 1.6)}}, legend columns = 1, legend style={font=\footnotesize}, ymode=log, xlabel={\small{Number of Samples ($t$)}}, title style={at={(0.5,-.3)},anchor=north,yshift=1}, title={\small{(c) Subspace change at each time}}, xticklabel style= {font=\footnotesize, yshift=-1ex}, yticklabel style= {font=\footnotesize}, ] \addplot [black, line width=1.2pt, mark=Mercedes star,mark size=6pt, mark repeat=2] table[x index = {0}, y index = {1}]{\changetime}; \addplot [olive, line width=1.2pt, mark=o,mark size=4pt, mark repeat=1] table[x index = {2}, y index = {3}]{\changetime}; \addplot [teal, line width=1.2pt, mark=diamond,mark size=5pt, mark repeat=1] table[x index = {4}, y index = {5}]{\changetime}; \end{groupplot} \end{tikzpicture} \caption{\small{Subspace error versus time plot for changing subspaces. We plot the $\sin\theta_{\max}(\hat{\bm{P}}_{(t)},\P_{(t)})$ on the y-axis and the number of samples ($t$) on the x-axis. The entries are observed under Bernoulli model with $\rho=0.9$. The computational time taken per sample (in milliseconds) is provided in the legend parenthesis. 
{\bf (a) Piecewise constant subspace change and noise-sensitivity:} Observe that after the first subspace change, NORST-sliding adapts to the subspace change using the least number of samples and is also $\approx$ 6x faster than PETRELS, whereas GROUSE requires more samples than our approach and thus is unable to converge to the noise-level ($\approx 10^{-4}$); {\bf (b) Piecewise Constant and noise-free:} All algorithms perform significantly better since the data is noise-free. We clip the y-axis at $10^{-10}$ for the sake of presentation, but NORST and PETRELS attain a recovery error of $10^{-14}$. {\bf (c) Subspace changes a little at each time:} All algorithms are able to track the span of the top-$r$ singular vectors of $[\P_{(t-\alpha+1)}, \cdots , \P_{(t)}]$ to an accuracy of $10^{-4}$. As explained, the subspace change at each time can be thought of as noise. GROUSE needs almost $2$x the number of samples to obtain the same accuracy as NORST, while PETRELS is approximately $10$x slower than both NORST and GROUSE. }} \label{fig:stmiss} \end{figure*} In our second set of experiments, we compared NORST (and a few extensions) with PETRELS and GROUSE for three settings of missing data. For GROUSE, we set the maximum number of cycles to $1$ as specified in the documentation, set the step size $\eta = 0.1$, and the step size is updated according to \cite{grouse_global}. The first setting was for missing entries generated from the Moving Object model \cite[Model 6.19]{rrpcp_dynrpca} with $s = 200$ and $b_0 = 0.05$. This translates to $\rho = 0.8$ fraction of observed entries. This is an example of a deterministic model on missing entries. We plot the subspace recovery error versus time for this case in Fig. \ref{fig:fixed_ss}(a). As can be seen, NORST-buffer ($R=4$) and NORST-sliding-window ($\beta=10, R=4$) have the best performance, followed by PETRELS, basic NORST, and then GROUSE. PETRELS is the slowest in terms of time taken. In Fig.
\ref{fig:fixed_ss}(b), we plot the results for the Bernoulli observed entries' set with $\rho=0.9$. Here again, NORST-sliding has the best performance. Basic NORST is only slightly worse than PETRELS. As can be seen from the time taken (displayed in the legend), NORST and its extensions are much faster than PETRELS. In Fig. \ref{fig:fixed_ss}(c), as suggested by an anonymous reviewer, we evaluate the same case but with the covariance matrix of $\bm{\ell}_t$ being time-varying. We generate the $\bm{a}_t$'s as described earlier but with $q_{t,i} = \sqrt{f} - \sqrt{f}(i-1)/2r - \lambda^-/2$ for $t = 2, 4, 6,\cdots$ and $q_{t,i} = \sqrt{f} - \sqrt{f}(i-1)/2r + \lambda^-/2$ for $t = 1, 3, 5,\cdots $ and $q_{t,r} = 1$. As can be seen, all approaches still work in this case. PETRELS converges with the fewest samples but is almost $18$x slower. \Subsection{Changing Subspaces, Noisy and Noise-free Measurements} \label{sec:sims_change} \textbf{{Piecewise constant subspace change, noisy and noise-free:}} We generate the changing subspaces using $\P_j = e^{\gamma_j \bm{B}_j} \P_{j-1}$ as done in \cite{chi_review}, where $\gamma_j$ controls the amount of subspace change and the $\bm{B}_j$'s are skew-symmetric matrices. We used the following parameters: $n = 1000$, $d = 10000$, $J = 6$, and the subspace changes after every $800$ frames. The other parameters are $r = 30$, $\gamma_j = 100$ and the matrices $\bm{B}_j$ are generated as $\bm{B}_j = (\tilde{\bm{B}}_j - \tilde{\bm{B}}_j{}')$ where the entries of $\tilde{\bm{B}}_j$ are generated independently from a standard normal distribution and the $\bm{a}_t$'s are generated as in the fixed subspace case. For the missing entries' supports, we consider the Bernoulli model with $\rho = 0.9$. The noise $\v_t$'s are generated as i.i.d. Gaussian r.v.'s with $\sqrt{\lambda_v^+}= 3 \times 10^{-3} \sqrt{\lambda^-}$. The results are summarized in Fig. \ref{fig:stmiss}(a). For NORST we set $\alpha = 100$ and $K=7$.
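For concreteness, the piecewise-constant subspace generation $\P_j = e^{\gamma_j \bm{B}_j} \P_{j-1}$ described above can be sketched as follows. This is a minimal numpy-only illustration with much smaller $n$, $r$, and $\gamma$ than in our experiments, and the matrix exponential is computed by a truncated Taylor series (adequate here since $\|\gamma \bm{B}\|$ is small) instead of a library routine:

```python
import numpy as np

def expm_taylor(A, terms=30):
    """Matrix exponential via truncated Taylor series (adequate for small ||A||)."""
    E = np.eye(A.shape[0])
    T = np.eye(A.shape[0])
    for k in range(1, terms):
        T = T @ A / k   # A^k / k!
        E = E + T
    return E

def sin_theta_max(P1, P2):
    """Subspace error ||(I - P1 P1') P2||_2, the sine of the largest principal angle."""
    return np.linalg.norm(P2 - P1 @ (P1.T @ P2), ord=2)

rng = np.random.default_rng(0)
n, r, gamma = 50, 5, 1e-2  # toy stand-ins for the experiments' n = 1000, r = 30, gamma_j = 100

# P_{j-1}: orthonormalized random Gaussian matrix (a valid basis matrix).
P_prev, _ = np.linalg.qr(rng.standard_normal((n, r)))

# B_j = B_tilde - B_tilde' is skew-symmetric, so expm(gamma * B_j) is orthogonal
# and P_j = expm(gamma * B_j) P_{j-1} remains a valid basis matrix.
B_tilde = rng.standard_normal((n, n))
B = B_tilde - B_tilde.T
P_new = expm_taylor(gamma * B) @ P_prev

assert np.allclose(P_new.T @ P_new, np.eye(r), atol=1e-8)  # columns still orthonormal
assert 0.0 < sin_theta_max(P_prev, P_new) < 1.0            # subspace moved, but not fully
```

The key property used is that the exponential of a skew-symmetric matrix is orthogonal, so the rotated columns remain an orthonormal basis and $\gamma_j$ directly controls how far the new subspace is from the old one.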
We observe that all algorithms except GROUSE are able to attain a final accuracy approximately equal to the noise level, $10^{-3}$, within a short delay of the subspace change. We also observe that NORST-sliding-window adapts to the subspace change using the fewest samples possible. Moreover, it is much faster than PETRELS. In Fig. \ref{fig:stmiss}(b), we plot results for the above setting but with noise $\v_t=0$. In this case, the underlying subspace is recovered to accuracy lower than $10^{-12}$ by NORST and PETRELS, but GROUSE only tracks to error $10^{-7}$. \begin{table*}[t!] \caption{\small{Comparison of $\|\bm{L} - \hat{\bm{L}}\|_F/ \|\bm{L}\|_F$ for MC. We report the time taken per sample in milliseconds in parenthesis. Thus the table format is Error (computational time per sample). The first three rows are for the fixed subspace model. The fourth row contains results for time-varying subspace and with noise of standard deviation $0.003 \sqrt{\lambda^-}$ added. The last row reports Background Video Recovery results (for the curtain video shown in Fig.
\ref{fig:vid_mo_st}) when missing entries are Bernoulli with $\rho=0.9$.}} \begin{center} \resizebox{.95\linewidth}{!}{ \begin{tabular}{ c c c c c} \toprule Subspace model & {NORST-smoothing} & \multicolumn{2}{c}{nuclear norm min (NNM) solvers} & projected-GD \\ \cmidrule(lr){3-4} & & IALM & SVT & \\ \midrule Fixed (Bern, $\rho=0.9$) &\textbf{$1.26 \times 10^{-15}$ ($10$)} & $1.43 \times 10^{-12}$ ($150$) & $7.32 \times 10^{-7}$ ($164$) & $0.98$ ($1$) \\ Fixed (Bern, $\rho=0.3$) & $3.5 \times 10^{-6}$ ($11$) & $5.89 \times 10^{-13}$ ($72$) &-- & $0.98$ ($9$) \\ Noisy, Changing (Bern, $\rho=0.9$) & $3.1 \times 10^{-4}$ ($3.5$) & $3.47 \times 10^{-4}$ ($717$) & $2.7 \times 10^{-3}$ ($256$) & $0.97$ ($2$) \\ Video Data & $0.0074$ ($83.7$) & $0.0891$ ($57.5$) & $0.0034$ ($6177 $) & -- \\ \bottomrule \end{tabular} } \end{center} \label{tab:all_MCalgos_frob} \vspace{-.2in} \end{table*} \textbf{Subspace change at each time:} Here we generate the data using the approach of \cite{grouse}: $\P_{(1)}$ is generated by ortho-normalizing the columns of an i.i.d. Gaussian matrix and we let $\P_{(t)} = e^{\gamma \bm{B}} \P_{(t-1)}$. We set $\gamma = 10^{-7}$. No extra noise $\v_t$ was added, i.e., $\v_t=0$, in this experiment. We plot $\sin\theta_{\max}(\hat{\bm{P}}_{(t)}, \P_{(t)})$ in Fig. \ref{fig:stmiss}(c). Notice that, even without added noise $\v_t$, all algorithms are only able to track the subspaces to accuracy at most $10^{-3}$ in this case. The reason is, as explained earlier in Sec. \ref{identif}, subspace change at each time can be interpreted as an $r$-dimensional piecewise constant subspace change plus noise. \begin{figure*}[t!]
\centering \resizebox{.9\linewidth}{!}{ \begin{tabular}{@{}c @{}c @{}c @{}c @{}c @{}c @{}c @{}} \includegraphics[scale=0.995, trim={5.5cm, 0.4cm, 5.35cm, 1.1cm}, clip]{videoFrames/Original_Frame980201020.pdf} & \includegraphics[scale=1.015, trim={5.69cm, 0.4cm, 5.5cm, 1.1cm}, clip]{videoFrames/Corrupted_Frame980201020.pdf} & \includegraphics[scale=1, trim={5.5cm, 0.4cm, 5.5cm, 1.1cm}, clip]{videoFrames/NORST_recons_Frame980201020.pdf} & \includegraphics[scale=0.995, trim={5.5cm, 0.4cm, 5.35cm, 1.1cm}, clip]{videoFrames/GROUSE_recons_Frame980201020.pdf} & \includegraphics[scale=1, trim={5.50cm, 0.4cm, 5.6cm, 1.cm}, clip]{videoFrames/PETRELS10_recons_Frame980201020.pdf} & \includegraphics[scale=1, trim={5.5cm, 0.4cm, 5.5cm, 1.1cm}, clip]{videoFrames/IALM_recons_Frame980201020.pdf} & \includegraphics[scale=1, trim={5.5cm, 0.4cm, 5.5cm, 1.1cm}, clip]{videoFrames/SVT_recons_Frame980201020.pdf} \\ \scalebox{1.5}{Original} & \scalebox{1.5}{Corrupted} &\scalebox{1.5}{NORST} & \scalebox{1.5}{GROUSE } & \scalebox{1.5}{PETRELS($10$)} & \scalebox{1.5}{IALM} & \scalebox{1.5}{SVT} \\ & &\scalebox{1.5}{(7.5ms)} & \scalebox{1.5}{(9ms)} & \scalebox{1.5}{(1698ms)} & \scalebox{1.5}{(45.5ms)} & \scalebox{1.5}{(3238ms)} \end{tabular} } \caption{\small{Background Recovery under Moving Object Model missing entries ($\rho = 0.98$). We show the original, observed, and recovered frames at $t = \{980, 1000, 1020\}$. NORST and SVT are the only algorithms that work although NORST is almost $3$ orders of magnitude faster than SVT. PETRELS($10$) exhibits artifacts, while IALM and GROUSE do not capture the movements in the curtain. The time taken per sample for each algorithm is shown in parenthesis.}} \label{fig:vid_mo_st} \end{figure*} \begin{figure*}[t!] 
\centering \resizebox{.7\linewidth}{!}{ \begin{tabular}{@{}c @{}c @{}c @{}c @{}c} \includegraphics[scale=0.995, trim={5.5cm, 0.4cm, 5.35cm, 1.1cm}, clip]{videoFrames/Original_Frame105910781157.pdf} & \includegraphics[scale=0.995, trim={5.5cm, 0.4cm, 5.35cm, 1.1cm}, clip]{videoFrames/BgFg_corrupted_Frame105910781157.pdf} & \includegraphics[scale=1.01, trim={5.65cm, 0.4cm, 5.5cm, 1.1cm}, clip]{videoFrames/BgFg_recons_NORST_Frame105910781157.pdf} &\includegraphics[scale=0.995, trim={5.5cm, 0.4cm, 5.35cm, 1.1cm}, clip]{videoFrames/BgFg_recons_GRASTA_Frame105910781157.pdf} & \includegraphics[scale=0.995, trim={5.5cm, 0.4cm, 5.35cm, 1.1cm}, clip]{videoFrames/BgFg_recons_NCRMC_Frame105910781157.pdf} \\ \scalebox{1.5}{Original} & \scalebox{1.5}{Corrupted} &\scalebox{1.5}{NORST-miss-rob} & \scalebox{1.5}{GRASTA-RMC} & \scalebox{1.5}{projected-GD} \\ & &\scalebox{1.5}{(31.6ms)} & \scalebox{1.5}{(25ms)} & \scalebox{1.5}{(11ms)} \end{tabular} } \caption{\small{Background Recovery with foreground layer and Bernoulli missing entries ($\rho = 0.9$). We show the original, observed, and recovered frames at $t = 1755+\{1059, 1078, 1157\}$. NORST-miss-rob exhibits artifacts, but is able to capture most of the background information, whereas GRASTA-RMC and projected-GD fail to obtain meaningful estimates. The time taken per sample for each algorithm is shown in parentheses.}} \label{fig:vid_rmc} \end{figure*} \Subsection{Matrix Completion} \label{sec:sims_mc} In Table \ref{tab:all_MCalgos_frob}, we compare NORST-smoothing with existing MC solutions (for which code is available). This table displays the Monte-Carlo mean of the normalized Frobenius norm error along with the time taken per column (in parentheses).
We compare two solvers for nuclear norm min (NNM) -- (i) Singular Value Thresholding (SVT) with maximum iterations as $500$, tolerance as $10^{-8}$, $\delta = 1.2/\rho$, and $\tau = 5 \sqrt{nd}$ and (ii) Inexact Augmented Lagrangian Multiplier (IALM) \cite{ialm_nnm} with maximum iterations $500$ and tolerance $10^{-16}$. We also evaluate the projected Gradient Descent (projected-GD) algorithm of \cite{rmc_gd}; this is a non-convex and hence fast approach, with the best sample complexity among non-convex approaches. This seems to be the only provable non-convex MC approach for which code is available. NORST-smoothing used $K=33$ and $\alpha=2r$. The matrix $\bm{L}$ was generated as described in Sec. \ref{sec:sims_fixed} for the ``fixed'' subspace rows and as in Sec. \ref{sec:sims_change} (piecewise constant subspace change) for the ``Noisy, Changing'' subspace row. The observed entries' set followed the Bernoulli model with different values of $\rho$ in the different rows. The table demonstrates our discussion from Sec. \ref{sec:mainres_noisefree}. (1) In all cases, NORST-smoothing is much faster than both the solvers for convex MC (NNM), but is slower than the best non-convex MC approach (projected-GD). (2) NORST-smoothing is always better than projected-GD (implemented using the default code; it is not easy to change the code parameters). It is nearly as good as IALM (one of the two solvers for NNM) when $\rho$ is large, but is worse than IALM when $\rho$ is small. \Subsection{Real Video Data} \label{sec:sims_real} Here we consider the task of Background Recovery for missing data. We use the \texttt{Meeting Room} video, which is a benchmark dataset in Background Recovery. It contains $1755$ images of size $64$x$80$ in which a curtain is moving in the wind. Subsequently, there are $1209$ frames in which a person walks into the room, writes on a blackboard, and exits the room.
The first $1755$ frames are used for ST-miss while the subsequent frames are used for RST-miss (since we can model the person as a sparse outlier \cite{rpca}). We generate the set of observed entries using the Bernoulli model with $\rho = 0.9$. In all experiments, we use the estimate of rank as $r=30$. The parameters of NORST-miss are $\alpha = 60$, $K = 3$, and $\omega_{evals} = 2 \times 10^{-3}$. We noticed that PETRELS failed to retrieve the background with default parameters so we increased \texttt{max$\_$cycles}$=10$ and refer to this as PETRELS($10$) in the sequel. Furthermore, we also ensured that the input data matrix has more columns than rows by transposing the matrix when necessary. All other algorithms are implemented as done in the previous experiments. We observed that NORST-miss and SVT provide a good estimate of the background and NORST is $\approx 150$x faster. The relative Frobenius error is provided in the last row of Table \ref{tab:all_MCalgos_frob}. Notice that, in this case, SVT outperforms IALM and NORST, but NORST is the fastest one. These results are averaged over $10$ independent trials. {\bf Moving Object Missing Entries:} In our second video experiment, we generated the set of missing entries using the moving object model with $\rho = 0.98$. All algorithms are implemented as in the previous experiment. Interestingly, even though we observe $98\%$ of the entries, the performance of all algorithms degrades compared to the Bernoulli($0.9$) case. This is possibly because the support sets are highly correlated over time and thus the assumptions of the other algorithms break down. The results are shown in Fig. \ref{fig:vid_mo_st}. Observe that NORST-miss and SVT provide the best visual comparison and NORST-miss is faster than SVT by $\approx 400$x. PETRELS($10$) contains significant artifacts in the recovered background and IALM provides a {\em static} output in which the movements of the curtain are not discernible.
\begin{table}[ht!] \caption{\small{Comparing recovery error for Robust MC methods. Missing entries were Bernoulli with $\rho = 0.9$, and the outliers were sparse Moving Objects with $\rho_{\mathrm{sparse}} = 0.95$. The time taken per sample is shown in parentheses.}} \begin{center} \resizebox{.8\linewidth}{!}{ \begin{tabular}{cc c c } \toprule {NORST-miss-rob} & GRASTA-RMC & projected-GD\\ \midrule ${0.0832}$ ($ 3$) & $0.1431$ ($2.9$) & $0.5699$ ($2$)\\ \bottomrule \end{tabular} } \end{center} \label{tab:rmc} \end{table} \Subsection{RST-miss and RMC} \label{sec:sims_rmc} In this experiment, we consider the RST-miss problem, i.e., we generate data according to \eqref{eq:rmc_prob}. We generate the low rank matrix, $\bm{L}$, as done in experiment $1$ (single subspace). We generate the sparse matrix, $\bm{S}$, as follows: we use the Moving Object Model to generate the support sets such that $s/n = 0.05$ and $b_0 = 0.05$, which translates to $\rho_{\mathrm{sparse}} = 0.05$ {\em fraction of sparse outliers}. The non-zero magnitudes of $\bm{S}$ are generated uniformly at random between $[x_{\text{min}}, x_{\text{max}}]$ with $x_{\text{min}} = 10$ and $x_{\text{max}}=25$. We generated the support of observed entries using the Bernoulli model with probability $\rho_{\text{obs}} = 0.9$. For the initialization step of NORST-miss-robust (Algorithm 2), for the first $t_{\mathrm{train}} = 400$ data samples, we set $(\bm{y}_t)_i = 10$ for all $i \in {\mathcal{T}_{t}}$. We do this to allow us to use AltProj \cite{robpca_nonconvex}, which is an RPCA solution, for obtaining the initial subspace estimate. The parameters for this step are set as $500$ maximum iterations of AltProj, and tolerance $10^{-3}$. The other algorithm parameters for NORST-miss-robust are $\alpha = 60$, $K = 33$, $\omega_{evals} = 7.8 \times 10^{-4}$, $\xi = x_{\min}/15$, and $\omega_{supp} = x_{\text{min}}/2 = 5$.
We compare with\footnote{We do not compare with NNM-based methods for which code is not available online.} GRASTA-RMC \cite{grass_undersampled} and projected-GD \cite{rmc_gd}. For GRASTA-RMC we used the tolerance $10^{-8}$, and \texttt{max$\_$cycles}$=1$. For projected-GD, we use the default tolerance $10^{-1}$ and max. iterations $70$. The results are given in Table \ref{tab:rmc}. Observe that NORST-miss-robust obtains the best estimate among the RMC algorithms. {\bf Real video data:} In this experiment, we consider Background Recovery applied to the second part of the dataset (last $1209$ frames). In addition to the person who enters the room and writes on the board (sparse component), we generate missing entries from the Bernoulli model with $\rho= 0.9$. We initialize using AltProj with tolerance $10^{-2}$ and $100$ iterations. We set $\omega_{supp,t} = 0.9 \|\bm{y}_t\|/\sqrt{n}$ using the approach of \cite{rrpcp_icml}. The comparison results are provided in Fig. \ref{fig:vid_rmc}. Notice that both GRASTA-RMC and projected-GD fail to accurately recover the background. Although NORST-miss-robust exhibits certain artifacts around the edges of the sparse object, it is able to capture most of the information in the background.
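The data-dependent support threshold $\omega_{supp,t} = 0.9\|\bm{y}_t\|/\sqrt{n}$ used above can be illustrated with a minimal self-contained sketch. The frame below is synthetic, and the magnitudes are hypothetical, chosen only so that the outliers dominate the per-entry average energy:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100

# Synthetic frame: small-magnitude background plus a few large-magnitude outliers.
y = 0.1 * rng.standard_normal(n)
true_support = np.array([3, 17, 42])
y[true_support] = 10.0

# Threshold of the form used in the video experiment: 0.9 * ||y_t||_2 / sqrt(n).
# Entries whose magnitude exceeds the (scaled) root-mean-square energy of the
# frame are declared outliers.
omega_supp = 0.9 * np.linalg.norm(y) / np.sqrt(n)

est_support = np.flatnonzero(np.abs(y) > omega_supp)
assert np.array_equal(est_support, true_support)
```

The point of normalizing by $\sqrt{n}$ is that the threshold adapts to the overall energy of each frame, so no absolute outlier magnitude needs to be known in advance.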
\subsection{#1} \vspace{-0.05in} } \begin{document} \title{Provable Subspace Tracking from Missing Data and Matrix Completion} \author{Praneeth Narayanamurthy,~\IEEEmembership{Student Member,~IEEE,} Vahid Daneshpajooh, and Namrata Vaswani,~\IEEEmembership{Fellow,~IEEE} \thanks{Parts of this paper will be presented at IEEE International Conference on Acoustics, Speech, and Signal Processing, 2019 \cite{rst_miss_icassp} and IEEE International Symposium on Information Theory, 2019 \cite{st_miss_isit}.} \thanks{The authors are with the Department of Electrical and Computer Engineering, Iowa State University, Ames, IA, 50010 USA (e-mail: {\{pkurpadn, vahidd, namrata\} @iastate.edu}).} } \maketitle \begin{abstract} We study the problem of subspace tracking in the presence of missing data (ST-miss). In recent work, we studied a related problem called robust ST. In this work, we show that a simple modification of our robust ST solution also provably solves ST-miss and robust ST-miss. To our knowledge, our result is the first ``complete'' guarantee for ST-miss. This means that we can prove that under assumptions on only the algorithm inputs, the output subspace estimates are close to the true data subspaces at all times. Our guarantees hold under mild and easily interpretable assumptions, and allow the underlying subspace to change with time in a piecewise constant fashion. In contrast, all existing guarantees for ST are partial results and assume a fixed unknown subspace. Extensive numerical experiments are shown to back up our theoretical claims. Finally, our solution can be interpreted as a provably correct mini-batch and memory-efficient solution to low rank Matrix Completion (MC). \end{abstract} \begin{IEEEkeywords} Subspace Tracking, Matrix Completion \end{IEEEkeywords} \renewcommand{\subsubsection}[1]{{\bf #1.
}} \section{Introduction} Subspace tracking from missing data (ST-miss) is the problem of tracking the (fixed or time-varying) low-dimensional subspace in which a given data sequence approximately lies when some of the data entries are not observed. The assumption here is that consecutive subsets of the data are well-approximated as lying in a subspace that is significantly lower-dimensional than the ambient dimension. Time-varying subspaces are a more appropriate model for long data sequences (e.g. long surveillance videos). For such data, if a fixed subspace model is used, the required subspace dimension may be too large. As is common in time-series analysis, the simplest model for time-varying quantities is to assume that they are piecewise constant with time. We adopt this model here. If the goal is to provably track the subspaces to any desired accuracy, $\zz>0$, then, as we explain later in Sec. \ref{identif}, this assumption is, in fact, necessary. Of course, experimentally, our proposed algorithm, and all existing ones, ``work'' (return good but not perfect estimates) even without this assumption, as long as the amount of change at each time is small enough. The reason is one can interpret subspace changes at each time as a ``piecewise constant subspace'' plus noise. The algorithms are actually tracking the ``piecewise constant subspace'' up to the noise level. We explain this point further in Sec. \ref{identif}. ST-miss can be interpreted as an easier special case of robust ST (ST in the presence of additive sparse outliers) \cite{rrpcp_icml}. We also study robust ST-miss, which is a generalization of both ST-miss and robust ST. Finally, our solutions for ST-miss and robust ST-miss also provide novel mini-batch solutions for low-rank matrix completion (MC) and robust MC respectively. Example applications where these problems occur include recommendation system design and video analytics.
% In video analytics, foreground occlusions are often the source of both missing and corrupted data: if the occlusion is easy to detect by simple means, e.g., color-based thresholding, then the occluding pixel can be labeled as ``missing''; while if this cannot be detected easily, it is labeled as an outlier pixel. Missing data also occurs due to detectable video transmission errors (typically called ``erasures''). In recommendation systems, data is missing because all users do not label all items. In this setting, time-varying subspaces model the fact that, as different types of users enter the system, the factors governing user preferences change.
\subsubsection{Brief review of related work} ST has been extensively studied in both the controls and the signal processing literature; see \cite{golubtracking, chi_review, sslearn_jmlr, rrpcp_proc} for comprehensive overviews of both classical and modern approaches. The best-known existing algorithms for ST and ST-miss include Projection Approximate Subspace Tracking (PAST) \cite{past,past_conv}, Parallel Estimation and Tracking by Recursive Least Squares (PETRELS) \cite{petrels} and Grassmannian Rank-One Update Subspace Estimation (GROUSE) \cite{grouse,local_conv_grouse, grouse_global, grouse_enh}. Of these, PETRELS is known to have the best experimental performance. There have been some attempts to obtain guarantees for GROUSE and PETRELS for ST-miss \cite{local_conv_grouse,grouse_global, petrels_new}; however, all of these results assume the statistically stationary setting of a {\em fixed unknown subspace} and all of them provide only {\em partial guarantees}. This means that the result does not tell us what assumptions the algorithm inputs (input data and/or initialization) need to satisfy in order to ensure that the algorithm output(s) are close to the true value(s) of the quantity of interest, either at all times or at least at certain times.
For example, \cite{local_conv_grouse} requires that the intermediate algorithm estimates of GROUSE satisfy certain properties (see Theorem \ref{thm:grouse} given later). It does not tell us what assumptions on the algorithm inputs will ensure that these properties hold. On the other hand, \cite{petrels_new} guarantees closeness of the PETRELS output to a quantity other than the true value of the ``quantity of interest'' (here, the true data subspace); see Theorem \ref{thm:petrels}. Of course, the advantage of GROUSE and PETRELS is that they are streaming solutions (they require a single pass through the data). This may also be the reason that a complete guarantee is harder to obtain for these. Other related work includes streaming PCA with missing data \cite{streamingpca_miss,eldar_jmlr_ss}. A provable algorithmic framework for robust ST is Recursive Projected Compressive Sensing (ReProCS) \cite{rrpcp_allerton,rrpcp_perf,rrpcp_aistats,rrpcp_dynrpca,rrpcp_icml}. Robust ST-miss has not received much attention in the literature. Provable MC has been extensively studied, e.g., \cite{matcomp_candes,lowrank_altmin,rmc_gd}. We discuss these works in detail in Sec. \ref{sec:prior_art}. \subsubsection{Contributions} (1) We show that a simple modification of a ReProCS-based algorithm called Nearly Optimal Robust ST via ReProCS (NORST for short) \cite{rrpcp_icml} also provably solves the ST-miss problem while being fast and memory-efficient. An extension for robust ST-miss is also presented. Unlike all previous work on ST-miss, our guarantee is a {\em complete guarantee (correctness result)}: we show that, with high probability (whp), under simple assumptions on only the algorithm inputs, the output subspace estimates are close to the true data subspaces and get to within $\zz$ accuracy of the current subspace within a ``near-optimal'' delay.
Moreover, unlike past work, our result allows time-varying subspaces (modeled as piecewise-constant with time) and shows that NORST-miss can provably detect and track each changed subspace quickly. Here and below, {\em near-optimal} means that our bound is within logarithmic factors of the minimum required. For $r$-dimensional subspace tracking, the minimum required delay is $r$; thus our delay of order $r\log n \log(1/\zz)$ is {\em near-optimal}. Moreover, since ST-miss is an easier problem than robust ST, our guarantee for ST-miss is significantly better than the original one \cite{rrpcp_icml} that it follows from. It does not assume a good first subspace initialization and does not require slow subspace change. (2) Our algorithm and result can also be interpreted as a novel provably correct mini-batch and memory-efficient solution to low-rank MC. We explain in Sec. \ref{sec:mainres_noisefree} that our guarantee is particularly interesting in the regime where the subspace changes frequently enough, e.g., if it changes every order $r\log n \log(1/\zz)$ time instants. \subsubsection{Organization} We explain the algorithm and provide the guarantees for it in Sec. \ref{sec:norstmiss}; first for the noise-free case and then for the noisy case. A detailed discussion is also given that explains why our result is an interesting solution for MC. In this section, we also develop simple heuristics that improve the experimental performance of NORST-miss. We provide a detailed discussion of existing guarantees and how our work relates to the existing body of work in Sec. \ref{sec:prior_art}. Robust ST-miss is discussed in Sec. \ref{sec:norstmissrob}. Exhaustive experimental comparisons for simulated and partly real data (videos with simulated missing entries) are provided in Sec. \ref{sec:sims_main}.
These show that as long as the fraction of missing entries is not too large, (i) basic NORST-miss is nearly as good as the best existing ST-miss approach (PETRELS), while being faster and having a {\em complete guarantee}; (ii) its extensions have better performance than PETRELS and are also faster than PETRELS; (iii) the performance of NORST-miss is worse than that of convex MC solutions, but much better than non-convex ones (for which code is available); however, NORST-miss is much faster than the convex MC methods. We conclude in Sec. \ref{sec:conc}. \Subsection{Notation} We use the interval notation $[a, b]$ to refer to all integers between $a$ and $b$, inclusive, and we use $[a,b): = [a,b-1]$. $\|.\|$ denotes the $l_2$ norm for vectors and the induced $l_2$ norm for matrices unless specified otherwise, and $'$ denotes transpose. We use $\bm{M}_\mathcal{T}$ to denote the sub-matrix of $\bm{M}$ formed by its columns indexed by entries in the set $\mathcal{T}$. For a matrix $\P$ we use $\P^{(i)}$ to denote its $i$-th row. A matrix $\P$ with mutually orthonormal columns is referred to as a {\em basis matrix} and is used to represent the subspace spanned by its columns. For basis matrices $\P_1,\P_2$, we use $\sin\theta_{\max}(\P_1,\P_2):=\|(\bm{I} - \P_1 \P_1{}')\P_2\|$ as a measure of Subspace Error (distance) between their respective subspaces. This is equal to the sine of the largest principal angle between the subspaces. If $\P_1$ and $\P_2$ are of the same dimension, $\sin\theta_{\max}(\P_1, \P_2) = \sin\theta_{\max}(\P_2, \P_1)$. We use $\hat{\bm{L}}_{t; \alpha} := [\hat{\l}_{t-\alpha + 1}, \cdots, \hat{\l}_t]$ to denote the matrix formed by $\hat{\l}_t$ and the $(\alpha-1)$ previous estimates. Also, $r$-SVD$[\bm{M}]$ refers to the matrix of top $r$ left singular vectors of $\bm{M}$. A set $\Omega$ that is randomly sampled from a larger set (universe), $\mathcal{U}$, is said to be {\em ``i.i.d.
Bernoulli with parameter $\rho$''} if each entry of $\mathcal{U}$ has probability $\rho$ of being selected to belong to $\Omega$ independent of all others. We reuse $C,c$ to denote different numerical constants in each use; $C$ is for constants greater than one and $c$ for those less than one. \begin{definition} [$\mu$-incoherence] An $n \times r_{\scriptscriptstyle{P}}$ basis matrix $\P$ is $\mu$-incoherent if $ \max_{i} \|\P^{(i)}\|_2^2 \le {\mu \frac{r_{\scriptscriptstyle{P}}}{n}} $ ($\P^{(i)}$ is the $i$-th row of $\P$). Clearly, $\mu \ge 1$. \label{defmu} \end{definition} Throughout this paper, we assume that $f$, which is the condition number of the population covariance of $\bm{\ell}_t$, and the parameter, $\mu$, are constants. This is assumed when the $\mathcal{O}(\cdot)$ notation is used. \Subsection{Problem Statement} ST-miss is precisely defined as follows. At each time $t$, we observe a data vector $\bm{y}_t \in \Re^n$ that satisfies \begin{align} \bm{y}_t = \mathcal{P}_{\Omega_t}(\bm{\ell}_t) + \v_t, \text{ for } t = 1, 2, \dots, d \label{orpca_eq} \end{align} where $(\mathcal{P}_{\Omega_t}(\bm{z}))_i = (\bm{z})_i$ if $i \in \Omega_t$ and $0$ otherwise. Here $\v_t$ is small unstructured noise, $\Omega_t$ is the set of observed entries at time $t$, and $\bm{\ell}_t$ is the true data vector that lies in a fixed or changing low ($r$) dimensional subspace of $\Re^n$, i.e., $\bm{\ell}_t = \P_{(t)} \a_t$ where $\P_{(t)}$ is an $n \times r$ basis matrix with $r \ll n$. The goal is to track $\operatorname{span}(\P_{(t)})$ and $\bm{\ell}_t$ either immediately or within a short delay. Denoting the set of missing entries at time $t$ as ${\mathcal{T}_{t}}$, \eqref{orpca_eq} can also be written as \begin{align} \bm{y}_t := \bm{\ell}_t - \bm{I}_{{\mathcal{T}_{t}}} \bm{I}_{{\mathcal{T}_{t}}}{}' \bm{\ell}_t + \v_t.
\label{eq:dynmc} \end{align} We use $\bm{z}_t:=- \bm{I}_{{\mathcal{T}_{t}}}{}' \bm{\ell}_t$ to denote the missing entries. Clearly, ${\mathcal{T}_{t}} = (\Omega_t)^c$ (here $^c$ denotes the complement set w.r.t. $\{1,2,\dots,n\}$). Writing $\bm{y}_t$ as above allows us to tap into the solution framework from earlier work \cite{rrpcp_perf,rrpcp_icml}. This was developed originally for solving robust ST, which involves tracking $\bm{\ell}_t$ and $\P_{(t)}$ from $\bm{y}_t : = \bm{\ell}_t + \v_t + \bm{x}_t$ where $\bm{x}_t$ is a sparse vector with the outliers as its nonzero entries. ST-miss can be interpreted as its (simpler) special case if we let $\bm{x}_t = - \bm{I}_{{\mathcal{T}_{t}}} \bm{I}_{{\mathcal{T}_{t}}}{}' \bm{\ell}_t$. It is simpler because the support of $\bm{x}_t$, ${\mathcal{T}_{t}}$, is known. Defining the $n \times d$ matrix $\bm{L}:= [\l_1, \l_2, \dots, \l_d]$, the above is also a matrix completion (MC) problem; with the difference that for MC the estimates are needed only in the end (not on-the-fly). We use $r_{\mathrm{mat}}$ to denote the rank of $\bm{L}$. \begin{figure}[t!]
\centering \begin{tikzpicture} \begin{groupplot}[ group style={ group size=1 by 1, horizontal sep=1cm, vertical sep=1cm, x descriptions at=edge bottom, y descriptions at=edge left, }, enlargelimits=false, width = .95\linewidth, height=5cm, grid=both, grid style={line width=.1pt, draw=gray!10}, major grid style={line width=.2pt,draw=gray!50}, minor tick num=5, ] \nextgroupplot[ legend entries={ NORST-miss (piecewise-constant), NORST-miss (changing each time), }, legend style={at={(.85, 1.3)}}, legend columns = 1, legend style={font=\footnotesize}, ymode=log, xlabel={\small{Number of Samples ($t$)}}, ylabel={\small{$\sin\theta_{\max}(\hat{\bm{P}}_{(t)},\P_{(t)}$)}}, xticklabel style= {font=\footnotesize, yshift=-1ex}, yticklabel style= {font=\footnotesize}, ] \addplot [black, line width=1.2pt, mark=oplus,mark size=3pt, mark repeat=2] table[x index = {0}, y index = {1}]{\pwjust}; \addplot [red, line width=1.2pt, mark=square,mark size=3pt, mark repeat=1, select coords between index={0}{24}] table[x index = {2}, y index = {3}] {\pwjust}; \end{groupplot} \end{tikzpicture} \caption{\small{ Demonstrating the need for the piecewise constant subspace change model. The black circles plot is for subspace changing at each time $t$, while the red squares one is for piecewise constant subspace change, with change occurring at $t=t_1$. The data is generated so that, in both experiments, $\sin\theta_{\max}(\P_{(t_1)}, \P_{(0)})$ is the same. In the piecewise constant case (red squares), we can achieve near perfect subspace recovery. But this is not possible in the ``changing at each time'' (black circles) case. For details, see Sec. \ref{sec:sims_main} and Fig. \ref{fig:stmiss}(c). }} \label{fig:pwconst} \vspace{.2cm} \end{figure} \Subsection{Identifiability assumptions} \label{identif} The above problem definition does not ensure identifiability. If $\bm{L}$ is sparse, it is impossible to recover it from a subset of its entries. 
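A toy example makes this first point concrete: if $\bm{L}$ itself is sparse, a single unobserved entry can carry all of the information, so the observed entries are consistent with the all-zero matrix. This is only an illustrative sketch with hypothetical sizes, not part of our formal setup:

```python
import numpy as np

n, d = 8, 8

# A rank-1 but maximally sparse matrix: all its "information" sits in one entry.
L = np.zeros((n, d))
L[2, 5] = 7.0

# Suppose that single entry happens to be missing. The observed entries are
# then identical to those of the all-zero matrix, so L is not identifiable
# from the observations alone.
mask = np.ones((n, d), dtype=bool)
mask[2, 5] = False
observed = np.where(mask, L, 0.0)

assert np.array_equal(observed, np.zeros((n, d)))
```

Any completion algorithm given only `observed` and `mask` must return the same answer for this $\bm{L}$ and for the zero matrix, which is why a denseness (incoherence) assumption on $\bm{L}$ is needed.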
Moreover, even if it is dense, it is impossible to complete it if all the missing entries are from a few rows or columns. Finally, if the subspace changes at every time $t$, the number of unknowns ($nr$) is more than the amount of available data at time $t$ ($n$), making it impossible to recover all of them. One way to ensure the subspaces' identifiability is to assume that they are piecewise constant with time, i.e., that \[ \P_{(t)} = \P_j \text{ for all } t \in [t_j, t_{j+1}), \ j=0,1,\dots, J, \] with $t_{j+1}-t_j \ge r$. Let $t_0=1$ and $t_{J+1}=d$. This ensures that at least $r$ $n$-dimensional data vectors $\bm{y}_t$ are available (this is the minimum needed to compute the subspace even if perfect data $\bm{y}_t=\bm{\ell}_t$ were available). The $t_j$'s are the subspace change times. With this model, $r_{\mathrm{mat}} \le r (J+1)$. When the above model is not assumed, one cannot track to any desired accuracy; see the black circles plot in Fig. \ref{fig:pwconst}. This is because the subspace change at each time can be interpreted as an $r$-dimensional piecewise constant subspace change plus noise. To understand this precisely, consider the first $\alpha$ frames, for any $\alpha \ge r$. Let $\P$ be the matrix of top $r$ left singular vectors of $[\P_{(0)}, \P_{(1)}, \dots, \P_{(\alpha-1)}]$. Then, in this interval, $\bm{y}_t := \mathcal{P}_{\Omega_t}( \P_{(t)} \bm{a}_t )$ can be rewritten as $\bm{y}_t = \mathcal{P}_{\Omega_t}( \P (\P'\P_{(t)} \bm{a}_t) ) + \v_t$ where $\v_t = \mathcal{P}_{\Omega_t}(\P_{(t)} \bm{a}_t - \P (\P'\P_{(t)}) \bm{a}_t )$. A similar argument can be extended to any set of $\alpha$ frames. As explained in earlier work on MC \cite{matcomp_first, matcomp_candes, recht_mc_simple}, one way to ensure that $\bm{L}$ is not sparse is to assume that its left and right singular vectors are dense. This is the well-known incoherence or denseness assumption.
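To make the denseness requirement concrete, the following sketch computes the smallest $\mu$ satisfying Definition \ref{defmu} for two bases (the sizes are illustrative only). A dense random basis has $\mu$ equal to a small constant, while standard-basis columns attain the worst case $\mu = n/r$:

```python
import numpy as np

def incoherence_mu(P):
    """Smallest mu with max_i ||P^(i)||_2^2 <= mu * r / n for a basis matrix P."""
    n, r = P.shape
    return (n / r) * np.max(np.sum(P * P, axis=1))

rng = np.random.default_rng(0)
n, r = 1000, 10

# Dense basis: rows of a random orthonormal matrix share the mass roughly
# equally, so mu is a small constant.
P_dense, _ = np.linalg.qr(rng.standard_normal((n, r)))

# Spiky basis: r standard-basis columns concentrate all the mass in r rows;
# this is the worst case mu = n / r.
P_spiky = np.eye(n)[:, :r]

assert incoherence_mu(P_spiky) == n / r
assert 1.0 <= incoherence_mu(P_dense) <= 10.0
```

Intuitively, small $\mu$ guarantees that no single row of the basis matrix is indispensable, so a bounded fraction of missing rows per column can be tolerated.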
Incoherence of the left singular vectors is nearly equivalent to imposing $\mu$-incoherence of the $\P_j$'s, with $\mu$ a numerical constant. As explained in \cite[Remark 2.4]{rrpcp_icml}, the following assumption on the $\bm{a}_t$'s is similar to right incoherence, and hence we call it ``statistical right incoherence''. \begin{definition}[Statistical Right Incoherence] We assume that the $\bm{a}_t$'s are zero mean, i.e., $\mathbb{E}[\bm{a}_t]= 0$; are mutually independent over time; have identical diagonal covariance matrix $\bm{\Lambda}$, i.e., $\mathbb{E}[\bm{a}_t \bm{a}_t{}'] = \bm{\Lambda}$ with $\bm{\Lambda}$ diagonal; and are element-wise bounded. Element-wise bounded means that there exists a numerical constant $\mu \ge 1$ such that $\max_i \max_t (\a_t)_i^2 \le \mu \max_t \lambda_{\max}(\mathbb{E}[\a_t\a_t{}'])$. This implies that the $\a_t$'s are sub-Gaussian with sub-Gaussian norm bounded by $\mu \max_t \lambda_{\max}(\mathbb{E}[\a_t\a_t{}']) = \mu \lambda_{\max}(\bm{\Lambda})$. A simple example of an element-wise bounded random vector (r.v.) is a uniform r.v. \label{def_right_incoh} \end{definition} Motivated by the Robust PCA literature \cite{robpca_nonconvex}, one way to ensure that the missing entries are spread out is to bound the maximum fraction of missing entries in any row and in any column. We use $\small{\text{max-miss-frac-row}}$ and $\small{\text{max-miss-frac-col}}$ to denote these. Since NORST-miss is a mini-batch approach that works on batches of $\alpha$ frames, we actually need to bound the maximum fraction of missing entries in any sub-matrix of $\bm{L}$ with $\alpha$ consecutive columns. We denote this by $\small{\text{max-miss-frac-row}}_{\alpha}$. We precisely define these below.
\begin{definition}[$\small{\text{max-miss-frac-col}}, \small{\text{max-miss-frac-row}}_\alpha$] For a discrete time interval $\mathcal{J}$, let $ \gamma(\mathcal{J}): = \max_{i=1,2,\dots,n} \frac{1}{|\mathcal{J}|} \sum_{t \in \mathcal{J}} \bm{1}_{ \{i \in {\mathcal{T}_{t}} \} } $ where $\bm{1}_{S}$ is the indicator function for statement $S$. Thus, $\sum_{t \in \mathcal{J}} \bm{1}_{ \{ i \in {\mathcal{T}_{t}} \} }$ counts the number of missing entries in row $i$ of the sub-matrix $\bm{L}_\mathcal{J}$ of the data matrix $\bm{L}:=[\bm{\ell}_1, \bm{\ell}_2, \dots, \bm{\ell}_d]$, and so $\gamma(\mathcal{J})$ is the maximum fraction of missing entries in any row of $\bm{L}_\mathcal{J}$. Let $\mathcal{J}^\alpha$ denote a time interval of duration $\alpha$. Then, $ \small{\text{max-miss-frac-row}}_{\alpha}:= \max_{\mathcal{J}^\alpha \subseteq [1, d]} \gamma(\mathcal{J}^\alpha). $ Also, $\small{\text{max-miss-frac-col}} := \max_t|{\mathcal{T}_{t}}|/n$. \end{definition} \section{The NORST-miss algorithm and guarantees} \label{sec:norstmiss} We explain the basic algorithm next. We give and discuss the guarantee for the noise-free ($\v_t=0$) case in Sec. \ref{sec:mainres_noisefree}. The corollary for the noisy case is given in Sec. \ref{sec:mainres_noisy}. Extensions of basic NORST-miss are given in Sec. \ref{sec:ext}. \Subsection{NORST-miss algorithm} \label{sec: norstmiss_algo} The complete pseudo-code for our algorithm is provided in Algorithm \ref{algo:NORST-st-basic}. After initialization, the algorithm iterates between a {\em projected Least Squares (LS)} step and a {\em Subspace Update (including Change Detect)} step. Broadly, projected LS estimates the missing entries of $\bm{\ell}_t$ at each time $t$. Subspace update toggles between the ``update'' phase and the change ``detect'' phase. In the update phase, it improves the estimate of the current subspace using a short mini-batch of ``filled in'' versions of $\bm{\ell}_t$.
In the detect phase, it uses these to detect subspace change. \textbf{Initialization:} The algorithm starts in the ``update'' phase and with zero initialization: $\hat{\bm{P}}_0 \leftarrow \bm{0}_{n \times r}$. For the first $\alpha$ frames, the projected LS step (explained below) simply returns $\l_t = \bm{y}_t$. Thus, a simpler way to understand the initialization is as follows: wait until $t=\alpha$ and then compute the first estimate of $\operatorname{span}(\P_0)$ as the $r$-SVD (matrix of top $r$ left singular vectors) of $[\bm{y}_1, \bm{y}_2, \dots \bm{y}_\alpha]$. This step is solving a PCA with missing data problem which, as explained in \cite{pca_dd_isit}, can be interpreted as a problem of PCA in sparse data-dependent noise. Because we assume that the number of missing entries at any time $t$ is small enough, and the set of missing entries changes sufficiently over time\footnote{Equivalently, we bound the maximum number of missing entries in any column and in any row of the data matrix.}, we can prove that this step gives a good first estimate of the subspace. \textbf{Projected LS:} Recall that NORST-miss is a modification of NORST for robust ST from \cite{rrpcp_icml}. In robust ST, sudden subspace changes cannot be detected because these are confused for outliers. Its projected-LS step is thus designed using a slow (small) subspace change assumption. However, as we will explain later, for the current missing data setting, it also works in case of sudden changes. Suppose that the previous subspace estimate, $\hat{\bm{P}}_{(t-1)}$, is a ``good enough'' estimate of the previous subspace $\P_{(t-1)}$. Under slow subspace change, it is valid to assume that $\operatorname{span}(\P_{(t-1)})$ is either equal to or close to $\operatorname{span}(\P_{(t)})$.
Thus, under this assumption, it is a good idea to project $\bm{y}_t$ onto the orthogonal complement of $\hat{\bm{P}}_{(t-1)}$ because this will nullify most of $\bm{\ell}_t$, i.e., the not-nullified part of $\bm{\ell}_t$, $\bm{b}_t: = \bm{\Psi} \bm{\ell}_t$, will be small. Here $\bm{\Psi} := \bm{I} - \hat{\bm{P}}_{(t-1)}\hat{\bm{P}}_{(t-1)}{}'$. Using this idea, we compute $\tilde{\bm{y}}_t:= \bm{\Psi} \bm{y}_t = \bm{\Psi}_{{\mathcal{T}_{t}}} \bm{z}_t + \bm{b}_t + \bm{\Psi} \v_t$. Estimating $\bm{z}_t$ can be interpreted as a LS problem $\min_{\bm{z}} \|\tilde{\bm{y}}_t - \bm{\Psi}_{{\mathcal{T}_{t}}} \bm{z}\|^2$. Solving this gives \begin{align} \hat{\bm{z}}_t = \left(\bm{\Psi}_{{\mathcal{T}_{t}}}{}'\bm{\Psi}_{{\mathcal{T}_{t}}} \right)^{-1} \bm{\Psi}_{{\mathcal{T}_{t}}}{}'\tilde{\bm{y}}_t. \label{eq:zhatt} \end{align} Next, we use this to compute $\l_t = \bm{y}_t - \bm{I}_{{\mathcal{T}_{t}}} \hat{\bm{z}}_t$. Observe that the missing entries $\bm{z}_t$ are recoverable as long as $\bm{\Psi}_{{\mathcal{T}_{t}}}$ is well-conditioned. A necessary condition for this is $(n-r) > |{\mathcal{T}_{t}}|$. As we will see later, a sufficient condition is $|{\mathcal{T}_{t}}| < c n / r$ because this ensures that the restricted isometry constant (RIC) \cite{candes_rip} of $\bm{\Psi}$ of level $|{\mathcal{T}_{t}}|$ is small. In settings where $\operatorname{span}(\P_{(t-1)})$ is {\em not} close to $\operatorname{span}(\P_{(t)})$ (sudden subspace change), the above approach still works. Of course, in this case, it is not any better (or worse) than re-initialization to zero, because, in this case, $ \|\bm{\Psi} \bm{\ell}_t\|$ is of the same order as $\|\bm{\ell}_t\|$. We can use the same arguments as those used for the initialization step to argue that the first subspace update works even in this case. \textbf{Subspace Update:} The $\l_t$'s are used for subspace update.
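The projected-LS fill-in just described can be sketched in a few lines of numpy. This is a minimal sketch under our own naming (`projected_ls_fill` is not from the paper's code); it computes $\tilde{\bm{y}}_t = \bm{\Psi}\bm{y}_t$, solves \eqref{eq:zhatt} via least squares, and returns $\bm{y}_t - \bm{I}_{\mathcal{T}_t}\hat{\bm{z}}_t$.

```python
import numpy as np

def projected_ls_fill(y, T, P_hat):
    """Fill the missing entries (index set T) of y by projected LS:
    project y orthogonal to P_hat, solve the LS problem for z_hat,
    and return ell_hat = y - I_T z_hat."""
    n = y.shape[0]
    Psi = np.eye(n) - P_hat @ P_hat.T        # Psi = I - P_hat P_hat'
    y_tilde = Psi @ y                        # Psi y
    Psi_T = Psi[:, T]                        # columns of Psi indexed by T
    z_hat = np.linalg.lstsq(Psi_T, y_tilde, rcond=None)[0]
    ell_hat = y.copy()
    ell_hat[T] -= z_hat                      # y - I_T z_hat
    return ell_hat
```

With a perfect previous subspace estimate and a small enough missing set (so that $\bm{\Psi}_{\mathcal{T}_t}$ is well conditioned), this recovers $\bm{\ell}_t$ exactly.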
In its simplest (and provably correct) form, this is done once every $\alpha$ frames by $r$-SVD on the matrix formed by the previous $\alpha$ $\l_t$'s. Let ${\hat{t}}_j$ be the time at which the $j$-th subspace change is detected (let ${\hat{t}}_0 := 0$). For each $k=1,2,\dots, K$, at $t = {\hat{t}}_j + k \alpha-1$, we compute the $r$-SVD of $\hat{\bm{L}}_{t; \alpha}$ to get $\hat{\bm{P}}_{j,k}$ (the $k$-th estimate of subspace $\P_j$). After $K$ such updates, i.e., at $t = {\hat{t}}_j + K\alpha - 1:={\hat{t}}_{j,fin}$, the update is complete and the algorithm enters the ``detect'' phase. Each update step is a PCA in sparse data-dependent noise problem. This allows us to use the result from \cite{pca_dd_isit} to show that, as long as the missing entries' set changes enough over time ($\small{\text{max-miss-frac-row}}_\alpha$ is bounded for each interval), each update step reduces the subspace recovery error to $0.3$ times its previous value. Thus, by setting $K=C \log (1/\zz)$, one can show that, after $K$ updates, the subspace is recovered to $\zz$ accuracy. \textbf{Subspace change detect:} To understand the detection strategy simply, assume that the previous subspace $\P_{j-1}$ has been estimated to $\zz$ accuracy by $t= {\hat{t}}_{j-1,fin} = {\hat{t}}_{j-1} + K\alpha - 1$ and denote it by $\hat{\bm{P}}_{j-1}:= \hat{\bm{P}}_{j-1,K}$. Also assume that $\v_t=0$. At every $t = {\hat{t}}_{j-1,fin} + u \alpha-1$, $u=1,2,\dots$, we detect change by checking whether the maximum singular value of the matrix $(\bm{I}-\hat{\bm{P}}_{j-1}\hat{\bm{P}}_{j-1}{}') \hat{\bm{L}}_{t;\alpha}$ is above a pre-set threshold, $\sqrt{\omega_{evals} \alpha}$, or not. This works because, if the subspace has not changed, this matrix will have all singular values of order $\zz \sqrt{\lambda^+}$. If it has changed, its largest singular value will be at least $\sin\theta_{\max}(\P_{j-1}, \P_j) \sqrt{\lambda^-}$.
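Concretely, one such detection check can be sketched as follows (an illustrative numpy sketch, not the paper's code; `detect_change` and its arguments are our naming):

```python
import numpy as np

def detect_change(L_hat_batch, P_hat_prev, omega_evals):
    """Flag a subspace change when the largest singular value of the
    projected batch exceeds the threshold sqrt(omega_evals * alpha)."""
    n, alpha = L_hat_batch.shape
    # B = (I - P_hat_prev P_hat_prev') L_hat_{t;alpha}
    B = (np.eye(n) - P_hat_prev @ P_hat_prev.T) @ L_hat_batch
    sigma_max = np.linalg.norm(B, 2)   # largest singular value of B
    return sigma_max >= np.sqrt(omega_evals * alpha)
```

If the batch still lies in the span of the previous estimate, $\bm{B}$ is (numerically) zero and no change is flagged; a component along a new direction pushes the top singular value above the threshold.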
By picking $\zz$ small enough, one can ensure that, whp, all changes are detected. \textbf{NORST-miss-smoothing for MC:} The above is the tracking/online/filtering mode of NORST-miss. It outputs an estimate of $\bm{\ell}_t$ as soon as a new measurement vector $\bm{y}_t$ arrives and an estimate of the subspace every $\alpha$ frames. Notice that, order-wise, $\alpha$ is only a little more than $r$, which is the minimum delay needed to compute the subspace even if perfect data $\bm{y}_t=\bm{\ell}_t$ were available. Once an $\zz$-accurate estimate of the current subspace is available, one can improve all past estimates of $\bm{\ell}_t$ to ensure that all estimates are $\zz$-accurate. This is called the smoothing mode of operation. To be precise, this is done as given in line $25$ of Algorithm \ref{algo:NORST-st-basic}. This allows us to get a completed matrix $\hat{\bm{L}}$ with all columns being $\zz$-accurate. \textbf{Memory Complexity:} In online or filtering mode, NORST-miss needs $\alpha =O( r \log n)$ frames of storage. In smoothing mode, it needs $\mathcal{O}((K+2)\alpha)=\mathcal{O}( r \log n \log(1/\epsilon))$ frames of memory. Therefore its memory complexity, even in the smoothing mode, is just $\mathcal{O}(n r \log n \log(1/\epsilon))$. Thus, it provides a nearly memory-optimal mini-batch solution for MC. \textbf{Algorithm parameters:} The algorithm has 4 parameters: $r$, $K$, $\alpha$, and $\omega_{evals}$. Theoretically, these are set as follows: assume that $r,\lambda^+,\lambda^-$ are known and pick a desired recovery error $\zz$. Set $\alpha = C_1 f^2 r \log n$ with $f=\lambda^+/\lambda^-$, $K=C_2 \log(1/\zz)$, and $\omega_{evals} = c \lambda^-$ with $c$ a small constant.
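The theoretical parameter settings just listed can be computed mechanically; the sketch below uses hypothetical constants $C_1$, $C_2$, $c$ (their concrete values are ours for illustration, not from the paper):

```python
import math

def norst_miss_params(n, r, lam_plus, lam_minus, eps, C1=20.0, C2=1.0, c=0.01):
    """Theoretical parameter settings for NORST-miss; the numerical
    constants C1, C2, c are illustrative placeholders."""
    f = lam_plus / lam_minus                      # condition number of Lambda
    alpha = math.ceil(C1 * f**2 * r * math.log(n))  # mini-batch size
    K = math.ceil(C2 * math.log(1.0 / eps))         # number of update steps
    omega_evals = c * lam_minus                     # detection threshold
    return alpha, K, omega_evals
```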
We explain practical approaches in Sec. \ref{sec:sims_main}. \Subsection{Main Result: noise-free ST-miss and MC} \label{sec:mainres_noisefree} First, for simplicity, consider the noise-free case, i.e., assume $\v_t = 0$. Let ${\text{dif}}_j:= \sin\theta_{\max}(\P_{j-1}, \P_j)$. \begin{theorem}[NORST-miss, $\v_t=0$ case] Consider Algorithm \ref{algo:NORST-st-basic}. Let $\alpha := C f^2 r \log n$, $\bm{\Lambda}:= \mathbb{E}[\a_1 \a_1{}']$, $\lambda^+:=\lambda_{\max}(\bm{\Lambda})$, $\lambda^-:=\lambda_{\min}(\bm{\Lambda})$, $f:=\lambda^+/\lambda^-$. Pick an $\zz \leq \min(0.01,0.03 \min_j \sin\theta_{\max}(\P_{j-1}, \P_j)^2/f)$. Let $K := C \log (1/\zz)$. If \begin{enumerate} \item left and statistical right incoherence: the $\P_j$'s are $\mu$-incoherent and the $\bm{a}_t$'s satisfy statistical right incoherence (Definition \ref{def_right_incoh}); \item $\small{\text{max-miss-frac-col}} \le \frac{c_1}{\mu r}$, $\small{\text{max-miss-frac-row}}_{\alpha} \le \frac{c_2}{f^2}$; \item subspace change: $t_{j+1}-t_j > C r \log n \log(1/\zz)$; \item the $\bm{a}_t$'s are independent of the set of missing entries ${\mathcal{T}_{t}}$; \end{enumerate} then, with probability (w.p.) at least $1 - 10 d n^{-10} $, \begin{enumerate} \item subspace change is detected quickly: $t_j \le {\hat{t}}_j \le t_j+2 \alpha$, \item the subspace recovery error satisfies \[ \sin\theta_{\max}(\hat{\bm{P}}_{(t)}, \P_{(t)}) \le \left\{ \begin{array}{ll} (\zz + {\text{dif}}_j) & \text{ if } t \in \mathcal{J}_1, \\ (0.3)^{k-1} (\zz + {\text{dif}}_j) & \text{ if } t \in \mathcal{J}_k, \\ \zz & \text{ if } t \in \mathcal{J}_{K+1}. \end{array} \right. \] \item and {$\|\l_t-\bm{\ell}_t\| \le 1.2 (\sin\theta_{\max}(\hat{\bm{P}}_{(t)}, \P_{(t)}) + \zz) \|\bm{\ell}_t\|$}.
\end{enumerate} Here $\mathcal{J}_0:= [t_j, {\hat{t}}_j+\alpha)$, $\mathcal{J}_{k} := [{\hat{t}}_j+k\alpha, {\hat{t}}_j+ (k+1)\alpha)$ and $\mathcal{J}_{K+1} := [{\hat{t}}_j+ (K+1)\alpha, t_{j+1})$ and ${\text{dif}}_j:= \sin\theta_{\max}(\P_{j-1}, \P_j)$. The memory complexity is $\mathcal{O}(n r \log n \log(1/\zz))$ and the time complexity is $\mathcal{O}(n d r \log(1/\zz))$. \label{thm:stmiss} \end{theorem} \begin{corollary}[NORST-miss for MC]\label{cor:thm1} Under the assumptions of Theorem \ref{thm:stmiss}, NORST-miss-smoothing (line 25 of Algorithm \ref{algo:NORST-st-basic}) satisfies {$\|\l_t-\bm{\ell}_t\| \le \zz \|\bm{\ell}_t\|$ } for all $t$. Thus, $\|\hat{\bm{L}} - \bm{L}\|_F \le \zz \|\bm{L}\|_F$. \end{corollary} The proof is similar to that given in \cite{rrpcp_icml} for the correctness of NORST for robust ST. Please see the Appendix for the changes. For the purpose of this discussion, we treat the condition number $f$ and the incoherence parameter $\mu$ as constants. The above result proves that NORST-miss tracks piecewise constant subspaces to $\epsilon$ accuracy, within a delay that is near-optimal, under the following assumptions: left and ``statistical'' right incoherence holds; the fraction of missing entries in any column of $\bm{L}$ is $\mathcal{O}(1/r)$ while that in any row (of $\alpha$-consecutive column sub-matrices of it) is $\mathcal{O}(1)$. Moreover, ``smoothing mode'' NORST-miss returns $\zz$-accurate estimates of each $\bm{\ell}_t$ and thus also solves the MC problem. Even in this mode, it has {\em near-optimal memory complexity and is order-wise as fast as vanilla PCA}. The above result is the {\em first complete guarantee} for ST-miss. Also, unlike past work, it can {\em deal with piecewise constant subspaces while also automatically reliably detecting subspace change} with a near-optimal delay. Consider the total number of times a subspace can change, $J$.
Since we need the subspace to be constant for at least $(K+3)\alpha$ frames, $J$ needs to satisfy $J (K+3) \alpha \le d$. Since we need $(K+3)\alpha$ to be at least $C r \log n \log(1/\zz)$, this means that $J$ must satisfy \[ J \le c \frac{d}{r \log n \log(1/\zz)}. \] This, in turn, implies that the rank of the entire matrix, $r_{\mathrm{mat}}$, can be at most \[ r_{\mathrm{mat}} = r J \le c \frac{d}{\log n \log (1/\zz)}. \] Observe that this upper bound is nearly linear in $d$. This is what makes our corollary for MC interesting. It implies that we can recover $\bm{L}$ to $\zz$ accuracy even in this {\em nearly linearly growing rank regime}, of course only if the subspace changes are piecewise constant with time and frequent enough so that $J$ is close to its upper bound. In contrast, existing MC guarantees require left and right incoherence of $\bm{L}$ and a Bernoulli model on the observed entries with observation probability $m / n d$, where $m$ is the required number of observed entries on average. The convex solution \cite{recht_mc_simple} needs $m = C n r_{\mathrm{mat}} \log^2 n$ while the best non-convex solution \cite{rmc_gd} needs $m = C n r_{\mathrm{mat}}^2 \log^2 n $ observed entries. The non-convex approach is much faster, but its required $m$ depends on $r_{\mathrm{mat}}^2$ instead of $r_{\mathrm{mat}}$ in the convex case. See Sec. \ref{sec:prior_art} for a detailed discussion, and Table \ref{tab:comp_mc} for a summary of it. On the other hand, our missing fraction bounds imply that the total number of missing entries needs to be at most $ \min( n d \cdot \small{\text{max-miss-frac-row}}, d n \cdot \small{\text{max-miss-frac-col}}) = c \frac{nd}{r}$, or that we need at least $m = (1 - c/r) nd$ observed entries. If subspace changes are infrequent ($J$ is small) so that $r_{\mathrm{mat}} \approx r \ll d$, our requirement on observed entries is much stronger than what existing MC approaches need.
However, suppose that $J$ equals its allowed upper bound so that $r_{\mathrm{mat}} = c \frac{d}{\log n \log (1/\zz)}$, but $r$ is small, say $r = \log n$. In this setting, we need $m = nd(1 - c/\log n)$ observed entries while the convex MC solution needs $c n \frac{d}{\log n \log (1/\zz)} \log^2 n = c n d \frac{\log n}{\log(1/\zz)}$. If $\zz =1/n$, this is $c \cdot nd$; if $\zz$ is larger, this is even larger than $c \cdot nd$. Thus, in this regime, our requirement on $m$ is only a little more stringent. Our advantage is that we do not require a Bernoulli (or any probability) model on the observed entries' set {\em and} our approach is much faster, memory-efficient, and nearly delay-optimal. This is true both theoretically and in practice; see Tables \ref{tab:comp_mc} and \ref{tab:all_MCalgos_frob}. If we consider non-convex MC solutions, they are much faster, but they cannot work in this nearly linear rank regime at all because they will need $C n d^2 / \log^2 n$ observed entries, which is not possible. A possible counter-argument to the above can be: what if one feeds smaller batches of data to an MC algorithm? Since the subspace change times are not known, it is not clear how to do this. One could feed in batches of size $K \alpha$, which is the memory size used by NORST-miss-smoothing. Even in this case, the discussion is the same as above. To simplify writing, suppose that $\zz = 1/n$. The convex solution will need $m = c n (C r \log^2 n )$ observed entries for a matrix of size $n \times (C r \log^2 n)$. Thus the required $m$ is again linear in the matrix size. NORST-miss-smoothing will need this number to be $ (1 - c/r) n (C r \log^2 n) $, which is again only slightly worse when $r$ is small. The non-convex methods will again not work. The Bernoulli model on the observed entries' set can often be an impractical requirement. For example, erasures due to transmission errors or image/video degradation often come in bursts.
Similarly, video occlusions by foreground objects are often slow moving or occasionally static, rather than being totally random. Our guarantee does not require the Bernoulli model, but the tradeoff is that, in general, it needs more observed entries. A similar tradeoff is observed in the robust PCA literature. The guarantee of \cite{rpca} required a uniform random or Bernoulli model on the outlier supports, but tolerated a constant fraction of corrupted entries. In other words, it needed the number of uncorrupted entries to be at least $ c \cdot nd$. Later algorithms such as AltProj \cite{robpca_nonconvex} did not require any random model on the outlier support, but needed the number of un-corrupted entries to be at least $(1 -c/r) nd$, which is a slightly more stringent requirement. \begin{table}[t!] \caption{\small{List of Symbols and Assumptions used in Theorem \ref{thm:stmiss}.}} \vspace{.1cm} \centering \resizebox{\linewidth}{!}{ \renewcommand{\arraystretch}{1.5} \begin{tabular}{cccc} \toprule \multicolumn{2}{c}{{\bf Observations: } $\bm{y}_t = \mathcal{P}_{\Omega_t}(\bm{\ell}_t) + \v_t = \mathcal{P}_{\Omega_t}(\P_{(t)} \bm{a}_t) + \v_t$} \\ \toprule Symbol & Meaning \\ \midrule $t_j$ & $j$-th subspace change time \\ for $t \in [t_j, t_{j+1})$, $\P_{(t)} = \P_j$ & Subspace at time $t$ \\ $\mathcal{P}_{\Omega_t}(\cdot)$ & mask to select elements present in $\Omega_t$ \\ $\Omega_t$ & Support set of observed entries \\ ${\mathcal{T}_{t}} (= \Omega_t^c) $ & Support set of missing entries \\ $\v_t$ & dense, unstructured noise \\ \midrule \multicolumn{2}{c}{{\bf Principal Subspace Coefficients} ($\bm{a}_t$'s)} \\ \midrule \multicolumn{2}{c}{element-wise bounded, zero mean,} \\ \multicolumn{2}{c}{mutually independent with identical and diagonal covariance} \\ \multicolumn{2}{c}{$\mathbb{E}{[\bm{a}_t \bm{a}_t{}']} := \bm{\Lambda}$} \\ $\lambda_{\max}(\bm{\Lambda}) = \lambda^+ (\lambda_{\min}(\bm{\Lambda}) = \lambda^-)$ & Max. (min.)
eigenvalue of $\bm{\Lambda}$ \\ $f := \lambda^+/\lambda^-$ & Condition Number of $\bm{\Lambda}$ \\ \midrule \multicolumn{2}{c}{{\bf Missing Entries} ($\bm{z}_t = - \bm{I}_{{\mathcal{T}_{t}}}{}' \bm{\ell}_t$)} \\ \midrule Row-Missing Entries & $\small{\text{max-miss-frac-row}}_{\alpha} \leq 0.001/f^2$ \\ Column-Missing Entries & $\small{\text{max-miss-frac-col}} \leq 0.01/\mu r$ \\ \midrule \multicolumn{2}{c}{{\bf Intervals for $j$-th subspace change and tracking}} \\ \midrule ${\hat{t}}_j$ & $j$-th subspace change detection time \\ ${\hat{t}}_{j, fin}$ & $j$-th subspace update complete \\ $\mathcal{J}_0 := [t_j, {\hat{t}}_j)$ & interval before $j$-th subspace change detected \\ $\mathcal{J}_k := [{\hat{t}}_j + (k-1)\alpha, {\hat{t}}_j + k\alpha)$ & $k$-th subspace update interval \\ $\mathcal{J}_{K+1}:=[{\hat{t}}_j + K\alpha, t_{j+1})$ & subspace update completed \\ \midrule \multicolumn{2}{c}{{\bf Algorithm \ref{algo:NORST-st-basic} Parameters}} \\ \midrule $\alpha$ & \# frames used for subspace update \\ $K$ & \# of subspace updates for each $j$ \\ $\omega_{evals}$ & threshold for subspace change detection \\ \bottomrule \end{tabular} \label{tab:not1}} \end{table} \Subsection{Main Result -- ST-miss and MC with noise} \label{sec:mainres_noisy} So far, we gave a result for ST-miss and MC in the noise-free case. A more practical model is one that allows for small unstructured noise (modeling error). Our result also extends to this case with one extra assumption. In the noise-free case, there is no real lower bound on the amount of subspace change required for reliable detection. Any nonzero subspace change can be detected (and hence tracked) as long as the previous subspace is recovered to $\zz$ accuracy with $\zz$ small enough compared to the amount of change.
If the noise $\v_t$ is such that its maximum covariance in any direction is smaller than $\zz^2 \lambda^-$, then Theorem \ref{thm:stmiss} and Corollary \ref{cor:thm1} hold with almost no changes. If the noise is larger, as we will explain next, we will need the amount of subspace change to be larger than the noise-level. Also, we will be able to track the subspaces only up to accuracy equal to the noise level. Suppose that the noise $\v_t$ is bounded. Let $\lambda_v^+:=\|\mathbb{E}[\v_t \v_t{}']\|$ be the noise power and let $r_v:= \max_t \|\v_t\|^2 / \lambda_v^+$ be the effective noise dimension. Trivially, $r_v \le n$. To understand things simply, first suppose that the subspace is fixed. If the noise is isotropic (noise covariance is a multiple of identity), then, as correctly pointed out by an anonymous reviewer, one can achieve noise-averaging in the PCA step by picking $\alpha$ large enough: it needs to grow as \footnote{$\alpha$ needs to grow as $ C \min(r_v \log n, n) (\lambda_v^+/\lambda^-)/\epsilon^2$; for the isotropic case, $r_v = n$ and thus the discussion follows.} $ n (\lambda_v^+ /\lambda^-)/\zz^2$. Isotropic noise is the most commonly studied setting for PCA, but it is not the most practical. In the more practical non-isotropic noise case, it is not even possible to achieve noise-averaging by increasing $\alpha$. In this setting, with any choice of $\alpha$, the subspace can be recovered only up to the noise level, i.e., we can only achieve recovery accuracy $c \lambda_v^+ /\lambda^-$. If we are satisfied with slightly less accurate estimates, i.e., if we set $\zz= c \sqrt{\frac{\lambda_v^+}{\lambda^-} }$, and if the effective noise dimension $r_v = C r$, then the required value of $\alpha$ does not change from what it is in Theorem \ref{thm:stmiss}. Now consider the changing subspace setting. 
We can still show that we can detect subspace changes that satisfy $0.03 \min_j \sin\theta_{\max}(\P_{j-1}, \P_j)^2/f \ge \zz$, but now $\zz = c \sqrt{\frac{\lambda_v^+}{\lambda^-} }$. This imposes a non-trivial lower bound on the amount of change that can be detected. The above discussion is summarized in the following corollary. \begin{corollary}[ST-miss and MC with $\v_t \neq 0$]\label{cor:noisy} Suppose that $\v_t$ is bounded, mutually independent and identically distributed (iid) over time, and is independent of the $\bm{\ell}_t$'s. Define $\lambda_v^+:=\|\mathbb{E}[\v_t \v_t{}']\|$ and $r_v:= \frac{\max_t\|\v_t\|^2}{\lambda_v^+}$. \begin{itemize} \item If $r_v = C r$ and $\lambda_v^+ \le c \zz^2 \lambda^-$, then the results of Theorem \ref{thm:stmiss} and Corollary \ref{cor:thm1} hold without any changes. \item For a general $\lambda_v^+$, we have the following modified result. Suppose that $r_v = C r$, $\min_j \sin\theta_{\max}(\P_{j-1}, \P_j)^2 \ge C f \sqrt{\frac{\lambda_v^+}{\lambda^-} }$, and conditions 1, 2, 3 of Theorem \ref{thm:stmiss} hold. Then all conclusions of Theorem \ref{thm:stmiss} and Corollary \ref{cor:thm1} hold with $\zz = c \sqrt{\frac{\lambda_v^+}{\lambda^-} }$. \item For a general $r_v $, if we set $\alpha = C f^2 \max(r \log n, \min(n, r_v \log n))$ then the above conclusions hold. \end{itemize} \end{corollary} If the noise is {\em isotropic}, the next corollary shows that we can track to any accuracy $\zz$ by increasing the value of $\alpha$. It is not interesting from a tracking perspective because its required value of $\alpha$ is much larger. However, it provides a result that is comparable to the result for streaming PCA with missing data from \cite{streamingpca_miss} that we discuss later. 
\begin{corollary}[ST-miss and MC, isotropic noise case]\label{cor:noisy2} If the noise $\v_t$ is isotropic (so that $r_v=n$), then, for any desired recovery error level $\zz$, if $\alpha = C n \frac{ \frac{\lambda_v^+}{\lambda^-} }{\zz^2} $, and all other conditions of Theorem \ref{thm:stmiss} hold, then all conclusions of Theorem \ref{thm:stmiss} and Corollary \ref{cor:thm1} hold. \end{corollary} We should mention here that the above discussion and results assume that PCA is solved via a simple SVD step (compute top $r$ left singular vectors). In the non-isotropic noise case, if its covariance matrix were known (or could be estimated), then one can replace simple SVD by pre-whitening techniques followed by SVD, in order to get results similar to the isotropic noise case, e.g., see \cite{leeb_non_iso}. \renewcommand{\arraystretch}{1.3} \begin{table*}[t!] \caption{\small{Comparing guarantees for ST-miss. We treat the condition number and incoherence parameters as constants for this discussion. }} \begin{center} \resizebox{.9\linewidth}{!}{ \begin{tabular}{ccccccc} \toprule \textbf{Algorithm} &\textbf{Tracking} & \textbf{Memory} & \textbf{Time} & \textbf{Allows changing} & \textbf{Observed Entries} \\ &\textbf{delay} & & & \textbf{subspaces?} \\ \midrule GROUSE \cite{grouse} & Partial Guarantee & $\mathcal{O}(nr)$ & $\mathcal{O}(n d \rho r^2)$ & No & i.i.d. Bernoulli($\rho$) \\ PETRELS \cite{petrels_new} & Partial Guarantee & $\mathcal{O}(nr^2)$ & $\mathcal{O}(n d \rho r^2)$ & No & i.i.d. Bernoulli($\rho$) \\ MBPM \cite{streamingpca_miss, eldar_jmlr_ss} & $ d \succsim \frac{r^2 \log^2 n \log (1/\zz)}{\rho^2}$ & $\mathcal{O}(nr)$ & $\mathcal{O}(ndr)$ & No & i.i.d.
Bernoulli($\rho$) \\ NORST-miss & $d \geq r \log n \log(1/\epsilon)$ & $\mathcal{O}\left(nr \log n \log \frac{1}{\epsilon}\right)$ & $\mathcal{O}\left(n d r \log \frac{1}{\epsilon}\right)$ & Yes & bounded fraction, \\ (this work) & & & & & $c/r$ per column, $c$ per row \\ \bottomrule \end{tabular} } \label{tab:comp_st} \end{center} \caption{\small{Comparing MC guarantees. Recall $r_{\mathrm{mat}}:=\operatorname{rank}(\bm{L}) \le r J$. In the regime when the subspace changes frequently so that $J$ equals its upper bound and $r_{\mathrm{mat}} \approx d/\log^2 n$, NORST-miss is better than the non-convex methods (AltMin, projGD, SGD) and only slightly worse than the convex ones (NNM). In general, the sample complexity for NORST-miss is significantly worse than all the MC methods. }} \vspace{.14cm} \begin{center} \resizebox{.95\linewidth}{!}{ \begin{tabular}{@{}c@{}c@{}c@{}c@{}c@{}c@{}c} \toprule \textbf{Algorithm} & \textbf{Sample complexity} & \textbf{Memory} & \textbf{Time} & \multicolumn{2}{c}{\textbf{Observed entries}} \\ & \textbf{(\# obs. entries, $m$)} & \ & \ & \multicolumn{2}{c}{} \\ \midrule nuc norm min (NNM) \cite{matcomp_first} & $\Omega(n r_{\mathrm{mat}} \log^2 n)$ & $\mathcal{O}(nd)$ & $\mathcal{O}(n^3/\sqrt{\epsilon})$ & \multicolumn{2}{c}{i.i.d. Bernoulli ($m/nd$)} \\ weighted NNM \cite{coherent_mc} & $\Omega(n r_{\mathrm{mat}} \log^2 n)$ & $\mathcal{O}(nd)$ & $\mathcal{O}(n^3 /\sqrt{\epsilon})$ & \multicolumn{2}{c}{indep. Bernoulli} \\ AltMin \cite{optspace} & $\Omega(n r_{\mathrm{mat}}^{4.5} \log \frac{1}{\epsilon})$ & $\mathcal{O}(nd)$ & $\mathcal{O}(n r_{\mathrm{mat}} \log \frac{1}{\epsilon})$ & \multicolumn{2}{c}{i.i.d. Bernoulli ($m/nd$)} \\ projected-GD \cite{rmc_gd} & $\Omega(n r_{\mathrm{mat}}^2 \log^2 n)$ & $\mathcal{O}(nd)$ & $\mathcal{O}(n r_{\mathrm{mat}}^3 \log^2 n \log \frac{1}{\epsilon})$ & \multicolumn{2}{c}{i.i.d.
Bernoulli ($m/nd$)} \\ online SGD \cite{onlineMC1} & $\Omega\left(n r_{\mathrm{mat}}^2 \log n \left(r_{\mathrm{mat}}+\log \frac{1}{\epsilon}\right)\right)$ & $\mathcal{O}(nd)$ & $\mathcal{O}\left(n r_{\mathrm{mat}}^4 \log n \log \frac{1}{\epsilon}\right)$ & \multicolumn{2}{c}{i.i.d. Bernoulli ($m/nd$)} \\ \textbf{NORST-miss} & $\bm{\Omega((1 - \frac{c}{r}) nd)}$ & $\mathcal{O}\left(nr \log n \log \frac{1}{\epsilon}\right)$ & $\mathcal{O}\left(n d r \log \frac{1}{\epsilon}\right)$ & \multicolumn{2}{c}{\textbf{$\le c \cdot d$ per row}} \\ {\bf (this work)} & & & & \multicolumn{2}{c}{\textbf{$\le (1 -\frac{c}{r}) \cdot n$ per column}} \\ {\bf Sample-Efficient} & $\Omega ( n r_{\mathrm{mat}} \log^2 n \log r )$ \ & \ $\mathcal{O}\left(nr \log n \log \frac{1}{\epsilon}\right)$ & $\mathcal{O}\left(n d r \log \frac{1}{\epsilon}\right)$ & \multicolumn{2}{c}{i.i.d. Bernoulli($\rho_t$) where,} \\ {\bf NORST-miss} & & & & \multicolumn{2}{c}{$\rho_t = 1-c/r$ for $t \in [t_j, t_j +(K+2)\alpha)$} \\ {\bf (this work)} & & & & \multicolumn{2}{c}{$\rho_t = r \log^2 n \log r /nd$ other times} \\ \bottomrule \end{tabular} } \label{tab:comp_mc} \end{center} {\em Note:} Here, $f(n) = \Omega(g(n))$ means that there exist $G>0$ and $n_0 > 0$ s.t. for all $n > n_0$, $|f(n)| \geq G \cdot |g(n)|$. \vspace{-.2in} \end{table*} \begin{algorithm}[t!]
\caption{NORST-miss.} \label{algo:NORST-st-basic} \begin{algorithmic}[1] \STATE \textbf{Input}: $\bm{y}_t$, ${\mathcal{T}_{t}}$ \textbf{Output}: $\hat{\bm{\ell}}_t$, $\hat{\bm{P}}_{(t)}$ \textbf{Parameters:} $r$, $K = C\log(1/\zz)$, $\alpha= C f^2 r \log n$, $\omega_{evals} = 2 \zz^2 \lambda^+$ \STATE $\hat{\bm{P}}_0 \leftarrow \bm{0}_{n \times r}$, $j~\leftarrow~1$, $k~\leftarrow~1$ \STATE $\mathrm{phase} \leftarrow \mathrm{update}$; ${\hat{t}}_{0} \leftarrow 0$; ${\hat{t}}_{-1, fin} = 0$ \FOR {$t > 0$} \STATE $\bm{\Psi} \leftarrow \bm{I} - \hat{\bm{P}}_{(t-1)}\hat{\bm{P}}_{(t-1)}{}'$; $\tilde{\bm{y}}_t \leftarrow \bm{\Psi} \bm{y}_t$; \STATE $\hat{\bm{\ell}}_t \leftarrow \bm{y}_t - \bm{I}_{{\mathcal{T}_{t}}} ( \bm{\Psi}_{{\mathcal{T}_{t}}}{}' \bm{\Psi}_{{\mathcal{T}_{t}}} )^{-1} \bm{\Psi}_{{\mathcal{T}_{t}}}{}'\tilde{\bm{y}}_t$. \IF{$\text{phase} = \text{update}$} \IF {$t = {\hat{t}}_j + u \alpha - 1$ for $u = 1,\ 2,\ \cdots,$} \STATE $\hat{\bm{P}}_{j, k} \leftarrow r$-SVD$[\hat{\bm{L}}_{t; \alpha}]$, $\hat{\bm{P}}_{(t)} \leftarrow \hat{\bm{P}}_{j,k}$, $k \leftarrow k + 1$. \ELSE \STATE $\hat{\bm{P}}_{(t)} \leftarrow \hat{\bm{P}}_{(t-1)}$ \ENDIF \IF{$t = {\hat{t}}_j + K\alpha - 1$} \STATE $\hat{t}_{j, fin} \leftarrow t$, $\hat{\bm{P}}_{j} \leftarrow \hat{\bm{P}}_{(t)}$ \STATE $k \leftarrow 1$, $j \leftarrow j+1$, $\text{phase} \leftarrow \text{detect}$.
\ENDIF \ENDIF \IF{$\text{phase} = \text{detect}$ and $t = \hat{t}_{j-1, fin} + u\alpha$} \STATE $\bm{\Phi} \leftarrow (\bm{I} - \hat{\bm{P}}_{j-1}\hat{\bm{P}}_{j-1}{}')$, $\bm{B} \leftarrow \bm{\Phi}\hat{\bm{L}}_{t, \alpha}$ \IF {$\lambda_{\max}(\bm{B}\bm{B}{}') \geq \alpha \omega_{evals}$} \STATE $\text{phase} \leftarrow \text{update}$, $\hat{t}_j \leftarrow t$, \ENDIF \ENDIF \ENDFOR \STATE {\bf Smoothing mode}: At $t = {\hat{t}}_j + K \alpha$ \textbf{for} {$t \in [{\hat{t}}_{j-1}+ K \alpha, {\hat{t}}_j + K \alpha-1]$} \\ $\hat{\bm{P}}_{(t)}^{\mathrm{smooth}} \leftarrow \operatorname{basis}([\hat{\bm{P}}_{j-1}, \hat{\bm{P}}_{j}])$ \\ $\bm{\Psi} \leftarrow \bm{I} - \hat{\bm{P}}_{(t)}^{\mathrm{smooth}} \hat{\bm{P}}_{(t)}^{\mathrm{smooth}}{}'$ \\ $\hat{\bm{\ell}}_t^{\mathrm{smooth}} \leftarrow \bm{y}_t - \bm{I}_{\mathcal{T}_t} (\bm{\Psi}_{\mathcal{T}_t}{}'\bm{\Psi}_{\mathcal{T}_t})^{-1} \bm{\Psi}_{\mathcal{T}_t}{}' \bm{y}_t$ \end{algorithmic} \end{algorithm} } \subsection{Extensions of basic NORST-miss} \label{sec:ext} \subsubsection{Sample-Efficient-NORST-miss} This is a simple modification of NORST-miss that reduces its sample complexity. The reason NORST-miss needs many more observed entries is the projected LS step, which solves for the missing entries vector, $\bm{z}_t$, after projecting $\bm{y}_t$ orthogonal to $\hat{\bm{P}}_{(t-1)}$. This step computes the pseudo-inverse of $(\bm{I} - \hat{\bm{P}}_{(t-1)} \hat{\bm{P}}_{(t-1)}{}')_{{\mathcal{T}_{t}}}$. Our bound on $\small{\text{max-miss-frac-col}}$ helps ensure that this matrix is well conditioned for any set ${\mathcal{T}_{t}}$ of size at most $\small{\text{max-miss-frac-col}} \cdot n$. Notice however that we prove that NORST-miss recovers $\P_j$ to $\epsilon$ accuracy with a delay of just $(K+2) \alpha = C r \log n \log(1/\epsilon)$.
Once the subspace has been recovered to $\zz$ accuracy, there is no need to use projected LS to recover $\bm{z}_t$. One just needs to recover $\bm{a}_t$ given a nearly perfect subspace estimate and the observed entries. This can be done more easily as follows (borrowing the PETRELS idea): let $\hat{\bm{P}}_{(t)} \leftarrow \hat{\bm{P}}_{(t-1)}$, solve for $\bm{a}_t$ as $\hat\a_t:= (\bm{I}_{\Omega_t}{}' \hat{\bm{P}}_{(t)})^{\dagger} \bm{I}_{\Omega_t}{}'\bm{y}_t$, and set $\l_t \leftarrow \hat{\bm{P}}_{(t)} \hat\a_t$. Recall here that $\Omega_t = {\mathcal{T}_{t}}^c$. If the set of observed or missing entries were i.i.d. Bernoulli for just the later time instants, this approach would only need $\Omega (r \log r \log^2 n)$ samples at each time $t$, whp. This follows from \cite[Lemma 3]{laura_subspace_match}. Suppose that $\zz =1/n$; then $K \alpha = C r \log^2 n$. Let $d_j:= t_{j+1}-t_j$ denote the duration for which the subspace is $\P_j$. Thus $\sum_j d_j = d$. Also recall that $r_{\mathrm{mat}} \le r J$. Thus, with this approach, the number of observed entries needed is $m = \Omega\left( \sum_{j=1}^J \left( n(1-c/r) K \alpha + C r \log r \log^2 n (d_j - K \alpha) \right) \right) = \Omega \left( \sum_j [ n(1-c/r) r \log^2 n + d_j r \log r \log^2 n ]\right) = \Omega( \max(n,d) r_{\mathrm{mat}} \log^2 n (\log r - c/r) )$ as long as the observed entries follow the i.i.d. Bernoulli model for the time after the first $K \alpha$ time instants after a subspace change. Equivalently, we need the observed entries to be i.i.d. Bernoulli($1 - c/r$) for the first $K \alpha$ frames and i.i.d. Bernoulli($r \log^2 n \log r / n$) afterwards. Observe that the $m$ needed by sample-efficient-NORST-miss is only $(\log r - c/r)$ times larger than the best sample complexity needed by any MC technique, namely the convex method (nuclear norm minimization). However, sample-efficient-NORST-miss is much faster and more memory-efficient than nuclear norm minimization.
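The direct LS recovery step described above is easy to sketch numerically. The following is a minimal numpy illustration (the function and variable names are ours, not from any released code): given a subspace estimate with orthonormal columns and the observed entries of a vector, it solves the small $r$-dimensional least-squares problem for the coefficient vector and fills in the missing entries.

```python
import numpy as np

def ls_fill(y, obs, P_hat):
    """Recover ell = P_hat @ a from the entries of y indexed by `obs`.

    Solves a_hat = argmin_a || y[obs] - P_hat[obs, :] @ a ||_2 and returns
    the completed vector P_hat @ a_hat. Needs len(obs) >= r and
    P_hat[obs, :] to be well conditioned (this is what the bound on the
    missing fraction per column ensures).
    """
    a_hat, *_ = np.linalg.lstsq(P_hat[obs, :], y[obs], rcond=None)
    return P_hat @ a_hat
```

When the subspace estimate is exact and at least $r$ generic rows are observed, this recovery is exact; with an $\epsilon$-accurate estimate, the error is proportional to the subspace error.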
\subsubsection{NORST-sliding-window} In the basic NORST approach we use a different set of estimates $\l_t$ for each subspace update step. So, for example, the first subspace estimate is computed at ${\hat{t}}_j + \alpha-1$ using $\hat{\bm{L}}_{{\hat{t}}_j+\alpha-1; \alpha}$; the second is computed at ${\hat{t}}_j+2 \alpha-1$ using $\hat{\bm{L}}_{{\hat{t}}_j+2\alpha-1; \alpha}$; and so on. This is done primarily to ensure mutual independence of the set of $\bm{\ell}_t$'s in each interval because this is what makes the proof easier (it allows the use of matrix Bernstein, for example). However, in practice, we can get faster convergence to an $\epsilon$-accurate estimate of $\P_j$ by removing this restriction. This approach is of course motivated by the sliding window idea that is ubiquitous in signal processing. Any sliding-window method has two parameters: the window length, which we keep as $\alpha$, and the hop length, which we denote by $\beta$. Thus, NORST-sliding-window ($\beta$) is Algorithm \ref{algo:NORST-st-basic} with the following change: compute $\hat{\bm{P}}_{j,1}$ using $\hat{\bm{L}}_{{\hat{t}}_j+\alpha-1;\alpha}$; compute $\hat{\bm{P}}_{j,2}$ using $\hat{\bm{L}}_{{\hat{t}}_j+\alpha+\beta-1;\alpha}$; compute $\hat{\bm{P}}_{j,3}$ using $\hat{\bm{L}}_{{\hat{t}}_j+\alpha+2\beta-1;\alpha}$; and so on. Clearly $\beta \le \alpha$; setting $\beta=\alpha$ recovers basic NORST-miss. \subsubsection{NORST-buffer} Another question, relevant only for practical performance, is whether re-using the same $\alpha$ data samples $\bm{y}_t$ in the following way helps: At $t = {\hat{t}}_j + k\alpha -1$, the $k$-th estimate is improved $R$ times as follows. First, we obtain $\hat{\bm{L}}_{t;\alpha}:=[\l_{t-\alpha+1}, \l_{t-\alpha+2}, \dots \l_t]$, which is used to compute $\hat{\bm{P}}_{j,k}$ via $r$-SVD. Let us denote this by $\hat{\bm{P}}_{j,k}^{(0)}$. Now, we use this estimate to obtain a second, slightly more refined, estimate of the same $\bm{L}_{t;\alpha}$.
We denote these as $\hat{\bm{L}}_{t;\alpha}^{(1)}$ and use this estimate to get $\hat{\bm{P}}_{j,k}^{(1)}$.} This process is repeated for a total of $R + 1$ (reuse) times. We noticed that using $R=4$ suffices in most synthetic data experiments and, for real data, $R=0$ (which reduces to the basic NORST algorithm) suffices. This variant has the same memory requirement as NORST-original. The time complexity, however, increases by a factor of $R + 1$. \section{Detailed discussion of prior art}\label{sec:prior_art} \subsubsection{Streaming PCA with missing data, complete guarantee} The problem of streaming PCA with missing data was studied and a provable approach called modified block power method (MBPM) was introduced in \cite{streamingpca_miss}. A similar problem called ``subspace learning with partial information'' is studied in \cite{eldar_jmlr_ss}. These give the following complete guarantee. \begin{theorem}[streaming PCA, missing data \cite{streamingpca_miss, eldar_jmlr_ss}] Consider a data stream, for all $t = 1, \cdots, d$, $\bm{\ell}_t = A \bm{Z}_t + \bm{w}_t$ where the $\bm{Z}_t$ are $r$-length vectors generated i.i.d. from a distribution $\mathcal{D}$ s.t. $\mathbb{E}[(\bm{Z}_t)_i] = 0$ and $\mathbb{E}[(\bm{Z}_t)_i^2] = 1$ and $A$ is an $n \times r$ matrix with SVD $A = \bm{U} \bm{\Lambda} \bm{V}{}'$ with $\lambda_1 =1 \geq \lambda_2 \geq \cdots \geq \lambda_r = \lambda^- >0$. The noise $\bm{w}_t$ is bounded: $|(\bm{w}_t)_i| \leq M_{\infty}$, and $\mathbb{E}[(\bm{w}_t)_i^2] = \sigma^2$. Assume that (i) $A$ is $\mu$-incoherent; and (ii) we observe each entry of $\bm{\ell}_t$ independently and uniformly at random with probability $\rho$; this is the Bernoulli($\rho$) model.
If $d \ge \alpha$ with $\alpha:=$ \begin{align*} & {\Omega}\left(\frac{M_{\infty}^2(r \mu^2/n + \sigma^2 + nr^2(\mu^2/n + \sigma^2)^2) (\log n)^2 \log (1/\zz) }{\log\left( \frac{\sigma^2 + 0.75 {\lambda^-}}{\sigma^2 + 0.5 {\lambda^-}}\right)(\lambda^-)^2\epsilon^2\rho^2} \right) \end{align*} then, $\sin\theta_{\max}(\hat{\bm{P}}_{(d)}, \bm{U}) \leq \epsilon$ w.p. at least 0.99. \end{theorem} There are many differences between this guarantee and ours: (i) it only recovers a single unknown subspace (since it is solving a PCA problem), and is unable to detect or track changes in the subspace; (ii) it requires the missing entries to follow the i.i.d. Bernoulli model; and (iii) it only provides a guarantee that the final subspace estimate, $\hat{\bm{P}}_{(d)}$, is $\epsilon$-accurate (it does not say anything about the earlier estimates). (iv) Finally, even with setting $\sigma^2 = \epsilon^2 \lambda^-$ in the above (to simply compare its noise bound with ours), the required lower bound on $d$ implied by it is $d \ge C r^2 \log^2 n \log (1/\epsilon)/\rho^2$. This is $r \log n$ times larger than what our result requires. The lower bound on $d$ can be interpreted as the tracking delay in the setting of ST-miss. The Bernoulli model on missing entries is impractical in many settings as discussed earlier in Sec. \ref{sec:mainres_noisefree}. On the other hand, MBPM is streaming as well as memory-optimal while our approach is not streaming and only nearly memory optimal. For a summary, see Table \ref{tab:comp_st}. Here {\em ``streaming''} means that it needs only one pass over the data. Our approach uses SVD which requires multiple passes over short batches of data of size of order $r \log n$.
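Throughout, the error metric $\sin\theta_{\max}$ is the sine of the largest principal angle between two subspaces. For concreteness, here is one standard way to compute it with numpy (this helper is our illustration, not part of any cited algorithm); it assumes both inputs have orthonormal columns.

```python
import numpy as np

def sin_theta_max(P_hat, P):
    """Sine of the largest principal angle between span(P_hat) and span(P).

    Computed as the spectral norm of (I - P_hat P_hat') P; both inputs are
    assumed to have orthonormal columns (e.g., obtained via QR).
    """
    # Residual of P after projecting onto span(P_hat)
    resid = P - P_hat @ (P_hat.T @ P)
    return np.linalg.norm(resid, ord=2)
```

The metric is 0 when the two spans coincide and 1 when some direction of $\P$ is orthogonal to the estimate's span.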
\subsubsection{ST-miss, partial guarantees} In the ST literature, there are three well-known algorithms for {ST-miss}: PAST \cite{past,past_conv}, PETRELS \cite{petrels} and GROUSE \cite{grouse,local_conv_grouse, grouse_global,grouse_enh}. All are motivated by stochastic gradient descent (SGD) solutions to the PCA problem and by the Oja algorithm \cite{ojasimplified}. These and many others are described in detail in a review article on subspace tracking \cite{chi_review}. GROUSE can be understood as an extension of Oja's algorithm on the Grassmannian. It is a very fast algorithm since it only involves first order updates. It has been studied in \cite{grouse, local_conv_grouse, grouse_global}. The best partial guarantee for GROUSE, rewritten in our notation, is as follows. \begin{theorem}[GROUSE \cite{local_conv_grouse} (Theorem 2.14)]\label{thm:grouse} Assume that the subspace is fixed, i.e., that $\P_{(t)} = \P$ for all $t$. Denote the unknown subspace by $\P$. Let $\epsilon_t := \sum_{i=1}^r \sin^2\theta_i(\hat{\bm{P}}_{(t)}, \P)$ where $\theta_i$ is the $i$-th largest principal angle between the two subspaces. Also, for a vector $\bm{z} \in \mathbb{R}^n$, let $\mu(\bm{z}):= \frac{n \|\bm{z}\|_{\infty}^2}{\|\bm{z}\|_{2}^2}$ quantify its denseness.
Assume that (i) $\P$ is $\mu$-incoherent; (ii) the coefficients vector $\bm{a}_t$ is drawn independently from a standard Gaussian distribution, i.e., $(\bm{a}_t)_i \overset{i.i.d.}{\sim} \mathcal{N}(0, 1)$; (iii) the size of the set of observed entries at time $t$, $\Omega_t$, satisfies $|\Omega_t| \geq (64/3) r (\log^2 n) \mu \log(20r)$; and the following assumptions on intermediate algorithm estimates hold: \begin{itemize} \item $\epsilon_t \leq \min(\frac{r \mu}{16n}, \frac{q^2}{128 n^2 r} )$; \item the residual at each time, $\bm{r}_t := \bm{\ell}_t - \hat{\bm{P}}_{(t)}\hat{\bm{P}}_{(t)}' \bm{\ell}_t$ is ``dense'', i.e., it satisfies {\small $ \mu(\bm{r}_t) \leq \min\{ \log n [\frac{0.045}{\log 10} C_1 r \mu \log(20 r)]^{0.5}, \log^2 n \frac{0.05}{8 \log 10} C_1 \log(20 r)\} $} with probability at least $1 - \bar{\delta}$ where $\bar{\delta} \leq 0.6$. \end{itemize} Then, $ \mathbb{E}[\epsilon_{t+1} | \epsilon_t] \leq \epsilon_t -.32(.6 - \bar{\delta}) \frac{q}{nr}\epsilon_t + 55 \sqrt{\frac{n}{q}} \epsilon_t^{1.5}. $ \end{theorem} Observe that the above result makes a denseness assumption on the residual $\bm{r}_t$ and the residual is a function of $\hat{\bm{P}}_{(t)}$. Thus it is making assumptions on intermediate algorithm estimates and hence is a partial guarantee. In follow-up work, the PETRELS \cite{petrels} approach was introduced. It is slower than GROUSE, but has much better performance in numerical experiments. To understand the main idea of PETRELS, let us ignore the small noise $\v_t$. Then, $\bm{y}_t$ can be expressed as $\bm{y}_t = \bm{I}_{\Omega_t} \bm{I}_{\Omega_t}{}' \bm{\ell}_t = \bm{I}_{\Omega_t} \bm{I}_{\Omega_t}{}' \P_{(t)} \bm{a}_t $. Let $\tilde{\P}:= \P_{(t)}$. If $\tilde{\P}$ were known, one could compute $\bm{a}_t$ by solving a LS problem to get $\hat\a_t:= (\bm{I}_{\Omega_t}{}' \tilde{\P})^{\dagger} \bm{I}_{\Omega_t}{}'\bm{y}_t$. This of course implicitly assumes that $ \bm{I}_{\Omega_t}{}' \tilde{\P}$ is well-conditioned. 
This matrix is of size $(n - |{\mathcal{T}_{t}}|) \times r$, thus a necessary condition for it to be well conditioned is the same as the one for NORST-miss: it also needs $n - |{\mathcal{T}_{t}}| \ge r$, although the required sufficient condition is different\footnote{If $\Omega_t$ follows an i.i.d. Bernoulli model, a sufficient condition would be $n - |{\mathcal{T}_{t}}| \ge C r \log r \log^2n$ \cite{laura_subspace_match}, or equivalently, $\small{\text{max-miss-frac-col}} \le 1 - (Cr\log r \log^2n) /n$.}. Of course $\tilde{\P}$ is actually unknown. PETRELS thus solves for $\tilde{\P}$ by solving the following: \[ \min_{\tilde{\P}} \sum_{m = 1}^t \lambda^{t-m} \| \bm{y}_m - \bm{I}_{\Omega_m} \bm{I}_{\Omega_m}{}' \tilde{\P} (\bm{I}_{\Omega_m}{}' \tilde{\P})^{\dagger} \bm{I}_{\Omega_m}{}'\bm{y}_m\|^2. \] Here $\bm{M}^\dagger:=(\bm{M}'\bm{M})^{-1} \bm{M}'$ and $\lambda$ is the discount factor (set to 0.98 in their code). To solve this efficiently, PETRELS first decomposes it into updating each row of $\tilde\P$, and then solves the $n$ smaller problems in parallel by second-order SGD. \begin{table*}[t!] \caption{\small{Comparing robust MC guarantees. We treat the condition number and incoherence parameters as constants for this table. }} \begin{center} \resizebox{.95\linewidth}{!}{ \begin{tabular}{@{}c@{}c@{}c@{}c@{}c@{}c@{}c} \toprule \textbf{Algorithm} & \textbf{Sample complexity} & \textbf{Memory} & \textbf{Time} & \textbf{Observed entries} & \textbf{Outliers} \\ \midrule NNM \cite{matcomp_first} & $\Omega(n d)$ & $\mathcal{O}(nd)$ & $\mathcal{O}(n^3/\sqrt{\epsilon})$ & i.i.d. Bernoulli ($c$) & i.i.d. Bernoulli ($c$) \\ Projected GD \cite{rmc_gd} & $\Omega(n r^2 \log^2 n)$ & $\mathcal{O}(nd)$ & $\Omega(n r^3 \log^2 n \log^2 (1/\epsilon))$ & i.i.d.
Bernoulli ($m/nd$) & bounded fraction ($\mathcal{O}(1/r)$ per row and col) \\ NORST-miss-rob & $\Omega(nd (1 - 1/r))$ & $\mathcal{O}(n r \log n \log(1/\epsilon))$ & $\mathcal{O}(ndr \log(1/\epsilon))$ & bounded frac & bounded frac. \\ (this work)& & & & $\mathcal{O}(1/r)$ per row, $\mathcal{O}(1)$ per col & $\mathcal{O}(1/r)$ per row, $\mathcal{O}(1)$ per col \\ & & & & \multicolumn{2}{c}{Extra assumptions: Slow subspace change and lower bound on outlier magnitude} \\ \bottomrule \end{tabular} } \label{tab:comp_rmc} \end{center} \vspace{-.2in} } \end{table*} The best guarantee for PETRELS from \cite{petrels_new} is summarized next. \begin{theorem}[PETRELS \cite{petrels_new}(Theorem 2)]\label{thm:petrels} Assume that the subspace is fixed, i.e., that $\P_{(t)} = \P$ for all $t$. Assume that (i) the set of observed entries is drawn from the i.i.d. Bernoulli model with parameter $\rho$; (ii) the coefficients $(\bm{a}_t)$'s are zero-mean random vectors with diagonal covariance $\bm{\Lambda}$ and all higher-order moments finite; (iii) the noise vectors $\v_t$ are i.i.d. and independent of $\bm{a}_t$; (iv) the subspace $\P$ and the initial estimate $\hat{\bm{P}}_0$ satisfy the following incoherence assumption $ \sum_{i=1}^n \sum_{j=1}^r (\P)_{ij}^4 \leq \frac{C}{n},\ \text{and} \ \sum_{i=1}^n \sum_{j=1}^r (\hat{\bm{P}}_0)_{ij}^4 \leq \frac{C}{n}; $ (v) the step-size is appropriately chosen; and (vi) the initialization satisfies $ \mathbb{E}\left[\|\bm{Q}_0^{(n)} - \bm{Q}(0)\|_2\right] \leq \frac{C}{\sqrt{n}}.
$ Here $\bm{Q}_0^{(n)} := \hat{\bm{P}}_0{}' \P$ denotes the matrix of initial cosine similarities and $\bm{Q}(\tau)$ is the ``scaling limit'' which is defined as the solution of the following coupled ordinary differential equations: \begin{align*} \frac{d}{d\tau} \bm{Q}(\tau) = &[\rho \bm{\Lambda}^2 \bm{Q}(\tau) - 1/2 \bm{Q}(\tau)\bm{G}(\tau) - \\ &\bm{Q}(\tau)(\bm{I} - 1/2\bm{G}(\tau))\bm{Q}{}'(\tau)\rho \bm{\Lambda}^2\bm{Q}(\tau)]\bm{G}(\tau)\\ \frac{d}{d\tau} \bm{G}(\tau) = & \bm{G}(\tau)[ \mu - \bm{G}(\tau)(\bm{G}(\tau) + \bm{I})(\bm{Q}{}'(\tau) \rho \bm{\Lambda}^2 \bm{Q}(\tau) + \bm{I})] \end{align*} where $\rho$ is the subsampling ratio and $\mu = n(1-\lambda)$ where $\lambda$ is the discount parameter defined earlier. Then, for any fixed $d >0$, the time-varying cosine similarity matrix $\bm{Q}^{(n)}_{\lfloor n\tau\rfloor} = \hat{\bm{P}}_{(\lfloor n \tau \rfloor)}{}' \P$ satisfies $\sup_{n\geq 1} \mathbb{E}\left[ \|\bm{Q}^{(n)}_{\lfloor n\tau\rfloor} - \bm{Q}(\tau)\| \right] \leq \frac{C_{d}}{\sqrt{n}}.$ \end{theorem} For further details, please refer to \cite[Eq's 29, 33, 34]{petrels_new}. The above is a difficult result to simplify further since, even for $r=1$, it is not possible to obtain a closed form solution of the above differential equations. This is why it is impossible to say what this result implies about $\sin\theta_{\max}(\hat{\bm{P}}_{(t)}, \P)$ or any other error measure. Hence the above is also a {\em partial guarantee}. \cite{petrels_new} also provides a guarantee for GROUSE that has a similar flavor to the above result. \subsubsection{Online MC, different model} There are a few works with the term {\em online MC} in their title and a reader may wrongly confuse these as being solutions to our problem.
All of them study very different ``online'' settings than ours, e.g., \cite{onlineMC1} assumes that one matrix entry comes in at a time. The work of \cite{onlineMC2} considers the problem of designing matrix sampling schemes based on current estimates of the matrix columns. This is useful only in settings where one is allowed to choose which samples to observe; this is often not possible in applications such as video analytics. \subsubsection{MC} There has been a very large amount of work on provable MC. We do not discuss all of it here since MC is not the main focus of this work. The first guarantee for MC was provided in \cite{matcomp_first}. This studied the nuclear norm minimization (NNM) solution. After NNM, there has been much later work on non-convex, and hence faster, provable solutions: alternating minimization, e.g., \cite{optspace, lowrank_altmin, mc_luo, lowrank_altmin_no_kappa}, projected gradient descent (proj GD), e.g., \cite{fastmc, ge_1, ge_best}, and alternating projection \cite{rmc_altproj, mc_altproj}. All these works assume a uniform random or i.i.d. Bernoulli model on the set of missing entries (both are nearly equivalent for large $n,d$). There has been some later work that relaxes this assumption. This includes \cite{coherent_mc, noniid_mc}, which assume independent but not identical probabilities of the $(i,j)$-th entry being missed. The authors allow this probability to be inversely proportional to the row and column ``leverage scores'' (these quantify the denseness of a row or a column of $\bm{L}$) and hence allow the relaxing of the incoherence requirement on $\bm{L}$. If leverage scores were known, one could sample more frequently from rows or columns that are less dense (more sparse). Of course it is not clear how one could know or approximate these scores. There is also work that assumes a completely different probabilistic model on the set of observed entries, e.g., \cite{universal_mc}.
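To make the distinction between the two sampling models concrete, the sketch below (our illustration; the names are not from any cited code) draws an i.i.d. Bernoulli($\rho$) observation mask and then measures the quantities that our deterministic model bounds instead: the largest fraction of missing entries in any row and in any column.

```python
import numpy as np

def bernoulli_mask(n, d, rho, rng):
    """i.i.d. Bernoulli(rho) observation mask; True marks an observed entry."""
    return rng.random((n, d)) < rho

def max_miss_fracs(mask):
    """Largest fraction of missing entries over all rows and over all columns."""
    miss = ~mask
    return miss.mean(axis=1).max(), miss.mean(axis=0).max()
```

Under Bernoulli($\rho$), both fractions concentrate near $1-\rho$ for large $n$ and $d$; the deterministic model instead takes such bounds as the assumption itself and imposes no distribution on which entries are missed.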
In summary, all existing MC works need a probabilistic model on the set of observed (equivalently, missed) entries, typically i.i.d. Bernoulli. As noted earlier, this can be an impractical requirement in some applications. Our work does not make any such assumption but needs more observed entries; a detailed discussion of this is provided earlier. \begin{algorithm}[t!] \caption{NORST-miss-robust. Obtain $\hat{\bm{P}}_0$ by $C \log r$ iterations of AltProj applied to $\bm{Y}_{[1;t_\mathrm{train}]}$ with $t_\mathrm{train} = Cr$ and with setting $(\bm{y}_t)_{\mathcal{T}_{t}} = 10$ (or any large nonzero value) for all $t=1,2,\dots,t_\mathrm{train}$.} \label{algo:auto-dyn-rmc} \begin{algorithmic}[1] \STATE \textbf{Input}: $\bm{y}_t$, ${\mathcal{T}_{t}}$ \textbf{Output}: $\hat{\bm{\ell}}_t$, $\hat{\bm{P}}_{(t)}$ \STATE \textbf{Extra Parameters:} $\omega_{supp} \leftarrow x_{\min}/2 $, $\xi \leftarrow x_{\min}/15$ \STATE $\hat{\bm{P}}_0 \leftarrow$ obtain as given in the caption; \STATE $j~\leftarrow~1$, $k~\leftarrow~1$, $\mathrm{phase} \leftarrow \mathrm{update}$; ${\hat{t}}_{0} \leftarrow t_\mathrm{train}$; \FOR {$t > t_\mathrm{train}$} \STATE $\bm{\Psi} \leftarrow \bm{I} - \hat{\bm{P}}_{(t-1)}\hat{\bm{P}}_{(t-1)}{}'$; $\tilde{\bm{y}}_t \leftarrow \bm{\Psi} \bm{y}_t$; \STATE $\bm{\hat{x}}_{t,cs} \leftarrow \arg\min_{\bm{x}} \norm{(\bm{x})_{{\mathcal{T}_{t}}^{c}}}_1 \ \text{s.t.}\ \norm{\tilde{\bm{y}}_t - \bm{\Psi} \bm{x}} \leq \xi$. \STATE $\hat{\mathcal{T}}_t \leftarrow {\mathcal{T}_{t}} \cup \{i:\ |(\bm{\hat{x}}_{t,cs})_i| > \omega_{supp} \}$ \STATE $\hat{\bm{\ell}}_t \leftarrow \bm{y}_t - \bm{I}_{\hat{\mathcal{T}}_t} ( \bm{\Psi}_{\hat{\mathcal{T}}_t}{}' \bm{\Psi}_{\hat{\mathcal{T}}_t} )^{-1} \bm{\Psi}_{\hat{\mathcal{T}}_t}{}'\tilde{\bm{y}}_t$ \STATE Lines $9 - 27$ of Algorithm \ref{algo:NORST-st-basic} \ENDFOR \STATE {\bf Offline (RMC solution): } line 25 of Algorithm \ref{algo:NORST-st-basic}.
\end{algorithmic} \end{algorithm} \subsubsection{NORST for robust ST \cite{rrpcp_icml}} While both the NORST-miss algorithm and its guarantee are simple modifications of those for NORST for robust ST, our current result has two important advantages because it solves a simpler problem than robust ST. Since there are no outliers, there is no need for the amount of subspace change or the initial estimate's accuracy to be smaller than the outlier magnitude lower bound. This was needed in the robust ST case to obtain an estimate of the outlier support $\mathcal{T}_t$. Here, this support is known. This is why NORST-miss has the following two advantages. (i) It works with a zero initialization, whereas NORST (for robust ST) required a good enough initialization, for which AltProj or PCP needed to be applied on an initial short batch of observed data. (ii) It does not need an upper bound on the amount of subspace change at each $t_j$; it allows both slow and sudden changes. \begin{table*}[ht!] \caption{\small{(top) Number of samples (frames) required by NORST and its heuristic extensions, and by PETRELS, to attain $\approx 10^{-16}$ accuracy. The observed entries are drawn from an i.i.d. Bernoulli model with $\rho = 0.7$ fraction of observed entries. Notice that NORST-buffer($4$) and NORST-sliding-window ($\beta=10, R=1$) converge at the same rate as PETRELS and their time is also comparable. The other variants require more samples to obtain the same error but are faster compared to PETRELS.
(bottom) Evaluation of Sample Efficient NORST with $\rho_1 = 0.9$ and $\rho_2 = 0.15$.}} \begin{center} \begin{tabular}{ c c c c c c c c c } \toprule Algorithm & NORST & \multicolumn{4}{c}{NORST-buffer} & \multicolumn{2}{c}{NORST-sliding-window and buffer} & PETRELS \\ \cmidrule(lr){1-1} \cmidrule(lr){2-2} \cmidrule(lr){3-6} \cmidrule(lr){7-8} \cmidrule(lr){9-9} Parameter $R$, $\beta$ & & $R=1$ & $R=2$ & $R=3$ & $R=4$ & $\beta = 1$, $R = 0$ & $\beta = 10$, $R=1$ & \\ Time taken (ms) & $1.9$ & $10.8$ & $18.6$ & $27.5$ & $34.5$ & $16$ & $35$ & $33$ \\ Number of samples & $3540$ & $2580$ & $2100$ & $2050$ & $1950$ & $2400$ & $1740$ & $1740$ \\ \bottomrule \end{tabular} \end{center} \label{tab:convergence} \begin{center} \resizebox{.8\linewidth}{!}{ \begin{tabular}{c c c c c} \toprule Algorithm & NORST-miss ($6$)& NORST-samp-eff ($ 1$) & PETRELS ($15$) & GROUSE ($2$)\\ \midrule Average Error & ${0.04}$ & ${0.04}$ & $0.02$ & $0.13$ \\ \bottomrule \end{tabular} } \end{center} } \vspace{-.2in} \end{table*} \section{Robust ST with missing entries} \label{sec:norstmissrob} Robust ST with missing entries (RST-miss) is a generalization of robust ST and of ST-miss. In this case, we observe $n$-dimensional data vectors that satisfy \begin{align} \bm{y}_t = \mathcal{P}_{\Omega_t}(\bm{\ell}_t + \g_t) + \v_t, \text{ for } t = 1, 2, \dots, d. \label{eq:rmc_prob} \end{align} where the $\g_t$'s are the sparse outliers. Let $\bm{x}_t := \mathcal{P}_{\Omega_t}(\g_t)$. We use ${\mathcal{T}_{\sparse,t}}$ to denote the support of $\bm{x}_t$. This is the part of the outliers that actually corrupts our measurements; thus, in the sequel, we will work only with $\bm{x}_t$.
With $\bm{x}_t$ defined as above, $\bm{y}_t$ can be expressed as \begin{align} \bm{y}_t = \mathcal{P}_{\Omega_t}(\bm{\ell}_t) + \bm{x}_t + \v_t \end{align} Observe that, by definition, $\bm{x}_t$ is supported outside of ${\mathcal{T}_{t}}$ and hence ${\mathcal{T}_{t}}$ and ${\mathcal{T}_{\sparse,t}}$ are disjoint. Defining the $n \times d$ matrix $\bm{L}:= [\l_1, \l_2, \dots \l_d]$, the above is a robust MC problem. The main modification needed in this case is outlier support recovery. The original NORST for robust ST \cite{rrpcp_icml} used $l_1$ minimization followed by thresholding based support recovery for this purpose. In this case, the combined sparse vector is $\tilde{\bm{x}}_t:= \bm{x}_t - \bm{I}_{\mathcal{T}_{t}} \bm{I}_{\mathcal{T}_{t}}{}' \bm{\ell}_t$. Support recovery in this case is thus a problem of sparse recovery with partial support knowledge ${\mathcal{T}_{t}}$. In this case, we can still use $l_1$ minimization followed by thresholding. However, a better approach is to use noisy modified-CS \cite{modcsjournal,stab_jinchun_jp}, which was introduced to exactly solve this problem. We use the latter. The second modification needed is that, just like in the case of robust ST, we need an accurate subspace initialization. To get this, we can use the approach used in robust ST \cite{rrpcp_icml}: for the initial $C r \log n \log (1/\zz)$ samples, use the AltProj algorithm for robust PCA (while ignoring the knowledge of ${\mathcal{T}_{t}}$ for this initial period). We summarize the approach in Algorithm \ref{algo:auto-dyn-rmc}. We have the following guarantee for NORST-miss-robust. Let $\small{\text{max-outlier-frac-row}}_{\alpha}$ be the maximum fraction of outliers per row of any sub-matrix of $\bm{X}$ with $\alpha$ consecutive columns, and let $\small{\text{max-outlier-frac-col}}$ be the maximum fraction of outliers per column of $\bm{X}$.
Also define $x_{\min}:=\min_t \min_{i \in {\mathcal{T}_{\sparse,t}}} |(\bm{x}_t)_i|$ to denote the minimum outlier magnitude and let ${\text{dif}}:=\max_j \Delta_j = \max_j \sin\theta_{\max}(\P_{j-1}, \P_j)$. \begin{corollary} Consider Algorithm \ref{algo:auto-dyn-rmc}. Assume all conditions of Theorem \ref{thm:stmiss} hold and \begin{enumerate} \item $\small{\text{max-miss-frac-col}} + 2\cdot \small{\text{max-outlier-frac-col}} \le \frac{c_1}{\mu r}$; and $\small{\text{max-miss-frac-row}}_{\alpha} + \small{\text{max-outlier-frac-row}}_{\alpha} \le \frac{c_2}{f^2}$; \item subspace change: \begin{enumerate} \item $t_{j+1}-t_j > (K+2)\alpha$, and \item ${\text{dif}} \le 0.8$ and $C_1 \sqrt{ r \lambda^+} (\Delta + 2 \zz) \leq x_{\min}$ \end{enumerate} \item initialization satisfies $\sin\theta_{\max}(\hat{\bm{P}}_0,\P_0) \le 0.25$ and $C_1 \sqrt{r \lambda^+} \sin\theta_{\max}(\hat{\bm{P}}_0,\P_0) \le x_{\min}$; \end{enumerate} then, all guarantees of Theorem \ref{thm:stmiss} and Corollary \ref{cor:thm1} hold. \label{cor:dyn_rmc} \end{corollary} \begin{remark}[Relaxing the outlier magnitude lower bound] As also explained in \cite{rrpcp_icml}, the outlier magnitude lower bound can be significantly relaxed. First, without any changes, if we look at the proof, our required lower bound on outlier magnitudes is actually $0.3^{k-1} \sqrt{ r \lambda^+} (\Delta + 2 \zz)$ in interval $k$ of subspace update. To be precise, we only need $\min_{t \in \mathcal{J}_k} \min_{i \in {\mathcal{T}_{\sparse,t}}} |(\bm{x}_t)_i| \ge 0.3^{k-1} \sqrt{ r \lambda^+} (\Delta + 2 \zz)$. Here $\mathcal{J}_k$ is the interval defined in Theorem \ref{thm:stmiss}. Thus, for $t \in \mathcal{J}_{K+1}$ (after the update step is complete but the subspace has not changed), we only need $ \min_{i \in {\mathcal{T}_{\sparse,t}}} |(\bm{x}_t)_i| \ge \zz \sqrt{ r \lambda^+}$. Moreover, this can be relaxed even more as explained in Remark 2.4 of \cite{rrpcp_icml}.
\end{remark} } The proof is similar to that given in \cite{rrpcp_icml}. Please see the Appendix for an explanation of the differences. The advantage of using modified-CS to replace $l_1$ min when recovering the outlier support is that it weakens the required upper bound on $\small{\text{max-miss-frac-col}}$ by a factor of two. If we used $l_1$ min, we would need $2 \cdot (\small{\text{max-miss-frac-col}} + \small{\text{max-outlier-frac-col}})$ to satisfy the upper bound given in the first condition. \subsubsection{Comparison with existing work} Existing solutions for robust ST-miss include GRASTA \cite{grass_undersampled}, APSM \cite{chouvardas2015robust} and ROSETA \cite{mansour_robust_ss_track}. APSM comes with a partial guarantee, while GRASTA and ROSETA do not have a guarantee. The first few provable guarantees for robust MC were given in \cite{rpca, ranksparSanghavi}. Both studied the convex optimization solution, which was slow. Recently, there have been two other works \cite{rpca_gd, rmc_gd} which are projected-GD based approaches and hence are much faster. These assume an $\mathcal{O}(1/r)$ bound on outlier fractions per row and per column. All these assume that the set of observed entries is i.i.d. Bernoulli. Compared with these, our result needs slow subspace change and a lower bound on outlier magnitudes; but it does not need a probabilistic model on the set of missing or outlier entries, and improves the required upper bound on outlier fractions per row by a factor of $r$. Also, our result needs more observed entries in the setting of $r_{\mathrm{mat}} \approx r$, but not when $r_{\mathrm{mat}}$ is significantly larger than $r$, for example not when $r_{\mathrm{mat}}$ is nearly linear in $d$. A summary of this discussion is given in Table \ref{tab:comp_rmc}. \input{sims_final_pn} \section{Conclusions and Open Questions}\label{sec:conc} This work studied the related problems of subspace tracking with missing data (ST-miss) and its robust version.
We show that our proposed approaches are provably accurate under simple assumptions on only the observed data (in the case of ST-miss), and on the observed data and initialization (in the case of robust ST-miss). Thus, in both cases, the required assumptions are only on the algorithm inputs, making both results {\em complete guarantees}. Moreover, our guarantees show that our algorithms need near-optimal memory; are as fast as vanilla PCA; and can detect and track subspace changes quickly. We provided a detailed discussion of related work on (R)ST-miss, (R)MC, and streaming PCA with missing data, which helps place our work in the context of what already exists. We also show that NORST-miss and NORST-miss-robust have good experimental performance as long as the fraction of missing entries is not too large. Our guarantee for ST-miss is particularly interesting because it does not require slow subspace change and good initialization. Thus, it can be understood as a novel mini-batch and nearly memory-optimal solution for low-rank Matrix Completion that works under similar assumptions to standard MC, but needs more observed entries in general (except in the regime of frequently changing subspaces). While our approaches have near-optimal memory complexity, they are not streaming. This is because they use SVD and hence need multiple passes over short batches of stored data. A key open question is whether a fully streaming provably correct solution can be developed without assuming the i.i.d. Bernoulli model on the set of missing entries. Two other important open questions are: (i) can the required number of observed entries be reduced (the limiting bound here is the bound on missing fractions per column); and (ii) in the case of robust ST-miss, can the lower bound on outlier magnitudes be removed? Another question is whether we can use the tracked estimates for ``control''.
For example, can we use the current estimate of the subspace and of the true data vectors to decide how to sample the set of observed entries at the next time instant or later (in applications where one can design this set)?}% \input{proof_main} } \bibliographystyle{IEEEbib}
\section{Introduction} \begin{figure}[t] \includegraphics[width=1\linewidth]{ECCV_Figures/teaser.pdf} \vspace{-1.5cm} \caption{Results of our true end-to-end DfF framework with comparisons to state-of-the-art methods.} \label{fig:teaser} \vspace{-0.3cm} \end{figure} As commercial demand for high-quality photographic applications increases, images have been increasingly utilized in scene depth computation. Most commercial cameras, including smartphone and DSLR cameras, have two interesting configurations: a large-aperture lens and a dual-pixel (DP) sensor. Both are reasonable choices for collecting more light and quickly sweeping the focus through multiple depths. Because of this, images appear to have a shallow depth of field (DoF) and are formed as focal stacks with corresponding meta-data such as focal length and principal points. One way to compute depth is to use single dual-pixel (DP) images, which have left and right sub-images with narrow baselines and limited DoFs. A straightforward approach is to find correspondences between the left and right sub-images~\cite{wadhwa2018synthetic,garg2019learning,zhang20202}. Despite an abundance of research, such methods depend heavily on the accurate retrieval of correspondences due to the inherent characteristics of DP images: pixel disparities between the two sub-images arise in blurred regions, and the amount of spatial shift is proportional to the degree of blurring. Another group of approaches tackles this problem from a different angle. The out-of-focus regions make it possible to use depth-from-defocus (DfD) techniques to estimate scene depths~\cite{anwar2017depth,pan2021dual,zhang2021joint}. Since there is a strong physical relationship between scene depths and the amount of defocus blur, the DfD methods account for it in a data-driven manner by learning to directly regress depth values. However, there is a potential limitation to these works~\cite{anwar2017depth,pan2021dual,zhang2021joint}.
A classic issue, the aperture effect, makes the analysis of defocus blur in a local window difficult. In addition, some of these methods recover deblurred images from the input, but image deblurring also belongs to a class of ill-posed inverse problems for which the uniqueness of the solution cannot be established~\cite{levin2007image}. These shortcomings motivate us to examine depth from focus (DfF) as an alternative. DfF takes a focal stack, which is available in most off-the-shelf cameras during a focus sweep, and estimates a depth map by determining where the focus lies in the input focal stack. In particular, the inherent operations of convolutional neural networks (CNNs), convolution and maxpooling, are suitable for measuring the values obtained from derivatives of the image/feature map based on the assumption that focused images contain sharper edges~\cite{hazirbas2018deep,maximov2020focus,Wang-ICCV-2021}. Nevertheless, there is still room for improvement with respect to model generalization, due to the domain gap between public datasets and real-world focal stack images, and an alignment issue that we will discuss. In this work, we achieve high-quality and well-generalized depth prediction from single focal stacks. Our contributions are threefold (see \Figref{fig:teaser}): First, we compensate for the change in image appearance due to magnification during the focus change, and for the slight translations from principal point changes. Compared to most CNN-based DfD/DfF works~\cite{hazirbas2018deep,maximov2020focus,Wang-ICCV-2021}, which either assume that input sequential images are perfectly aligned or use hand-crafted feature-based alignment techniques, we design a learnable context-based image alignment that works well on defocus-blurred images. Second, the proposed sharp region detection (SRD) module addresses blur ambiguities resulting from subtle defocus changes in weakly-textured regions.
SRD consists of convolution layers and a residual block, and allows the extraction of more powerful feature representations for image sharpness. Third, we also propose an efficient downsampling (EFD) module for the DfF framework. The proposed EFD combines output feature maps from upper scales using a stride convolution and a 3D convolution with maxpooling, and incorporates them both to keep the feature representation of the original input and to ease the flow of informative features for focused regions. To optimize and generalize our network, we develop a high-performance simulator to produce photo-realistic focal stack images with corresponding meta-data such as camera intrinsic parameters. With this depth from focus network, we achieve state-of-the-art results on various public datasets as well as the top rank in the DDFF benchmark~\cite{hazirbas2018deep}. Ablation studies indicate that each of these technical contributions appreciably improves depth prediction accuracy. \section{Related Work} The mainstream approaches for depth prediction, such as monocular depth estimation~\cite{fu2018deep,godard2017unsupervised,godard2019digging}, stereo matching~\cite{chang2018pyramid,shen2021cfnet} and multiview stereo~\cite{gu2020cascade,im2019dpsnet}, use all-in-focus images. As mentioned above, they overlook the functional properties of off-the-shelf cameras and are out of scope for this work. In this section, we review depth estimation from defocus-blurred images, which is closely related to our work. \noindent\textbf{Depth from Defocus.}\quad Some unsupervised monocular depth estimation approaches~\cite{srinivasan2018aperture,gur2019single} utilize a defocus blur cue as a supervisory signal. The work in~\cite{srinivasan2018aperture} proposes differentiable aperture rendering functions to train a depth prediction network which generates defocused images from input all-in-focus images.
The network is trained by minimizing distances between ground truth defocused images and output defocused images based on an estimated depth map. Inspired by~\cite{srinivasan2018aperture}, the work in~\cite{gur2019single} introduces a fast differentiable aperture rendering layer built on defocus blur hypotheses. In spite of depth-guided defocus blur, both of these works need all-in-focus images as input at inference time. Anwar~\textit{et al.}~\cite{anwar2017depth} formulate a reblur loss based on circular blur kernels to regularize depth estimation, and also design a CNN architecture to minimize the distance between input blurry images and images reblurred from the output deblurred images. Zhang and Sun~\cite{zhang2021joint} propose a regularization term to impose consistency between depth and defocus maps from single out-of-focus images. \noindent\textbf{Depth from DP images.}\quad Starting with the use of traditional stereo matching, CNN-based approaches have been adopted for depth from DP images~\cite{wadhwa2018synthetic}. The work in~\cite{garg2019learning} shows that an affine ambiguity exists between a scene depth and its disparity from DP data, and then alleviates it using both a novel 3D assisted loss and a folded loss. In~\cite{zhang20202}, a dual-camera setup with DP sensors is proposed to take advantage of both stereo matching and depth from DP images. In~\cite{punnappurath2020modeling}, unsupervised depth estimation is performed by modeling a point spread function of DP cameras. The work in~\cite{pan2021dual} proposes an end-to-end CNN for depth from single DP images using both defocus blur and correspondence cues. In addition, they provide a simulator that makes a synthetic DP dataset from all-in-focus images and the corresponding depth maps. In \cite{xin2021defocus}, single DP images are represented via multi-plane images \cite{tucker2020single} with a calibrated point spread function for a certain DP camera model.
The representation is used for both unsupervised defocus map estimation and all-in-focus image generation. \noindent\textbf{Depth from Focus.}\quad DfF accounts for changes in blur sizes in the focal stack and determines scene depths according to the regions in focus~\cite{pertuz2013analysis,levin2007image,maximov2020focus}. In particular, conventional DfF methods infer depth values from a focal stack by comparing the sharpness of a local window at each pixel~\cite{jeon2019ring,sakurikar2017composite,suwajanakorn2015depth}. The research in~\cite{hazirbas2018deep} introduces a CNN-based DfF by leveraging focal stack datasets made with light field and RGB-D cameras. In~\cite{maximov2020focus}, domain-invariant defocus blur is used to deal with the domain gap. The defocus blur is supervised to train data-driven models for DfF as an intermediate step, and is then utilized in permutation-invariant networks to achieve a better generalization from synthetic datasets to real photos. In addition, the work uses a recurrent auto-encoder to handle scene movements which occur during focal sweeps\footnote{Unfortunately, neither the source code for training/testing nor the pre-trained weights are publicly available.}. In~\cite{Wang-ICCV-2021}, a CNN learns an intermediate attention map which is shared for scene depth prediction and all-in-focus image reconstruction from focal stack images. \begin{figure}[t] \includegraphics[width=1\linewidth]{ECCV_Figures/overview.pdf} \vspace{-0.7cm} \caption{An overview of the proposed network.} \label{fig:overview} \vspace{-0.2cm} \end{figure} \section{Methodology} Our network is composed of two major components. One is an image alignment model for sequential defocused images. As a prerequisite, we must first address the misalignment issue in images captured with smartphones, whose focusing relies on focus motors that adjust the locations of camera lenses.
Another component is a focused feature representation, which encodes the depth information of scenes. To be sensitive to subtle focus changes, it requires two consecutive feature maps of the corresponding modules from our sharp region detector (SRD) and an effective downsampling module for defocused images (EFD). The overall procedure is depicted in~\Figref{fig:overview}. \subsection{A Network for Defocus Image Alignment} \label{sec:alignment} Since camera fields of view (FoVs) vary according to the focus distance, a zoom-like effect, called focal breathing, is induced during a focal sweep~\cite{herrmann2020learning}. Because of focal breathing, image sharpness cannot be accurately measured at the same pixel coordinates across focal slices. As a result, traditional DfF methods perform feature-based defocus image alignment to compensate for this, prior to depth computations. However, recent CNN-based approaches disregard focal breathing because public synthetic datasets for DfF/DfD, whose scale is sufficient to generalize CNNs well, either provide well-aligned focal stacks or are generated from single RGB-D images. Because of this gap between real-world imagery and easy-to-use datasets, their generality is limited. Therefore, as a first step toward implementing a comprehensive, all-in-one solution to DfF, we introduce a defocus image alignment network. \noindent\textbf{Field of view.}\quad Scene FoVs are determined by working distances, focus distances, and the focal length of cameras, as in \eqnref{eq:Fov}. Since the working distances are fixed during a focal sweep, relative values of FoVs (Relative FoVs) equal the inverse ratio of the distances between sensor and lens.
We thus perform an initial alignment of a focal stack using these relative FoVs as follows: \begin{gather} \label{eq:Fov} FoV_{n} = W \times \frac{A}{s_{n}},\\ Relative\ FoV_{n} = \frac{FoV_{n}}{FoV_{min}} = \frac{s_{min}}{s_{n}} \quad \left(s_{n} = \frac{F_{n}\times f}{F_{n}-f}\right), \notag \end{gather} where $s_n$ is the distance between the lens and the sensor in the $n$-$th$ focal slice, $A$ is the sensor size, and $W$ is the working distance. $f$ and $F_{n}$ are the focal length of the lens and a focus distance, respectively. $min$ denotes the index of the focal slice whose FoV is the smallest among the focal slices. In this paper, we refer to the focal slice with the $min$ index as the target focal slice. We note that these values are available from the camera metadata without any user calibration. Nevertheless, the alignment step is not perfectly accurate for focal stack images due to hardware limitations, as described in~\cite{herrmann2020learning}. Most smartphone cameras control their focus distances by spring-installed voice coil motors (VCMs). The VCMs adjust the positions of the camera lens by applying voltages to a nearby electromagnet which induces spring movements. Since the elasticity of the spring can be changed by temperature and usage, there will be an error between real focus distances and the values in the metadata. In addition, the principal point of a camera also changes during a focal sweep because the camera lens is not perfectly parallel to the image sensor, due to manufacturing imperfections. Therefore, we propose an alignment network to correct this misalignment and a useful simulator to ensure realistic focal stack acquisition. \begin{figure}[t] \centering \includegraphics[width=1\linewidth]{ECCV_Figures/alignment_network.pdf} \vspace{-0.7cm} \caption{An illustration of our alignment network. Given initially-aligned images with camera metadata, this network produces an aligned focal stack.
In the flow estimation, we use three basis functions to model radial, horizontal and vertical motions of VCMs.} \label{fig:123} \vspace{-0.2cm} \end{figure} \noindent\textbf{Alignment network.}\quad As shown in~\Figref{fig:123}, our alignment network has a 3-level encoder-decoder structure, similar to the previous optical flow network~\cite{hui2018liteflownet}. The encoder extracts multi-scale features, and multi-scale optical flow volumes are constructed by concatenating the features of a reference and a target focal slice. The decoder refines the multi-scale optical flow volumes in a coarse-to-fine manner using feature warping (F-warp). However, we cannot directly use the existing optical flow framework for alignment because defocus blur breaks the brightness constancy assumption~\cite{suwajanakorn2015depth}. To address this issue, we constrain the flow using three basis vectors with corresponding coefficients ($\alpha,~\beta,~\gamma$) for each scene motion. To compute the coefficients instead of directly estimating the flow field, we add an adaptive average pooling layer to each layer of the decoder. The first basis vector accounts for an image crop, which reduces errors in the FoVs. We model the image crop as a flow that spreads out from the center. The remaining two vectors represent $x$- and $y$-axis translations, which compensate for errors in the principal point of the cameras. These parametric constraints on the flow encourage the network to learn geometric features which are not damaged by defocus blur. We optimize this alignment network using a robust loss function $L_{align}$, proposed in~\cite{liu2019ddflow}, as follows: \begin{equation} \label{eq:alignment loss} L_{align} = \sum_{n=0}^N \rho ( I_{n}( \Gamma + D(\Gamma) ) - I_{min} (\Gamma)), \end{equation} where $ \rho(\cdot) = ( |\cdot| + \varepsilon )^q$, and $q$ and $\varepsilon$ are set to 0.4 and 0.01, respectively.
$I_{n}$ is a focal slice of a reference image, and $I_{min}$ is the target focal slice. $D(\Gamma)$ is the output flow of the alignment network at a pixel position $\Gamma$. We note that the first basis might be insufficient to describe zooming effects with spatially-varying motions. However, our design for the image crop shows consistently promising results for the alignment, thanks to the combination of the three basis functions that compensate for a variety of motions in the real world. \begin{figure}[ht] \includegraphics[width=1\linewidth]{ECCV_Figures/Experiment.pdf} \vspace{-0.9cm} \caption{A pipeline of our simulator. Red and green dots denote circle centers. The misalignment error occurs due to inaccurate intrinsic parameters. Our simulator produces misaligned focal stack images because of the hardware limitations of autofocus.} \label{fig:experiment} \vspace{-0.1cm} \end{figure} \noindent\textbf{Simulator.}\quad Because public datasets do not describe changes in FoVs or hardware limitations of off-the-shelf cameras, we propose a useful simulator to render realistic sequential defocus images for training our alignment network. Here, the most important part is to determine the error ranges of the intrinsic camera parameters, such as principal points and focus distances. We estimate them by the following process in~\Figref{fig:experiment}: (1) We capture circle patterns on a flat surface using various smartphone models while changing focus distances. (2) We initially align the focal stacks with the recorded focus distances. (3) After the initial alignment, we decompose the residual motions of the captured circles using the three basis vectors: image crop, and $x$- and $y$-axis translations. (4) We statistically calculate the error ranges of the principal points and focus distances from the three parameters of the basis vectors.
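The decomposition in step (3) onto the three basis flows amounts to a small least-squares fit. The sketch below is our own illustration of that idea (the function name and the pure crop-plus-shift motion model are assumptions, not the paper's implementation):

```python
import numpy as np

def fit_basis_coefficients(points, flows):
    """Fit (alpha, beta, gamma) so that flow ~= alpha * radial + beta * e_x + gamma * e_y.

    points: (N, 2) pixel coordinates relative to the image center
    flows:  (N, 2) observed residual motion at those points
    """
    n = points.shape[0]
    # Basis 1: radial flow spreading out from the center (models the crop/zoom).
    radial = points
    # Bases 2 and 3: uniform x- and y-translations (principal-point error).
    ex = np.tile([1.0, 0.0], (n, 1))
    ey = np.tile([0.0, 1.0], (n, 1))
    # Stack the raveled bases into a (2N, 3) design matrix and solve by least squares.
    A = np.stack([radial.ravel(), ex.ravel(), ey.ravel()], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, flows.ravel(), rcond=None)
    return coeffs  # (alpha, beta, gamma)
```

On synthetic crop-plus-shift motion the fit recovers the generating parameters exactly, which is the property a calibration of this kind relies on.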
Given the metadata of the cameras used, our simulator renders focal stacks induced from blur scales based on the focus distance and the error ranges of the basis vectors. \subsection{Focal Stack-oriented Feature Extraction} \label{sec:feature_extraction} For high-quality depth prediction, we consider three requirements that must be imposed on our network. First, to robustly measure focus in the feature space, it is effective to place a gap space in the convolution operations, as shown in~\cite{jeon2019ring}. In addition, even though feature downsampling, such as convolution with strides and pooling layers, is necessary to reduce computation in low-level computer vision tasks like stereo matching~\cite{mayer2016large}, such downsampling operations can make a defocused image and its feature map sharper, which hinders accurately determining the focus within the DfF framework. Lastly, feature representations for DfF need to identify subtle distinctions in blur magnitudes between input images. \begin{figure}[ht] \includegraphics[width=1\linewidth]{ECCV_Figures/CR_feature_extraction.pdf} \vspace{-0.7cm} \caption{An architecture of our feature extraction. If feature maps from neighboring focal slices have similar values, our SRD gives an attention score to the sharpest focal slice. Our EFD preserves informative defocus feature representations during downsampling.} \label{fig:FeatureExtraction} \vspace{-0.1cm} \end{figure} \noindent\textbf{Initial feature extraction.}\quad In an initial feature extraction step, we utilize a dilated convolution to extract focus features. After the dilated convolution, we extract feature pyramids to refine the focal volumes in the refinement step.
Given an input focal stack $S\in \mathbb{R}^{H\times W\times N\times 3}$, where $H$, $W$ and $N$ denote the height, width and the number of focal slices respectively, we extract three pyramidal feature volumes whose size is $H/2^{L}\times W/2^{L} \times N \times C\cdot2^{L}$, where $L \in \{ 0,1,2\}$ and $C$ is the number of channels in the focal volume. This pyramidal feature extraction consists of three structures in which SRD and EFD are iteratively performed, as described in~\Figref{fig:FeatureExtraction}. Each pyramidal feature volume is then used as the input to the next EFD module. The last one is utilized as an input of the multi-scale feature aggregation step in~\secref{sec:refinement}. \noindent\textbf{Sharp Region Detector.}\quad The initial feature of each focal slice needs to communicate with neighboring focal slices to measure the focus at the pixel of interest. The work in~\cite{maximov2020focus} extracts focus features using a global pooling layer as a communication tool across the stack dimension. However, we observe that the global pooling layer causes a loss of important information due to its inherent limitation that all values across focal slices collapse into a single value. Our SRD module, consisting of both a 2D convolution and a 3D convolution, overcomes this limitation. In~\Figref{fig:FeatureExtraction} (left), we extract features using a 2D ResNet block and add an attention score which is computed from them by 3D convolutions and a ReLU activation. The 3D convolution enables the detection of subtle defocus variations in weakly-textured regions by exchanging features with neighboring focal slices. With this module, our network encodes more informative features for such regions than previous works~\cite{maximov2020focus,Wang-ICCV-2021}.
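The intuition behind SRD — score each focal slice by local sharpness and let slices compete per pixel across the stack dimension — can be illustrated with a hand-crafted stand-in. This is a toy numpy sketch of the idea only (a fixed Laplacian focus measure plus a softmax across slices), not the learned 2D/3D-convolutional module:

```python
import numpy as np

def sharpness_attention(stack):
    """stack: (N, H, W) grayscale focal slices -> (N, H, W) attention over slices."""
    scores = np.empty_like(stack, dtype=float)
    for i, img in enumerate(stack):
        # Discrete Laplacian as a crude focus measure: large response at sharp edges.
        lap = (-4.0 * img
               + np.roll(img, 1, 0) + np.roll(img, -1, 0)
               + np.roll(img, 1, 1) + np.roll(img, -1, 1))
        scores[i] = np.abs(lap)
    # Softmax across the stack dimension: slices "communicate" per pixel, and the
    # sharpest slice at each pixel receives the largest attention weight.
    e = np.exp(scores - scores.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)
```

In the learned module, both the focus measure and the cross-slice interaction are replaced by trainable 2D and 3D convolutions, so the network can go beyond edge strength as a sharpness cue.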
\noindent\textbf{EFfective Downsampling.}\quad Stereo matching networks use convolutions with strides to downsample features~\cite{chang2018pyramid,shen2021cfnet}; in DfF, however, the stride of a convolution causes a loss of spatial information because most of the focused regions may not be selected. As a solution to this issue, one previous DfF work~\cite{maximov2020focus} uses a combination of maxpooling and average pooling in the feature extraction step. Inspired by \cite{maximov2020focus}, we propose an EFD module leveraging the well-known fact that a feature has higher activation in a focused region than in weakly-textured regions, as shown in~\Figref{fig:FeatureExtraction} (right). The EFD module employs a 2D max-pooling as a downsampling operation and applies a 3D convolution to its output. Through our EFD module, our network can both take representative values of focused regions in a local window and exchange the focal features with neighboring focal slices. \subsection{Aggregation and Refinement} \label{sec:refinement} Our network produces a final depth map after multi-scale feature aggregation and refinement steps. \noindent\textbf{Multi-scale feature aggregation.}\quad The receptive field of our feature extraction module might be too small to learn non-local features. Therefore, we propose a multi-scale feature aggregation module using one hour-glass module to expand the receptive field, similar to the stereo matching network in~\cite{shen2021cfnet}. At an initial step, we use three different sizes of kernels (2$\times$2, 4$\times$4, 8$\times$8) in the average pooling layer. Unlike~\cite{shen2021cfnet}, we use average pooling to avoid memory consumption issues, because DfF requires more input images. We then apply a ResBlock to each output of the average pooling in order to extract multi-scale features. These features are embedded into the encoder and aggregated by the decoder of the hour-glass module.
The aggregated feature volume is utilized as an input in the refinement step. \noindent\textbf{Refinement and Regression.}\quad The refinement module has three hour-glass modules with skip-connections like~\cite{chang2018pyramid}. Here, we add transposed convolutions to resize the output of each hourglass to the same size as each level of the pyramidal feature volumes from the feature extraction module. We construct an input focal volume for each hourglass by concatenating the pyramidal feature volumes of the feature extraction module with the output focal volume of the previous hourglass. As each hourglass handles increasingly higher resolutions with the pyramidal feature volumes, the focal volumes are refined in a coarse-to-fine manner. To obtain a depth map from the output focal volumes, we multiply each focus distance value by the probability of that focus distance leading to maximal sharpness. The probability is computed by applying a normalized soft-plus to the output focal volumes in a manner similar to~\cite{Wang-ICCV-2021}. The whole depth prediction network is optimized from scratch using a weighted loss function $L_{depth}$ as follows: \begin{gather} \label{eq:depth loss} L_{depth} =\sum_{i=1}^4 w_{i} \, \lVert D_{i}- D_{gt}\rVert_2, \end{gather} where $\lVert\cdot\rVert_2 $ denotes the $l_2$ loss, $D_{gt}$ indicates a ground truth depth map, and $i\in\{1,2,3,4\}$ denotes the scale level of the hour-glass module. In our implementation, we set $w_{i}$ to 0.3, 0.5, 0.7 and 1.0, respectively. \noindent\textbf{Implementation details.}\quad We train our network using the following strategy: (1) We first train the alignment network in~\secref{sec:alignment} for 100 epochs using the alignment loss in~\eqnref{eq:alignment loss}. (2) We freeze the alignment network and merge it with the depth prediction network. (3) We train the merged network for 1500 epochs with the depth loss in~\eqnref{eq:depth loss}.
(4) At inference, we estimate the depth map from the misaligned focal stack in an end-to-end manner. We note that our network is able to use an arbitrary number of images as input, like the previous CNN-based DfF/DfD works~\cite{maximov2020focus,Wang-ICCV-2021}. The numbers of parameters of our alignment network and feature extraction module are 0.195M and 0.067M, respectively, and the multi-scale feature aggregation module and the refinement module have 2.883M and 1.067M learnable parameters, respectively. In total, our network has 4.212M parameters. We implement our network using the public PyTorch framework~\cite{paszke2019pytorch}, and optimize it using the Adam optimizer~\cite{kingma2014adam} ($\beta_1 = 0.9, \; \beta_2 = 0.99$) with a learning rate of $10^{-3}$. Our model is trained on a single NVIDIA RTX 2080Ti GPU with a mini-batch size of 4, which usually takes three days. For data augmentation, we apply random spatial transforms (rotation, flipping and cropping) and color jittering (brightness, contrast and gamma correction). \section{Evaluation} We compare the proposed network with state-of-the-art methods related to DfD, DfF and depth from light field images. We also conduct extensive ablation studies to demonstrate the effectiveness of each component of the proposed network. For quantitative evaluation, we use the following standard metrics: mean absolute error (MAE), mean squared error (MSE), absolute relative error (AbsRel), square relative error (SqRel), root mean square error (RMSE), log root-mean-squared error (RMSE log), bumpiness (Bump), inference time (Secs) and the accuracy metrics with thresholds $\delta_i = 1.25^i$ for $i \in\{1,2,3\}$. Following~\cite{Wang-ICCV-2021}, we exclude pixels whose depth ranges are outside the focus distances at test time.
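For reference, these standard depth metrics can be computed directly from predicted and ground-truth depths. The following is a minimal numpy sketch (the function name is ours; valid-pixel masking, as in the protocol above, is assumed to have been applied already):

```python
import numpy as np

def depth_metrics(pred, gt):
    """pred, gt: 1-D arrays of valid, strictly positive depth values."""
    diff = pred - gt
    # Per-pixel ratio used by the delta accuracy metrics.
    ratio = np.maximum(pred / gt, gt / pred)
    return {
        "MAE":      np.mean(np.abs(diff)),
        "MSE":      np.mean(diff ** 2),
        "RMSE":     np.sqrt(np.mean(diff ** 2)),
        "AbsRel":   np.mean(np.abs(diff) / gt),
        "SqRel":    np.mean(diff ** 2 / gt),
        "RMSE log": np.sqrt(np.mean((np.log(pred) - np.log(gt)) ** 2)),
        # delta_i: fraction of pixels with max(pred/gt, gt/pred) < 1.25^i
        "delta1":   np.mean(ratio < 1.25),
        "delta2":   np.mean(ratio < 1.25 ** 2),
        "delta3":   np.mean(ratio < 1.25 ** 3),
    }
```

Bumpiness and inference time depend on the evaluation harness and are omitted from the sketch.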
\subsection{Comparisons to State-of-the-art Methods} We validate the robustness of the proposed network by showing experimental results on various public datasets: DDFF 12-Scene~\cite{hazirbas2018deep}, DefocusNet dataset~\cite{maximov2020focus}, 4D Light Field dataset~\cite{honauer2016dataset}, Smartphone~\cite{herrmann2020learning}, as well as focal stack images generated from our simulator. The datasets provide pre-aligned defocused images. We use the training split of each dataset to train our depth estimation network of both \secref{sec:feature_extraction} and \secref{sec:refinement} from scratch, and validate it on the test split. \begin{table}[t] \caption{Quantitative evaluation on DDFF 12-Scene~\cite{hazirbas2018deep}. We directly refer to the results from \cite{Wang-ICCV-2021}. Since the result of DefocusNet~\cite{maximov2020focus} is not uploaded to the official benchmark, we only quote the MSE value from~\cite{maximov2020focus}. \textbf{bold}: Best, \underline{Underline}: Second best.
Unit: pixel.} \centering \small \resizebox{\linewidth}{!}{ \begin{tabular}{l|cccccccc} Method~~ & \multicolumn{1}{c}{~~MSE~$\downarrow$~~} & \multicolumn{1}{c}{~RMSE log~$\downarrow$~} & \multicolumn{1}{c}{~~AbsRel~$\downarrow$~~} & \multicolumn{1}{c}{~~SqRel~$\downarrow$~~} &\multicolumn{1}{c}{~~Bump~$\downarrow$~~} &{~~$\delta = 1.25$~$\uparrow$~~} & {~~$\delta =1.25^{2}$~$\uparrow$~~} &{~~$\delta=1.25^{3}$~$\uparrow$~~}\\ \hline Lytro & \multicolumn{1}{c}{$~~2.1e^{-3}~~$} & \multicolumn{1}{c}{~~0.31~~} & \multicolumn{1}{c}{~~0.26~~} & \multicolumn{1}{c}{\bf{~~0.01~~}} & \multicolumn{1}{c}{~~1.0~~} & \multicolumn{1}{c}{~~55.65~~} & \multicolumn{1}{c}{~~82.00~~} & \multicolumn{1}{c}{~~93.09~~} \\ VDFF~\cite{moeller2015variational} & \multicolumn{1}{c}{$~~7.3e^{-3}~~$} & \multicolumn{1}{c}{~~1.39~~} & \multicolumn{1}{c}{~~0.62~~} & \multicolumn{1}{c}{~~0.05~~} & \multicolumn{1}{c}{~~0.8~~} & \multicolumn{1}{c}{~~8.42~~} & \multicolumn{1}{c}{~~19.95~~} & \multicolumn{1}{c}{~~32.68~~} \\ PSP-LF~\cite{zhao2017pyramid} & \multicolumn{1}{c}{$~~2.7e^{-3}~~$} & \multicolumn{1}{c}{~~0.45~~} & \multicolumn{1}{c}{~~0.46~~} & \multicolumn{1}{c}{~~\underline{0.03}~~} & \multicolumn{1}{c}{\bf{~~0.5~~}} & \multicolumn{1}{c}{~~39.70~~} & \multicolumn{1}{c}{~~65.56~~} & \multicolumn{1}{c}{~~82.46~~} \\ PSPNet~\cite{zhao2017pyramid} & \multicolumn{1}{c}{$~~9.4e^{-4}~~$} & \multicolumn{1}{c}{~~\underline{0.29}~~} & \multicolumn{1}{c}{~~0.27~~} & \multicolumn{1}{c}{\bf{~~0.01~~}} & \multicolumn{1}{c}{~~\underline{0.6}~~} & \multicolumn{1}{c}{~~62.66~~} & \multicolumn{1}{c}{~~85.90~~} & \multicolumn{1}{c}{~~\underline{94.42}~~} \\ DFLF~\cite{hazirbas2018deep} & \multicolumn{1}{c}{$~~4.8e^{-3}~~$} & \multicolumn{1}{c}{~~0.59~~} & \multicolumn{1}{c}{~~0.72~~} & \multicolumn{1}{c}{~~0.07~~} & \multicolumn{1}{c}{~~0.7~~} & \multicolumn{1}{c}{~~28.64~~} & \multicolumn{1}{c}{~~53.55~~} & \multicolumn{1}{c}{~~71.61~~} \\ DDFF~\cite{hazirbas2018deep} & \multicolumn{1}{c}{$~~9.7e^{-4}~~$} & 
\multicolumn{1}{c}{~~0.32~~} & \multicolumn{1}{c}{~~0.29~~} & \multicolumn{1}{c}{\bf{~~0.01~~}} & \multicolumn{1}{c}{~~\underline{0.6}~~} & \multicolumn{1}{c}{~~61.95~~} & \multicolumn{1}{c}{~~85.14~~} & \multicolumn{1}{c}{~~92.98~~} \\ DefocusNet~\cite{maximov2020focus} & \multicolumn{1}{c}{~$9.1e^{-4}$~} & \multicolumn{1}{c}{~~-~~} & \multicolumn{1}{c}{~~-~~} & \multicolumn{1}{c}{~~-~~} & \multicolumn{1}{c}{~~-~~} & \multicolumn{1}{c}{~~-~~} & \multicolumn{1}{c}{~~-~~} & \multicolumn{1}{c}{~~-~~} \\ AiFDepthNet~\cite{Wang-ICCV-2021} & \multicolumn{1}{c}{$\underline{~8.6e^{-4}~}$} & \multicolumn{1}{c}{~~\underline{0.29}~~} & \multicolumn{1}{c}{~~\underline{0.25}~~} & \multicolumn{1}{c}{\bf{~~0.01~~}} & \multicolumn{1}{c}{~~\underline{0.6}~~} & \multicolumn{1}{c}{~~\underline{68.33}~~} & \multicolumn{1}{c}{~~\underline{87.40}~~} & \multicolumn{1}{c}{~~93.96~~} \\ Ours& \multicolumn{1}{c}{$\bf{~5.7e^{-4}~}$} & \multicolumn{1}{c}{\bf{~~0.21~~}} & \multicolumn{1}{c}{\bf{~~0.17~~}} & \multicolumn{1}{c}{\bf{~~0.01~~}} & \multicolumn{1}{c}{~~\underline{0.6}~~} & \multicolumn{1}{c}{\bf{~~77.96~~}} & \multicolumn{1}{c}{\bf{~~93.72~~}} & \multicolumn{1}{c}{\bf{~~97.94~~}} \\ \hline \end{tabular}} \vspace{-0.3cm} \label{tab:DDFFbenchmark} \end{table} \noindent\textbf{DDFF 12-Scene~\cite{hazirbas2018deep}.}\quad The DDFF 12-Scene dataset provides focal stack images and ground truth depth maps captured by a light-field camera and an RGB-D sensor, respectively. The images have shallow DoFs and contain texture-less regions. Our method shows better performance than recent published works in~\tabref{tab:DDFFbenchmark} and achieves the top rank in almost all evaluation metrics on the benchmark site\footnote{\url{https://competitions.codalab.org/competitions/17807\#results}}.
\begin{table}[t] \caption{Quantitative evaluation on DefocusNet dataset~\cite{maximov2020focus}~(unit: meter), 4D Light Field dataset~\cite{honauer2016dataset}~(unit: pixel) and Smartphone dataset~\cite{herrmann2020learning}~(unit: meter). For DefocusNet dataset and 4D Light Field dataset, we directly refer to the results from \cite{Wang-ICCV-2021}. For Smartphone dataset~\cite{herrmann2020learning}, we multiply confidence scores on metrics ('MAE' and 'MSE') which are respectively denoted as 'MAE*' and 'MSE*'. \textbf{bold}: Best.} \centering \small \resizebox{\linewidth}{!}{ \begin{tabular}{l|ccc|ccc|ccc} ~~ & \multicolumn{3}{c|}{~~DefocusNet Dataset~\cite{maximov2020focus}~~} & \multicolumn{3}{c|}{~~4D Light Field~\cite{honauer2016dataset}~~}& \multicolumn{3}{c}{~~Smartphone~\cite{herrmann2020learning}~~} \\ Method~~ & \multicolumn{1}{c}{~~MAE~$\downarrow$~~} & \multicolumn{1}{c}{~~MSE~$\downarrow$~~} & \multicolumn{1}{c|}{~~AbsRel~$\downarrow$~~} & \multicolumn{1}{c}{~~MSE~$\downarrow$~~} & \multicolumn{1}{c}{~~RMSE~$\downarrow$~~} &\multicolumn{1}{c|}{~~Bump~$\downarrow$~~} & \multicolumn{1}{c}{~~MAE*~$\downarrow$~~} &\multicolumn{1}{c}{~~MSE*~$\downarrow$~~}&\multicolumn{1}{c}{~~Secs~$\downarrow$~~}\\ \hline DefocusNet~\cite{maximov2020focus} & \multicolumn{1}{c}{~~0.0637~~} & \multicolumn{1}{c}{~~0.0175~~} & \multicolumn{1}{c|}{~~0.1386~~} & \multicolumn{1}{c}{~~0.0593~~} & \multicolumn{1}{c}{~~0.2355~~} & \multicolumn{1}{c|}{~~2.69~~} & \multicolumn{1}{c}{~~0.1650~~} & \multicolumn{1}{c}{~~0.0800~~} & \multicolumn{1}{c}{~~0.1598~~} \\ AiFDepthNet~\cite{Wang-ICCV-2021} & \multicolumn{1}{c}{~~0.0549~~} & \multicolumn{1}{c}{~~0.0127~~} & \multicolumn{1}{c|}{~~0.1115~~} & \multicolumn{1}{c}{~~0.0472~~} & \multicolumn{1}{c}{~~0.2014~~} & \multicolumn{1}{c|}{~~1.58~~}& \multicolumn{1}{c}{~~0.1568~~} & \multicolumn{1}{c}{~~0.0764~~} & \multicolumn{1}{c}{~~0.1387~~}\\ Ours& \multicolumn{1}{c}{\bf{~~0.0403~~}} & \multicolumn{1}{c}{\bf{~~0.0087~~}} & 
\multicolumn{1}{c|}{\bf{~~0.0809~~}} & \multicolumn{1}{c}{\bf{~~0.0230~~}} & \multicolumn{1}{c}{\bf{~~0.1288~~}} & \multicolumn{1}{c|}{\bf{~~1.29~~}}& \multicolumn{1}{c}{\bf{~~0.1394~~}} & \multicolumn{1}{c}{\bf{~~0.0723~~}}& \multicolumn{1}{c}{\bf{~~0.1269~~}} \\ \hline \end{tabular}} \vspace{-0.2cm} \label{tab:table1} \end{table} \begin{figure}[tb!] \includegraphics[width=1\linewidth]{ECCV_Figures/DefocusNet.pdf} \vspace{-0.7cm} \caption{Examples of depth prediction from AiFDepthNet and ours on DefocusNet dataset.} \label{fig:Defocus} \end{figure} \begin{figure}[t!] \includegraphics[width=1\linewidth]{ECCV_Figures/HCI_dataset.pdf} \vspace{-0.7cm} \caption{Qualitative comparison on 4D Light Field dataset. } \vspace{-0.7cm} \label{fig:4D_LF} \end{figure} \noindent\textbf{DefocusNet Dataset~\cite{maximov2020focus}.}\quad This dataset is rendered in a virtual space and generated using the Blender Cycles renderer~\cite{blender2018blender}. Each focal stack consists of only five defocused images whose focus distances are randomly sampled in an inverse depth space. The quantitative results are shown in \tabref{tab:table1}. As shown in~\Figref{fig:Defocus}, our method reconstructs smooth surfaces and sharp depth discontinuities more successfully than previous methods. \noindent\textbf{4D Light Field Dataset~\cite{honauer2016dataset}.}\quad This synthetic dataset has 10 focal slices with shallow DoFs for each focal stack. The numbers of focal stacks in the training and test splits are 20 and 4, respectively. For a fair comparison on this dataset, we follow the evaluation protocol of the relevant work~\cite{Wang-ICCV-2021}. In the qualitative comparisons of~\Figref{fig:4D_LF}, our SRD and EFD enable capturing sharp object boundaries, like the box, and fine details, like the lines hanging from the ceiling. In the quantitative evaluation of~\tabref{tab:table1}, our MSE and RMSE are half those of the comparison methods~\cite{abuolaim2020defocus,wanner2012globally}.
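The error and accuracy metrics reported throughout this section (MAE, MSE, AbsRel, and the $\delta$ thresholds) follow the standard depth-estimation definitions. A minimal NumPy sketch, written by us for illustration and not taken from any of the compared methods' code:

```python
import numpy as np

def depth_metrics(pred, gt, mask=None):
    """Standard depth metrics: MAE, MSE, AbsRel and delta-threshold accuracies."""
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    if mask is None:
        mask = gt > 0  # ignore holes in the ground truth
    p, g = pred[mask], gt[mask]
    # delta accuracy: fraction of pixels with max(p/g, g/p) below a threshold
    ratio = np.maximum(p / g, g / p)
    return {
        "MAE": np.mean(np.abs(p - g)),
        "MSE": np.mean((p - g) ** 2),
        "AbsRel": np.mean(np.abs(p - g) / g),
        "delta1": np.mean(ratio < 1.25),
        "delta2": np.mean(ratio < 1.25 ** 2),
        "delta3": np.mean(ratio < 1.25 ** 3),
    }
```

Masking out invalid ground-truth pixels matters in practice, e.g., for the depth holes of the Smartphone dataset discussed below.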
\noindent\textbf{Smartphone~\cite{herrmann2020learning}.}\quad This dataset contains real-world scenes captured with Pixel 3 smartphones. Unlike the previous datasets, the ground truth depth maps are obtained by multi-view stereo~\cite{schoenberger2016sfm,schoenberger2016mvs}, and depth holes are not considered in the evaluation. As expected, our network achieves promising performance over the state-of-the-art methods, whose results are reported in \tabref{tab:table1} and \Figref{fig:Smartphone}. We note that our method consistently yields the best-quality depth maps from focal stack images regardless of the dataset, thanks to our powerful defocused feature representations using both SRD and EFD. \begin{figure}[t!] \includegraphics[width=1\linewidth]{ECCV_Figures/Smartphone.pdf} \vspace{-0.7cm} \caption{ Qualitative results on Smartphone dataset.} \vspace{-0.1cm} \label{fig:Smartphone} \end{figure} \begin{table}[t] \caption{Quantitative results across different datasets for generalization of the state-of-the-art methods and ours. We train our depth prediction model on FlyingThings3D and test it on Middlebury stereo~(unit: pixel) and the DefocusNet dataset~(unit: meter).
For a fair comparison, we directly refer to the results of the works~\cite{Wang-ICCV-2021,maximov2020focus} as reported in \cite{Wang-ICCV-2021}.} \centering \small \resizebox{\linewidth}{!}{ \begin{tabular}{l|cc|ccccc} \multicolumn{1}{l|}{Method~~} & \multicolumn{1}{c}{~~Train Dataset~~} & \multicolumn{1}{c|}{~~Test Dataset~~} & \multicolumn{1}{c}{~~MAE~$\downarrow$~~} & \multicolumn{1}{c}{~~MSE~$\downarrow$~~} & \multicolumn{1}{c}{~~RMSE~$\downarrow$~~} & \multicolumn{1}{c}{~~AbsRel~$\downarrow$~~} & \multicolumn{1}{c}{~~SqRel~$\downarrow$~~}\\ \hline DefocusNet~\cite{maximov2020focus} & \multicolumn{1}{c}{~~~~} & \multicolumn{1}{c|}{~~~~} & \multicolumn{1}{c}{~~7.408~~} & \multicolumn{1}{c}{~~157.440~~} & \multicolumn{1}{c}{~~9.079~~} & \multicolumn{1}{c}{~~0.231~~} & \multicolumn{1}{c}{~~4.245~~} \\ AiFDepthNet~\cite{Wang-ICCV-2021} & \multicolumn{1}{c}{~~FlyingThings3d~~} & \multicolumn{1}{c|}{~~Middlebury~~} & \multicolumn{1}{c}{~~3.825~~} & \multicolumn{1}{c}{~~58.570~~} & \multicolumn{1}{c}{~~5.936~~} & \multicolumn{1}{c}{~~0.165~~} & \multicolumn{1}{c}{~~3.039~~} \\ Ours& \multicolumn{1}{c}{~~~~}& \multicolumn{1}{c|}{~~~~} & \multicolumn{1}{c}{\bf{~~1.645~~}} & \multicolumn{1}{c}{\bf{~~9.178~~}} & \multicolumn{1}{c}{\bf{~~2.930~~}} & \multicolumn{1}{c}{\bf{~~0.068~~}} & \multicolumn{1}{c}{\bf{~~0.376~~}} \\ \hline DefocusNet~\cite{maximov2020focus} & \multicolumn{1}{c}{~~~~}& \multicolumn{1}{c|}{~~~~} & \multicolumn{1}{c}{~~0.320~~} & \multicolumn{1}{c}{~~0.148~~} & \multicolumn{1}{c}{~~0.372~~} & \multicolumn{1}{c}{~~1.383~~} & \multicolumn{1}{c}{~~0.700~~} \\ AiFDepthNet~\cite{Wang-ICCV-2021} & \multicolumn{1}{c}{~~FlyingThings3d~~}& \multicolumn{1}{c|}{~~DefocusNet~~} & \multicolumn{1}{c}{~~0.183~~} & \multicolumn{1}{c}{~~0.080~~} & \multicolumn{1}{c}{~~0.261~~} & \multicolumn{1}{c}{~~0.725~~} & \multicolumn{1}{c}{~~0.404~~} \\ Ours& \multicolumn{1}{c}{~~~~}& \multicolumn{1}{c|}{~~~~} & \multicolumn{1}{c}{\bf{~~0.163~~}} & \multicolumn{1}{c}{\bf{~~0.076~~}} &
\multicolumn{1}{c}{\bf{~~0.259~~}} & \multicolumn{1}{c}{\bf{~~0.590~~}} & \multicolumn{1}{c}{\bf{~~0.360~~}} \\ \hline \end{tabular}} \vspace{-0.4cm} \label{tab:cross_domain} \end{table} \begin{figure}[t!] \includegraphics[width=1\linewidth]{ECCV_Figures/Middlebury.pdf} \vspace{-0.7cm} \caption{ Qualitative results on Middlebury dataset. } \label{fig:Middlebury} \vspace{-0.7cm} \end{figure} \noindent\textbf{Generalization across different datasets.}\quad Like~\cite{Wang-ICCV-2021}, we demonstrate the generality of the proposed network. For this, we train our network on FlyingThings3D~\cite{mayer2016large}, a large-scale synthetic dataset, and test it on two datasets: Middlebury Stereo~\cite{scharstein2014high} and the DefocusNet dataset~\cite{maximov2020focus}. As shown in \tabref{tab:cross_domain} and \Figref{fig:Middlebury}, our network still shows impressive results on both datasets. \begin{figure}[t] \includegraphics[width=1\linewidth]{ECCV_Figures/ablation_study_alignment.pdf} \vspace{-0.7cm} \caption{Ablation study on our alignment network. The first and second rows show a target and a reference focal slice whose FoVs have the smallest and largest values, respectively. The third row shows depth estimation results according to each alignment method.} \label{fig:ablation_alignment} \vspace{-0.2cm} \end{figure} \begin{table}[t] \caption{Ablation study for alignment network.
Unit: meter } \centering \small \resizebox{\linewidth}{!}{ \begin{tabular}{l|cccccccccc} Module~~ & \multicolumn{1}{c}{~~MAE~$\downarrow$~~} & \multicolumn{1}{c}{~~MSE~$\downarrow$~~} &\multicolumn{1}{c}{~~RMSE log~$\downarrow$~~} & \multicolumn{1}{c}{~~AbsRel~$\downarrow$~~} & \multicolumn{1}{c}{~~SqRel~$\downarrow$~~} &{~~$\delta = 1.25$~$\uparrow$~~} & {~~$\delta =1.25^{2}$~$\uparrow$~~} &{~~$\delta=1.25^{3}$~$\uparrow$~~}&{~~Secs~$\downarrow$~~}&{~~GPU~~~~}\\ \hline w/o alignment& \multicolumn{1}{c}{~~0.0247~~} & \multicolumn{1}{c}{~~0.0014~~} & \multicolumn{1}{c}{~~0.0915~~} & \multicolumn{1}{c}{~~0.0067~~} & \multicolumn{1}{c}{~~0.0034~~} & \multicolumn{1}{c}{~~0.9707~~} & \multicolumn{1}{c}{~~0.9970~~} & \multicolumn{1}{c}{~~0.9995~~} & \multicolumn{1}{c}{~~\bf{0.0107}~~}&\multicolumn{1}{c}{2080Ti}\\ w/ initial FoVs & \multicolumn{1}{c}{~~0.0165~~} & \multicolumn{1}{c}{~~0.0009~~} & \multicolumn{1}{c}{~~0.0636~~} &\multicolumn{1}{c}{~~0.0400~~} & \multicolumn{1}{c}{~~0.0019~~} & \multicolumn{1}{c}{~~0.9867~~} & \multicolumn{1}{c}{~~0.9976~~} & \multicolumn{1}{c}{~~0.9994~~} & \multicolumn{1}{c}{~~0.0358~~} &\multicolumn{1}{c}{2080Ti}\\ Homography-based~~~~~~~~~~~~~~~~~~& \multicolumn{1}{c}{~~\bf{0.0151}~~} & \multicolumn{1}{c}{~~\bf{0.0007}~~} & \multicolumn{1}{c}{~~\bf{0.0570}~~} & \multicolumn{1}{c}{~~0.0369~~} & \multicolumn{1}{c}{~~\bf{0.0015}~~} & \multicolumn{1}{c}{~~\bf{0.9907}~~} & \multicolumn{1}{c}{~~\bf{0.9986}~~} & \multicolumn{1}{c}{~~\bf{0.9997}~~} & \multicolumn{1}{c}{~~0.8708~~}&\multicolumn{1}{c}{R3600} \\ \hline Ours& \multicolumn{1}{c}{~~\bf{0.0151}~~} & \multicolumn{1}{c}{~~\bf{0.0007}~~} & \multicolumn{1}{c}{~~0.0578~~} & \multicolumn{1}{c}{~~\bf{0.0365}~~} & \multicolumn{1}{c}{~~0.0016~~} & \multicolumn{1}{c}{~~0.9898~~} & \multicolumn{1}{c}{~~0.9984~~} & \multicolumn{1}{c}{~~0.9996~~} &\multicolumn{1}{c}{~~0.0923~~}&\multicolumn{1}{c}{2080Ti}\\ \hline \end{tabular}} \vspace{-0.5cm} \label{tab:AlignmentNetwork} \end{table} 
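For intuition on what the initial-FoV warp provides before our trained alignment network refines it, compensating focus breathing amounts, to first order, to rescaling each focal slice about its center so that all slices share the smallest FoV. The sketch below is a hypothetical, purely geometric version of this step; the function names and the FoV-ratio scaling are our illustrative assumptions, not our released implementation:

```python
import numpy as np

def rescale_about_center(img, scale):
    """Warp a (H, W) slice by scaling about the image center.

    scale > 1 zooms in (narrower FoV); bilinear sampling with edge clamping.
    """
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    # Inverse mapping: where each output pixel samples in the source image.
    sy = (ys - cy) / scale + cy
    sx = (xs - cx) / scale + cx
    y0 = np.clip(np.floor(sy).astype(int), 0, h - 2)
    x0 = np.clip(np.floor(sx).astype(int), 0, w - 2)
    fy = np.clip(sy - y0, 0.0, 1.0)
    fx = np.clip(sx - x0, 0.0, 1.0)
    top = img[y0, x0] * (1 - fx) + img[y0, x0 + 1] * fx
    bot = img[y0 + 1, x0] * (1 - fx) + img[y0 + 1, x0 + 1] * fx
    return top * (1 - fy) + bot * fy

def align_stack(slices, fovs):
    """Align all focal slices to the slice with the smallest FoV (a sketch)."""
    target = min(fovs)
    return [rescale_about_center(s, f / target) for s, f in zip(slices, fovs)]
```

A learned flow field, as in our alignment network, can then correct the residual misalignment that this global similarity warp cannot model.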
\begin{figure}[t] \includegraphics[width=1\linewidth]{ECCV_Figures/Ablation_SRD_EFD.pdf} \vspace{-0.9cm} \caption{Candidate modules of our SRD and EFD. (a) 2D ResNet block, (b) 3D ResNet block, (c) Max pooling + 3D Conv, (d) Average pooling + 3D Conv, (e) Strided Conv and (f) 3D pooling layer. } \label{fig:ablation_SRD_EFD} \end{figure} \begin{table}[ht] \caption{Ablation studies for SRD and EFD. Unit: meter } \centering \small \resizebox{\linewidth}{!}{ \begin{tabular}{l|cccccccc} Module~~ & \multicolumn{1}{c}{~~MAE~$\downarrow$~~} & \multicolumn{1}{c}{~~MSE~$\downarrow$~~} &\multicolumn{1}{c}{~~RMSE log~$\downarrow$~~} & \multicolumn{1}{c}{~~AbsRel~$\downarrow$~~} & \multicolumn{1}{c}{~~SqRel~$\downarrow$~~} &{~~$\delta = 1.25$~$\uparrow$~~} & {~~$\delta =1.25^{2}$~$\uparrow$~~} &{~~$\delta=1.25^{3}$~$\uparrow$~~}\\ \hline SRD ~$\rightarrow$~ 2D ResNet block& \multicolumn{1}{c}{~~0.0421~~} & \multicolumn{1}{c}{~~0.0095~~} & \multicolumn{1}{c}{~~0.1614~~} & \multicolumn{1}{c}{~~0.0842~~} & \multicolumn{1}{c}{~~0.0142~~} & \multicolumn{1}{c}{~~0.9082~~} & \multicolumn{1}{c}{~~0.9722~~} & \multicolumn{1}{c}{~~0.9873~~} \\ SRD ~$\rightarrow$~ 3D ResNet block& \multicolumn{1}{c}{~~0.0409~~} & \multicolumn{1}{c}{~~0.0088~~} & \multicolumn{1}{c}{~~0.1576~~} &\multicolumn{1}{c}{~~0.0818~~} & \multicolumn{1}{c}{~~\bf{0.0128}~~} & \multicolumn{1}{c}{~~0.9123~~} & \multicolumn{1}{c}{~~0.9725~~} & \multicolumn{1}{c}{~~0.9891~~} \\ \hline EFD ~$\rightarrow$~ Maxpooling~+~3D Conv& \multicolumn{1}{c}{~~0.0421~~} & \multicolumn{1}{c}{~~0.0094~~} & \multicolumn{1}{c}{~~0.1622~~} & \multicolumn{1}{c}{~~0.0845~~} & \multicolumn{1}{c}{~~0.0143~~} & \multicolumn{1}{c}{~~0.9125~~} & \multicolumn{1}{c}{~~0.9712~~} & \multicolumn{1}{c}{~~0.9849~~} \\ EFD ~$\rightarrow$~ Avgpooling~+~3D Conv& \multicolumn{1}{c}{~~0.0422~~} & \multicolumn{1}{c}{~~0.0097~~} & \multicolumn{1}{c}{~~0.1628~~} & \multicolumn{1}{c}{~~0.0830~~} & \multicolumn{1}{c}{~~0.0141~~} & \multicolumn{1}{c}{~~0.9126~~} & 
\multicolumn{1}{c}{~~0.9718~~} & \multicolumn{1}{c}{~~0.9860~~} \\ EFD ~$\rightarrow$~ Strided Conv& \multicolumn{1}{c}{~~0.0419~~} & \multicolumn{1}{c}{~~0.0091~~} & \multicolumn{1}{c}{~~0.1630~~} & \multicolumn{1}{c}{~~0.0842~~} & \multicolumn{1}{c}{~~0.0135~~} & \multicolumn{1}{c}{~~0.9144~~} & \multicolumn{1}{c}{~~0.9725~~} & \multicolumn{1}{c}{~~0.9867~~} \\ EFD ~$\rightarrow$~ 3D Pooling Layer& \multicolumn{1}{c}{~~0.0414~~} & \multicolumn{1}{c}{~~0.0089~~} & \multicolumn{1}{c}{~~0.1594~~} & \multicolumn{1}{c}{~~0.0843~~} & \multicolumn{1}{c}{~~0.0132~~} & \multicolumn{1}{c}{~~0.9088~~} & \multicolumn{1}{c}{~~0.9747~~} & \multicolumn{1}{c}{~~0.9886~~} \\ \hline Ours & \multicolumn{1}{c}{\bf{~~0.0403~~}} & \multicolumn{1}{c}{\bf{~~0.0087~~}} & \multicolumn{1}{c}{\bf{~~0.1534~~}} & \multicolumn{1}{c}{\bf{~~0.0809~~}} & \multicolumn{1}{c}{{~~0.0130~~}} & \multicolumn{1}{c}{\bf{~~0.9137~~}} & \multicolumn{1}{c}{\bf{~~0.9761~~}} & \multicolumn{1}{c}{\bf{~~0.9900~~}} \\ \hline \end{tabular}} \vspace{-0.3cm} \label{tab:submoduleAblation} \end{table} \begin{figure}[ht!] \includegraphics[width=1\linewidth]{ECCV_Figures/Random_input.pdf} \vspace{-0.8cm} \caption{ (a) Performance change according to the number of focal slices in the training and test phases. (b) One of the focal slices and (c) its ground truth depth map. (d) to (g): output depth maps for a varying number of input focal slices in the training phase. } \label{fig:Random_input} \vspace{-0.5cm} \end{figure} \subsection{Ablation studies} We carry out extensive ablation studies to demonstrate the effectiveness of each module of the proposed network. \noindent\textbf{Alignment network.}\quad We first evaluate our alignment network. To do this, we render focal stacks using our simulator, which generates defocused images based on camera metadata.
We test our alignment network in four cases: 1) without any warping, 2) with only the initial FoVs in \eqref{eq:Fov}, 3) with a classical homography method~\cite{evangelidis2008parametric}, and 4) with our alignment network using the initial FoVs. The quantitative results are reported in \tabref{tab:AlignmentNetwork}, and an example is displayed in~\Figref{fig:ablation_alignment}. The results demonstrate that our alignment network is much faster than the classic homography-based method while achieving comparable performance. \noindent\textbf{SRD and EFD.}\quad We compare our modules with the other feature extraction modules depicted in \Figref{fig:ablation_SRD_EFD}. We conduct this ablation study on the DefocusNet dataset~\cite{maximov2020focus} because it has more diverse DoF values than the other datasets. The quantitative results are reported in \tabref{tab:submoduleAblation}. When we replace our SRD module with either a 3D ResNet block or a 2D ResNet block, performance drops, even with more learnable parameters for the 3D ResNet block. We also compare our EFD module with four replaceable modules: max pooling+3D Conv, average pooling+3D Conv, strided convolution and a 3D pooling layer. As expected, our EFD module achieves the best performance because it allows better gradient flow while preserving the defocus property. \noindent\textbf{Number of focal slices.}\quad Like previous DfF networks \cite{Wang-ICCV-2021,maximov2020focus}, our network can handle an arbitrary number of focal slices by virtue of 3D convolutions. Following the relevant work~\cite{Wang-ICCV-2021}, we train our network in three different ways, whose results are reported in~\Figref{fig:Random_input}: '5' denotes a model trained using five focal slices; 'Same' denotes that the numbers of focal slices in the training and test phases are the same; 'Random' denotes a model trained using an arbitrary number of focal slices.
The '5' case performs poorly when a different number of focal slices is used in the test phase, while the 'Same' case shows promising performance. Nevertheless, the 'Random' case consistently achieves good performance regardless of the number of focal slices. \vspace{-0.3cm} \section{Conclusion} \vspace{-0.2cm} In this paper, we have presented a novel and truly end-to-end DfF architecture. To this end, we first propose a trainable alignment network for sequential defocused images. We then introduce a novel feature extraction module and an efficient downsampling module for robust DfF tasks. The proposed network achieves the best performance in the public DfF/DfD benchmark and various evaluations. \noindent\textbf{Limitation.}\quad There is still room for improvement. A more sophisticated model for the flow fields in the alignment network would enhance depth prediction results. More parameters can be useful for extreme rotations. Another direction is to improve depth prediction by employing focal slice selection, such as defocus channel attention, in the aggregation process. \noindent\textbf{Acknowledgement}\quad This work is in part supported by the Institute of Information $\&$ communications Technology Planning $\&$ Evaluation (IITP) (No.2021-0-02068, Artificial Intelligence Innovation Hub), Vehicles AI Convergence Research $\&$ Development Program through the National IT Industry Promotion Agency of Korea (NIPA), `Project for Science and Technology Opens the Future of the Region' program through the INNOPOLIS FOUNDATION (Project Number: 2022-DD-UP-0312) funded by the Ministry of Science and ICT (No.S1602-20-1001), the National Research Foundation of Korea (NRF) ( No. 2020R1C1C10\\12635) grant funded by the Korea government (MSIT), the Ministry of Trade, Industry and Energy (MOTIE) and Korea Institute for Advancement of Technology (KIAT) through the International Cooperative R$\&$D program (P0019797), and the GIST-MIT Collaboration grant funded by the GIST in 2022.
\clearpage \bibliographystyle{splncs04}
\section{Introduction} \label{intro} Bilayer membranes are formed by the self-assembly of amphiphilic molecules in water or brine~\cite{isra}. The aliphatic chains of the constituent molecules condense into an oily sheet that is shielded from contact with water by the polar heads of the molecules. Membranes can form various phases, e.g., lamellar ($L_\alpha$), vesicular ($L_4$), or sponge ($L_3$) phases~\cite{safran,luca}. For these self-assembled systems with a conserved area, it is essentially the competition between their curvature energy and their entropy that determines their large-scale behavior. In the standard macroscopic theory, membranes are modeled as structureless surfaces with a curvature elasticity~\cite{canhel1,canhel2}. This description, which has the advantage of involving only two material constants, accounts for a large number of universal properties and behaviors of amphiphilic membranes~\cite{safran,luca,statsurf}. On the other hand, a number of attempts have been made toward a more {\em microscopic} description of membranes \cite{marcel,owicki,huang,dan1,dan2,dan3}, the goal being to take into account various structural parameters of the bilayer, such as its thickness, the ordering of the chain segments, etc. As far as large-scale properties are concerned, these extra degrees of freedom are irrelevant since they relax over microscopic lengths. Nevertheless, they can dictate important physical properties, such as adhesion behaviors, the short-range interaction between membrane inclusions, their aggregation properties, phase behaviors, etc. The aim of this work is to construct an elastic model of membranes that connects the microscopic and macroscopic descriptions. Besides the standard shape and dilation variables, we shall consider as elastic variables the {\em tilts\/} of the lipids in both monolayers~\cite{seifert,euro}. The model will be used to investigate the role of the monolayer tilts in the interaction between membrane inclusions.
\section{Elastic model} \label{sec:microel} \begin{figure*} \resizebox{1.0\textwidth}{!} {\hspace{30pt}\includegraphics{fig.def.eps}\hspace{30pt}} \caption{a) Average membrane shape, $h$, and dilation, $u$, variables. b) Construction of the membrane average tilt ${\bf m}=\frac{1}{2}({\bf p}^{(1)}-{\bf p}^{(2)})+\nabla h$ and tilt-difference ${\bf\widehat m}=\frac{1}{2}({\bf p}^{(1)}+{\bf p}^{(2)})$ variables.} \label{fig:def} \end{figure*} To construct an elastic model, one considers a distortion free energy depending on a particular set of structural parameters. Implicitly, this free energy results from integrating, over all the microscopic states compatible with these (fixed) parameters, the Boltzmann weight associated with the microscopic Hamiltonian of the system. In practice, based on the symmetries of the system, one writes an expansion in powers of the structural parameters and their gradients. The choice of the relevant parameters depends on which deformations can be imposed externally on the system. We shall consider four structural parameters: (1) the membrane thickness, which can be modified, e.g., by the presence of an integral protein, (2) the membrane average shape, in order to connect with the large-scale theory and because it can be excited by a conically shaped inclusion, and (3) the tilts of the molecules within each monolayer (one tilt field per monolayer), which can be independently excited by an inclusion with, e.g., a diamond-like shape. To simplify, we assume that the membrane undergoes only small deviations with respect to its flat ground state. We denote by $h^{(1)}(x,y)$ and $h^{(2)}(x,y)$ the vertical displacements (along $z$) of the chain--water interfaces of the upper and lower monolayers, respectively, with respect to their positions in the flat unperturbed state (Fig.~\ref{fig:def}a).
For further use, let us define the average {\em shape\/} $h(x,y)$ and the membrane {\em dilation\/} by \begin{eqnarray} \label{def_h} h&=&{h^{(1)}+h^{(2)}\over2}\\ \label{def_u} u&=&{h^{(1)}-h^{(2)}\over2}\,. \end{eqnarray} To construct the tilt variables, we introduce the vectors ${\bf p}^{(1)}(x,y)$ and ${\bf p}^{(2)}(x,y)$ defined in both monolayers as the projections onto the $(x,y)$ plane of the unit vectors parallel to the molecular direction and oriented from chain to polar head (Fig.~\ref{fig:def}b). The tilts relative to the membrane normal are measured by ${\bf m}^{(1)}={\bf p}^{(1)}+\nabla h$ and ${\bf m}^{(2)}={\bf p}^{(2)}-\nabla h$. Let us define the {\em average tilt\/} ${\bf m}(x,y)$ and the {\em tilt-difference\/} ${\bf\widehat m}(x,y)$ by \begin{eqnarray} \label{def_p} {\bf m}&=&{{\bf m}^{(1)}-{\bf m}^{(2)}\over2}\,,\\ \label{def_pchap} {\bf\widehat m}&=&{{\bf m}^{(1)}+ {\bf m}^{(2)}\over2}\,. \end{eqnarray} \subsection{Shape and dilation distortion energy} We start by constructing the most general quadratic free energy expansion in powers of $h^{(1)}$ and $h^{(2)}$ and its first and second gradient, \begin{equation} h^{(\alpha)}\,;\quad h_{,i}^{(\alpha)}\,;\quad h_{,ij}^{(\alpha)}\,. \end{equation} Here, $\alpha=1,2$ is the monolayer label, and the comma denotes partial derivation with respect to the coordinates $x$ and $y$. We write the free energy as $F=F^{(1)}+F^{(2)}+F^{(12)}$ with all the interaction terms coupling $h^{(1)}$ and $h^{(2)}$ in $F^{(12)}$. The symmetry of the bilayer imposes invariance with respect to the transformation: \begin{equation}\label{sym} h^{(1)}\to-h^{(2)},\quad h^{(2)}\to-h^{(1)}\,. 
\end{equation} Therefore, the most general quadratic form for $F^{(\alpha)}$ is \begin{eqnarray} F^{(\alpha)}&=& (-1)^{\alpha}a_1\,h^{(\alpha)}+ (-1)^{\alpha}a_2\,h^{(\alpha)}_{,ii}+ a_3\,h^{(\alpha)2}\nonumber\\&&+ a_4\,h^{(\alpha)}h^{(\alpha)}_{,ii}+ a_5\,h^{(\alpha)}_{,i}\,h^{(\alpha)}_{,i}+ a_6\,h^{(\alpha)2}_{,ii}\nonumber\\&&+ a_7\,h^{(\alpha)}_{,ij}h^{(\alpha)}_{,ij}\,, \end{eqnarray} summation over repeated indices being understood. The interaction energy, containing all the bilinear scalars, has the form \begin{eqnarray} F^{(12)}&=& b_1\,h^{(1)}\,h^{(2)}+ b_2\left(h^{(1)}\,h^{(2)}_{,ii}+h^{(2)}\,h^{(1)}_{,ii}\right)\nonumber\\&&+ b_3\,h^{(1)}_{,i}\,h^{(2)}_{,i}+ b_4\,h^{(1)}_{,ii}\,h^{(2)}_{,jj}+ b_5\,h^{(1)}_{,ij}\,h^{(2)}_{,ij}\,. \end{eqnarray} Expressing now $F$ in terms of $h$ and $u$, we obtain the decoupled form $F=F_h+F_u$, with \begin{eqnarray} F_h&=& d_1\,h^2+ d_2\,h_{,i}\,h_{,i}+ d_3\,h\,h_{,ii}+ d_4\,h^2_{,ii}\nonumber\\&&+ d_5\,h_{,ij}\,h_{,ij}\\ F_u&=& e_1\, u+ e_2\, u^2+ e_3\,u_{,i}\, u_{,i}+ e_4\, u_{,ii}+ e_5\, u\, u_{,ii}\nonumber\\&&+ e_6\,u^2_{,ii}+ e_7\, u_{,ij}\, u_{,ij}\,. \end{eqnarray} The new coefficients are related to the former by an invertible linear transformation. In this expression, several terms obviously vanish and others can be discarded: $e_1\equiv0$, since the minimum energy corresponds to $u=0$ by construction; $d_1=d_3\equiv0$ since $F$ must be invariant under a translation. We shall set $d_2=0$, since the tension of membranes usually vanishes~\cite{david,luca}. There is no reason however to discard the term $e_3(\nabla u)^2$, which represents the energy density associated with a {\em gradient of the membrane thickness}. The latter term involves not only the extra cost of lengthening the chain--water interfaces but also that of modulating the stretching of the molecular chains.
Consider a planar membrane with a thickness modulation at some wavevector $q$: its elastic energy may be well described by the term $\propto u^2$ as long as $qa\ll1$ ($a$ being the monolayer thickness); however, the term $\propto(\nabla u)^2$ should not be neglected when $qa\approx1$. From this point of view, the present model differs from those of Refs.~\cite{huang,dan1,dan2,dan3} that neglect the coefficient $e_3$~\cite{note}. We can now rewrite $F$ in a more traditional way. Relabeling the nonzero coefficients, and making use of $h_{,ij}\,h_{,ij}=(\nabla^2h)^2-2\,{\rm Det}(h_{,ij})$, we arrive at \begin{eqnarray} F_h&=& \frac{1}{2}\kappa\,(\nabla^2h)^2+ \bar\kappa\,{\rm Det}(h_{,ij})\,,\\ F_ u&=& \frac{1}{2}B\,u^2+ \frac{1}{2}\lambda\left(\nabla u\right)^2+ \sigma\,\nabla^2 u+ \sigma'\,u\,\nabla^2 u\nonumber\\&&+ \frac{1}{2}\kappa'\,(\nabla^2 u)^2+ \bar\kappa'\,{\rm Det}(u_{,ij})\,. \end{eqnarray} $F_h$ is simply the Helfrich energy~\cite{canhel1,canhel2}, in which $\nabla^2h$ is twice the mean curvature of the average membrane shape, and ${\rm Det}(h_{,ij})$ is its Gaussian curvature. The thickness variations, which are completely decoupled from the average membrane shape, are described by an energy $F_u$ similar to that of Refs.~\cite{dan1,dan2,dan3}, however with two important differences: (1) there is a non-vanishing term $\propto\!\left(\nabla u\right)^2$ at lowest order, (2) the bending constants $\kappa'$ and $\bar\kappa'$ are different from the Helfrich constants appearing in $F_h$. To further simplify our model, we shall discard the terms proportional to $\sigma$ and $\sigma'$, since they can be transformed to boundary terms by integration by parts; we shall also discard the terms proportional to $\kappa'$ and $\bar\kappa'$, in order to keep only the leading-order saturation terms.
We are left (at the moment) with \begin{equation} F=\frac{1}{2}\kappa\,(\nabla^2h)^2+ \bar\kappa\,{\rm Det}(h_{,ij})+ \frac{1}{2}B\,u^2+ \frac{1}{2}\lambda\left(\nabla u\right)^2\,. \end{equation} \subsection{Tilt distortion energy and coupling terms} We expand the distortion energy associated with the tilts of the molecular orientation in powers of \begin{equation} m_i^{(\alpha)}\,;\quad m_{i,j}^{(\alpha)}\,. \end{equation} The tilt gradient $m^{(\alpha)}_{i,j}$ is a non-symmetric second-rank tensor. We write the tilt free energy as $G=G^{(1)}+G^{(2)}+G^{(12)}+G_{\rm int}$, where all the terms coupling ${\bf m}^{(1)}$ and ${\bf m}^{(2)}$ are in $G^{(12)}$ and all the interaction terms coupling the tilts and the membrane shape or dilation are in $G_{\rm int}$. The interaction can be divided into four contributions: $G_{\rm int}=G_u^{(1)}+G_u^{(2)}+G_h^{(1)}+G_h^{(2)}$, each term containing all the contributions bilinear in either $m^{(1)}$ or $m^{(2)}$ and $u$ or $h$. Because of the symmetry of the bilayer, we require invariance with respect to the transformation (\ref{sym}) and the exchange of ${\bf m}^{(1)}$ and ${\bf m}^{(2)}$.
Writing all the linear and quadratic scalars yields \begin{eqnarray} G^{(\alpha)}&=& A_1\,m_{i,i}^{(\alpha)}+ A_2\,m_i^{(\alpha)}\,m_i^{(\alpha)}+ A_3\,m_{i,i}^{(\alpha)2}\nonumber\\&&+ A_4\,m_{i,j}^{(\alpha)}\,m_{i,j}^{(\alpha)}+ A_5\,m_{i,j}^{(\alpha)}\,m_{j,i}^{(\alpha)}\\ G^{(12)}&=& B_1\,m_i^{(1)}\,m_i^{(2)}+ B_2\,m_{i,i}^{(1)}\,m_{j,j}^{(2)}\nonumber\\&&+ B_3\,m_{i,j}^{(1)}\,m_{i,j}^{(2)}+B_4\, m_{i,j}^{(1)}\,m_{j,i}^{(2)}\\ G_u^{(\alpha)}&=& C_1\,m_i^{(\alpha)}\,u_{,i}+ C_2\,m_{i,i}^{(\alpha)}\,u+ C_3\,m_{i,i}^{(\alpha)}\,u_{,jj}\nonumber\\&&+ C_4\,m_{i,j}^{(\alpha)}\,u_{,ij}\\ G_h^{(\alpha)}&=& (-1)^{\alpha}D_1\,m_i^{(\alpha)}\,h_{,i}+ (-1)^{\alpha}D_2\,m_{i,i}^{(\alpha)}\,h\nonumber\\&&+ (-1)^{\alpha}D_3\,m_{i,i}^{(\alpha)}\,h_{,jj}+ (-1)^{\alpha}D_4\,m_{i,j}^{(\alpha)}\,h_{,ij} \end{eqnarray} As previously, several terms can be discarded: $A_1=C_2=D_2\equiv0$, since we assume no spontaneous splay of the tilt; $D_1\equiv0$ since the minimum energy is still achieved with zero tilts when the membrane is rotated. We can also discard the terms with coefficients $A_5$ and $B_4$: integrating them by parts merely yields boundary terms (and a renormalization of $A_3$ and $B_2$). In terms of the variables ${\bf m}$ and ${\bf\widehat m}$, we can write the total tilt energy as $G=G_{\bf m}+G_{{\bf m}h}+G_{\bf\widehat m}+G_{{\bf\widehat m}u}$, with \begin{eqnarray} G_{\bf m}&=& \frac{1}{2}t\,m_i\,m_i+ k_1\,m_{i,i}^2+ k_2\,m_{i,j}\,m_{i,j}\\ G_{{\bf m}h}&=& d_3\,m_{i,i}\,h_{,jj}+ d_4\,m_{i,j}\,h_{,ij}\\ G_{\bf\widehat m}&=& \frac{1}{2}t'\,\widehat m_i\,\widehat m_i+ k'_1\,\widehat m_{i,i}^2+ k'_2\,\widehat m_{i,j}\,\widehat m_{i,j}\\ G_{{\bf\widehat m}u}&=& c\,\widehat m_i\,u_{,i}+ c_1\,\widehat m_{i,i}\,u_{,jj}+ c_2\,\widehat m_{i,j}\,u_{,ij}\,. \end{eqnarray} We can finally discard the terms with coefficients $c_2$ and $d_4$ by integrating by parts, and neglect the term with coefficient $c_1$ as a higher-order coupling term. 
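As an illustration of how the change of variables decouples the tilt energy, substitute ${\bf m}^{(1)}={\bf\widehat m}+{\bf m}$ and ${\bf m}^{(2)}={\bf\widehat m}-{\bf m}$ into the lowest-order terms (the same mechanism operates for the gradient terms): \begin{equation} A_2\left(m_i^{(1)}m_i^{(1)}+m_i^{(2)}m_i^{(2)}\right)+ B_1\,m_i^{(1)}m_i^{(2)}= \left(2A_2+B_1\right)\widehat m_i\,\widehat m_i+ \left(2A_2-B_1\right)m_i\,m_i\,. \end{equation} The cross terms ${\bf m}\cdot{\bf\widehat m}$ cancel by the bilayer symmetry, which identifies $\frac{1}{2}t=2A_2-B_1$ and $\frac{1}{2}t'=2A_2+B_1$.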
\subsection{Total distortion energy} In vectorial notations and after some simple manipulations, the total distortion energy, i.e., $F+G$, can be written as $H_{hm}+H_{u\widehat m}$, with \begin{eqnarray} H_{hm}&=& \frac{1}{2}\kappa\,(\nabla^2h)^2+ \bar\kappa\,{\rm Det}\,(h_{,ij}) -\gamma\,\nabla^2 h\,(\nabla\cdot{\bf m})\nonumber\\&+& \frac{1}{2}t\,{\bf m}^2+ \frac{1}{2}K_1\,(\nabla\cdot{\bf m})^2+ \frac{1}{2}K_2\,(\nabla\times{\bf m})^2 \end{eqnarray} and \begin{eqnarray}\label{umc} H_{u\widehat m}&=& \frac{1}{2}B\,u^2+ \frac{1}{2}\lambda\left(\nabla u\right)^2+ c\,\nabla u\cdot{\bf\widehat m}\nonumber\\&+& \frac{1}{2}t'\,{\bf\widehat m}^2+ \frac{1}{2}K'_1\,(\nabla\cdot{\bf\widehat m})^2+ \frac{1}{2}K'_2\,(\nabla\times{\bf\widehat m})^2 \end{eqnarray} The total energy therefore splits up into a contribution $H_{hm}$ involving the average shape $h$ and the average tilt ${\bf m}$, and a decoupled contribution $H_{u\widehat m}$ involving the dilation $u$ and the tilt-difference ${\bf\widehat m}$. The term with coefficient $\gamma>0$ is responsible for the ripple phase of tilted membranes~\cite{ripple1,ripple2}. Similarly, the term with coefficient $c$ can produce a ``ripple'' instability in which a thickness modulation occurs together with a tilt-difference modulation~\cite{euro}. From the tendency of the molecules to orient perpendicular to the chain--water interface we expect $c>0$ (Fig.~\ref{fig:coupling}). 
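The threshold of the latter instability can be read off from a single-mode analysis. For a longitudinal modulation $u=u_q\cos(qx)$ and $\widehat m_x=\widehat m_q\sin(qx)$, the spatial average of~(\ref{umc}) reduces to the quadratic form \begin{equation} \left\langle H_{u\widehat m}\right\rangle= \frac{1}{4}\left[\left(B+\lambda q^2\right)u_q^2 -2cq\,u_q\,\widehat m_q +\left(t'+K'_1q^2\right)\widehat m_q^2\right], \end{equation} which is positive definite for all $q$ provided $c^2q^2<(B+\lambda q^2)(t'+K'_1q^2)$, i.e., $c<\sqrt{\lambda t'}+\sqrt{BK'_1}$. Beyond this threshold, a modulation first develops at the wavevector $q_\star^2=\sqrt{Bt'/(\lambda K'_1)}$.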
\begin{figure} \resizebox{0.5\textwidth}{!} {\hspace{100pt}\includegraphics{fig.coupling.eps}\hspace{100pt}} \caption{Coupling between the tilt-difference ${\bf\widehat m}=\frac{1}{2}({\bf m}^{(1)}+{\bf m}^{(2)})$ and the thickness gradient $\nabla u$, via the term $c\,\nabla u\cdot{\bf\widehat m}$.} \label{fig:coupling} \end{figure} \subsection{Equilibrium equations and energies} The total elastic energy of the membrane is given by \begin{equation}\label{totalnrj} {\cal H}={\cal H}_{hm}+{\cal H}_{u\widehat m}= \int\!{\rm d}^2r\,H_{hm}+ \int\!{\rm d}^2r\,H_{u\widehat m}. \end{equation} The equilibrium membrane configurations are those minimizing ${\cal H}$ with respect to all possible local variations of the structural fields. The four corresponding Euler-Lagrange equations, namely $\delta{\cal H}/\delta h=0$, $\delta{\cal H}/\delta{\bf m}=0$, $\delta{\cal H}/\delta u=0$ and $\delta{\cal H}/\delta {\bf\widehat m}=0$, are explicitly \begin{eqnarray} \label{hta} \kappa\,\nabla^4h&=& \gamma\,\nabla^2\left(\nabla\cdot{\rm m}\right)\\ \label{htb} t\,{\bf m}-K_1\,\nabla\left(\nabla\cdot{\bf m}\right)&+& K_2\,\nabla\!\times\!\left(\nabla\!\times\!{\bf m}\right) =-\gamma\,\nabla\left(\nabla^2h\right)\nonumber\\ \end{eqnarray} and \begin{eqnarray} \label{uaa} &B\,u-\lambda\,\nabla^2u= c\,\nabla\cdot{\bf\widehat m}&\\ \label{uab} &t'\,{\bf\widehat m}-K'_1\,\nabla\left(\nabla\cdot{\bf\widehat m}\right)+ K'_2\,\nabla\!\times\!\left(\nabla\!\times\!{\bf\widehat m}\right)= -c\,\nabla u&\!\!. \end{eqnarray} The calculation of the energy of equilibrium configurations can be simplified in the following way.
Integrating ${\cal H}_{u{\widehat m}}$ by parts yields \begin{equation}\label{stokes} {\cal H}_{u{\widehat m}}=\frac{1}{2}\int\!{\rm d}^2r\, \left(u\,\frac{\delta{\cal H}}{\delta u}+ {\bf\widehat m}\cdot\frac{\delta{\cal H}}{\delta{\bf\widehat m}}\right) \,\,+\,\,{\cal H}'_{u{\widehat m}} \end{equation} with \begin{eqnarray}\label{eq:energie.up} {\cal H}'_{u{\widehat m}}= \frac{1}{2}\!\oint\!d\ell\,{\bf n}&\cdot&\left[ \lambda\,u\nabla u+ c\,u\,{\bf\widehat m}+ K'_1\left(\nabla\cdot{\bf\widehat m}\right)\,{\bf\widehat m} \right.\nonumber\\&&-\left. K'_2\left(\nabla\!\times\!{\bf\widehat m}\right)\! \times\!{\bf\widehat m} \right]\,, \end{eqnarray} where the last integral is restricted to the boundary of the integration domain, whose normal is ${\bf n}$. For equilibrium configurations, ${\cal H}_{u{\widehat m}}$ reduces to ${\cal H}'_{u{\widehat m}}$ since the first term of~(\ref{stokes}) vanishes. This provides a very useful simplification. One finds similarly that ${\cal H}_{hm}$ reduces for equilibrium configurations to \begin{eqnarray} {\cal H}'_{hm}&=&\frac{1}{2}\!\oint\!d\ell\,{\bf n}\cdot\left[ \kappa\,\nabla^2h\,\nabla h- \kappa\,h\,\nabla\!\left(\nabla^2h\right)\right.\nonumber \\&-&\left. \gamma\,\left(\nabla\cdot{\bf m}\right)\nabla h+ \gamma\,h\,\nabla\!\left(\nabla\cdot{\bf m}\right)- \gamma\,\left(\nabla^2h\right){\bf m}\right.\nonumber\\&+&\left. K_1\left(\nabla\cdot{\bf m}\right)\,{\bf m}- K_2\left(\nabla\times{\bf m}\right)\times{\bf m} \right]\nonumber\\\label{calH} \end{eqnarray} \subsection{Orders of magnitude}\label{sec:oom} For biological membranes, the bending constants $\kappa>0$ and $\bar\kappa<0$ have relatively high values $\simeq10^{-12}\,{\rm erg}$ ($\simeq25\,$ $k_{\rm B}T$) \cite{isra}. The typical value of the membrane area-stretching coefficient $k\simeq100\,{\rm erg}/{\rm cm}^2$~\cite{isra} allows one to determine the dilation modulus via $B=k/(2a)^2$, where $a\simeq20\times10^{-8}\,{\rm cm}$ is a typical monolayer thickness.
This yields $B\simeq6\times10^{14}\,{\rm erg}/{\rm cm}^4$. Therefore $B\simeq\kappa/a^4$: the membrane has a typical energy scale given by $\kappa$ and a typical length scale given by $a$. In the absence of experimental measurements, the other constants have to be estimated by dimensional analysis. We expect $\lambda$ to be $\approx\kappa/a^2$ (we recall that $\lambda$ is independent of the membrane tension). We therefore estimate $\lambda\approx25\,{\rm erg}/{\rm cm}^2$. Next, we shall assume roughly that tilting the molecules by a large angle compares energetically with compressing the membrane by half a monolayer thickness. This yields $t\approx t'\approx\lambda$. Then, we expect the characteristic lengths defined by $(K_i/t)^{1/2}$ to be of order $a$, which implies $K_i\approx\lambda a^2\approx10^{-12}\,{\rm erg}$. This value compares well with the bending constant. Finally, the $K'_i$'s are expected to be of the same order of magnitude as the $K_i$'s, and $c$ is dimensionally expected to compare with $\lambda$. \subsection{Remarks on the validity of the truncation of the energy expansion}\label{sec:trunc} Strictly speaking, in all the microscopic theories of membranes~\cite{owicki,huang,dan1,dan2,dan3}, it is somewhat arbitrary to truncate the expansion at lowest order in the derivatives of the distortion field. Indeed, since the typical energy and length scales of the membrane are $\kappa$ and $a$, respectively, the distance $\xi$ on which dilation perturbations relax is expected to be $\approx a$. For small distortions $u\!\ll\!a$, there is no problem in neglecting quartic terms such as $\sim\!(\nabla u)^4$: this term is $(u/a)^2$ times smaller than the leading term $\sim\!(\nabla u)^2$. However the term $\sim\!(\nabla^2u)^2$, which we have discarded, might be of the same order of magnitude as the leading term $\sim\!(\nabla u)^2$ if indeed its coefficient is $\simeq\kappa$.
The problem is that all the terms $\sim\!(\nabla^{n}u)^2$ may also be comparable if their coefficients are $\simeq\kappa\,a^{2n-4}$. However, at the microscopic scale corresponding to $a$, the membrane is not actually a continuum and there is not much meaning in considering high-order derivatives of the thickness. It may therefore be a good approximation to keep only the leading-order term. In any case, we expect that the lowest-order truncation of this continuum description will give a correct physical picture, at least qualitatively, of the competing trends associated with the various elastic variables. Note also that the truncation may be technically correct in the vicinity of a transition to a more ordered $L_\beta$ or $L_\beta'$ phase with a different equilibrium thickness, where $B$ might be significantly reduced and, accordingly, $\xi$ larger than $a$. \section{Interactions among membrane inclusions} Biological membranes contain a large number of inclusions such as integral proteins. Inclusions with a conical shape tend to curve the membrane since the lipids orient parallel to the inclusion's boundary in order to fill the volume. Because of the interference between the resulting membrane distortions, such inclusions are subject to long-range interactions~\cite{goulian,park,netz}. Inclusions also experience ``Casimir'' forces, which are due to the modification of the membrane fluctuation spectrum caused by their presence. The {\em short-range} interactions between inclusions arise from the local structural changes that the latter impose on the membrane~\cite{marcel,owicki,huang,dan1,dan2,dan3}. For instance, since proteins have a central hydrophobic region that spans the hydrophobic core of the membrane, a thickness mismatch between the hydrophobic region of the protein and that of the bilayer will result in a local membrane thickness perturbation.
Interference between such perturbations yields membrane-mediated interactions that add up to the standard screened-electrostatic and van der Waals interactions. \subsection{Boundary conditions} Let us consider a membrane inclusion such as the one depicted in Fig.~\ref{fig:general}. It is meant to model an integral protein with an arbitrary shape. For the sake of simplicity, however, we assume revolution symmetry. We suppose that the hydrophobic region of the inclusion has a thickness $2H$ that differs from the corresponding thickness $2a$ in the bilayer. The inclusion is also assumed to have a piecewise conical shape with two angles $\theta_1$ and $\theta_2$ pertaining to each monolayer and relative to the revolution axis. Let us consider an undistorted reference membrane above which the inclusion stands at a height $h_0$. As previously, we denote by $h^{(1)}$ and $h^{(2)}$ the positions of the upper and lower membrane interfaces with respect to their equilibrium positions in the reference membrane. Assuming a strong coupling between hydrophobic parts~\cite{owicki,huang,dan1,dan2,dan3}, we require that both monolayer interfaces reach the inclusion at the separation line between its hydrophobic and hydrophilic regions, i.e., \begin{eqnarray} h^{(1)}|_{r_0}&\simeq&h_0+H-a\,,\\ h^{(2)}|_{r_0}&\simeq&h_0-(H-a)\,. \end{eqnarray} These conditions are only approximate because the position where the interfaces reach the inclusion is equal to $r_0$ only at lowest order in the deformation variables. Another boundary ``condition'', which is not imposed but actually free to adjust to equilibrium, is the angle $\beta$ at which the mid-membrane shape $h$ departs from the inclusion. Calling ${\bf e}_{r}$ the unit vector along $r$, this condition is \begin{equation} \nabla h|_{r_0}\simeq\beta\,{\bf e}_{r}\,.
\end{equation} If we now require that the molecules within the membrane lie parallel to the inclusion's boundary, because of the space-filling constraint, we have the condition \begin{eqnarray} {\bf p}^{(1)}|_{r_0}&\simeq&-\theta_1\,{\bf e}_{r}\,,\\ {\bf p}^{(2)}|_{r_0}&\simeq&\theta_2\,{\bf e}_{r}\,. \end{eqnarray} Note that we have implicitly assumed that the revolution axis of the inclusion is normal to the reference plane $(x,y)$, although in the most general situation it can be tilted (this tilt will be zero by symmetry in the following). \subsubsection{Decoupled boundary conditions}\label{sec:decoupled} \begin{figure} \resizebox{0.5\textwidth}{!} {\hspace{5pt}\includegraphics{fig.general.eps}\hspace{5pt}} \caption{ General boundary conditions imposed by an inclusion.} \label{fig:general} \end{figure} In order to make use of the equilibrium equations previously derived, we must transform these boundary conditions into conditions involving the variables $h$, $u$, ${\bf m}$, and ${\bf\widehat m}$. From Eqs.~(\ref{def_h}--\ref{def_u}) and Eqs.~(\ref{def_p}--\ref{def_pchap}), we obtain \begin{eqnarray} \label{bch} h|_{r_0}&\simeq& h_0\,,\\ \nabla h|_{r_0}&\simeq& \beta\,{\bf e}_{r}\,,\\ \label{bct} {\bf m}|_{r_0}&\simeq& \left(\beta-\Theta\right){\bf e}_{r}\,, \end{eqnarray} and \begin{eqnarray} \label{bcu} u|_{r_0}&\simeq& u_0\,,\\ \label{bca} {\bf\widehat m}|_{r_0}&\simeq&\alpha_0\,{\bf e}_{r}\,, \end{eqnarray} where $\Theta=\frac{1}{2}(\theta_1+\theta_2)$ is the average cone angle of the inclusion, $u_0=H-a$ is the dilation and $\alpha_0=\frac{1}{2}(\theta_2-\theta_1)$ the tilt-difference set by the inclusion. It is important to note that these two sets of boundary conditions are decoupled in the same way as the corresponding equilibrium equations.
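The mapping from the inclusion geometry $(\theta_1,\theta_2,H)$ to the two decoupled sets of boundary data is simple enough to sketch numerically. The following Python fragment (the function name and the numerical values are illustrative assumptions, not part of the model) encodes the relations $\Theta=\frac{1}{2}(\theta_1+\theta_2)$, $u_0=H-a$ and $\alpha_0=\frac{1}{2}(\theta_2-\theta_1)$:

```python
def boundary_conditions(theta1, theta2, H, a):
    """Map the inclusion geometry onto the decoupled boundary data.

    theta1, theta2 : cone angles of the two monolayers (radians)
    H              : hydrophobic half-thickness of the inclusion (cm)
    a              : equilibrium monolayer thickness (cm)
    Returns (Theta, u0, alpha0): average cone angle (excites the coupled
    shape-tilt modes h, m), boundary dilation and boundary tilt-difference
    (excite the coupled u, m-hat modes).
    """
    Theta = 0.5 * (theta1 + theta2)    # average cone angle
    u0 = H - a                         # boundary dilation
    alpha0 = 0.5 * (theta2 - theta1)   # boundary tilt-difference
    return Theta, u0, alpha0

# A symmetric cone (theta1 = theta2) with a hydrophobic mismatch:
Theta, u0, alpha0 = boundary_conditions(0.1, 0.1, 24e-8, 20e-8)
```

A symmetric cone excites no tilt-difference ($\alpha_0=0$), whereas a cylindrical inclusion with a hydrophobic mismatch ($\theta_1=\theta_2=0$, $H\neq a$) excites only the dilation mode.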
\subsubsection{Arrays of inclusions} \begin{figure} \resizebox{0.5\textwidth}{!} {\hspace{100pt}\includegraphics{fig.wigner.eps}\hspace{100pt}} \caption{An array of inclusions and its Wigner-Seitz cell.} \label{fig:wigner} \end{figure} Following previous works, we shall calculate the constitutive energy of {\em arrays} of inclusions. Paradoxically, it is easier to approximately calculate the energy of an array than to calculate exactly the interaction between two inclusions. Since membrane-mediated interactions are not pairwise additive, this is the correct procedure to investigate the stability of $2D$ crystalline structures. To capture the physics of an array of inclusions, the standard method is to consider a single inclusion surrounded by its Wigner-Seitz cell (i.e., the unit cell made by the perpendicular bisectors of the bonds connecting the lattice sites)~\cite{owicki,dan1,dan2,dan3}. The Wigner-Seitz cell is further idealized by a circle of radius approximately half the inclusion separation (cf.\ Fig.~\ref{fig:wigner}), and the equilibrium equations are solved assuming {\em revolution symmetry}, with boundary conditions at $r=r_0$ and $r=R$. When applied to a hexagonal lattice of inclusions, this approximation is quite good, as it consists in neglecting high Fourier harmonics of order $6$, $12$, etc. In a gas of inclusions, it amounts to considering that the first neighbors effectively screen the other inclusions. \subsection{Dilation--tilt-difference induced interactions in an array of inclusions} Inclusions with arbitrary shapes will in general excite all of the four distortion modes considered in this work. However, since we have seen that both the equilibrium equations and the boundary conditions are pairwise decoupled, one can study separately, and simply add, the effects of the coupled dilation and tilt-difference modes and the effects of the coupled shape and tilt modes.
\subsubsection{Zero dilation--tilt-difference coupling} We focus on the dilation ($u$) and tilt-difference (${\bf\widehat m}$) modes and, to start with, we neglect their coupling: \begin{equation} c=0\,. \end{equation} Let us consider an array of inclusions, and assume, as previously discussed, a perfect revolution symmetry in the Wigner-Seitz cell surrounding an inclusion: \begin{equation}\label{eq:revup} u=u(r) \quad{\rm and}\quad {\bf\widehat m}=\alpha(r)\,{\bf e}_{r}\,. \end{equation} Under these conditions, the most general solution of the equilibrium equations~(\ref{uaa}-\ref{uab}) takes the form \begin{eqnarray} \label{2xiu} u(r)&=&\left[ A_1\,\,{\rm K}_0\!\left({r\over\xi_u}\right)+ A_2\,\,{\rm I}_0\!\left({r\over\xi_u}\right) \right]\times\sqrt{t'\over B}\,,\\ \label{2xia} \alpha(r)&=&\left[ A_3\,\,{\rm K}_1\!\left({r\over\xi_\alpha}\right)+ A_4\,\,{\rm I}_1\!\left({r\over\xi_\alpha}\right) \right]\,, \end{eqnarray} in which the I's and the K's are modified Bessel functions and \begin{eqnarray} \xi_u&=&\sqrt{\frac{\lambda}{B}}\,,\\ \xi_\alpha&=&\sqrt{\frac{K'_1}{t'}}\,, \end{eqnarray} are two characteristic lengths comparable with the membrane thickness, except close to an $L_\beta$ tilted phase where $t'$ might be small, or close to the main-chain transition where $B$ might be small. The constants $A_{\rm i}$, which are real and dimensionless, are determined from the boundary conditions: \begin{eqnarray} \label{bd} u|_{r_0}&=&u_0\,,\\ \alpha|_{r_0}&=&\alpha_0\,,\\ \left.\dot u\right|_R&=&0\,,\\ \label{bf} \left.\alpha\right|_R&=&0\,, \end{eqnarray} with a dot indicating differentiation with respect to $r$. The quantities $u_0$ and $\alpha_0$ are the boundary dilation and tilt-difference, respectively. The last two conditions are required by symmetry on the Wigner-Seitz circle.
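Since the $u$ and $\alpha$ modes decouple for $c=0$, the constants follow from two independent $2\times2$ linear systems, one per mode. A minimal pure-Python sketch (working in rescaled units where $\sqrt{t'/B}=1$; the Bessel helpers, built from the power series of ${\rm I}_n$ and the integral representation of ${\rm K}_n$, are an assumption of this illustration, not part of the paper):

```python
import math

def bessel_i(n, x):
    """Modified Bessel I_n(x) from its power series (adequate for x < ~15)."""
    return sum((x / 2.0) ** (2 * k + n) / (math.factorial(k) * math.factorial(k + n))
               for k in range(40))

def bessel_k(n, x):
    """Modified Bessel K_n(x) = int_0^inf exp(-x cosh t) cosh(n t) dt (midpoint rule)."""
    dt = 1e-3
    return sum(math.exp(-x * math.cosh(t)) * math.cosh(n * t) * dt
               for t in (i * dt + dt / 2 for i in range(30000)))

def solve_profiles(u0, alpha0, r0, R, xi_u, xi_a):
    """Constants of the K_0/I_0 and K_1/I_1 solutions from the boundary
    conditions u(r0)=u0, du/dr(R)=0, alpha(r0)=alpha0, alpha(R)=0.
    Uses K0' = -K1 and I0' = I1 for the Neumann condition at R.
    """
    I0 = lambda x: bessel_i(0, x); I1 = lambda x: bessel_i(1, x)
    K0 = lambda x: bessel_k(0, x); K1 = lambda x: bessel_k(1, x)
    # dilation: A1 K0(r0/xi_u) + A2 I0(r0/xi_u) = u0 ; -A1 K1(R/xi_u) + A2 I1(R/xi_u) = 0
    a11, a12 = K0(r0 / xi_u), I0(r0 / xi_u)
    a21, a22 = -K1(R / xi_u), I1(R / xi_u)
    det = a11 * a22 - a12 * a21
    A1, A2 = u0 * a22 / det, -u0 * a21 / det
    # tilt-difference: A3 K1(r0/xi_a) + A4 I1(r0/xi_a) = alpha0 ; alpha(R) = 0
    c11, c12 = K1(r0 / xi_a), I1(r0 / xi_a)
    c21, c22 = K1(R / xi_a), I1(R / xi_a)
    det2 = c11 * c22 - c12 * c21
    A3, A4 = alpha0 * c22 / det2, -alpha0 * c21 / det2
    u = lambda r: A1 * K0(r / xi_u) + A2 * I0(r / xi_u)
    alpha = lambda r: A3 * K1(r / xi_a) + A4 * I1(r / xi_a)
    return u, alpha
```

For instance, `solve_profiles(1.0, 0.5, 3.0, 6.0, 2.0, 1.0)` returns profiles that satisfy the four boundary conditions, from which the boundary slopes entering the interaction energy can be evaluated numerically.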
Figure~\ref{fig:mem.c0} shows a typical solution for an isolated inclusion ($R\!\to\!\infty$) and Fig.~\ref{fig:proch0} shows a typical solution corresponding to an array of interacting inclusions. These pictures sketch the membrane structure: the solid line represents the membrane shape, i.e., the sum of the equilibrium monolayer thickness $a$ and the thickness excess $u$. The dashed curve represents the amplitude of the tilt-difference angle $\alpha$. For the sake of clarity the distortions have been amplified in the following way: the boundary angle $\alpha_0$, the equilibrium monolayer thickness $a$, and the boundary thickness excess $u_0$ are all normalized to $1$. \begin{figure} \resizebox{0.5\textwidth}{!} {\hspace{40pt}\includegraphics{fig.mem_c0.eps}\hspace{40pt}} \caption{ Sketch of the membrane structure around an isolated inclusion (distortions are amplified, see text). The inclusion radius is $r_0=3\,\xi_\alpha(\simeq60\,{\rm\AA})$, $\xi_u/\xi_\alpha=2$ and $c=0$.} \label{fig:mem.c0} \end{figure} \begin{figure} \resizebox{0.5\textwidth}{!} {\hspace{40pt}\includegraphics{fig.proch0.eps}\hspace{40pt}} \caption{ Membrane structure between two inclusions in the array (amplified distortions). Parameters are as in Fig.~\ref{fig:mem.c0}.} \label{fig:proch0} \end{figure} Assuming revolution symmetry, the general distortion energy~(\ref{eq:energie.up}) within the Wigner-Seitz cell takes the form \begin{equation}\label{eq:energie.up.radial} {\cal H}_{u\widehat m}= \pi\left[\lambda\,r\,u\,\dot u+c\,r\,u\,\alpha+K'_1\,r\,\alpha\,\dot\alpha +K'_1\,\alpha^2\right]_{r_0}^R\,, \end{equation} in which several terms vanish due to the boundary conditions. After eliminating constant terms, ${\cal H}_{u\widehat m}$ reduces to \begin{equation}\label{eq:energie.up2} {\cal H}_{u\widehat m}= {\cal H}_{u}+{\cal H}_{\widehat m}= -\pi r_0\left(\lambda\,u_0\,\dot u|_{r_0} +K'_1\,\alpha_0\,\dot\alpha|_{r_0}\right)\,.
\end{equation} This interaction, which in principle depends on a large number of parameters ($r_0$, $R$, $u_0$, $\alpha_0$, $B$, $\lambda$, $t'$ and $K'_1$) has the following scaling property: \begin{eqnarray} \frac{{\cal H}_{u\widehat m}}{\pi B\,r_0\,\xi_\alpha u_0^2} &=&\overline{\cal H}_{u\widehat m}\left( x^2,s,\frac{r_0}{\xi_\alpha},\frac{R}{\xi_\alpha} \right)\,,\label{eq:nor}\\ s&=&\frac{\xi_u}{\xi_\alpha}\,,\\ x&=&\frac{\alpha_0}{u_0\sqrt{B/t'}}\,, \end{eqnarray} which advantageously reduces the effective number of parameters. At short inclusion separations, ${\cal H}_{\widehat m}$ diverges as $(R-r_0)^{-1}$ and ${\cal H}_{u}$ goes to a negative constant. At large separations, both relax exponentially. Figure~\ref{fig:nrj.c0} shows a typical situation where an energy minimum appears, resulting from the superposition of a dilation-induced attraction, that dominates at large distances, and a tilt-difference--induced repulsion, that dominates at short distances. This situation manifests itself for large values of $\xi_u$, for which the dilation mode has the longest range, and for small values of the boundary tilt-difference $\alpha_0$, for which the repulsion is weak. \begin{figure} \resizebox{0.5\textwidth}{!} {\hspace{40pt}\includegraphics{fig.nrj_c0.eps}\hspace{40pt}} \caption{ Normalized interaction energy per inclusion $\overline{\cal H}_{u\widehat m}$ {\em vs.}~the inclusion separation $R$. The curves correspond to $r_0=3\,\xi_\alpha(\simeq60\,{\rm\AA})$, $s=2$, $x=1$ and $c=0$. The normalized energies $\overline{\cal H}_{u}$ and $\overline{\cal H}_{\widehat m}$ correspond to the attractive dilation and repulsive tilt-difference contributions, respectively.} \label{fig:nrj.c0} \end{figure} Let us estimate the magnitude of the interaction energy, which is given by the normalization factor $\pi B\,r_0\,\xi_\alpha u_0^2$ in~(\ref{eq:nor}).
We choose for the tilt-difference coherence length a fixed microscopic value $\xi_\alpha\simeq20\,{\rm\AA}$ and we let, for instance, $0.1<s<10$. This assumption is based on the fact that close to the main-chain transition $\xi_u$ should exhibit some degree of pretransitional divergence. For the inclusion, we assume a typical protein size $r_0=3\,\xi_\alpha(\simeq60\,{\rm\AA})$ and a thickness perturbation $u_0=0.2\,\xi_\alpha(\simeq4\,{\rm\AA})$. With the estimated values of the material constants given in Sec.~\ref{sec:oom}, we obtain $\pi B\,r_0\,\xi_\alpha u_0^2\simeq(10/s^2)k_{\rm B}T$. In the energy graphs depicted in Fig.~\ref{fig:nrj.c0}, the values $x\!=\!1$ and $s\!=\!2$ correspond to an inclusion boundary tilt-difference angle $\alpha_0=(x/s)(u_0/\xi_\alpha) \sqrt{\lambda/t'}\simeq6^\circ$. The depth of the energy minimum is $3\times\pi B\,r_0\,\xi_\alpha u_0^2\simeq7\,k_{\rm B}T$. For such a well, we expect that the array of inclusions will crystallize, the distance between the boundaries of the particles being then $2(R-r_0)\simeq2\times0.8\,\xi_\alpha\simeq35\,{\rm\AA}$~\cite{par_surface}. If we consider the inclusion radius $r_0$ as fixed, the interaction potential as a function of $R$ depends only on the parameters $x$ and $s$, as can be seen in~(\ref{eq:nor}). We have plotted in Fig.~\ref{fig:diaph.c0} the phase diagram, in the $(x,s)$ plane, for a collection of identical inclusions. Distinction is made between a disordered (D) gaseous state and a crystal (K) phase. The criterion for the latter is the existence of an energy minimum with a depth larger than $k_{\rm B}T$. \begin{figure} \resizebox{0.5\textwidth}{!} {\hspace{40pt}\includegraphics{fig.c0.eps}\hspace{40pt}} \caption{ Phase diagram for a membrane with $\xi_\alpha\simeq20\,{\rm\AA}$ containing dilation-tilt-difference inducing inclusions with radius $r_0=3\,\xi_\alpha(\simeq60\,{\rm\AA})$. The coupling $c$ is neglected. (D) disordered phase.
(K) crystal phase, as determined from the existence of an energy minimum deeper than $k_{\rm B}T$.} \label{fig:diaph.c0} \end{figure} \subsubsection{Nonzero dilation--tilt-difference coupling} In order to study the effect of the dilation--tilt-difference coupling, we now assume \begin{equation} c\ne0\,,\quad{\rm and}\quad\xi_u=\xi_\alpha\equiv\xi\,. \end{equation} The latter condition is a simplification, which is reasonable far from any membrane phase transition (with $\xi$ of the order of the membrane thickness). Let us define the coupling's characteristic length as \begin{equation} \ell=\frac{c}{2\sqrt{Bt'}}\,. \end{equation} We assume $\ell<\xi$, otherwise the membrane undergoes the microscopic ``ripple'' instability already mentioned~\cite{euro}. Under the revolution symmetry conditions~(\ref{eq:revup}), the most general solution of the equilibrium equations~(\ref{uaa}-\ref{uab}) is given by the real part of \begin{eqnarray} \label{1xiu} u(r)&=&\left[ \tens{A}_1\,\,{\rm K}_0\!\left({\rm e}^{{\rm i}\phi}\,{r\over\xi}\right)+ \tens{A}_2\,\,{\rm I}_0\!\left({\rm e}^{{\rm i}\phi}\,{r\over\xi}\right) \right]\times\sqrt{t'\over B}\,,\\ \label{1xia} \alpha(r)&=&\left[ \tens{A}_1\,\,{\rm K}_1\!\left({\rm e}^{{\rm i}\phi}\,{r\over\xi}\right)- \tens{A}_2\,\,{\rm I}_1\!\left({\rm e}^{{\rm i}\phi}\,{r\over\xi}\right) \right]\times{\rm i}\,, \end{eqnarray} where ${\rm i}=\sqrt{-1}$, $\tens{A}_1$ and $\tens{A}_2$ are two dimensionless {\em complex} constants, and \begin{equation} \sin\phi={\ell\over\xi}\,. \end{equation} The constants $\tens{A}_1$ and $\tens{A}_2$ are determined from the boundary conditions~(\ref{bd}--\ref{bf}) as previously. Figures~\ref{fig:mem.cne0} and~\ref{fig:proch1} show a typical solution for an isolated inclusion ($R\!\to\!\infty$) and a typical solution for interacting inclusions, respectively. The same conventions as for Figs.~\ref{fig:mem.c0} and~\ref{fig:proch0} are used.
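The origin of the damped undulations can be read off the large-$r$ asymptotics ${\rm K}_n({\rm e}^{{\rm i}\phi}r/\xi)\sim\exp(-{\rm e}^{{\rm i}\phi}r/\xi)$: the real part decays on the length $\xi/\cos\phi$ and oscillates with period $2\pi\xi/\sin\phi$. A minimal numerical sketch of these scales (a rough asymptotic estimate, not an exact solution; names and numbers are illustrative):

```python
import math

def damped_oscillation_scales(ell, xi):
    """Large-r scales of the damped oscillations of u(r) and alpha(r).

    ell = c / (2 sqrt(B t')) is the coupling length (must satisfy ell < xi,
    otherwise the ripple instability sets in), and sin(phi) = ell / xi.
    Returns (phi, decay length, oscillation period).
    """
    phi = math.asin(ell / xi)
    return phi, xi / math.cos(phi), 2.0 * math.pi * xi / math.sin(phi)

# phi = 0.75 * pi/2, the value used in the figures, gives a decay length
# ~ 2.6 xi: the oscillatory tail survives long enough to produce minima.
phi, decay, period = damped_oscillation_scales(math.sin(0.75 * math.pi / 2), 1.0)
```

As $\phi$ grows towards $\pi/2$ (i.e., $\ell\to\xi$), the decay length diverges while the oscillation period stays finite, consistent with the appearance of several minima near the instability.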
\begin{figure} \resizebox{0.5\textwidth}{!} {\hspace{40pt}\includegraphics{fig.mem_cne0.eps}\hspace{40pt}} \caption{ Sketch of the membrane structure around an isolated inclusion (amplified distortions). The inclusion radius is $r_0=3\,\xi\,(\simeq60\,{\rm\AA})$, $x\!=\!-2$, and $\phi\!=\!0.75\times\pi/2$.} \label{fig:mem.cne0} \end{figure} \begin{figure} \resizebox{0.5\textwidth}{!} {\hspace{40pt}\includegraphics{fig.proch1.eps}\hspace{40pt}} \caption{ Membrane structure between two inclusions in the lattice (amplified distortions). Parameters are as in Fig.~\ref{fig:mem.cne0}.} \label{fig:proch1} \end{figure} The distortion energy within the Wigner-Seitz cell is given by exactly the same formula~(\ref{eq:energie.up2}) as previously. Indeed, although the second term of~(\ref{eq:energie.up.radial}) no longer vanishes at $r\!=\!r_0$, it is constant and can be omitted in the interaction. Note however that this energy can no longer be split into pure dilation and tilt-difference contributions. Again, ${\cal H}_{u\widehat m}$ has the following scaling property: \begin{equation} \frac{{\cal H}_{u\widehat m}}{\pi B\,r_0\,\xi\,u_0^2}= \overline{\cal H}_{u\widehat m}\left( x,\phi,\frac{r_0}{\xi},\frac{R}{\xi} \right)\,.\label{eq:nor2} \end{equation} Depending on the values of $r_0/\xi$, $x$ and $\phi$, the interaction energy is either monotonically repulsive or exhibits one {\em or several} marked minima. Figure~\ref{fig:nrj.cne0} shows a typical situation in which two minima appear. This phenomenon manifests itself for values of the dilation--tilt-difference coupling corresponding to $\phi\!>\!0.6\times\pi/2$, where, because of the vicinity of the dilation--tilt-difference ``ripple'' instability, the membrane has a tendency to develop damped undulations.
\begin{figure} \resizebox{0.5\textwidth}{!} {\hspace{40pt}\includegraphics{fig.nrj_cne0.eps}\hspace{40pt}} \caption{ Normalized interaction energy per inclusion $\overline{\cal H}_{u\widehat m}$ {\em vs.}~the inclusion separation $R$. The curves correspond to $r_0=3\,\xi\,(\simeq60\,{\rm\AA})$, $x\!=\!-3$, and $\phi\!=\!0.75\times\pi/2$.} \label{fig:nrj.cne0} \end{figure} The magnitude of the interaction energy is now given by the normalization factor $\pi B\,r_0\,\xi\,u_0^2$. With typically $\xi\simeq20\,{\rm\AA}$, and again $r_0=3\,\xi\,(\simeq60\,{\rm\AA})$, $u_0=0.2\,\xi\,(\simeq4\,{\rm\AA})$, we obtain, with the values of the material constants estimated in Sec.~\ref{sec:oom}, the typical energy scale $\pi B\,r_0\,\xi\,u_0^2\simeq10\,k_{\rm B}T$. The boundary tilt-difference angle is then given by $\alpha_0=x\,u_0\,\sqrt{B/t'}\simeq x\,u_0/\xi\simeq x\times10^\circ$ (for $\lambda\simeq t'$ as consistently assumed in Sec.~\ref{sec:oom}). Therefore, in Fig.~\ref{fig:nrj.cne0}, the depths of the two minima are $\simeq25\,k_{\rm B}T$ and $\simeq3\,k_{\rm B}T$, respectively. Two distinct crystals might therefore appear: one with a distance between the boundaries of the particles of $2(R-r_0)\simeq2\times1.2\,\xi\simeq50\,{\rm\AA}$, the other with a much larger separation $2(R-r_0)\simeq2\times4.5\,\xi\simeq180\,{\rm\AA}$~\cite{par_surface}. If we consider the inclusion radius $r_0$ as fixed, the interaction potential as a function of $R$ depends only on the parameters $x$ and $\phi$, as can be seen from~(\ref{eq:nor2}). Figure~\ref{fig:Ks} shows a phase diagram in the $(x,\phi)$ plane for a collection of identical inclusions. The symbol (D) indicates a disordered gaseous state, (K) a crystal phase, and (K$_n$) the possibility of $n$ distinct crystalline phases with different separation distances. Again, the criterion for any crystal phase is an energy minimum deeper than $k_{\rm B}T$.
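The energy scale $\pi B\,r_0\,\xi\,u_0^2$ quoted above can be checked by direct arithmetic with the estimates of Sec.~\ref{sec:oom}. A sketch in cgs units (all numerical values are the illustrative estimates of the text, not measurements):

```python
import math

# Estimates from Sec. "Orders of magnitude" (cgs units).
kB_T = 1.38e-16 * 300          # erg, room temperature
a = 20e-8                      # monolayer thickness (cm)
k_stretch = 100.0              # area-stretching coefficient (erg/cm^2)
B = k_stretch / (2 * a) ** 2   # dilation modulus, ~ 6e14 erg/cm^4

xi = 20e-8                     # characteristic length ~ membrane thickness (cm)
r0 = 3 * xi                    # inclusion radius (cm)
u0 = 0.2 * xi                  # boundary thickness excess (cm)

E = math.pi * B * r0 * xi * u0 ** 2
print(E / kB_T)                # ~ 9, i.e. ~ 10 kT as quoted in the text

# Boundary tilt-difference angle for |x| = 1 (taking lambda ~ t'):
alpha0_deg = math.degrees(u0 / xi)   # ~ 11 degrees, i.e. ~ 10 degrees
```

The same arithmetic with $s\neq1$ reproduces the $(10/s^2)\,k_{\rm B}T$ scale used in the zero-coupling discussion.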
\begin{figure} \resizebox{0.5\textwidth}{!} {\hspace{30pt}\includegraphics{fig.Ks.eps}\hspace{30pt}} \caption{ Phase diagram for a membrane with $\xi\simeq20\,{\rm\AA}$ containing dilation-tilt-difference inducing inclusions with radius $r_0=3\,\xi\,(\simeq60\,{\rm\AA})$. (D) disordered phase. (K) crystal phase, (K$_n$) region where $n$ distinct crystalline phases with different particle separations are possible.} \label{fig:Ks} \end{figure} An interesting feature of the phase diagram of Fig.~\ref{fig:Ks} is the asymmetry with respect to the change $x\!\leftrightarrow\!-x$ introduced by the dilation--tilt-difference coupling: crystal phases are more likely to occur for $x\!<\!0$, i.e., for a {\em thick-convex} inclusion ($u_0\!>\!0$ and $\alpha_0\!<\!0$) or for a {\em thin-concave} inclusion ($u_0\!<\!0$ and $\alpha_0\!>\!0$). This symmetry-breaking follows from the sign $c\!>\!0$ of the dilation--tilt-difference coupling, which we have assumed in order to favor the situation depicted in Fig.~\ref{fig:coupling}. \begin{figure} \resizebox{0.5\textwidth}{!} {\hspace{30pt}\includegraphics{cristoph.eps}\hspace{30pt}} \caption{ Sketch of the inclusions that tend to form $2D$ crystals ($x\pp0$). The dashed lines show the monolayers of the unperturbed membrane. (Left) Thick-convex inclusion ($u_0\!>\!0$ and $\alpha_0\!<\!0$). (Right) Thin-concave inclusion ($u_0\!<\!0$ and $\alpha_0\!>\!0$).} \label{fig:crfyl} \end{figure} This asymmetry can be explained by simple arguments. First, let us recall that if a non-zero boundary tilt-difference $\alpha_0$ is present, the interaction is always repulsive at short distances. Indeed, the tilt-difference must go from $\alpha_0$ to $-\alpha_0$ from one inclusion to the other. Conversely, the distortion associated with the membrane dilation is attractive since the thickness mismatch is the same on identical inclusions.
Two situations are therefore possible: if the dilation relaxes on a longer range than the tilt-difference, a crystal phase can occur since there is a long-range attraction followed by a short-range repulsion, whereas conversely the repulsion simply dominates. We therefore have to understand how the coupling affects the relative range of the dilation and tilt-difference distortions. To simplify, let us rewrite schematically the interaction energy~(\ref{umc}) as \begin{equation} {\cal H}_{u\widehat m}\sim u^2+\xi^2\dot u^2+\dot u\alpha+\alpha^2+\xi^2\dot\alpha^2\,. \end{equation} Let us first assume $u_0,\alpha_0\!>\!0$, which corresponds to $x\!>\!0$. To relax the positive dilation $u_0$, the membrane will set $\dot u\pp0$. The term $\dot u\alpha$ being then negative, it reduces the cost of making a gradient of $u$. Therefore the $u$ distortion will relax on a distance that is somewhat {\em shorter} than $\xi$: the attractive dilation tail retracts (see Fig.~\ref{fig:KDplus}). From the point of view of the tilt-difference, since $\dot u\pp0$, the coupling $\dot u\alpha$ makes it as if the potential were of the type $(\alpha-\alpha_{\rm m})^2$ with $\alpha_{\rm m}\pg0$. Thus, on the distance $\xi$, the tilt-difference relaxes only up to $\alpha_{\rm m}$; it therefore needs a {\em longer} distance to reach zero: the repulsive tilt-difference tail expands (see Fig.~\ref{fig:KDplus}). Then, for $x\pg0$, a disordered phase is favored, since the repulsive tail dominates at large distances. \begin{figure} \resizebox{0.5\textwidth}{!} {\hspace{30pt}\includegraphics{fig.KDplus.eps}\hspace{30pt}} \caption{ Dilation and tilt-difference distortions around an isolated inclusion with $x\pg0$. The inclusion radius is $r_0=3\,\xi(\simeq60\,{\rm\AA})$ and the coupling corresponds to $\phi\!=\!0.2\times\pi/2$.
Due to the latter, the tilt-difference tail expands (dashed line) and the dilation tail retracts (solid line), thereby favoring repulsion, i.e., a disordered phase.} \label{fig:KDplus} \end{figure} \begin{figure} \resizebox{0.5\textwidth}{!} {\hspace{30pt}\includegraphics{fig.KDmoins.eps}\hspace{30pt}} \caption{ Same as Fig.~\ref{fig:KDplus} but for $x\pp0$. Now the tilt-difference tail retracts and the dilation tail expands, thereby favoring attraction, i.e., a crystal phase.} \label{fig:KDmoins} \end{figure} Still with $u_0\pg0$, let us now assume $\alpha_0\pp0$, corresponding to $x\pp0$. Now the term $\dot u\alpha$ is positive: building a gradient of $u$ is more costly and therefore the attractive dilation tail expands (see Fig.~\ref{fig:KDmoins}). The tilt-difference, however, still experiences a potential of the type $(\alpha-\alpha_{\rm m})^2$ with $\alpha_{\rm m}\pg0$, but it now starts from a negative value $\alpha_0$. On a distance $\xi$ it would reach the equilibrium value $\alpha_{\rm m}\pg0$; it therefore reaches zero on a distance shorter than $\xi$: the repulsive tilt-difference tail retracts (cf. Fig.~\ref{fig:KDmoins}). Thus, for $x\pp0$, a crystal phase is favored, since the attractive tail dominates at large distances (and then the repulsive one at short distances). \subsection{Shape-tilt induced interactions in an array of inclusions} We now focus on the shape ($h$) and tilt (${\bf m}$) distortion modes induced by the inclusions. Assuming again revolution symmetry in the Wigner-Seitz cell, \begin{equation} h=h(r) \quad{\rm and}\quad {\bf m}=\theta(r)\,{\bf e}_{r}\,, \end{equation} the most general solution of the equilibrium equations~(\ref{hta}-\ref{htb}) takes the form \begin{eqnarray} h&=& (ar^2\!+\!b)\log{r}\!+\!cr^2\!+\!d\!+\! A\,{\rm I}_0(qr)\!+\! B\,{\rm K}_0(qr)\,,\\ \theta&=&-4\frac{L^2a}{\mu\,r}\!+\!
\frac{qA}{\mu}\,{\rm I}_1(qr)- \frac{qB}{\mu}\,{\rm K}_1(qr)\,, \end{eqnarray} with \begin{eqnarray} \mu&=&\frac{\gamma}{\kappa}\,,\\ L&=&\frac{\gamma}{\sqrt{t\kappa}}\,,\\ \xi_\theta&=&\sqrt{\frac{K_1}{t}}\,,\\ q^{-1}&=&\sqrt{\xi_\theta^2-L^2}\,. \end{eqnarray} We assume $L<\xi_\theta$, i.e., $\gamma^2<K_1\kappa$, otherwise the membrane undergoes the ripple instability of the $P_{\beta'}$ phase, in which undulations and periodic tilt distortions occur~\cite{ripple1,ripple2}. We also assume $L>0$, i.e., $\gamma>0$, since we expect the molecules to tilt in such a way as to {\em relax\/} the splay of the molecules in a curved membrane. \begin{figure} \resizebox{0.5\textwidth}{!} {\hspace{10pt}\includegraphics{fig.relax.eps}\hspace{10pt}} \caption{ Sketch of the membrane mid-surface shape ($h$) between conical inclusions. The tilt of the lipid molecules at the inclusion boundary is $\beta-\Theta$.} \label{fig:relax} \end{figure} To simplify, let us assume strictly $K_1=\kappa$. We then have $L=\mu\,\xi_\theta$. Hence $0<\mu<1$ is now the only parameter controlling the shape-tilt coupling. The six real unknowns $a$, $b$, $c$, $d$, $A$, and $B$ are determined, according to the general boundary conditions~(\ref{bch}-\ref{bct}) for an inclusion with average cone angle $\Theta$, by \begin{eqnarray} h|_{r_0}&=&h_0\,,\\ \dot h|_{r_0}&=&\beta\,,\\ \label{trade} \theta|_{r_0}&=&\beta-\Theta\,,\\ h|_{R}&=&h_0\,,\\ \dot h|_{R}&=&0\,,\\ \theta|_{R}&=&0\,. \end{eqnarray} \begin{figure} \resizebox{0.5\textwidth}{!} {\hspace{30pt}\includegraphics{angles0.eps}\hspace{30pt}} \caption{Boundary tilt $\theta(r_0)$ and boundary membrane inclination $\beta$ as a function of the distance $R$ between the inclusions.
The inclusion radius is $r_0=3\,\xi_\theta(\simeq60\,{\rm\AA})$ and the tilt-shape coupling $\gamma$ is zero.} \label{fig:angles} \end{figure} \begin{figure} \resizebox{0.5\textwidth}{!} {\hspace{30pt}\includegraphics{anlges0.9.eps}\hspace{30pt}} \caption{Same as Fig.~\ref{fig:angles}, but in the presence of a strong shape-tilt coupling corresponding to $\mu=0.9$.} \label{fig:angles2} \end{figure} The latter three conditions are required by symmetry on the Wigner-Seitz circle, at which the origin of the membrane height has been chosen. After solving this system, the total membrane free energy has to be minimized with respect to the free parameters $h_0$ and $\beta$. Assuming revolution symmetry, the general distortion energy~(\ref{calH}) within the Wigner-Seitz cell takes the form \begin{eqnarray} {\cal H}_{hm}=\pi&&\!\!\left[ \kappa(r\,{\ddot h}+{\dot h}){\dot h} -\kappa\,h(r\,{\dot{\ddot h}}+{\ddot h}-{\dot h}/r)\right.\nonumber\\ &&-\left.\gamma(r\,{\dot\theta}+\theta){\dot h} +\gamma\,h(r\,{\ddot\theta}+{\dot\theta}-\theta/r)\right.\nonumber\\ &&-\left.\gamma(r\,{\ddot h}+{\dot h})\theta+ K_1\,r\,\theta\,{\dot\theta}+K_1\,\theta^2 \right]_{r_0}^R\,, \end{eqnarray} where all the terms taken at $r\!=\!R$ vanish due to the boundary conditions. The interaction has the following scaling property: \begin{equation}\label{hhmnor} \frac{{\cal H}_{hm}}{\pi\kappa\,\Theta^2}=\overline{\cal H}_{hm}\!\left( \mu,\frac{r_0}{\xi_\theta},\frac{R}{\xi_\theta}\right)\,. \end{equation} The results are the following. Even in the absence of a shape-tilt coupling, there is a trade-off between the shape and the tilt modes, which is due to the boundary condition~(\ref{trade}). The membrane tends to develop a tilt close to the inclusions in order to flatten its shape. The typical solution for the membrane shape resembles that sketched in Fig.~\ref{fig:relax}. The boundary tilt relaxes on a distance $\simeq4\,q^{-1}$, typically of order a few $\xi_\theta$'s unless $\mu$ is close to $1$.
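With $K_1=\kappa$ and $L=\mu\,\xi_\theta$, the tilt relaxation length reduces to $q^{-1}=\xi_\theta\sqrt{1-\mu^2}$, which shrinks as the shape-tilt coupling grows. A minimal numerical sketch (illustrative values, assuming $K_1=\kappa$ as in the text):

```python
import math

def tilt_relaxation_length(xi_theta, mu):
    """q^{-1} = sqrt(xi_theta^2 - L^2) with L = mu * xi_theta (K1 = kappa),
    i.e. q^{-1} = xi_theta * sqrt(1 - mu^2)."""
    assert 0.0 <= mu < 1.0, "mu >= 1 would trigger the P_beta' ripple instability"
    return xi_theta * math.sqrt(1.0 - mu ** 2)

xi_theta = 20e-8  # cm, of order the membrane thickness
# weak coupling: the boundary tilt relaxes over ~ 4 q^{-1}, a few xi_theta
weak = 4 * tilt_relaxation_length(xi_theta, 0.1)
# strong coupling (mu = 0.9): q^{-1} significantly shorter than xi_theta
strong = tilt_relaxation_length(xi_theta, 0.9)
```

For $\mu=0.9$ one gets $q^{-1}\simeq0.44\,\xi_\theta$, consistent with the rapid tilt relaxation discussed below for strong coupling.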
The boundary tilt is a function of the separation $R$ between the inclusions. When $R\gg\xi_\theta$, the amplitude of the boundary tilt $\theta(r_0)$ is negligible: the membrane curvature, which is small, only exerts a weak torque on the tilt. Conversely, when the inclusions are close to contact, the boundary tilt is a finite fraction of the inclusion's average cone angle $\Theta$. For $K_1=\kappa$, as we have assumed, this fraction is exactly $1/2$ for $\mu=0$ (Fig.~\ref{fig:angles}). \begin{figure} \resizebox{0.5\textwidth}{!} {\hspace{30pt}\includegraphics{fig.nrjtilt.eps}\hspace{30pt}} \caption{Normalized interaction energy per inclusion $\overline{\cal H}_{hm}$ {\em vs.}~inclusion separation $R$. The inclusion radius is $r_0=3\,\xi_\theta(\simeq60\,{\rm\AA})$ and the tilt-shape coupling $\gamma=0$. The dashed curve corresponds to the case where the tilt is not allowed.} \label{fig:nrjtilt} \end{figure} In the presence of a strong shape-tilt coupling ($\mu$ close to $1$), the tilt relaxes on a distance $q^{-1}$ significantly shorter than $\xi_\theta$, and the boundary tilt $\theta(r_0)$, which gets somewhat larger at contact, actually relaxes more rapidly with the distance between the inclusions (Fig.~\ref{fig:angles2}). Thus, except when the inclusions are very close to one another, the tilt is rapidly negligible. The reason is that the tilt that develops in order to flatten the membrane is {\em always\/} costly from the point of view of the $-\gamma\,\nabla^2 h\,(\nabla\cdot{\bf m})$ coupling (when $\gamma>0$). Hence the coupling does not favor an expansion of the tilt distortion, while in the preceding section the dilation--tilt-difference coupling did favor an expansion of the tilt-difference for $x<0$, which produced spectacular effects. Let us estimate the magnitude of the interaction energy, which is given by the normalization factor $\pi\kappa\,\Theta^2$ in~(\ref{hhmnor}).
With $\xi_\theta\simeq20\,{\rm\AA}$, a typical protein size $r_0=3\,\xi_\theta(\simeq60\,{\rm\AA})$ and $\Theta\simeq10^\circ$, we obtain $\pi\kappa\,\Theta^2\simeq2.5\,k_{\rm B}T$. Figure~\ref{fig:nrjtilt} shows the interaction energy per inclusion in the case of zero shape-tilt coupling. The interaction is always repulsive; it diverges at small separations as $(R-r_0)^{-1}$ and tends asymptotically towards the exact form \begin{equation} {\cal H}_{h}=2\pi\kappa\,\Theta^2\,\frac{r_0^2}{R^2-r_0^2}\,, \end{equation} which can be calculated analytically by completely neglecting the tilt. As is apparent in Fig.~\ref{fig:nrjtilt}, the tilt relaxes some of the interaction energy at short inclusions separations. For $\mu\to1$, we find that ${\cal H}_{hm}\to{\cal H}_{h}$ at all separations. The effect of the tilt is therefore negligible. \section{Conclusions} We have developed an elastic model for membranes that describes at the same level large- and short-scale distortions of the bilayer. Strictly speaking, such a continuum theory at a molecular scale should not be expected to give more than semi-quantitative results (see Sec.~\ref{sec:trunc}). Nevertheless our hope is that the theory captures the qualitative trends of the competition between the different elastic variables. Using a systematic expansion in the monolayers profiles and tilts, we have shown that the average membrane shape ($h$) is coupled to the average molecular tilt (${\bf m}$), both being decoupled (at lowest order) from the membrane dilation ($u$) and the difference in the monolayers tilts (${\bf\widehat m}$), which are coupled together. We have used this model to study the contribution of the membrane elasticity to the short- and long-range interactions among inclusions. 
Because the boundary conditions at a membrane inclusion are decoupled in the same way as the elastic variables, the interaction energy can be calculated simply as the sum of a dilation--tilt-difference contribution ($u$--${\bf\widehat m}$) and a shape-tilt contribution ($h$--${\bf m}$). Membrane inclusions generally have a slightly convex or concave hydrophobic core of thickness different from that of the bilayer. Such inclusions will excite the coupled dilation--tilt-difference ($u$--${\bf\widehat m}$) mode. The thickness mismatch creates an energetic dilation corona around the inclusions and yields an {\em attraction} between like inclusions: no extra distortion occurs when the coronas overlap since the boundary dilations match. The tilt-difference, however, yields a {\em repulsion} between like inclusions: going from $\alpha_0$ to $-\alpha_0$, it develops a strong gradient when the coronas overlap. Inclusions producing no tilt-difference aggregate, while inclusions producing a nonzero tilt-difference either repel one another or favor $2D$ crystals. The latter situation arises for small tilt-differences, or when the dilation corona extends further than the tilt-difference corona. When the dilation--tilt-difference coupling is large, the distortions in the coronas exhibit damped oscillations. This effect occurs because of the vicinity of a ``ripple'' instability in which both the membrane dilation and tilt-difference become unstable. The inter-particle potential then develops several minima, which implies the possible coexistence of different crystals of inclusions having different lattice spacings. The latter can be significantly larger than the inclusions size. The inclusions most likely to form $2D$ crystals are those with either a {\em long-convex} or a {\em short-concave} hydrophobic core, i.e., those disfavored from the point of view of the $c\,\nabla u\cdot{\bf\widehat m}$ coupling. 
This is because, the gradient of $u$ being more costly, the dilation corona extends (favoring ``long-range'' attraction), while at the same time the tilt-difference corona shrinks (making the repulsion occur only at smaller separations). Conversely, short-convex and long-concave inclusions have a dominant repulsion and should form disordered phases. Membrane inclusions generally also have a slightly conical shape. Hence they excite the coupled shape-tilt ($h$--${\bf m}$) mode. In first approximation, the conical shape constrains the membrane to depart with a contact angle $\Theta$ relative to the inclusion axis. The energy stored in the curvature of the membrane yields a repulsion between like inclusions in an array that diverges at short distances as $R^{-1}$ and falls off as $R^{-2}$. This is a many-body effect, since the interaction between a pair of inclusions falls off more rapidly, as $R^{-4}$~\cite{goulian}. In the latter case, the inclusions axes rotate away from one another in order to minimize the curvature energy of the membrane. In an array of inclusions this rotation is zero by symmetry. If we allow for a tilt of the lipids, the membrane can depart with a smaller contact angle $\beta$. In order to remain parallel to the inclusions boundaries, the lipids then tilt by $\beta-\Theta$. When the inclusions are far apart, the tilt is completely negligible since the torque exerted by the membrane curvature on the tilt is weak. Conversely, when the inclusions are separated by only a few times the membrane thickness, the tilt becomes a finite fraction of $\Theta$. The interaction energy is then reduced; however, there is no qualitative change in the interaction potential. As for the shape-tilt coupling $-\gamma\,\nabla^2 h\,(\nabla\cdot{\bf m})$, it shortens the relaxation length of the tilt and simply reduces its effects. 
The reason is that the tilt that sets in to flatten the membrane is always costly from the point of view of the coupling, for the expected positive sign of $\gamma$. Hence the tilt does not propagate far away from the inclusions, even in the vicinity of the ripple instability (where both the shape and tilt modes become unstable). \vspace{12pt} {\bf Acknowledgments} \vspace{12pt} Useful discussions with A. Ajdari, P. Pincus and L. Peliti are gratefully acknowledged. This work was partially supported by the NSF Grants No. MRL DMR 91-23048 and 96-24091, and by the CNRS.
\section{Introduction} The Hill-Valley Evolutionary Algorithm (HillVallEA) \cite{Maree18, Maree18b} is a real-valued multi-modal evolutionary algorithm that automatically detects niches in the search space based on the Hill-Valley test. This test states that two solutions belong to the same niche (valley) when there is no hill in between. To establish this, a number of intermediate solutions are sampled and evaluated. Hill-Valley Clustering (HVC) is an iterative approach to efficiently cluster an entire population of solutions into niches. The resulting clusters are used to initialize a population-based core search algorithm; in this case, AMaLGaM-Univariate \cite{bosman08} is used. \section{Adaptations in HillVallEA19} Some small adaptations have been made to HillVallEA18 \cite{Maree18b} to further enhance its performance. Source code of HillVallEA is available at \url{github.com/scmaree/HillVallEA}. \subsection{Adaptive initial population sampling} In HillVallEA, after all local optimizers have terminated (and there is still budget remaining), a new initial population is sampled. When this population is sampled uniformly at random, previously explored basins get re-explored every time a new population is initialized. To reduce this computational overhead, we store, for each solution of the previous initial population, the cluster to which it belonged. New solutions are then sampled based on rejection sampling: a sample is rejected with probability $P = 0.9$ if its nearest $d+1$ solutions of the previous initial population all belonged to the same cluster. In that case, it is very likely that this solution would end up exploring the same basin as that cluster of the previous generation. Additionally, better spreading the initially sampled population has been shown to improve the performance of evolutionary algorithms \cite{wessing15}. 
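The rejection rule above can be sketched in a few lines. The helper below is a hypothetical illustration (the names and the uniform unit-box domain are our assumptions, not the actual HillVallEA implementation):

```python
import random

def sample_initial_population(n, d, prev_points, prev_clusters, p_reject=0.9):
    """Sketch of the adaptive rejection sampling described above (hypothetical names).

    prev_points   : solutions of the previous initial population (d-dimensional tuples)
    prev_clusters : index of the cluster each previous solution belonged to
    A candidate is rejected with probability p_reject when its d+1 nearest
    previous solutions all belonged to the same cluster, since it would then
    very likely re-explore an already explored basin.
    """
    population = []
    while len(population) < n:
        x = tuple(random.uniform(0.0, 1.0) for _ in range(d))  # unit box assumed
        # indices of the d+1 nearest solutions of the previous initial population
        nearest = sorted(range(len(prev_points)),
                         key=lambda i: sum((a - b) ** 2
                                           for a, b in zip(x, prev_points[i])))[:d + 1]
        if len({prev_clusters[i] for i in nearest}) == 1 and random.random() < p_reject:
            continue  # rejected: this basin was very likely explored before
        population.append(x)
    return population
```

Since $P=0.9$ rather than $1$, an already explored basin still receives an occasional sample, so niches hidden inside it are not permanently missed.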
The minimax sampling method and Latin hypercube sampling \cite{wessing15} become very slow as the problem dimensionality or the sample size increases. We therefore use a greedy scattered subset selection method \cite{rodrigues14}: to construct a population of $N$ solutions, we sample $2N$ solutions using the strategy above and use greedy scattered subset selection to reduce this set to $N$ solutions. \subsection{Forced acceptance of low-fitness solutions in the Hill-Valley test} Previously in hill-valley clustering, all nearest-better solutions were tested with the hill-valley test. Especially for problems with a large number of low-fitness local optima, such as the Shubert function (problems 6 and 8 in the benchmark), this results in many resources spent on obtaining accurate low-fitness clusters that are later discarded. Therefore, during hill-valley clustering, the hill-valley test is only performed on solutions that belong to the fittest half of the selection, or on solution pairs that are more than the expected edge length (EEL) \cite{Maree18} apart (i.e., $N_t > 1$). \subsection{Recalibration of the recursion scheme} The population sizing scheme within HillVallEA is parameterized as $\xi = (N, N^{\mbox{inc}}, N_C, N_C^{\mbox{inc}})$, with initial population size $N$, population size increment $N^{\mbox{inc}}$, cluster size $N_C$, and cluster size increment $N_C^{\mbox{inc}}$. Previously, these parameters were set to $\xi = (2^8d, 2, 1, 1.2)$, where $d$ is the problem dimensionality. As the initial population sampling has been adapted, setting $\xi = (2^6, 2, 0.8, 1.1)$ was found to enhance the performance of HillVallEA, that is, using both a smaller population and smaller clusters initially, and increasing the cluster size at a slower pace. \section{Experiment Setup} We evaluate the performance of HillVallEA on the test problems in the CEC2013 niching benchmark suite \cite{CEC2013NichingCompetition}. 
The benchmark consists of 20 problems, as shown in Table~\ref{tab:2dbenchmarks}, to be solved within a predefined budget in terms of function evaluations. For each of the benchmark problems, the locations of the optima and the corresponding fitness values are known; however, these are only used to measure performance, not during optimization. All benchmark functions are defined on a bounded domain. All experiments are repeated 50 times, and the resulting performance measures are averaged over all repetitions. Note that no problem-specific parameter tuning has been performed. \subsection{Performance Metrics} \label{sec:measures} Two performance measures are used, from which three scoring scenarios are computed, according to the competition guidelines. Let $\mathcal{O}$ be the set of presumed optima obtained by an algorithm, and let $g$ be the number of distinct global optima within $\mathcal{O}$. Finally, let $G_p$ be the number of global optima for problem $p$. Then, we define the peak ratio (PR) as $\mbox{PR} = g / G_p$ and the success rate (SR) as $\mbox{SR} = g / |\mathcal{O}|$. Both measures should be maximized, with maximum $1$. From these two measures, three scoring scenarios are constructed. \begin{enumerate} \item[S1] The first scenario is simply the PR. \item[S2] The second scenario is known as the static $F_1$ measure, defined as $F_1 = \frac{2 \cdot PR \cdot SR}{PR + SR}$. \item[S3] The third and final scenario is the dynamic $F_1$ (dyn$F_1$), which is the area under the curve of the $F_1$ over time (in number of function evaluations). For this, sort the solutions in $\mathcal{O}$ based on the number of function evaluations $f_i$ before a solution $o_i\in\mathcal{O}$ was considered a global optimum, with the first-obtained solution first. Let $\mathcal{O}_{[1:t]}$ with $t\in[1,|\mathcal{O}|]$ be the subset of $\mathcal{O}$ containing the first $t$ solutions and let $B_p$ be the function evaluation budget for problem $p$. 
Then we can write the dyn$F_1$ as $$\mbox{dyn}F_1 = \left(\frac{B_p - f_{|\mathcal{O}|}}{B_p}\right) F_1(\mathcal{O}) + \sum_{i = 2}^{|\mathcal{O}|} \left( \frac{f_i - f_{i-1} }{B_p}\right) F_1(\mathcal{O}_{[1:i-1]}).$$ \end{enumerate} According to the competition guidelines, a solution is marked as a distinct global optimum for five different accuracy levels $\varepsilon = \{10^{-1}, 10^{-2}, 10^{-3}, 10^{-4}, 10^{-5}\}$. For each problem, the scenario score is then the average of the scores over the five accuracy levels. \subsection{Algorithms} \label{sec:algorithms} We compare the performance of HillVallEA19 to that of all algorithms that previously participated in the niching competitions held at the GECCO and CEC conferences in 2016, 2017 and 2018. The raw solution sets are used: the obtained solutions are re-evaluated and the scores under the different scenarios are computed given the definitions stated above. Note that the algorithms are not re-run. The included algorithms are NEA2+ \cite{preuss12}, RLSIS \cite{wessing15}, RS-CMSA \cite{ahrari17}, HillVallEA18 \cite{Maree18b}, SDE-Ga (no known reference, developed by Jun-ichi Kushida) and, finally, the method that we discuss in this paper, HillVallEA19. \section{Results and discussion} Tables~\ref{tab:s1}, \ref{tab:s2} and \ref{tab:s3} show the score per problem and per algorithm under scenarios S1, S2 and S3, respectively. Table~\ref{tab:ranks} shows the overall scores and the corresponding ranks. The SR of HillVallEA is 1 in all cases, and almost always for RS-CMSA, which shows that the similar post-processing step that both algorithms perform is successful at removing duplicates and local optima. Problems 1--5 and 10 are fully solved by all methods in all runs for all accuracy levels (except for two runs of NEA2+), suggesting that these problems are too simple. 
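For reference, the scoring measures of Sec.~\ref{sec:measures} can be sketched directly; the following is our illustration (function names are hypothetical, not from the competition toolkit):

```python
def peak_ratio(g, n_global):
    """PR = number of distinct global optima found / total number of global optima."""
    return g / n_global

def success_rate(g, n_solutions):
    """SR = number of distinct global optima found / size of the solution set."""
    return g / n_solutions

def static_f1(pr, sr):
    """Static F1: harmonic mean of PR and SR."""
    return 2.0 * pr * sr / (pr + sr)

def dynamic_f1(eval_counts, f1_prefix, budget):
    """dynF1: area under the F1-over-evaluations curve (the formula above).

    eval_counts : f_i, evaluations spent when the i-th optimum was found (sorted)
    f1_prefix   : F1 of the first i solutions, for i = 1 .. len(eval_counts)
    """
    total = (budget - eval_counts[-1]) / budget * f1_prefix[-1]
    for i in range(1, len(eval_counts)):
        total += (eval_counts[i] - eval_counts[i - 1]) / budget * f1_prefix[i - 1]
    return total
```

For a single optimum found after 100 of 1000 evaluations with $F_1=1$, the remaining $90\%$ of the budget contributes, giving dyn$F_1=0.9$.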
No method can fully obtain the final two Weierstrass peaks of Composition Function 4 in any dimension (problems 15, 17, 19 and 20), which may indicate that this is a needle-in-a-haystack problem. The same holds for problem 18, for which the obtained peak ratio was 0.667 for all methods (except NEA2+). In particular, SDE-Ga obtains a maximum peak ratio of 0.667 for problems 13, 14, 16, and 18, being unable to solve the Weierstrass function. To conclude, HillVallEA19 was shown to be an improvement over HillVallEA18 under all scenarios. RS-CMSA comes directly after in all three scenarios. SDE-Ga performs well under S1 and S2, especially for problems 8 and 9, but its performance deteriorates for the higher-dimensional problems. SDE-Ga obtains solutions very late in the convergence process, resulting in a very low S3 score. Overall, HillVallEA19 performs best under all scenarios. \begin{table} \begin{center} \caption{Niching benchmark suite from the CEC2013 special session on multi-modal optimization \cite{CEC2013NichingCompetition}. 
For each problem the function name, problem dimensionality $d$, number of global optima $\#gopt$, and local optima $\#lopt$ and budget in terms of function evaluations are given.} \label{tab:2dbenchmarks} \small \begin{tabular}{cccccc} \toprule \# & Function name & $d$ & \#gopt & \#lopt & budget \\ \toprule 1 & Five-Uneven-Peak Trap & 1 & 2 & 3 & 50K \\ 2 & Equal Maxima & 1 & 5 & 0 & 50K \\ 3 & Uneven Decreasing Maxima & 1 & 1 & 4 & 50K \\ 4 & Himmelblau & 2 & 4 & 0 & 50K\\ 5 & Six-Hump Camel Back & 2 & 2 & 5 & 50K \\ 6 & Shubert & 2 & 18 & many & 200K \\ 7 & Vincent & 2 & 36 & 0 & 200K \\ 8 & Shubert & 3 & 81 & many & 400K \\ 9 & Vincent & 3 & 216 & 0 & 400K \\ 10 & Modified Rastrigin & 2 & 12 & 0 & 200K \\ 11 & Composition Function 1 & 2 & 6 & many &200K \\ 12 & Composition Function 2 & 2 & 8 &many & 200K \\ 13 & Composition Function 3 & 2 & 6 &many & 200K \\ 14 & Composition Function 3 & 3 & 6 &many & 400K \\ 15 & Composition Function 4 & 3 & 8 &many & 400K \\ 16 & Composition Function 3 & 5 & 6 &many & 400K \\ 17 & Composition Function 4 & 5 & 8 &many & 400K \\ 18 & Composition Function 3 & 10 & 6 & many &400K \\ 19 & Composition Function 4 & 10 & 8 &many & 400K \\ 20 & Composition Function 4 & 20 & 8 &many & 400K \\ \bottomrule \end{tabular} \end{center} \end{table} \begin{table} \begin{center} \caption{Scores obtained under Scenario S1 (peak ratio) for each of the algorithms per problem $p$. Higher is better, 1 is the maximum score. Scores are averaged over 50 runs and five accuracy levels. Average (avg.) score computed over all 20 problems. 
} \label{tab:s1} \smaller \begin{tabular}{c|cccc|cc} \toprule & & & & & \multicolumn{2}{|c}{HillVallEA} \\ p & NEA2+ & RLSIS & RS-CMSA & SDE-Ga & HillVallEA18 & HillVallEA19 \\ \toprule 1 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 \\ 2 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 \\ 3 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 \\ 4 & 0.998 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 \\ 5 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 \\ 6 & 0.997 & 0.872 & 0.999 & 1.000 & 1.000 & 1.000 \\ 7 & 0.840 & 0.920 & 0.997 & 1.000 & 1.000 & 1.000 \\ 8 & 0.568 & 0.189 & 0.871 & 1.000 & 0.920 & 0.975 \\ 9 & 0.552 & 0.584 & 0.730 & 0.992 & 0.945 & 0.972 \\ 10 & 0.997 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 \\ 11 & 0.955 & 1.000 & 0.997 & 0.733 & 1.000 & 1.000 \\ 12 & 0.796 & 0.950 & 0.948 & 0.800 & 1.000 & 1.000 \\ 13 & 0.947 & 0.938 & 0.997 & 0.667 & 1.000 & 1.000 \\ 14 & 0.813 & 0.799 & 0.810 & 0.667 & 0.917 & 0.923 \\ 15 & 0.721 & 0.720 & 0.748 & 0.750 & 0.750 & 0.750 \\ 16 & 0.683 & 0.675 & 0.667 & 0.667 & 0.687 & 0.723 \\ 17 & 0.723 & 0.738 & 0.703 & 0.703 & 0.750 & 0.750 \\ 18 & 0.650 & 0.667 & 0.667 & 0.667 & 0.667 & 0.667 \\ 19 & 0.505 & 0.515 & 0.502 & 0.555 & 0.585 & 0.593 \\ 20 & 0.398 & 0.422 & 0.482 & 0.460 & 0.482 & 0.480 \\ \bottomrule avg & 0.807 & 0.800 & 0.856 & 0.833 & 0.885 & 0.892 \\ \bottomrule \end{tabular} \end{center} \end{table} \begin{table} \begin{center} \caption{Scores obtained under Scenario S2 (static $F_1$) for each of the algorithms per problem $p$. Higher is better, 1 is the maximum score. Scores are averaged over 50 runs and five accuracy levels. Average (avg.) score computed over all 20 problems. 
} \label{tab:s2} \smaller \begin{tabular}{c|cccc|cc} \toprule & & & & & \multicolumn{2}{|c}{HillVallEA} \\ \# & NEA2+ & RLSIS & RS-CMSA & SDE-Ga & HillVallEA18 & HillVallEA19 \\ \toprule 1 & 1.000 & 0.993 & 0.996 & 1.000 & 1.000 & 1.000 \\ 2 & 1.000 & 0.993 & 1.000 & 1.000 & 1.000 & 1.000 \\ 3 & 1.000 & 0.993 & 0.987 & 1.000 & 1.000 & 1.000 \\ 4 & 0.960 & 0.978 & 1.000 & 1.000 & 1.000 & 1.000 \\ 5 & 0.947 & 0.949 & 1.000 & 1.000 & 1.000 & 1.000 \\ 6 & 0.997 & 0.924 & 0.999 & 1.000 & 1.000 & 1.000 \\ 7 & 0.614 & 0.947 & 0.999 & 1.000 & 1.000 & 1.000 \\ 8 & 0.723 & 0.315 & 0.931 & 1.000 & 0.958 & 0.987 \\ 9 & 0.646 & 0.733 & 0.844 & 0.996 & 0.972 & 0.986 \\ 10 & 0.997 & 0.988 & 1.000 & 1.000 & 1.000 & 1.000 \\ 11 & 0.971 & 0.992 & 0.998 & 0.733 & 1.000 & 1.000 \\ 12 & 0.881 & 0.967 & 0.972 & 0.800 & 1.000 & 1.000 \\ 13 & 0.966 & 0.941 & 0.998 & 0.723 & 1.000 & 1.000 \\ 14 & 0.894 & 0.865 & 0.893 & 0.799 & 0.953 & 0.958 \\ 15 & 0.835 & 0.831 & 0.855 & 0.857 & 0.857 & 0.857 \\ 16 & 0.811 & 0.795 & 0.800 & 0.800 & 0.813 & 0.837 \\ 17 & 0.838 & 0.843 & 0.823 & 0.824 & 0.857 & 0.857 \\ 18 & 0.787 & 0.794 & 0.800 & 0.800 & 0.800 & 0.800 \\ 19 & 0.668 & 0.676 & 0.668 & 0.712 & 0.735 & 0.741 \\ 20 & 0.563 & 0.590 & 0.650 & 0.627 & 0.650 & 0.647 \\ \bottomrule avg & 0.855 & 0.855 & 0.911 & 0.884 & 0.930 & 0.934 \\ \bottomrule \end{tabular} \end{center} \end{table} \begin{table} \begin{center} \caption{Scores obtained under Scenario S3 (dynamic $F_1$) for each of the algorithms per problem $p$. Higher is better, 1 is the maximum score. Scores are averaged over 50 runs and five accuracy levels. Average (avg.) score computed over all 20 problems. 
} \label{tab:s3} \smaller \begin{tabular}{c|cccc|cc} \toprule & & & & & \multicolumn{2}{|c}{HillVallEA} \\ \# & NEA2+ & RLSIS & RS-CMSA & SDE-Ga & HillVallEA18 & HillVallEA19 \\ \toprule 1 & 0.982 & 0.987 & 0.911 & 0.572 & 0.992 & 0.995 \\ 2 & 0.917 & 0.983 & 0.959 & 0.626 & 0.987 & 0.989 \\ 3 & 0.895 & 0.982 & 0.949 & 0.456 & 0.992 & 0.994 \\ 4 & 0.959 & 0.975 & 0.932 & 0.528 & 0.973 & 0.977 \\ 5 & 0.971 & 0.979 & 0.944 & 0.416 & 0.982 & 0.983 \\ 6 & 0.917 & 0.699 & 0.933 & 0.525 & 0.951 & 0.966 \\ 7 & 0.659 & 0.855 & 0.928 & 0.413 & 0.960 & 0.966 \\ 8 & 0.464 & 0.209 & 0.715 & 0.324 & 0.750 & 0.805 \\ 9 & 0.550 & 0.573 & 0.654 & 0.157 & 0.791 & 0.818 \\ 10 & 0.988 & 0.983 & 0.984 & 0.539 & 0.979 & 0.982 \\ 11 & 0.961 & 0.980 & 0.967 & 0.254 & 0.983 & 0.983 \\ 12 & 0.829 & 0.859 & 0.909 & 0.303 & 0.958 & 0.963 \\ 13 & 0.932 & 0.897 & 0.923 & 0.372 & 0.957 & 0.964 \\ 14 & 0.862 & 0.834 & 0.832 & 0.434 & 0.867 & 0.882 \\ 15 & 0.806 & 0.782 & 0.785 & 0.544 & 0.824 & 0.836 \\ 16 & 0.799 & 0.788 & 0.777 & 0.449 & 0.776 & 0.793 \\ 17 & 0.803 & 0.779 & 0.688 & 0.419 & 0.787 & 0.816 \\ 18 & 0.719 & 0.748 & 0.730 & 0.160 & 0.721 & 0.763 \\ 19 & 0.624 & 0.627 & 0.560 & 0.035 & 0.634 & 0.656 \\ 20 & 0.491 & 0.496 & 0.502 & 0.003 & 0.514 & 0.524 \\ \bottomrule avg & 0.806 & 0.801 & 0.829 & 0.376 & 0.869 & 0.883 \\ \bottomrule \end{tabular} \end{center} \end{table} \begin{table} \begin{center} \caption{Algorithm ranks based on the three scenarios. } \label{tab:ranks} \smaller \begin{tabular}{c|cc|cc|cc|c} \toprule Algorithm & S1 & rank & S2 & rank & S3 & rank & average rank \\ \toprule RLSIS & 0.800 & 6& 0.855 &5 & 0.801 & 5& 5.3\\ NEA2+ & 0.807 & 5& 0.855 & 6 & 0.806 &4 & 5\\ SDE-Ga & 0.833 & 4& 0.884 & 4 & 0.376 & 6& 4.7\\ RS-CMSA & 0.856 &3 & 0.911 & 3& 0.829 & 3& 3\\ HillVallEA18 & 0.885 &2 & 0.930 & 2 & 0.869 &2 & 2\\ HillVallEA19 & 0.892& 1& 0.934 & 1 & 0.883 & 1& 1\\ \bottomrule \end{tabular} \end{center} \end{table} \bibliographystyle{acm}
\section{Introduction} In the last decade the connection between cosmology and particle physics has become more and more interesting. We have several sectors in which a comparison between the theoretical high energy physics processes of the very early universe and their observable traces is possible. These research frontiers include the exploration of the anisotropies in the cosmic microwave background (CMB) and of the large scale structure in the present matter distribution (LSS); wide and high resolution experiments designed to gain observational insight into these topics are currently in preparation \cite{CMB,LSS}. Let us approach the subject of this work. Very recently the emissions from deep type Ia supernovae have become observable with unprecedented high resolution \cite{IA}. The surprising news arising from these observations suggests that we are currently living in a universe that is {\it accelerating} its expansion. As is well known, this could be the observable effect of a vacuum energy density comparable with the critical one. A possible explanation could be the existence of a cosmological constant, much smaller than the characteristic energy scales of quantum gravity effects but non-zero. This {\it ad\ hoc} possibility is unattractive to theorists, who tend to believe that some unknown process set the cosmological constant to zero in the very early universe \cite{CC}. Recently the idea that the vacuum energy density could be mimicked by a dynamical scalar field $\phi$, named quintessence, has been considered with more and more interest, since it provides several nice features in the CMB power spectrum and LSS. More precisely, this occurs in the two most popular candidates proposed so far for this field, a cosine potential for the Pseudo Nambu Goldstone Boson \cite{CDF} and an exponential potential \cite{FJ}. 
However, since most of the inflationary phenomenology is based on the dynamics of a scalar field, the inflaton, it is tempting to relate quintessence to the inflaton. A first proposal from the scientific community in this sense was made very recently \cite{PV}. The authors suggested that quintessence and the inflaton are the same field seen at different times. They proposed a detailed model, providing an appropriate set of values for the physical constants in order to realize this fascinating possibility. The key feature is the occurrence of a kinetic-energy-dominated phase that connects inflation and the radiation era. Although the model has to be further investigated, especially concerning the spectrum of gravitational waves and the possible production of topological defects, it is interesting to look at the perturbations dynamics induced by its basic features. A quintessence model with similar characteristics is being considered by other authors \cite{ZWS}, with emphasis on arguments regarding the viability of general quintessence models. In this work we concentrate on the power spectra of CMB polarization and temperature anisotropies as well as on the LSS perturbations produced in this scenario. In section II we recall the background dynamics in quintessential inflation and in section III we describe its perturbations; finally, in section IV we numerically compute and discuss the signature of this scenario on the CMB polarization and temperature power spectra and on the LSS. \section{quintessential inflation} In this section we briefly review the basic features of quintessential inflation; the reader is advised to look at the original work in \cite{PV} for a complete exposition and references. The model involves a minimally coupled scalar field with potential $$ V(\phi )=\lambda\cdot (M^{4}+\phi^{4})\ \ {\rm for}\ \ \phi < 0\ , $$ \begin{equation} V(\phi )={\lambda M^{8}\over (M^{4}+\phi^{4})} \ \ {\rm for}\ \ \phi\ge 0\ . 
\label{pvp} \end{equation} Inflation occurs for $\phi\ll -M$; the radiation and matter dominated eras for $\phi\gg M$. In order to produce particles and perturbations just as in chaotic inflationary models \cite{MFB}, and to have about $70\%$ of the critical energy today in quintessence, the following physical constants, in $\hbar =1,c=1$ units, have been chosen: \begin{equation} \lambda= 10^{-14}\ \ ,\ \ M=8\cdot 10^{5}\ {\rm GeV}\ . \label{pc} \end{equation} The cosmic trajectory is assumed to begin at $\phi\ll -M_{PL}$, with an era of chaotic inflation. At $\phi\simeq -M_{PL}$ a kinetic energy dominated era begins; in this epoch the (kinetic) quintessence energy density decreases very rapidly, $\rho_{\phi}\sim a^{-6}$, where $a$ is the scale factor. As in the ordinary scenario, particles are produced in the curved space-time from an initial quantum vacuum state \cite{MFB}. In order to have a workable model, it is necessary that the kinetic era ends when the total field energy is negligible with respect to the radiation one; if this is the case, the radiation era begins and $\phi$ starts its slow rolling toward the present state. It is interesting to note that this model gives a reheating temperature curiously comparable with the supposed electroweak symmetry breaking scale, $T_{rh}\simeq 10^{3}N_{\psi}^{3/4}$GeV, where $N_{\psi}$ is the number of scalar fields involved in the process. This is the general phenomenology imposed by the constants (\ref{pc}). At the present time, in the matter dominated era, the inflaton is totally equivalent to a quintessence field, rolling on the potential $V(\phi )\simeq \lambda M^{8}/\phi^{4}$ with a simple time evolution, $\phi\simeq 2^{1/3} \lambda^{1/6}M^{4/3}H_{0}^{-1/3}/\sqrt{1+z}$. 
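As an aside, the piecewise potential (\ref{pvp}) with the constants (\ref{pc}) can be evaluated directly; the sketch below (ours, not from \cite{PV}) checks that the two branches match at $\phi=0$ and that the quintessence tail decays as $\lambda M^{8}/\phi^{4}$:

```python
LAMBDA = 1e-14   # dimensionless coupling lambda of Eq. (2)
M = 8e5          # mass scale M in GeV

def V(phi):
    """Quintessential-inflation potential of Eq. (1); phi in GeV, V in GeV^4."""
    if phi < 0:
        return LAMBDA * (M**4 + phi**4)   # chaotic-inflation branch
    return LAMBDA * M**8 / (M**4 + phi**4)  # quintessence branch

# Both branches give V(0) = lambda * M^4, so the potential is continuous;
# for phi >> M it decays as lambda * M^8 / phi^4 (slow-roll quintessence tail),
# while for phi << -M it grows quartically, driving chaotic inflation.
```

The continuity at $\phi=0$ is what allows the single field to play both roles without a discontinuous jump in its energy density.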
\section{Perturbations} Perturbations in models with a dynamical scalar field together with the other ordinary matter and radiation particles require a generalization \cite{PB} of earlier works \cite{MB}; a complete treatment of this subject can be found in the cited works, and here we report only the issues relevant for the present problem. Even if the reheating temperature in the present scenario is much smaller than in chaotic inflation, radiation dominates well before nucleosynthesis. In the most simple view, the cosmic fluid can be thought of as composed of photons ($\gamma$), baryons ($b$), cold dark matter ($cdm$) and three families of massless neutrinos ($\nu$). As we briefly exposed in the previous section, Gaussian perturbations arise adiabatically from the inflaton dynamics at the end of inflation \cite{PV}. They involve matter and radiation as well as fluctuations $\delta\phi$ of the scalar field around its background value $\phi$. The initial conditions for the perturbations are posed at early conformal time $\tau =\int_{0}^{t}dt/a(t)$, when essentially all the perturbation wavenumbers $k$ relevant for structure formation are well outside the effective horizon, $k\tau\ll 1$. Adiabatic conditions are posed initially by requiring that no gauge invariant entropy perturbation difference exists between any pair of components \cite{PB}; in the conformal Newtonian gauge, the leading order early time behaviour for the scalar quantities evolving from adiabatic initial conditions is $$ \delta_{\gamma}={4\over 3}\delta_{b}= {4\over 3}\delta_{cdm}=\delta_{\nu}\propto constant\ , $$ $$ v_{\gamma}=v_{b}=v_{cdm}=v_{\nu}\propto k^{2}\tau\ \ , \ \ \sigma_{\nu}\propto k^{2}\tau^{2}\ , $$ \begin{equation} \delta\phi\propto \left({d\phi\over dt}\right)_{t=0} \cdot\tau^{2}\ , \label{ic} \end{equation} where $\delta,v,\sigma$ denote density, velocity and shear perturbations, respectively. Note that $\delta\phi$ is initially linked to the kinetic field energy. 
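The adiabaticity requirement quoted above can be illustrated with the standard relative entropy perturbation $S_{ab}=\delta_a/(1+w_a)-\delta_b/(1+w_b)$ between two fluids (a textbook definition, not spelled out in the text); the sketch below checks that the density contrasts of Eq.~(\ref{ic}) make it vanish for every pair of components:

```python
def adiabatic_densities(delta_gamma):
    """Leading-order adiabatic density contrasts of Eq. (ic):
    delta_gamma = (4/3) delta_b = (4/3) delta_cdm = delta_nu."""
    return {"gamma": delta_gamma, "nu": delta_gamma,
            "b": 0.75 * delta_gamma, "cdm": 0.75 * delta_gamma}

# equation-of-state parameters: w = 1/3 for relativistic species, w = 0 for dust
W = {"gamma": 1.0 / 3.0, "nu": 1.0 / 3.0, "b": 0.0, "cdm": 0.0}

def entropy_difference(deltas, a, b, w=W):
    """Relative entropy perturbation S_ab = delta_a/(1+w_a) - delta_b/(1+w_b);
    it vanishes for every pair of components under adiabatic conditions."""
    return deltas[a] / (1.0 + w[a]) - deltas[b] / (1.0 + w[b])
```

For radiation ($w=1/3$) the factor $1/(1+w)=3/4$ exactly compensates the $4/3$ between the radiation and matter contrasts, which is why the single number $\delta_\gamma$ fixes all components.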
A multipole expansion accounts for temperature as well as polarization perturbations of the Planckian black body spectrum, arising mainly from Thomson scattering at the energies relevant in the present problem; neutrinos are treated similarly, without the Thomson scattering terms \cite{MB}. From this initial regime, perturbations evolve according to the linearized Einstein and Boltzmann equations in a flat Friedmann Robertson Walker background; the latter involves quintessence, the scale factor and the unperturbed densities of all the fluid species, being driven by the Klein Gordon and unperturbed Einstein equations respectively (see \cite{PB} and references therein). \section{Results and discussion} We require that at present about $70\%$ of the critical energy density resides in quintessence. This energy comes essentially from the potential component, since $\phi$ is rolling very slowly, making the kinetic energy negligible, as we show below; the baryon abundance respects the nucleosynthesis constraint: \begin{equation} \Omega_{\phi}=.7\ \ ,\ \ \Omega_{b}=.05 \ \ ,\ \ \Omega_{cdm}=1-\Omega_{\phi}-\Omega_{b}\ . \label{o} \end{equation} Also we adopt $H_{0}=70$ km/s/Mpc, consistently with some present measurements \cite{HUBBLE}, and assume an initial power spectrum exactly scale-invariant. The requirement that the present amount of quintessence energy is $\Omega_{\phi}$ does not fix its dynamics completely; since it obeys the Klein Gordon equation, we need to specify its time derivative. At present this is very small, since it has been redshifted away during the expansion that occurred in the radiation and matter eras (assuming that it is not too large with respect to the potential energy at the beginning of the radiation dominated era). However, particularly in this scenario where the kinetic energy plays a fundamental role during the cosmic evolution, it is important to take the field time derivative into account. 
This is realized in the following way: after fixing $\Omega_{\phi}$, the code asks for the initial kinetic to potential energy ratio; the sign of the time derivative is then chosen toward the direction of lower potential, that is $d\phi /dt >0$ in this case. In all the cases analyzed, $\phi$ initially has an equal amount of kinetic and potential energy. We examine the imprint of this scenario on the main observational topics. We show our results regarding the CMB polarization, temperature and linear matter power spectrum in figures 1, 2 and 3, respectively. In these figures, solid curves having increasing amplitudes describe quintessence models having increasing values of $\Omega_{\phi}$. Polarization anisotropies in the CMB have not yet been measured; the existing upper limits are at the level of the measured temperature anisotropies, see \cite{W} and references therein for further details. On the other hand, polarization is expected to arise naturally as the result of the anisotropic nature of the Thomson scattering; thus polarization anisotropies arise mainly from the CMB acoustic oscillations occurring on sub-horizon scales at decoupling, that is, a degree in the sky or less. Figure 1 shows the power spectra of the polarization anisotropies in quintessential inflation (solid line). The thin dashed line represents an ordinary Cold Dark Matter model (CDM) in which the energy density associated with quintessence has been replaced with dark matter. The amplitude of the peaks increases and there is a global shift toward higher multipole indexes, or smaller scales. The first effect is due to the lack of matter at decoupling with respect to the CDM model: the universe at decoupling is mildly radiation dominated and this enhances the radiation perturbation amplitude (see \cite{PB} and references therein). 
The second is a projection effect of purely geometric origin \cite{HSS}: the comoving distance to the last scattering surface in quintessence models is larger than in CDM, thus shifting the coherent acoustic oscillations toward smaller angular scales. The same features occur in the COBE-normalized CMB temperature power spectra shown in figure 2. In this case a third effect arises from the integrated Sachs-Wolfe effect, due to the time evolution of matter perturbations along the photon path. Again this is due to the lack of matter in the quintessence model with respect to CDM. This effect operates mainly on super-horizon scales (low multipoles in the figure), which escape the cancellation produced by the oscillatory sub-horizon dynamics. We used linear perturbation theory to calculate the matter power spectra $P(k)$ plotted in figure 3. They are defined by $\langle \delta ( {\bf k}) {\delta}^{\ast} ( {\bf k'})\rangle = 4 \pi P(k) {\delta}_D ( {\bf k-k'})$, where ${\delta}_D$ is the Dirac delta function and $ \delta ( {\bf k})$ is the Fourier transform of the spatial matter density fluctuation field. The epoch of matter-radiation equality in quintessence scenarios is closer to the present than in CDM models, because of the lack of matter. The effective horizon scale at equality corresponds roughly to the location of the turnover in the spectra; this causes a shift toward larger scales, or smaller wavenumbers, and therefore subtracts power from the small-scale structure, as indicated by the current data from galaxy surveys \cite{RS}. \\ The dispersion of the density field is quantified by $\sigma_8 = .59$ for $\Omega_{\phi}=.8$, $\sigma_8 = .87$ for $\Omega_{\phi}=.7$ and $\sigma_8 = 1.08$ for $\Omega_{\phi}=.6$, to be compared with the standard CDM ($\Omega_{matter}=1$) prediction of $\sigma_8 = 1.63$. 
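For reference, $\sigma_8$ is the rms density fluctuation in spheres of radius $8\,h^{-1}$ Mpc, obtained from $P(k)$ through a top-hat window integral. The sketch below uses the standard Fourier convention $\sigma_R^2 = (2\pi^2)^{-1}\int dk\, k^2 P(k) W^2(kR)$ (the $4\pi$ normalization of $P(k)$ adopted above only changes the constant prefactor) and a hypothetical toy spectrum with an assumed turnover scale, not the spectra of figure 3:

```python
import math

def sigma_R(P, R=8.0, kmin=1e-4, kmax=100.0, n=20000):
    """rms fluctuation in spheres of radius R:
    sigma_R^2 = (1 / 2 pi^2) Int dk k^2 P(k) W(kR)^2,
    with W the Fourier-space top-hat window (standard convention)."""
    def W(x):
        return 3.0 * (math.sin(x) - x * math.cos(x)) / x**3
    lo, hi = math.log(kmin), math.log(kmax)
    dlk = (hi - lo) / n
    total, prev = 0.0, None
    for i in range(n + 1):               # trapezoidal rule on a log-spaced grid
        k = math.exp(lo + i * dlk)
        f = k**3 * P(k) * W(k * R)**2    # extra factor k since dk = k dln(k)
        if prev is not None:
            total += 0.5 * (prev + f) * dlk
        prev = f
    return math.sqrt(total / (2.0 * math.pi**2))

# toy spectrum: scale-invariant P ~ k at large scales, with a turnover at an
# assumed k_eq ~ 0.02 (hypothetical numbers chosen only for illustration)
P_toy = lambda k: k / (1.0 + (k / 0.02) ** 2) ** 2
s8 = sigma_R(P_toy)
```

Pushing the turnover toward smaller wavenumbers, as the reduced matter content does in the quintessence models, removes small-scale power and lowers $\sigma_R$ at fixed large-scale normalization.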
Before concluding, it is worth pointing out that most of the dynamics of the background and of the perturbations concerns the $\phi\ge 0$ side of the potential (\ref{pvp}). In other words, in the present analysis the distinctive features on the CMB and LSS come from the form of the potential in the radiation- and matter-dominated eras and from the assumption of initial adiabaticity. If the spectra observed in the future should agree with the ones computed here, we shall be able to state that the observed cosmology is consistent with quintessential inflation, without confirming it definitively. Surely this model has to be investigated further, and hopefully its predictions will be enriched by a deeper understanding of its phenomenology at the transition between inflation and the radiation era; ultimately, this scenario will be further constrained by the experimental enterprises of the next generation, beginning with the primordial gravitational wave spectrum. \acknowledgements We are grateful to Sabino Matarrese for his warm encouragement.
\section{Introduction} \labbel{intro} Conditions under which a product of topological spaces satisfies some local property have long been known in many particular instances. Results with a general flavor appeared in Preu\ss\ \cite[Section 5.3]{P} and in Hoffmann \cite[Theorem 2.2 and Remark 2.4(b)]{H}. Notice that the terminology used by the above authors sometimes differs from the one we shall use. The results from \cite{P, H} have been improved, together with significant examples and applications, in Brandhorst \cite{Br} and Brandhorst and Ern\'e \cite{BE}. We refer to \cite{BE} for historical remarks and examples; in particular, about how the definitions and the results generalize classical cases in special situations. Here we give a complete characterization of those spaces which are local relative to some class closed under finite products. We also deal with some classes which are not even closed under finite products. Let $\mathcal T$ be a class of topological spaces. Members of $\mathcal T$ will be called $\mathcal T$-spaces. For each class $\mathcal T$, three local notions are defined; see \cite{BE,H,P} and Hoshina \cite{Henc}. A topological space $X$ is a \emph{local $\mathcal T$-space} (resp., a \emph{basic $\mathcal T$-space}) if, for every point $x \in X$ and every neighborhood $U$ of $x$, there is a neighborhood (resp., an open neighborhood) $V$ of $x$ such that $V \subseteq U$ and $V \in \mathcal T$. We sometimes say that $X$ is $\mathcal T$-local, instead of saying that $X$ is a local $\mathcal T$-space, and similarly for $\mathcal T$-basic. Under the Axiom of Choice (AC) a space is $\mathcal T$-local if and only if every point of $X$ has a neighborhood base consisting of $\mathcal T$-subspaces. Here a \emph{$\mathcal T$-subspace} is a subspace which belongs to $\mathcal T$; similarly, a \emph{$\mathcal T$-neighborhood} of some point is a neighborhood of that point which belongs to $\mathcal T$. 
Again using AC, a space is $\mathcal T$-basic if and only if it has an open base consisting of $\mathcal T$-subspaces. However, we shall try to avoid the use of AC as much as possible; see Remark \ref{ac}. As a rougher notion, a \emph{local$_1$ $\mathcal T$-space} is a space such that each point has at least one neighborhood which is a $\mathcal T$-space. In many cases, especially assuming some separation axiom, localness and local$_1$ness coincide, and sometimes all three of the above local notions coincide, but sometimes not. See Lemma \ref{triv}(d), where another local property frequently equivalent to $\mathcal T$-localness shall be mentioned. Characterizations of basic and local $\mathcal T$-spaces appear in \cite{H,P}, in the case when $\mathcal T$ is closed with respect both to products and to images of surjective continuous functions. A characterization under weaker conditions appears in \cite[Theorem 2.4]{BE}. In the quoted theorem $\mathcal T$ has to be closed under finite products, and a further condition has to be satisfied. We prove here a more general statement which applies to \emph{every} class $\mathcal T$ which is closed under finite products. The proof is perhaps simpler. Then a characterization is given for certain classes $\mathcal T$ which are not even closed under finite products. Examples include local sequential compactness and local Lindel\"ofness. Moreover, we show that the assumption that $\mathcal T$ is closed with respect to images of surjective continuous functions can be considerably weakened. We reformulate many results in such a way that, seemingly, the Axiom of Choice is not needed. No separation axiom is used, either, unless explicitly stated otherwise. All products under consideration are endowed with the Tychonoff topology, the coarsest topology making all the projections continuous. Most results would change dramatically when considering the box topology, or intermediate topologies. 
The next lemma is trivial; we shall use it (especially clause (f)) in many examples, generally without explicit mention. $T_3$ means \emph{regular and Hausdorff}, while we do not assume that \emph{regular} implies Hausdorff. \begin{lemma} \labbel{triv} Let $\mathcal T$ be a class of topological spaces. \begin{enumerate}[(a)] \item $\mathcal T$-basic implies $\mathcal T$-local and $\mathcal T$-local implies $\mathcal T$-local$_1$. \item A $\mathcal T$-space is $\mathcal T$-local$_1$. Hence, for every class $\mathcal T$, $\mathcal T$-local and $\mathcal T$-local$_1$ are equivalent if and only if every $\mathcal T$-space is $\mathcal T$-local. \item If $\mathcal T$ is open-hereditary, then all three local properties coincide; in particular, any $\mathcal T$-space is $\mathcal T$-basic and $\mathcal T$-local. \item If $\mathcal T$ is closed-hereditary, then, in a regular topological space, $\mathcal T$-localness and $\mathcal T$-local$_1$ness are equivalent, and are also equivalent to the following. \begin{enumerate}[(1)] \item[{\rm (L)}] {\rm For every point $x$ and every neighborhood $U$ of $x$, there is an open neighborhood $V$ of $x$ such that $ \overline{V} \subseteq U $ and $ \overline{V} \in \mathcal T$.} \end{enumerate} \item In particular, if $\mathcal T$ is closed-hereditary, then a regular $\mathcal T$-space is $\mathcal T$-local. \item In both (d) and (e) above, the assumption that $\mathcal T$ is closed-hereditary can be weakened to $\mathcal T$ being hereditary with respect to regular closed subsets. \end{enumerate} \end{lemma} \section{A weak assumption} \labbel{prel} We shall use almost everywhere the following assumption (W). \begin{enumerate} \item[(W)] $\mathcal T$ is a class of topological spaces which satisfies the following properties. \begin{enumerate} \item[(W1)] $\mathcal T$ is closed under homeomorphic images. 
\item[(W2)] Whenever $A$, $B$ are arbitrary topological spaces, $a \in A$, $b \in B$ and $ T \subseteq A \times B$ is a $\mathcal T$-neighborhood of $(a, b)$, then there is $S \subseteq A$ which is a $\mathcal T$-neighborhood of $a$. \item[(W3)] Whenever $A$, $B$ are nonempty topological spaces, $B'$ is a nonempty open subset of $B$, $ A \times B' \subseteq T \subseteq A \times B$ and $T \in \mathcal T$, then $A \in \mathcal T$. \end{enumerate} \end{enumerate} Notice that, in particular, (W3) implies the following weaker property. \begin{enumerate}[(WWW)] \item [(W3$'$)] If $A \times B \in \mathcal T$, for some nonempty $A$, $B$, then $A \in \mathcal T$. \end{enumerate} In particular (provided that $\mathcal T$ contains at least one nonempty space), (W1) and (W3$'$) imply that every one-element space is a $\mathcal T$-space. If $\mathcal T$ is closed under images of continuous surjections, then (W) is verified. Indeed, under that assumption, (W1) is trivial and, as far as (W2) and (W3) are concerned, it is enough to consider $ \pi_1(T)$, where $\pi_1$ is the canonical projection onto the first factor. In another direction, (W) is verified also in case $\mathcal T$ is hereditary and closed under homeomorphic images. In this case it is enough to consider $T \cap (A \times \{ b \}) $, where, to get (W3), we pick any $b \in B'$. If we are working in the context of $T_1$ spaces (i.e., all spaces in (W2) and (W3) are assumed to be $T_1$) then it is enough to assume that $\mathcal T$ is closed-hereditary and closed under homeomorphic images. In particular, in the context of $T_1$ spaces, the class $\mathcal T$ of all normal spaces satisfies (W). Hence Property (W) seems to be definitely very\/ {\emph{W}}\/eak. A few more conditions implying (W) shall be discussed near the end of this note. If $\prod _{i \in I} X_i $ is a product of topological spaces and $ J \subseteq I$, then $\prod _{i \in J} X_i $ is called a \emph{subproduct} of $\prod _{i \in I} X_i $. 
If $I \setminus J$ is finite, then (with a slight abuse of terminology) we shall call $\prod _{i \in J} X_i $ a \emph{cofinite subproduct} of $\prod _{i \in I} X_i $. \begin{lemma} \labbel{ln} Suppose that $X$ is a nonempty product of topological spaces and $\mathcal T$ is a class of topological spaces closed under homeomorphic images. \begin{enumerate} \item[(a)] If $\mathcal T$ satisfies {\rm (W2)} and $X$ is $\mathcal T$-local, then all factors and all subproducts are $\mathcal T$-local. \item[(b)] If $\mathcal T$ satisfies {\rm (W3)}, $X$ contains a set $T \in \mathcal T$ and $T$ contains some nonempty open set, then some cofinite subproduct belongs to $\mathcal T$. \item[(c)] If $\mathcal T$ is closed under finite products, $\mathcal T$ satisfies {\rm (W3$'$)}, $X \in \mathcal T$ and all factors of $X$ are $\mathcal T$-local, then $X$ is $\mathcal T$-local. \item[(d)] If $\mathcal T$ is closed under finite products, then the product of two local $\mathcal T$-spaces is still $\mathcal T$-local. \end{enumerate} \end{lemma} \begin{proof} (a) Let $X=\prod _{i \in I} X_i $ be nonempty and $\mathcal T$-local and let $ \emptyset \not=J \subseteq I$. Since $\prod _{i \in I} X_i $ is nonempty, then also $Y=\prod _{i \in J} X_i $ is nonempty. We have to show that $Y $ is $\mathcal T$-local (we allow $|J|= 1$, so this case takes into account factors). Let $H= I \setminus J$. If $H= \emptyset $, there is nothing to prove, hence we can suppose that $H\not= \emptyset $. Let $B= \prod _{i \in H} X_i $. Notice that $X$ is homeomorphic to $Y \times B$, hence $Y \times B$ is $\mathcal T$-local, since $\mathcal T$ is closed under homeomorphisms. Let $a \in Y$ and suppose that $A$ is a neighborhood of $a$ in $Y $. Since $X=\prod _{i \in I} X_i $ is nonempty, then also $B= \prod _{i \in H} X_i $ is nonempty; pick $b \in B$. 
Now $A \times B$ is a neighborhood of $(a,b)$ in $Y \times B$, which is $\mathcal T$-local, hence there is $T \subseteq A \times B$ which is a $\mathcal T$-neighborhood of $(a,b)$. By (W2), there is $ S \subseteq A$ which is a $\mathcal T$-neighborhood of $a$. The above argument works for every $a \in Y $ and every neighborhood $A$ of $a$, hence we get that $Y$ is $\mathcal T$-local. (b) Let $X=\prod _{i \in I} X_i $, hence $T$ contains a basic nonempty open set of the form $\prod _{i \in I} Y_i $, where each $Y_i$ is open in $X_i$, and $Y_i =X_i$, for every $i \in J = I \setminus F$, with $F$ finite. If $F= \emptyset $ then $T=X$ and we are done, so suppose that $F \not= \emptyset $. Take $A= \prod _{i \in J} X_i $ and $B= \prod _{i \in F} X_i $. Since $\mathcal T$ is closed under homeomorphisms, we lose no generality if we identify $X$ with $A \times B$. Taking $B' = \prod _{i \in F} Y_i $, we have that $ A \times B' \subseteq T \subseteq A \times B$, hence we can apply (W3) to get that the cofinite subproduct $A$ belongs to $\mathcal T$. (c) If $x = ( x_i) _{i \in I} \in X$ and $U$ is a neighborhood of $x$, then $U$ contains a basic open set of the form $\prod _{i \in I} U_i $, where $x_i \in U_i$ for every $i \in I$, and $U_i=X_i$, for all indices except perhaps for indices in a finite set $F$. By (W3$'$) and closure under homeomorphisms, $C=\prod _{i \in I \setminus F } X_i \in \mathcal T$. Since each factor is $\mathcal T$-local, then, for every $i \in F $, $x_i$ has a $\mathcal T$-neighborhood $V_i \subseteq U_i$. Let $V_i = X_i$ for $i \not\in F $. Then $\prod _{i \in I} V_i $ is a neighborhood of $x$ contained in $U$. Since $\prod _{i \in I} V_i $ is homeomorphic to the finite product $C \times \prod _{i \in F} V_i $ and since $\mathcal T$ is closed under finite products and homeomorphisms, then $\prod _{i \in I} V_i \in \mathcal T$, hence $\prod _{i \in I} V_i $ is a neighborhood of $x$ as requested. (d) is similar and easier. 
\end{proof} Notice that (a) in Lemma \ref{ln} holds also when $\prod _{i \in I} X_i $ is endowed with the box topology, but this is not necessarily the case for (b) and (c). \section{Properties closed under products} \labbel{clpr} \begin{theorem} \labbel{bebis} Suppose that $X$ is a nonempty product and $\mathcal T$ is a class of topological spaces closed under finite products and satisfying {\rm (W)}. Then the following conditions are equivalent (conditions marked with an asterisk are equivalent under the further assumption that every $\mathcal T$-space is $\mathcal T$-local). \begin{enumerate} \item $X$ is $\mathcal T$-local. \item Each factor is $\mathcal T$-local and some cofinite subproduct is a $\mathcal T$-space. \item[(3)*] Some cofinite subproduct is a $\mathcal T$-space and each of the remaining factors is $\mathcal T$-local. \end{enumerate} If, in addition, $\mathcal T$ is closed under arbitrary products, then the preceding conditions are also equivalent to the following ones. \begin{enumerate} \item[(4)] Every countable subproduct is $\mathcal T$-local. \item[(5)] Each factor is $\mathcal T$-local and all but a finite number of factors are $\mathcal T$-spaces. \item[(6)*] All but a finite number of factors are $\mathcal T$-spaces and the remaining factors are $\mathcal T$-local. \end{enumerate} \end{theorem} \begin{proof} (1) $\Rightarrow $ (2) follows from Lemma \ref{ln}(a)(b). (2) $\Rightarrow $ (1) By Lemma \ref{ln}(c), the cofinite subproduct given by (2) is $\mathcal T$-local, and then $X$ is homeomorphic to a finite product of $\mathcal T$-local spaces, hence $\mathcal T$-local, by Lemma \ref{ln}(d). (2) $\Rightarrow $ (3) is trivial. If (3) holds and every $\mathcal T$-space is $\mathcal T$-local, then the cofinite subproduct given by (3) is $\mathcal T$-local, hence every factor is $\mathcal T$-local, by Lemma \ref{ln}(a). Thus (3) $\Rightarrow $ (2). 
(2) $\Rightarrow $ (5) follows by (W3$'$) and (W1); (5) $\Rightarrow $ (2) is immediate from the additional assumption. Hence, under the additional assumption, (1), (2) and (5) are equivalent. (1) $\Rightarrow $ (4) follows from Lemma \ref{ln}(a). If (4) holds, then, again by \ref{ln}(a), all factors are $\mathcal T$-local. Suppose by contradiction that (4) holds and (5) fails, thus there are infinitely many factors which are not $\mathcal T$-spaces. Choose a countable subfamily. By (4), the subproduct of the members of such a family is $\mathcal T$-local. Applying the already proved implication (1) $\Rightarrow $ (5) to this countable subproduct, we get that all but finitely many members of the subfamily are $\mathcal T$-spaces, a contradiction. The equivalence of (5) and (6) is immediate from the assumption that every $\mathcal T$-space is $\mathcal T$-local. \end{proof} Notice that the equivalence of (1) and (2) above improves \cite[Theorem 2.4]{BE}. This is because the assumptions in \cite[Theorem 2.4]{BE} imply that $\mathcal T$ is closed under finite products, and, under the same assumptions, the last conclusion in \cite[Theorem 2.4]{BE} is equivalent to the product having a cofinite subproduct in $\mathcal T$. The versatility of Theorem \ref{bebis} and the broad range of validity of Property (W) are shown by the samples presented in the next two corollaries. In some cases the results are well-known. Further examples can be found in \cite{BE}; in some cases the results here are slightly more general. Following \cite{BE}, if $\kappa$ is an infinite cardinal, we denote by $\mathcal T_ \kappa $ the class of all spaces which can be obtained as the union of $< \kappa $ many $\mathcal T$-spaces. Notice that if $\mathcal T$ is closed under finite products, then $\mathcal T_ \kappa $ is closed under finite products, too. 
\begin{corollary} \labbel{ex1} A nonempty product of topological spaces is locally Hausdorff if and only if all but finitely many factors are Hausdorff and all the remaining factors are locally Hausdorff. The same holds when ``Hausdorff'' is replaced by any one of the following: $T_3$, regular, Tychonoff. If we work in the context of regular spaces, the same applies to compact, sequentially pseudocompact, bounded, $\lambda$-bounded, $D$-compact, $D$-feebly compact (for some given ultrafilter $D$). Here and below we can also consider the conjunction of any set of the above properties, in particular, simultaneous $D$-compactness, for $D$ belonging to a given set of ultrafilters. Without assuming separation axioms, a nonempty product of topological spaces is locally $D$-compact if and only if all factors are locally $D$-compact and all but finitely many factors are $D$-compact. The same applies when ``$D$-compact'' is replaced by any of the above mentioned properties, as well as by connected, path-connected, $H$-closed. Relative to any of the above properties a nonempty product is local if and only if every countable subproduct is local. If $\kappa$ is an infinite cardinal, a nonempty product is locally $\kappa$-sequentially compact if and only if all factors are locally $\kappa$-sequentially compact and some cofinite subproduct is $\kappa$-sequentially compact. The same applies when ``$\kappa$-sequentially compact'' is replaced by $\mathcal T_ \kappa $ (if $\mathcal T$ is closed under finite products and $\mathcal T_ \kappa $ satisfies (W)), or ``of cardinality $<\kappa$''. \end{corollary} Notice that, for example, a Hausdorff compact space is locally compact, but this is not necessarily true without assuming the Hausdorff property. Hence, in case we assume no separation axiom, we get only the weaker statements in the third paragraph of Corollary \ref{ex1}. In most cases the Hausdorff property is not enough and regularity is needed. 
As an example, if some space is $D$-feebly compact, then the closure of every open set is $D$-feebly compact, that is, $D$-feeble compactness is hereditary with respect to regular closed sets. Hence, by Lemma \ref{triv}(f), a regular $D$-feebly compact space is locally $D$-feebly compact, but, again, this is not necessarily the case, without assuming some separation axiom. Notice that in the context of Tychonoff spaces, $D$-feebly compact spaces are usually called \emph{$D$-pseudocompact}. For certain properties, some slightly more refined results can be obtained. Local sequential compactness shall be dealt with in the next section. \begin{corollary} \labbel{metr} (a) A nonempty product is locally metrizable if and only if all but countably many factors are one-element, all but finitely many factors are metrizable and the remaining factors are locally metrizable. In particular, a nonempty product is locally metrizable if and only if each subproduct by $ \leq \omega_1$ factors is locally metrizable. (b) A nonempty product is locally finite if and only if all but a finite number of factors are one-element spaces and the remaining factors are locally finite. A nonempty product is locally finite if and only if each countable subproduct is locally finite. The same applies when ``finite'' is replaced by either ``countable'', or ``of cardinality $< \kappa $'', if $ \omega \leq \kappa \leq 2^ \omega $ (of course, this adds nothing, if the Continuum Hypothesis holds). \end{corollary} \section{Local sequential compactness} \labbel{examples} We first present another corollary of Theorem \ref{bebis}. It deals with the general situation in which a product belongs to $\mathcal T$ if and only if all subproducts by a small number of factors belong to $\mathcal T$. 
\begin{corollary} \labbel{cor} Suppose that $\mathcal T$ is a class of topological spaces closed under finite products, $\mathcal T$ satisfies {\rm (W)} and there is some cardinal $\kappa > \omega $ such that a product belongs to $\mathcal T$ if and only if every subproduct by $<\kappa$ factors belongs to $\mathcal T$. If $X=\prod _{i \in I} X_i $ is a nonempty product, then the following conditions are equivalent. \begin{enumerate} \item[(I)] $X$ is $\mathcal T$-local. \item[(II)] Every subproduct by $<\kappa$ factors is $\mathcal T$-local. \end{enumerate} \end{corollary} \begin{proof} (I) $\Rightarrow $ (II) follows from Lemma \ref{ln}(a). We shall show that (II) implies Condition (2) in Theorem \ref{bebis}. If (II) holds, then all factors are $\mathcal T$-local, again by Lemma \ref{ln}(a). Arguing as in the last part of the proof of Theorem \ref{bebis} and since $\kappa$ is uncountable, we get that all but a finite number of factors are $\mathcal T$-spaces. Let $J$ be the set of those factors which are in $\mathcal T$. By assumption, any subproduct $\prod _{i \in H} X_i $ of $X$ such that $|H|<\kappa$ is $\mathcal T$-local, in particular, this happens if $H \subseteq J$. By Theorem \ref{bebis}(1) $\Rightarrow $ (2) \emph{applied to the product} $\prod _{i \in H} X_i $, we get that $\prod _{i \in H'} X_i $ is a $\mathcal T$-space, for some $H'$ cofinite in $H$. If $H \subseteq J$, then $X_i$ is a $\mathcal T$-space, for $i \in H \setminus H'$, hence, since, by assumption, $\mathcal T$ is closed under finite products, $\prod _{i \in H} X_i $ is a $\mathcal T$-space. Since this happens for every $H \subseteq J$ such that $|H|<\kappa$, we get from the assumption on $\mathcal T$ that $\prod _{i \in J} X_i $ belongs to $\mathcal T$. Thus \ref{bebis}(2) holds. \end{proof} \begin{corollary} \labbel{lsc} Let $X = \prod _{i \in I} X_i $ be a nonempty product. Then the following conditions are equivalent. 
\begin{enumerate} \item $X$ is locally sequentially compact; \item each factor is locally sequentially compact and some cofinite subproduct is sequentially compact; \item each factor is locally sequentially compact and there is a cofinite $J \subseteq I$ such that whenever $J' \subseteq J$ and $|J'| \leq \m s$, then $\prod _{i \in J'} X_i $ is sequentially compact; \item all subproducts by $\leq \m s$ factors are locally sequentially compact; \item ($ \m h = \m s$) all factors are locally sequentially compact, all but a finite number of factors are sequentially compact and the set of factors with a nonconverging sequence has cardinality $< \m s$. \item ($ \m h = \m s$, for $T_1$ spaces) all factors are locally sequentially compact, all but a finite number of factors are sequentially compact, and the set of factors with more than one point has cardinality $< \m s$. \item ($ \m h = \m s$, for $T_3$ spaces) the set of factors with more than one point has cardinality $< \m s$, all but a finite number of factors are sequentially compact, and the remaining factors are locally sequentially compact. \end{enumerate} \end{corollary} \begin{proof} In \cite[Corollary 6.4]{L} we have proved that a product is sequentially compact if and only if all subproducts by $\leq \m s$ factors are sequentially compact. See \cite{L} for the definition of $\m s$, $\m h$ and further references. (1) $\Leftrightarrow $ (2) is a particular case of the corresponding equivalence in Theorem \ref{bebis}. (2) $\Leftrightarrow $ (3) follows from \cite[Corollary 6.4]{L}. (1) $\Leftrightarrow $ (4) follows from \cite[Corollary 6.4]{L} and Corollary \ref{cor} with $\kappa= \m s^+$. In \cite[Corollary 6.6]{L} we have proved that if $ \m h = \m s$, then a product is sequentially compact if and only if all factors are sequentially compact and the set of factors with a nonconverging sequence has cardinality $<\m s$. This implies (2) $\Leftrightarrow $ (5). 
(5) $\Leftrightarrow $ (6) follows from the fact that a $T_1$ space in which every sequence converges is necessarily a one-point space. (6) $\Leftrightarrow $ (7) follows from the fact that a $T_3$ sequentially compact space is locally sequentially compact. \end{proof} \section{Some classes which are not closed under products} \labbel{clnpr} In order to work with classes which are not necessarily closed under products, we shall consider the following property of some class $\mathcal T$. \begin{enumerate} \item [(S)] There are a class $\mathcal S$ of topological spaces and an infinite cardinal $\kappa$ such that a nonempty product $\prod _{i \in I} X_i $ belongs to $\mathcal T$ if and only if $I$ can be written as a disjoint union $I=J \cup K$ in such a way that $|J| < \kappa $, $\prod _{i \in J} X_i $ is a $\mathcal T$-space and $\prod _{i \in K} X_i $ is an $\mathcal S$-space. We also require that $\mathcal S$ is closed under homeomorphic images and under taking cofinite subproducts. \end{enumerate} In the above condition we allow both $J= \emptyset $ and $K= \emptyset $. This is consistent, since if $\mathcal T$ satisfies (W3$'$), then any one-element space is a $\mathcal T$-space. Moreover, ``$\mathcal S$ being closed under cofinite subproducts'' can be interpreted as implying that any one-element space belongs to $\mathcal S$. In particular, (S) implies that every $\mathcal S$-space is a $\mathcal T$-space and, more generally, that the product of a $\mathcal T$-space with an $\mathcal S$-space is a $\mathcal T$-space. Hence also the product of a $\mathcal T$-space with finitely many $\mathcal S$-spaces is a $\mathcal T$-space. If not otherwise mentioned, we \emph{do not} require that $\mathcal S$ satisfy any special further property. 
However, we should mention that if $\mathcal S$ satisfies the additional assumption that a nonempty product belongs to $\mathcal S$ if and only if each factor belongs to $\mathcal S$, then a nonempty product belongs to $\mathcal T$ if and only if every subproduct by $\leq \kappa$ factors belongs to $\mathcal T$. Indeed, if the latter is the case, we cannot have $\kappa$-many factors failing to be $\mathcal S$-spaces, hence the product is a $\mathcal T$-space, by (S). \begin{theorem} \labbel{st} Suppose that $\mathcal T$ is a class of topological spaces and $\mathcal T$ satisfies {\rm (W)} and {\rm (S)}, as given by $\mathcal S$ and $\kappa$. If $X=\prod _{i \in I} X_i $ is a nonempty product, then the following conditions are equivalent. \begin{enumerate} \item $X$ is $\mathcal T$-local. \item Both the following conditions hold. \begin{enumerate}[(a)] \item All subproducts of $X$ by $<\kappa$ factors are $\mathcal T$-local, and \item the index set $I$ can be partitioned into two disjoint subsets as $I=H \cup K$ in such a way that $|H| < \kappa $ and $\prod _{i \in K} X_i $ is an $\mathcal S$-space. \end{enumerate} \item The index set $I$ can be partitioned into two disjoint subsets as $I=H \cup K$ in such a way that $|H| < \kappa $, $\prod _{i \in K} X_i $ is an $\mathcal S$-space and $\prod _{i \in H \cup F} X_i $ is $\mathcal T$-local, for every finite $F \subseteq I$. \end{enumerate} If $\mathcal S$ satisfies the additional assumption that a nonempty product belongs to $\mathcal S$ if and only if each factor belongs to $\mathcal S$, then the preceding conditions (1)-(3) are equivalent to the following. \begin{enumerate} \item [(4)] All subproducts by $\leq \kappa$ factors are $\mathcal T$-local. \end{enumerate} \end{theorem} \begin{proof} If (1) holds, then each subproduct is $\mathcal T$-local by Lemma \ref{ln}(a), hence (2)(a) holds. 
Moreover, by Lemma \ref{ln}(b), some cofinite subproduct is a $\mathcal T$-space, hence (2)(b) follows from (S), since if $F$ is finite and $|J| < \kappa $ then $|J \cup F| < \kappa $, $\kappa$ being infinite. (2) $\Rightarrow $ (3) is trivial. Suppose that (3) holds, $x = ( x_i) _{i \in I} \in X$ and $U$ is a neighborhood of $x$. Thus $U$ contains a basic neighborhood of the form $\prod _{i \in I} U_i $, where $U_i = X_i$, except for those $i$ in some finite set $F \subseteq I$. If $H$ and $K$ are given by (3), then, by the last requirement in (S), $\prod _{i \in K \setminus F} X_i $ is an $\mathcal S$-space. By (3), the subproduct $X'= \prod _{i \in H \cup F} X_i $ is $\mathcal T$-local. Consider the neighborhood $U'=\prod _{i \in H \cup F} U_i $ of $x'=( x_i) _{i \in H \cup F} $ in $X'$. Since $X'$ is $\mathcal T$-local, we get some $T \in \mathcal T$ such that $x' \in T \subseteq U'$. By (S), $T \times \prod _{i \in K \setminus F} X_i $ is a $\mathcal T$-space and, modulo the natural homeomorphism, it is a neighborhood of $x$ contained in $U$. Hence we have proved that $X$ is $\mathcal T$-local, that is (1) holds. Thus (1)-(3) are equivalent. (1) $\Rightarrow $ (4) follows again by Lemma \ref{ln}(a). We shall conclude the proof by showing that (4) implies (2), under the additional assumption. The implication (4) $\Rightarrow $ (2)(a) is trivial. In order to show (2)(b), in view of the additional hypothesis, it is enough to show that the set of all factors which are not $\mathcal S$-spaces has cardinality $<\kappa$. Suppose by contradiction that $J \subseteq I$, $|J| = \kappa $ and $X_i \not \in \mathcal S $, for every $i \in J$. By (4), the subproduct $\prod _{i \in J} X_i $ is $\mathcal T$-local, but then we get a contradiction by applying (1) $\Rightarrow $ (2)(b) to that subproduct. 
\end{proof} If $\prod _{i \in I} X_i $ is a product of topological spaces and $J \subseteq I$, we shall say, again with some abuse of terminology, that a product $\prod _{i \in H} X_i $ is a \emph{finite superproduct} of $( X_i) _{i \in J} $ if $H=J \cup F$, for some finite $F \subseteq I$. \begin{corollary} \labbel{fin} Suppose that $ n <\omega$ and $X$ is a nonempty product. Then the following conditions are equivalent. \begin{enumerate} \item $X$ is locally finally $ \omega_n$-compact. \item All but $ <\omega_n$ factors are compact, and any finite superproduct of the set of noncompact factors is locally finally $ \omega_n$-compact. \item Every subproduct by $\leq \omega _n$ factors is locally finally $ \omega_n$-compact. \item (for $T_2$ spaces) All but $ <\omega_n$ factors are compact, and the product of the noncompact factors is locally finally $ \omega_n$-compact. \end{enumerate} If $\lambda$ is a strong limit cardinal with $\cf\lambda \geq \omega _n$, then all the above conditions hold when final $ \omega_n$-compactness is everywhere replaced by $[ \omega _n, \lambda ]$-compactness and compactness is replaced by initial $\lambda$-compactness (but the separation assumption in (4) should be $T_3$). \end{corollary} \begin{proof} Immediate from Theorem \ref{st} and \cite[Theorems 4.1 and 4.3]{L}. \end{proof} Notice that $ \omega_1$-final compactness is the same as Lindel\"ofness. Since the product of countably many copies of $ \omega$ with the discrete topology is Lindel\"of and locally Lindel\"of, but the product of uncountably many copies of $ \omega$ is not Lindel\"of (hence not locally Lindel\"of, either), we get that ``$\leq \omega _1$'' in Condition (3) above cannot be improved to ``$< \omega _1$''. However, we do not know whether Corollary \ref{fin} can be improved, say, in the case of Lindel\"ofness, to the following. 
A product is locally Lindel\"of if and only if all but countably many factors are compact, all but finitely many factors are Lindel\"of and every finite subproduct is locally Lindel\"of. We expect the above statement to be false, in general. Again applying Theorem \ref{st}, in this case together with \cite[Corollary 5.3 and Propositions 5.1 and 5.2]{L}, we get the following. \begin{corollary} \labbel{meng} If $X$ is a nonempty product, then the following conditions are equivalent. \begin{enumerate} \item $X$ is locally Menger. \item All but countably many factors are compact, and any finite superproduct of the set of non Menger factors is locally Menger. \item Every subproduct by $\leq \omega _1$ factors is locally Menger. \item (for $T_2$ spaces) All but countably many factors are Menger, and the product of the non Menger factors is locally Menger. \end{enumerate} All the above conditions hold when Menger is everywhere replaced by either the Rothberger property, or the Rothberger property for countable covers, and compactness by supercompactness. \end{corollary} \section{Further remarks} \labbel{fr} All the above arguments, with the obvious modifications, can be applied also to the ``basic'' and the ``local$_1$'' case. \begin{proposition} \labbel{same} Lemma \ref{ln}, Theorems \ref{bebis} and \ref{st} and Corollary \ref{cor} hold with ``local'' replaced everywhere by either ``basic'' or ``local$_1$'', except that in the ``basic'' case Condition {\rm (W2)} should be replaced everywhere by the following Condition {\rm (W2$_O$)}, and {\rm (W)} should be modified accordingly, that is, we should consider {\rm (W$_O$)}, the conjunction of {\rm (W1)}, {\rm (W2$_O$)} and {\rm (W3)}. \begin{enumerate} \item[{\rm (W2$_O$)}] Whenever $A$, $B$ are topological spaces, $a \in A$, $b \in B$ and $ T \subseteq A \times B$ is an \emph{open} $\mathcal T$-neighborhood of $(a, b)$, then there is $S \subseteq A$ which is an \emph{open} $\mathcal T$-neighborhood of $a$. 
\end{enumerate} \end{proposition} Let us say that $\mathcal T$ satisfies (C) if $\mathcal T$ is closed under images of continuous surjections. As we mentioned, (C) implies (W). It is easy to see that if $\mathcal T$ satisfies (C), then the image of a local $\mathcal T$-space under a continuous open map is still a local $\mathcal T$-space. In order to get the above conclusion, it is not enough to assume (W) in place of (C). E.~g., the image of a Hausdorff space (hence locally Hausdorff) is not necessarily locally Hausdorff. The example is classical: take two disjoint copies of the unit real interval and pairwise identify the copies of $0$, as well as the copies of $1/n$, for each $n>0$. However, there are conditions weaker than (C) which still imply that images of local $\mathcal T$-spaces under open continuous maps are $\mathcal T$-local. \begin{enumerate} \item[{\rm (C$^{-}$)}] Whenever $X$ is a topological space, $T \subseteq X$ is a subspace, $T \in \mathcal T$ and $\pi: X \to Y$ is a continuous open surjection, then $\pi(T) \in \mathcal T$. \item[{\rm (C$^{=}$)}] Whenever $X$ is a topological space, $T \subseteq X$ contains some open set of $X$ and $\pi: X \to Y$ is a continuous open surjection, then $\pi(T) \in \mathcal T$. \item[{\rm (C$^{\equiv}$)}] Whenever $X$ is a topological space, $x \in T \subseteq X$, $T$ is a $\mathcal T$-neighborhood of $x$ in $X$ and $\pi: X \to Y$ is a continuous open surjection, then $\pi(x)$ has some $\mathcal T$-neighborhood. \end{enumerate} Notice that (C) $\Rightarrow $ (C$^{-}$) $\Rightarrow $ (C$^{=}$) $\Rightarrow $ (C$^{\equiv}$) $\Rightarrow $ (W2) and (C$^{=}$) $\Rightarrow $ (W). Consider also the following property (C$^{\equiv}_O$), which implies (W2$_O$). \begin{enumerate} \item[{\rm (C$^{\equiv}_O$)}] Whenever $X$ is a topological space, $x \in T \subseteq X$, $T \in \mathcal T$ is an open neighborhood of $x$ in $X$ and $\pi: X \to Y$ is a continuous open surjection, then $\pi(x)$ has some open neighborhood in $\mathcal T$.
\end{enumerate} \begin{lemma} \labbel{lem} If $\mathcal T$ is a class of topological spaces satisfying {\rm (C$^{\equiv}$)}, then the image of any local (resp., local$_1$) $\mathcal T$-space under a continuous open surjection is a local (resp., local$_1$) $\mathcal T$-space. If $\mathcal T$ is a class of topological spaces satisfying {\rm (C$^{\equiv}_O$)}, then the image of any basic $\mathcal T$-space under a continuous open surjection is a basic $\mathcal T$-space. \end{lemma} \begin{remark} \labbel{ambient} We have usually worked in the class of arbitrary topological spaces, however, essentially all the above definitions and results can be considered as restricted to some special class, e.~g., $T_1$, Hausdorff or Tychonoff spaces. Seemingly, we can allow also spaces with a richer structure, e.~g., topological groups. We only need an ambient in which it makes sense to talk of (arbitrary) products, and, if there is more structure other than topology, the topological Tychonoff product agrees with the product of the structure. If we work in a specific ambient, say, of Hausdorff spaces, everything should be interpreted relative to that ambient; for example, in that context, a class $\mathcal T$ is ``closed under images of surjective continuous functions'' if whenever $f:X \to Y$ is continuous and surjective, $X \in \mathcal T$ \emph{and $X$ and $Y$ are Hausdorff}, then $Y \in \mathcal T$. For example, the class of Hausdorff compact spaces is closed under images of surjective continuous functions in the Hausdorff context, but \emph{not} in the context of arbitrary topological spaces. \end{remark} \begin{remark} \labbel{improv} It seems that, whenever we use the assumption that $\mathcal T$ is closed under finite products, we can do with the following weaker condition. \begin{enumerate} \item [(FP)] Whenever $A, B \in \mathcal T$ and $x \in A \times B$, then $x$ has a neighborhood in $\mathcal T$. 
\end{enumerate} This remark applies, e.~g., to Lemma \ref{ln}(c)(d), Theorem \ref{bebis}(1)-(3) and Corollary \ref{cor}. Notice that (FP) can be reformulated as ``the product of two $\mathcal T$-spaces is $\mathcal T$-local$_1$''. Notice also that if $\mathcal T$ is such that every $\mathcal T$-space is $\mathcal T$-local, then (FP) is equivalent to the assertion that the product of two $\mathcal T$-local spaces is $\mathcal T$-local. We know no application of the above remarks, hence we have kept the statements in the simpler (but less general) form. \end{remark} \begin{remark} \labbel{ac} Concerning our use of the Axiom of Choice (AC), as the results are formulated, it seems unnecessary in the statements and proofs of Lemmas \ref{triv}, \ref{ln}, \ref{lem}, Theorems \ref{bebis} and \ref{st} (except for \ref{bebis}(4) and \ref{st}(4)) and in the corresponding parts of Proposition \ref{same}. The use of AC seems to be essential in most examples and applications. \end{remark} \begin{disclaimer*} This is a preliminary report and might contain some inaccuracies. In particular, the author acknowledges that the following list of references might be incomplete or partially inaccurate. Henceforth the author strongly discourages the use of indicators extracted from the list in decisions about individuals, attributions of funds, selections or evaluations of research projects, etc. A more detailed disclaimer can be found at the author's web page. \end{disclaimer*}
\section{Introduction} \label{sec:1} Galactic bulges have been generally assumed to be simple components that, morphologically, closely resemble elliptical galaxies. The first photometric decompositions of lenticular and spiral galaxies \citep[e.g.][]{1993MNRAS.265.1013C} established that the radial behaviour of their surface brightness followed a de Vaucouleurs \citep{1948AnAp...11..247D} or a S\'ersic profile \citep{1968adga.book.....S} with typically high $n$ values. In the mid-1990s, we discovered that bulges in late-type, spiral galaxies were smaller and displayed exponential profiles \citep{1995MNRAS.275..874A,1996ApJ...457L..73C,1999ApJ...523..566C}. This difference observed in the light profiles was also present in their colours, with exponential bulges displaying bluer colours than those with larger S\'ersic $n$ \citep[e.g.][]{2004ApJS..152..175M,2009MNRAS.395.1669G}. Despite the marked distinction in their light profiles, the variation of colour between bulges and their surrounding disks is rather smooth \citep[e.g.][]{1994AJ....107..135B}.\medskip Our view of the location of bulges in the major scaling relations (e.g. Faber-Jackson [\citealt{1976ApJ...204..668F}], Kormendy relation [\citealt{1977ApJ...218..333K}], or Fundamental Plane [\citealt{1987ApJ...313...42D,1987ApJ...313...59D}]) has also evolved over time. Owing to the sample selection biases of the first studies (i.e. samples of predominantly early-type galaxies), no significant differences were found between bulges and elliptical galaxies \citep[e.g.][]{1989ARA&A..27..235K, 1996MNRAS.280..167J, 2007ApJ...665.1104B}. With samples nowadays including large numbers of spiral galaxies, our understanding of the location of bulges in those relations has drastically changed \citep[e.g.][]{2009MNRAS.393.1531G, 2010MNRAS.405.1089L,2015MNRAS.446.4039E}.\medskip One aspect in the study of galactic bulges that has radically changed our understanding of their nature (i.e.
merger-driven structures around which disks are formed) is their kinematics. While the photometric properties of some bulges already pointed to a high degree of structural similarity with disks (e.g. exponential profiles), such similarity can only be confirmed if their kinematics also follows that displayed by disks (e.g. significant rotation and low velocity dispersions). In a pioneering study, \citet{1982ApJ...256..460K} investigated the degree of rotational support of a small sample of bulges compared to elliptical galaxies. Figure~\ref{fig:1} presents an updated version, from \citet{2008ASPC..396..297K}, of the original figure published in 1982. The figure shows that bulges display a much larger degree of rotation than the elliptical galaxies at a given apparent ellipticity. This was the first piece of evidence in the literature indicating that bulges differed dynamically from their otherwise similar-looking, slowly rotating, massive early-type counterparts. While we know now that this picture is not accurate, at the time it led to the realisation that some bulges are actually disks and therefore may not have formed in merger episodes, as most scenarios would assume, but rather formed from internal material through secular processes \citep{1993IAUS..153..209K}. These ideas evolved over time and gave rise to the definition of pseudobulges. We refer the reader to \citet{2013seg..book.....F} for an extensive review, produced by the lecturers of the \textit{XXIII Canary Islands Winter School of Astrophysics}, of bulge formation and evolution in the context of secular evolutionary processes.\medskip In this review I will give an overview of the kinematic properties observed in extragalactic bulges, establish their connection to the dynamical features produced by bars, and briefly discuss the similarities with the Milky Way bulge.
I will also summarise our as yet limited knowledge of the kinematics of bulges at high redshift and end with future prospects to be explored in this field. \begin{figure} \centering \includegraphics[width=0.6\linewidth]{figure1.eps} \caption{Historical view of the level of rotational support and anisotropy of a sample of elliptical galaxies (crosses) and bulges (remaining symbols) from \citet{2008ASPC..396..297K}. This is an updated version of the original figure presented in \citet{1982ApJ...256..460K}. While the physical interpretation of this figure has evolved over time, it was the first piece of evidence suggesting that bulges and massive early-type galaxies were intrinsically different.} \label{fig:1} \end{figure} \section{Kinematic Properties of extragalactic Bulges} \label{sec:2} The central regions of galaxies are complex environments often displaying multiple coexisting structural components. It is thus important to define what we mean by a bulge in this context. In this chapter I will consider as a bulge the stellar structures in the central regions of galaxies that ``bulge'' vertically over the disk. The modern view is that there are three types of bulges: classical bulges (with properties akin to elliptical galaxies), disky bulges (with properties akin to disks), and Boxy/Peanut bulges (which are related to bars, see \S\ref{sec:3}). In addition to bulges, the central regions of galaxies can also host smaller structures such as nuclei, black holes, or nuclear rings (that do not extend vertically beyond the main disk of the galaxy).\medskip The study of bulges is often hampered by the contamination from different sources\footnote{It is important to remember that properties observed in galaxies are the result of integrating along the line of sight. This averaging depends greatly on the number of components as well as the type of stars contributing most to the light in that direction.}.
In general there are two main components that can affect our measurements: (1) the underlying main disk of the galaxy, as so far there is no indication of truncation of disks in the inner parts of galaxies; (2) dust, which will prevent the full integration along the line-of-sight and thus will only allow us to measure properties of stars in front of the dust lanes. These issues are usually solved by observing galaxies in edge-on or face-on configurations. The first one will give a clear view of the bulge above the disk and avoid dust obscuration. It is most useful for prominent bulges in early-type galaxies. The face-on orientation will minimize the effects of the underlying disk. It is best for small bulges in late-type systems, which have higher surface brightness than the disk. The drawback is that if bulges are rotating, their signature will likely be minimal in that orientation.\medskip In the following subsections I will summarize the main kinematic properties of bulges paying particular attention to those works in the literature that have considered these issues more carefully. \subsection{Rotational support and level of anisotropy} \label{sec:2.1} \citet{1982ApJ...256..460K} were the first to describe the level of rotational support specifically in bulges of galaxies. This was achieved by measuring the maximum rotational velocity, observed in the regions above the main disk where the light of the bulge dominates, relative to the central velocity dispersion of the system (V$_{\rm max}$/$\sigma$). The work by Kormendy not only concluded that the level of rotation observed in galactic bulges was larger than that displayed by elliptical galaxies but also, with the aid of model predictions \citep{1981seng.proc...55B}, concluded that bulges were very likely oblate, had isotropic velocity dispersions, and were flattened by rotation.
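For reference, the benchmark curve against which such measurements are usually compared (a standard tensor-virial result, quoted here for context rather than taken from the studies above) is that of an isotropic oblate rotator seen edge-on, which follows approximately
\begin{equation}
\left(\frac{V_{\rm max}}{\sigma}\right)_{\rm iso} \simeq \sqrt{\frac{\epsilon}{1-\epsilon}},
\end{equation}
where $\epsilon$ is the apparent ellipticity. Systems falling close to this curve are consistent with being flattened by rotation, while points well below it require anisotropic velocity dispersions.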
This study was quickly followed up by Kormendy himself \citep{1982ApJ...257...75K}, but also by other authors \citep{1983ApJ...266...41D,1983ApJ...266..516D}, reaching similar conclusions. Our current view on the level of anisotropy of bulges is, however, different \citep[e.g.][]{2007MNRAS.379..418C}.\looseness-2\medskip The V$_{\rm max}$/$\sigma$--$\epsilon$ diagram has been very popular for its power to classify dynamically different kinds of galaxies, but most studies have focused on entire systems and not on their bulge components specifically \citep[e.g.][]{1988A&A...193L...7B,1994A&A...282L...1P, 1996ApJ...464L.119K,1999ApJ...513L..25R,2004AJ....128..121V}. With the advent of integral field spectroscopy (IFS), this diagram has evolved and led to a parameter (i.e. $\lambda_{\rm Re}$, \citealt{2007MNRAS.379..401E}) that allows a more robust (and less inclination dependent) kinematic classification of galaxies. $\lambda_{\rm Re}$ quantifies the level of specific angular momentum in a galaxy within its half-light radius. Applied to large samples of early-type galaxies it allowed the distinction between Slow and Fast rotating galaxies \citep{2007MNRAS.379..401E,2011MNRAS.414..888E}. Together with model predictions for oblate/prolate, (an)isotropic systems, it can also be used to establish the level of anisotropy of galaxies. This aspect was explored by \citet{2007MNRAS.379..418C} for the SAURON sample \citep{2002MNRAS.329..513D} of early-type galaxies. This study shows that the family of Slow Rotators is weakly triaxial, while the Fast Rotators (with V$_{\rm max}$/$\sigma$ values similar to those observed in bulges) are typically oblate and display a wide range of anisotropy values. The results of this study indicate that the anisotropy observed in Fast Rotators is mainly due to a flattening of the velocity ellipsoid in the meridional plane ($\sigma_R\ge\sigma_z$), with clear indications that anisotropy is larger for intrinsically flatter galaxies.
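To illustrate how such a parameter is computed in practice, the following minimal sketch implements the flux-weighted estimator of \citet{2007MNRAS.379..401E} on toy kinematic maps. The function name and the mock numbers are purely illustrative and do not come from any of the surveys discussed here.

```python
import numpy as np

def lambda_r(flux, radius, velocity, sigma):
    """Specific angular momentum proxy lambda_R (Emsellem et al. 2007):
    flux-weighted ratio of ordered to total stellar motion, summed over
    the spatial bins of an integral-field kinematic map."""
    flux, radius = np.asarray(flux), np.asarray(radius)
    velocity, sigma = np.asarray(velocity), np.asarray(sigma)
    num = np.sum(flux * radius * np.abs(velocity))
    den = np.sum(flux * radius * np.sqrt(velocity**2 + sigma**2))
    return num / den

# Toy 1D maps: a rotation-dominated system (V >> sigma) versus a
# pressure-supported one (sigma >> V); units are arbitrary.
r = np.linspace(0.1, 1.0, 50)
f = np.exp(-r)  # exponential light profile
fast = lambda_r(f, r, 150.0 * r, 20.0 * np.ones_like(r))
slow = lambda_r(f, r, 5.0 * r, 200.0 * np.ones_like(r))
```

By construction $0 \le \lambda_R \le 1$: the rotation-dominated toy map yields a value close to unity, the pressure-supported one a value close to zero, mirroring the Fast/Slow Rotator division.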
Given the significant contribution of the bulge to the light in these regions, this result suggests that bulges are actually anisotropic. This is consistent with the level of intrinsic flattening observed in different kinds of bulges (see M\'endez-Abreu in this volume). In this context, the study of larger samples of bulges in late-type galaxies will be very important to fully characterize their dynamical properties \citep[e.g. CALIFA survey,][]{2014arXiv1409.7786F}.\medskip There have been very few attempts in the literature to extract a \textit{clean} measurement of the anisotropy of bulges, and those are mostly focused on the analysis of the Milky Way bulge. The difficulty of accurately decomposing the contribution of the disk to the velocity ellipsoid in the bulge-dominated areas still remains the major hurdle. The best way forward in this topic has come from the use of detailed dynamical modelling fitting the observed stellar kinematics \citep[e.g.][]{1991A&A...247..357B,1999A&A...349..369P,2005MNRAS.358..481K}. Nevertheless, the main limitation of those studies is that often the shape of the velocity ellipsoid is a property imposed in the fitting. The natural step forward is the use of orbit-based dynamical models \citep[e.g.][]{1979ApJ...232..236S} to separate the contributions of the bulge, disk, and any other components present in a galaxy and thus obtain their intrinsic properties. These models are quite demanding and require a large number of kinematic constraints. With many IFS surveys providing data for vast numbers of galaxies, it is only a matter of time before we exploit these analysis tools more routinely to study the intrinsic properties of bulges. \subsection{Scaling relations} Many of the scaling relations used to study galaxy evolution are, in essence, different manifestations of the Virial Theorem \citep{1870PM..40.122C}, which relates the kinetic energy of a galaxy to that provided by its gravitational potential.
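Schematically (a textbook-level sketch under the assumptions of virial equilibrium, a constant mass-to-light ratio, and a constant mean effective surface brightness $\langle I\rangle_e$, none of which is claimed by the studies discussed here), the argument runs as
\begin{equation}
\sigma^{2} \sim \frac{G M}{R}, \qquad L \propto M, \qquad L \propto R^{2}\,\langle I\rangle_{e} \;\Longrightarrow\; L \propto \sigma^{4},
\end{equation}
which is the canonical form of the Faber--Jackson relation discussed below; deviations from these assumptions displace galaxies, or their bulges, from the mean relation.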
The relationships between different structural parameters of galaxies (e.g. absolute magnitude, half-light radius, mean surface brightness) are discussed at length in other reviews in this volume. Here we concentrate only on those relations that involve the velocity dispersion of the galaxy ($\sigma$). \begin{figure} \centering \includegraphics[width=0.9\linewidth]{figure2.eps} \caption{Faber-Jackson relation for galaxies of different morphological types from \citet{2004ASSL..319..261K}. Bulges of late-type galaxies deviate systematically from the relation defined by ellipticals.} \label{fig:2} \end{figure} \subsubsection{Faber--Jackson relation} The Faber--Jackson relation establishes the link between the absolute magnitude of a galaxy and its central velocity dispersion \citep{1976ApJ...204..668F}. Early-type galaxies form a well-defined sequence where more luminous galaxies are also those exhibiting larger velocity dispersions. When it comes to the bulges in particular, the inclusion of bulges of lenticular galaxies hardly introduces any changes in the relation. Bulges of disk-dominated spiral galaxies, however, seem to populate different regions in this parameter space, with the largest offsets from the relation defined by the ellipticals occurring for those galaxies with the latest morphological types (see Figure~\ref{fig:2}). The observed offset implies that: (1) either the bulges of later types are brighter at a given velocity dispersion, which would suggest the presence of younger stellar populations (as they are also typically bluer) and/or (2) the dynamics of late-type bulges, at a given absolute bulge luminosity, is closer to that observed in their surrounding disks. Both cases are likely possible given that the velocity dispersion is biased towards the younger population present along the line-of-sight.
Note that, despite the potential disky origin of those late-type bulges, the observed relation is not driven by the luminosity of the disk but by that of the bulge itself \citep[e.g.][]{2007ApJ...665.1104B}.\looseness-2 \begin{figure} \centering \includegraphics[angle=270,width=0.90\linewidth]{figure3.eps} \caption{Mg$_{2}-\sigma$ relation for galactic bulges presented in \citet{2002MNRAS.335..741F}. The figure includes samples from this work as well as \citet{1992ApJ...399..462B}, \citet{1996AJ....112.1415J}, and \citet{2001A&A...366...68P}. The dashed line marks the reference relation for early-type galaxies observed by \citet{1996MNRAS.280..167J}. Bulges of later-type galaxies, e.g. with larger amounts of ionised-gas and younger stellar populations, deviate most from the reference line.} \label{fig:3} \end{figure} \subsubsection{Mg$_2-\sigma$ relation} A more direct connection with stellar populations is made in the Mg$_{2}-\sigma$ relation \citep[e.g.][]{1981MNRAS.196..381T}. In Figure~\ref{fig:3} we show the compilation made by \citet{2002MNRAS.335..741F} using their own sample together with those of \citet{1992ApJ...399..462B}, \citet{1996AJ....112.1415J}, and \citet{2001A&A...366...68P} against the reference relation defined for early-type galaxies by \citet{1996MNRAS.280..167J}. Galaxies displaying larger amounts of ionised gas (i.e. larger [O{\sc{iii}}] equivalent widths) are also the ones deviating most from the relation for early-types. This relation is usually considered as a mass--metallicity relation. This is, however, only true in the absence of young stellar populations. If present, the Mg$_{2}$ index is no longer a good metallicity indicator and it becomes quite sensitive to age \citep[e.g.][]{2010MNRAS.404.1639V}. Galaxies with large amounts of ionised-gas are also typically the ones experiencing more intense star formation and thus result in overall younger stellar populations.
It is therefore not surprising that the bulges in those galaxies are the ones deviating most from the relation described by the early-type galaxies. Similar conclusions have been reached using much larger samples \citep[e.g.][]{2002ASPC..253..321C}, although exploring the dependence on maximum rotational velocity rather than morphological type. \subsubsection{Fundamental Plane relation} The Fundamental Plane is one of the most studied scaling relations. It relates the half-light radius of galaxies to the mean surface brightness within that radius and the central velocity dispersion of the galaxy. As with many other scaling relations, early-type galaxies have been studied extensively \citep[e.g.][]{1987ApJ...313...42D,1987ApJ...313...59D, 1996MNRAS.280..167J, 1998AJ....116.1591P, 1999MNRAS.304..225M, 2003AJ....125.1866B, 2008ApJ...685..875D, 2009MNRAS.396.1171H, 2010MNRAS.408.1335L, 2012MNRAS.427..245M, 2013MNRAS.432.1709C}. In contrast, the specific location of bulges in the relation has not been explored much and has been limited to galaxies with prominent bulges.\medskip One of the first studies in this respect was carried out by \citet{1992ApJ...399..462B}. They showed that bulges of lenticular galaxies followed the relation defined by elliptical galaxies. This result was later confirmed by \citet{2002MNRAS.335..741F}, who also found that bulges of later-type galaxies (e.g. Sbc) were slightly displaced with respect to the main relation. Bulges presenting the largest offsets were those with younger stellar populations and lower velocity dispersions. These authors showed that the offsets could be removed if one considers the missing rotational support expected in these late-type bulges. As the rotational support of some bulges increases, the measured velocity dispersion is no longer a reliable tracer of their motion. In those cases rotational velocity is a much better probe of those motions.
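For reference, the relation is commonly written (a standard parametrization, quoted here for context rather than taken from the works above) as
\begin{equation}
\log R_{e} = a \log \sigma_{0} + b \,\langle \mu \rangle_{e} + c,
\end{equation}
where $R_e$ is the half-light radius, $\sigma_0$ the central velocity dispersion and $\langle\mu\rangle_e$ the mean effective surface brightness. The virial theorem with a constant mass-to-light ratio predicts $a=2$ and $b=0.4$; the observed coefficients deviate from these values (the so-called tilt of the plane), encoding systematic variations of $M/L$ along the sequence.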
For purely rotationally supported systems the Tully--Fisher relation \citep{1977A&A....54..661T} is often the one invoked. Several studies have confirmed that when the full kinetic energy is accounted for and differences in the stellar populations are considered, galaxies of all morphological types form a single relation \citep[e.g.][]{1994A&A...282L...1P, 1996A&A...309..749P, 2006MNRAS.366.1126C, 2010ApJ...717..803G, 2011MNRAS.417.1787F}, with remaining scatter typically driven by changes in their mass-to-light ratios \citep[e.g.][]{2013MNRAS.432.1709C}. \subsection{Radial behaviour} The study of the kinematic radial properties of galaxies has been one of the most prolific areas in astronomy. Initially focused mainly on bulges of early-type galaxies \citep[e.g.][]{1982ApJ...256..460K, 1997AJ....113..950F, 1998A&AS..133..317H, 1999A&AS..136..509H, 2003A&A...405..455F, 2004MNRAS.352..721E, 2010MNRAS.408..254S}, over time we quickly started to routinely explore the motions of stars in late-type systems \citep[e.g.][]{1989A&A...221..236B, 1992A&A...257...69B, 2001A&A...374..394V, 2004A&A...424..447P, 2005MNRAS.358..481K, 2008MNRAS.387.1099P, 2012ApJ...754...67F}. More recently, we have started expanding our understanding of bulges through IFS (e.g. SAURON [\citealt{2006MNRAS.367...46G}], DisKMass [\citealt{2013A&A...557A.130M}]). While at first only rotational velocity and velocity dispersion were extracted, the arrival of new parametrizations of the line-of-sight velocity distributions (e.g. Gauss-Hermite expansions, \citealt{1993ApJ...407..525V}) allowed us to identify the presence of kinematic subcomponents in galaxies (see \S\ref{sec:2.4} for a detailed discussion). Even when bulges display clear signatures of rotational support, it is very hard to distinguish between the signal of the bulge and that of the underlying disk in typical rotation curves. A much more fruitful avenue to explore is the study of the radial behaviour of the stellar velocity dispersion.
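To make the Gauss--Hermite parametrization mentioned above concrete, here is a minimal sketch of a line-of-sight velocity distribution in the convention of \citet{1993ApJ...407..525V}; the function name and all numerical values are illustrative only.

```python
import numpy as np

def losvd_gauss_hermite(v, V, sigma, h3=0.0, h4=0.0):
    """Gauss-Hermite line-of-sight velocity distribution
    (van der Marel & Franx 1993): a Gaussian of mean V and dispersion
    sigma modulated by Hermite terms; h3 traces asymmetric deviations,
    h4 symmetric (peaky or flat-topped) ones."""
    y = (v - V) / sigma
    H3 = (2.0 * np.sqrt(2.0) * y**3 - 3.0 * np.sqrt(2.0) * y) / np.sqrt(6.0)
    H4 = (4.0 * y**4 - 12.0 * y**2 + 3.0) / np.sqrt(24.0)
    gauss = np.exp(-0.5 * y**2) / (sigma * np.sqrt(2.0 * np.pi))
    return gauss * (1.0 + h3 * H3 + h4 * H4)

# A plain Gaussian profile versus one with an asymmetric wing (h3 > 0);
# velocities in km/s, toy values.
v = np.linspace(-600.0, 600.0, 2001)
pure = losvd_gauss_hermite(v, V=50.0, sigma=120.0)
skew = losvd_gauss_hermite(v, V=50.0, sigma=120.0, h3=0.1)
```

Because the Hermite terms are orthogonal to the Gaussian, both profiles integrate to approximately unity; a nonzero $h_3$ simply skews the wings, which is the signature used to detect superposed kinematic subcomponents.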
With many bulges still having a high degree of pressure support (i.e. dynamical support by random motions), it is easy to identify the contrast between the velocity dispersion of the disk and the bulge-dominated regions.\medskip \citet{1997AJ....113..950F} is one of the first studies to correlate the slope of the observed velocity dispersion profile with general properties of the host galaxies (e.g. central velocity dispersion, absolute magnitude, or Mg$_2$ and Fe line-strength indices). He analysed a sample of 18 lenticular galaxies and computed the velocity dispersion gradients along the major and minor axes of the galaxies. Compared to bright elliptical galaxies, the velocity dispersion profiles of lenticulars in his sample were much steeper. This is expected given that the profiles reached the low dispersion regimes observed in the disk-dominated regions. The contrast between the velocity dispersion in the bulges and disks of his galaxies was therefore large. The intriguing result of this study was to discover that there was no correlation between these gradients and central velocity dispersion ($\sigma_0$), absolute magnitude or gradients of metallicity-sensitive line-strength indices. The lack of correlation with central velocity dispersion was particularly surprising, as one would expect a larger contrast (i.e. steeper gradient) between the very high central dispersion galaxies and their surrounding disk. At face value, this result suggests that: (1) the sample used in this study did not cover a sufficiently large range of central velocity dispersion values, which could be true as the lowest $\sigma_0$ was above 100\,km\,s$^{-1}$, or (2) galaxies with dynamically hotter bulges (i.e. with larger $\sigma_0$) also have hotter disks.
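The gradients discussed here can be quantified in several ways; as a hypothetical mini-example (the exact definitions vary between the studies cited in this subsection), one can fit a power-law slope to the normalized dispersion profile, with strongly negative slopes flagging "steep" profiles and near-zero slopes "flat" ones:

```python
import numpy as np

def dispersion_gradient(radius, sigma):
    """Logarithmic velocity-dispersion gradient: least-squares slope of
    log10(sigma / sigma_central) versus log10(r / r_max), where the
    innermost measurement is taken as the central dispersion."""
    r = np.asarray(radius, dtype=float)
    s = np.asarray(sigma, dtype=float)
    x = np.log10(r / r.max())
    y = np.log10(s / s[0])
    slope, _ = np.polyfit(x, y, 1)  # highest-degree coefficient first
    return slope

# Toy profiles (radius in arcsec, sigma in km/s): a bulge-to-disk drop
# versus the nearly constant dispersion of a disky bulge.
r = np.array([0.1, 0.2, 0.4, 0.8, 1.6])
steep = np.array([220.0, 190.0, 150.0, 110.0, 80.0])
flat = np.array([120.0, 118.0, 121.0, 119.0, 120.0])
```

On these toy profiles the first case returns a clearly negative slope and the second a slope consistent with zero, mimicking the two families of profiles seen in the observed samples.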
At this point, with the current sample it was not possible to discern between the two scenarios.\medskip \begin{figure} \centering \includegraphics[width=\linewidth]{figure4.eps} \caption{Radial velocity dispersion profiles for a sample of 45 lenticular to spiral galaxies from \citet{2012ApJ...754...67F}. Profiles have been normalised to their central velocity dispersion and bulge radius. Profiles of classical bulges are plotted in red and pseudobulges in blue. Major-axis profiles are shown in the left and minor-axis profiles in the right columns, respectively. The thick black lines correspond to the median of the individual profiles.} \label{fig:4} \end{figure} The next natural step in this direction was to extend the sample to later-type galaxies. \citet{2003A&A...405..455F} studied the radial kinematic profiles (along the minor axis) of 19 galaxies with morphological types spanning from S0 to Sbc. The sample was carefully chosen to have intermediate inclinations and thus permit access to the bulge with minimal contamination of the disk on one side of the galaxy. Central velocity dispersions ranged from 50 to over 300\,km\,s$^{-1}$. The analysis of their sample did show remarkably different $\sigma$ radial profiles. While about half of the sample displayed very steep profiles, the remaining set showed mainly flat profiles. The lack of velocity dispersion gradient in a fair number of galaxies in the sample was yet another piece of evidence pointing to the disky nature of some galactic bulges. In relation to the properties of the host galaxy, there was a slight tendency for galaxies with flatter profiles to display higher disk central surface brightness. A trend was also found with the ellipticity of the bulge component in the sense that more flattened bulges showed shallower gradients.
Despite analysing galaxies covering a wider range of morphological types, no correlation was found with either morphological type index, bulge S\'ersic index $n$, bulge and disk scale lengths, or bulge effective surface brightness. It appears that the disky nature of bulges cannot be established on the basis of spheroid luminosity, as velocity dispersion gradients do not seem to correlate with bulge luminosity or with central velocity dispersion either.\looseness-2\medskip \citet{2012ApJ...754...67F} presents the most recent effort in the literature trying to address these issues. In this work 45 S0 to Sbc galaxies were studied with the goal of relating the kinematic information with photometric properties typical of classical and pseudobulges\footnote{Note that in this work the definition of a bulge differs from the one used in this review. While \citet{2012ApJ...754...67F} define bulges as structures with flux above the disk surface brightness profile, here they are also required to extend vertically above the disk.}. The sample contained a fair fraction of barred galaxies and displayed a wide range of central velocity dispersions (between $\sim$50 and 200\,km\,s$^{-1}$) and absolute magnitudes (from $-18$ to $-21$\,mag). The galaxies were also moderately inclined, which allowed access to the bulge region without being significantly affected by dust in the disk. Figure~\ref{fig:4} shows the radial behaviour of the velocity dispersion along the major and minor axes of the galaxies in the sample. Similarly to \citet{2003A&A...405..455F}, bulges exhibit two types of profiles: steep and flat velocity dispersion profiles. This work provides the first tentative evidence for a correlation between the slope of the velocity dispersion profile and the bulge's S\'ersic index $n$.\medskip The study of the stellar kinematics of late-type galaxies has usually been hampered by complex, often dusty, morphologies.
Furthermore, bulges in those galaxies are not particularly bright, which makes the extraction of any spectroscopic measurement (kinematic in particular) especially hard. With the advent of integral-field spectroscopy, a few studies have allowed a kinematic characterisation of bulges in galaxies from Sb to Sd types. \citet{2006MNRAS.367...46G} carried out SAURON observations of 18 spiral galaxies with good \textit{Hubble Space Telescope} photometry available. The velocity dispersion profiles of the galaxies were mostly flat or with positive gradients. Very few galaxies displayed negative gradients. When looking for correlations between these gradients and the morphological type of the galaxies, there was only a slight tendency for earlier types to display negative gradients. Positive gradients were not strongly correlated with the latest Hubble types.\medskip The study of velocity dispersion gradients will soon be expanding thanks to the large number of IFU surveys (DiskMass, \citealt{2010ApJ...716..198B}; CALIFA, \citealt{2012A&A...538A...8S}; SAMI, \citealt{2012MNRAS.421..872C}; MaNGA, \citealt{2015ApJ...798....7B}). However, it is important to remember that not all of them will allow the study of bulges in late-type galaxies due to restrictions in their spatial sampling or their spectral resolution. \subsection{Amount of substructure} \label{sec:2.4} So far in this review we have exposed the properties of different kinds of bulges, and yet this has gone only as far as showing that some bulges exhibit kinematics closer to what is observed in a disk (e.g. rotation dominated) instead of the classical idea of bulges being pressure supported. Here we will review the kinematic properties of the different structural components dominating the light in the inner regions of galaxies.\medskip Counter-rotating components are common in galaxies. 
Large, kpc-scale, kinematically decoupled components (KDCs) are typically found in bright elliptical galaxies \citep[e.g.][]{1988A&A...202L...5B, 1989ApJ...344..613F, 1997ApJ...481..710C, 1999MNRAS.306..437H, 2001ApJ...548L..33D,2014MNRAS.445L..79E}. They usually contain old stellar populations and are almost indistinguishable from the remaining body of the galaxy. Smaller decoupled components are, however, harder to identify; they are made of young stars and reside in lower luminosity early-type galaxies \citep[e.g.][]{2006MNRAS.373..906M}. Large-scale counter-rotation of disk components also seems not so rare: NGC\,4550 \citep[e.g.][]{1992ApJ...394L...9R, 1992ApJ...400L...5R}, NGC\,4138 \citep{1996AJ....112..438J}, NGC\,4473 \citep{2004cbhg.sympE...5C}. See \citet{2011MNRAS.414.2923K} for other cases detected through a \textit{kinemetry} analysis \citep{2006MNRAS.366..787K}. The detection of such extreme cases keeps increasing as new kinematic decomposition techniques are developed \citep[e.g.][]{2013A&A...549A...3C, 2013MNRAS.428.1296J, 2014A&A...570A..79P}.\medskip Counter-rotation of bulges is an odd phenomenon. There are very few cases reported in the literature of bulges rotating around a completely different axis than their surrounding disks. One of those striking cases is NGC\,4698 \citep{1999ApJ...519L.127B}, where the bulge appears to rotate perpendicular to the stellar disk. Another unusual case is that of NGC\,7331, where the bulge was reported to counter-rotate with respect to the disk (\citealt{1996ApJ...463L...9P}, but see \citealt{1999A&A...348...77B}). Numerical simulations suggest mergers of galaxies as the only viable path for the formation of such structures \citep[e.g.][]{1998ApJ...505L.109B, 1998ApJ...506...93T}.\medskip \begin{figure} \centering \includegraphics[width=\linewidth]{figure5a.eps} \includegraphics[width=\linewidth]{figure5b.eps} \caption{Stellar kinematic maps for NGC\,4274 from \citet{2006MNRAS.369..529F}. 
The arrow and its associated dash at the top of each figure mark the north and east directions, respectively. (\textit{First row}) HST unsharp-masked image of the galaxy and some basic information. (\textit{Second row}) reconstructed total intensity (in mag\,arcsec$^{-2}$ with an arbitrary zero point), stellar mean velocity $V$, and stellar velocity dispersion (in km\,s$^{-1}$). (\textit{Third row}) [O{\sc{iii}}]/H$\beta$ emission line ratio map (in logarithmic scale), and Gauss--Hermite moments $h_3$ and $h_4$ of the stellar line-of-sight velocity distribution.} \label{fig:5} \end{figure} A common feature is the presence of co-rotating components (e.g. a nuclear disk) embedded in an otherwise pressure supported spheroidal bulge. The key kinematic signature of these inner disks is a steep rise of the rotation velocity in the inner parts (i.e. faster than the expected rise of the main disk) accompanied by low velocity dispersion values. There is often also an anti-correlation between the velocity and the h$_3$ moment in the locations with the lowest velocity dispersion, which is usually an indication of multiple kinematic components. All these features are shown in Figure~\ref{fig:5} using the two-dimensional kinematic maps of NGC\,4274 from \citet{2006MNRAS.369..529F} as an example. The \textit{Hubble Space Telescope} unsharp-masked image reveals the presence of a dusty disk in the inner regions of the galaxy, which is not so obvious in the reconstructed image of the galaxy. The disk has a clear signature in the velocity map, and even more so in the velocity dispersion map, where values are much lower than those of the surrounding dynamically hot bulge. In this particular case, the very low [O{\sc{iii}}]/H$\beta$ emission line ratio suggests star formation is taking place in the inner disk. The presence of these co-rotating components does not always imply associated young stellar populations. 
The stellar population analysis carried out by \citet{2007MNRAS.379..445P} of the \citet{2006MNRAS.369..529F} sample of 24 Sa galaxies concluded that about half of the galaxies displaying low central velocity dispersion values (so-called $\sigma$-drops, \citealt{2001A&A...368...52E,2003A&A...409..469W}) have mean luminosity weighted ages above 5\,Gyr. The incidence of $\sigma$-drops in this sample was about 50\%. $\sigma$-drops are not only produced by nuclear disks, but can also be caused by nuclear dust spirals and star-forming rings \citep{2008A&A...485..695C}. The origin of these components is often related to the inflow of gas, driven by bars, towards the inner regions of galaxies \citep[e.g.][]{2005MNRAS.358.1477A}. Note, however, that minor mergers could also be responsible for the formation of inner disks and rings in spiral galaxies \citep[e.g.][]{2011A&A...533A.104E}. \section{Relating Bars and Bulges} \label{sec:3} Bars are prominent components of galaxies, produced by disk instabilities, that can pump disk material above the plane, generating central structures that also {\it bulge} over the thin disk \citep[e.g.][]{1993ApJ...409...91H}. As we discuss in this section, the kinematic properties of these bars are different from those observed in common bulges. The origin of some types of bulges (e.g. pseudobulges) appears to be tightly connected to secular evolutionary processes induced by bars \citep[see][for a theoretical view of bulge formation in the context of bars]{2005MNRAS.358.1477A}. Bars are active agents in the inflow of gas towards the inner regions of galaxies \citep[e.g.][]{1999ApJ...525..691S}. This naturally allows the formation of new structures (e.g. bulges, rings, inner disks, central mass concentrations).\medskip The vertical extent of bars is best observed in edge-on galaxies. When the long axis of the bar is perpendicular to our line-of-sight, these bars are usually called Boxy/Peanut (BP) bulges due to their peculiar shape. 
Most of the material outside the disk plane has been elevated through bar buckling episodes early in the evolution of the bar \citep[e.g.][]{2006ApJ...637..214M}. Kinematically, BP bulges produce a characteristic signature (i.e. a ``figure-of-eight'') in the Position--Velocity Diagram (PVD). This was first predicted by \citet{1995ApJ...443L..13K} (see Figure~\ref{fig:6}, top row). With the aid of analytical models, they determined the location of particles in this diagram for barred and non-barred galaxies. In their view, the gap observed in the PVD of barred galaxies is produced by a lack of available orbits near the corotation radius of the bar. This signature should be present in both the stellar and gas components of galaxies. This prediction was nicely confirmed with larger samples of galaxies \citep[e.g.][]{1999A&A...345L..47M, 1999AJ....118..126B}. In the case of \citet{1999AJ....118..126B}, they produced PVDs for a sample of 30 edge-on spiral galaxies with prominent BP bulges. Figure~\ref{fig:6}, bottom row, shows the observed PVD for NGC\,5746, which clearly displays the predicted gap.\medskip \begin{figure} \centering \includegraphics[width=0.7\linewidth]{figure6a.eps} \includegraphics[width=0.7\linewidth]{figure6b.eps} \caption{Position--Velocity diagrams (PVDs) of barred galaxies. (\textit{Top}) Model prediction for the observed line-of-sight velocity distribution as a function of radius for non-barred and barred galaxies \citep{1995ApJ...443L..13K}. (\textit{Bottom}) Observed PVD for the boxy/peanut bulge of NGC\,5746 \citep{1999AJ....118..126B}. The kinematic signature of a bar in the observations is very evident.} \label{fig:6} \end{figure} \begin{figure} \centering \includegraphics[width=0.45\linewidth]{figure7a.eps} \includegraphics[width=0.45\linewidth]{figure7b.eps} \caption{Stellar line-of-sight rotation curves and velocity dispersion profiles for two Boxy/Peanut, edge-on galaxies in the \citet{2011MNRAS.414.2163W} sample. 
NGC\,3390 shows clear signatures of cylindrical rotation, while IC\,4767 does not (i.e. the kinematics at increasing distance from the main disk shows a different behaviour). The shaded regions mark the disk dominated regions.} \label{fig:7} \end{figure} Another typical kinematic feature of BP bulges predicted by numerical simulations is cylindrical rotation \citep[e.g.][]{1988ApJ...331..124R, 1990A&A...233...82C}. The first evidence for cylindrical rotation in galaxies was revealed by \citet{1982ApJ...256..460K} for NGC\,4565 when studying the stellar kinematics of galactic bulges. Reports of cylindrical rotation in other galaxies are rather scarce in the literature: IC\,3370 \citep{1987AJ.....94...30J}, NGC\,1055 \citep{1993A&A...280...33S}, NGC\,3079 \citep{1993A&A...268..511S}, NGC\,5266 \citep{1987ApJ...313...69V}, NGC\,7332 \citep{1994AJ....107..160F}. This lack of cases is likely due to: (1) inclination effects, as cylindrical rotation is best observed in edge-on galaxies \citep[e.g.][]{2002MNRAS.330...35A}; (2) the fact that most observations with long-slit spectrographs targeted the major and/or minor axes of the galaxies, which makes it difficult to detect. The most recent work addressing this aspect of BP bulges is that of \citet{2011MNRAS.414.2163W}. This study placed long slits parallel to the major axis of five known BP bulges. The surprising result of this study is that not all BP bulges displayed cylindrical rotation. Figure~\ref{fig:7} shows the analysis for two distinct cases in their sample. While NGC\,3390 displays clear signatures of cylindrical rotation, IC\,4767 presents shallower major axis velocity profiles as we move away from the disk. This outcome requires further confirmation using larger samples of edge-on galaxies. It will also benefit from studies making use of integral-field spectrographs to map the full two-dimensional kinematics over the BP dominated region. 
A glimpse of what this kind of study can bring is presented in \citet{2004MNRAS.350...35F} for the known case of NGC\,7332.\medskip Bars are also capable of producing other distinct features in the stellar kinematics of galaxies, which are often related to resonances induced by the bar itself in the host galaxy. \citet{2005ApJ...626..159B} established, using N-body simulations, a series of kinematic diagnostics for bars of different strengths and orientations in highly-inclined galaxies (see Figure~\ref{fig:8}): (1) ``double-hump'' rotation curves, (2) velocity dispersion profiles with a plateau at moderate radii, often displaying a $\sigma$-drop in the centre, (3) a positive correlation between the velocity and the h$_3$ Gauss-Hermite moment over the length of the bar. Some of these features have been recognised observationally in several studies \citep[e.g.][]{1981ApJ...247..473P, 1983ApJ...275..529K, 1997A&AS..124...61B, 2001A&A...368...52E, 2003A&A...409..459M, 2009A&A...495..775P}. While having the most potential to unravel the presence of bars, the V--h$_3$ correlation has hardly been studied observationally \citep[e.g.][]{2004AJ....127.3192C}. These diagnostics work best for edge-on galaxies. The kinematic tracer of BP bulges in face-on systems is the h$_4$ Gauss-Hermite moment. Simulations carried out by \citet{2005ApJ...628..678D} predict that a double minimum of negative h$_4$ values around the centre of the galaxy is an excellent indicator of a BP bulge for a wide range of bar strengths and inclinations. Although the observational requirements to measure this parameter are very demanding, this feature has been nicely confirmed observationally by \citet{2008ApJ...679L..73M}. Interestingly, \citet{2014MNRAS.444L..80L} suggest that the barlenses observed in the face-on view of many disk galaxies \citep[e.g.][]{2011MNRAS.418.1452L} are effectively the thick part of the BP bulge when seen face-on. 
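The h$_3$ and h$_4$ diagnostics above refer to the Gauss-Hermite parametrisation of the line-of-sight velocity distribution (LOSVD). A minimal sketch of how these moments shape the LOSVD is given below; the Hermite-polynomial normalisation follows the standard van der Marel \& Franx convention, and the numerical values of $V$, $\sigma$, h$_3$ and h$_4$ are arbitrary illustrative choices, not measurements of any real galaxy.

```python
import numpy as np

def gauss_hermite_losvd(v, V, sigma, h3=0.0, h4=0.0):
    """LOSVD in the Gauss-Hermite parametrisation (unnormalised).

    Positive h3 skews the profile towards velocities above V; negative
    h4 makes the profile boxier than a Gaussian.
    """
    y = (v - V) / sigma
    H3 = (2.0 * np.sqrt(2.0) * y**3 - 3.0 * np.sqrt(2.0) * y) / np.sqrt(6.0)
    H4 = (4.0 * y**4 - 12.0 * y**2 + 3.0) / np.sqrt(24.0)
    return np.exp(-0.5 * y**2) * (1.0 + h3 * H3 + h4 * H4)

# Arbitrary illustrative values.
v = np.linspace(-600.0, 600.0, 1201)          # km/s
losvd = gauss_hermite_losvd(v, V=100.0, sigma=120.0, h3=0.1, h4=-0.05)
```

When h$_3$ has the opposite sign to $V$, the LOSVD develops a tail back towards the systemic velocity, which is the V--h$_3$ anti-correlation signature of an embedded co-rotating disk discussed above; along a bar the correlation has the opposite, positive sign.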
See also \citet{2014arXiv1405.6726A} for a theoretical interpretation.\medskip \begin{figure} \centering \includegraphics[width=0.98\linewidth]{figure8.eps} \caption{Stellar kinematic diagnostics for barred galaxies in N-body simulations from \citet{2005ApJ...626..159B}. (\textit{Left to right}) No-bar, weak-bar, intermediate-bar, and strong-bar case. (\textit{Top to bottom}) image, PVD, surface brightness, and kinematic parameters (velocity, velocity dispersion, h$_3$ and h$_4$ Gauss-Hermite moments) as a function of bar orientation, from end-on to side-on.} \label{fig:8} \end{figure} There are strong indications that large bulges can have an effect on the strength of a bar. Stronger bars appear in galaxies with low bulge-to-total ratios and central velocity dispersions \citep{2008Ap&SS.317..163D, 2009A&A...495..491A, 2009ApJ...692L..34L}. What is not yet well established, observationally, is the effect a bar would have on the dynamics of a pre-existing bulge. Numerical simulations by \cite{2013MNRAS.430.2039S} suggest that a pressure supported bulge would gain net rotation as a result of angular momentum exchange with the bar. Rotation of the final composite classical and BP bulge would be close to cylindrical, with small deviations in the early phases of the secular evolution. Therefore, untangling the intrinsic properties of bulges in barred galaxies is a very difficult task that will require detailed dynamical modelling of high quality observations. Numerical tools like the NMAGIC code \citep{2007MNRAS.376...71D} applied to high-quality, integral-field data \citep[e.g.][]{2013MNRAS.429.2974D} seem the way forward.\medskip The Milky Way bulge is the most vivid example of a complex system. Besides cylindrical rotation, it displays many of the other kinematic signatures of bars summarised above. The origin of the multiple substructures present at the centre of our Galaxy (possibly including other types of bulges, e.g. 
\citealt{2014ApJ...787L..19N}) cannot be solved by inspecting the kinematics alone, as angular momentum transfer is expected between them. Most of the efforts today to solve this puzzle come from relating the observed kinematics to the distinct stellar populations present in those regions. We refer the reader to Oscar Gonz\'alez and Dimitri Gadotti's review in this volume for a comprehensive summary of the properties observed in the Galactic bulge, and also to Juntai Shen's chapter for a theoretical view on the possible paths for its formation and evolution. \section{Kinematics of Bulges at High Redshift} \label{sec:4} With typical sizes of a few kiloparsecs, bulges in nearby galaxies would be very difficult to resolve spatially at intermediate to high redshifts even with the best instruments on board the \textit{Hubble Space Telescope}. In addition, the morphologies of galaxies are known to deviate from the standard Hubble sequence from redshift $\sim$1 onwards \citep[e.g.][]{2008ApJ...688...67E}, so we should probably not think of bulges at high redshift in the same way we think of them in the local Universe. Nevertheless, knowing the conditions, in terms of rotational support, of the galaxies that will eventually lead to nearby lenticular and spiral galaxies can help us understand the kind of progenitors that will host the variety of bulges we see today.\medskip In the light of the large number of pseudobulges observed in the nearby Universe, a logical question to ask is: do we see the signatures of secular evolution in bulges at high-$z$? Numerical simulations reproducing the clumpy galaxies seen at redshift $z$\,$\sim$\,1 suggest that the bulge kinematics is not very different from the values observed for pressure-supported systems, with (V/$\sigma$) values below 0.5 \citep[e.g.][]{2007ApJ...670..237B, 2008ApJ...688...67E}. This is likely due to the turbulent nature of clumps merging at the centre of galaxies \citep[e.g.][]{2012MNRAS.420.3490C}. 
Note, however, that the merging and migration of clumps towards the inner regions is an internal process, as it takes place in the disk of galaxies. The physical conditions, in terms of gas supply, for bulge formation at high redshifts are very different from the ones observed in the local Universe. Secular evolution takes place at a much faster pace at high-$z$.\medskip Integral-field observations of galaxies at increasing redshifts confirm the turbulent nature of disks, as revealed by the systematically high velocity dispersion values \citep[e.g.][]{2013ApJ...767..104N, 2014arXiv1409.6791W}. Nevertheless, galaxies show a wide range of kinematic properties: from well-behaved rotating disks, to dispersion dominated systems, and galaxies with chaotic motions \citep[e.g.][]{2008A&A...477..789Y, 2008ApJ...687...59G, 2011MNRAS.417.2601W, 2014MNRAS.439.1494B}. Recent results from the KMOS3D survey \citep{2014arXiv1409.6791W} show that most galaxies in the main star forming sequence between redshifts 1 and 2 are rotationally supported. When combined with other datasets, they measure an evolution of the ionised-gas velocity dispersion which is consistent with the observed changes in the gas fractions and specific star formation rates of galaxies as a function of redshift. This result favours an ``equilibrium'' model where the amount of turbulence of a disk is set by the balance between gas accretion and outflows.\medskip The physical conditions between redshifts 1 and 4 appear to be particularly favourable for the formation of bulges, and yet it appears that this cannot be the only channel to build the (pseudo)bulges observed in the nearby Universe. Mergers seem to be required too \citep[e.g.][]{2014arXiv1409.2622C}. 
To complicate the issue further, the analysis of the star formation histories of different types of bulges \citep[e.g.][]{2015MNRAS.446.2837S} suggests that at least 60\% of the stellar mass of those bulges formed at redshifts beyond 4 (see Figure~\ref{fig:9}). All these results together indicate that bulge formation most likely happens in a two-stage process \citep[e.g.][]{2013ApJ...763...26O}, with an initial period of rapid build-up (with possible influence of mergers) and a secondary phase (between redshifts 1 and 2) of high star formation activity that would lead to the younger pseudobulge components we see today. \begin{figure} \centering \includegraphics[width=\linewidth]{figure9.eps} \caption{Relative light (top row) and mass (bottom row) fractions of young, intermediate and old stellar populations as a function of radius present in three galactic bulges studied in \citet{2015MNRAS.446.2837S}. Uncertainties in the analysis are indicated in the top left corner. Shaded regions mark the regions where the average light and mass fractions of this study are computed. More than 60\% of the stellar mass in those bulges was already in place beyond $z\sim4$.} \label{fig:9} \end{figure} \section{Concluding Remarks \& Future Prospects} \label{sec:5} Lying at the centres and densest regions of galaxies, bulges are a keystone in our understanding of galaxy formation and evolution. It is also their location, shared with other components of galaxies, that makes them so difficult to study. In this review I have tried to provide an overview of the main kinematic features observed in extragalactic bulges.\medskip Identifying the formation scenario for bulges based solely on kinematic grounds is a very difficult task. The orbits of the different structural components in galaxies (e.g. bulges, disks, bars, spiral arms, nuclear disks, rings, etc.) are not necessarily well separated in phase-space. The best example of this complexity comes from the observations of the Milky Way bulge. 
As nicely illustrated in other contributions to this volume (e.g. Gonz\'alez \& Gadotti, or S\'anchez-Bl\'azquez), the combined study of kinematics and stellar populations provides one of the best ways to discern between different formation scenarios. While this coupling can be achieved relatively easily in the Milky Way (because it is possible to measure the properties of individual stars), it is no easy task in bulges of other galaxies, where all we get is the integrated light along the line-of-sight. Fortunately, with better data, models, and numerical tools we are on the verge of being able to treat other galaxies in the same way we study our own Galaxy. Studies of the coupling between kinematics and stellar populations in external galaxies are now flourishing \citep[e.g.][]{2008AN....329..980O}. Initially restricted to galaxies with known distinct counter-rotating components, they are now exploring more regular galaxies \citep[e.g.][]{2014MNRAS.441..333J}.\medskip As remarked many times throughout this review, this new step in the 3D decomposition of galaxies can only be achieved with datasets that allow the uniform exploration of galaxies in the two dimensions they project on the sky. The first generation of IFU surveys and instruments (e.g. SAURON, ATLAS3D, DiskMass, SINFONI, VIMOS, PPaK) showed us the potential of these datasets to reveal the intrinsic properties of galaxies. The currently ongoing IFU surveys (e.g. CALIFA, SAMI, MaNGA, KMOS3D) will allow the exploitation of these new techniques for very large, morphologically and mass unbiased samples of galaxies. We should not forget, though, that we can still learn a lot about the physical processes governing galaxies, and bulge formation and evolution in particular, with unique instruments like MUSE. The Milky Way is a unique case, as we will be able to probe the 3D nature of the Galaxy directly thanks to the Gaia space mission.\looseness-2 \begin{acknowledgement} J.~F-B would like to thank D. Gadotti, E. 
Laurikainen and R.F. Peletier for their invitation to take part in this volume and for their infinite patience waiting for this review. J.~F-B acknowledges support from grant AYA2013-48226-C3-1-P from the Spanish Ministry of Economy and Competitiveness (MINECO), as well as from the FP7 Marie Curie Actions of the European Commission, via the Initial Training Network DAGAL under REA grant agreement number 289313. \end{acknowledgement} \newpage \bibliographystyle{mn2e}
\section{Introduction} In this paper, we investigate the large deviation behavior of point processes and partial sums of stationary \emph{symmetric $\alpha$-stable} ($S\alpha S$) random fields with $\alpha\in(0,2)$. A random field $\mathbf{X}:=\{X_t\}_{t \in \mathbb{Z}^d}$ is called a \textit{stationary symmetric $\alpha$-stable} discrete-parameter random field if for all $k \geq 1$, for all $s, t_1, t_2,\ldots, t_k \in \mathbb{Z}^d$, and for all $c_1, c_2, \ldots, c_k \in \mathbb{R}$, $ \sum_{i=1}^k c_i X_{t_i+s} $ follows an $S \alpha S$ distribution that does not depend on $s$. See, for example, \cite{samorodnitsky:taqqu:1994} for detailed descriptions of $S \alpha S$ distributions and processes. The study of rare events and large deviations for heavy-tailed distributions and processes has been of considerable importance starting from the classical works of \cite{heyde:1967a, heyde:1967b, heyde:1968}, \cite{nagaev:1969b,nagaev:1969a}, \cite{nagaev:1979}; see also the technical report of \cite{cline:hsing:1991}. Some of the more recent works in this area include \cite{mikosch:samorodnitsky:2000a}, \cite{rachev:samorodnitsky:2001}, \cite{hult:lindskog:mikosch:samorodnitsky:2005}, \cite{denisov:dieker:shneer:2008}, \cite{hult:samorodnitsky:2010}, etc. When studying the probability of rare events, it is usually important not only to determine the size and the frequency of clusters of extreme values but also to capture the intricate structure of the clusters. For this reason, \linebreak \cite{hult:samorodnitsky:2010} developed a theory to study large deviation behavior at the level of point processes in order to get a better grasp of how rare events occur. Their work relies on a notion of convergence of measures introduced in \cite{hult:lindskog:2006}. See also the recent works of \cite{das:mitra:resnick:2013} and \cite{Lindskog:Resnick:Roy}, which extended this convergence to more general situations. 
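As a concrete toy illustration of this definition (a sketch, assuming \texttt{scipy} is available; the filter coefficients below are arbitrary), a finite moving average driven by iid $S\alpha S$ noise is a stationary $S\alpha S$ random field for $d=1$, since linear combinations of jointly $S\alpha S$ variables are again $S\alpha S$:

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(0)
alpha, n = 1.5, 500
c = np.array([0.5, 1.0, 0.5, 0.25, 0.1])   # arbitrary filter coefficients

# iid SalphaS noise: levy_stable with skewness beta = 0 is symmetric
# alpha-stable.
Z = levy_stable.rvs(alpha, 0.0, size=n + len(c) - 1, random_state=rng)

# X_t = sum_j c_j Z_{t-j}: a stationary SalphaS moving average.
X = np.convolve(Z, c, mode="valid")
```

Shifting the time index only shifts which noise variables enter the sum, so the finite-dimensional distributions do not depend on $s$, as required.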
Inspired by the works of \cite{davis:resnick:1985} and \cite{davis:hsing:1995}, \cite{resnick:samorodnitsky:2004} studied the asymptotic behavior of a point process sequence induced by a stationary symmetric stable process. This work was extended to stable random fields by \cite{roy:2010a}. In the present work, we take a slightly stronger version of the point process sequence considered in \cite{roy:2010a} and use the framework introduced by \cite{hult:samorodnitsky:2010} to investigate the corresponding large deviation behaviour. We observe that this point process large deviation principle depends on the ergodic theoretic and group theoretic properties of the underlying nonsingular $\bbz^d$-action through the works of \cite{rosinski:1995, rosinski:2000} and \cite{roy:samorodnitsky:2008}. Just as in \cite{samorodnitsky:2004a, samorodnitsky:2004b} (see also \cite{roy:2010b}), we notice a phase transition that can be regarded as a passage from shorter to longer memory. The paper is organized as follows. In Section~\ref{Section:Preliminaries}, we present background on the ergodic theory of nonsingular group actions and integral representations of $S\alpha S$ random fields, and describe a special type of convergence of measures. The large deviation behavior of the associated point processes is considered separately for stationary $S\alpha S$ random fields generated by dissipative group actions (reflecting shorter memory) in Section~\ref{section:dissipative}, and by conservative group actions (reflecting longer memory) in Section~\ref{section:conservative}. Finally, in Section~\ref{section:classical:large deviation}, we obtain the large deviation principle for the partial sum sequence of a stationary $S\alpha S$ random field using the continuous mapping theorem. We now introduce some notation that will be used throughout this paper. 
For two sequences of real numbers $\{a_n\}_{n\in\mathbb{N}}$ and $\{b_n\}_{n\in\mathbb{N}}$, the notation $a_n \sim b_n$ means $a_n/b_n \to 1$ as $n \to \infty$. For $u, v \in \mathbb{Z}^d$, $u = (u_{1}, u_{2}, \ldots, u_{d}) \leq v =(v_{1}, v_{2}, \ldots, v_{d})$ means $u_{i} \leq v_{i}$ for all $i=1,2,\ldots,d$; $[u, v]$ is the set \linebreak $\{t \in \mathbb{Z}^d: u \leq t \leq v\}$; $\|u\|_\infty:=\max_{1 \leq i \leq d} \,|u_{i}|$ and $\mathbf{0}_d=(0,0,\ldots,0)$, \linebreak $\mathbf{1}_d=(1,1,\ldots,1)$ are elements of $\mathbb{Z}^d$. For $x\in\bbr$ we define $x^+:=\max(x,0)$ and $x^-:=\max(-x,0)$. Weak convergence is denoted by $\weak$. For a standard Borel space $(S,\mathcal{S})$ with $\sigma$-finite measure $\mu$ we define the space $L^{\alpha}(S,\mu):=\left\{f:S\to\mathbb{R} \mbox{ measurable}: \|f\|_\alpha <\infty \right\}$ with $\|f\|_\alpha:=\left(\int_S|f(s)|^{\alpha}\,\mu(ds)\right)^{1/\alpha}$. For two random variables $Y$, $Z$ (not necessarily defined on the same probability space), we write $Y\stackrel{\text{d}}{=}Z$ if $Y$ and $Z$ are identically distributed. For two random fields $\{Y_t\}_{t \in \mathbb{Z}^d}$ and $\{Z_t\}_{t \in \mathbb{Z}^d}$, the notation $Y_t\stackrel{\text{d}}{=}Z_t$, $t \in \mathbb{Z}^d$ means that they have the same finite-dimensional distributions. \begin{comment} For stationary $S\alpha S$ moving average processes, including, in particular, an independent and identically distributed sequence, we consider the following type of point processes \begin{eqnarray} \label{intro:point} N_n=\sum_{\|t\|_{\infty}\leq n}\delta_{(n^{-1}t,\gamma_n^{-1}(X_{t-w})_{w\in[-q\mathbf{1}_d,q\mathbf{1}_d]})}, \quad n\in\mathbb{N}, \end{eqnarray} for some $q\in\mathbb{N}_0$, where $\{\gamma_n\}$ is a sequence of positive constants tending to $\infty$ ($\delta_x$ denotes the Dirac measure with point mass at $x$, $\|t\|_{\infty}$ for $t\in\mathbb{Z}^d$ is the infinity norm, and $\mathbf{1}_d=(1,\ldots,1)\in\mathbb{Z}^d$). 
Since we investigate not only $X_t$ in the second component but the whole vector $(X_{t-w})_{w\in[-q\mathbf{1}_d,q\mathbf{1}_d]}$, we preserve the information of the process in a neighborhood of $t$ of radius $q$. In this way we are able to capture the fine structure of the clusters and, in particular, the order of the extremes. As $q$ increases, the information we retain becomes finer. Under the assumption $n^{d/\alpha}\gamma_n^{-1}\to 0$ as $n\to\infty$ we are in the large deviation setting, because $n^d\mathbb{P}(|X_t|>\gamma_n)\to 0$ and the point process $N_n$ converges weakly to the null measure as $n\to\infty$. The event $\{N_n\in A\}$ for some properly chosen set $A$ is a rare event. As in classical large deviation theory, a proper normalization of the probability measure of the point process is necessary to obtain a convergence result. We will show that as $n\to\infty$, \begin{eqnarray} \label{intro:point_process} \frac{\gamma_n^\alpha}{n^d}\mathbb{P}\left(N_n\in\cdot\right) \end{eqnarray} converges in an appropriate sense on the space of point measures, and we compute the limit measure. The limit measure is a Borel measure induced by a cluster Poisson process. The type of convergence used in \eqref{intro:point_process} goes back to \cite{hult:lindskog:2006} and \cite{hult:samorodnitsky:2010}, and is therefore called HLS (Hult--Lindskog--Samorodnitsky) convergence; it is described in detail in Section~\ref{Section:Preliminaries}. The scaling $\gamma_n^\alpha n^{-d}$ in \eqref{intro:point_process} is determined by the heaviness of the stable distribution given by $\alpha$; the dependence structure has no influence. Naturally, more general classes than stationary $S\alpha S$ moving average processes are of interest. In the general context the convergence in \eqref{intro:point_process} still holds if the $S\alpha S$ random field is weakly dependent (generated by a dissipative group action); these are mixed moving average random fields. 
The paper also pays attention to the large deviation point process behavior of $S\alpha S$ random fields under a special kind of strong dependence (generated by a conservative group action). In the strongly dependent case the limit behavior is much more intricate. We will see that, on the one hand, the scaling in \eqref{intro:point_process} will be determined by both the heaviness of the tails of the random field and the long range dependence. On the other hand, $N_n$ may have to be scaled as well, because, owing to the strong clustering of extremal events induced by the strong dependence structure, the sequence of point measures may no longer be tight. In both the weakly and the strongly dependent cases investigated in this paper, the scalings in the large deviation point process behavior are linked to the growth rate of the maxima and can be summarized as follows. Let $\{a_n\}$ be a sequence of positive constants such that $$\max_{\|t\|_{\infty}\leq n} X_t=O_P(a_n^{1/\alpha})\quad \mbox{ as } n\to\infty$$ and let $\{\gamma_n\}$ satisfy $\gamma_n^{-1}a_n^{1/\alpha}\to 0$; then \begin{eqnarray*} \frac{\gamma_n^\alpha}{a_n}\mathbb{P}\left((a_nn^{-d})\cdot N_n\in\cdot\right) \end{eqnarray*} converges in the HLS sense on the space of point measures as $n\to\infty$. In the weakly dependent case we can take $a_n=n^d$, so that we recover \eqref{intro:point_process}; in the strongly dependent case $a_n=o(n^d)$ and $a_nn^{-d}=o(1)$ as $n\to\infty$ by the strong clustering of extremes. The behavior of $\max_{\|t\|_{\infty}\leq n} X_t$, and hence $\{a_n\}$, depends on the effective dimension of the stationary $S\alpha S$ random field, which can be computed using group theory and serves as a measure of the dependence; for more details see \cite{samorodnitsky:2004a} and \cite{roy:samorodnitsky:2008}. The stronger clustering in the strongly dependent case can also be seen in the limit measure. 
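To illustrate the normalization in the simplest situation (a heuristic sketch for the iid case, using the standard tail estimate $\mathbb{P}(|X_{\mathbf{0}_d}|>x)\sim C_\alpha\sigma^\alpha x^{-\alpha}$ for an $S\alpha S$ random variable with scale parameter $\sigma$), one may take $a_n=n^d$, since \begin{eqnarray*} \mathbb{P}\left(\max_{\|t\|_{\infty}\leq n}|X_t|\leq n^{d/\alpha}x\right) =\left(1-\mathbb{P}(|X_{\mathbf{0}_d}|>n^{d/\alpha}x)\right)^{(2n+1)^d} \longrightarrow \exp\left(-2^dC_\alpha\sigma^\alpha x^{-\alpha}\right), \quad x>0, \end{eqnarray*} because $(2n+1)^d\,\mathbb{P}(|X_{\mathbf{0}_d}|>n^{d/\alpha}x)\to 2^dC_\alpha\sigma^\alpha x^{-\alpha}$. 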
The large deviation point process results presented in this paper support the classical point process results for stationary $S\alpha S$ random fields as well: if we replace the constant $\gamma_n$ in the definition \eqref{intro:point} of $N_n$ by $a_n^{1/\alpha}$, so that \begin{eqnarray} \kappa_n:=\sum_{\|t\|_{\infty}\leq n}\delta_{(n^{-1}t,a_n^{-1/\alpha}(X_{t-w})_{w\in[-q\mathbf{1}_d,q\mathbf{1}_d]})}, \quad n\in\mathbb{N}, \end{eqnarray} then the process $(a_nn^{-d})\cdot\kappa_n$ converges weakly. This point process result was derived in \cite{resnick:samorodnitsky:2004} for $d=1$ and in \cite{roy:2010a} for any stationary $S\alpha S$ random field. Since stable distributions have an infinite second moment, it is not possible to take the correlation function as a measure of weak and strong dependence. \cite{samorodnitsky:2004a,mansfield:rachev:samorodnitsky:2001,rachev:samorodnitsky:2001,mikosch:samorodnitsky:2000a} and others had the idea to look at particular important functionals of stochastic processes and to try to find a phase transition which reflects the boundary between weak and strong dependence. We see this phase transition in the large deviation behavior of point processes of stationary $S\alpha S$ random fields as well. For $S\alpha S$ random fields the phase transition can be identified by ergodic-theoretical properties of the nonsingular flows underlying the field, confirming the phase transition discovered by the above authors for functionals such as maxima, long strange segments and ruin probabilities. A further goal of this paper is to understand the large deviation behavior of partial sums, which is strongly connected to the asymptotic behavior of partial sums in the setting of stable processes. There exist only a few papers considering the large deviation behavior of partial sums for heavy tailed distributions that go beyond the iid assumption; see \cite{davis:Hsing:1995,hult:samorodnitsky:2010}. 
We will prove that, with the notation above and $S_n=\sum_{\|t\|_{\infty}\leq n}X_t$, under some additional technical assumptions \begin{eqnarray} \label{intro:large:deviation:sums} \frac{\gamma_n^{\alpha}}{a_n}\mathbb{P}((a_nn^{-d})\cdot\gamma_n^{-1}\cdot S_n\in\cdot ) \end{eqnarray} converges in the HLS sense as $n\to\infty$. The statement holds for a general subclass of weakly dependent stationary $S\alpha S$ random fields whose local dependence is weak as well. However, this result does not hold for all $S\alpha S$ mixed moving average random fields. When the local dependence is strong (e.g. for fractional stable noise) the probability measure has to be scaled in a different way. More interesting is the case of strong dependence. Again, a general rule of thumb for the large deviation behavior of partial sums does not exist. Thus, we restrict our attention to a subclass of strongly dependent stationary $S\alpha S$ random fields which have a special kind of dependence structure. An interesting conclusion of the paper is that the large deviation behavior of point processes and the large deviation behavior of partial sums for stationary $S\alpha S$ random fields may differ. Whereas the large deviation behavior of point processes is influenced only by weak/strong dependence, characterized by the ergodic-theoretical properties of the underlying nonsingular group action, and not by local dependence, local dependence does influence the large deviation behavior of partial sums. In the case of local dependence the scaling $\gamma_n^{\alpha}/a_n$ of the probability measure in \eqref{intro:large:deviation:sums} will change. The same phenomenon occurs in the comparison of the asymptotic behavior of the point processes $\kappa_n$ with the asymptotic behavior of the partial sums $S_n$. Only in the absence of local dependence can we conclude from the weak convergence of $(a_nn^{-d})\cdot\kappa_n$ the weak convergence of $(a_nn^{-d}) \cdot a_n^{-1/\alpha} \cdot S_n$. 
\end{comment} \section{Preliminaries} \label{Section:Preliminaries} In this section, we present the mathematical background on (a) nonsingular group actions, (b) stationary symmetric $\alpha$-stable random fields and (c) Hult-Lindskog-Samorodnitsky (HLS) convergence. The connection between the first two topics will become clear in this section, and the third one will be useful throughout the paper. \subsection{Nonsingular group actions} Suppose $(G, +)$ is a countable Abelian group with identity element $\textbf{\e}$ and $(S,\mathcal{S},\mu)$ is a $\sigma$-finite standard Borel space. A collection $\{\phi_t\}_{t \in G}$ of measurable maps of $S$ into itself is called a \emph{nonsingular $G$-action} if $\phi_\textbf{\e}$ is the identity map on $S$, $\phi_{t_1+t_2}=\phi_{t_1} \circ \phi_{t_2}$ for all $t_1, t_2 \in G$ and each $\mu \circ \phi_t^{-1}$ is equivalent to $\mu$; see \cite{aaronson:1997}, \cite{krengel:1985} and \cite{zimmer:1984}. Nonsingular actions are also known as {\em quasi-invariant actions} in the literature (see \cite{varadarajan:1970}). A collection of measurable $\pm 1$-valued maps $\{c_t\}_{t\in G}$ defined on $S$ is called a (measurable) \emph{cocycle} for $\{\phi_t\}_{t \in G}$ if for all $t_1,t_2 \in G$, $c_{t_1+t_2}(s)=c_{t_2}(s) c_{t_1}\big(\phi_{t_2}(s)\big)$ for all $s \in S$. A measurable set $W \subseteq S$ is called a \emph{wandering set} for the nonsingular $G$-action $\{\phi_t\}_{t \in G}$ if $\{\phi_t(W):\;t\in G\}$ is a pairwise disjoint collection. The set $S$ can be decomposed into two disjoint and invariant parts as follows: $S=\mC \cup \mD$, where $\mathcal{D} = \bigcup_{t \in G} \phi_t(W^\ast)$ for some wandering set $W^\ast \subseteq S$, and $\mathcal{C}$ has no wandering subset of positive $\mu$-measure; see \cite{aaronson:1997} and \cite{krengel:1985}. 
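To illustrate these notions, consider the following standard example: let $G=\mathbb{Z}^d$ act on $S=\mathbb{Z}^d$, equipped with the counting measure $\mu$, by translations,
\begin{equation*}
\phi_t(s)=s+t, \;\; t,s \in \mathbb{Z}^d.
\end{equation*}
Each $\phi_t$ preserves $\mu$, so the action is nonsingular, and $W^\ast=\{\mathbf{0}\}$ is a wandering set with $\bigcup_{t \in \mathbb{Z}^d} \phi_t(W^\ast)=S$; hence $S=\mathcal{D}$ in this example. By contrast, for the identity action $\phi_t=\mathrm{id}_S$ on a finite measure space there is no wandering set of positive $\mu$-measure, so that $S=\mathcal{C}$.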
This decomposition is called the {\em Hopf decomposition}, and the sets $\mathcal{C}$ and $\mathcal{D}$ are called the {\em conservative} and {\em dissipative} parts (of $\{\phi_t\}_{t \in G}$), respectively. The action is called conservative if $S=\mathcal{C}$ and dissipative if $S=\mathcal{D}$. \subsection{Stationary symmetric stable random fields} Every stationary $\SaS$ random field $\mathbf{X}=\{X_t\}_{t \in \mathbb{Z}^d}$ admits an integral representation of the form \begin{eqnarray} X_t\eqdef\int_S c_t(s){\left(\frac{d \mu \circ \phi_t}{d \mu}(s)\right)}^{1/\alpha}f \circ \phi_t(s) M(ds),\;\; t \in \mathbb{Z}^d\,, \label{repn_integral_stationary} \end{eqnarray} where $M$ is an $S \alpha S$ random measure on some standard Borel space $(S,\mathcal{S})$ with $\sigma$-finite control measure $\mu$, $f \in L^{\alpha}(S,\mu)$, $\{\phi_t\}_{t \in \mathbb{Z}^d}$ is a nonsingular $\mathbb{Z}^d$-action on $(S, \mathcal{S},\mu)$, and $\{c_t\}_{t \in \mathbb{Z}^d}$ is a measurable cocycle for $\{\phi_t\}$; see \cite{rosinski:1995, rosinski:2000}. We say that a stationary $S\alpha S$ random field $\{X_t\}_{t\in \mathbb{Z}^d}$ is generated by a nonsingular $\mathbb{Z}^d$-action $\{\phi_t\}$ on $(S,\mathcal{S}, \mu)$ if it has an integral representation of the form \eqref{repn_integral_stationary} satisfying the full support condition $ \bigcup_{t \in \mathbb{Z}^d} \operatorname{support}( f \circ \phi_t)=S, $ which can be assumed without loss of generality. The Hopf decomposition of $\{\phi_t\}_{t \in \Zd}$ induces the following unique (in law) decomposition of the random field $\bX$, \begin{equation*} X_t \eqdef \int_{\mC} f_t(s)M(ds)+\int_{\mD} f_t(s)M(ds)=:X^{\mC}_t+X^{\mD}_t,\;\; t \in \mathbb{Z}^d, \label{decomp_of_X_t} \end{equation*} into a sum of two independent random fields $\bX^\mathcal{C}$ and $\bX^\mathcal{D}$ generated by a conservative and a dissipative $\Zd$-action, respectively; see \cite{rosinski:1995, rosinski:2000}, and \cite{roy:samorodnitsky:2008}. 
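A simple instance of the representation \eqref{repn_integral_stationary}, recorded here for illustration, is the moving average case: take $S=\mathbb{Z}^d$ with the counting measure $\mu$, the measure preserving (hence nonsingular) translation action $\phi_t(s)=s-t$ and the trivial cocycle $c_t \equiv 1$. Then the Radon-Nikodym factor in \eqref{repn_integral_stationary} equals $1$ and
\begin{equation*}
X_t = \int_{\mathbb{Z}^d} f(s-t)\, M(ds), \;\; t \in \mathbb{Z}^d,
\end{equation*}
is a stationary $S\alpha S$ moving average. As the translation action is dissipative, in this case $\bX=\bX^{\mD}$ and $X^{\mC}_t \equiv 0$.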
This decomposition reduces the study of stationary $S \alpha S$ random fields to that of the ones generated by conservative and dissipative actions. It was argued in \cite{samorodnitsky:2004a} (see also \cite{roy:samorodnitsky:2008}) that stationary $S\alpha S$ random fields generated by conservative actions have longer memory than those generated by dissipative actions, and therefore the following dichotomy can be observed: \begin{equation*} n^{-d/\alpha} \max_{\|t\|_\infty \leq n}|X_t| \Rightarrow \left\{ \begin{array}{ll} c_\bX \xi_\alpha & \mbox{ if $\bX$ is generated by a dissipative action,} \\ 0 & \mbox{ if $\bX$ is generated by a conservative action} \end{array} \right. \end{equation*} as $n \rightarrow \infty$. Here $\xi_\alpha$ is a standard Fr\'{e}chet type extreme value random variable with distribution function \begin{equation} \bbP(\xi_\alpha \leq x)=\e^{-x^{-\alpha}},\;\,x > 0, \label{cdf_of_Z_alpha} \end{equation} and $c_\bX$ is a positive constant depending on the random field $\bX$. In the present work, we observe a similar phase transition in the large deviation principles of the point processes, partial sums, order statistics, etc.~as we pass from dissipative to conservative $\mathbb{Z}^d$-actions in the integral representation \eqref{repn_integral_stationary}. \subsection{The Hult-Lindskog-Samorodnitsky convergence} \label{subsec:HLS_conv} Fix a nonnegative integer $q$. Let $\mathbb{M}^q$ be the space of all Radon measures on $$ \mathbb{E}^q:= [-1,1]^d \times \big([-\infty, \infty]^{[-q\mathbf{1}_d, q\mathbf{1}_d]} \setminus\{0\}^{[-q\mathbf{1}_d, q\mathbf{1}_d]}\big) $$ equipped with the vague topology. Note that $\mathbb{E}^q$ is a locally compact, complete and separable metric space. 
Therefore, $C^+_K(\mathbb{E}^q)$, the space of all non-negative real-valued continuous functions defined on $\mathbb{E}^q$ with compact support, admits a countable dense subset consisting only of Lipschitz functions; see \cite{kallenberg:1983} and \cite{resnick:1987}. Using this countable dense subset, $\mathbb{M}^q$ can be identified with a closed subspace of $\left[0,\infty\right)^\infty$ in parallel to \cite{hult:samorodnitsky:2010}, p.~36. In particular, it transpires that $\mathbb{M}^q$ is also a complete and separable metric space under the vague metric (see \cite{resnick:1987}, Proposition~3.17). Let $\mathbf{M}_0(\mathbb{M}^q)$ denote the space of all Borel measures $\rho$ on $\mathbb{M}^q$ satisfying $\rho(\mathbb{M}^q \setminus B(\O,\varepsilon))<\infty$ for all $\varepsilon>0$ (here $B(\O,\varepsilon)$ is the open ball of radius $\varepsilon$ around the null measure $\O$ in the vague metric). Define the \emph{Hult-Lindskog-Samorodnitsky} (HLS) convergence $\rho_n \to \rho$ in $\mathbf{M}_0(\mathbb{M}^q)$ by $\rho_n(f) \to \rho(f)$ for all $f \in C_{b,0}(\mathbb{M}^q)$, the space of all bounded continuous functions on $\mathbb{M}^q$ that vanish in a neighbourhood of $\O$; see Theorem~2.1 in \cite{hult:lindskog:2006} and Theorem~2.1 in \cite{Lindskog:Resnick:Roy}. This setup is the same as in \cite{hult:samorodnitsky:2010} except that the space $\mathbb{M}^q$ includes all Radon measures on $\mathbb{E}^q$, not just the Radon point measures. Observe that the space $\mathbb{M}_p^q$ of Radon point measures on $\mathbb{E}^q$ is a closed subset of $\mathbb{M}^q$ (see \cite{resnick:1987}, Proposition~3.14) and hence a complete and separable metric space under the vague metric (see \cite{resnick:1987}, Proposition~3.17). The space $\mathbf{M}_0(\mathbb{M}_p^q)$ (and the HLS convergence therein) can be defined in the exact same fashion; see \cite{hult:samorodnitsky:2010}, p.~36. 
In fact, $\mathbf{M}_0(\mathbb{M}_p^q)$ can be viewed as a subset of $\mathbf{M}_0(\mathbb{M}^q)$ using the following natural identification: $\rho \in \mathbf{M}_0(\mathbb{M}_p^q)$ is identified with its extension to $\mathbb{M}^q$ that puts zero measure on $\mathbb{M}^q \setminus \mathbb{M}_p^q$. For all $g_1,\,g_2 \in C^+_K(\mathbb{E}^q)$ and for all $\epsilon_1,\,\epsilon_2>0$, define a function \linebreak $F_{g_1,g_2,\epsilon_1,\epsilon_2}: \mathbb{M}^q\rightarrow \left[0,\infty\right)$ by \begin{eqnarray} \label{eq:F} F_{g_1,g_2,\epsilon_1,\epsilon_2}(\xi):=\left(1-\e^{-(\xi(g_1)-\epsilon_1)_{+}}\right)\left(1-\e^{-(\xi(g_2)-\epsilon_2)_{+}}\right), \;\;\xi\in\mathbb{M}^q. \end{eqnarray} Define, for any $\rho \in \mathbf{M}_0(\mathbb{M}^q)$, for all $g_1,\,g_2 \in C^+_K(\mathbb{E}^q)$ and for all $\epsilon_1,\,\epsilon_2>0$, \begin{eqnarray*} \rho(F_{g_1,g_2,\epsilon_1,\epsilon_2}):=\int_{\mathbb{M}^q}F_{g_1,g_2,\epsilon_1,\epsilon_2}(\xi) d\rho(\xi). \end{eqnarray*} Following verbatim the arguments in the appendix of \cite{hult:samorodnitsky:2010} (more specifically, Theorem A.2), the following result can be established. \begin{propn} \label{propn:suff:condn:HLS:conv} Let $\rho,\rho_1,\rho_2,\ldots$ be in $\mathbf{M}_0(\mathbb{M}^q)$ and suppose that $$\rho_n(F_{g_1,g_2,\epsilon_1,\epsilon_2}) \to \rho(F_{g_1,g_2,\epsilon_1,\epsilon_2}) \quad \mbox{ as }n \to \infty$$ for all Lipschitz $g_1,\,g_2 \in C^+_K(\mathbb{E}^q)$ and for all $\epsilon_1,\,\epsilon_2>0$. Then the HLS convergence $\rho_n \to \rho$ holds in $\mathbf{M}_0(\mathbb{M}^q)$. \end{propn} \section{The dissipative case} \label{section:dissipative} Suppose $\bX:=\{X_t\}_{t \in \mathbb{Z}^d}$ is a stationary $S \alpha S$ random field generated by a dissipative group action. 
In this case, it has been established by \cite{rosinski:1995, rosinski:2000} and \cite{roy:samorodnitsky:2008} that $\bX$ is a stationary {\em mixed moving average random field} (in the sense of \cite{surgailis:rosinski:mandrekar:cambanis:1993}). This means that $\bX$ has the integral representation \begin{eqnarray} X_t \eqdef \int_{W \times {\mathbb{Z}^d}}f(v,u-t)\,M(dv,du),\;\;\;t \in {\mathbb{Z}^d}\,, \label{repn_mixed_moving_avg} \end{eqnarray} where $f \in L^{\alpha}(W \times {\mathbb{Z}^d}, \nu \otimes \zeta)$, $\nu$ is a $\sigma$-finite measure on a standard Borel space $(W, \mathcal{W})$, $\zeta$ is the counting measure on $\mathbb{Z}^d$, and $M$ is a $\SaS$ random measure on $W \times {\mathbb{Z}^d}$ with control measure $\nu\otimes \zeta$ (cf. \cite{samorodnitsky:taqqu:1994}). Suppose $\nu_\alpha$ is the symmetric measure on $[-\infty,\infty] \setminus \{0\}$ given by \begin{equation} \nu_\alpha\left(x,\infty\right]=\nu_\alpha\left[-\infty,-x\right)=x^{-\alpha},\;\;x>0\,. \label{defn:nu_alpha} \end{equation} Let \begin{equation} \sum_{i=1}^\infty \delta_{(j_i,v_i,u_i)} \sim \PRM(\nu_\alpha \otimes \nu \otimes \zeta) \label{PRM:underlying} \end{equation} be a Poisson random measure on $([-\infty,\infty] \setminus \{0\}) \times W \times \mathbb{Z}^d$ with mean measure $\nu_\alpha \otimes \nu \otimes \zeta$. Then from \eqref{repn_mixed_moving_avg}, it follows that $\mathbf{X}$ has the following series representation: $ X_t \eqdef {C_\alpha}^{1/\alpha} \sum_{i=1}^\infty j_i f(v_i,u_i-t),\;\;t \in \mathbb{Z}^d $, where $C_\alpha$ is the stable tail constant given by \begin{equation} C_\alpha = {\left(\int_0^\infty x^{-\alpha} \sin{x}\,dx \right)}^{-1} =\left\{ \begin{array}{ll} \frac{1-\alpha}{\Gamma(2-\alpha) \cos{(\pi \alpha/2)}},&\mbox{\textit{\small{if }}}\alpha \neq 1,\\ \frac{2}{\pi}, &\mbox{\textit{\small{if }}}\alpha = 1. \end{array} \right. 
\label{defn:C_alpha} \end{equation} For simplicity of notation, we shall drop the factor ${C_\alpha}^{1/\alpha}$ and redefine $X_t$ as \begin{equation} X_t :=\sum_{i=1}^\infty j_i f(v_i,u_i-t),\;\;t \in \mathbb{Z}^d\,. \label{repn_Possion_integral_X_n} \end{equation} By mimicking the arguments given in \cite{resnick:samorodnitsky:2004}, it was established in Theorem~3.1 of \cite{roy:2010a} that the weak convergence \begin{eqnarray*} \label{Roy:point process} \sum_{\|t\|_\infty \leq n} \delta_{(2n)^{-d/\alpha} X_t} \Rightarrow \sum_{i=1}^\infty \sum_{u \in \mathbb{Z}^d} \delta_{j_i f(v_i,u)} \quad\mbox{ as } n\to\infty \end{eqnarray*} holds on the space of Radon point measures on $[-\infty,\infty] \setminus \{0\}$ equipped with the vague topology. Clearly the above limit is a cluster Poisson process. For each $q \in\bbn_0$, define a random vector field \begin{equation} \widetilde{X}^q_t:=\{X_{t-w}\}_{w \in [-q\mathbf{1}_d, q\mathbf{1}_d]}. \label{defn:of:tilde:X} \end{equation} We take a sequence $\gamma_n$ satisfying $n^{d/\alpha}/\gamma_n \to 0$ so that for all $q\geq 0$, \begin{equation} N^{q}_n:=\sum_{\|t\|_\infty \leq n} \delta_{(n^{-1}t,\,\gamma_n^{-1}\widetilde{X}^q_t)} \label{defn:of:N_n} \end{equation} converges almost surely to $\O$, the null measure in the space $\mathbb{M}^q$ defined in Section~\ref{subsec:HLS_conv}. We define a map $ \psi: ([-\infty, \infty] \setminus \{0\}) \times W \times \mathbb{Z}^d \to [-\infty, \infty]^{[-q\mathbf{1}_d, q\mathbf{1}_d]} $ by \begin{equation} \psi(x, v, u)= \{xf(v,u-w)\}_{w\in [-q\mathbf{1}_d, q\mathbf{1}_d]} \label{defn:of:psi} \end{equation} in order to state the following result, which is an extension of Theorem~4.1 in \cite{hult:samorodnitsky:2010} to mixed moving average stable random fields. In particular, it describes the large deviation behavior of point processes induced by such fields. 
\begin{Theorem} \label{thm:main:proc:diss} Let $\{X_t\}_{t\in\mathbb{Z}^d}$ be the stationary symmetric $\alpha$-stable mixed moving average random field defined by \eqref{repn_Possion_integral_X_n} and $N^q_n$ be as in \eqref{defn:of:N_n} with \begin{eqnarray} \label{gamma} n^{d/\alpha}/\gamma_n \to 0 \quad \mbox{ as } n\to\infty. \end{eqnarray} Then for all $q \geq 0$, the HLS convergence \begin{equation} m^q_n(\cdot):=\frac{\gamma_n^\alpha}{n^d}\bbP(N^q_n \in \cdot) \rightarrow m^q_\ast(\cdot) \quad \mbox{ as } n\to\infty, \label{conv:m_n} \end{equation} holds in the space $\mathbf{M}_0(\mathbb{M}_p^q)$, where $m^q_\ast$ is a measure on $\mathbb{M}_p^q$ defined by \begin{align*} m^q_\ast(\cdot):=&(\Leb|_{[-1,1]^d} \otimes \nu_\alpha \otimes \nu)\Big( \Big\{(t,x,v) \in [-1,1]^d \times ([-\infty,\infty] \setminus \{0\}) \times W: \\ &\hspace{2.2in}\sum_{u \in \mathbb{Z}^d} \delta_{\left(t,\,\psi(x, v, u) \right)} \in \cdot\Big\}\Big) \end{align*} and satisfying $m^q_\ast(\mathbb{M}_p^q \setminus B(\O,\varepsilon))<\infty$ for all $\varepsilon>0$. \end{Theorem} The proof of the above result is given at the end of this section. The following statement is a direct consequence of \cref{thm:main:proc:diss}, obtained in a similar pattern as in \cite{hult:samorodnitsky:2010}. \begin{cor}\label{Corollary:Order Statistics} Let $X_{i:n}$ be the $i$-th order statistic of \linebreak $\{X_t\}_{t\in [-n\mathbf{1}_d, n\mathbf{1}_d]}$ in descending order, i.e., $X_{1:n}\geq X_{2:n}\geq \ldots \geq X_{(2n+1)^d\,:\,n}$\,. Moreover, for all $v \in W$, let $f_i^+(v)$ be the $i$-th order statistic of the sequence $\{f^+(v,u)\}_{u\in\bbz^d}$ in descending order and $f_i^-(v)$ be the $i$-th order statistic of the sequence $\{f^-(v,u)\}_{u\in\bbz^d}$ in descending order. 
Then for $y_1,\ldots,y_m> 0$, \begin{eqnarray*} \lefteqn{\lim_{n\to\infty}\frac{\gamma_n^{\alpha}}{n^d} \bbP(X_{1:n}>\gamma_ny_1, X_{2:n}>\gamma_ny_2,\ldots,X_{m:n}>\gamma_ny_m)}\\ &&=2^d\int_W\Big(\min_{i=1,\ldots,m}(f_i^+(v)y_i^{-1})^{\alpha}+\min_{i=1,\ldots,m}(f_i^-(v)y_i^{-1})^{\alpha}\Big) \nu({\rm d}v). \end{eqnarray*} In particular, for all $a>0$ and $n \geq 1$, if we define $\tau^a_n:=\inf\{\|t\|_{\infty}: X_t>a\gamma_n\}$, then \begin{eqnarray*} \lim_{n\to\infty}\frac{\gamma_n^{\alpha}}{n^d} \bbP(\tau^a_n\leq \lambda n) = (2\lambda)^d a^{-\alpha}\int_W\Big((\sup_{u\in\bbz^d}f^+(v,u))^{\alpha}+(\sup_{u\in\bbz^d}f^-(v,u))^{\alpha}\Big) \nu({\rm d}v). \end{eqnarray*} \end{cor} \begin{proof}[\textbf{Proof}] Following the proof of Corollary~5.1 in \cite{hult:samorodnitsky:2010}, we can show that the set $$B(y_1,y_2, \ldots,y_m):=\bigcap_{i=1}^m\left\{\xi\in \mathbb{M}_p^0:\xi([-1,1]^d\times (y_i,\infty))\geq i\right\}$$ is bounded away from the null measure and its boundary is an $m^0_\ast$-null set. Therefore by applying \cref{thm:main:proc:diss} with $q=0$ and the Portmanteau theorem (Theorem~2.4 in \cite{hult:lindskog:2006}), we obtain \begin{eqnarray*} \lim_{n\to\infty}\frac{\gamma_n^{\alpha}}{n^d} \bbP(X_{1:n}>\gamma_ny_1,\ldots,X_{m:n}>\gamma_ny_m) &=&\lim_{n\to\infty}m_n^0(B(y_1,\ldots,y_m))\\ &=&m_\ast^0(B(y_1,\ldots,y_m)), \end{eqnarray*} which can be shown to be equal to the first limit above by an easy calculation. The second statement follows trivially from the first one using the observation that \begin{eqnarray*} \frac{\gamma_n^{\alpha}}{n^d} \bbP(\tau^a_n\leq \lambda n)&=& \frac{\gamma_n^{\alpha}}{n^d} \bbP\left(\sup_{t\in[-\lfloor n\lambda \rfloor\mathbf{1}_d,\lfloor n\lambda \rfloor\mathbf{1}_d]} X_t>a\gamma_n\right) \end{eqnarray*} for all $n \geq 1$ and $a>0$. \end{proof} \subsection{Proof of Theorem~\ref{thm:main:proc:diss}} We shall first discuss a brief sketch of the proof of Theorem~\ref{thm:main:proc:diss}. 
Fix Lipschitz functions $g_1,\,g_2 \in C^+_K(\mathbb{E}^q)$ and $\epsilon_1,\,\epsilon_2>0$. By Theorem~A.2 of \cite{hult:samorodnitsky:2010}, in order to prove \eqref{conv:m_n}, it is enough to show that $m^q_\ast \in \mathbf{M}_0(\mathbb{M}_p^q)$ and \begin{equation} \lim_{n \to \infty} m^q_n(F_{g_1,g_2,\epsilon_1,\epsilon_2}) = m^q_\ast(F_{g_1,g_2,\epsilon_1,\epsilon_2}) \label{suff:condn:conv:mq_n} \end{equation} with $F_{g_1,g_2,\epsilon_1,\epsilon_2}$ as in \eqref{eq:F}. Following the heuristics in \cite{resnick:samorodnitsky:2004}, one expects that under the normalization $\gamma_n^{-1}$, all the Poisson points in \eqref{repn_Possion_integral_X_n} except perhaps one will be killed and therefore the large deviation behavior of $N^q_n$ should be the same as that of $$ \widehat{N}^q_n:=\sum_{i=1}^\infty \sum_{\|t\|_\infty \leq n} \delta_{(n^{-1}t,\,\gamma_n^{-1}\psi(j_i, v_i, u_i-t))}. $$ Keeping this in mind, we define \begin{eqnarray} \label{eq:mhat} \widehat{m}^q_n(\cdot):=\frac{\gamma_n^\alpha}{n^d}\bbP(\widehat{N}^q_n \in \cdot) \end{eqnarray} and hope to establish \begin{equation} \lim_{n \to \infty} \widehat{m}^q_n(F_{g_1,g_2,\epsilon_1,\epsilon_2}) = m^q_\ast(F_{g_1,g_2,\epsilon_1,\epsilon_2}) \label{conv:hatm_n} \end{equation} as the first step of proving \eqref{suff:condn:conv:mq_n}. For $p=1,2$ and for all $i\in\mathbb{N}$, let \begin{align} Z_{p,i}:=\sum_{\|t\|_\infty \leq n} g_p(n^{-1}t, \gamma_n^{-1}\psi(j_i, v_i, u_i-t)), \label{defn:Z_p,i} \end{align} where $\psi$ is as in \eqref{defn:of:psi}. 
For all $q \geq 0$ and $n \geq 1$, define \begin{eqnarray} \label{eq:mtilde32} \widetilde{m}^q_n(F_{g_1,g_2,\epsilon_1,\epsilon_2}):=\frac{\gamma_n^\alpha}{n^d}\bbE\Big[\sum_{i=1}^\infty \big(1-\e^{-(Z_{1,i}-\epsilon_1)_+}\big)\big(1-\e^{-(Z_{2,i}-\epsilon_2)_+}\big)\Big]. \end{eqnarray} In order to establish \eqref{conv:hatm_n}, we shall first show that the quantities $\widehat{m}^q_n(F_{g_1,g_2,\epsilon_1,\epsilon_2})$ and $\widetilde{m}^q_n(F_{g_1,g_2,\epsilon_1,\epsilon_2})$ are asymptotically equal, and then prove \begin{equation} \lim_{n \to \infty} \widetilde{m}^q_n(F_{g_1,g_2,\epsilon_1,\epsilon_2}) = m^q_\ast(F_{g_1,g_2,\epsilon_1,\epsilon_2}). \nonumber \end{equation} The execution and justification of these steps are detailed below with the help of a series of lemmas. Among these, Lemma~\ref{lemma:diff:of:mhat:and:mtilde} is the key step that makes our proof amenable to the techniques used in \cite{resnick:samorodnitsky:2004}. The rest of the lemmas can be established by closely following the proof of Theorem~3.1 in the aforementioned paper and improving it whenever necessary. Most of these improvements are nontrivial albeit somewhat expected. The first step in establishing the HLS convergence \eqref{conv:m_n} is to check that the limit measure $m^q_\ast$ is indeed an element of $\mathbf{M}_0(\mathbb{M}_p^q)$. \begin{lemma} \label{lemma:diss:1} For all $q \geq 0$, $m^q_\ast \in \mathbf{M}_0(\mathbb{M}_p^q).$ \end{lemma} \begin{proof}[\textbf{Proof}] The statement $m^q_\ast \in \mathbf{M}_0(\mathbb{M}_p^q)$ means that $m^q_\ast$ is a Borel measure on $\mathbb{M}_p^q$ with $m^q_\ast(\mathbb{M}_p^q\backslash B({\O},\epsilon))<\infty$ for any $\epsilon>0$. 
To prove this, we first claim that for almost all $(t, x, v) \in [-1,1]^d \times ([-\infty,\infty]\setminus \{0\}) \times W$, \begin{equation} \sum_{u \in \mathbb{Z}^d}\delta_{\left(t,\,\psi(x,v,u)\right)} \in \mathbb{M}_p^q, \label{first:step:comp:of:m_ast} \end{equation} which implies that $m^q_\ast$ is a Borel measure on $\mathbb{M}_p^q$. To this end, setting \begin{equation} A_\eta:=[-\infty,\infty]^{[-q\mathbf{1}_d, q\mathbf{1}_d]}\setminus(-\eta,\eta)^{[-q\mathbf{1}_d, q\mathbf{1}_d]} \label{defn:A_eta} \end{equation} for all $\eta>0$, and $\|f\|_\alpha:=\left(\int_W \sum_{u\in\mathbb{Z}^d}|f(v,u)|^\alpha\nu(dv)\right)^{1/\alpha}$, we get \begin{align} &\int_{[-1,1]^d} \int_{|x|>0} \int_W \sum_{u \in \mathbb{Z}^d}\delta_{\left(t,\,\psi(x,v,u)\right)}\left([-1,1]^d \times A_\eta\right) \nu(dv) \nu_\alpha(dx) dt \nonumber\\ &\quad \leq 2^{d+1}\eta^{-\alpha} (2q+1)^d\, \|f\|_\alpha^\alpha <\infty. \label{eq.3.20} \end{align} Applying the method used to establish that the limit measure in Theorem~3.1 of \cite{resnick:samorodnitsky:2004} (p.~196) is Radon, \eqref{first:step:comp:of:m_ast} follows from \eqref{eq.3.20}. Because of the estimates used in the proof of Theorem~A.2 in \linebreak \cite{hult:samorodnitsky:2010}, to obtain $m^q_\ast(\mathbb{M}_p^q\backslash B({\O},\epsilon))<\infty$ for all $\epsilon > 0$, it is enough to show that $ m^q_\ast(F_{g_1,g_2,\epsilon_1,\epsilon_2}) < \infty $ for all $g_1,\,g_2 \in C^+_K(\mathbb{E}^q)$ and for all $\epsilon_1,\,\epsilon_2>0$. 
Using \eqref{first:step:comp:of:m_ast} and a change of measure, we get \begin{align} m^q_\ast(F_{g_1,g_2,\epsilon_1, \epsilon_2}) &=\int_{[-1,1]^d} \int_{|x|>0} \int_W \Big\{\left(1-\e^{-(\sum_{u\in\mathbb{Z}^d}\,g_1(t,\psi(x,v,u))-\epsilon_1)_{+}}\right) \nonumber\\ &\hspace{0.25in}\times \left(1-\e^{-(\sum_{u\in\mathbb{Z}^d}\,g_2(t,\psi(x,v,u))-\epsilon_2)_{+}}\right)\Big\}\nu(dv) \nu_\alpha(dx) dt.\nonumber \end{align} Let $C$ be an upper bound for $|g_1|$ and $|g_2|$, and $\eta > 0$ be such that $g_1(t,y)=g_2(t,y)=0$ for all $y \in (-\eta,\eta)^{[-q\mathbf{1}_d, q\mathbf{1}_d]}$. Then \eqref{eq.3.20} and the inequality $1-\e^{-(x-\epsilon)_{+}} \leq x$ (for $x \geq 0$ and $\epsilon >0$) yield that $m^q_\ast(F_{g_1,g_2,\epsilon_1,\epsilon_2})$ can be bounded by $2^{d+1}C \eta^{-\alpha}(2q+1)^d\|f\|_\alpha^\alpha $. This shows $m^q_\ast(\mathbb{M}_p^q\backslash B({\O},\epsilon))<\infty$. \end{proof} To proceed with the proof of \cref{thm:main:proc:diss} by using the ideas mentioned above, we need the following crucial lemma. \begin{lemma} \label{lemma:diff:of:mhat:and:mtilde} Let $\widehat{m}^q_n(F_{g_1,g_2,\epsilon_1,\epsilon_2})$ and $\widetilde{m}^q_n(F_{g_1,g_2,\epsilon_1,\epsilon_2})$ be as in \eqref{eq:mhat} and \eqref{eq:mtilde32}, respectively. Then for all $q \geq 0$, \[ \lim_{n\to\infty}|\widehat{m}^q_n(F_{g_1,g_2,\epsilon_1,\epsilon_2})-\widetilde{m}^q_n(F_{g_1,g_2,\epsilon_1,\epsilon_2})|=0. \] \end{lemma} \begin{proof}[\textbf{Proof}] Let $C, \eta >0$ be as above and $A_\eta$ be defined by \eqref{defn:A_eta}. For $n \geq 1$, let $B_n$ be the event that for at most one $i$, $ \sum_{\|t\|_\infty \leq n} \delta_{\gamma_n^{-1}\psi(j_i, v_i, u_i-t)}(A_\eta) \geq 1, $ where $\psi$ is as in \eqref{defn:of:psi}. We claim that \begin{equation} \frac{\gamma_n^\alpha}{n^d} \bbP(B_n^c) \to 0 \label{bound:on:Prob:of:B_n:compliment} \end{equation} as $n \to \infty$. 
To prove this claim, observe that on $B_n^c$, there exists more than one $i$ such that $|j_i| \geq \eta\gamma_n/|f(v_i,u_i-t-w)|$ for some $(t,w) \in [-n\mathbf{1}_d, n\mathbf{1}_d] \times [-q\mathbf{1}_d, q\mathbf{1}_d]$ and therefore because of \eqref{PRM:underlying}, the sequence in \eqref{bound:on:Prob:of:B_n:compliment} can be bounded by \[ \frac{\gamma_n^\alpha}{n^d} \bbP\bigg(\sum_{i=1}^\infty \delta_{(j_i,v_i,u_i)}(L_n) \geq 2\bigg) \leq \frac{\gamma_n^\alpha}{n^d} \bigg(\bbE\Big(\sum_{i=1}^\infty \delta_{(j_i,v_i,u_i)}(L_n)\Big)\bigg)^2 = O\left({n^{d}}/{\gamma_n^{\alpha}}\right), \] where $ L_n:=\left\{(x,v,u): |x| \geq \eta \gamma_n\big(\sum_{\|t\|_\infty \leq n} \sum_{\|w\|_\infty \leq q}|f(v,u-t-w)|^\alpha\big)^{-\frac{1}{\alpha}}\right\} $. It is easy to check that with $Z_{1,i}$ and $Z_{2,i}$ as in \eqref{defn:Z_p,i}, \linebreak $\widehat{m}^q_n(F_{g_1,g_2,\epsilon_1,\epsilon_2}) =\frac{\gamma_n^\alpha}{n^d} \bbE \Big[\big(1-\e^{-(\sum_{i=1}^\infty Z_{1,i}-\epsilon_1)_+}\big) \big(1-\e^{-(\sum_{i=1}^\infty Z_{2,i}-\epsilon_2)_+}\big)\Big]$. 
Since on the event $B_n$, the random variables $\big(1-\e^{-(\sum_{i=1}^\infty Z_{1,i}-\epsilon_1)_+}\big) \big(1-\e^{-(\sum_{i=1}^\infty Z_{2,i}-\epsilon_2)_+}\big)$ and $\sum_{i=1}^\infty \big(1-\e^{-(Z_{1,i}-\epsilon_1)_+}\big)\big(1-\e^{-(Z_{2,i}-\epsilon_2)_+}\big)$ are equal, it transpires that \begin{eqnarray*} \lefteqn{|\widehat{m}^q_n(F_{g_1,g_2,\epsilon_1,\epsilon_2})-\widetilde{m}^q_n(F_{g_1,g_2,\epsilon_1,\epsilon_2})|} \nonumber\\ & \leq& \frac{\gamma_n^\alpha}{n^d} \bbP(B_n^c) + \frac{\gamma_n^\alpha}{n^d} \bbE \Big[\1_{B_n^c}\sum_{i=1}^\infty \big(1-\e^{-(Z_{1,i}-\epsilon_1)_+}\big) \big(1-\e^{-(Z_{2,i}-\epsilon_2)_+}\big)\Big]\nonumber\\ &\leq& \frac{\gamma_n^\alpha}{n^d} \bbP(B_n^c) + \sqrt{\frac{\gamma_n^\alpha}{n^d}\bbP(B_n^c)\,\frac{\gamma_n^\alpha}{n^d}\bbE\,\Big(\sum_{i=1}^\infty \big(1-\e^{-(Z_{1,i}-\epsilon_1)_+}\big) \Big)^2}\;,\nonumber \end{eqnarray*} which, combined with \eqref{bound:on:Prob:of:B_n:compliment}, yields Lemma~\ref{lemma:diff:of:mhat:and:mtilde} provided we show that \begin{equation} \frac{\gamma_n^\alpha}{n^d}\bbE\,\Big(\sum_{i=1}^\infty \big(1-\e^{-(Z_{1,i}-\epsilon_1)_+}\big) \Big)^2=O(1). 
\label{bigOone} \end{equation} To this end, note that applying \eqref{PRM:underlying}, Lemma~9.5IV in \cite{Daley:Vere-JonesII}, and the inequality $1 - \e^{-x} \leq x$ for $x \geq 0$, we obtain \begin{align*} &\bbE\,\Big(\sum_{i=1}^\infty \big(1-\e^{-(Z_{1,i}-\epsilon_1)_+}\big) \Big)^2\nonumber\\ &= \int_{|x|>0} \int_W\sum_{u \in \mathbb{Z}^d}\big(1-\e^{-(\sum_{\|t\|_\infty \leq n} \,g_1(n^{-1}t, \gamma_n^{-1}\psi(x, v, u-t))-\epsilon_1)_+}\big)^2 \nu(dv) \nu_\alpha(dx)\nonumber\\ & \;\;\;\;+ \bigg(\int_{|x|>0} \int_W\sum_{u \in \mathbb{Z}^d}\big(1-\e^{-(\sum_{\|t\|_\infty \leq n} g_1(n^{-1}t, \gamma_n^{-1}\psi(x, v, u-t))-\epsilon_1)_+} \big)\\ &\hspace{3.7in}\nu(dv) \nu_\alpha(dx)\bigg)^2 \nonumber\\ &\leq \int_{|x|>0} \int_W \sum_{u \in \mathbb{Z}^d} \sum_{\|t\|_\infty \leq n}\,g_1\big(n^{-1}t, \gamma_n^{-1}\psi(x, v, u-t)\big) \nu(dv) \nu_\alpha(dx) \nonumber\\ &\;\;\;\;+ \bigg(\int_{|x|>0} \int_W \sum_{u \in \mathbb{Z}^d} \sum_{\|t\|_\infty \leq n}\,g_1\big(n^{-1}t, \gamma_n^{-1}\psi(x, v, u-t)\big) \nu(dv) \nu_\alpha(dx)\bigg)^2,\nonumber \end{align*} from which \eqref{bigOone} follows because, by similar calculations as in \eqref{eq.3.20}, the first term above is bounded by $2 C (\eta \gamma_n)^{-\alpha}(2q+1)^d \|f\|_\alpha^\alpha (2n+1)^d$ for all $n \geq 1$ and $q \geq 0$, while for the second term we additionally use \eqref{gamma}. This finishes the proof of the lemma. \end{proof} We shall now establish \eqref{conv:hatm_n}. In light of Lemma~\ref{lemma:diff:of:mhat:and:mtilde}, it is enough to prove the next lemma. \begin{lemma} For all $q \geq 0$, \begin{equation} \lim_{n\to\infty}\widetilde{m}^q_n(F_{g_1,g_2,\epsilon_1,\epsilon_2}) = m^q_\ast(F_{g_1,g_2,\epsilon_1,\epsilon_2}). 
\label{conv:tildem_n} \end{equation} \end{lemma} \begin{proof}[\textbf{Proof}] This can be achieved in a fashion similar to the proof of Theorem~3.1 in \linebreak \cite{resnick:samorodnitsky:2004}, namely, by first proving a version of \eqref{conv:tildem_n} for $f$ supported on $W \times [-T\mathbf{1}_d,T\mathbf{1}_d]$ for some $T \geq 1$, and then using a converging together argument with the help of the inequalities used in the proof of Lemma~\ref{diff:of:m_F:and:mhat_F} below. \end{proof} Therefore in order to complete the proof of Theorem~\ref{thm:main:proc:diss}, it remains to establish the following lemma. \begin{lemma} \label{diff:of:m_F:and:mhat_F} For all $q \geq 0$, \begin{equation*} \lim_{n \to \infty} \big|m^q_n(F_{g_1,g_2,\epsilon_1,\epsilon_2})-\widehat{m}^q_n(F_{g_1,g_2,\epsilon_1,\epsilon_2}) \big| = 0. \end{equation*} \end{lemma} \begin{proof}[\textbf{Proof}] Because of the inequalities $|x_1 x_2 - y_1 y_2| \leq |x_1-y_1|+|x_2-y_2|$ for $x_1, x_2, y_1, y_2 \in [0,1]$ and $|\e^{-(z_1-\epsilon_1)_+}-\e^{-(z_2-\epsilon_2)_+}| \leq |z_1 - z_2|$ for $z_1, z_2 \in \left[0,\infty\right)$ and $\epsilon_1, \epsilon_2 \in (0, \infty)$, the convergence in \Cref{diff:of:m_F:and:mhat_F} will be established provided we show that for all Lipschitz $g \in C^+_K(\mathbb{E}^q)$, \begin{equation} \frac{\gamma_n^\alpha}{n^d} \bbE\big|N_n^q(g) - \widehat{N}_n^q(g)\big| \to 0 \label{diff:of:N_g:and:Nhat_g} \end{equation} as $n \to \infty$. We shall establish \eqref{diff:of:N_g:and:Nhat_g} by closely following the proof of (3.14) in \cite{resnick:samorodnitsky:2004} and modifying their estimates as needed. We sketch the main steps below. Assume that $|g| \leq C$ and $g(t,y)=0$ for all $y \in (-\eta,\eta)^{[-q\mathbf{1}_d, q\mathbf{1}_d]}$. 
For each $n\geq 1$ and for each $\theta >0$, let $A(\theta, n)$ denote the event that for all $\|t\|_\infty \leq n$ and for all $\|w\|_\infty \leq q$, $\sum_{i=1}^\infty \delta_{|j_if(v_i,u_i-t-w)|}\big([\gamma_n \theta, \infty]\big)\leq 1$. Then, arguing as in \cite{resnick:samorodnitsky:2004}, p.~201, it follows that for all $\theta >0$, \begin{equation} \gamma_n^\alpha \bbP \big(A(\theta,n)^c\big) \to 0 \quad \mbox{ as }n \to \infty.\label{bound:on:Prob:A_theta,n^c} \end{equation} Defining $Y_t$ to be the summand of largest modulus in $ X_t = \sum_{i=1}^\infty j_i f(v_i, u_i-t) $ for all $t \in \mathbb{Z}^d$, and adapting the method of \cite{resnick:samorodnitsky:2004}, p.~201, to our situation, we can find $M \in \mathbb{N}$ such that for all $\theta < \eta/2$, \begin{equation} D(\theta,n):=\bigg\{\bigvee_{\|w\|_\infty \leq q}\,\bigvee_{\|t\|_\infty \leq n} \left|\gamma_n^{-1}X_{t-w} - \gamma_n^{-1}Y_{t-w} \right|>\theta \bigg\}\cap A\left(\theta/M, n\right) \nonumber \end{equation} satisfies \begin{equation} \label{eq.3.26} \lim_{n\to\infty}\gamma_n^\alpha \bbP\big(D(\theta,n)\big)= 0. \end{equation} Define, for each $q \geq 0$, a random vector field $\{\widetilde{Y}^q_t\}_{t \in \mathbb{Z}^d}$ in $ \mathbb{R}^{[-q\mathbf{1}_d, q\mathbf{1}_d]}$ by replacing $\{X_t\}_{t \in \mathbb{Z}^d}$ by $\{Y_t\}_{t \in \mathbb{Z}^d}$ in \eqref{defn:of:tilde:X}.
For any $\theta < \eta/2$, the sequence in \eqref{diff:of:N_g:and:Nhat_g} is bounded by \begin{align} & \frac{\gamma_n^\alpha}{n^d} \sum_{\|t\|_\infty \leq n} \bbE\,\big|g(n^{-1}t, \gamma_n^{-1} \widetilde{X}^q_t)-g(n^{-1}t, \gamma_n^{-1} \widetilde{Y}^q_t)\big| \1_{D(\theta,n)}\nonumber\\ & \;\;\;\;+ \frac{\gamma_n^\alpha}{n^d} \sum_{\|t\|_\infty \leq n} \bbE\,\big|g(n^{-1}t, \gamma_n^{-1} \widetilde{X}^q_t)-g(n^{-1}t, \gamma_n^{-1} \widetilde{Y}^q_t)\big| \1_{A(\theta/M,n) \setminus D(\theta,n)}\nonumber\\ & \;\;\;\;+ \frac{\gamma_n^\alpha}{n^d} \bbE\, \big|N_n^q(g)\big|\1_{A(\theta/M,n)^c} + \frac{\gamma_n^\alpha}{n^d} \bbE\, \big|\widehat{N}_n^q(g)\big|\1_{A(\theta/M,n)^c} \nonumber\\ &=\frac{\gamma_n^\alpha}{n^d} \sum_{\|t\|_\infty \leq n} \bbE\,\big|g(n^{-1}t, \gamma_n^{-1} \widetilde{X}^q_t)-g(n^{-1}t, \gamma_n^{-1} \widetilde{Y}^q_t)\big| \1_{A(\theta/M,n) \setminus D(\theta,n)} \nonumber\\ &\;\;\;\;+\frac{\gamma_n^\alpha}{n^d} \bbE\, \big|\widehat{N}_n^q(g)\big|\1_{A(\theta/M,n)^c} \;+\;o(1). \nonumber \end{align} In the last step, we used the asymptotic results \eqref{bound:on:Prob:A_theta,n^c} and \eqref{eq.3.26}, and the fact that $g$ is bounded. Following \cite{resnick:samorodnitsky:2004}, p.~202, the first term above can be bounded by $2 L_g (\eta/2)^{-\alpha} (2q+1)^d \|f\|_\alpha^\alpha \left(\frac{2n+1}{n}\right)^d \theta$ (here $L_g$ denotes the Lipschitz constant of $g$) and repeating the method used in the proof of Lemma~\ref{lemma:diff:of:mhat:and:mtilde}, the second term can be shown to be $o(1)$. Since $\theta \in (0, \eta/2)$ is arbitrary, \eqref{diff:of:N_g:and:Nhat_g} follows. \end{proof} \section{The conservative case} \label{section:conservative} Suppose now that $\mathbf{X}$ is a stationary $S\alpha S$ random field generated by a conservative $\mathbb{Z}^d$-action. Unlike in the dissipative case, where a mixed moving average representation is available, no such nice representation exists in general.
However, if we view the underlying action as a group of invertible nonsingular transformations on $(S,\mathcal{S},\mu)$ (see \cite{roy:samorodnitsky:2008} and \cite{roy:2010a}), then under certain conditions, $\mathbf{X}$ can be thought of as a lower dimensional mixed moving average field. This will enable us to analyze the large deviation issues of point processes induced by such fields. Let $A:=\{\phi_t:\,t \in \mathbb{Z}^d\}$ be the subgroup of the group of invertible nonsingular transformations on $(S,\mathcal{S},\mu)$ and $ \Phi:\mathbb{Z}^d \rightarrow A $ be a group homomorphism defined by $\Phi(t)=\phi_t$ for all $t \in \mathbb{Z}^d$ with kernel $K:=Ker(\Phi)=\{t \in \mathbb{Z}^d:\,\phi_t = 1_S\}$. Here $1_S$ is the identity map on $S$. By the first isomorphism theorem of groups (see, for example, \cite{lang:2002}) we have $A \simeq \mathbb{Z}^d/K$. Therefore, the structure theorem for finitely generated abelian groups (see Theorem $8.5$ in Chapter I of \cite{lang:2002}) yields $ A=\overline{F} \oplus \overline{N}\,, $ where $\overline{F}$ is a free abelian group and $\overline{N}$ is a finite group. Assume $rank(\overline{F})=p \geq 1$ and $|\overline{N}|=l$. Since $\overline{F}$ is free, there exists an injective group homomorphism $ \Psi: \overline{F} \rightarrow \mathbb{Z}^d $ such that $\Phi \circ \Psi$ is the identity map on $\overline{F}$. Clearly, $F:=\Psi(\overline{F})$ is a free subgroup of $\mathbb{Z}^d$ of rank $p \leq d$. It follows easily that the sum $F+K$ is direct and $ \mathbb{Z}^d/(F+K) \simeq \overline{N} $. Let $x_1+(F +K)$, $x_2+(F +K),\,\ldots\,,x_l+(F+K)$ be all the cosets of $F +K$ in $\mathbb{Z}^d$. 
It has been observed in \cite{roy:samorodnitsky:2008} that $ H:=\bigcup_{k=1}^l (x_k + F) \label{defn_of_H} $ forms a countable Abelian group (isomorphic to $\mathbb{Z}^d/K$) under addition $\oplus$ modulo $K$ [for all $s_1,s_2\in H$, $s_1\oplus s_2$ is defined as the unique $s\in H$ such that $(s_1+s_2)-s\in K$] and it admits a map $N:H \to \{0,1,\ldots\}$ defined by \linebreak $ N(s):=\min\{\|s+v\|_ \infty: v \in K\} $ satisfying symmetry [for all $s \in H$, \linebreak $N(s^{-1})=N(s)$, where $s^{-1}$ is the inverse of $s$ in $(H,\oplus)$] and triangle inequality [for all $s_1, s_2 \in H$, $N(s_1 \oplus s_2) \leq N(s_1) + N(s_2)$]. Note that every $t \in \Zd$ can be decomposed uniquely as $t=t_H+t_K$, where $t_H \in H$ and $t_K \in K$. Therefore, we can define a projection map $\pi: \Zd \to H$ as $\pi(t)=t_H$ for all $t \in \Zd$. Define, for all $n \geq 1$, $ H_n=\{s \in H: N(s) \leq n\}. $ It is easy to see that the $H_n$'s are finite subsets increasing to $H$ and \begin{equation} |H_n| \sim c n^p \label{rate_of_growth:H_n} \quad \mbox{ as }n\to\infty, \end{equation} for some $c>0$; see (5.19) in \cite{roy:samorodnitsky:2008}. If $\{\phi_t\}_{t \in F}$ is a dissipative group action then $\{\wt\phi_s\}_{s \in H}$ defined by $ \wt\phi_s= \phi_s $ is a dissipative $H$-action; see, once again, \cite{roy:samorodnitsky:2008}, p.~228. Because of Remark 4.3 in \cite{roy:2010a} (an extremely useful observation of Jan Rosi\'nski), without loss of generality, all the known examples of stationary $S \alpha S$ random fields can be assumed to satisfy \begin{equation} c_v \equiv 1 \;\;\;\mbox{ for all }v \in K, \label{assumption_on_c_t_for_t_in_K} \end{equation} which would immediately yield that $\{c_s\}_{s \in H}$ is an $H$-cocycle for $\{\wt\phi_s\}_{s \in H}$. Hence the subfield $\{X_s\}_{s \in H}$ is $H$-stationary and is generated by the dissipative action $\{\wt\phi_s\}_{s \in H}$.
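To make the construction concrete, here is a small numerical sketch (Python; purely illustrative, with a finite search window as an assumption) for $d=2$, $K=\{(t,t):\,t\in\mathbb{Z}\}$ and $H=F=\{(u_1,0):\,u_1\in\mathbb{Z}\}$, the kernel appearing in the worked example below. It computes $N$ by brute force and exhibits the growth rate $|H_n| \sim c\,n^p$ with $p=1$:

```python
# Illustration (not from the paper's formal development): for d = 2,
# K = {(t, t) : t in Z} and H = F = {(u1, 0) : u1 in Z}, compute the
# quasi-norm N(s) = min_{v in K} ||s + v||_inf by brute force and count
# |H_n|.  One finds |H_n| = 4n + 1, i.e. |H_n| ~ c n^p with p = 1, c = 4.
# The finite search window for v is an assumption adequate for small inputs.

def N(s, window=500):
    u1, u2 = s
    return min(max(abs(u1 + c), abs(u2 + c)) for c in range(-window, window + 1))

def size_H_n(n):
    # H = {(u1, 0)}: count the u1 with N((u1, 0)) <= n
    return sum(1 for u1 in range(-10 * n, 10 * n + 1) if N((u1, 0)) <= n)

for n in (1, 5, 20):
    print(n, size_H_n(n))  # sizes 5, 21, 81: linear growth 4n + 1
```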
This implies, in particular, that there is a standard Borel space $(W,\mathcal{W})$ with a $\sigma$-finite measure $\nu$ on it such that \begin{equation} X_s \eqdef \int_{W \times H} h(v,u \oplus s)\,M^\prime(dv,du),\;\;\; s \in H, \label{mixed_moving_avg_repn_of_X_r_r_in_H} \end{equation} for some $h \in L^{\alpha}(W\times H, \nu \otimes \zeta_H)$, where $\zeta_H$ is the counting measure on $H$, and $M^\prime$ is a $S\alpha S$ random measure on $W \times H$ with control measure $\nu \otimes \zeta_H$ (see, for example, Remark $2.4.2$ in \cite{roy:2008}). Let $$ \sum_{i=1}^{\infty} \delta_{(j_i,v_i,u_i)} \sim \PRM(\nu_\alpha \otimes \nu \otimes \zeta_H) $$ be a Poisson random measure on $([-\infty,\infty] \setminus \{0\}) \times W \times H$, where $\nu_\alpha(\cdot)$ is the measure defined by \eqref{defn:nu_alpha}. The following series representation holds in parallel to \eqref{repn_Possion_integral_X_n} after dropping a factor of $C_\alpha^{1/\alpha}$ ($C_\alpha$ is as in \eqref{defn:C_alpha}): \begin{equation*} X_s = \sum_{i=1}^\infty j_ih(v_i,u_i\oplus s),\;\;\;s \in H. \end{equation*} Note that $rank(K)=d-p$; see the proof of Proposition 3.1 in \linebreak \cite{chakrabarty:roy:2013}. Assume $p<d$. Let $U$ be a $d \times p$ matrix whose columns form a basis of $F$ and $V$ be a $d \times (d-p)$ matrix whose columns form a basis of $K$. Let \begin{equation} \Delta:=\{y \in \mathbb{R}^p: \mbox{ there exists }\lambda \in \mathbb{R}^{d-p} \mbox{ such that } \|Uy+V\lambda\|_\infty \leq 1 \}, \nonumber \end{equation} which is a compact and convex set; see Lemma~5.1 in \cite{roy:2010a}. For all $y \in \Delta$, define $ Q_y:=\{\lambda \in \mathbb{R}^{d-p}:\, \|Uy+V\lambda\|_\infty \leq 1 \} $ and let $\mathcal{V}(y)$ be the $(d-p)$-dimensional volume of $Q_y$. Lemma~5.1 in \cite{roy:2010a} says that $\mathcal{V}:\Delta\to [0,\infty)$ is a continuous map.
We also define a map $ \psi_H: ([-\infty, \infty] \setminus \{0\}) \times W \times H\to [-\infty, \infty]^{[-q\mathbf{1}_d, q\mathbf{1}_d]} $ by \begin{equation*} \psi_H(x, v, u)= \{xh(v,u\ominus \pi(w))\}_{ w \in [-q\mathbf{1}_d, q\mathbf{1}_d]}, \end{equation*} where $\pi$ is the projection on $H$ as above and $u \ominus s:=u \oplus s^{-1}$ with $s^{-1}$ being the inverse of $s$ in $(H,\oplus)$. The rank $p$ can be regarded as the effective dimension of the random field and it gives more precise information on the rate of growth of the partial maxima than the actual dimension $d$. More precisely, according to Theorem 5.4 in \cite{roy:samorodnitsky:2008}, \begin{equation*} n^{-p/\alpha} \max_{\|t\|_\infty \leq n }|X_t| \Rightarrow \left\{ \begin{array}{ll} c^\prime_\bX \xi_\alpha & \mbox{ if $\{\phi_t\}_{t \in F}$ is a dissipative action,} \\ 0 & \mbox{ if $\{\phi_t\}_{t \in F}$ is a conservative action}, \end{array} \right. \end{equation*} where $c^\prime_\bX$ is a positive constant depending on $\bX$ and $\xi_\alpha$ is as in \eqref{cdf_of_Z_alpha}. However, even when $\{\phi_t\}_{t \in F}$ is dissipative and \eqref{assumption_on_c_t_for_t_in_K} holds, the point process sequence $\sum_{\|t\|_\infty \leq n} \delta_{n^{-p/\alpha}X_t}$ fails to be tight due to the clustering of points caused by the longer memory of the field. It so happens that the cluster sizes are of order $n^{d-p}$ and therefore the scaled point process $ n^{p-d}\sum_{\|t\|_\infty \leq n} \delta_{n^{-p/\alpha}X_t} $ converges weakly to a random measure on $[-\infty, \infty] \setminus \{0\}$; see Theorem 4.1 in \cite{roy:2010a}. To be precise, \[ \hspace*{-0.2cm} n^{p-d}\sum_{\|t\|_\infty \leq n} \delta_{( l \text{Leb}(\Delta)n^p)^{-1/\alpha}X_t}\weak\sum_{u \in H}\sum_{i=1}^{\infty} \mathcal{V}(\xi_i)\delta_{j_ih(v_i,u)} \quad \mbox{ as } n\to\infty, \] where $\sum_{i=1}^{\infty}\delta_{(\xi_i,j_i,v_i)} \sim $ \PRM$(\mbox{Leb}|_{\Delta}\otimes\nu_\alpha \otimes \nu)$.
Therefore, we take a sequence $\norm_n$ such that $ n^{p/\alpha}/ \norm_n \to 0$ as $n\to\infty$ so that for all $q \geq 0$ and $\widetilde{X}^q_t$ as defined in \eqref{defn:of:tilde:X}, \begin{equation} \Lamb^q_n:=n^{p-d}\sum_{\|t\|_\infty \leq n} \delta_{(n^{-1}t, \norm_n^{-1}\widetilde{X}^q_t)} \label{defn:of:lambda_n} \end{equation} converges almost surely to $\O$. With the notations introduced above, we have the following result. \begin{Theorem}\label{thm:main:proc:cons} Let $\{X_t\}_{t\in\mathbb{Z}^d}$ be a stationary symmetric $\alpha$-stable random field generated by a conservative action $\{\phi_t\}_{t \in \mathbb{Z}^d}$ and $\Lamb^q_n$ be as in \eqref{defn:of:lambda_n} with \begin{eqnarray} \label{beta} n^{\frac{p}{\alpha}}/\norm_n\to 0 \quad \mbox{ as } n\to\infty. \end{eqnarray} Assume $1 \leq p < d$, $\{\phi_t\}_{t \in F}$ is dissipative and \eqref{assumption_on_c_t_for_t_in_K} holds. Then for all $q \geq 0$, the HLS convergence \begin{equation} \ka^q_n(\cdot):=\frac{\norm_n^\alpha}{n^p}\bbP(\Lamb^q_n \in \cdot) \rightarrow \ka^q_\ast(\cdot) \label{conv:kappa_n} \quad \mbox{ as } n\to\infty \end{equation} holds in the space $\mathbf{M}_0(\mathbb{M}^q)$, where $\ka^q_\ast$ is a measure on $\mathbb{M}^q$ defined by \begin{align*} \ka^q_\ast(\cdot):=&\,l \,(\text{Leb}|_\Delta \otimes \nu_\alpha \otimes \nu) \Big(\Big\{(y,x,v) \in \Delta \times ([-\infty,\infty] \setminus \{0\}) \times W: \\ &\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \int_{Q_y}\,\sum_{u \in H} \delta_{\left(Uy+V\lambda,\,\psi_H(x, v, u)\right)} \,d\lambda\in \cdot\Big\}\Big) \end{align*} and satisfying $\ka^q_\ast(\mathbb{M}^q \setminus B(\O,\varepsilon))<\infty$ for all $\varepsilon>0$. \end{Theorem} \begin{proof}[\textbf{Proof}] Since this proof is similar to the proof of Theorem~\ref{thm:main:proc:diss} above with ingredients from \cite{roy:2010a}, we shall only sketch the main steps. 
For example, it can be verified that $\ka^q_\ast\in \mathbf{M}_0(\mathbb{M}^q)$ using the same approach as in the proof of \Cref{lemma:diss:1}. As before, fix Lipschitz functions $g_1,\,g_2 \in C^+_K(\mathbb{E}^q)$ and $\epsilon_1,\,\epsilon_2>0$. For all $s \in \mathbb{Z}^d$ and $n \geq 1$, define $C_{s,n}:= [-n\mathbf{1}_d,n\mathbf{1}_d]\cap (s+K)$. With the help of this notation, $\Lamb_n^q$ can be rewritten as $ \Lamb_n^q=n^{p-d} \sum_{s \in H_n} \sum_{t \in C_{s,n}} \delta_{(n^{-1}t,\,\norm_n^{-1}\widetilde{X}^q_s)}. $ Using the heuristics given before the proof of Theorem \ref{thm:main:proc:diss}, one can guess that the large deviation of $\Lamb_n^q$ would be the same as that of $$ \widehat{\Lamb}_n^q:=n^{p-d} \sum_{i=1}^\infty \sum_{s \in H_n} \sum_{t \in C_{s,n}} \delta_{(n^{-1}t, \norm_n^{-1}\psi_H(j_i,v_i,u_i\oplus s))}. $$ Keeping this in mind, we define \begin{eqnarray*} \widehat{\ka}_n^q:=\frac{\norm_n^\alpha}{n^p} \bbP(\widehat{\Lamb}_n^q \in \cdot) \in \mathbf{M}_0(\mathbb{M}^q) \end{eqnarray*} and follow the proof of Lemma~\ref{diff:of:m_F:and:mhat_F} to establish that \begin{equation} \lim_{n \to \infty} \big|\ka_n^q(F_{g_1, g_2, \epsilon_1, \epsilon_2}) - \widehat{\ka}_n^q(F_{g_1, g_2, \epsilon_1, \epsilon_2})\big| = 0, \label{diff:ka_and_kahat} \end{equation} where $F_{g_1, g_2, \epsilon_1, \epsilon_2}$ is as in \eqref{eq:F}.
Moreover, we define for all $q\geq 0$, \begin{eqnarray} &&\hspace*{-0.6cm}\widetilde{\ka}_n^q(F_{g_1, g_2, \epsilon_1, \epsilon_2}) \nonumber\\ &&\hspace*{-0.3cm}:=\frac{\norm_n^\alpha}{n^p} \bbE\bigg[\sum_{i=1}^\infty \Big\{\Big(1 - \e^{-( n^{p-d}\sum_{s \in H_n} \sum_{t \in C_{s,n}}g_1(n^{-1}t,\,\norm_n^{-1} \psi_H(j_i, v_i, u_i \oplus s))-\epsilon_1)_+}\Big) \nonumber\\ &&\hspace*{-0.3cm} \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\times \Big(1 - \e^{-(n^{p-d}\sum_{s \in H_n} \sum_{t \in C_{s,n}}g_2(n^{-1}t,\,\norm_n^{-1} \psi_H(j_i, v_i, u_i \oplus s))-\epsilon_2)_+}\Big)\Big\}\bigg].\nonumber \end{eqnarray} Assuming that $g_1(t,y)=g_2(t,y)=0$ for all $y \in (-\eta,\eta)^{[-q\mathbf{1}_d, q\mathbf{1}_d]}$, and using \eqref{rate_of_growth:H_n} and an argument parallel to the one used in establishing \eqref{bound:on:Prob:of:B_n:compliment} above, it follows that \[ \frac{\norm_n^\alpha}{n^p} \bbP\Big(\mbox{for more than one }i,\,\sum_{s\in H_n} \delta_{\norm_n^{-1} \psi_H(j_i, v_i, u_i \oplus s)}(A_\eta) \geq 1\Big) \to 0, \] from which we can establish a version of Lemma \ref{lemma:diff:of:mhat:and:mtilde} in this setup and conclude \begin{equation} \lim_{n\to\infty}|\widehat{\ka}^q_n(F_{g_1,g_2,\epsilon_1,\epsilon_2})-\widetilde{\ka}^q_n(F_{g_1,g_2,\epsilon_1,\epsilon_2})| = 0. \label{diff:katilde_and_kahat} \end{equation} In light of \cref{propn:suff:condn:HLS:conv}, \eqref{diff:ka_and_kahat}, and \eqref{diff:katilde_and_kahat}, it is enough to prove that for all $q\geq 0$, \begin{eqnarray} \label{conv:tildem_n:cons} \lim_{n\to\infty}\widetilde{\ka}_n^q(F_{g_1, g_2, \epsilon_1, \epsilon_2})= {\ka}_*^q(F_{g_1, g_2, \epsilon_1, \epsilon_2}). \end{eqnarray} We shall start with the special case when $h$ is supported on $W \times H_T$ for some $T \geq 1$. For such a function $h$, we have \begin{align*} &\widetilde{\ka}_{n}^q(F_{g_1, g_2, \epsilon_1, \epsilon_2})\\ &=\frac{1}{n^p}\int_{|x|>0}\int_W \sum_{u \in H_{n+T+q}} \Big\{\Big(1 - \e^{-(n^{p-d}\sum_{s \in H_{n}} \sum_{t \in C_{s,n}}g_1(n^{-1}t,\,\psi_H(x, v, s\ominus u))-\epsilon_1)_+}\Big)\\ & \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\times \Big(1 - \e^{-(n^{p-d}\sum_{s \in H_n} \sum_{t \in C_{s,n}}g_2(n^{-1}t,\,\psi_H(x, v, s\ominus u))-\epsilon_2)_+}\Big)\Big\}\\ &\hspace{3.9in} \nu(dv) \nu_\alpha(dx), \end{align*} from which, applying Lemma~5.1 in \cite{roy:2010a}, \eqref{rate_of_growth:H_n} above and the fact that $g_1$ and $g_2$ are Lipschitz, it follows that \begin{eqnarray*} &&\hspace*{-0.6cm}\widetilde{\ka}_{n}^q(F_{g_1, g_2, \epsilon_1, \epsilon_2})\label{claim}\\ &&\hspace*{-0.6cm}=\frac{1}{n^p}\int_{|x|>0}\int_W \sum_{u \in H_{n+T+q}} \Big\{\Big(1 - \e^{-(n^{p-d}\sum_{z \in B_{u,n}} \sum_{t \in C_{u,n}}g_1(n^{-1}t,\,\psi_H(x, v, z))-\epsilon_1)_+}\Big) \nonumber\\ && \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\times \Big(1 - \e^{-(n^{p-d}\sum_{z \in B_{u,n}} \sum_{t \in C_{u,n}}g_2(n^{-1}t,\,\psi_H(x, v, z))-\epsilon_2)_+}\Big)\Big\}\nonumber\\ &&\hspace{3.2in} \nu(dv) \nu_\alpha(dx)+o(1), \nonumber \end{eqnarray*} where $B_{u,n}:=\{z \in H_{T+q}: z\oplus u \in H_n\}$.
The above equality and an argument similar to the one used in establishing (5.17) of \cite{roy:2010a} yield \begin{align*} &\lim_{n \to \infty}\widetilde{\ka}_{n}^q(F_{g_1, g_2, \epsilon_1, \epsilon_2})\\ &= l\int_{|x|>0}\int_W \int_\Delta \Big\{\Big(1 - \e^{-(\int_{Q_y}\sum_{z \in H_{T+q}} g_1(Uy+V\lambda,\,\psi_H(x, v, z))d\lambda-\epsilon_1)_+}\Big)\\ & \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\times \Big(1 - \e^{-(\int_{Q_y}\sum_{z \in H_{T+q}} g_2(Uy+V\lambda,\,\psi_H(x, v, z))d\lambda-\epsilon_2)_+}\Big)\Big\} \,dy \,\nu(dv)\,\nu_\alpha(dx). \end{align*} This establishes \eqref{conv:tildem_n:cons} for $h$ with support $W\times H_T$ for some $T \geq 1$. The proof of \eqref{conv:tildem_n:cons} in the general case follows easily from the above by using a standard converging together technique (see the proofs of (5.21) and (5.22) in \cite{roy:2010a}) based on the inequalities used to establish Lemma~\ref{diff:of:m_F:and:mhat_F}. This completes the proof of Theorem~\ref{thm:main:proc:cons}. \end{proof} \begin{remark} It is possible to interpret \cref{thm:main:proc:diss} as a special case of \cref{thm:main:proc:cons} by setting $p=d$, $l=1$, $\Delta=[-1,1]^d$, $H=\bbz^d,$ $U=I_d$ (the identity matrix of order $d$), $V=0$ along with the convention that $\mathbb{R}^{0}=\{0\}$ so that $Q_y=\{0\}$ for all $y \in [-1,1]^d$ and $\lambda$ is interpreted as the counting measure on $\{0\}$ (think of it as the zero-dimensional Lebesgue measure). However, since the above proof does not honour these conventions, a separate proof had to be given for \cref{thm:main:proc:diss}. The same remark applies to the two parts of \cref{thm:large_deviation} below. \end{remark} \begin{example} In order to understand Theorem~\ref{thm:main:proc:cons} and its notation, let us consider Example 6.1 in \cite{roy:2010a} and apply \cref{thm:main:proc:cons} to it.
This means $d=2$, $S=\mathbb{R}$, $\mu$ is the Lebesgue measure and $\{\phi_{(t_1,t_2)}\}$ is a measure preserving conservative $\mathbb{Z}^2$-action on $\mathbb{R}$ defined by $ \phi_{(t_1,t_2)}(x)=x+t_1-t_2. $ Take any $f \in L^\alpha(\bbr,\mu)$ and define a stationary $S\alpha S$ random field $\{X_{(t_1,t_2)}\}$ as \linebreak $X_{(t_1,t_2)} \eqdef \int_{\mathbb{R}} f\big(\phi_{(t_1,t_2)}(x)\big)\, M(dx)$, $t_1,t_2 \in \mathbb{Z}$, where $M$ is an $S \alpha S$ random measure on $\mathbb{R}$ with control measure $\mu$. This representation of $\{X_{(t_1,t_2)}\}$ is of the form $(\ref{repn_integral_stationary})$ with $c_{(t_1,t_2)} \equiv 1$. As computed in \cite{roy:2010a}, in this case, $K=\{(t_1,t_2)\in \mathbb{Z}^2:\,t_1=t_2\}$, $p=d-p=l=1$, $H=F=\{(u_1,0):\,u_1 \in \mathbb{Z}\}$, and $U=(1,0)^T$, $V=(1,1)^T$ so that $\Delta = [-2,2]$ and for all $y \in [-2,2]$, \[ Q_y=\left\{ \begin{array}{ll} \,[-(1+y),1], &\;\;\;y \in \left[-2,0\right), \\ \,[-1,1-y], &\;\;\;y \in [0,2]. \end{array} \right. \] There is a standard Borel space $(W,\mathcal{W})$ with a $\sigma$-finite measure $\nu$ on it such that \eqref{mixed_moving_avg_repn_of_X_r_r_in_H} holds for some $h \in L^{\alpha}(W\times H, \nu \otimes \zeta_H)$, where $\zeta_H$ is the counting measure on $H$, and $M^\prime$ is a $S\alpha S$ random measure on $W \times H$ with control measure $\nu \otimes \zeta_H$. Note that for $u, s \in H$ with $u=(u_1, 0)$ and $s=(s_1, 0)$, $u \oplus s=(u_1+s_1,0)$, and $\pi(w_1, w_2)=(w_1-w_2, 0)$. Therefore, in this example, $\psi_H(x,v, (u_1,0))=\{xh(v, (u_1-w_1+w_2,0))\}_{-q \leq w_1, w_2 \leq q}$. It was shown in \cite{roy:2010a} that $ n^{-1} \sum_{|t_1|,\,|t_2| \leq n} \delta_{(4n)^{-1/\alpha}X_{(t_1,t_2)}} $ converges weakly to a random element in the space of all Radon measures on \linebreak $[-\infty,\infty]\setminus\{0\}$. 
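The geometry of $\Delta$ and $Q_y$ in this example is simple enough to check numerically; the following sketch (Python; an illustration only, not part of the example) verifies that $\Delta=[-2,2]$, that $Q_y$ has length $\mathcal{V}(y)=2-|y|$, and that $l\,\text{Leb}(\Delta)=1\cdot 4=4$, matching the $(4n)^{-1/\alpha}$ scaling quoted above:

```python
# Check of Delta and Q_y for U = (1,0)^T, V = (1,1)^T (illustration only):
# ||U y + V lam||_inf = max(|y + lam|, |lam|) <= 1 forces |lam| <= 1 and
# |y + lam| <= 1, so Q_y is the interval below and Delta = [-2, 2].

def Q_y(y):
    lo, hi = max(-1.0, -1.0 - y), min(1.0, 1.0 - y)
    return (lo, hi) if lo <= hi else None   # None means y lies outside Delta

def vol(y):
    # V(y) = length of Q_y; equals 2 - |y| on Delta
    lo, hi = Q_y(y)
    return hi - lo

assert Q_y(2.0) is not None and Q_y(-2.0) is not None    # endpoints of Delta
assert Q_y(2.01) is None and Q_y(-2.01) is None          # outside Delta

# l * Leb(Delta) = 1 * 4 = 4, the constant in the (4n)^{-1/alpha} scaling;
# a midpoint Riemann sum also gives the integral of V(y) over Delta (= 4).
steps = 400_000
dy = 4.0 / steps
integral = sum(vol(-2.0 + (k + 0.5) * dy) * dy for k in range(steps))
print(round(integral, 4))  # 4.0
```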
We take a sequence $\norm_n$ satisfying $n^{1/\alpha}/\norm_n \to 0$ and apply Theorem \ref{thm:main:proc:cons} to conclude that the following HLS convergence holds in $\mathbf{M}_0(\mathbb{M}^q)$: \begin{align*} &\frac{\norm_n^\alpha}{n}\bbP(\Lamb^q_n \in \cdot) \rightarrow \mu|_{[-2,2]} \otimes \nu_\alpha \otimes \nu \Big(\Big\{(y,x,v) \in [-2,2] \times ([-\infty,\infty] \setminus \{0\}) \times W: \\ &\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \int_{Q_y}\,\sum_{u_1 \in \mathbb{Z}} \delta_{\left((y+\lambda,\lambda),\,\psi_H(x,v,(u_1,0))\right)} \,d\lambda\in \cdot\Big\}\Big), \end{align*} where $ \Lamb^q_n=n^{-1}\sum_{|t_1|,\,|t_2| \leq n} \delta_{\left(n^{-1}(t_1, t_2), \; \norm_n^{-1}\{X_{(t_1-w_1,t_2-w_2)}\}_{-q \leq w_1, w_2 \leq q}\right)} $. \end{example} The following corollary is a direct consequence of \cref{thm:main:proc:cons}. Its proof is very similar to that of \cref{Corollary:Order Statistics} and hence is skipped. \begin{cor} \label{corollary:4.1} Let $y>0$. Then as $n\to\infty$, \begin{eqnarray*} &&\frac{\norm_n^{\alpha}}{n^p}\mathbb{P}\left(\max_{\|t\|_{\infty}\leq n}X_t>\norm_ny\right)\\ &&\quad \to l\text{Leb}(\Delta)y^{-\alpha} \int_W(\sup_{u\in H} h^+(v,u))^{\alpha}+(\sup_{u\in H} h^-(v,u))^{\alpha} \nu(dv). \end{eqnarray*} In particular, with $\tau^a_n$ as defined in \cref{Corollary:Order Statistics}, \begin{eqnarray*} &&\lim_{n\to\infty}\frac{\norm_n^{\alpha}}{n^p}\mathbb{P}(\tau^a_n\leq \lambda n)\\ &&\;\;\;\;\;\;\;\;=\lambda ^pa^{-\alpha}l\mbox{Leb}(\Delta)\int_W(\sup_{u\in H} h^+(v,u))^{\alpha}+(\sup_{u\in H} h^-(v,u))^{\alpha} \nu(dv). \end{eqnarray*} \end{cor} \section{Large deviation of the partial sum} \label{section:classical:large deviation} In this section, we use our point process large deviation results to investigate the classical large deviation behaviour for the partial sum sequence of stationary symmetric stable random fields. 
As before, we consider two cases depending on whether the underlying group action is dissipative or conservative. To fix the notations, let $\{X_t\}_{t\in\mathbb{Z}^d}$ be a stationary symmetric $\alpha$-stable random field as before and define the partial sum sequence \begin{eqnarray} \label{partial sum} S_n=\sum_{\|t\|_{\infty}\leq n}X_t, \quad n\in\bbn. \end{eqnarray} Using continuous mapping arguments based on Theorems~3.1 and 4.1 in \cite{roy:2010a}, respectively, one can establish the following weak convergence results. If $\{X_t\}_{t\in\mathbb{Z}^d}$ is generated by a dissipative action as in \cref{thm:main:proc:diss} having representation \eqref{repn_Possion_integral_X_n} with kernel function $f\in L^{\alpha}(W\times \bbz^d,\nu\otimes\zeta)$ satisfying \begin{eqnarray} \label{assump:1} \int_W \left(\sum_{u\in\bbz^d} \left|f(v,u)\right|\right)^{\alpha}\nu({\rm d}v)<\infty, \end{eqnarray} then $ n^{-d/\alpha}S_n\Rightarrow C_f Z_\alpha $, where $Z_\alpha \sim S\alpha S(1)$ and \begin{eqnarray} C_f^\alpha:=2^d\int_W\left(\left(\sum_{u\in\bbz^d}f(v,u)\right)^+\right)^{\alpha} +\left(\left(\sum_{u\in\bbz^d} f(v,u)\right)^-\right)^{\alpha}\, \nu({\rm d}v). \label{def:C_f} \end{eqnarray} On the other hand, if $\{X_t\}_{t\in\mathbb{Z}^d}$ is generated by a conservative action as in \cref{thm:main:proc:cons} with $h\in L^{\alpha}(W\times H,\nu\otimes\zeta_H)$ satisfying \begin{eqnarray} \label{assump:2} \int_W \left(\sum_{u\in H} \left|h(v,u)\right|\right)^{\alpha}\nu({\rm d}v)<\infty, \end{eqnarray} then $ \displaystyle n^{p-d-p/\alpha}S_n\weak C_{l,\mathcal{V},h}Z_\alpha $, where \begin{align} C_{l,\mathcal{V},h}^\alpha&:=l\left(\int_\Delta (\mathcal{V}(y))^{\alpha}\,{\rm d}y\right) \times \nonumber\\ & \;\;\;\;\;\;\;\;\;\;\int_W\left(\left(\sum_{u\in H}h(v,u)\right)^+\right)^{\alpha} +\left(\left(\sum_{u\in H} h(v,u)\right)^-\right)^{\alpha}\, \nu({\rm d}v).
\label{def:C_lVh} \end{align} We do not present the proofs of the above statements because they will also follow from our large deviation results; see \cref{thm:large_deviation} and Remark~\ref{remark:A} below. Note that the normalizations for weak convergence of the partial maxima and partial sum sequences are the same in the dissipative case but not in the conservative case. This is because the longer memory results in huge clusters, which causes the partial sum to grow faster than the maxima. The following theorem deals with the classical large deviation issue of the partial sum sequence $S_n$ under the assumptions of Theorems~\ref{thm:main:proc:diss} and \ref{thm:main:proc:cons}, respectively. The convergence used in these results is as in \cite{hult:lindskog:2006} with the space $\mathbf{S} = \bbr$ and the deleted point $s_0 =0$, i.e. $\mathbf{S}_0=\bbr\backslash\{0\}$. This results in the space $\mathbb{M}_0(\bbr)$ of all Borel measures on $\bbr \backslash \{0\}$ that are finite outside any neighbourhood of $0$. The convergence in $\mathbb{M}_0(\bbr)$ implies vague convergence in $\bbr \backslash \{0\}$; see Lemma~2.1 in \cite{Lindskog:Resnick:Roy}. \begin{Theorem} \label{thm:large_deviation} Let $\{X_t\}_{t\in\mathbb{Z}^d}$ be a stationary symmetric $\alpha$-stable random field and $S_n$ be the partial sum sequence as defined in \eqref{partial sum}. Then the following large deviation results hold.
\\ \noindent \textsl{(a)} \, If $\{X_t\}_{t\in\mathbb{Z}^d}$ is generated by a dissipative group action as in \cref{thm:main:proc:diss} having representation \eqref{repn_Possion_integral_X_n} with kernel function $f\in L^{\alpha}(W\times \bbz^d,\nu\otimes\zeta)$ satisfying \eqref{assump:1} and $\{\gamma_n\}$ satisfying \eqref{gamma}, then \begin{eqnarray*} \frac{\gamma_n^{\alpha}}{n^d}\bbP(\gamma_n^{-1}S_n\in \cdot) \to C_f^\alpha\nu_{\alpha}(\cdot)\quad \mbox{ as } n\to\infty \mbox{ in }\mathbb{M}_0(\bbr), \end{eqnarray*} where $C_f$ is as in \eqref{def:C_f} and $\nu_\alpha$ is as in \eqref{defn:nu_alpha}. \\ \noindent \textsl{(b)} If $\{X_t\}_{t\in\mathbb{Z}^d}$ is generated by a conservative action as in \cref{thm:main:proc:cons} with $h\in L^{\alpha}(W\times H,\nu\otimes\zeta_H)$ satisfying \eqref{assump:2} and $\{\norm_n\}$ satisfying \eqref{beta}, then \begin{eqnarray*} \mu_n(\cdot):=\frac{\norm_n^{\alpha}}{n^p}\bbP(n^{p-d}\norm_n^{-1}S_n\in \cdot) \to \mu(\cdot) \quad \mbox{ as } n\to\infty \mbox{ in }\mathbb{M}_0(\bbr), \end{eqnarray*} where $\mu(\cdot)= C_{l,\mathcal{V},h}^\alpha\nu_{\alpha}(\cdot)$ with $C_{l,\mathcal{V},h}$ as in \eqref{def:C_lVh} and $\nu_\alpha$ as in \eqref{defn:nu_alpha}. \end{Theorem} \noindent The proof of this theorem is presented in the next subsection. For the point process large deviation result, we gave the detailed proof of the dissipative case and only sketched it in the conservative case. For the present theorem, in contrast, we shall give the detailed proof when the underlying action is conservative; the other case follows similarly. \begin{Remark} \label{remark:A} (a) Let $\{X_t\}_{t\in\bbz^d}$ be an S$\alpha$S process. Then $S_n$ defined by \eqref{partial sum} is an S$\alpha$S random variable as well. We denote its scale parameter by $\sigma_n$. This means $S_n\stackrel{\mbox{\tiny d}}{=}\sigma_n Z_\alpha$ with $Z_\alpha\sim$S$\alpha$S$(1)$.
If $\{\gamma_n\},\{c_n\}$ are sequences of positive constants satisfying $n^{\ka}/\gamma_n\to 0$ for some $\ka>0$, then the following equivalences hold for $C>0$: \begin{itemize} \item[(i)] ${\displaystyle \frac{\gamma_n}{n^{\ka}c_n}\sigma_n\to C}$ as $n\to\infty$. \item[(ii)] ${\displaystyle \frac{\gamma_n^{\alpha}}{n^{\alpha\ka}}\mathbb{P}(c_n^{-1}S_n\in\cdot)\to C^\alpha\nu_\alpha(\cdot)}$ as $n\to\infty$ in $\mathbb{M}_0(\bbr)$. \item[(iii)] ${\displaystyle \frac{\gamma_n}{n^{\ka}c_n}S_n\Rightarrow CZ_\alpha}$ as $n\to\infty$. \end{itemize} Consequently, the large deviation behaviors in \Cref{thm:large_deviation} imply the weak convergence results presented at the beginning of this section, and vice versa. (b) If $\alpha\in\left(0,1\right]$ and $f\in L^{\alpha}(W\times \bbz^d,\nu\otimes\zeta)$ then assumption \eqref{assump:1} is satisfied. However, for $\alpha\in\left(1,2\right)$ this is unfortunately not necessarily the case. To see this, let $\{X_t\}_{t\in\bbz}$ be a moving average process of the form $ X_t=\sum_{j=-\infty}^t \beta_{t-j}Z_j $, $t\in\bbz$, where $(Z_j)_{j\in\bbz}$ is an iid sequence following an S$\alpha$S(1) distribution with $\alpha>1$ and $\beta_j=j^{-\norm}$, $j\in\bbn$, for some $\alpha^{-1}<\norm<1$. Clearly, \eqref{assump:1} is not satisfied since $\sum_{j=1}^\infty |\beta_j|=\infty$. Theorem~1 in \cite{Astrauskas:1983a} says that $n^{-1/\alpha-1+\norm}S_n\Rightarrow CZ_{\alpha}$ as $n\to\infty$ for some $C >0$. Hence, $\sigma_n\sim Cn^{1/\alpha+1-\norm}$. A consequence of the equivalences in (a) is that for any sequence $\{\gamma_n\}$ with $n^{1/\alpha+(1-\norm)}/\gamma_n\to 0$ as $n\to\infty$, \begin{eqnarray} \label{ex:scaling} \frac{\gamma_n^{\alpha}}{n^{1+(1-\norm)\alpha}}\mathbb{P}(\gamma_n^{-1}S_n\in\cdot)\to C^\alpha\nu_\alpha(\cdot) \quad \mbox{ as } n\to\infty \mbox{ in }\mathbb{M}_0(\bbr).
\end{eqnarray} We see that the scaling in the large deviation behavior in \cref{thm:large_deviation}~(a) under assumption~\eqref{assump:1} differs from the scaling in \eqref{ex:scaling}. Further examples for moving average processes with $\sum_{j=0}^{\infty}|\beta_j|=\infty$ whose scaling $\sigma_n$ satisfies $n^{-1/\alpha}\sigma_n\to\infty$ can be found in \cite{whitt:2002}, \cite{Astrauskas:1983b,Astrauskas:1983a} and \cite{Hsing:1999}. \end{Remark} \subsection{Proof of \cref{thm:large_deviation}} As discussed earlier, we will prove this theorem only for the conservative case \textsl{(b)}. The dissipative case \textsl{(a)} can be dealt with in a similar fashion. We shall first prove \cref{thm:large_deviation}~(b) for $h$ supported on $W\times H_T$ for some $T \geq 1$, and then use a converging together argument. 
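Before proceeding, let us record the elementary fact behind the equivalence of (i) and (iii) above (a standard property of stable laws, included here for the reader's convenience): writing $a_n:=\gamma_n/(n^{\ka}c_n)$ and recalling that $S_n\sim S\alpha S(\sigma_n)$, we have \begin{eqnarray*} \bbE\, e^{i\theta a_n S_n}=\exp\left(-(a_n\sigma_n)^{\alpha}|\theta|^{\alpha}\right)\to \exp\left(-C^{\alpha}|\theta|^{\alpha}\right)=\bbE\, e^{i\theta C Z_{\alpha}} \quad \mbox{ as } n\to\infty \end{eqnarray*} if and only if $a_n\sigma_n\to C$, so that $a_nS_n\Rightarrow CZ_\alpha$ is indeed equivalent to (i).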
To this end, for all $T \in \mathbb{N}$, set $h_T=h\1_{W\times H_T}$ and define $X_t^{(T)}$, $\mu_{n,T}$, $\mu_T$ and $C_{l,\mathcal{V},h_{T}}$ by replacing $h$ by $h_T$ in the definition of $X_t$, $\mu_{n}$, $\mu$ and $C_{l,\mathcal{V},h}$, respectively. \begin{Lemma} \label{Proposition:large_deviation:conservative} Let $S_n^{(T)}=\sum_{\|t\|_\infty\leq n}X_t^{(T)}$, $n\in\bbn$. Then \begin{eqnarray*} \mu_n^{(T)}(\cdot):=\frac{\norm_n^{\alpha}}{n^p}\bbP(n^{p-d}\norm_n^{-1}S_n^{(T)}\in \cdot) \to \mu^{(T)}(\cdot) \quad \mbox{ as } n\to\infty \mbox{ in }\,\mathbb{M}_0(\bbr). \end{eqnarray*} \end{Lemma} \begin{proof}[\textbf{Proof}] Since the proof is very similar to the proof of Theorem~6.1 in \linebreak \cite{hult:samorodnitsky:2010}, we give only a short sketch. The idea is that for any $0<\epsilon<1$, $S_n^{(T)}$ is divided into three parts \begin{eqnarray*} S_n^{(T)}&=&\sum_{\|t\|_\infty\leq n}X_t^{(T)}\left[\1_{\{|X_t^{(T)}|\leq \epsilon\}}+\1_{\{\epsilon<|X_t^{(T)}|\leq \epsilon^{-1}\}} +\1_{\{|X_t^{(T)}|> \epsilon^{-1}\}}\right]\\ &=:&S_n^{(1)}+S_n^{(2)}+S_n^{(3)}. \end{eqnarray*} In the following we investigate the second term. Define \linebreak $g_\epsilon:[-1,1]^d\times[-\infty,\infty]\backslash\{0\}\to\bbr$ with $g_\epsilon(t,x)=x\1_{\{\epsilon<|x|\leq \epsilon^{-1}\}}$. 
Since \begin{eqnarray*} &&\hspace*{-0.6cm}\kappa_{*,T}^{0}(\xi\in \mathbb{M}^0:\xi([-1,1]^d\times\{|x|=\epsilon\mbox{ or }\epsilon^{-1}\})>0)\\ &&\hspace*{-0.3cm}\leq l\,\Leb(\Delta)\sum_{u\in H_T}\nu_\alpha\otimes\nu\left(\left\{(x,v)\in[-\infty,\infty]\backslash\{0\}\times W: |xh(v,u)|=\epsilon \mbox{ or } \epsilon^{-1}\right\}\right)\\ &&\hspace*{-0.3cm}=0, \end{eqnarray*} the continuous-mapping theorem (see Lemma A.2 in \cite{hult:samorodnitsky:2010}) and \cref{thm:main:proc:cons} give \begin{eqnarray*} \lefteqn{\frac{\gamma_n^\alpha}{n^p}\mathbb{P}(n^{p-d}S_n^{(2)}\in\cdot)=\frac{\gamma_n^\alpha}{n^p}\mathbb{P}(g_\epsilon(\Lambda_{T,n}^{0})\in\cdot)}\\ &&\to l\,\Leb|_\Delta\otimes\nu_\alpha\otimes\nu\bigg(\bigg\{(y,x,v)\in\Delta\times [-\infty,\infty]\backslash\{0\}\times W:\\ &&\hspace*{4.5cm}\mathcal{V}(y)\sum_{u\in H_T}xh(v,u)\1_{\{\epsilon<|xh(v,u)|\leq \epsilon^{-1}\}}\in\cdot\bigg\}\bigg)\\ &&=:\mu^{(T)}_\epsilon(\cdot) \end{eqnarray*} as $n\to\infty$ in $\mathbb{M}_0(\bbr)$. Moreover, for any bounded continuous map $g: \bbr \to \bbr$ that vanishes in a neighbourhood of $0$, say $(-\eta,\eta)$ for some $\eta>0$, by dominated convergence the limit \begin{eqnarray*} \mu^{(T)}_\epsilon(g)\hspace*{-0.3cm}&=&\hspace*{-0.4cm} \int_\Delta \int_{\bbr\backslash\{0\}}\int_Wg\left(\mathcal{V}(y)\sum_{u\in H_T}xh(v,u)\1_{\{\epsilon<|xh(v,u)|\leq \epsilon^{-1}\}}\right) \nu(dv)\nu_\alpha(dx)dy\\ &\to&\hspace*{-0.4cm}\int_\Delta \int_{\bbr\backslash\{0\}}\int_Wg\left(\mathcal{V}(y)\sum_{u\in H_T}xh(v,u)\right)\, \nu(dv)\,\nu_\alpha(dx)\,dy= \mu^{(T)}(g) \end{eqnarray*} holds as $\epsilon \to 0$. The dominated convergence theorem can be applied in the above limit since $\mathcal{V}$ is bounded (Lemma~5.1 in \cite{roy:2010a}) and we assume \eqref{assump:2}. 
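For completeness, we make the dominating function in this step explicit (it is implicit in the argument above; the normalization $\nu_\alpha(\{|x|\geq c\})=c^{-\alpha}$ is assumed): since $g$ is bounded and vanishes on $(-\eta,\eta)$, \begin{eqnarray*} \left|g\left(\mathcal{V}(y)\sum_{u\in H_T}xh(v,u)\1_{\{\epsilon<|xh(v,u)|\leq \epsilon^{-1}\}}\right)\right|\leq \|g\|_{\infty}\,\1_{\left\{|x|\,\|\mathcal{V}\|_{\infty}\sum_{u\in H_T}|h(v,u)|\geq \eta\right\}} \end{eqnarray*} uniformly in $\epsilon$, and the right-hand side has integral $\|g\|_{\infty}\,\eta^{-\alpha}\,\|\mathcal{V}\|_{\infty}^{\alpha}\,\mathrm{Leb}(\Delta)\int_W\left(\sum_{u\in H_T}|h(v,u)|\right)^{\alpha}\nu({\rm d}v)$, which is finite by the boundedness of $\mathcal{V}$ and assumption \eqref{assump:2}.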
Finally, if we show that for any $\delta>0$ \begin{eqnarray} \label{eq: 6.1} \lim_{\epsilon\downarrow 0}\limsup_{n\to\infty} \frac{\norm_n^{\alpha}}{n^p}\bbP\left(\left|\sum_{\|t\|_{\infty}\leq n}X_t^{(T)}\1_{\{|X_t^{(T)}|\leq\norm_n\epsilon\}}\right|>\norm_n n^{d-p}\delta\right)=0, \end{eqnarray} then Lemma~\ref{Proposition:large_deviation:conservative} will follow step by step as in the proof of Theorem~6.1 in \cite{hult:samorodnitsky:2010} by a converging together argument. To prove \eqref{eq: 6.1}, note that \begin{eqnarray*} \lefteqn{\frac{\norm_n^{\alpha}}{n^p}\bbP\left(\left|\sum_{\|t\|_{\infty}\leq n}X_t^{(T)}\1_{\{|X_t^{(T)}|\leq \norm_n\epsilon\}}\right|>\norm_nn^{d-p}\delta\right)}\\ &&=\frac{\norm_n^{\alpha}}{n^p}\bbP\left(\left|\sum_{s\in H_n}m(s,n)X_s^{(T)}\1_{\{|X_s^{(T)}|\leq \norm_n\epsilon\}}\right|>\norm_nn^{d-p}\delta\right) \end{eqnarray*} with $m(s,n):=|[-n\mathbf{1}_d,n\mathbf{1}_d]\cap(s+K)|$ for $s\in H$. First, we would like to point out that if $N(u\oplus s)\leq T$ for some $u\in H$, then it can easily be shown that $N(s)-T\leq N(u)\leq N(s)+T$. Hence, we have the representation \begin{eqnarray*} X_s^{(T)} = \int_{W \times H_{N(s)+T}\cap H^c_{N(s)-T}} h_T(v,u \oplus s)\,M^\prime(dv,du). \end{eqnarray*} From this we see that for $s_1,s_2\in H$ with $N(s_1)+T\leq N(s_2)-T$ the intersection $H_{N(s_2)-T}^c\cap H_{N(s_1)+T}$ is empty, so that $X_{s_1}^{(T)}$ and $X_{s_2}^{(T)}$ are independent. Let $s_1,s_2\in H$ and $u\in H$ with $N(u\oplus s_1)\leq T$. Then \begin{eqnarray} \label{ineq:N1} N(u\oplus s_2)\geq N(s_2\ominus s_1)- N(u \oplus s_1)\geq N(s_2\ominus s_1)-T. \end{eqnarray} We define the positive finite constant \begin{eqnarray*} c:=\min\{\|Ui+V\norm\|_{\infty}:i\in\bbz^p\backslash\{\mathbf{0}_p\},\norm\in\bbr^q\}, \end{eqnarray*} and $c^*:=\inf\{z\in\bbn: 1/c\leq z\}=\lceil c^{-1}\rceil $. 
If $s_1:=x_k+U(c^*(2T+1)i_1+ j)\in H$ and $s_2:=x_k+U(c^*(2T+1)i_2+ j)\in H$ for some $i_1,i_2,j\in\bbz^p$, $i_1\not=i_2$, then \begin{eqnarray} \label{ineq:N2} N(s_2\ominus s_1)&=&\min\{\|s_2-s_1+v\|_{\infty}:v\in K\} \nonumber\\ &\geq &(2T+1)c^*\min\{\|Ui+V\norm\|_{\infty}:i\in\bbz^p\backslash\{\mathbf{0}_p\},\,\norm\in\bbr^q\}\nonumber\\ &\geq &(2T+1), \end{eqnarray} where the last step uses $c^*c\geq 1$. Combining \eqref{ineq:N1} and \eqref{ineq:N2}, we obtain $N(u\oplus s_2)\geq T+1$, and hence \linebreak $X_{s_1}^{(T)}=X_{x_k+U(c^*(2T+1)i_1+ j)}^{(T)}$ and $X_{s_2}^{(T)}=X_{x_k+U(c^*(2T+1)i_2+ j)}^{(T)}$ are independent. In the following, we assume without loss of generality that $n+L$ is a multiple of $c^*(2T+1)$ where $L:=\max_{k=1,\ldots,l}\|x_k\|_{\infty}$ and define $n':=(n+L)/(c^*(2T+1))$. This gives $H_n \subseteq [-n\mathbf{1}_d,n\mathbf{1}_d]$ and \begin{align*} H_n &\subseteq \bigcup_{k=1}^l\{x_k+U(c^*(2T+1)i+j): \, j\in [-c^*T\mathbf{1}_p,c^*T\mathbf{1}_p],\,i\in[- n'\mathbf{1}_p, n'\mathbf{1}_p]\}. \end{align*} We define $s_{k,i,j}:=x_k+U(c^*(2T+1)i+j)$ for $i,j\in\bbz^p$, $k\in\{1,\ldots,l\}$. Then $H_n\subseteq \{s_{k,i,j}: \, k\in\{1,\ldots,l\},\,j\in [-c^*T\mathbf{1}_p,c^*T\mathbf{1}_p],\,i\in[- n'\mathbf{1}_p, n'\mathbf{1}_p]\}.$ The independence of the sequence $(X_{s_{k,i,j}}^{(T)})_{i\in\bbz^p}$ for fixed $j\in\bbz^p$ and $k\in\{1,\ldots,l\}$, Markov's inequality and Karamata's Theorem (cf.~\cite{resnick:2007}, eq.~(2.5) on p.~36) result in \begin{eqnarray*} \lefteqn{\frac{\norm_n^{\alpha}}{n^p}\bbP\left(\left|\sum_{s\in H_n}m(s,n)X_s^{(T)}\1_{\{|X_s^{(T)}|\leq \norm_n\epsilon\}}\right|>\norm_nn^{d-p}\delta\right)}\\ &&\leq {\rm const. }\,\norm_n^{\alpha} \bbP(|X_1^{(T)}|>\norm_n\epsilon)\epsilon^2 \frac{1}{n^p} \sum_{s\in H_n}\frac{m(s,n)^2}{n^{2(d-p)}} \leq {\rm const. }\,\epsilon^{2-\alpha}\stackrel{\epsilon\downarrow 0}{\to}0. 
\end{eqnarray*} In the last inequality, we used \eqref{rate_of_growth:H_n} and Lemma~5.1 in \cite{roy:2010a}, which says that $m(s,n)/n^{(d-p)}$ is uniformly bounded. \end{proof} In order to complete the converging together argument and establish Theorem~\ref{thm:large_deviation} (b) from Lemma~\ref{Proposition:large_deviation:conservative}, we need one more lemma. \begin{Lemma} \label{Lemma:partial_sum:conservative} $S_n-S_n^{(T)}\sim S\alpha S(\sigma_{T,n})$ where \begin{eqnarray*} \lim_{T\to\infty}\limsup_{n\to\infty}\frac{\sigma_{T,n}}{n^{\frac{p}{\alpha}+(d-p)}}=0. \end{eqnarray*} \end{Lemma} \begin{proof}[\textbf{Proof}] By the decomposition \begin{eqnarray*} S_n-S_n^{(T)}\hspace*{-0.2cm}&=&\hspace*{-0.2cm}\sum_{s\in H_n}m(s,n)[X_s-X_s^{(T)}]\\ &&\hspace*{-2.3cm}=\left[\int_{W\times H_{n+T}}+\int_{W\times H_{n+T}^c}\right]\left(\sum_{s\in H_n}m(s,n) h(v,u\oplus s)\1_{\{N(u\oplus s)>T\}}\right)\,M'({\rm d}v,{\rm d}u),\\ \end{eqnarray*} the random variable $S_n-S_n^{(T)}$ is $S\alpha S$ with scale parameter \begin{eqnarray} \label{sigma:2} \sigma_{T,n}=(\sigma_{1,T,n}^{\alpha}+\sigma_{2,T,n}^{\alpha})^{1/\alpha}, \end{eqnarray} where \begin{eqnarray*} \sigma_{1,T,n}^{\alpha}&=&\int_{W\times H_{n+T} }\left|\sum_{s\in H_n} m(s,n)h(v,u\oplus s)\right|^{\alpha}\1_{\{N(u\oplus s)>T\}}\zeta_H({\rm d}u) \nu({\rm d}v),\\ \sigma_{2,T,n}^{\alpha}&=&\int_{W\times H_{n+T}^c}\left|\sum_{s\in H_n } m(s,n)h(v,u\oplus s)\1_{\{N(u\oplus s)>T\}}\right|^{\alpha}\zeta_H({\rm d}u) \nu({\rm d}v). \end{eqnarray*} In the following, we will use that there exists a constant $\ka_0$ such that \linebreak $m(s,n)/n^{(d-p)}\leq \ka_0$ for all $s\in H$ and $n\in\bbn$ (cf. \cite{roy:2010a}, Lemma~5.1) and $|H_{n+T}|\sim c (n+T)^p\sim cn^p$ (cf. \eqref{rate_of_growth:H_n}). 
The first term in \eqref{sigma:2} has the representation \begin{align} \label{eq:v1} \frac{\sigma_{1,T,n}^{\alpha}}{n^{p+\alpha(d-p)}}&=\frac{1}{n^p}\int_{W} \sum_{u\in H_{n+T}}\left|\sum_{s\in H_n } \frac{m(s,n)}{n^{d-p}}h(v,u\oplus s)\1_{\{N(u\oplus s)>T\}}\right|^{\alpha} \nu({\rm d}v) \nonumber\\ &\hspace*{-0.9cm}\leq\mbox{const.}\frac{|H_{n+T}|}{n^p}\int_{W} \left(\sum_{j\in H_T^c}\left|h(v,j)\right|\right)^{\alpha} \nu({\rm d}v)\nonumber\\ &\hspace*{-0.9cm}\stackrel{n\to\infty}{\longrightarrow}\mbox{const.}\int_{W} \left(\sum_{j\in H_T^c}\left|h(v,j)\right|\right)^{\alpha} \nu({\rm d}v) \stackrel{T\to\infty}{\longrightarrow}0 \end{align} by dominated convergence and assumption \eqref{assump:2}. It is easy to check that, if $\alpha\leq 1$, then \begin{align*} \frac{\sigma_{2,T,n}^{\alpha}}{n^{p+\alpha(d-p)}}&\leq\frac{1}{n^{p}}\int_{W} \sum_{u\in H_{n+T}^c}\left|\sum_{s\in H_n} \frac{m(s,n)}{n^{(d-p)}} |h(v,u\oplus s)|\1_{\{N(u\oplus s)>T\}}\right|^{\alpha}\nu({\rm d}v) \\ &\leq\frac{\mbox{const. }}{n^{p}}\int_{W} \sum_{s\in H_n}\sum_{u\in H_{n+T}^c} |h(v,u\oplus s)|^{\alpha}\1_{\{N(u\oplus s)>T\}}\nu({\rm d}v)\\ &\leq\mbox{const. }\int_{W} \sum_{j\in H_T^c} |h(v,j)|^{\alpha}\nu({\rm d}v)\stackrel{T\to\infty}{\longrightarrow}0, \end{align*} by dominated convergence and $h\in L^{\alpha}(W\times H,\nu\otimes\zeta_H)$. On the other hand, if $1<\alpha<2$, then \begin{align*} &\frac{\sigma_{2,T,n}^{\alpha}}{n^{p+\alpha(d-p)}}\\ &\leq\mbox{const. }\int_{W} \sum_{u\in H_{n+T}^c}\left(\sum_{j\in H_T^c}\left|h(v,j)\right|\right)^{\alpha} \frac{1}{n^p} \times\\ &\hspace{2in} \left(\frac{\sum_{s\in H_n} \left| h(v,u\oplus s)\1_{\{N(u\oplus s)>T\}}\right|}{\sum_{j\in H_T^c}\left|h(v,j)\right|}\right)^{\alpha}\nu({\rm d}v)\\ &\leq\mbox{const. 
}\int_{W} \left(\sum_{j\in H_T^c}\left|h(v,j)\right|\right)^{\alpha} \times\\ & \hspace{1.7in}\frac{\sum_{u\in H_{n+T}^c}\sum_{s\in H_n} \left|h(v,u\oplus s)\right|\1_{\{N(u\oplus s)>T\}}}{n^p\sum_{j\in H_T^c}\left|h(v,j)\right|}\nu({\rm d}v)\\ &\leq\mbox{const. }\int_{W} \left(\sum_{j\in H_T^c}\left|h(v,j)\right|\right)^{\alpha}\nu({\rm d}v)\stackrel{T\to\infty}{\to}0, \end{align*} by dominated convergence and assumption \eqref{assump:2}. To summarize, \begin{eqnarray} \label{eq:v2} \lim_{T\to\infty}\limsup_{n\to\infty}\frac{\sigma_{2,T,n}^{\alpha}}{n^{p+\alpha(d-p)}}=0. \end{eqnarray} Combining \eqref{sigma:2}--\eqref{eq:v2}, we conclude that $ \lim_{T \to \infty}\limsup_{n\to\infty}\frac{\sigma_{T,n}^{\alpha}}{n^{p+\alpha(d-p)}}=0. $ \end{proof} Now we are ready to prove Theorem~\ref{thm:large_deviation}~(b). We have to show \linebreak $\lim_{n\to\infty}\mu_n(g)=\mu(g)$ for any bounded continuous map $g: \bbr \to \bbr$ that vanishes in a neighbourhood of $0$; see Theorem 2.1 in \cite{hult:lindskog:2006}. As noted in the appendix of \cite{hult:samorodnitsky:2010}, p.33, we can further assume that $g$ is a Lipschitz function. For such a function $g$ and any $\delta>0$, $|\mu(g)-\mu_n(g)|$ is bounded by \begin{eqnarray} \label{eq:A1:2} &&\hspace*{-0.5cm}\; |\mu(g)-\mu^{(T)}(g)|+\left|\mu^{(T)}(g)-\bbE\left(\frac{\norm_n^{\alpha}}{n^p}g(n^{p-d}\norm_n^{-1}S_n^{(T)})\right)\right| \nonumber\\ &&\hspace*{-0.5cm}\;\;\;+\left|\bbE\left(\left(\frac{\norm_n^{\alpha}}{n^p}g(n^{p-d}\norm_n^{-1}S_n^{(T)})-\frac{\norm_n^{\alpha}}{n^p}g(n^{p-d}\norm_n^{-1}S_n)\right) \1_{\{n^{p-d}\norm_n^{-1}|S_n-S_n^{(T)}|> \delta\}}\right)\right|\nonumber\\ &&\hspace*{-0.5cm}\;\;\;+ \left|\bbE\left(\left(\frac{\norm_n^{\alpha}}{n^p}g(n^{p-d}\norm_n^{-1}S_n^{(T)})-\frac{\norm_n^{\alpha}}{n^p}g(n^{p-d}\norm_n^{-1}S_n)\right) \1_{\{n^{p-d}\norm_n^{-1}|S_n-S_n^{(T)}|\leq \delta\}}\right)\right|\nonumber\\ &&\hspace*{-0.5cm}\;=:I_{T,n,1}+I_{T,n,2}+I_{T,n,3}+I_{T,n,4}. 
\nonumber \end{eqnarray} We shall show that $\lim_{T\to\infty}\limsup_{n\to\infty}I_{T,n,i}=0$ for $i=1,2,3$, which combined with $\lim_{\delta\downarrow 0}\lim_{T\to\infty}\limsup_{n\to\infty}I_{T,n,4}=0$ will prove this theorem. First, using dominated convergence and assumption~\eqref{assump:2}, we obtain for any Borel $B\subseteq \bbr\backslash\{0\}$, \begin{eqnarray} \label{eq:v5:2} \mu^{(T)}(B) \stackrel{T\to\infty}{\to} \mu(B). \end{eqnarray} A consequence of the Portmanteau theorem (Theorem 2.4 in \cite{hult:lindskog:2006}) is $\mu^{(T)}\to \mu$ as $T\to\infty$ in $\mathbb{M}_0(\bbr)$, and $\lim_{T\to\infty}\limsup_{n\to\infty}I_{T,n,1}=0.$ Moreover, \Cref{Proposition:large_deviation:conservative} results in $\lim_{T\to\infty}\limsup_{n\to\infty}I_{T,n,2}=0.$ Next, for any $\delta>0$, we have \begin{eqnarray*} I_{T,n,3}\leq \frac{\norm_n^{\alpha}}{n^p}2\|g\|_{\infty}\bbP(n^{p-d}\norm_n^{-1}|S_n-S_n^{(T)}|>\delta). \end{eqnarray*} By \Cref{Lemma:partial_sum:conservative}, $S_n-S_n^{(T)}\sim S\alpha S(\sigma_{T,n})$ with $\norm_n n^{d-p}\sigma_{T,n}^{-1}\to\infty$ if $n^{p/\alpha}/\norm_n\to 0$ and hence \begin{eqnarray*} \lim_{n\to\infty}\frac{\norm_n^{\alpha}}{n^p}\bbP(n^{p-d}\norm_n^{-1}|S_n-S_n^{(T)}|>\delta) =\lim_{n\to\infty} \frac{\sigma_{T,n}^\alpha}{n^{p+\alpha(d-p)}}\bbP(|Z_\alpha|>\delta)=0, \end{eqnarray*} where $Z_\alpha \sim S\alpha S(1)$. Therefore, $ \lim_{T\to\infty}\limsup_{n\to\infty}I_{T,n,3}=0.$ Let $\eta>0$ be such that $g(x)=0$ for $x\in(-\eta,\eta)$. Suppose that $\delta<\eta/2$. If either $|g(n^{p-d}\norm_n^{-1}S_n)|>0$ or $|g(n^{p-d}\norm_n^{-1}S_n^{(T)})|>0$, we have $n^{p-d}\norm_n^{-1}|S_n^{(T)}|>\eta/2$ on $\{n^{p-d}\norm_n^{-1}|S_n-S_n^{(T)}|\leq \delta\}$. This results in $$ I_{T,n,4}\leq\sup_{|x-y|\leq \delta}|g(x)-g(y)|\frac{\norm_n^{\alpha}}{n^p} \bbP(n^{p-d}\norm_n^{-1}|S_n^{(T)}|>\eta/2). 
$$ Using \Cref{Proposition:large_deviation:conservative}, \eqref{eq:v5:2} and the fact that $g$ is a Lipschitz function, it follows finally that $ \lim_{\delta\downarrow 0}\lim_{T\to\infty}\limsup_{n\to\infty}I_{T,n,4}=0$. This proves Theorem~\ref{thm:large_deviation} (b). \\ \noindent \textbf{Acknowledgement.} The authors would like to thank Gennady Samorodnitsky for some useful discussions.
\section{Introduction} Wonderful varieties are projective algebraic varieties (for us, over the field of complex numbers $\mathbb C$) endowed with an action of a semisimple connected algebraic group $G$, having certain properties which have been inspired by the compactifications of symmetric homogeneous spaces given by De Concini and Procesi in \cite{DP83}. Wonderful varieties turn out to play a significant role in the theory of spherical varieties, which are a class of $G$-varieties representing a common generalization of flag varieties and toric varieties. In this paper we answer a question raised by Brion at the end of the paper \cite{Br90}. Given a wonderful $G$-variety $X$, we want to study whether there exists a $G$-equivariant closed immersion in a projective space $\mathbb P(V)$ where $V$ is a simple $G$-module (a ``simple immersion'', for brevity). This fact is true if $X$ is a complete symmetric variety in the sense of \cite{DP83}, but fails for other simple examples, such as $\mathbb P^1\times\mathbb P^1$ under the diagonal action of $PSL_2$. In our main theorem (theorem \ref{thm:main}) we prove a necessary and sufficient condition for this to be true, and find all the linear systems which give rise to such an immersion. The condition is given in terms of the stabilizers in $G$ of the points of the variety; it can also be stated in terms of some known invariants of wonderful varieties (the {\em spherical roots}). Our approach consists in reducing the problem to a small family of wonderful varieties: those of rank $1$ which are not a parabolic induction. This family is finite and classified for any given $G$, see the works of Ahiezer (\cite{Ah83}), Huckleberry and Snow (\cite{HS82}), Brion (\cite{Br89a}), so we can carry out the proof case by case. The same technique is used to show that on a wonderful variety any ample line bundle is very ample. 
\subsection*{Acknowledgements} The author thanks Prof.~M.~Brion for all his precious help in the development of this work, and Prof.~D.~Luna for the fruitful discussions on the subject. \section{Definitions}\label{sect:def} \subsection{Wonderful varieties} Throughout this paper, $G$ will be a semisimple connected algebraic group over $\mathbb C$. We also suppose $G$ simply connected. In $G$ we fix a Borel subgroup $B$, a maximal torus $T\subset B$, we denote by $\mathrm{\Phi}$ the corresponding root system, by $\mathrm{\Phi}^+$ the positive roots and by $S$ the set of simple roots. Also, we denote by $B_-$ the Borel subgroup opposite to $B$, i.e.~such that $B\cap B_- = T$. Given a subset $S'$ of the simple roots, we denote by $\mathrm{\Phi}_{S'}$ the associated root subsystem, and we denote by $G_{S'}$ (resp.~$G_{-S'}$) the associated parabolic subgroup containing $B$ (resp.~$B_-$). We will also use the notation $\mathbb C^\times$ to denote the multiplicative group $\mathbb C\setminus\{0\}$. \begin{definition} \cite{Lu01} A {\em wonderful $G$-variety} is an irreducible algebraic variety $X$ over $\mathbb C$ such that: \begin{enumerate} \item $X$ is smooth and complete; \item $G$ has an open (dense) orbit on $X$, and the complement is the union of ($G$-stable) prime divisors $D_i$ ($i = 1,\ldots,r$), which are smooth, with normal crossings and satisfy $\bigcap_{i=1}^r D_i \neq \emptyset$; \item If $x,x'\in X$ are such that $\left\{ i \;|\; x\in D_i\right\} = \left\{ i \;|\; x'\in D_i\right\}$, then $x$ and $x'$ lie on the same $G$-orbit. \end{enumerate} The number $r$ of $G$-stable prime divisors is the {\em rank} of $X$. \end{definition} A wonderful variety $X$ is always spherical, i.e.~a Borel subgroup has an open dense orbit on $X$ (see \cite{Lu96}). 
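As a standard illustration of this definition (our own example, not needed later), consider $X=\mathbb P^1\times\mathbb P^1$ with $G=SL_2$ acting diagonally (through its adjoint quotient). The complement of the open orbit is the single $G$-stable prime divisor \[ D_1=\left\{(p,p)\;|\;p\in\mathbb P^1\right\} \quad \textrm{(the diagonal)}, \] so $\mathrm{rank}\, X=1$; the stabilizer of the point $(0,\infty)$ is the maximal torus $T$, and the open orbit is isomorphic to $SL_2/T$. 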
We can introduce some data associated to $X$ coming from the theory of spherical varieties, and fix some notations (for details, see \cite{Kn96}, \cite{Lu01}): \begin{enumerate} \item $H = $ the stabilizer of a point in the open $G$-orbit of $X$, so that this orbit is isomorphic to $G/H$: it is known that $H$ has finite index in its normalizer $N_GH$; we choose the point stabilized by $H$ to be also in the open $B$-orbit, so that $BH$ is open in $G$; \item $\Xi_X = \left\{B\textrm{-weights of functions in } \mathbb C(X) \textrm{ which are }B\textrm{-eigenvectors}\right\}$, where $G$ acts on rational functions on $X$ in the usual way: $(gf)(x)=f(g^{-1}x)$, and the weight of a function $f$ is the character $\chi\colon B\to \mathbb C^\times$ such that $bf=\chi(b)f$ for all $b\in B$; \item $\Delta_X = \left\{ B\textrm{-stable but not } G\textrm{-stable prime divisors of }X \right\}$, whose elements are called the {\em colours} of $X$; \item for any colour $D$, we define $\rho_X(D) \in \mathrm{Hom}_\mathbb Z(\Xi_X,\mathbb Z)$ in the following way: $\langle\rho_X(D),\chi\rangle=\nu_D(f_\chi)$ where $\nu_D$ is the discrete valuation on $\mathbb C(X)$ associated to $D$, and $f_\chi$ is a rational function on $X$ being a $B$-eigenvector with weight $\chi$; this functional $\rho_X(D)$ is well defined because $X$ has an open $B$-orbit and thus any weight $\chi$ determines $f_\chi$ up to a multiplicative constant; \item for any simple root $\alpha\in S$, we say that $\alpha$ ``moves'' a colour $D\in\Delta_X$ if $D$ is non-stable under the action of $G_{\{\alpha\}}$; \item $z = $ the unique point fixed by $B_-$ on $X$; it lies on $Z=Gz$ the unique closed $G$-orbit; \item $\Sigma_X=\left\{ T\textrm{-weights of the } T\textrm{-module } T_zX/T_z(Gz)\right\}$; the elements of $\Sigma_X$ are called the {\em spherical roots} of X, and the cardinality of $\Sigma_X$ is equal to $\mathrm{rank} X$; \item $P_X =$ the stabilizer in $G$ of the open $B$-orbit; this is a parabolic subgroup 
containing $B$; \item $S^p_X=$ the subset of simple roots associated to the parabolic subgroup $P_X$, so that in our notations: $P_X = G_{S^p_X}$. \end{enumerate} \subsection{Parabolic induction} \label{subsect:induction} Let $X$ be a wonderful $G$-variety, and suppose that the stabilizer $H$ of a point in the open $G$-orbit is such that $R(Q) \subseteq H \subseteq Q$ for some parabolic subgroup $Q$ of $G$, where $R(Q)$ is the radical of $Q$. Then $X$ is isomorphic to $G\times_Q Y$ where $Y$ is a $Q$-variety on which the radical $R(Q)$ acts trivially. Moreover, $Y$ turns out to be wonderful under the action of $Q/R(Q)$, thus also under the action of a Levi subgroup $L$ of $Q$. Here $G\times_Q Y$ is defined as the quotient $(G\times Y)/\sim$ where $(g,x)\sim(gq,q^{-1}x)$ for all $q\in Q$. \begin{definition} Such a wonderful variety $X\cong G\times_Q Y$ is said to be a {\em parabolic induction} of $Y$ by means of $Q$. A wonderful variety which is not a parabolic induction is said to be {\em cuspidal}. \end{definition} We will need some facts about wonderful varieties which are parabolic inductions. The main idea is that the $L$-action on $Y$ determines the whole structure of $X$, and this is the reason why the study of wonderful varieties most often reduces to the study of cuspidal ones. It is convenient to make some choices for $Q$ and $L$, using conjugation whenever necessary: we choose $Q$ so that it contains $B_-$, and let $S(Q)\subset S$ be the subset of simple roots associated to $Q$. We can choose $L$ such that $B\cap L$ is a Borel subgroup of $L$. Let $\phi\colon X\to G/Q$ be the map given by $[g,y]\mapsto gQ$; we can identify $Y$ with $\phi^{-1}(Q)$. There are some colours of $X$ that map surjectively onto $G/Q$ via $\phi$: such a colour is equal to $\overline{B D}$ for a colour $D$ of $Y$. The other colours of $X$ are the pull-back of the colours of $G/Q$ along $\phi$. 
Therefore $\Delta_X$ can be identified with the disjoint union of $\Delta_{G/Q}$ and $\Delta_{Y}$; however, in order to avoid confusion, we use the notation $\widetilde D$ to denote the element in $\Delta_X$ associated to $D\in\Delta_{G/Q}$ or $D\in\Delta_Y$. With this identification, any simple root moving some colour coming from $Y$ must belong to $S(Q)$, while simple roots moving colours coming from $G/Q$ must be in $S\setminus S(Q)$. \subsection{Line bundles}\label{subsect:linebundles} Since $X$ is spherical and has only one closed $G$-orbit, we have the following description of $\mathrm{Pic}(X)$: \begin{prop}\cite{Br89}\label{prop:bunsphe} The Picard group of $X$ has a basis consisting of the classes of the colours. Moreover, the divisors which are generated by global sections (resp.~ample) are the linear combinations of these classes having non-negative (resp.~positive) coefficients. \end{prop} Any line bundle $\mathscr L$ on $X$ has a unique $G$-linearization, that is, a $G$-action on the total space of the bundle such that the projection on $X$ is a $G$-equivariant map, and such that it is a linear action on the fibers (see \cite{KKLV89}). This determines an action of $G$ on the space of global sections $\Gamma(X,\mathscr L)$. Our $X$ is spherical, so this $G$-module has no multiplicities, which means that any simple $G$-module appears no more than once (see \cite{Br97}). Moreover, the highest weights of the simple $G$-modules which actually appear can be described precisely. Let $\mathscr L$ be a line bundle on $X$, and suppose it is associated to an effective divisor of the form $\delta=\sum_{D\in\Delta_X}n_D D$ with $n_D\geq 0$ for all $D$. We will also use the standard notation $\mathcal O(\delta)$ for such a line bundle. Consider its canonical section $\sigma_\mathscr L\in\Gamma(X,\mathscr L)$; since the colours are $B$-stable, $\sigma_\mathscr L$ is $B$-proper: call its $B$-weight $\chi_\mathscr L$. 
Then, the highest weights of the simple modules appearing in $\Gamma(X,\mathscr L)$ are the dominant weights which can be expressed as $\chi_\mathscr L + \xi$, where $\xi$ is a linear combination of spherical roots with non-positive coefficients, and $\langle\rho_X(D),\xi\rangle + n_D \geq 0$ for all colours $D$ of $X$. This result is established in \cite{Br89}, after some analysis of the action of $G$ on $\Gamma(X,\mathscr L)$ in relation with the usual induced action on $\mathbb C(X)$. It is useful to recall here the main idea; we start from the following equality of vector spaces: \[ \Gamma(X,\mathscr L) = \left\{ f\in\mathbb C(X)\;\;|\;\;(f)+\delta\geq 0\right\}. \] The restriction to the open $G$-orbit $G/H\subseteq X$ induces an inclusion: \[ \Gamma(X,\mathscr L) \subseteq \Gamma(G/H,\mathscr L). \] The sections in $\Gamma(G/H,\mathscr L)$ are quotients $f_1/f_2$ of regular functions on $G$ which are $H$-proper (with the same weight) under right translation, and such that the zeros of $f_2$ are ``less than or equal to'' $\delta$. Let $f_\delta\in\mathbb C[G]$ be a global equation of $\delta$ pulled back on $G$ via the projection $G\to G/H$ ($f_\delta$ is unique up to a multiplicative constant). It is a regular function on $G$ and it is $H$-proper under right translation; denote by $\lambda$ its $H$-weight and by $\mathbb C[G]^{(H)}_{\lambda}$ the set of all $H$-proper functions with that weight. We have: \[ \Gamma(G/H,\mathscr L) = \left\{ \frac{f}{f_\delta} \;\;|\;\; f\in\mathbb C[G]^{(H)}_{\lambda} \right\};\] and the map $f/f_\delta\mapsto f$ gives an inclusion $\Gamma(X,\mathscr L)\subseteq \mathbb C[G]^{(H)}_{\lambda}$, where the latter in turn provides the $G$-module structure of $\Gamma(X,\mathscr L)$ via the left translation action of $G$. In this way, the canonical section $\sigma_\mathscr L$ is represented by the constant rational function $1$, and it corresponds to the highest weight vector $f_\delta$. 
Notice that if $g\in G$ fixes $f_\delta$ under left translation, then $g$ acts on a section of $\mathscr L$ in the same way as it acts on the corresponding rational function with the usual action induced on $\mathbb C(X)$. \section{Simple immersions} \subsection{Main theorem} The basic examples are the two wonderful $SL_2$-varieties: $\mathbb P^1\times\mathbb P^1$ and $\mathbb P^2\cong\mathbb P(\mathrm{Sym}^2(\mathbb C^2))$. The former does not admit any immersion into the projective space of a simple $SL_2$-module, whereas $\mathbb P^2$ does (and even happens to be such a projective space), as we will see in \ref{ssect:nonrigid}. Notice that the open orbits of these two varieties are isomorphic resp.\ to $SL_2/T$ and $SL_2/N_{SL_2}T$, with $|N_{SL_2}T/T|=2$. \begin{thm} \label{thm:main} Let $X$ be a wonderful $G$-variety. There exist a simple $G$-module $V$ and a $G$-equivariant closed immersion $X\to\mathbb P(V)$ if and only if the stabilizer of any point of $X$ is equal to its normalizer. For any fixed $V$, this immersion is unique. \end{thm} Varieties satisfying the condition of this theorem are also called {\em strict}. The proof of the theorem will take place in section \ref{sect:proof}. We will also see in that section that we can characterise all simple modules admitting such an immersion for a given $X$. It is evident that any map as in the theorem is given by a linear system which corresponds to a simple submodule of $\Gamma(X,\mathscr L)$ for some ample line bundle $\mathscr L$. The easiest case is where $X$ has rank zero. Indeed, rank zero wonderful varieties are exactly the generalized flag varieties $G/Q$, $Q$ a parabolic subgroup of $G$. A part of the classical Borel-Weil theorem states that the space of global sections of any ample line bundle on $G/Q$ is an irreducible $G$-module, and it gives a closed immersion in the corresponding projective space. 
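For instance (a classical example, recalled only as an illustration): for $G=SL_2$ and $Q=B$ we have $G/B\cong\mathbb P^1$, the ample line bundles are the $\mathcal O(n)$ with $n\geq 1$, and \[ \Gamma(\mathbb P^1,\mathcal O(n))\cong \mathrm{Sym}^n(\mathbb C^2)^{*}, \qquad [x:y]\longmapsto [x^n:x^{n-1}y:\cdots:y^n]\in\mathbb P^n, \] the $n$-th Veronese embedding; the space of sections is a simple $SL_2$-module and the associated map is a closed immersion for every $n\geq 1$. 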
For $X$ of any rank, $\Gamma(X,\mathscr L)$ is not irreducible; however we have strong restrictions on the submodules we can consider: \begin{prop}\label{prop:notall} Let $\mathscr L$ be generated by global sections, and $V$ be a simple submodule of $\Gamma(X,\mathscr L)$. If the associated rational map $F\colon X\dashrightarrow\mathbb P(V^*)$ is regular and birational onto its image, then the centre of $G$ acts trivially on $X$ and the highest weight of $V$ is the ``highest possible'' among the simple modules occurring in $\Gamma(X,\mathscr L)$, which is the weight $\chi_\mathscr L$. \end{prop} \begin{proof} The assertion on the centre of $G$ is evident, since an element in the centre acts as a scalar on $V$ and thus trivially on $\mathbb P(V)$. Let $Z$ be the unique closed $G$-orbit on $X$: the restriction of $\mathscr L$ to $Z$ gives a $G$-equivariant linear map $\Gamma(X,\mathscr L)\to\Gamma(Z,\mathscr L|_Z)$, and $\Gamma(Z,\mathscr L|_Z)$ is a simple $G$-module by the Borel-Weil theorem. Take a point $z\in Z$ fixed by $T$, such that its stabilizer $Q$ is a parabolic subgroup with $BQ$ open in $G$. Then, $Q$ acts on the fiber at $z$ of $\mathscr L$ with the character $-\chi_\mathscr L$, thus the highest weight of $\Gamma(Z,\mathscr L|_Z)$ is $\chi_\mathscr L$, again by the Borel-Weil theorem. Therefore the map $\Gamma(X,\mathscr L)\to\Gamma(Z,\mathscr L|_Z)$ is zero on all simple submodules of $\Gamma(X,\mathscr L)$, except for the one having weight $\chi_\mathscr L$. Recall that all simple submodules appearing have highest weight of the form $\chi_\mathscr L + \xi$, where $\xi$ is a linear combination of spherical roots with nonpositive coefficients: we have shown that all simple submodules with $\xi\neq 0$ cannot give a regular map, since their sections restrict to zero on $Z$. \end{proof} \begin{definition} We denote $V_\mathscr L$ the simple submodule of $\Gamma(X, \mathscr L)$ having highest weight $\chi_\mathscr L$. 
\end{definition} The following proposition is related to our problem, and will be useful. \begin{prop}\label{prop:brionemb} \cite{Br97} Let $X$ be a wonderful variety: there exist a simple $G$-module $M$ and a vector $v\in M$ such that: \[ H \subseteq G_{[v]} \subseteq N_GH \] where $[v]\in\mathbb P(M)$ is the point corresponding to $v$ and $G_{[v]}$ is its stabilizer in $G$. Our $X$ is isomorphic to the normalization of the variety $\overline{G[v]}\subseteq \mathbb P(M)$ in the field $\mathbb C(G/H)$, which contains $\mathbb C(G/G_{[v]})$. \end{prop} Notice that if $\mathscr L$ is generated by its global sections, then the global sections belonging to $V_\mathscr L$ suffice to generate $\mathscr L$, as one can see from the proof of proposition \ref{prop:notall}. It is useful to state a slightly different version of theorem \ref{thm:main}, taking also into account proposition \ref{prop:notall}. \begin{thm}\label{thm:SI} Let $X$ be a wonderful $\overline{G}$-variety, where $\overline{G}$ is the adjoint group of $G$. Let $\mathscr L$ be an ample line bundle on $X$. Then the simple $G$-submodule $V_\mathscr L$ of $\Gamma(X,\mathscr L)$ gives a closed immersion $F_\mathscr L\colon X\to \mathbb P\left(V_\mathscr L^*\right)$ if and only if the following condition holds: \begin{itemize} \item[(R)]any wonderful $G$-subvariety $X'$ of rank $1$ of $X$ is rigid, i.e.\ its generic stabilizer $H'$ is equal to its normalizer $N_G(H')$. \end{itemize} which is also equivalent to the following combinatorial condition: \begin{itemize} \item[(R')]for any spherical root $\gamma$ of $X$, there exist no rank $1$ wonderful $\overline{G}$-variety $X'$ having spherical root $2\gamma$ and such that $S^p_X=S^p_{X'}$. \end{itemize} \end{thm} The theorem follows from lemma \ref{lemma:reduction} and from section \ref{sect:rank1}. 
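Condition (R) can be checked directly on the two basic $SL_2$-examples (a straightforward verification, consistent with the discussion above): for $X=\mathbb P^1\times\mathbb P^1$ the generic stabilizer is the maximal torus $T$ and \[ N_{SL_2}(T)/T\cong \mathbb Z/2\mathbb Z\neq\{1\}, \] so (R) fails, in accordance with the absence of a simple immersion; for $X=\mathbb P^2=\mathbb P(\mathrm{Sym}^2(\mathbb C^2))$ the generic stabilizer is $N_{SL_2}(T)$, which equals its own normalizer (any element normalizing $N_{SL_2}(T)$ normalizes its identity component $T$, hence lies in $N_{SL_2}(T)$), so (R) holds. 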
The equivalence of the two conditions (R) and (R') is easy: the spherical roots of $X$ and its wonderful subvarieties of rank $1$ are in bijection, in such a way that the subvariety $X^\gamma$ associated to $\gamma\in\Sigma_X$ satisfies $\Sigma_{X^\gamma}=\{\gamma\}$, and $S^p_{X^\gamma}=S^p_X$ (see \cite{Lu01}). Now the equivalence follows at once from the classification of rank $1$ varieties. \subsection{Reduction to rank $1$} Let $X$ be a wonderful $\overline G$-variety of any rank, with an ample line bundle $\mathscr L = \mathcal O(\delta)$ where $\delta = \sum_{D\in\Delta_X} n_D D$ ($n_D >0$ for all $D$). If $X$ has rank zero, our main result is immediate, as we have already noticed, thanks to the Borel-Weil theorem. The general case can be reduced to the study of the rank $1$ case. We begin with the following lemma, which can essentially be found in \cite{Lu02}: \begin{lem} \cite{Lu02} \label{lemma:luna} The following conditions are equivalent: \begin{enumerate} \item $F_\mathscr L$ is a closed immersion; \item the map $T_z F_\mathscr L$ between tangent spaces induced by $F_\mathscr L$ at a point $z$ of the closed $G$-orbit $Z$ of $X$ is injective; \item the restrictions of $F_\mathscr L$ to the rank $1$ wonderful sub-$G$-varieties of $X$ are closed immersions. \end{enumerate} \end{lem} The rank $1$ wonderful sub-$G$-varieties of $X$ are the intersections of any $r-1$ prime divisors stable under $G$ (the $D_i$'s of the definition) where $r = \mathrm{rank} X$. \begin{proof} The implications (1)$\Rightarrow$(2) and (1)$\Rightarrow$(3) are obvious. We begin with (2)$\Rightarrow$(1). The map $F_\mathscr L$ is $G$-equivariant and $Z$ is the unique closed $G$-orbit, therefore (2) ensures that $T_x F_\mathscr L$ is injective for all $x\in X$. This implies also that $F_\mathscr L$ is finite, a consequence of the Stein factorization and the finiteness of the fibers of $F_\mathscr L$.
Let $Z'=F_\mathscr L(Z)$: since $F_\mathscr L$ is finite, $Z\to Z'$ is an isomorphism. We also have that $F_\mathscr L^{-1}(Z')=Z$ since $Z$ is the unique closed $G$-orbit. The set $\{x\in X \;\;|\;\; F_\mathscr L^{-1}(F_\mathscr L(x))=\{x\}\}$ is a $G$-stable open subset of $X$, and it contains $Z$: therefore it is equal to the whole $X$; this shows that $F_\mathscr L$ is a closed immersion. We show that (3)$\Rightarrow$(2). For all spherical roots $\gamma$, let $(T_z X)_\gamma$ be the subspace of $T_z X$ where $T$ acts with weight $\gamma$. The tangent space $T_z X$ is the sum of the $(T_z X)_\gamma$'s (which are in direct sum), plus $T_z Z$ (with possibly non-trivial intersection with the previous subspaces). For all spherical roots $\gamma$ we have that $(T_z X)_\gamma + T_z Z$ is the tangent space at $z$ of some rank $1$ wonderful subvariety $X^\gamma$ of $X$, and (3) ensures that $T_z F_\mathscr L |_{T_z X^\gamma}$ is injective for all $\gamma$. This implies (2). \end{proof} Our approach will derive from the following lemma. Here we fix a representative $\dot{w}_0\in N_GT$ of the longest element in the Weyl group $N_GT/T$; recall that $P_X$ is the stabilizer of the open $B$-orbit of $X$. \begin{lem}\label{lemma:diffcond} Let $d$ be the dimension of $X$, and let $\sigma_\mathscr L$ be the canonical section of $\mathscr L$, viewed as the constant rational function $1$ on $X$. 
Then the simple submodule $V_\mathscr L$ of $\Gamma(X,\mathscr L)$ gives a closed immersion $F_\mathscr L\colon X\to \mathbb P\left(V_\mathscr L^*\right)$ if and only if there exist $u_1,\ldots,u_d \in R^u(P_X)$ such that the Jacobian matrix of the functions% \footnote{The Reader should not be confused here: we consider the sections of $\mathscr L$ as rational functions on $X$, but $G$ does not act on $\Gamma(X,\mathscr L)$ via the usual action on $\mathbb C(X)$ (see the end of section \ref{sect:def}).} $(u_1\dot{w}_0)\sigma_\mathscr L,\ldots, (u_d\dot{w}_0)\sigma_\mathscr L$ is nondegenerate at $z$. \end{lem} \begin{proof} Since $V_\mathscr L$ is simple, $G\sigma_\mathscr L$ spans the whole $V_\mathscr L$. This remains true if we replace $G$ by any non-empty open subset, such as $R^u(P_X) \dot{w}_0 P_X$. The section $\sigma_\mathscr L$ is an eigenvector under the action of $P_X$, so we have that $(R^u(P_X) \dot{w}_0) \sigma_\mathscr L$ spans $V_\mathscr L$. Therefore there is no harm in supposing that the map $F_\mathscr L$ is given (in coordinates) by sections of the form $(u \dot{w}_0) \sigma_\mathscr L$, for $u\in R^u(P_X)$. Now it follows from lemma \ref{lemma:luna} that the problem of having a closed immersion is local at $z$ and can be checked on the induced map on the tangent spaces. \end{proof} The advantage of this point of view is that we can use the {\em canonical chart} $X_{Z,B}$, which is the open set of $X$ where $\sigma_\mathscr L$ is non-zero. The notation $X_{Z,B}$ comes from the following equivalent definitions (see \cite{Br97}): \[ X_{Z,B} = \left\{ x\in X\;\;|\;\; \overline{Bx}\supseteq Z \right\} = X\setminus\bigcup_{D\in\Delta_X}D\] This is an affine open set; its stabilizer in $G$ is exactly $P_X$. The canonical chart has a very useful description: \begin{prop} \label{prop:canonicalchart} \cite{Br97} Let $X$ be any wonderful $G$-variety, and let $L$ be a Levi subgroup of $P_X$. We can suppose that $L$ contains $T$.
There exists an $L$-stable closed affine subvariety $M\subseteq X_{Z,B}$ which intersects $Z$ exactly in $z$, and such that we have a $P_X$-equivariant isomorphism: \[ \begin{array}{ccc} R^u(P_X)\times M & \longrightarrow & X_{Z,B} \\ (u,x) & \longmapsto & ux \end{array} \] where $P_X$ acts on the product in the following way: if we write an element in $P_X$ as $vl$ where $v\in R^u(P_X)$ and $l\in L$, then $vl(u,x)=(vlul^{-1},lx)$. Moreover, $M$ is an affine space where $(L,L)$ acts trivially and the whole $L$ acts linearly with weights the spherical roots of $X$. \end{prop} We are now able to reduce the problem to rank $1$ varieties. \begin{lem}\label{lemma:reduction} If theorem \ref{thm:SI} holds for all cuspidal wonderful varieties of rank $1$, then it holds for all wonderful varieties. \end{lem} \begin{proof} We maintain the notations of the theorem. We first prove that the theorem can be reduced to wonderful varieties of rank $1$, and then we reduce to the cuspidal case. Suppose now that the theorem holds for all rank $1$ wonderful varieties. Let $X$ be a wonderful variety of rank $r$ and suppose that it satisfies the condition (R) of the theorem. A spherical root $\gamma\in\Sigma_X$ is associated to a unique $G$-stable wonderful subvariety $X^\gamma\subseteq X$ of rank $1$, having spherical root $\gamma$ and such that $S^p_{X^\gamma}=S^p_X$: the condition (R) of the theorem holds for all these $X^\gamma$ too. Let $\mathscr L$ be an ample line bundle on $X$, and consider $\mathscr L^\gamma:= \mathscr L|_{X^\gamma}$. The hypothesis of our lemma applies, and the map $F_{\mathscr L^\gamma}\colon X^\gamma\to \mathbb P(V_{\mathscr L^\gamma}^*)$ associated to $\mathscr L^\gamma$ is a closed immersion. Thanks to lemma \ref{lemma:diffcond}, this fact can be expressed in terms of the Jacobian matrix at $z$ of some $G$-translates of the canonical section of $\mathscr L^\gamma$.
This section is actually $\sigma_\mathscr L$ restricted to $X^\gamma$, so we have proven that $(F_\mathscr L)|_{X^\gamma}$ is a closed immersion of $X^\gamma$ into $\mathbb P(V_\mathscr L^*)$, for all $\gamma$. We conclude that $F_\mathscr L$ is a closed immersion of the whole $X$ thanks to lemma \ref{lemma:luna}. Suppose, on the contrary, that $X$ does not satisfy the condition (R) of the theorem, for some $\gamma\in\Sigma_X$. Consider $X^\gamma$ as before: it does not satisfy the condition (R) and so it cannot be embedded in the projective space of any simple $G$-module thanks to the hypothesis of this lemma. Thus $\mathscr L$ here cannot give a closed immersion $X\to\mathbb P(V_\mathscr L^*)$ because it would restrict to a closed immersion of $X^\gamma$. We have reduced the problem to rank $1$ varieties; it remains to reduce it to the cuspidal case. Suppose that the theorem holds for all cuspidal wonderful varieties of rank $1$, and let $X$ be a non-cuspidal wonderful variety of rank $1$. Thus $X=G\times_Q Y$ where $Q$ is a proper parabolic subgroup of $G$ and $Y$ is a cuspidal wonderful $L$-variety of rank $1$ for $L$ a Levi subgroup of $Q$. We choose $Q$ and $L$ as in \ref{subsect:induction}, and take an ample line bundle $\mathscr L$ on $X$. Define\footnote{If we have a group $\Gamma$ acting on a set $A$ then we use the standard notation $A^\Gamma$ to denote the fixed points of $\Gamma$ in $A$.} $W= V_\mathscr L^{R^u(Q)}$, and observe that $Y=X^{R^u(Q)}$. The map $F_\mathscr L$ sends $Y$ into $\mathbb P(W^*)$, and $W$ is a simple $L$-module. Also, $X$ satisfies the condition (R) of the theorem if and only if $Y$ does. If $F_\mathscr L$ is a closed immersion of $X$ into $\mathbb P(V_\mathscr L^*)$ then its restriction to $Y$ is a closed immersion of $Y$ into $\mathbb P(W^*)$, associated to the ample line bundle $\mathscr L|_Y$.
Conversely, suppose that the map associated to $\mathscr L|_Y$ is a closed immersion of $Y$ into $\mathbb P(W^*)$; this implies that $T_z F_\mathscr L$ is injective if restricted to $T_z Y$. We also know thanks to the Borel-Weil theorem that $T_z F_\mathscr L$ is injective if restricted to $T_z Z$. The tangent space of $X$ at $z\in Z$ can be written as: $T_zX= T_zY + T_z Z$ (with non-zero intersection); we want to conclude that $T_z F_\mathscr L$ is injective on all $T_zX$. We use that $X= G\times_Q Y$: the quotient $T_zX/T_zY$ is $T$-isomorphic to $T_P(G/Q)$; the $T$-weights appearing in it do not belong to the span of the roots of the Levi factor $L\subset Q$. On the other hand, the $T$-weights appearing in $T_zY$ are those corresponding to the tangent space of the closed $L$-orbit of $Y$, with in addition the spherical root of $Y$: all of them lie in the weight lattice of $L$. Therefore none of the $T$-weights appearing in $T_zX/T_zY$ can appear also in $T_zY$; since $T_z F_\mathscr L$ is $T$-equivariant, we conclude that it is injective on all $T_zX$. Lemma \ref{lemma:luna} applies again and we conclude that $F_\mathscr L$ is a closed immersion of $X$ into $\mathbb P(V_\mathscr L^*)$. \end{proof} \section{Rank $1$ cuspidal wonderful varieties}\label{sect:rank1} \subsection{General considerations}\label{subsect:general} Now let $X$ be a rank $1$ wonderful $\overline G$-variety, having dimension $d$ and spherical root $\gamma$, with the ample line bundle $\mathscr L$ as in the previous section. The idea here is to use lemma \ref{lemma:diffcond} on the canonical chart $X_{Z,B}$. Proposition \ref{prop:canonicalchart} describes $X_{Z,B}$ as the product of $Z\cap X_{Z,B} = R^u(P_X)$ and an affine space $M$, of dimension equal to $\mathrm{rank} X=1$. The maximal torus $T$ acts linearly on $M$ via the spherical root of $X$. Therefore $X_{Z,B}\cong \mathbb A^{\dim R^u(P_X)+1}=\mathbb A^d$.
Our strategy is based on the fact that if we can find the section $\dot{w}_0\sigma_\mathscr L$ explicitly as a regular function on $X_{Z,B}$, then the condition of lemma \ref{lemma:diffcond} can be examined when working only on $X_{Z,B}$. Here it is convenient to consider the action of $P_X$ on $\mathbb C[X_{Z,B}]\supseteq\Gamma(X, \mathscr L)$ induced by the one on $\mathbb C(X)$, instead of that induced by the linearization of $\mathscr L$. This causes no harm, since $R^u(P_X)$ and $(L,L)$ fix the canonical section $\sigma_\mathscr L$, thus the two actions are the same for $R^u(P_X) (L,L)$. The difference in the action of $T$ is simply the shift by $\chi_\mathscr L$ of all weights (see the end of section \ref{sect:def}). Fix a global coordinate system $x_1, \ldots, x_{d-1}, y$ on $X_{Z,B}$ with the $x_i$'s relative to $R^u(P_X)$ and the $y$ relative to $M\cong \mathbb A^1$. Choose the $x_i$'s to be $T$-eigenvectors; this can be accomplished using the decomposition in root spaces $\mathfrak g_\alpha \subseteq \mathrm{Lie}(R^u(P_X))$ and the isomorphism $\exp\colon \mathrm{Lie}(R^u(P_X)) \to R^u(P_X)$. We will use the corresponding subset of positive roots as an alternative set of indexes for our variables: \[\{x_1, x_2, \ldots, x_{d-1} \} = \{ x_\alpha \;|\; \mathfrak g_\alpha\subseteq \mathrm{Lie}(R^u(P_X))\} = \{ x_\alpha \;|\; \alpha \in \mathrm{\Phi}^+\setminus\mathrm{\Phi}_{S^p_X} \} .\] Notice that the weight of the function $x_\alpha$ is the negative root $-\alpha$. If we think of $\dot{w}_0\sigma_\mathscr L$ as a regular function on $X_{Z,B}$, then it is a polynomial in these coordinates. Its zero locus is $\dot{w}_0\delta = \sum_{D\in\Delta_X}n_D (\dot{w}_0 D)$ intersected with $X_{Z,B}$. 
We have: \[ \dot{w}_0 \sigma_\mathscr L = \prod_{D\in\Delta_X} \left(\dot{w}_0 \sigma_{\mathcal O(D)}\right)^{n_D} \] It is convenient to write it also as a polynomial in $y$: \begin{equation} \label{formula:w0sigmaL} \dot{w}_0\sigma_\mathscr L = f_m(x) y^m + \ldots + f_1(x) y + f_0(x) \end{equation} where $x = (x_1,\ldots,x_{d-1})$. The function $y$ is the equation of $R^u(P_X)$ inside $X_{Z,B}$. So in particular $y$ is a $T$-eigenvector, with weight $-\gamma$. Since $\dot{w}_0 \sigma_\mathscr L$ is a $T$-eigenvector too, each summand $f_i(x)y^i$ must be a $T$-eigenvector with the same weight. This implies that $f_i(x)$ is a $T$-eigenvector, with weight a sum of negative roots. Finally, $f_0(x)$ is the equation (in $X_{Z,B}$) of $\dot{w}_0\delta|_{Z}$. The latter is the sum (with multiplicities) of some of the $B_-$-stable prime divisors of $Z$. The explicit equations of these prime divisors can easily be found; but often the following lemma suffices: \begin{lem}\label{lemma:weightsonZ} If $E$ is a colour of $Z$ and it is moved by $\alpha_i\in S$, then the $T$-weight of the equation of $\dot{w}_0 E$ on $R^u(P_X)$ is $\dot{w}_0(\omega_i)-\omega_i$ where $\omega_i$ is the fundamental dominant weight associated to $\alpha_i$. \end{lem} \begin{proof} Let $P_-=G_{-S^p_X}$ be the opposite subgroup of $P_X$ with respect to $T$: it is also the stabilizer of $z$, the unique point fixed by $B_-$ (see for example \cite{Br97}). Consider the pullback along $\pi:G\to G/P_- = Z$ of the colour $E$. Call $f_E$ its global equation on $G$: it is a $B$-eigenvector under the left translation action, of weight $\omega_i$. The map $\pi$ sends $R^u(P_X)$ isomorphically onto the canonical chart of $Z$. The equation of $\dot{w}_0 E$ on $R^u(P_X)$ corresponds then to the rational function $\frac{\dot{w}_0 f_E}{f_E}$ on $Z$ (where $\dot{w}_0$ acts on regular functions by left translation), and the lemma follows.
\end{proof} The multiplicities of the intersections between colours and $Z$ are given by the following proposition due to D.~Luna: \begin{prop} \label{prop:mult} Let $X$ be a wonderful variety (of any rank), with closed orbit $Z$ and let $D$ be a colour. Then $D$ intersects $Z$ with multiplicity $1$, unless $D$ is moved by $\alpha\in S$ such that $2\alpha\in\Sigma_X$; in this case $D$ intersects $Z$ with multiplicity $2$. \end{prop} \begin{proof} We use the Poincar\'e duality between divisors and curves. Using a Bialynicki-Birula decomposition of $X$ (see \cite{BB73}, \cite{BB76}), one can find a set of $B_-$-stable curves which are a dual basis of the basis of $\mathrm{Pic}(X)$ given by the colours. That is, for each $D\in\Delta_X$ there exists a $B_-$-stable curve $c_D$ in $X$ such that: \begin{itemize} \item[--] $c_D\cap D' = \emptyset$ if $D\neq D'$, \item[--] $c_D$ and $D$ intersect (transversally) in only one point. \end{itemize} On the other hand, in \cite{Br93} it is shown that all $B_-$-stable curves are the following: \begin{enumerate} \item curves $c_\alpha$ for $\alpha\in S\setminus S^p_X$, which are in $Z$ and are dual to the colours of $Z$; \item curves $c_\alpha^+$ and $c_\alpha^-$, for $\alpha\in S\cap\Sigma_X$; \item curves $c_\alpha'$ for $\alpha\in S\cap\frac{1}{2}\Sigma_X$. \end{enumerate} See \cite{Lu97} for a description of these curves. In the Chow ring (or, equivalently, in the cohomology) of $X$, we have the following relations: $[c_\alpha^+] + [c_\alpha^-] = [c_\alpha]$ and $[c_\alpha']=2[c_\alpha]$. On the other hand the curve $c_D$ can be chosen to be: \begin{enumerate} \item one of the $c_\alpha$, if $\alpha\notin \Sigma_X$ and $2\alpha\notin\Sigma_X$ where $\alpha\in S$ moves $D$; or \item one of the $c_\alpha^+$ or $c_\alpha^-$, if $\alpha\in\Sigma_X$ and $\alpha\in S$ moves $D$; or \item $c_\alpha'$ if $\alpha\in S$ moves $D$ and $2\alpha\in\Sigma_X$. 
\end{enumerate} Now the proposition follows from the fact that the multiplicity of the intersection between $D$ and $Z$ is $\langle [D], [c_\alpha]\rangle$. \end{proof} Therefore we have: \[ \dot{w}_0\delta|_{Z} = \sum_{D\in\Delta_X} n_D m_D \cdot \dot{w}_0(D\cap Z) \] where $m_D =2$ if $D$ is moved by a simple root $\alpha\in S\cap \frac{1}{2}\Sigma_X$, and $m_D =1$ otherwise. We conclude with some remarks which will help in determining $\dot{w}_0\sigma_\mathscr L$. We recall that $H$ is the stabilizer of a point in the open $B$-orbit. \begin{lem} \label{lemma:appear} If $H=N_GH$, then the coordinate $y$ must appear in the expression of $\dot{w}_0\sigma_\mathscr L$. Moreover, there exists a non-trivial line bundle $\mathscr L_0$, generated by global sections, such that $\dot{w}_0\sigma_{\mathscr L_0}$ does not have the form $f(x,y^n)$ for any $n>1$. \end{lem} \begin{proof} If $H=N_GH$ then proposition \ref{prop:brionemb} states that there exists a $G$-equivariant morphism $X\to\mathbb P(M)$ where $M$ is a simple $G$-module and $X$ is the normalization of the image of $X$ in its own field of rational functions. In particular, this morphism is birational onto its image. On the other hand, this map must be $F_{\mathscr L_0}$ for some $\mathscr L_0$ generated by global sections. Moreover, $F_{\mathscr L_0}$ is given in coordinates by sections of the form $u \dot{w}_0 \sigma_{\mathscr L_0}$ for $u\in R^u(P_X)$ (see proof of lemma \ref{lemma:diffcond}). The function $y\in\mathbb C[X_{Z,B}]$ viewed as a rational function on $X$ is stable under the action of $R^u(P_X)$. This analysis implies that if $y$ does not appear in $\dot{w}_0 \sigma_{\mathscr L_0}$ then the image of $X$ through $F_{\mathscr L_0}$ is equal to the image of $Z$, which is a contradiction. This implies also that $y$ appears in $\dot{w}_0 \sigma_{\mathcal O(D)}$ for some colour $D$, and thus in $\dot{w}_0 \sigma_{\mathscr L}$ for all ample $\mathscr L$.
Finally, if $\sigma_{\mathscr L_0}$ had the form $f(x,y^n)$ for some $n>1$, then all rational functions defining $F_{\mathscr L_0}$ would share this property. This would again contradict the fact that $F_{\mathscr L_0}$ is birational onto its image. \end{proof} At this point we have collected all the general information on $\dot{w}_0\sigma_\mathscr L$ and we must examine our cuspidal rank $1$ varieties one by one. We use the list in \cite{Wa96}, and maintain the notations of that paper: we recall that we work with $\overline G$-varieties. \subsection{Rank $1$ varieties with ${H\neq N_GH}$} \label{ssect:nonrigid} As one can see from the list in \cite{Wa96}, these cases are those which do not satisfy the condition (R) of theorem \ref{thm:SI}. They are: $\mathbf {(1\mathsf A,n=2})$, $\mathbf{(7\mathsf B)}$, $\mathbf{(7\mathsf C, n=2)}$ (which is actually $\mathbf{(7\mathsf B, n=2)}$) and $\mathbf{(13)}$: we have to prove that there is no line bundle $\mathscr L$ such that $F_\mathscr L$ is a closed immersion. We use the notations of section \ref{subsect:linebundles}. We have seen that the regular function $f_\delta$ on $G$ is an $H$-eigenvector with respect to the right translation, and that the associated morphism $F_\mathscr L\colon X\to \mathbb P(V_\mathscr L^*)$ is given by rational functions on $X$ of the form $g f_\delta/f_\delta$ for some element $g\in G$. Consider the $G$-morphism $G/H\to G/N_GH$; it is finite of degree $|N_GH/H|$ and it extends to a morphism $X\to\widetilde X$ where $\widetilde X$ is the wonderful embedding of $G/N_GH$. In cases $\mathbf{(7\mathsf B)}$, $\mathbf{(7\mathsf C, n=2)}$ and $\mathbf{(13)}$ we have that $f_\delta$ is also an eigenvector for $N_GH$. This is easy to prove using the fact that $X$ and $\widetilde X$ both have only one colour, and the one of $X$ is the inverse image of the one of $\widetilde X$.
This implies that the map $F_\mathscr L|_{G/H}\colon G/H \to \mathbb P(V_\mathscr L^*)$ factorizes through $G/N_GH$ for any ample $\mathscr L$, and thus $F_\mathscr L$ cannot be a closed immersion. It remains to examine the case $\mathbf {(1\mathsf A,n=2})$, which is the only one such that $f_\delta$ is not necessarily an eigenvector for the whole $N_GH$. We have $G=SL_2$, $\gamma=\alpha_1$, there are two colours $D^+,D^-$ and $S^p_X=\emptyset$. Here $X=\mathbb P^1\times\mathbb P^1$, and $G$ acts diagonally; $R^u(P_X)$ is one-dimensional. The closed $G$-orbit is $\mathbb P^1\subset X$ embedded diagonally, and it has only one colour $E$; the equation on $R^u(P_X)$ of $\dot{w}_0E$ is $x_1$. So the functions $y$ and $x_1$ have the same $T$-weight $-\alpha_1$, and we can conclude that the equations of $\dot{w}_0D^+\cap X_{Z,B}$ and $\dot{w}_0D^-\cap X_{Z,B}$ both have degree $1$ in $y$, with constant leading coefficient. Consider the line bundle $\mathscr L= \mathcal O(n_+D^+ + n_-D^-)$, with $n_+,n_->0$: \[ \dot{w}_0 \sigma_\mathscr L = (ay + x_1)^{n_+}(by + x_1)^{n_-} \] with $a\neq b\in\mathbb C$. Now $R^u(P_X)$ is the additive group $\mathbb C$, so if $u\in R^u(P_X)$: \[ u\cdot \dot{w}_0\sigma_\mathscr L (x_1,y) = (ay + x_1 - u)^{n_+}(by + x_1 - u)^{n_-} \] The derivatives of these functions calculated in $z=(0,0)$ are: \[ \begin{array}{ccl} \frac{\displaystyle\partial(u\cdot \dot{w}_0\sigma_\mathscr L)}{\displaystyle\partial x_1} (0,0) & = & (n_++n_-)(-u)^{n_++n_--1} \\[10pt] \frac{\displaystyle\partial(u\cdot \dot{w}_0\sigma_\mathscr L)}{\displaystyle\partial y} (0,0) & = & (an_++bn_-)(-u)^{n_++n_--1} \end{array} \] Here we see that we cannot find two elements $u_1,u_2\in R^u(P_X)$ such that the Jacobian of lemma \ref{lemma:diffcond} is nondegenerate at $z=(0,0)$, no matter which $n_+,n_-$ we choose: the two derivatives are proportional, with a ratio independent of $u$.
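This degeneracy can be double-checked with a small symbolic computation. The following sketch (hypothetical, not part of the paper's argument) uses sympy with the concrete exponents $n_+=3$, $n_-=2$ and verifies that the $2\times 2$ Jacobian determinant of lemma \ref{lemma:diffcond} vanishes identically in $u_1$, $u_2$:

```python
# Hypothetical sanity check for case (1A, n=2): the Jacobian of the
# translated sections is degenerate for every choice of u1, u2.
from sympy import symbols, diff, simplify

x1, y, u1, u2, a, b = symbols('x1 y u1 u2 a b')
n_plus, n_minus = 3, 2  # concrete positive multiplicities; any choice behaves the same

def translated_section(u):
    # u . w0(sigma_L), written as a function on the canonical chart
    return (a*y + x1 - u)**n_plus * (b*y + x1 - u)**n_minus

# 2x2 Jacobian of the two translated sections w.r.t. (x1, y), evaluated at z = (0,0)
J = [[diff(translated_section(u), v).subs({x1: 0, y: 0}) for u in (u1, u2)]
     for v in (x1, y)]
det = simplify(J[0][0]*J[1][1] - J[0][1]*J[1][0])
print(det)  # 0: the two rows are proportional, independently of u1, u2
```

Replacing the exponents by any other positive integers gives the same vanishing determinant, matching the case-independent conclusion.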
\subsection{Rank $1$ varieties with ${H=N_GH}$} These cases are those that satisfy the condition (R) of the theorem; thus we must prove that $F_\mathscr L$ is a closed immersion into $\mathbb P(V_\mathscr L^*)$. Those which are complete symmetric varieties can be omitted, since for them this fact is true (see \cite{DP83}); they are: $\mathbf {(1\mathsf A, n>2)}$, $\mathbf {(2)}$, $\mathbf {(4)}$, $\mathbf {(6\mathsf A)}$, $\mathbf {(8\mathsf B)}$, $\mathbf {(7\mathsf C, n>2)}$, $\mathbf {(8\mathsf C)}$, $\mathbf {(1\mathsf D)}$, $\mathbf {(6\mathsf D)}$, $\mathbf {(6)}$, $\mathbf {(12)}$ (again we use the labels and the ordering of \cite{Wa96}). The remaining cases are $\mathbf {(9\mathsf B)}$, $\mathbf {(11)}$, $\mathbf {(9\mathsf C)}$, $\mathbf {(14)}$, $\mathbf {(15)}$. Consider cases $\mathbf{(11)}$ and $\mathbf{(14)}$. $X$ has only one colour $D$. Examining the weight of the function $y$ and the weight of $f_0(x)$ (see formula \ref{formula:w0sigmaL}) for an ample line bundle $\mathscr L = \mathcal O(lD)$, one can apply lemma \ref{lemma:appear} and immediately conclude that $f_1(x)\neq 0$. We leave the easy details to the Reader. For these cases we can now apply the following: \begin{lem}\label{lemma:notaroot} If $\gamma$ does not belong to $\mathrm{\Phi}^+\setminus\mathrm{\Phi}_{S^p_X}$, and if $f_1(x)\neq 0$, then $F_\mathscr L$ is a closed immersion. \end{lem} \begin{proof} We begin with some considerations about the condition in lemma \ref{lemma:diffcond}. Since $z$ has coordinates $(0,\ldots,0)$ in our canonical chart, this condition is equivalent to the existence of $u_1, \ldots, u_d\in R^u(P_X)$ such that the matrix: \[ \left( \begin{array}{ccccc} \left.\frac{\partial(u_1\cdot f_0)}{\partial x_1}\right|_{x=0} & \left.
\frac{\partial(u_2\cdot f_0)}{\partial x_1}\right|_{x=0} & \ldots & \left.\frac{\partial(u_{d-1}\cdot f_0)}{\partial x_1}\right|_{x=0} & \left.\frac{\partial(u_d\cdot f_0)}{\partial x_1}\right|_{x=0}\\[6pt] \left.\frac{\partial(u_1\cdot f_0)}{\partial x_2}\right|_{x=0} & \left.\frac{\partial(u_2\cdot f_0)}{\partial x_2}\right|_{x=0} & \ldots & \left.\frac{\partial(u_{d-1}\cdot f_0)}{\partial x_2}\right|_{x=0} & \left.\frac{\partial(u_d\cdot f_0)}{\partial x_2}\right|_{x=0}\\[6pt] \vdots & \vdots & \ddots & \vdots & \vdots \\[6pt] \left.\frac{\partial(u_1\cdot f_0)}{\partial x_{d-1}}\right|_{x=0} & \left.\frac{\partial(u_2\cdot f_0)}{\partial x_{d-1}}\right|_{x=0} & \ldots & \left.\frac{\partial(u_{d-1}\cdot f_0)}{\partial x_{d-1}}\right|_{x=0} & \left.\frac{\partial(u_d\cdot f_0)}{\partial x_{d-1}}\right|_{x=0}\\[6pt] \left.(u_1 \cdot f_1)\right|_{x=0} & \left.(u_2 \cdot f_1)\right|_{x=0} & \ldots & \left.(u_{d-1} \cdot f_1)\right|_{x=0} & \left.(u_d \cdot f_1)\right|_{x=0} \\ \end{array} \right) \] is nondegenerate. Notice that one can always find at least some elements $u_1,\ldots,u_{d-1}$ such that the upper left $(d-1)\times(d-1)$ minor is non-zero, because it corresponds to the map $F_\mathscr L$ restricted to $Z$ and this map is a closed immersion. The nondegeneracy of the above matrix is equivalent to the fact that the function $\left.f_1(u\cdot x)\right|_{x=0}= f_1(u)$ is not a linear combination of the functions: \begin{equation} \label{formula:lindep} \left.\frac{\partial(f_0(u\cdot x))}{\partial x_1}\right|_{x=0}, \ldots, \left.\frac{\partial(f_0(u\cdot x))}{\partial x_{d-1}}\right|_{x=0} \end{equation} as functions of the variable $u\in R^u(P_X)$. Now it is quite easy to prove that the function: \[ \left.\frac{\partial(f_0(u\cdot x))}{\partial x_i}\right|_{x=0} \] is a $T$-eigenvector, with weight the weight of $f_0$ minus the weight of the function $x_i$. On the other hand, the difference between the weights of $f_0$ and $f_1$ is exactly $-\gamma$. 
We switch to the appropriate set of positive roots as indexes for our variables; if we have: \[ \sum_{\mathrm{\Phi}^+\setminus\mathrm{\Phi}_{S^p_X}} \mu_\alpha \left.\frac{\partial(f_0(u\cdot x))}{\partial x_\alpha}\right|_{x=0} = f_1(u) \] then $\mu_\alpha$ is zero except for $\alpha = \gamma$. The hypothesis of the lemma is just that there is no such $\alpha$. \end{proof} We are left with cases $\mathbf {(9\mathsf B)}$, $\mathbf {(9\mathsf C)}$, $\mathbf {(15)}$. All of them have two colours, moved by two different simple roots; to begin with, we can treat them in a unified way. Let $D_1$ and $D_2$ be the two colours: they intersect $Z$ with multiplicity $1$ (proposition \ref{prop:mult}); define $E_i = D_i \cap Z$ ($i=1,2$). Call $\varphi_i(x)$ the equation on $R^u(P_X)$ of $\dot{w}_0 E_i$. We have: \[ \begin{array}{l} \dot{w}_0 \sigma_{\mathcal O(D_1)}\left(x,y\right) = \ldots + a(x) y + \varphi_1(x) \\[5pt] \dot{w}_0 \sigma_{\mathcal O(D_2)}\left(x,y\right) = \ldots + b(x) y + \varphi_2(x) \end{array} \] We take $\mathscr L=\mathcal O(lD_1 + sD_2)$ with $l,s>0$. Using the notation of formula \ref{formula:w0sigmaL} we have: \[ \begin{array}{l} f_0(x) = \varphi_1(x)^l \varphi_2(x)^s \\[5pt] f_1(x) = l a(x) \varphi_1(x)^{l-1} \varphi_2(x)^s + s b(x) \varphi_1(x)^l \varphi_2(x)^{s-1} \end{array} \] We can repeat the considerations in the proof of lemma \ref{lemma:notaroot}: the map $F_\mathscr L$ is a closed immersion if and only if there exists no $\mu\in\mathbb C$ such that: \[ \mu \left.\frac{\partial(f_0(u\cdot x))}{\partial x_\gamma}\right|_{x=0} = f_1(u) \] where $\gamma$ is the spherical root of $X$.
Using the expression of $f_1$ and $f_0$ and dividing by $\varphi_1^{l-1}\varphi_2^{s-1}$, this equation becomes: \[ \mu\left(l \varphi_2(u) \left.\frac{\partial(\varphi_1(u\cdot x))}{\partial x_\gamma}\right|_{x=0} + s \varphi_1(u) \left.\frac{\partial(\varphi_2(u\cdot x))}{\partial x_\gamma}\right|_{x=0} \right) = l a(u) \varphi_2(u) + s b(u) \varphi_1(u) \] or equivalently: \begin{equation}\label{formula:final} l \varphi_2(u) \left( \mu \left.\frac{\partial(\varphi_1(u\cdot x))}{\partial x_\gamma}\right|_{x=0} - a(u) \right) = s \varphi_1(u) \left(b(u)- \mu \left.\frac{\partial(\varphi_2(u\cdot x))}{\partial x_\gamma}\right|_{x=0} \right) \end{equation} We examine our three cases one by one and prove that the equation above is impossible. \vspace{10pt}\noindent {\bf CASE $\mathbf{(9\mathsf B)}$} \noindent Our $\overline G$ is $PSO_{2n+1}(\mathbb C)$ ($n\geq 2$). The stabilizer $H$ of a point in the open $G$-orbit has a Levi factor isomorphic to $GL_n$, and its unipotent radical has Lie algebra isomorphic to $\bigwedge^2\mathbb C^n$ as a $GL_n$-module. If $e_1, \ldots, e_{2n+1}$ is the canonical basis of $\mathbb C^{2n+1}$, we choose the symmetric form $(\cdot,\cdot)$ that defines the group $PSO_{2n+1}$ to be given by: $(e_i, e_j)= 1$ if $j=2n+2-i$, $(e_i,e_j)=0$ otherwise. With this choice, the Lie algebra $\mathfrak{so}_{2n+1}$ is the set of matrices that are skew symmetric around the skew diagonal. We can choose $B$ to be the (classes of) upper triangular matrices in $G$, and $T$ the (classes of) diagonal matrices. Call $\alpha_1, \ldots, \alpha_n$ the simple roots associated to $B$ and $T$, numbered in the usual way. The spherical root of $X$ is: \[ \gamma=\alpha_1+\ldots+\alpha_n \] The two colours $D_1, D_2$ are moved resp. by $\alpha_1$ and $\alpha_n$. 
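The coefficient extraction for $f_0$, $f_1$ and the rearrangement into equation \ref{formula:final} can be sanity-checked symbolically. The sketch below (hypothetical, with sympy) uses the concrete exponents $l=2$, $s=3$ and treats $\varphi_i$, $a$, $b$ and the derivatives $\partial\varphi_i/\partial x_\gamma$ as independent symbols:

```python
# Hypothetical check of f_0, f_1 and of the factorized identity, with sympy.
from sympy import symbols, expand, simplify

y, phi1, phi2, a, b, dphi1, dphi2, mu = symbols('y phi1 phi2 a b dphi1 dphi2 mu')
l, s = 2, 3  # concrete positive multiplicities

# w0(sigma_L) = (... + a y + phi1)^l (... + b y + phi2)^s; the omitted y^2-terms
# of each factor cannot contribute to the coefficients of y^0 and y^1.
prod = expand((a*y + phi1)**l * (b*y + phi2)**s)
f0 = prod.coeff(y, 0)
f1 = prod.coeff(y, 1)
assert simplify(f0 - phi1**l * phi2**s) == 0
assert simplify(f1 - (l*a*phi1**(l-1)*phi2**s + s*b*phi1**l*phi2**(s-1))) == 0

# d(f0)/dx_gamma by the chain rule, with dphi_i standing for d(phi_i)/dx_gamma;
# mu*df0 = f1, divided by phi1^(l-1) phi2^(s-1), is the factorized identity:
df0 = l*phi1**(l-1)*dphi1*phi2**s + s*phi1**l*phi2**(s-1)*dphi2
residual = simplify((mu*df0 - f1)/(phi1**(l-1)*phi2**(s-1))
                    - (l*phi2*(mu*dphi1 - a) - s*phi1*(b - mu*dphi2)))
print(residual)  # 0
```

The vanishing residual confirms that the two displayed forms of the condition are equivalent after dividing by $\varphi_1^{l-1}\varphi_2^{s-1}$.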
The corresponding functions $\varphi_1(x)$ and $\varphi_2(x)$ are irreducible polynomials, of weights resp.: \[ \begin{array}{rcl} \dot{w}_0(\omega_1)-\omega_1 & = & - 2\gamma \\[5pt] \dot{w}_0(\omega_n)-\omega_n & = & - (\alpha_1 + 2\alpha_2 + \ldots + n\alpha_n) \end{array} \] This gives more precise information on $\dot{w}_0\sigma_{\mathcal O(D_1)}$ and $\dot{w}_0\sigma_{\mathcal O(D_2)}$: \[ \begin{array}{l} \dot{w}_0 \sigma_{\mathcal O(D_1)}\left(x,y\right) = c y^2 + a(x) y + \varphi_1(x) \\[5pt] \dot{w}_0 \sigma_{\mathcal O(D_2)}\left(x,y\right) = b(x) y + \varphi_2(x) \end{array} \] where $c\in\mathbb C$. We see here that $a(x)$ and $b(x)$ cannot both be zero, by lemma \ref{lemma:appear}. The weight of $a(x)$ is $\dot{w}_0(\omega_1)-\omega_1 + \gamma$, and the weight of $b(x)$ is $\dot{w}_0(\omega_n)-\omega_n + \gamma$. This, and the irreducibility of $\varphi_1(x)$ and $\varphi_2(x)$, imply that if equation \ref{formula:final} is true then both sides are zero. We examine the functions $a(x)$ and $\varphi_1(x)$. Here $P_X$ is $G_{S\setminus\{\alpha_1,\alpha_n\}}$, but the colour $D_1$ is stable under the larger parabolic subgroup $G_{S\setminus \{\alpha_1\}}$. In the same way $\dot{w}_0 D_1$ is stable under $G_{- S\setminus \{\alpha_1\}}$. This means that $\dot{w}_0 \sigma_{\mathcal O(D_1)}\left(x,y\right)$ will be invariant under the intersection of $R^u(P_X)$ with these two parabolic subgroups, and so will the function $a(x)$. Denote this intersection by $K$.
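The two weight identities displayed above can be checked numerically in the standard realization of the root system $B_n$ ($\alpha_i = e_i - e_{i+1}$ for $i<n$, $\alpha_n = e_n$, and $w_0=-1$, whence $\dot{w}_0(\omega)-\omega = -2\omega$). A hypothetical Python sketch, not part of the argument:

```python
# Hypothetical numerical check of the weight identities used for case (9B),
# in the standard realization of B_n.
from fractions import Fraction

n = 5  # any n >= 2 works

def e(i):  # i-th standard basis vector of R^n (1-indexed)
    return tuple(Fraction(int(j == i)) for j in range(1, n + 1))

def add(*vs):
    return tuple(sum(c) for c in zip(*vs))

def scale(c, v):
    return tuple(Fraction(c) * x for x in v)

alpha = {i: add(e(i), scale(-1, e(i + 1))) for i in range(1, n)}
alpha[n] = e(n)

gamma = add(*(alpha[i] for i in range(1, n + 1)))   # spherical root alpha_1+...+alpha_n
omega1 = e(1)
omegan = scale(Fraction(1, 2), add(*(e(i) for i in range(1, n + 1))))

# weight of phi_1:  w0(omega_1) - omega_1 = -2*gamma
assert scale(-2, omega1) == scale(-2, gamma)
# weight of phi_2:  w0(omega_n) - omega_n = -(alpha_1 + 2*alpha_2 + ... + n*alpha_n)
assert scale(-2, omegan) == scale(-1, add(*(scale(i, alpha[i]) for i in range(1, n + 1))))
print("B_%d weight identities verified" % n)
```

Changing `n` checks the identities for other ranks; both reduce to the telescoping sums $\gamma = e_1$ and $\sum_i i\,\alpha_i = e_1+\ldots+e_n$.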
If we represent $\mathsf x\in\mathrm{Lie}(R^u(P_X))$ as a $(2n+1)\times(2n+1)$ upper triangular matrix, then the coordinates on the first row are: \begin{footnotesize} \[ \begin{array} {lcl} x_1 & = & x_{\alpha_1} \\ x_2 & = & x_{\alpha_1 + \alpha_2} \\ & \vdots & \\ x_n & = & x_{\alpha_1+\alpha_2+\ldots+\alpha_{n-1}+\alpha_n} = x_\gamma \end{array} \;\;\; \begin{array}{lcl} x_{n+1} & = & x_{\alpha_1+\alpha_2+\ldots+\alpha_{n-1}+2\alpha_n} \\ x_{n+2} & = & x_{\alpha_1+\alpha_2+\ldots+2\alpha_{n-1}+2\alpha_n} \\ & \vdots & \\ x_{2n-1}& = & x_{\alpha_1+2\alpha_2+\ldots+2\alpha_{n-1}+2\alpha_n} \end{array} \] \end{footnotesize} Label the remaining coordinates $x_{2n}$, $x_{2n+1}$, etc. With these notations we have: \[ \mathrm{Lie}(K) = \left\{ x_i = 0 \;|\; i = 1,\ldots, 2n-1 \right\} \] The weight of $a(x)$ and its invariance under $K$ tell us that: \[ a(x) = \nu x_\gamma \] for some $\nu\in \mathbb C$. In order to find the function $\varphi_1(x)$, we use the analysis contained in the proof of lemma \ref{lemma:weightsonZ}. The function $f_{D_1\cap Z}$ (with the notation of that lemma) is a regular function on $PSO_{2n+1}$ which is a $B$-eigenvector under left translation and a $B_-$-eigenvector under right translation; moreover $D_1$ is stable under the action of $G_{S\setminus \{\alpha_1\}}$. Thanks to these facts we easily find that $\varphi_1(x)$ is the matrix entry in the upper right corner, if we represent $x\in R^u(P_X)$ as the class of an upper triangular matrix in $SO_{2n+1}$ having all $1$'s on the diagonal. The expression in our coordinates is more complicated: \[ \varphi_1(x) = 2 x_1x_{2n-1} + 2 x_2x_{2n-2} + \ldots + x_n^2 + \mathrm{ terms\; of\; higher\; degree }\] The exponential $\mathrm{Lie}(R^u(P_X))\to R^u(P_X)$ can be expressed explicitly using the fact that matrices in $\mathrm{Lie}(R^u(P_X))$ are nilpotent; we leave this exercise to the Reader.
Now we can write the multiplication on $R^u(P_X)$ in terms of our coordinates, and it is not difficult to conclude that: \[ \left.\frac{\partial(\varphi_1(u\cdot x))}{\partial x_\gamma}\right|_{x=0} = u_{\alpha_1}u_{\gamma-\alpha_1} + u_{\alpha_1+\alpha_2} u_{\gamma-\alpha_1-\alpha_2}+\ldots+u_{\gamma-\alpha_n}u_{\alpha_n} + 2 u_\gamma \] The above expression is not a scalar multiple of $a(u)$. We conclude that the left hand side of equation \ref{formula:final} is zero if and only if $\mu= \nu =0$. But now $b$ cannot be zero, and hence the right hand side of equation \ref{formula:final} cannot be zero. \vspace{10pt}\noindent {\bf CASE $\mathbf{(9\mathsf C)}$} \noindent This case is quite similar to the one above. We have $\overline G= PSp_{2n}$, $n\geq 3$ (the case $n=2$ being equal to case $\mathbf{(9\mathsf B)}$). The stabilizer $H$ of a point in the open $G$-orbit has a Levi factor isomorphic to $\mathbb C^\times\times Sp_{2n-2}$ (up to a central isogeny), and its unipotent radical has Lie algebra isomorphic to $\mathbb C$ as a $\mathbb C^\times$-module, where $Sp_{2n-2}$ acts trivially. If $e_1, \ldots, e_{2n}$ is the canonical basis of $\mathbb C^{2n}$, we choose the skew-symmetric form $(\cdot,\cdot)$ which defines the group $PSp_{2n}$ to be given by: $(e_i, e_j)=1$ for $j=2n+1-i$ and $1\leq i \leq n$, $(e_i, e_j)=-1$ for $j=2n+1-i$ and $n+1\leq i \leq 2n$, $(e_i,e_j)=0$ otherwise. With this choice, the Lie algebra $\mathfrak{sp}_{2n}$ has the following form: \[ \mathfrak{sp}_{2n}=\left\{\left(\begin{array}{cc}A & B \\ C & \widetilde A\end{array}\right)\right\}\] where $A$ is any $n\times n$-matrix, $B$ and $C$ are $n\times n$-matrices symmetric around the skew diagonal, and $\widetilde A$ is the transpose of $-A$ around the skew diagonal. We can choose $B$ to be the (classes of) upper triangular matrices in $\overline G$, and $T$ the (classes of) diagonal matrices. 
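The block description of $\mathfrak{sp}_{2n}$ just given can be checked mechanically: with $J$ the matrix of the chosen skew-symmetric form, a matrix $X$ lies in $\mathfrak{sp}_{2n}$ if and only if $X^T J + J X = 0$. A small stdlib-only sanity check of the stated block shape (the random integer entries and the choice $n=3$ are illustrative assumptions):

```python
import random

def build_J(n):
    # matrix of the form: (e_i, e_{2n+1-i}) = 1 for i <= n, = -1 for i > n
    d = 2 * n
    J = [[0] * d for _ in range(d)]
    for i in range(n):
        J[i][d - 1 - i] = 1
        J[d - 1 - i][i] = -1
    return J

def persymmetric(n, rnd):
    # symmetric around the skew diagonal: M[i][j] == M[n-1-j][n-1-i]
    M = [[rnd() for _ in range(n)] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            M[n - 1 - j][n - 1 - i] = M[i][j]
    return M

def build_sp_element(n, rnd):
    # X = [[A, B], [C, A~]] with B, C persymmetric and A~ the transpose
    # of -A around the skew diagonal, as described in the text
    A = [[rnd() for _ in range(n)] for _ in range(n)]
    B, C = persymmetric(n, rnd), persymmetric(n, rnd)
    At = [[-A[n - 1 - j][n - 1 - i] for j in range(n)] for i in range(n)]
    d = 2 * n
    X = [[0] * d for _ in range(d)]
    for i in range(n):
        for j in range(n):
            X[i][j], X[i][j + n] = A[i][j], B[i][j]
            X[i + n][j], X[i + n][j + n] = C[i][j], At[i][j]
    return X

def is_sp(X, J):
    # X in sp_{2n}  <=>  X^T J + J X = 0
    d = len(X)
    return all(sum(X[k][i] * J[k][j] + J[i][k] * X[k][j] for k in range(d)) == 0
               for i in range(d) for j in range(d))
```

Running `is_sp(build_sp_element(n, ...), build_J(n))` returns `True` for every choice of blocks, while a generic matrix (e.g. the identity) fails the condition.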
Call $\alpha_1, \ldots, \alpha_n$ the simple roots associated to $B$ and $T$, numbered in the usual way. The spherical root is: \[ \gamma=\alpha_1 + 2\alpha_2 + \ldots + 2\alpha_{n-1} + \alpha_n \] The colours $D_1$, $D_2$ are moved resp. by $\alpha_1$ and $\alpha_2$; the corresponding functions $\varphi_1(x)$ and $\varphi_2(x)$ have weights resp.: \[ \begin{array}{rcl} \dot{w}_0(\omega_1)-\omega_1 & = & - \gamma - \alpha_1 \\[5pt] \dot{w}_0(\omega_2)-\omega_2 & = & - 2\gamma \end{array} \] Again, these weights provide a more precise expression of the following functions: \[ \begin{array}{rcl} \dot{w}_0 \sigma_{\mathcal O(D_1)}\left(x,y\right) & = & a(x) y + \varphi_1(x) \\[5pt] \dot{w}_0 \sigma_{\mathcal O(D_2)}\left(x,y\right) & = & c y^2 + b(x) y + \varphi_2(x) \end{array} \] where $c\in \mathbb C$, and $a(x)$, $b(x)$ are not both zero. As in the previous case, if equation \ref{formula:final} is true then both sides are zero. The coordinates on $R^u(P_X)$ are: \begin{footnotesize} \[ \begin{array}{lcl} x_1 &=& x_{\alpha_1} \\ x_2 &=& x_{\alpha_1+\alpha_2} \\ &\vdots& \\ x_n &=& x_{\alpha_1+\ldots+\alpha_n} \\ x_{n+1} &=& x_{\alpha_1+\ldots+\alpha_{n-2}+2\alpha_{n-1}+\alpha_n} \\ &\vdots& \\ x_{2n-2} &=& x_{\gamma} \\ x_{2n-1} &=& x_{\gamma+\alpha_1} \end{array} \;\;\; \begin{array}{lcl} x_{2n} &=& x_{\alpha_2} \\ x_{2n+1} &=& x_{\alpha_2+\alpha_3} \\ &\vdots& \\ x_{3n-2} &=& x_{\alpha_2+\ldots+\alpha_n} \\ x_{3n-1} &=& x_{\alpha_2+\ldots+\alpha_{n-2}+2\alpha_{n-1}+\alpha_n} \\ &\vdots& \\ x_{4n-4} &=& x_{\gamma-\alpha_1} \\ \end{array} \] \end{footnotesize} Using the same technique as in the previous case, we find that $b$ must be invariant under the subgroup of $R^u(P_X)$ given by $x_i = 0 \; \forall i\neq 1$. This implies that: \[ b(x) = \nu (x_\gamma - x_{\alpha_1} x_{\gamma-\alpha_1}) + \widetilde b(x) \] where $\nu\in\mathbb C$ and $\widetilde b(x)$ is a polynomial that does not depend on the coordinates $x_\gamma$ and $x_{\gamma-\alpha_1}$. 
The function $\varphi_2(x)$ is the upper right $2\times2$-minor of $x\in R^u(P_X)$, and in our coordinates we have: \[ \left.\frac{\partial(\varphi_2(u\cdot x))}{\partial x_\gamma}\right|_{x=0} = 2u_\gamma - u_{\alpha_1} u_{\gamma-\alpha_1} \] The above expression is not a scalar multiple of $b(u)$, hence $b$ and $\mu$ are zero. But now $a$ cannot be zero, nor the left hand side of equation \ref{formula:final}. \vspace{10pt}\noindent {\bf CASE $\mathbf{(15)}$} \noindent Here our group $G$ is of type $\mathsf G_2$, $H$ has a Levi factor quotient of $\mathbb C^\times \times SL_2$, and the Lie algebra of its unipotent radical is isomorphic to $\mathbb C \oplus \mathbb C^2$ as a $\mathbb C^\times \times SL_2$-module. If $\alpha_1$ and $\alpha_2$ are the two simple roots (short and long, resp.), the spherical root of $X$ is: \[ \gamma = \alpha_1 + \alpha_2 \] The colours $D_1$ and $D_2$ are moved resp. by $\alpha_1$ and $\alpha_2$, so $B=P_X$. The functions $\varphi_1$ and $\varphi_2$ have weights: \[ \begin{array}{rcl} \dot{w}_0(\omega_1)-\omega_1 & = & - 4\alpha_1 - 2\alpha_2 \\[5pt] \dot{w}_0(\omega_2)-\omega_2 & = & - 6\alpha_1 - 4\alpha_2 \end{array} \] hence: \[ \begin{array}{rcl} \dot{w}_0 \sigma_{\mathcal O(D_1)}\left(x,y\right) & = & c(x) y^2 + a(x) y + \varphi_1(x) \\[5pt] \dot{w}_0 \sigma_{\mathcal O(D_2)}\left(x,y\right) & = & f(x) y^4 + e(x) y^3 + d(x) y^2 + b(x) y + \varphi_2(x) \end{array} \] In order to do explicit calculations on $R^u(P_X)$, we use the embedding of $\mathsf G_2$ inside $SO_8(\mathbb C)$ given by: \begin{footnotesize} \[ \mathrm{Lie}(G) = \left\{ \left( \begin{array}{cccccccc} a_1 & x_7 & x_2 & -x_4 & x_4 & x_5 & x_3 & 0 \\ x_6 & a_1+a_2 & x_4 & -x_5 & x_5 & x_1 & 0 & -x_3 \\ x_{11} & x_9 & a_2 & x_6 & -x_6 & 0 & -x_1 & -x_5 \\ -x_9 & -x_8 & x_7 & 0 & 0 & x_6 & -x_5 & -x_4 \\ x_9 & x_8 & -x_7 & 0 & 0 & -x_6 & x_5 & x_4 \\ x_8 & x_{12} & 0 & x_7 & -x_7 & -a_2 & -x_4 & -x_2 \\ x_{10} & 0 & -x_{12} & -x_8 & x_8 & -x_9 & -a_1-a_2 & -x_7 \\ 0 
& -x_{10} & -x_8 & -x_9 & x_9 & -x_{11} & -x_6 & -a_1 \end{array} \right)\right\} \] \end{footnotesize} for $a_1$, $a_2$, $x_1$, \ldots, $x_{12}$ coordinates on $\mathrm{Lie}(G)$. We have: \begin{footnotesize} \[ \mathrm{Lie}(R^u(Q)) = \left\{ \left( \begin{array}{cccccccc} 0 & 0 & x_2 & -x_4 & x_4 & x_5 & x_3 & 0 \\ x_6 & 0 & x_4 & -x_5 & x_5 & x_1 & 0 & -x_3 \\ 0 & 0 & 0 & x_6 & -x_6 & 0 & -x_1 & -x_5 \\ 0 & 0 & 0 & 0 & 0 & x_6 & -x_5 & -x_4 \\ 0 & 0 & 0 & 0 & 0 & -x_6 & x_5 & x_4 \\ 0 & 0 & 0 & 0 & 0 & 0 & -x_4 & -x_2 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & -x_6 & 0 \\ \end{array} \right)\right\} \] \end{footnotesize} These $x_i$ are our coordinates given by the root space decomposition, and if we label them using the corresponding roots, we have: \[ \begin{array}{lcl} x_1 &=& x_{3\alpha_1+\alpha_2} \\ x_2 &=& x_{\alpha_2} \\ x_3 &=& x_{3\alpha_1+2\alpha_2} \end{array} \;\;\; \begin{array}{lcl} x_4 &=& x_{\alpha_1+\alpha_2} \\ x_5 &=& x_{2\alpha_1+\alpha_2} \\ x_6 &=& x_{\alpha_1} \end{array} \] At this point we need the expression of the group operation on $R^u(P_X)$ in terms of these coordinates. This is an easy task since $\mathrm{Lie}(R^u(P_X))$ is nilpotent and thus the exponential map can be written explicitly; we leave this exercise to the Reader. As in case $\mathbf{(9\mathsf B)}$, the functions $a(x)$, $b(x)$, $c(x)$, $d(x)$, $e(x)$ and $f(x)$ are stable under some nontrivial subgroup of $R^u(P_X)$. The possible monomials occurring in $a(x)$ are $x_1$, $x_5 x_6$, $x_4 x_6^2$, $x_2 x_6^3$. Invariance under the translation by $\exp(\mathfrak g_{\alpha_2})$ tells us that: \[ a(x) = \mu_1 x_1 + (3\mu_2 - 6\mu_3) x_5 x_6 + \mu_2 x_4 x_6^2 + \mu_3 x_2 x_6^3 \] with $\mu_1, \mu_2, \mu_3\in \mathbb C$. There are sixteen possible monomials that can occur in $b(x)$. Unfortunately, lemma \ref{lemma:appear} does not help in proving that $a(x)$ or $b(x)$ is non-zero, due to the presence of $e(x)$ (which one can prove is actually non-zero!). 
We need a deeper analysis, which we describe leaving the computations to the Reader. We set up a system of coordinates $\xi_1,\ldots,\xi_{14}$ on the big cell $R^u(B) T R^u(B_-)\cong \mathbb C^6\times \left(\mathbb C^\times\right)^2 \times \mathbb C^6$ of $G$, using the exponential map for $R^u(B)$ and $R^u(B_-)$. We find the subgroup $H$ described in \cite{Wa96}, and we can express it in terms of the coordinates $\xi_1,\ldots,\xi_{14}$. In \cite{Wa96} we find the $B$-weight and the $H$-weight\footnote{As usual, $B$ acts on functions on $G$ by left translation and $H$ acts by right translation.} of $f_{D_2}\in\mathbb C[G]$, a global equation of $D_2$ pulled back to $G$ along the projection $\pi\colon G\to G/H$. These weights are enough to find $f_{D_2}$ up to a multiplicative constant, using the decomposition of $\mathbb C[G]$ as a $G\times G$-module. We obtain the rational function $F=\dot{w}_0 f_{D_2}/f_{D_2}\in\mathbb C(G/H)\subset \mathbb C(G)$. The function $F$ is nothing but $\dot{w}_0 \sigma_{\mathcal O(D_2)}$ expressed in our coordinates on the big cell of $G$. Fix an arbitrary point $p$ in the big cell, lying also inside $BH$. Since $p\in BH$, the point $\pi(p)$ lies inside the canonical chart $X_{Z,B}$ of $X$, as do all the points $\pi(u \cdot p)$ for $u\in R^u(P_X)$. More precisely, if $y_0\in\mathbb C$ is the value of the coordinate $y$ in the point $\pi(p)$, then: \[\left\{ \pi(u \cdot p) \;\;|\;\; u\in R^u(Q)\right\} = \{y = y_0 \}\subset X_{Z,B} \] Consider the function $F(u \cdot p)$ for $u\in R^u(P_X)$, expressed in terms of the coordinates $x(u)=(x_1(u),\ldots,x_{d-1}(u))$ of $u$. If $\pi(p)$ had all coordinates $x_1,\ldots,x_{d-1}$ equal to zero, then $F(u\cdot p)$ would actually be $\dot{w}_0 \sigma_{\mathcal O(D_2)}(x(u),y_0)$, and we could recover the functions $f(x)$, $e(x)$, $d(x)$, $b(x)$ and $\varphi_2(x)$ from it. 
The problem is that we have no control over the coordinates of $\pi(p)$ relative to $X_{Z,B}$, so we cannot go back and choose $p$ in order to make this happen. However, since $R^u(P_X)$ is unipotent, we have: \[ F(u \cdot p) = \dot{w}_0 \sigma_{\mathcal O(D_2)}(x(u),y_0) + \mathrm{ other \;terms\; depending\; on\; the\; coordinates\; of\; } p \] Now, the explicit calculations show that the possible monomials of $b(x)$ appear in $F(u \cdot p)$ with coefficients which depend only on $y_0$ (and not on the other coordinates of $p$). This implies that from the expression of $F(u\cdot p)$ we can actually recover $b(x)$: \[ \begin{array}{rl} b(x) =& \mu_4 \left(-720 x_3x_5 + 720 x_3x_4x_6 -240 x_3x_6^2x_2 + 360 x_1x_5x_2 -720 x_1x_4^2 + \right.\\[5pt] & 360 x_1x_4x_6x_2 -60 x_1x_6^2x_2^2 + 360 x_5^2x_4 -360 x_5^2x_6x_2 + \\[5pt] & \left. 60 x_5x_4x_6^2x_2 + 18 x_5x_6^3x_2^2 -8 x_4x_6^4x_2^2 + x_6^5x_2^3\right) \end{array} \] for $\mu_4\in \mathbb C\setminus \{0\}$. Therefore $b\neq 0$. The functions $\varphi_1$ and $\varphi_2$ can be found as in the previous cases, thanks to the inclusion $\mathsf G_2\subset SO_8$. Indeed, we can use the functions on $SO_8$ which are $\widetilde B$-eigenvectors under left translation and $\widetilde B_-$-eigenvectors under right translation, where $\widetilde B$ is a suitable Borel subgroup of $SO_8$. For $x\in R^u(P_X)\subset SO_8$, the function $\varphi_1(x)$ is the Pfaffian of the upper right $3\times3$-submatrix of $x$, and $\varphi_2(x)$ is the upper right $2\times2$-minor of $x$. 
The expressions in our coordinates are: \[ \begin{array}{rl} \varphi_1(x)=& 360 x_1 x_4 + 360 x_5^2 - 360 x_3 x_6 + 30 x_2 x_5 x_6^2 - x_2^2 x_6^4 \\[5pt] \varphi_2(x)=& \frac{1}{4} x_1^2x_2^2 -x_3^2 - \frac{3}{4} x_4^2 x_5^2 +x_2x_5^3 +x_1\left(x_4^3 - \frac{3}{2}x_2x_4x_5\right) + \\[5pt] & \frac{1}{240} \left( x_2^2 x_4^2 x_6^4 - x_2^3 x_5 x_6^4\right) - \frac{1}{21600}x_2^4x_6^6 + x_3\left(-x_4^2x_6 + x_2x_5x_6 + \frac{1}{10}x_2^2x_6^3 \right) \end{array} \] At this point, the formulas we have found and the multiplication on $R^u(P_X)$ in our coordinates are enough to express explicitly equation \ref{formula:final}; the proof that this equation cannot be true is straightforward. The results in this section, together with lemma \ref{lemma:reduction}, complete the proof of theorem \ref{thm:SI}. \section{Proof of Theorem \ref{thm:main}} \label{sect:proof} We begin proving the uniqueness of a $G$-equivariant closed immersion $F\colon X\to \mathbb P(V)$ for any fixed simple $V$, using theorem \ref{thm:SI}. \begin{lem} \label{lemma:restriction} Let $X$ be a wonderful variety admitting a $G$-equivariant closed immersion into the projective space of a simple $G$-module. Then the restriction of a line bundle from $X$ to $Z$, the unique closed orbit, gives an inclusion $\mathrm{Pic}(X)\subseteq \mathrm{Pic}(Z)$ where the latter is identified as usual with a sublattice of the integral weights. \end{lem} \begin{proof} Theorem \ref{thm:SI} guarantees that inside $X$ there is no $G$-stable subvariety being a parabolic induction of the $SL_2$-variety $\mathbb P^1\times\mathbb P^1$: this implies that there is no simple root moving two colours on $X$. By \cite{Lu01} (proposition 3.2), it follows that any colour $D$ on $X$ will be moved either by a single simple root $\alpha_D$, or by two orthogonal simple roots $\alpha_D, \alpha'_D$. 
In view of proposition \ref{prop:mult}, a colour $D$ will intersect $Z$ in the union of at most two colours of $Z$, in such a way that $D\cap Z$ corresponds to either $\omega_D$, or $2\omega_D$, or $\omega_D+\omega'_D$ (where $\omega_D$ is the fundamental dominant weight corresponding to $\alpha_D$). These three cases occur resp. when $D$ is moved by $\alpha_D\in S$ with $2\alpha_D \notin \Sigma_X$, or $D$ is moved by $\alpha_D\in S$ with $2\alpha_D\in\Sigma_X$, or $D$ is moved by $\alpha_D,\alpha'_D \in S$. In this situation, moreover, two different colours will be moved by two disjoint sets of simple roots, and this completes the proof. \end{proof} The identification of $\mathrm{Pic}(X)$ with a sublattice of the weights is exactly the map $\mathscr L \mapsto \chi_\mathscr L$. Fix $V$ and suppose we are given a closed immersion $F\colon X\to \mathbb P(V)$: thanks to the lemma, the highest weight of $V$ uniquely determines the line bundle giving this immersion. More precisely, there exists a unique line bundle $\mathscr L$ such that $V\cong V_{\mathscr L}^*$. This implies that $F$ is determined up to composition with elements of $GL(V)$. But we are dealing with $G$-equivariant maps: any $A\in GL(V)$ such that $A\circ F$ remains $G$-equivariant will have to commute with the $G$-action on $V$; by Schur's lemma, $A$ then acts as a scalar on $V$. This proves uniqueness of $F$. \vspace{10pt} \noindent {\bf Remark} The image of $\mathscr L \mapsto \chi_\mathscr L$ (for $\mathscr L$ varying among the ample line bundles on $X$) gives a subset of dominant weights which determines exactly all the simple $G$-modules whose projective space contains a copy of $X$. \vspace{10pt} Finally, it remains to prove that if $X$ has property (R) of theorem \ref{thm:SI} then it is strict, i.e. all the stabilizers of its points are equal to their normalizers. Let $x\in X$ and let $G_x$ be its stabilizer. 
The closure of the $G$-orbit of $x$ is a wonderful $G$-subvariety $Y$, whose generic stabilizer is $G_x$. This $Y$ has the property (R) as well, so there exists a simple $G$-module $V$ and a unique closed immersion $F\colon Y\to\mathbb P(V)$. Suppose that $G_x$ is different from its normalizer, and take an element in $N_G(G_x)\setminus G_x$. This element induces a non-trivial $G$-equivariant automorphism $\phi$ of $Y$; this is absurd because then $F$ and $F\circ\phi$ would be two different closed immersions of $Y$ in $\mathbb P(V)$. This finishes the proof of theorem \ref{thm:main}. \section{Ample and very ample line bundles} \label{sect:veryample} \begin{thm} \label{thm:veryample} Any ample line bundle on a wonderful variety is very ample. \end{thm} \begin{proof} For strict wonderful varieties, theorem \ref{thm:SI} ensures our result. For non-strict varieties, the problem can be reduced to rank $1$ cuspidal wonderful varieties exactly in the same way as in lemma \ref{lemma:reduction}. We remark that here we cannot ignore the cases where the centre of $G$ does not act trivially. Following again \cite{Wa96}, the non-strict cuspidal wonderful varieties of rank $1$ are: $\mathbf{(1\mathsf A)}$, $\mathbf{(3)}$, $\mathbf{(5\mathsf A)}$, $\mathbf{(7\mathsf B)}$, $\mathbf{(10)}$, $\mathbf{(5\mathsf D)}$, $\mathbf{(5)}$, $\mathbf{(13)}$. Some of these varieties have easy explicit descriptions, however there is no need for a case-by-case proof. Let $X$ be one of these varieties, $\gamma$ the spherical root of $X$, and $f_\gamma$ a rational function on $X$ which is a $B$-eigenvector of weight $\gamma$. From table 1 in \cite{Wa96}, we see that in all these cases $1/f_\gamma$ has poles of order $1$ on (all) the colour(s) of $X$, a zero of order $1$ on the closed orbit (by construction), and no other poles. Therefore $1/f_\gamma$ belongs to $\Gamma(X,\mathscr L)$ for any ample line bundle $\mathscr L$ associated to a sum of colours with positive coefficients. 
If we focus on the canonical chart as in section \ref{sect:rank1}, the function $1/f_\gamma$ is nothing but the function $y\in\mathbb C[X_{Z,B}]$ (up to a multiplicative constant). Let us use the notations of lemma \ref{lemma:diffcond}; thanks to the Borel-Weil theorem there exist $u_1,\ldots,u_{d-1}\in R^u(P_X)$ such that the Jacobian matrix of the functions $(u_1\dot{w}_0)\sigma_\mathscr L$, $\ldots$, $(u_{d-1}\dot{w}_0)\sigma_\mathscr L$ with respect only to the coordinates of $R^u(P_X)$ is nondegenerate in $z$. But now $y$ is among the global sections we can consider, and it is clear that the Jacobian matrix of the functions $(u_1\dot{w}_0)\sigma_\mathscr L,\ldots, (u_{d-1}\dot{w}_0)\sigma_\mathscr L, y$ with respect to all the coordinates is nondegenerate in $z$. Therefore in $\Gamma(X,\mathscr L)$ there are enough sections to give an immersion $X\to\mathbb P(\Gamma(X,\mathscr L)^*)$, and $\mathscr L$ is very ample. \end{proof}
\section{Introduction} \label{sec:intro} In supervised discriminative model learning, given a finite number of training data samples, optimal exploitation of the information content in the extracted features with respect to their class conditions is essential. Applications in various research fields have developed different domain-specific methods for feature learning and subsequent supervised model training \cite{Lemm:2011,Larranaga:2006,Jiang:2016}. Many exploratory applications in practice are further characterized by high-dimensional feature representations where the dimensionality reduction problem is to be addressed. One traditional approach towards supervised dimensionality reduction is \textit{feature selection}, referring to the process of selecting the most class-informative subset from the high-dimensional feature set and discarding others \cite{Guyon:2003}. Particularly, feature selection based on information theoretic criteria (e.g., maximum mutual information) has shown significant promise in earlier studies \cite{Battiti:1994,Kwak:2002}. Although selecting a class-relevant subset of features leads to intuitively interpretable and preferable learning algorithms, feature ranking and selection algorithms are known to potentially yield sub-optimal solutions due to their inability to thoroughly assess feature dependencies \cite{Erdogmus:2008,Torkkola:2008}. In that regard, \textit{feature transformation} based dimensionality reduction methods provide a more robust alternative \cite{Guyon:2003}, which have also been studied in the form of information theoretic projections or rotations \cite{Torkkola:2003,Hild:2006,Faivishevsky:2012}. These latter studies constitute the basis of our current work, in which we address the problem of learning feature transformations based on a maximum mutual information criterion between transformed features and their associated class labels using artificial neural networks. 
Rather than exhaustively estimating the mutual information quantity between continuous valued features and discrete valued class labels across training data samples \cite{Ross:2014,Gao:2017}, we claim that feature transformations under a maximum mutual information criterion can be obtained by using a stochastic estimate of the gradient of the mutual information. This feature transformation approach can be further realized as a dimensionality reduction neural network which: (1) can be trained via standard gradient descent, (2) reduces the inference time to a single forward pass through the learned network, and (3) simplifies the overall supervised dimensionality reduction problem by alleviating the need for heuristic and sub-optimal feature selection algorithms. In this paper we present MMINet, a generic dimensionality reduction neural network training procedure based on a maximum mutual information criterion between the network-transformed features and their associated class labels. We derive a stochastic estimate of the gradient of the mutual information between the continuous valued projected feature random variables and discrete valued class labels, and use this stochastic quantity for the loss function in artificial neural network learning. Furthermore, we formulate the training objective non-parametrically, relying on kernel density estimates to approximate the class-conditional probability densities in the projected feature space. We interpret our approach as determining a manifold on which transformations of the original features carry maximal mutual information with the class labels. Subsequently, feature selection becomes a special, sparse solution case among all possible solutions that MMINet can provide when it is restricted to a single linear layer architecture. 
For our empirical assessments, we demonstrate our results on publicly available high-dimensional biological microarray datasets for cancer diagnostics, in comparison to several conventional feature selection methods. The remainder of this article is organized as follows. In the upcoming Section~\ref{sec:relatedwork} we briefly present related work on feature selection and feature transformation based dimensionality reduction approaches, as well as some recent information theoretic neural network training studies. We then describe the proposed MMINet approach on feature transformation learning neural networks with maximum mutual information criterion in Section~\ref{sec:mminet}. As part of our experimental studies in Section~\ref{sec:expstudies}, we initially illustrate the limitations of a simple feature selection approach with a toy example in Section~\ref{sec:toy}. In Section~\ref{sec:expdata} we describe both the synthetically generated and the diagnostic biological data sets that we used in our empirical assessments. Subsequently we describe our implementations and present our results in Sections~\ref{sec:implementation} and \ref{sec:results}. We conclude the article with a discussion of our methodology, results, current limitations and potential improvements. \section{Related Work} \label{sec:relatedwork} Supervised dimensionality reduction by feature selection refers to selecting the most class-informative feature subset from a high-dimensional feature set based on a defined optimality criterion to maximize class separability \cite{Guyon:2003}. A theoretically optimal dimensionality reduction procedure for a specified classifier is to iteratively adjust a pre-determined feature dimensionality reduction framework until the best cross-validated decoding accuracy is achieved; such approaches are known as \textit{wrapper} methods (see Figure~\ref{fig:featwrapper}). 
One well-known example is the support vector machine (SVM) recursive feature elimination (RFE) approach \cite{Guyon:2002}. SVM-RFE is a wrapper feature selection method around an SVM classifier which uses backward elimination of features with the smallest model weights. Intuitively, as the dimensionality and the amount of training data increase, wrapper methods become computationally cumbersome and time consuming for model learning. \textit{Filter} methods provide an alternative in the form of feature ranking and subset selection algorithms based on a pre-defined optimality criterion (see Figure~\ref{fig:featfilter}). In particular, feature selection based on information theoretic criteria, where salient statistical properties of features can be exploited by a probabilistic dependence measure, has shown significant promise in supervised dimensionality reduction \cite{Battiti:1994,Kwak:2002,Peng:2005}. Feature selection methods offer the advantage of preserving original representations of the variables. This in turn translates into better and easier model interpretability, and makes them preferable depending on the learning application domain \cite{Garrett:2003,Lazar:2012}. Nevertheless, there exists significant evidence that feature ranking and selection algorithms can lead to sub-optimal solutions for class separability \cite{Erdogmus:2008,Torkkola:2008}. This argument can be simply illustrated by considering the case where two individually redundant features become informative jointly (as will be shown in Section~\ref{sec:toy}). Accordingly, feature transformation based dimensionality reduction methods can provide a more robust and viable alternative \cite{Guyon:2003,Hinton:2006} (see Figure~\ref{fig:feattrans}), which have also been demonstrated in the form of information theoretic linear projections or rotations \cite{Torkkola:2003,Hild:2006,Nenadic:2007,Zhang:2010,Faivishevsky:2012}. 
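The claim that individually redundant features can be jointly informative admits a short sanity check: for binary features with label $c = x_1 \oplus x_2$, each marginal mutual information $I(x_i; C)$ is zero while the joint $I(x_1, x_2; C)$ is one bit, so any marginal ranking criterion would discard both features. A stdlib-only sketch on hypothetical XOR-labelled toy data (illustrative, not the synthetic data set of Section~\ref{sec:expdata}):

```python
from collections import Counter
from math import log2

def mutual_information(pairs):
    """I(F;C) in bits from a list of (feature_value, label) samples."""
    n = len(pairs)
    joint = Counter(pairs)                 # empirical joint counts n(f, c)
    n_f = Counter(f for f, _ in pairs)     # marginal counts n(f)
    n_c = Counter(c for _, c in pairs)     # marginal counts n(c)
    # I = sum_{f,c} p(f,c) log2( p(f,c) / (p(f) p(c)) )
    return sum((n_fc / n) * log2(n * n_fc / (n_f[f] * n_c[c]))
               for (f, c), n_fc in joint.items())

# XOR-labelled data: each feature alone carries no class information
data = [((x1, x2), x1 ^ x2) for x1 in (0, 1) for x2 in (0, 1) for _ in range(25)]
mi_x1 = mutual_information([(x[0], c) for x, c in data])      # 0 bits
mi_x2 = mutual_information([(x[1], c) for x, c in data])      # 0 bits
mi_joint = mutual_information(data)                           # 1 bit
```

A filter method ranking features by `mi_x1` and `mi_x2` would score both features at zero, while the pair jointly determines the label exactly.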
We motivate our study in light of these works, where we aim to use standard gradient descent based artificial neural network training and inference pipelines to perform nonlinear maximum mutual information based feature transformations. We previously explored this idea for neurophysiological feature transformations in brain-computer interfaces \cite{Ozdenizci:2019}, which we re-address here in the context of neural networks. Recently, a different line of work has focused on estimating mutual information of high dimensional continuous variables over neural networks, initially proposed as mutual information neural estimation (MINE) \cite{Belghazi:2018}. From an unsupervised representation learning perspective, \cite{Hjelm:2018} extended MINE to learn powerful lower dimensional data representations that perform well on a variety of tasks, by maximizing the estimated mutual information between the input and output of a deep neural network encoder. More recently, \cite{Wen:2020} proposed to estimate the gradient of the mutual information rather than the quantity itself for similar representation learning setups, which was argued to provide a more stable estimate for unsupervised representation learning. These studies are particularly interested in learning unsupervised deep representations of continuous high-dimensional random variables from an information theoretic perspective, and successfully translate such criteria into the conventions of artificial neural network training. Going further towards application domains, neural network based information theoretic metric estimators have also demonstrated significant promise in various uses within diverse artificial intelligence settings. One such use case involves medical dialogue systems for automatic diagnosis \cite{Xia:2020}, where mutual information estimation models are embedded within a policy learning framework to enhance the reward function and encourage the model to select the most discriminative symptoms to make a diagnosis. 
Another example extends disentangled representation learning models by an information theoretic formulation for image classification and retrieval problems in computer vision \cite{Sanchez:2019}. Potential contemporary use cases can further extend to mobile cloud computing applications \cite{Ciobanu:2019}, as well as end-to-end deep learning models for communication systems with efficient mutual information based encoding \cite{Fritschek:2019}. \begin{figure} \centering \subfigure[Feature selection via wrapper methods] {\begin{tikzpicture}[thick,scale=0.4, every node/.style={transform shape}] \tikzstyle{sblock} = [cloud, rectangle, rounded corners=2pt, draw, fill=white!10, text width=16em, minimum height=9em, text centered] \tikzstyle{block} = [cloud, rectangle, rounded corners=2pt, draw, fill=white!10, text width=19em, minimum height=9em, text centered] \node [sblock, inner sep=0pt] (HD) {\Huge High-dim.\\\vspace{0.2cm}feature vectors}; \node [sblock, right = 1cm of HD] (SEL) {\Huge Selecting a\\\vspace{0.2cm}feature subset}; \node [sblock, right = 1cm of SEL] (LD) {\Huge Low-dim.\\\vspace{0.3cm}feature vectors}; \node [sblock, right = 1cm of LD] (CLS) {\Huge Classification\\\vspace{0.3cm}Model}; \node [block, right = 1cm of CLS] (CLA) {\Huge Classifier error\\\vspace{0.3cm}as a criterion}; \draw [draw, -latex'] (HD) -- (SEL); \draw [draw, -latex'] (SEL) -- (LD); \draw [draw, -latex'] (LD) -- (CLS); \draw [draw, -latex'] (CLS) -- (CLA); \draw [draw, -latex'] (CLA) |- ([yshift=10mm] LD.north) -| (SEL); \end{tikzpicture}\label{fig:featwrapper}} \subfigure[Feature selection via filter methods] {\begin{tikzpicture}[thick,scale=0.4, every node/.style={transform shape}] \tikzstyle{sblock} = [cloud, rectangle, rounded corners=2pt, draw, fill=white!10, text width=16em, minimum height=9em, text centered] \tikzstyle{block} = [cloud, rectangle, rounded corners=2pt, draw, fill=white!10, text width=19em, minimum height=9em, text centered] \node [sblock, inner sep=0pt] (HD) 
{\Huge High-dim.\\\vspace{0.2cm}feature vectors}; \node [block, right = 1cm of HD] (RNK) {\Huge Feature ranking\\\vspace{0.2cm}with a criterion}; \node [block, right = 1cm of RNK] (SEL) {\Huge Selecting the top\\\vspace{0.2cm}\textit{best} features}; \node [sblock, right = 1cm of SEL] (LD) {\Huge Low-dim.\\\vspace{0.3cm}feature vectors}; \node [sblock, right = 1cm of LD] (CLS) {\Huge Classification\\\vspace{0.3cm}Model}; \draw [draw, -latex'] (HD) -- (RNK); \draw [draw, -latex'] (RNK) -- (SEL); \draw [draw, -latex'] (SEL) -- (LD); \draw [draw, -latex'] (LD) -- (CLS); \end{tikzpicture}\label{fig:featfilter}} \subfigure[Feature transformation methods] {\begin{tikzpicture}[thick,scale=0.4, every node/.style={transform shape}] \tikzstyle{sblock} = [cloud, rectangle, rounded corners=2pt, draw, fill=white!10, text width=16em, minimum height=9em, text centered] \tikzstyle{block} = [cloud, rectangle, rounded corners=2pt, draw, fill=white!10, text width=19em, minimum height=9em, text centered] \node [sblock, inner sep=0pt] (HD) {\Huge High-dim.\\\vspace{0.2cm}feature vectors}; \node [block, fill=gray!20, right = 1cm of HD] (TRS) {\Huge Learning a feature\\\vspace{0.2cm}transformation}; \node [sblock, right = 1cm of TRS] (LD) {\Huge Low-dim.\\\vspace{0.3cm}feature vectors}; \node [sblock, right = 1cm of LD] (CLS) {\Huge Classification\\\vspace{0.3cm}Model}; \draw [draw, -latex'] (HD) -- (TRS); \draw [draw, -latex'] (TRS) -- (LD); \draw [draw, -latex'] (LD) -- (CLS); \end{tikzpicture}\label{fig:feattrans}} \caption{An illustration of common supervised dimensionality reduction approaches: (a) feature selection with wrapper methods which are particularly tailored for a classification model, (b) feature selection via filter methods which generally consider ranking and selection of features based on a pre-defined criterion independent of the classification model, (c) feature transformation approaches which aim to learn a mapping function based on an optimality criterion 
independent of the classification model.} \label{fig:selectiondiagrams} \end{figure} \section{MMINet: Information Theoretic Dimensionality Reduction Neural Network} \label{sec:mminet} \subsection{Problem Statement} \label{sec:formulation} Let $\{(\bm{x}_i,c_i)\}_{i=1}^{n}$ denote the finite training data set where $\bm{x}_i\in\mathbb{R}^{d_x}$ is a sample of a continuous valued random variable $\mathit{X}$, and $c_i\in\{1,\ldots,L\}$ is a sample of a discrete valued random variable $\mathit{C}$, indicating the discrete class label for $\bm{x}_i$. From a dimensionality reduction perspective, the objective is to find a mapping network $\varphi^\star:\mathbb{R}^{d_x}\mapsto\mathbb{R}^{d_y}$ such that the high $d_x$-dimensional input feature space is mapped to a lower $d_y$-dimensional transformed feature space while maximizing the mutual information between the transformed data and corresponding class labels based on the observations, as expressed by Equation~\eqref{eq:objective}. \begin{equation} \varphi^\star = \argmax_{\varphi\in\Omega} \{I(\mathit{Y},\mathit{C})\}, \label{eq:objective} \end{equation} where the continuous random variable $\mathit{Y}$ has transformed data samples $\bm{y}_i=\varphi^\star(\bm{x}_i;\bm{\theta}^\star)$ in a $d_y$-dimensional feature space, $\bm{\theta}$ denotes the parameters of the mapping $\varphi$, and $\Omega$ denotes the function space for possible feature mappings $\varphi$. In Bayesian optimal classification, upper and lower bounds on the probability of error in estimating a discrete valued random variable $\mathit{C}$ from an observational random variable $\mathit{Y}$ can be determined by information theoretic criteria (i.e., Fano's lower bound inequality \cite{Fano:1961} and Hellman-Raviv upper bound on Bayes error \cite{Hellman:1970}). 
Specifically, these bounds suggest that the lowest possible Bayes error of any given classifier can be achieved when the mutual information between the random variables $\mathit{Y}$ and $\mathit{C}$ is maximized (cf. \cite{Ozdenizci:2019,Torkkola:2003}). \subsection{Learning with Maximum Mutual Information Criterion} \label{sec:learning} Mutual information between the continuous random variable $\mathit{Y}$ and the discrete random variable $\mathit{C}$ is defined as $I(\mathit{Y},\mathit{C}) = H(\mathit{Y}) - H(\mathit{Y}\vert\mathit{C})$, which can also be expressed as in Equation~\eqref{eq:mutualinfo}. \begin{equation} \begin{split} I(\mathit{Y},\mathit{C}) = - \int_{\bm{y}} p(\bm{y})\log p(\bm{y})d\bm{y} + \int_{\bm{y}} \sum_{c} p(\bm{y},c)\log p(\bm{y} \vert c)d\bm{y}. \end{split} \label{eq:mutualinfo} \end{equation} To solve the objective in Equation~\eqref{eq:objective}, exact estimation of the mutual information quantity is not necessary. Instead, we are only interested in adaptively estimating the optimal feature mapping network parameters $\bm{\theta}$ under the maximum mutual information criterion. Motivated by similar work from information theory \cite{Erdogmus:2003,Chen:2008}, we approach the optimization problem stochastically. As illustrated in Figure~\ref{fig:itfldiagram}, the network parameters $\bm{\theta}$ will be iteratively updated based on the instantaneous estimate of the gradient of mutual information at each iteration $t$ (i.e., $\nabla_{\bm{\theta}}\widehat{I}_t(\mathit{Y},\mathit{C})$), which we define as the \textit{stochastic mutual information gradient (SMIG)}. During this network training procedure, we in fact approximate the true gradient of the mutual information $\nabla_{\bm{\theta}}I(\mathit{Y},\mathit{C})$ stochastically, and perform parameter updates based on the SMIG $\nabla_{\bm{\theta}}\widehat{I}_t(\mathit{Y},\mathit{C})$ evaluated with the instantaneous sample $\bm{y}_t$ and the values of $\bm{\theta}$ at iteration $t$.
This stochastic estimate quantity can be obtained by dropping the expectation operation over $\mathit{Y}$ from the true gradient given in Equation~\eqref{eq:mutualinfograd}. \begin{equation} \begin{split} \nabla_{\bm{\theta}}I(\mathit{Y},\mathit{C}) = \frac{\partial}{\partial\bm{\theta}} \Bigg[ - \int_{\bm{y}} p(\bm{y})\log p(\bm{y})d\bm{y} + \int_{\bm{y}} p(\bm{y}) \sum_{c} P(c \vert \bm{y})\log p(\bm{y} \vert c)d\bm{y} \Bigg]. \end{split} \label{eq:mutualinfograd} \end{equation} Subsequently, the expression for SMIG at iteration $t$ can be denoted by Equation~\eqref{eq:smig}. \begin{equation} \begin{split} \nabla_{\bm{\theta}}\widehat{I}_t(\mathit{Y},\mathit{C}) = \frac{\partial}{\partial\bm{\theta}} \Bigg[- \log \widehat{p}(\bm{y}_t) + \sum_{c} \widehat{P}(c \vert \bm{y}_t) \log \widehat{p}(\bm{y}_t \vert c)\Bigg]. \end{split} \label{eq:smig} \end{equation} \begin{figure} \centering \begin{tikzpicture}[thick,scale=0.4, every node/.style={transform shape}] \tikzstyle{empty} = [cloud, rectangle, text width=20em, text centered, minimum height=6em] \tikzstyle{sblock} = [cloud, rectangle, rounded corners=3pt, draw, fill=white!10, text width=20em, minimum height=9em, text centered] \tikzstyle{lblock} = [cloud, rectangle, rounded corners=3pt, draw, fill=white!10, text width=18em, minimum height=9em, text centered] \tikzstyle{gmghost} = [cloud, rectangle, rounded corners=3pt, draw, text width=15em, minimum height=9em, text centered] \tikzstyle{gmblock} = [cloud, rectangle, rounded corners=3pt, draw, fill=gray!20, text width=15em, minimum height=26em, text centered] \tikzstyle{loss} = [cloud, rectangle, rounded corners=3pt, dashed, draw, fill=white!10, text width=16em, minimum height=9em, text centered] \tikzstyle{arrow} = [draw, -latex'] \tikzstyle{eqtext} = [cloud, rectangle, text width=80em, text centered, minimum height=4em] \node [sblock] (HD) {\Huge High-dim.\\\vspace{0.3cm} feature vector: $\bm{x}_t$}; \node [empty, above = 0cm of HD] (DUMMY) {}; \node [sblock, 
above = 0cm of DUMMY] (TR) {\Huge Training Set\\\vspace{0.3cm}$\{(\bm{x}_i,c_i)\}_{i=1,i \ne t}^{n}$}; \node [gmghost, right = 1.7cm of HD] (G1) {}; \node [gmghost, right = 1.7cm of TR] (G2) {}; \node [gmblock, right = 1.7cm of DUMMY] (TRS) {\Huge MMINet\\\vspace{0.4cm} $\bm{y}=\varphi(\bm{x};\bm{\theta})$}; \node [sblock, right = 1.2cm of G1] (LD) {\Huge Low-dim.\\\vspace{0.4cm}feature vector: $\bm{y}_t$}; \node [sblock, right = 1.2cm of G2] (PR) {\Huge Transformed Set\\\vspace{0.5cm}$\{(\bm{y}_i,c_i)\}_{i=1,i \ne t}^{n}$}; \node [lblock, right = 1.2cm of LD] (MI) {\Huge Inst. Loss\\\vspace{0.4cm}$-\widehat{I}_t(\mathit{Y},\mathit{C})$}; \node [loss, below = 1cm of LD] (Grad) {\Huge SMIG\\\vspace{0.4cm}$-\nabla_{\bm{\theta}}\widehat{I}_t(\mathit{Y},\mathit{C})$}; \draw [arrow] (TR) -- (G2); \draw [arrow] (G2) -- (PR); \draw [arrow] (PR) -| (MI); \draw [arrow] (HD) -- (G1); \draw [arrow] (G1) -- (LD); \draw [arrow] (LD) -- (MI); \draw [arrow,dashed] (MI) |- (Grad); \draw [arrow,dashed] (Grad) -| (TRS); \end{tikzpicture} \caption{Stochastic training flow of MMINet which uses instantaneous training data samples $\bm{x}_t$ to calculate the instantaneous loss $-\widehat{I}_t(\mathit{Y},\mathit{C})$, and perform parameter updates based on its gradient (i.e., SMIG). Note that at every iteration $t$, the current transformed set samples based on the current $\bm{\theta}$ estimate are also needed to evaluate the instantaneous loss in Equation~\eqref{eq:instloss}.} \label{fig:itfldiagram} \end{figure} In the neural network training process, consistently with Figure~\ref{fig:itfldiagram}, we simply use $-\widehat{I}_t(\mathit{Y},\mathit{C})$ as the instantaneous loss to be backpropagated over the network for parameter updates at iteration $t$. Applying the Bayes' Theorem, the instantaneous loss estimate from Equation~\eqref{eq:smig} can be expressed via Equation~\eqref{eq:instloss}. 
\begin{equation} \begin{split} -\widehat{I}_t(\mathit{Y},\mathit{C}) = + \log \left( \sum_{c} \widehat{P}(c)\widehat{p}(\bm{y}_t \vert c) \right) - \sum_{c} \left(\frac{\widehat{P}(c)\widehat{p}(\bm{y}_t \vert c)}{\sum_{c} \widehat{P}(c)\widehat{p}(\bm{y}_t \vert c)} \right) \log \widehat{p}(\bm{y}_t \vert c), \end{split} \label{eq:instloss} \end{equation} where the class priors $\widehat{P}(c)$ are empirically determined over the training data samples, and $\widehat{p}(\bm{y}_t \vert c)$ at each iteration $t$ is approximated via non-parametric kernel density estimation \cite{Principe:2000} on class conditional distributions of the transformed data samples, as expressed in Equation~\eqref{eq:kde}. \begin{equation} \begin{split} \widehat{p}(\bm{y}_t \vert c) = \frac{1}{n_c}\sum_{j=1}^{n_c} \textbf{K}_{\textbf{H}}(\bm{y}_t-\bm{y}_j), \end{split} \label{eq:kde} \end{equation} where the index $j$ iterates over the training samples of the conditioned class $c$ and $n_c$ denotes the number of samples in that class. Since a continuously differentiable kernel choice is necessary for proper evaluation of the gradients, we use Gaussian kernels as denoted in Equation~\eqref{eq:gaussiankernel}. \begin{equation} \begin{split} \textbf{K}_{\textbf{H}}(\bm{y}_t-\bm{y}_j)=\frac{1}{(2\pi)^{d_y/2}|\textbf{H}|^{1/2}}e^{-\frac{1}{2}(\bm{y}_t-\bm{y}_j)^{\text{T}}\textbf{H}^{-1}(\bm{y}_t-\bm{y}_j)}, \end{split} \label{eq:gaussiankernel} \end{equation} with the kernel bandwidth matrix $\textbf{H}$ determined using Silverman's rule of thumb \cite{Silverman:1986}. Finally, note that the SMIG in Equation~\eqref{eq:smig} is a biased estimator of the true gradient of mutual information in Equation~\eqref{eq:mutualinfograd}, since it is based on kernel density estimators with finite samples, which are biased estimators \cite{Parzen:1962}.
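As a concrete sketch of Equations~\eqref{eq:instloss}--\eqref{eq:gaussiankernel}, the instantaneous loss can be evaluated with NumPy as below. This is an illustrative reimplementation rather than the authors' Chainer code: for simplicity the bandwidth matrix is taken as a fixed isotropic $\textbf{H}=h^2 I$ instead of being set by Silverman's rule, and the data shapes are placeholder assumptions.

```python
import numpy as np

def gaussian_kernel(diff, h):
    """Gaussian kernel K_H under the simplifying assumption H = h^2 * I."""
    d = diff.shape[-1]
    norm = (2.0 * np.pi) ** (d / 2) * h ** d
    return np.exp(-0.5 * np.sum(diff ** 2, axis=-1) / h ** 2) / norm

def instantaneous_loss(y_t, Y, labels, h=0.5):
    """Instantaneous loss -I_t(Y,C) at the transformed sample y_t.

    Y: (n, d_y) current transformed training set, labels: (n,) class labels.
    """
    classes = np.unique(labels)
    priors = np.array([np.mean(labels == c) for c in classes])    # empirical P(c)
    cond = np.array([gaussian_kernel(y_t - Y[labels == c], h).mean()
                     for c in classes])                           # KDE of p(y_t | c)
    cond = np.maximum(cond, 1e-300)  # guard against underflow far from all samples
    p_y = np.sum(priors * cond)      # p(y_t) = sum_c P(c) p(y_t | c)
    post = priors * cond / p_y       # P(c | y_t) by Bayes' theorem
    return np.log(p_y) - np.sum(post * np.log(cond))
```

Backpropagating this scalar through the mapping $\varphi$ would realize the SMIG update of Equation~\eqref{eq:smig}. Note that for a sample deep inside one class the loss approaches $\log \widehat{P}(c)$, i.e. it is lower (more informative) than at a point equidistant from all classes.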
An increase in the training data sample size per class can yield better class conditional kernel density estimates \cite{Hwang:1994} that can be exploited during the neural network optimization process. \section{Experimental Studies} \label{sec:expstudies} \subsection{An Illustrative Example} \label{sec:toy} We first demonstrate a simple example of how feature selection can lead to confounding results regarding class separability, as highlighted in Section~\ref{sec:intro}. We illustrate a two-class classification problem with two-dimensional data distributions such that there is significant overlap in distributions when an individual feature is selected (see Figure~\ref{fig:toy_results}). While class distributions are easily separable when both features are considered together, a feature selection between the two dimensions will lead to significant information loss. We subsequently show the projection results using a simple MMINet architecture with a single linear (dense) layer $\varphi(\bm{x};\bm{\theta})=\bm{W}\bm{x}$, where $\bm{W}$ is a $1\times 2$ projection matrix. We observe that maximum mutual information criterion based linear feature transformation ensures minimum probability of error based on the available training data samples. This example illustrates one setting in which feature selection leads to sub-optimal solutions for class separability. \subsection{Experimental Data} \label{sec:expdata} We evaluate our information theoretic dimensionality reduction approach on two different types of datasets. First, we perform feasibility assessments on a synthetically generated dataset, and later conduct experiments using three diagnostic biological microarray datasets. \subsubsection{Synthetically Generated Data} \label{sec:synthetic} Preliminary evaluations of our approach are performed using an artificially generated dataset based on a well-known benchmark for comparing learning algorithms \cite{Thrun:1991}.
We use the Monk3 Dataset, from the MONK's problems \cite{Thrun:1991}, which poses a binary classification task in which 432 data samples are described by $d_x=6$ features $(x_1,\ldots,x_6)$. For each data sample, binary class labels are obtained by the following logical operation: $(x_5=3 \wedge x_4=1) \lor (x_5\ne 4 \wedge x_2\ne 3)$. Of the 432 data samples, 5\% have noisy labels. Overall, the problem implies that there are only three relevant features $(x_2,x_4,x_5)$ to infer the class label, and the remaining three features are redundant. \subsubsection{High-Dimensional Diagnostic Biological Data} \label{sec:biodata} We perform further empirical assessments using high-dimensional biological microarray data from the following three datasets: (1) the Breast Cancer Wisconsin Diagnostic Dataset \cite{Dua:2019}, consisting of 569 samples of 30-dimensional features extracted from digitized images of a fine needle aspirate of a breast mass, describing cell characteristics where the cell is classified as either malignant or benign; (2) the Glioma Dataset \cite{Nutt:2003}, containing 50 samples of four-class data (i.e., cancer/non-cancer glioblastomas or oligodendrogliomas) defined by high-dimensional microarray gene expression data of 4434 features; (3) the Lung Carcinoma Dataset \cite{Bhattacharjee:2001}, containing 203 samples in five classes (adenocarcinomas, squamous cell lung carcinomas, pulmonary carcinoids, small-cell lung carcinomas, and normal lung), defined by 3312 mRNA gene expression variables.
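For concreteness, the noise-free Monk3 labeling rule above can be written directly in code. This is a minimal sketch for illustration; the 5\% label noise present in the actual dataset is not reproduced here.

```python
def monk3_label(x2: int, x4: int, x5: int) -> int:
    """Noise-free Monk3 rule: (x5 = 3 and x4 = 1) or (x5 != 4 and x2 != 3).

    Only x2, x4 and x5 are relevant; x1, x3 and x6 are redundant for the label.
    """
    return int((x5 == 3 and x4 == 1) or (x5 != 4 and x2 != 3))
```

For example, a sample with $x_5=3$ and $x_4=1$ is labeled positive through the first clause regardless of $x_2$, while $x_5=4$ together with $x_2=3$ falsifies both clauses.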
\begin{figure} \centering \includegraphics[clip, trim=0.5cm 0cm 0.3cm 0.1cm, width=0.26\textwidth]{toy_data.pdf}\hspace{-0.3cm} \includegraphics[clip, trim=0.5cm 0cm 0.3cm 0.1cm,width=0.26\textwidth]{toy_f1.pdf}\hspace{-0.3cm} \includegraphics[clip, trim=0.5cm 0cm 0.3cm 0.1cm,width=0.26\textwidth]{toy_f2.pdf}\hspace{-0.3cm} \includegraphics[clip, trim=0.5cm 0cm 0.3cm 0.1cm,width=0.26\textwidth]{toy_mminet.pdf}\hspace{-0.3cm} \caption{An illustration of how feature selection can confound class separability, using a two-dimensional data distribution from two color-coded classes as demonstrated in the top left figure. For the two classes, there is significant overlap in distributions when an individual feature is selected. The bottom right figure shows single linear layer MMINet projections onto one dimension.} \label{fig:toy_results} \end{figure} \subsection{Implementations} \label{sec:implementation} We evaluate the MMINet feature transformation method in comparison to four supervised feature selection methods: (1) feature selection based on \textit{fisher score} as a similarity based approach \cite{Duda:2012}, (2) minimum redundancy maximum relevance (\textit{mRMR}) feature selection \cite{Peng:2005} as an information theoretic feature ranking and selection criterion, (3) \textit{$l_1$-SVM} as a sparse regularization based method that utilizes an $l_1$-norm regularizer on a linear support vector machine (SVM) classifier \cite{Zhu:2004}, (4) \textit{SVM-RFE} as a wrapper feature selection method around an SVM classifier with recursive feature elimination (RFE), where features with the lowest SVM weights are recursively eliminated backwards \cite{Guyon:2002}. For our MMINet implementations\footnote{Codes are available at: https://github.com/oozdenizci/MMIDimReduction} we used the Chainer deep learning framework \cite{Tokui:2015}.
Stochastic model training was performed by considering one instantaneous sample at a time (i.e., one sample per batch) for one complete pass across the whole training dataset (i.e., one epoch), and we employed momentum stochastic gradient descent \cite{Qian:1999}. The neural network architecture was arbitrarily defined as a two-hidden-layer network with ELU activations following the hidden layer outputs. Dimensionalities of the dense layers were chosen to be $d_x$ to $d_x/2$, $d_x/2$ to $d_x/4$, and $d_x/4$ to $d_y$. All features were standardized by removing the mean and scaling to unit variance. After dimensionality reduction (i.e., feature selection or MMINet transformation), for classification purposes we used linear SVM classifiers and reported averaged 5-fold cross-validation accuracies in all experiments. \subsection{Results} \label{sec:results} Results with the synthetically generated Monk3 Dataset \cite{Thrun:1991} demonstrated higher accuracies with MMINet in several experiments. We performed dimensionality reduction (from $d_x=6$) and 5-fold cross-validated classification of the 432 data samples for output dimensionalities of $d_y\in\{1,2,3\}$. For $d_y=1$, MMINet yields the highest average accuracy of $80.81\%$, compared to $63.87\%$ with mRMR and $72.21\%$ with the other methods. Considering that the dimensionality reduction problem handles six input features, we observed that several of the feature selection methods identified the same feature as the most informative to construct $d_y$, which consistently resulted in this $72.21\%$ decoding accuracy. Similar behavior is observed for $d_y=2$, where MMINet yields a $77.05\%$ accuracy, whereas all feature selection methods selected the same two features and yielded an average accuracy of $75.92\%$. Finally for $d_y=3$, MMINet yields $76.63\%$, $l_1$-SVM and SVM-RFE yield $76.87\%$, and the other two methods yield an average accuracy of $76.93\%$.
This indicates that for selection of three features, in almost all cross-validation folds feature selection methods choose the truly relevant features, while MMINet transformations also yield comparable results. The overall upper accuracy range is further dependent on our choice to use a linear classifier for this problem. We did not increase the output feature dimensionality higher than three due to the nature of the constructed artificial dataset. \renewcommand{\arraystretch}{1.1} \begin{table} \caption{Dataset descriptives and averaged 5-fold cross-validation classification accuracies (\%) after dimensionality reduction by feature selection or MMINet feature transformation, for specific $d_y$.}\vspace{0.2cm} \label{tab:bioresults} \centering \begin{tabular}{c c c c} \toprule \textbf{} & \textbf{Breast Cancer} \cite{Dua:2019} & \textbf{Glioma} \cite{Nutt:2003} & \textbf{Lung Carcinoma} \cite{Bhattacharjee:2001} \\ \midrule number of classes & $2$ & $4$ & $5$ \\ number of data samples & $569$ & $50$ & $203$ \\ $d_x \rightarrow d_y$ & $30 \rightarrow 2$ & $4434 \rightarrow 4$ & $3312 \rightarrow 5$ \\ \midrule Fisher Score & $93.50$ & $50.00$ & $76.85$ \\ mRMR & $91.56$ & $42.00$ & $78.33$ \\ $l_1$-SVM & $92.09$ & $34.00$ & $84.28$ \\ SVM-RFE & $93.14$ & $56.00$ & $89.65$ \\ \textbf{MMINet} & $\mathbf{94.73}$ & $\mathbf{64.00}$ & $\mathbf{92.61}$ \\ \bottomrule \end{tabular} \end{table} Regarding our experiments with high-dimensional diagnostic biological data, Table~\ref{tab:bioresults} presents averaged 5-fold cross-validation accuracies for the cases where output feature dimensionality $d_y$ is chosen equal to the number of classes for consistency across methods. 
MMINet yields accuracies of $94.73\%$ for binary classification with the Breast Cancer Wisconsin Diagnostic Dataset \cite{Dua:2019}, $64.00\%$ for 4-class classification with the Glioma Dataset \cite{Nutt:2003}, and $92.61\%$ for 5-class classification with the Lung Carcinoma Dataset \cite{Bhattacharjee:2001}, all higher than the compared feature selection methods. We observe that our feature transformation approach provides a performance upper bound to several feature selection methods in classification, based on the same classifier modality. We argue this to be because feature selection algorithms are more restricted, and simply resemble sparse linear projection solutions when MMINet is constrained to have a single dense layer. Figure~\ref{fig:result_plots} demonstrates an extension of the results in Table~\ref{tab:bioresults}, where we vary the output feature dimensionality $d_y\in\{1,2,3,4,5\}$ for all datasets. We observe in almost all cases that MMINet continues to provide a better performance than the other methods. Mainly SVM-RFE, a wrapper method, is competitive with MMINet, as anticipated due to the classifier-oriented nature of the algorithm. Note that we did not increase the output dimensionality arbitrarily high for MMINet, since the method relies on $d_y$-dimensional kernel density estimators at the output feature space and higher dimensional density estimates are known to be unstable \cite{Silverman:1986}.
\begin{figure} \centering \subfigure[Breast Cancer \cite{Dua:2019}]{\includegraphics[width=0.33\textwidth]{wisconsin.pdf}}\hspace{-0.1cm} \subfigure[Glioma \cite{Nutt:2003}]{\includegraphics[width=0.33\textwidth]{glioma.pdf}}\hspace{-0.1cm} \subfigure[Lung Carcinoma \cite{Bhattacharjee:2001}]{\includegraphics[width=0.33\textwidth]{lung.pdf}}\hspace{-0.1cm} \caption{Averaged 5-fold cross-validation classification accuracies (\%) for varying output feature dimensionalities ($d_y$) with all five dimensionality reduction methods and the three datasets. Line color and styles are consistent across all three plots as indicated in the legend for (a).} \label{fig:result_plots} \end{figure} \section{Discussion} We present a supervised dimensionality reduction network training procedure based on the stochastic estimate of the mutual information gradient. Based on the construction of the objective function, at the network output feature space the transformed features and their associated class labels carry maximum mutual information. The complete process is formulated non-parametrically based on kernel density estimates which approximate class-conditional densities in the projected feature space. We demonstrate our approach empirically using pilot experimental biological data, where feature selection algorithms are widely popular approaches for dimensionality reduction. We interpret our approach to be a more general solution than maximum mutual information based feature selection algorithms. Such selection algorithms resemble sparse linear projection solutions when MMINet is constrained to have a single dense layer. It is well known that the ultimate objective in Equation~\eqref{eq:objective} is hard to estimate since it entangles continuous and discrete random variables: continuous random variables can have infinitely large positive or negative entropy values, whereas the entropy of a discrete random variable is always non-negative \cite{Ross:2014,Gao:2017}.
Since there is no global solution to this optimization objective, the stopping criterion is an important factor in our model training. For our current implementations we did not optimize this aspect by using a validation-set-based stopping criterion, which could further improve the robustness of the approach. We stress the importance of the distinction between our study and conventional discriminative neural network training protocols. Such discriminative networks are trained end-to-end using raw data to minimize the negative log-likelihood as a measure of classification error based on a training data set. On the other hand, our approach is a general supervised feature dimensionality reduction and lower-dimensional feature space learning method which relies on the maximum mutual information criterion. Therefore, to maintain a comparable basis, we did not compare the dimensionality reduction methods to discriminative neural networks. Going beyond multilayer perceptrons, stochastic training of the MMINet framework can also embed any deep neural network architecture for lower-dimensional representation learning. It is important to note that, in contrast to feature selection methods, which preserve the original representations of feature variables, our transformation-based approach maps features onto a new feature space. In combination with theoretical advancements on gradient-based methods of neural network interpretability (e.g., layer-wise relevance propagation \cite{Bach:2015,Montavon:2018}), synergies across features as highlighted by high-dimensional feature \textit{relevances} can yield significant insights based on the application domain.
Such feature-synergy-based ideas were found particularly interesting for feature learning in brain interfacing, as we studied earlier \cite{Ozdenizci:2019,Ozdenizci:2020}, as well as for gene expression data analysis \cite{Jacob:2009}, in consistency with their biological interpretations. \section*{Acknowledgments} Our work is supported by NSF (IIS-1149570, CNS-1544895, IIS-1715858), DHHS (90RE5017-02-01), and NIH (R01DC009834). \bibliographystyle{model5-names}
\section{Introduction} \label{sec:1} The combined MAXIMA-1 \cite{MAXIMA-1}, BOOMERANG \cite{BOOMERANG}, DASI \cite{DASI}, COBE/DMR Cosmic Microwave Background (CMB) observations \cite{COBE}, the recent WMAP data \cite{SPERGEL} and SDSS \cite{SDSS} imply that the Universe is flat \cite{flat01} and that most of the matter in the Universe is dark, i.e. exotic. These results have been confirmed and improved by the recent WMAP data \cite{WMAP06}. The deduced cosmological expansion is consistent with the luminosity distance as a function of redshift of distant supernovae \cite{supernova1,supernova2,supernova3}. According to the scenario favored by the observations there are various contributions to the energy content of our Universe. The most accessible energy component is baryonic matter, which accounts for $\sim 5\%$ of the total energy density. A component that has not been directly observed is cold dark matter (CDM): a pressureless fluid that is responsible for the growth of cosmological perturbations through gravitational instability. Its contribution to the total energy density is estimated at $\sim 25\%$. The dark matter is expected to become more abundant in extensive halos, which stretch up to 100--200 kpc from the centers of galaxies. The component with the biggest contribution to the energy density has an equation of state similar to that of a cosmological constant and is characterized as dark energy. The ratio $w=p/\rho$ is negative and close to $-1$. This component is responsible for $\sim 70\%$ of the total energy density and induces the observed acceleration of the Universe \cite{supernova1}$^-$\cite{supernova3}. The total energy density of our Universe is believed to take the critical value consistent with spatial flatness. Additional indirect information about the existence of dark matter comes from the rotational curves \cite{Jung}. The rotational velocity of an object increases so long as it is surrounded by matter.
Once outside the luminous matter, the rotational velocity drops as the square root of the distance. Such observations are not possible in our own galaxy. The observations of other galaxies, similar to our own, indicate that the rotational velocities of objects outside the luminous matter do not drop. So there must be a halo of dark matter out there. Since the non-exotic component cannot exceed $40\%$ of the CDM~\cite{Benne}, there is room for exotic WIMPs (Weakly Interacting Massive Particles).\\ In fact the DAMA experiment~\cite{BERNA2} has claimed the observation of a signal in the direct detection of a WIMP, which with better statistics has subsequently been interpreted as a modulation signal \cite{BERNA1}. These data, however, if they are due to the coherent process, are not consistent with other recent experiments, see e.g. EDELWEISS and CDMS \cite{EDELWEISS}. They could still be interpreted as due to the spin cross section, but with a new interpretation of the extracted nucleon cross section. Since the WIMP is expected to be very massive, $m_{\chi} \geq 30$ GeV, and extremely non-relativistic with average kinetic energy $T \leq 100$ keV, it can be directly detected mainly via the recoil of a nucleus in WIMP-nucleus elastic scattering. The above developments are in line with particle physics considerations. \begin{enumerate} \item Dark matter in supersymmetric theories\\ The lightest supersymmetric particle (LSP) or neutralino is the most natural WIMP candidate. In the most favored scenarios the LSP can be simply described as a Majorana fermion, a linear combination of the neutral components of the gauginos and Higgsinos \cite{Jung}$^-$\cite{Hab-Ka}. In order to compute the event rate one needs an effective Lagrangian at the elementary particle (quark) level obtained in the framework of supersymmetry~\cite{Jung,ref2,Hab-Ka}. One starts with representative input in the restricted SUSY parameter space as described in the literature, e.g.
Ellis {\it et al} \cite{EOSS04}, Bottino {\it et al}, Kane {\it et al}, Castano {\it et al} and Arnowitt {\it et al} \cite{ref2}, as well as elsewhere \cite{GOODWIT}$^-$\cite{UK01}. We will not, however, elaborate on how one gets the needed parameters from supersymmetry. Even though the SUSY WIMPs have been well studied, for the reader's convenience we will give a description in sec. \ref{sec:diagrams} of the basic SUSY ingredients needed to calculate the LSP-nucleus scattering cross section. \item Kaluza-Klein (K-K) WIMPs. \\ These arise in extensions of the standard model with compact extra dimensions. In such models a tower of massive particles appear as Kaluza-Klein excitations. In this scheme the ordinary particles are associated with the zero modes and are assigned K-K parity $+1$. In models with Universal Extra Dimensions one can have cosmologically stable particles in the excited modes because of a discrete symmetry yielding K-K parity $-1$ (see previous work \cite{ST02a,ST02b,CFM02} as well as the recent review by Servant \cite{SERVANT}). \\The kinematics involved is similar to that of the neutralino, leading to cross sections which are proportional to $\mu^2_r$, $\mu_r$ being the WIMP-nucleus reduced mass. Furthermore the nuclear physics input is independent of the WIMP mass, since for a heavy WIMP $\mu_r\simeq Am_p$. There appear two differences compared to the neutralino, though, both related to its larger mass. \\i) First, the density (number of particles per unit volume) of a WIMP falls inversely proportional to its mass. Thus, if the WIMPs considered are much heavier than the nuclear targets, the corresponding event rate takes the form: \beq R(m_{WIMP})=R(A)\frac{A \mbox{ GeV }}{m_{WIMP}} \label{eq:rate} \eeq where $R(A)$ are the rates extracted from experiment up to WIMP masses of the order of the mass of the target. \\ii) Second, the average WIMP energy is now higher.
In fact one finds that $\langle T_{WIMP}\rangle =\frac{3}{4}m_{WIMP} \upsilon^2_0\simeq 40 \left ({m_{WIMP}}/({100 \mbox{ GeV}})\right )$ keV ($\upsilon_0\simeq 220$ km/s). Thus for a K-K WIMP with mass $1$ TeV, the average WIMP energy is $0.4$ MeV. Hence, due to the high velocity tail of the velocity distribution, one expects {\bf an energy transfer to the nucleus in the MeV region. Thus many nuclear targets can now be excited by the WIMP-nucleus interaction and the de-excitation photons can be detected.} \end{enumerate} In addition to the particle model one needs the following ingredients: \begin{itemize} \item A procedure for going from the quark to the nucleon level, i.e. a quark model for the nucleon. The results depend crucially on the content of the nucleon in quarks other than u and d. This is particularly true for the scalar couplings as well as the isoscalar axial coupling~\cite{Dree}$^-$\cite{Chen}. Such topics will be discussed in sec. \ref{sec:nuc}. \item Computation of the relevant nuclear matrix elements~\cite{Ress}$^-$\cite{SUHONEN03} using as reliable as possible many-body nuclear wave functions. By putting in as accurate nuclear physics input as possible, one will be able to constrain the SUSY parameters as much as possible. The situation is a bit simpler in the case of the scalar coupling, in which case one only needs the nuclear form factor. \item Convolution with the LSP velocity distribution. To this end we will consider here Maxwell-Boltzmann \cite{Jung} (M-B) velocity distributions, with an upper velocity cut-off put in by hand. The characteristic velocity of the M-B distribution can be increased by a factor $n$ ($\upsilon_0\rightarrow n \upsilon_0,~n\ge1$) by considering the interaction of dark matter and dark energy \cite{TETRVER06}. Other distributions are possible, such as non-symmetric ones, like those of Drukier \cite{Druk} and Green \cite{GREEN02}, or non-isothermal ones, e.g.
those arising from late in-fall of dark matter into our galaxy, like Sikivie's caustic rings \cite{SIKIVIE}. In any event, in a proper treatment the velocity distribution ought to be consistent with the dark matter density as, e.g., in the context of the Eddington theory \cite{OWVER}. \end{itemize} Since the expected rates are extremely low or even undetectable with present techniques, one would like to exploit the characteristic signatures provided by the reaction. Such are: \begin{enumerate} \item The modulation effect, i.e. the dependence of the event rate on the velocity of the Earth. \item The directional event rate, which depends on the velocity of the sun around the galaxy as well as the velocity of the Earth. Its measurement has recently begun to appear feasible in the planned experiments \cite{UKDMC,DRIFT}. \item Detection of signals other than nuclear recoils, such as: \begin{itemize} \item Detection of $\gamma$ rays following nuclear de-excitation, whenever possible \cite{eji93,VQS04}. This seems to become feasible for heavy WIMPs, especially in connection with modified M-B distributions due to the coupling of dark matter and dark energy ($\langle T_{WIMP} \rangle \simeq n^2\, 40 \left ({m_{WIMP}}/({100 \mbox{ GeV}})\right )$ keV, $n\ge 1$). \item Detection of ionization electrons produced directly in the LSP-nucleus collisions \cite{VE05,MVE05}. \item Observations of hard X-rays produced \cite{EMV05} when the inner shell electron holes produced as above are filled. \end{itemize} \end{enumerate} In all calculations we will, of course, include an appropriate nuclear form factor and take into account the influence on the rates of the detector energy cut-off. We will present our results as a function of the LSP mass, $m_{\chi}$, in a way which can be easily understood by the experimentalists.
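The heavy-WIMP rate scaling of Eq.~(\ref{eq:rate}) and the average kinetic energy estimates quoted above can be checked numerically with a short sketch; the target mass number and the reference rate $R(A)$ appearing in the example are placeholder values for illustration only.

```python
V0 = 2.2e5    # characteristic WIMP velocity v0 ~ 220 km/s, in m/s
C = 2.998e8   # speed of light, in m/s

def rate_heavy_wimp(rate_A, A, m_wimp_gev):
    """Eq. (1): R(m_WIMP) = R(A) * (A GeV / m_WIMP), valid for WIMPs
    much heavier than the nuclear target of mass number A."""
    return rate_A * A / m_wimp_gev

def mean_kinetic_energy_kev(m_wimp_gev, n=1.0):
    """<T_WIMP> = (3/4) m v0^2 in keV, with v0 -> n*v0 for the modified
    M-B distributions from dark matter-dark energy coupling."""
    return 0.75 * (m_wimp_gev * 1e6) * (n * V0 / C) ** 2
```

With these conventions a 100 GeV neutralino has $\langle T\rangle\simeq 40$ keV, while a 1 TeV K-K WIMP reaches $\simeq 0.4$ MeV, in agreement with the estimates in the text; the energy scales as $n^2$ when the characteristic velocity is boosted.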
\section{The Feynman Diagrams Entering the Direct Detection of WIMPs} \label{sec:diagrams} \subsection{The Feynman Diagrams involving the neutralino} \label{sec:FEYLSP} The neutralino is perhaps the most viable WIMP candidate and has been extensively studied (see, e.g., our recent review \cite{JDV06}). Here we will give a very brief summary of the most important aspects entering the direct neutralino searches. In currently favorable supergravity models the LSP is a linear combination~\cite{Jung} of the four neutral fermions ${\tilde B}, {\tilde W}_3, {\tilde H}_1$ and ${\tilde H}_2$, which are the supersymmetric partners of the gauge bosons $B_\mu$ and $W^3_\mu$ and the Higgs scalars $H_1$ and $H_2$. Admixtures of s-neutrinos are expected to be negligible. The relevant Feynman diagrams involve Z-exchange, s-quark exchange and Higgs exchange. \subsubsection{The Z-exchange contribution} \label{sec:Z-exc} The relevant Feynman diagram is shown in Fig. \ref{LSPZH}. It does not lead to coherence, since $\bar{\Psi}\gamma_{\lambda}\Psi=0$ for a Majorana fermion like the neutralino (Majorana fermions do not have electromagnetic properties). The coupling $\bar{\Psi}\gamma_{\lambda}\gamma_5\Psi$ yields a negligible contribution for a non-relativistic particle in the case of the spin independent cross section \cite{JDV96}. It may be important in the case of the spin contribution, which arises from the axial current. \begin{figure} \psfig{file=zvec.ps,width=2.0in} \psfig{file=higgs.ps,width=2.0in} \caption{The LSP-quark interaction mediated by Z and Higgs exchange.} \label{LSPZH} \end{figure} \subsubsection{The $s$-quark Mediated Interaction} \label{sec:sq-exc} The other interesting possibility arises from the other two components of $\chi_1$, namely ${\tilde B}$ and ${\tilde W}_3$. Their corresponding couplings to $s$-quarks (see Fig.~\ref{LSPSQVS}) can be read from appendix C4 of Ref.~\cite{Hab-Ka} and our earlier review \cite{JDV06}.
\begin{figure} \begin{center} \psfig{file=sqvec.ps,width=2.0in} \psfig{file=sqsca.ps,width=2.0in} \end{center} \caption{The LSP-quark interaction mediated by s-quark exchange. Normally it yields a V-A interaction, which does not lead to coherence at the nuclear level. If, however, the isodoublet s-quark is admixed with the isosinglet one, it may yield a scalar interaction at the quark level.} \label{LSPSQVS} \end{figure} Normally this contribution is vector like, i.e. it does not lead to coherence. If, however, there exists mixing between the s-quarks with isospin $1/2$ ($\tilde{q}_L$) and isospin 0 ($\tilde{q}_R$), the s-quark exchange may lead to a scalar interaction at the quark level and hence to coherence over all nucleons at the nuclear level \cite{JDV06}. \subsubsection{The Intermediate Higgs Contribution} \label{sec:Higgs-exc} The most important contribution to coherent scattering can be achieved via the intermediate Higgs particles which survive as physical particles. In supersymmetry there exist two such physical Higgs particles, one light $h$ with a mass $m_h\leq 120$ GeV and one heavy $H$ with a mass $m_H$, which is much larger. The relevant interaction can arise out of the Higgs-Higgsino-gaugino interaction \cite{JDV06}, leading to the Feynman diagram shown in Fig. \ref{LSPZH}. In the case of the scalar interaction the resulting amplitude is proportional to the quark mass. \subsection{The Feynman Diagrams involving the K-K WIMPs} \label{sec:FEYKK} \subsubsection{The Kaluza-Klein Boson as a dark matter candidate} \label{KK} We will assume that the lightest exotic particle, which can serve as a dark matter candidate, is a gauge boson $B^{(1)}$ having the same quantum numbers and couplings as the standard model gauge boson $B$, except that it has K-K parity $-1$. Thus its couplings must involve another negative K-K parity particle.
In this work we will assume that such a particle can be one of the K-K quarks, partners of the ordinary quarks, but much heavier \cite{ST02a,ST02b,CFM02}. \begin{itemize} \item Intermediate K-K quarks.\\ In this case the relevant Feynman diagrams are shown in Fig. \ref{fig:kkq}. \begin{figure} \psfig{file=kkqa.ps,width=2.2in} \psfig{file=kkqb.ps,width=2.2in} \caption{K-K quarks mediating the interaction of the K-K gauge boson $B^{(1)}$ with quarks at tree level.} \label{fig:kkq} \end{figure} \\The amplitude at the nucleon level can be written as: \beq {\cal M}_{coh}= \Lambda(\mbox{\boldmath $\epsilon^{*'}$}.\mbox{\boldmath $\epsilon$})N\left [ \frac{11+12\tau_3}{54} \frac{m_p m_W}{(m_{B^{(1)}})^2} f_1(\Delta )+\frac{1+\tau_3}{3} \frac{m_W}{ m_{B^{(1)}}} f_2(\Delta ) \right ] N \eeq $$\Lambda=i 4 \sqrt{2} G_F m_W \tan^2{\theta_W }~,~f_1(\Delta )=\frac{1+\Delta +\Delta ^2 /2}{\Delta ^2(1+\Delta /2)^2},$$ $$ f_2 (\Delta )=\frac{1+\Delta }{\Delta (1+\Delta /2)}~, ~\Delta =\frac{m_{q^{(1)}}}{m_{B^{(1)}}}-1$$ We see that the amplitude is very sensitive to the parameter $\Delta$ (``resonance effect''). In going from the quark to the nucleon level the best procedure is to replace the quark energy by the constituent quark mass $\simeq m_p/3$, as opposed to adopting \cite{ST02a,ST02b,CFM02} a procedure related to the current quark mass encountered in the neutralino case \cite{JDV06}. In the case of the spin contribution we find at the nucleon level that: \barr {\cal M}_{spin}&=& -i 4 \sqrt{2} G_F m_W \tan^2{\theta_W }\frac{1}{3} \frac{m_p m_W}{(m_{B^{(1)}})^2} f_1(\Delta ) i(\mbox{\boldmath $\epsilon^{*'}$}\times \mbox{\boldmath $\epsilon$}). \nonumber\\ &&\left [ N\mbox{\boldmath $\sigma$} (g_0+g_1 \tau_3) N \right ] \earr $$g_0=\frac{17}{18}\Delta u+\frac{5}{18} \Delta d+\frac{5}{18} \Delta s~, ~g_1=\frac{17}{18}\Delta u-\frac{5}{18} \Delta d$$ for the isoscalar and isovector quantities \cite{JDV06}.
The quantities $\Delta_q$ are given by \cite{JDV06} $$\Delta u=0.78\pm 0.02~,~\Delta d=-0.48\pm 0.02~,~\Delta s=-0.15\pm 0.02$$ We thus find $g_0=0.26~,~g_1=0.41\Rightarrow a_p=0.67~,~a_n=-0.15$. \item Intermediate Higgs Scalars.\\ The corresponding Feynman diagram is shown in Fig. \ref{fig:kkhz}. \begin{figure}[!ht] \psfig{file=kkh.eps,width=2.2in} \psfig{file=kknu.eps,width=2.2in} \caption{The Higgs H mediating the interaction of the K-K gauge boson $B^{(1)}$ with quarks at tree level (on the left). The Z boson mediating the interaction of the K-K neutrino $\nu^{(1)}$ with quarks at tree level (on the right).} \label{fig:kkhz} \end{figure} The relevant amplitude is given by: \beq {\cal M}_N(h)= -i~4 \sqrt{2}G_F m^2_W \tan^2{\theta_W}~\left [\frac{1}{4}\frac{m_p}{m^2_h} \left (-\mbox{\boldmath $\epsilon^{*'}$}.\mbox{\boldmath $\epsilon$} \right ) \prec N|N\succ \sum_q f_q\right ] \eeq In going from the quark to the nucleon level we follow a procedure analogous to that of the neutralino, i.e. $\prec N |m_q q \bar{q}|N \succ \Rightarrow f_q m_p$. \end{itemize} \subsubsection{K-K neutrinos as dark matter candidates} The other possibility is that the dark matter candidate is a heavy K-K neutrino. We will distinguish the following cases: \begin{itemize} \item Process mediated by Z-exchange.\\ The amplitude associated with the diagram of Fig. \ref{fig:kkhz} becomes: \beq {\cal M}_{\nu^{(1)}}=-\frac{1}{2 \sqrt{2}}G_FJ^{\lambda}(\nu^{(1)}) J_{\lambda}(NNZ) \eeq with $J_{\lambda}(NNZ)$ the standard nucleon neutral current and $$J_{\lambda}(\nu^{(1)})= \bar{\nu}^{(1)}\gamma _{\lambda }\gamma_5\nu^{(1)}~,~J_{\lambda}(\nu^{(1)})= \bar{\nu}^{(1)}\gamma _{\lambda }(1-\gamma_5)\nu^{(1)}$$ for Majorana and Dirac neutrinos respectively. \item Process mediated by right handed currents via Z'-boson exchange.\\ The process is similar to that exhibited in Fig. \ref{fig:kkhz}, except that instead of Z we encounter Z', which is much heavier.
Assuming that the couplings of the $Z'$ are similar to those of the $Z$, the above results apply, except that now the amplitudes are suppressed by the multiplicative factor $\kappa=m^2_{Z}/m^2_{Z'}$. \item Process mediated by Higgs exchange.\\ In this case in Fig.~\ref{fig:kkhz} the Z is replaced by the Higgs particle. Proceeding as above we find that the amplitude at the nucleon level is: \beq {\cal M}_{\nu^{(1)}}(h)= -2 \sqrt{2}G_F \frac{m_p m_{\nu^{(1)}}}{m_h^2} \bar{\nu}^{(1)}~\nu^{(1)} \prec N|N \succ \sum_q f_q \eeq In the evaluation of the parameters $f_q$ one encounters both theoretical and experimental errors. \end{itemize} \section{Other non-SUSY Models} \label{Zmodel} We should mention that there exist extensions of the standard model not motivated by supersymmetry. Such are: \begin{itemize} \item Models which introduce extra Higgs particles and impose a discrete symmetry which leads to a ``parity'' a la R-parity or K-K parity \cite{MA06}. \item Extensions of the standard model, which do not require a parity, but introduce high weak isospin multiplets \cite{CFA06} with $Y=0$. Then the WIMP-nucleus interaction via Z-exchange at tree level is absent, and the dominant contribution to WIMP-nucleus scattering occurs at the one loop level. \item Another interesting extension of the standard model is in the direction of technicolor \cite{GKS06}. In this case the WIMP is the neutral LTP (lightest neutral technibaryon). This is a scalar particle, which couples to the quarks via a derivative coupling through Z-exchange. \end{itemize} \section{Going from the Quark to the Nucleon Level} \label{sec:nuc} In going from the quark to the nucleon level one has to be a bit more careful in handling the quarks other than $u$ and $d$. This is especially true in the case of the scalar interaction, since in this case the coupling of the WIMP to the quarks is proportional to their mass~\cite{JDV06}.
Thus one has to consider in the nucleon not only the sea quarks ($u {\bar u}, d {\bar d}$ and $s {\bar s}$) but the heavier quarks as well, due to QCD effects~\cite{Dree00}. This way one obtains the scalar Higgs-nucleon coupling by using effective parameters $f_q$ defined as follows: \beq \Big<N| m_q \bar{q}q|N \Big> = f_q m_N \label{fofq} \eeq where $m_N$ is the nucleon mass. The parameters $f_q,~q=u,d,s$ can be obtained from chiral symmetry breaking terms in combination with phase shift and dispersion analyses (for a recent review see \cite{JDV06}). We like to emphasize here that, since the current masses of the u and d quarks are small, the heavier quarks tend to dominate, even though the probability of finding them in the nucleon is quite small. In fact the s quark contribution may become dominant; e.g., allowed by the above analysis is the choice: $$f_d=0.046~,~f_u=0.025~,~f_s=0.400~,~f_c=0.050~,~f_b=0.055~,~f_t=0.095$$ The isoscalar and the isovector axial currents in the case of K-K theories have already been discussed above. In the case of the neutralino these couplings at the nucleon level, $f^0_A$, $f^1_A$, are obtained from the corresponding ones given by the SUSY models at the quark level, $f^0_A(q)$, $f^1_A(q)$, via renormalization coefficients $g^0_A$, $g_A^1$, i.e. $f^0_A=g_A^0 f^0_A(q)~,~f^1_A=g_A^1 f^1_A(q)$. The renormalization coefficients are given in terms of the $\Delta q$ defined above \cite{JELLIS}, via the relations $$g_A^0=\Delta u+\Delta d+\Delta s=0.77-0.49-0.15=0.13~,~g_A^1=\Delta u-\Delta d=1.26$$ We see that, barring very unusual circumstances at the quark level, the isoscalar contribution is negligible. It is for this reason that one might prefer to work in the isospin basis. \section{The allowed SUSY Parameter Space} \label{sec:parameter} It is clear from the above discussion that the LSP-nucleon cross section depends, among other things, on the parameters of supersymmetry.
One starts with a set of parameters at the GUT scale and predicts the low energy observables via the renormalization group equations (RGE). Conversely, starting from the low energy phenomenology one can constrain the input parameters at the GUT scale. The choice of the parameter space is the most crucial input. In SUSY models derived from minimal SUGRA the allowed parameter space is characterized at the GUT scale by five parameters: \begin{itemize} \item two universal mass parameters, one for the scalars, $m_0$, and one for the fermions, $m_{1/2}$; \item $\tan\beta$, i.e. the ratio of the Higgs expectation values, $\left < H_2 \right>/\left < H_1 \right>$; \item the trilinear coupling $A_0$ (or $m^{pole}_t$); and \item the sign of $\mu$ in the Higgs self-coupling $\mu H_1H_2$. \end{itemize} The experimental constraints \cite{JDV06} restrict the values of the above parameters, yielding the {\bf allowed SUSY parameter space}. \section{Event rates} \label{sec:rates} The differential non-directional rate can be written as \begin{equation} dR_{undir} = \frac{\rho (0)}{m_{\chi}} \frac{m}{A m_N} d\sigma (u,\upsilon) | \mbox{\boldmath $\upsilon$}| \label{2.18} \end{equation} where $A$ is the nuclear mass number, $\rho (0) \approx 0.3~\mbox{GeV/cm}^3$ is the WIMP density in our vicinity, $m$ is the detector mass, $m_{\chi}$ is the WIMP mass and $d\sigma(u,\upsilon )$ is the differential cross section. The directional differential rate, i.e. that obtained if nuclei recoiling in the direction $\hat{e}$ are observed, is given by \cite{JDVSPIN04,JDV06}: \beq dR_{dir} = \frac{\rho (0)}{m_{\chi}} \frac{m}{A m_N} |\upsilon| \hat{\upsilon}.\hat{e} ~\Theta(\hat{\upsilon}.\hat{e}) ~\frac{1}{2 \pi}~ d\sigma (u,\upsilon)\, \delta\left(\frac{\sqrt{u}}{\mu_r \upsilon \sqrt{2}}-\hat{\upsilon}.\hat{e}\right) \label{2.20} \eeq where $\Theta(x)$ is the Heaviside function.
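The prefactor $\rho(0)/m_{\chi}$ appearing in both rates is the local WIMP number density; an order-of-magnitude sketch of the implied number density and flux (the 100 GeV mass and the $\simeq 220$ km/s characteristic velocity are illustrative assumptions):

```python
# Local WIMP number density n = rho(0)/m_chi and flux n*v (order of magnitude).
RHO0 = 0.3      # local dark matter density in GeV/cm^3
V0_CM = 2.2e7   # ~220 km/s expressed in cm/s (assumed)

def number_density_cm3(m_chi_GeV):
    return RHO0 / m_chi_GeV

def flux_cm2_s(m_chi_GeV):
    return number_density_cm3(m_chi_GeV) * V0_CM

print(number_density_cm3(100.0))  # ~3e-3 WIMPs per cm^3
print(flux_cm2_s(100.0))          # ~7e4 WIMPs per cm^2 per s
```

Despite this sizable flux, the smallness of the cross sections leads to the very low event rates discussed below.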
The differential cross section is given by: \beq d\sigma (u,\upsilon)= \frac{du}{2 (\mu _r b\upsilon )^2} \left[\bar{\Sigma} _{S}F^2(u) +\bar{\Sigma} _{spin} F_{11}(u)\right] \label{2.9} \eeq where $u$ is the energy transfer $Q$ in dimensionless units, given by \begin{equation} u=\frac{Q}{Q_0}~~,~~Q_{0}=[m_pAb]^{-2}=40A^{-4/3}~MeV \label{defineu} \end{equation} with $b$ the nuclear (harmonic oscillator) size parameter. $F(u)$ is the nuclear form factor and $F_{11}(u)$ is the spin response function associated with the isovector channel. The scalar contribution is given by: \begin{equation} \bar{\Sigma} _S = (\frac{\mu_r}{\mu_r(p)})^2 \sigma^{S}_{p,\chi^0} A^2 \left [\frac{1+\frac{f^1_S}{f^0_S}\frac{2Z-A}{A}}{1+\frac{f^1_S}{f^0_S}}\right]^2 \approx \sigma^{S}_{N,\chi^0} (\frac{\mu_r}{\mu_r (p)})^2 A^2 \label{2.10} \end{equation} (since the heavy quarks dominate, the isovector contribution is negligible). $\sigma^S_{N,\chi^0}$ is the LSP-nucleon scalar cross section. The spin contribution is given by: \begin{equation} \bar{\Sigma} _{spin} = (\frac{\mu_r}{\mu_r(p)})^2 \sigma^{spin}_{p,\chi^0}~\zeta_{spin}~,~ \zeta_{spin}= \frac{1}{3(1+\frac{f^0_A}{f^1_A})^2}S(u) \label{2.10a} \end{equation} \begin{equation} S(u)\approx S(0)=\left[\left(\frac{f^0_A}{f^1_A} \Omega_0(0)\right)^2 + 2\frac{f^0_A}{ f^1_A} \Omega_0(0) \Omega_1(0)+ \Omega_1^2(0) \right] \label{s(u)} \end{equation} The couplings $f^1_A$ ($f^0_A$) and the nuclear matrix elements $\Omega_1(0)$ ($\Omega_0(0)$) associated with the isovector (isoscalar) components are normalized so that, in the case of the proton at $u=0$, they yield $\zeta_{spin}=1$. With these definitions, in the proton-neutron representation we get: \beq \zeta_{spin}= \frac{1}{3}S^{'}(0)~,~S^{'}(0)=\left[(\frac{a_n}{a_p}\Omega_n(0))^2+2 \frac{a_n}{a_p}\Omega_n(0) \Omega_p(0)+\Omega^2_p(0)\right] \label{Spn} \eeq where $\Omega_p(0)$ and $\Omega_n(0)$ are the proton and neutron components of the static spin nuclear matrix elements.
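The characteristic scale $Q_0 = 40 A^{-4/3}$ MeV of Eq. (\ref{defineu}) can be evaluated for the two targets considered later in this paper; a minimal sketch:

```python
# Characteristic energy scale Q0 = 40 A^(-4/3) MeV and dimensionless u = Q/Q0.
def Q0_keV(A):
    """Q0 = [m_p A b]^-2 = 40 A^(-4/3) MeV, returned in keV."""
    return 40.0 * A ** (-4.0 / 3.0) * 1e3

def u_of(Q_keV, A):
    return Q_keV / Q0_keV(A)

print(Q0_keV(73))       # ~131 keV for the A = 73 (Ge) target
print(Q0_keV(127))      # ~63 keV for the A = 127 (I) target
print(u_of(10.0, 127))  # a 10 keV threshold corresponds to u ~ 0.16
```

The heavier target thus probes larger $u$ at a given energy transfer, which is why its form-factor suppression sets in earlier.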
In extracting limits on the nucleon cross sections from the data we will find it convenient to write: \begin{equation} \sigma^{spin}_{p,\chi^0}~\zeta_{spin} =\frac{\Omega^2_p(0)}{3}|\sqrt{\sigma_p}+\frac{\Omega_n}{\Omega_p} \sqrt{\sigma_n} e^{i \delta}|^2 \label{2.10ab} \end{equation} In Eq. (\ref{2.10ab}) $\delta$ is the relative phase between the two amplitudes $a_p$ and $a_n$, which in most models is 0 or $\pi$, i.e. one expects them to be relatively real. The static spin matrix elements are obtained in the context of a given nuclear model. Some such matrix elements of interest to the planned experiments can be found in \cite{JDV06}. The spin matrix elements (ME) are defined as follows: \beq \Omega_p(0)=\sqrt{\frac{J+1}{J}}\prec J~J| \sigma_z(p)|J~J\succ ~~,~~ \Omega_n(0)=\sqrt{\frac{J+1}{J}}\prec J~J| \sigma_z(n)|J~J\succ \label{Omegapn} \eeq where $J$ is the total angular momentum of the nucleus and $\sigma_z=2 S_z$. The spin operator is defined by $S_z(p)=\sum_{i=1}^{Z} S_z(i)$, i.e. a sum over all protons in the nucleus, and $S_z(n)=\sum_{i=1}^{N}S_z(i)$, i.e. a sum over all neutrons. Furthermore $\Omega_0(0)=\Omega_p(0)+\Omega_n(0)~~,~~\Omega_1(0)=\Omega_p(0)-\Omega_n(0)$. \section{The WIMP velocity distribution} To obtain the total rates one must fold with the WIMP velocity distribution and integrate the above expressions over the energy transfer, from $Q_{min}$, determined by the detector energy cutoff, to $Q_{max}$, determined by the maximum LSP velocity (escape velocity, put in by hand in the Maxwellian distribution), i.e. $\upsilon_{esc}=2.84~\upsilon_0$, with $\upsilon_0$ the velocity of the sun around the center of the galaxy ($229$~km/s).
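The numbers quoted here are tied together by M-B kinematics: for $f(\upsilon)\propto e^{-\upsilon^2/\upsilon_0^2}$ one has $\langle \upsilon^2\rangle = \frac{3}{2}\upsilon_0^2$, which with $\upsilon_0 = 229$ km/s gives the $\simeq 280$ km/s normalization used in the event-rate formulas below, consistent with $\langle T\rangle = \frac{1}{2}m\langle\upsilon^2\rangle = \frac{3}{4}m\upsilon_0^2$ of the Introduction. A minimal sketch:

```python
import math

V0 = 229.0        # km/s, velocity of the sun around the galactic center
VESC = 2.84 * V0  # escape-velocity cutoff put in by hand

# For a M-B distribution f(v) ~ exp(-v^2/v0^2): <v^2> = (3/2) v0^2.
v_rms = math.sqrt(1.5) * V0

print(v_rms)  # ~280 km/s, the normalization used in the event-rate formulas
print(VESC)   # ~650 km/s
```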
For a given velocity distribution f(\mbox{\boldmath $\upsilon$}$^{\prime}$), with respect to the center of the galaxy, one can find the velocity distribution in the Lab frame f(\mbox{\boldmath $\upsilon$},\mbox{\boldmath $\upsilon$}$_E$) by writing \mbox{\boldmath $\upsilon$}$^{'}$= \mbox{\boldmath $\upsilon$}$ \, + \,$ \mbox{\boldmath $\upsilon$}$_E \, ,$ \mbox{\boldmath $\upsilon$}$_E$=\mbox{\boldmath $\upsilon$}$_0$+ \mbox{\boldmath $\upsilon$}$_1$, with \mbox{\boldmath $\upsilon$}$_1 \,$ the Earth's velocity around the sun. It is convenient to choose a coordinate system so that $\hat{x}$ is radially out in the plane of the galaxy, $\hat{z}$ is in the sun's direction of motion and $\hat{y}=\hat{z}\times\hat{x}$. Since the axis of the ecliptic lies very close to the $x,y$ plane ($\omega=186.3^{\circ}$) only the angle $\gamma=29.8^{\circ}$ becomes relevant. Thus the velocity of the Earth around the sun is given by \begin{equation} \mbox{\boldmath $\upsilon$}_E = \mbox{\boldmath $\upsilon$}_0 \hat{z} + \mbox{\boldmath $\upsilon$}_1 (\, \sin{\alpha} \, {\bf \hat x} -\cos {\alpha} \, \cos{\gamma} \, {\bf \hat y} + \cos {\alpha} \, \sin{\gamma} \, {\bf \hat z} \,) \label{3.6} \end{equation} where $\alpha$ is the phase of the Earth's orbital motion. The WIMP velocity distribution f(\mbox{\boldmath $\upsilon$}$^{\prime}$) is not known, and many velocity distributions have been used. The most common one is the M-B distribution with characteristic velocity $\upsilon_0$ and an upper bound $\upsilon_{esc}=2.84 \upsilon_0$: \beq f(\mbox{\boldmath $\upsilon$}^{\prime})=\frac{1}{(\sqrt{\pi}\upsilon_0)^3} e^{-(\mbox{\boldmath $\upsilon$}^{\prime}/\upsilon_0)^2} \label{fv} \eeq Modifications of this velocity distribution have also been considered, such as: i) axially symmetric M-B distributions \cite{Druk,Verg00}.
and ii) modifications of the characteristic parameters of the M-B distribution by considering a coupling between dark matter and dark energy \cite{TETRVER06} ($\upsilon_0 \rightarrow n \upsilon_0$, $\upsilon_{esc}\rightarrow n \upsilon_{esc}$). Other possibilities are adiabatic velocity distributions following the Eddington approach \cite{EDDIN}$^-$\cite{VEROW06}, caustic rings \cite{SIKIVI1}$^-$\cite{Gelmini} and Sagittarius dark matter \cite{GREEN02}. For a given energy transfer the velocity $\upsilon$ is constrained to be \beq \upsilon\geq \upsilon_{min}~,~\upsilon_{min}= \sqrt{\frac{ Q A m_p}{2}}\frac{1}{\mu_r}. \eeq \section{The Direct detection rate} The event rate for the coherent WIMP-nucleus elastic scattering is given by \cite{Verg01,JDV03,JDVSPIN04,JDV06}: \beq R= \frac{\rho (0)}{m_{\chi^0}} \frac{m}{m_p}~ \sqrt{\langle v^2 \rangle } \left [f_{coh}(A,\mu_r(A)) \sigma_{p,\chi^0}^{S}+f_{spin}(A,\mu_r(A))\sigma _{p,\chi^0}^{spin}~\zeta_{spin} \right] \label{fullrate} \eeq with \beq f_{coh}(A, \mu_r(A))=\frac{100\mbox{GeV}}{m_{\chi^0}}\left[ \frac{\mu_r(A)}{\mu_r(p)} \right]^2 A~t_{coh}\left(1+h_{coh}\cos\alpha \right) \eeq \beq f_{spin}(A, \mu_r(A))=\left[ \frac{\mu_r(A)}{\mu_r(p)} \right]^2 \frac{t_{spin}(A)}{A}\left(1+h_{spin}\cos\alpha \right) \eeq with $\sigma_{p,\chi^0}^{S}$ and $\sigma _{p,\chi^0}^{spin}$ the scalar and spin proton cross sections and $\zeta_{spin}$ the nuclear spin ME. In the above expressions $h$ is the modulation amplitude.
The number of events in time $t$ due to the scalar interaction, which leads to coherence, is: \beq R\simeq 1.60\times 10^{-3} \frac{t}{1 \mbox{y}} \frac{\rho(0)}{0.3~\mbox{GeV\,cm}^{-3}} \frac{m}{\mbox{1Kg}}\frac{ \sqrt{\langle v^2 \rangle }}{280~\mbox{km\,s}^{-1}}\frac{\sigma_{p,\chi^0}^{S}}{10^{-6} \mbox{ pb}} f_{coh}(A, \mu_r(A)) \label{scalareventrate} \eeq In the above expression $m$ is the target mass, $A$ is the number of nucleons in the nucleus and $\langle v^2 \rangle$ is the average value of the square of the WIMP velocity. In the case of the spin interaction we write: \beq R\simeq 16 \frac{t}{1 \mbox{y}} \frac{\rho(0)}{0.3~\mbox{GeV\,cm}^{-3}} \frac{m}{\mbox{1Kg}}\frac{ \sqrt{\langle v^2 \rangle }}{280~\mbox{km\,s}^{-1}}\frac{\sigma_{p,\chi^0}^{spin}}{10^{-2} \mbox{ pb}} f_{spin}(A, \mu_r(A)) \label{spineventrate} \eeq Note the different scale for the proton spin cross section. The parameters $f_{coh}(A,\mu_r(A))$ and $f_{spin}(A,\mu_r(A))$, which give the relative merit of the coherent and the spin contributions in the case of a nuclear target compared to those of the proton, have already been tabulated \cite{JDV06} for energy cutoffs $Q_{min}=0,~10$ keV. It is clear that for large $A$ the coherent process is expected to dominate, unless for some reason the scalar proton cross section is very suppressed.
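The energy cutoff $Q_{min}$ enters these tabulated factors through the kinematic bound $\upsilon_{min}=\sqrt{QAm_p/2}/\mu_r$ given in the previous section; a minimal sketch in natural units (the 100 GeV WIMP mass is an illustrative choice):

```python
import math

MP = 0.938       # proton mass in GeV
C_KMS = 2.998e5  # speed of light in km/s

def v_min_kms(Q_keV, A, m_chi_GeV):
    """v_min = sqrt(Q A m_p / 2) / mu_r, converted from units of c to km/s."""
    m_target = A * MP
    mu_r = m_chi_GeV * m_target / (m_chi_GeV + m_target)
    v_over_c = math.sqrt(Q_keV * 1e-6 * m_target / 2.0) / mu_r
    return v_over_c * C_KMS

# A 10 keV threshold on a Ge target (A = 73) for a 100 GeV WIMP:
print(v_min_kms(10.0, 73, 100.0))  # ~136 km/s, well below the escape velocity
```

Since this lies below the characteristic velocity $\upsilon_0$, a 10 keV threshold removes only part of the velocity distribution, but the loss grows rapidly for lighter WIMPs.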
In the case of directional experiments the event rate is given by Eqs.~(\ref{scalareventrate}) and (\ref{spineventrate}), except that now: \beq f_{coh}(A, \mu_r(A))=\frac{100\mbox{GeV}}{m_{\chi^0}}\left[ \frac{\mu_r(A)}{\mu_r(p)} \right]^2 A\frac{\kappa}{2 \pi}t_{coh}\left(1+h_m(coh)\cos{(\alpha+\alpha_m \pi)} \right) \eeq \beq f_{spin}(A, \mu_r(A))=\frac{100\mbox{GeV}}{m_{\chi^0}}\left[ \frac{\mu_r(A)}{\mu_r(p)} \right]^2 \frac{\kappa}{2 \pi}\frac{t_{spin}}{A}\left(1+h_m(spin)\cos{(\alpha+\alpha_m \pi)} \right) \eeq In the above expressions $h_m$ is the modulation amplitude and $\alpha _m$ the shift in the phase of the modulation (in units of $\pi$) relative to the phase of the Earth. $\kappa/(2 \pi)$, $\kappa\leq 1$, is the suppression factor entering due to the restriction of phase space. $\kappa$, $h_m$ and $\alpha_m$ depend on the direction of observation. It is precisely this dependence, as well as the large values of $h_m$, which can be exploited to reject background \cite{JDV06}, that makes the directional experiments quite attractive, in spite of the suppression factor relative to the standard experiments. \section{Bounds on the scalar proton cross section} Using the above formalism one can obtain the quantities of interest $t$ and $h$ both for the standard as well as the directional experiments. Due to lack of space we are not going to present the obtained results here; the interested reader can find some of these results elsewhere \cite{JDVSPIN04,JDV06}. Here we are simply going to show how one can employ such results to extract the nucleon cross section from the data. Due to space considerations we are not going to discuss the limits extracted from the data on the spin cross sections, since in this case one has to deal with two amplitudes (one for the proton and one for the neutron). We will only extract some limits imposed on the scalar nucleon cross section (the proton and neutron cross sections are essentially the same).
In what follows we will employ for all targets \cite{BCFS02}$^-$\cite{PAVAN01} the limit of CDMS II for the Ge target \cite{CDMSII04}, i.e. $<2.3$ events for an exposure of $52.5$ kg\,d with a threshold of $10$ keV. This event rate is similar to that for other systems \cite{SGF05}. The limits thus obtained are exhibited in Fig. \ref{b127.73}. For larger WIMP masses one can extrapolate these curves, assuming an increase as $\sqrt{m_{\chi}}$. \begin{figure} \rotatebox{90}{\hspace{0.0cm} $\sigma_p\rightarrow 10^{-5}$pb} \psfig{file=bcoh127.eps,width=2.0in} \rotatebox{90}{\hspace{0.0cm} $\sigma_p\rightarrow 10^{-5}$pb} \psfig{file=bcoh73.eps,width=2.0in} \hspace{-2.0cm} $m_{\chi}\rightarrow$ GeV \caption{ The limits on the scalar proton cross section for A$=127$ on the left and A$=73$ on the right as functions of $m_{\chi}$. The continuous (dashed) curves correspond to $Q_{min}=0~(10)$ keV respectively. Note that the advantage of the larger nuclear mass number of the A$=127$ system is counterbalanced by the favorable form factor dependence of the A$=73$ system.} \label{b127.73} \end{figure} \section{Transitions to excited states} The above formalism can easily be extended to cover transitions to excited states; only the kinematics and the nuclear physics are different. In other words one now needs: \begin{itemize} \item The inelastic scalar form factor.\\ The transition amplitude is nonzero due to the momentum transfer involved. The relevant multipolarities are determined by the spin and parity of the final state. \item Spin induced transitions.\\ In this case one can even have a Gamow-Teller like transition, if the final state is judiciously chosen. \end{itemize} In the case of $^{127}I$ the static spin matrix element involving the first excited state around 50 keV is about twice as large as that of the ground state \cite{VQS04}. The spin response function was assumed to be the same as that of the ground state. The results obtained \cite{VQS04} are shown in Fig.
\ref{ratio}. \begin{figure} \begin{center} \rotatebox{90}{\hspace{1.0cm} {\tiny BRR}$\rightarrow$} \includegraphics[height=.17\textheight]{ratio0.eps} \rotatebox{90}{\hspace{1.0cm} {\tiny BRR}$\rightarrow$} \includegraphics[height=.17\textheight]{ratioQ.eps}\\ \hspace{0.0cm}$m_{LSP}\rightarrow$ ($GeV$) \caption{ The ratio of the rate to the excited state divided by that of the ground state as a function of the LSP mass (in GeV) for $^{127}I$. We found that the static spin matrix element of the transition from the ground to the excited state is a factor of 1.9 larger than that involving the ground state, and assumed that the spin response functions $F_{11}(u)$ are the same. On the left we show the results for $Q_{min}=0$ and on the right for $Q_{min}=10$~keV. \label{ratio}} \end{center} \end{figure} These results are very encouraging, since, as we have mentioned, for heavier WIMPs like those involved in K-K theories the branching ratios are expected to be much larger. Thus one may consider such transitions, since the detection of de-excitation $\gamma$ rays is much easier than the detection of recoiling nuclei. \section{Other non recoil experiments} As we have already mentioned, the nuclear recoil experiments are very hard. It is therefore necessary to consider other possibilities. One such possibility is to detect the electrons produced during the WIMP-nucleus collisions \cite{VE05,MVE05}, employing detectors with a low energy threshold and a high $Z$ target. Better yet, one may attempt to detect the very hard X-rays generated when the inner shell electron holes are filled \cite{EMV05}. The relative X-ray to nuclear recoil probabilities $[Z \sigma _K /\sigma _r]_i$, for $i=L~(m_{\chi}\leq \mbox{100 GeV}),~M~(\mbox{100 GeV}\leq m_{\chi}\leq \mbox{200 GeV})$ and $H~(m_{\chi}\simeq \mbox{200 GeV})$, are shown in table \ref{table:X-rays}. For even heavier WIMPs, like those expected in K-K theories, the relative probability is expected to be even larger.
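Summing the tabulated ratios of table \ref{table:X-rays} gives the total probability that a WIMP-induced recoil in $^{131}$Xe is accompanied by some K X-ray; a minimal sketch using the table entries:

```python
# Total K X-ray emission probability per nuclear recoil in 131Xe,
# summing the K_alpha and K_beta ratios for light, medium and heavy WIMPs.
ratios = {
    "L": [0.0086, 0.0160, 0.0047, 0.0010],  # light (30 GeV)
    "M": [0.0560, 0.1036, 0.0303, 0.0067],  # medium (100 GeV)
    "H": [0.0645, 0.1196, 0.0350, 0.0077],  # heavy (300 GeV)
}
totals = {label: sum(values) for label, values in ratios.items()}
print(totals)  # ~3% (L), ~20% (M), ~23% (H) of recoils yield a K X-ray
```

Thus, for heavy WIMPs, roughly one recoil in four or five should be accompanied by a K X-ray, which is what makes this signature attractive.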
\begin{table} \begin{center} \caption{K X-ray cross sections relative to the nuclear recoil, rates and energies in WIMP-nucleus interactions with $^{131}$Xe. $[Z \sigma _K /\sigma _r]_L, [Z \sigma _K /\sigma _r]_M$ and $[Z \sigma _K /\sigma _r]_H$ are the ratios for light (30 GeV), medium (100 GeV) and heavy (300 GeV) WIMPs.} \vspace{0.5cm} \label{table:X-rays} \begin{tabular}{|ccccc|} \hline K X-ray & $E_K(K_{ij})$ keV & $[\frac{Z \sigma _K(K_{ij})}{\sigma _r}]_{L}$ & $[\frac{Z \sigma _K(K_{ij})}{\sigma _r} ]_{M} $ & $[\frac{Z \sigma _K(K_{ij})}{\sigma _r}]_{H} $ \\ \hline K$_{\alpha 2}$ & 29.5 & 0.0086 & 0.0560 & 0.0645 \\ K$_{\alpha 1}$ & 29.8 & 0.0160 & 0.1036 & 0.1196 \\ K$_{\beta 1}$ & 33.6 & 0.0047 & 0.0303 & 0.0350 \\ K$_{\beta 2}$ & 34.4 & 0.0010 & 0.0067 & 0.0077 \\ \hline \end{tabular} \end{center} \end{table} The K$_{\alpha}$ and K$_{\beta}$ lines can be separated experimentally by using good energy-resolution detectors, but the sum of all K lines can be measured in modest energy-resolution experiments. \section{Conclusions} We examined the various signatures expected in the direct detection of WIMPs via their interaction with nuclei. We especially considered WIMPs predicted in supersymmetric models (the LSP or neutralino) as well as in theories with extra dimensions. We presented the formalism for the modulation amplitude for non-directional as well as directional experiments. We discussed the role played by nuclear physics in the extraction of the nucleon cross sections from the data. We also considered non recoil experiments, such as measuring the $\gamma$ rays following the de-excitation of the nucleus and/or the hard X-rays emitted after the de-excitation of the inner shell electron holes produced during the WIMP-nucleus interaction. These are favored by very heavy WIMPs in the TeV region and velocity distributions expected in models allowing interaction of dark matter and dark energy.
{\bf Acknowledgments}: This work was supported in part by the European Union contract MRTN-CT-2004-503369. Special thanks to Professor Raduta for support and hospitality during the Predeal Summer School.
\section{Introduction} Superluminous supernovae (SLSNe) are extremely luminous explosions with absolute magnitudes of $\lesssim$$-21$~mag, which are $\sim$10--100 times brighter than typical Type Ia and core-collapse SNe \citep{gal12}. SLSNe are a new class of SNe that was discovered only recently by wide-field, untargeted, time-domain surveys (e.g., \cite{quim07, quim11}). They are detected from local ($z = 0.03$) to high-redshift galaxies ($z \sim 4$; \cite{cook12}), and therefore can be powerful indicators of environments in the distant universe. SLSNe are classified into two main subclasses depending on the presence of hydrogen signatures in the observed spectra: hydrogen-poor Type I (SLSN-I) and hydrogen-rich Type II (SLSN-II) \citep{gal12}. Due to their huge luminosity and scarcity, the physical nature of SLSNe is still a matter of debate, and SLSNe-I in particular are among the least understood SN populations. Spatially resolved observations of molecular gas provide the physical properties of the interstellar medium (ISM) in the local environment of stellar explosions, such as molecular gas content, star-formation efficiency, and velocity field (e.g., \cite{galb17, arab19, moro19}). \citet{arab19} conducted CO(1--0) observations of the host galaxy of a SLSN-II, PTF10tpz, at $z = 0.03994$ with the Atacama Large Millimeter/submillimeter Array (ALMA), and found that PTF10tpz is located close to the intersection of the gas lanes and the inner structure of the host galaxy. They suggested that in situ formation of massive stars due to the internal dynamics of the host galaxy and high densities are favorable conditions for the formation of SLSN progenitors. SN~2017egm/Gaia17biu at $z = 0.03063$, one of the closest SLSNe-I, was discovered on May 23, 2017 \citep{dong17, sdss17}.
The host galaxy, NGC~3191, is a massive spiral galaxy ($M_* = 5 \times 10^{10}$ $M_{\odot}$) with active star formation (SFR $\sim 5$--15 $M_{\odot}$~yr$^{-1}$) \citep{stol13, nich17, chen17a, bose18}. The metallicity at the SN site is (super-)solar ($\sim$1.3--2.6 $Z_{\odot}$; \cite{nich17, chen17a, bose18}), although one study reports a sub-solar metallicity (0.6 $Z_{\odot}$; \cite{izzo18}). It is notable that NGC~3191 also hosted two other SNe: SN~1988B (Type Ia) and PTF10bgl (Type II). SN 1988B was reported to be located $10''$ north of the galaxy center \citep{schi88, fill88}, although the precise location was not provided. PTF10bgl was located $\sim$10$''$ north-west of the galaxy center \citep{arca10}. This enables us to compare the environments of a SLSN-I and a Type II SN located in the same galaxy. In this Letter, we present the results of ALMA CO(1--0) observations of the host galaxy of SN~2017egm. This is the first study on molecular gas in a SLSN-I host galaxy. Throughout the paper, we adopt the cosmological parameters $H_0=67.8$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_{\rm{M}}=0.308$, and $\Omega_{\Lambda}=0.692$ \citep{plan16}. The luminosity distance to the host galaxy is 138.7 Mpc, and $1''$ corresponds to 0.65 kpc. \section{Observations and Results} \label{sec:observations} ALMA CO(1--0) observations were conducted on Mar. 28 and 29, 2019, for a Cycle 6 program (Project code: 2018.1.00370.S). The redshifted CO(1--0) line was observed with Band 3. The correlator was used in the time domain mode with a bandwidth of 1875 MHz (488.28~kHz $\times$ 3840 channels). Four basebands were used, providing a total bandwidth of 7.5 GHz. The array configuration was C43-2 with baseline lengths of 15.0--457.3 m. The number of available antennas was 46--48, and the on-source integration time was 79 min. Bandpass and flux calibrations were performed with J1058+0133, and phase calibrations with J0927+3902.
The data were reduced with Common Astronomy Software Applications (CASA; \cite{mcmu07}). Maps were processed with the \verb|tclean| task with \verb|Briggs| weighting and a \verb|robust| parameter of 0.5. The synthesized beamsize is $3\farcs9 \times 1\farcs8$ (2.6 kpc $\times$ 1.2 kpc) with a position angle of $-3.7^{\circ}$. The rms noise level is 1.5 mJy~beam$^{-1}$ for a spectrum with a velocity resolution of 5 km~s$^{-1}$. Figure~\ref{fig:map} shows the obtained maps of CO(1--0) velocity-integrated intensity, intensity-weighted velocity field, and intensity-weighted velocity dispersion. The CO emission is clearly detected with a smooth rotation signature, which is consistent with the H$\alpha$ IFU observations \citep{chen17a}. The bright CO peak $\sim$$7''$ west of the galaxy center is coincident with an {\sc Hii} region \citep{chen17a, izzo18} and with the brightest peak of a 10 GHz continuum map \citep{bose18}. SN~2017egm is located close to a bright CO blob east of the galaxy center. The CO emission is also detected at the location of PTF10bgl at the $\sim$$2\sigma$ level. \citet{izzo18} found a tangential or warp-like disturbance, based on a detailed kinemetric analysis of the H$\alpha$ map, and suggested that this could be a sign of interaction with its companion, MCG$+$08-19-017, at a projected distance of $\sim$45 kpc and a radial velocity difference of $\sim$200 km s$^{-1}$. We do not find any atypical feature in the CO maps around the location of SN~2017egm or PTF10bgl. \section{Discussion} \label{sec:discussion} \subsection{Host Galaxy} \label{sec:host} The CO luminosity of the host galaxy is calculated to be $L'_{\rm CO} = (1.1 \pm 0.1) \times 10^9$~K~km~s$^{-1}$~pc$^2$ following the equation of \citet{solo05}. The molecular gas mass is $M_{\rm gas} = (4.8 \pm 0.3) \times 10^9$~$M_{\odot}$ derived from $M_{\rm gas} = \alpha_{\rm CO} L'_{\rm CO}$, where $\alpha_{\rm CO}$ is the CO-to-H$_2$ conversion factor including the contribution of the helium mass.
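The CO luminosity follows from the standard relation of Solomon \& Vanden Bout (2005), $L'_{\rm CO} = 3.25\times10^7\,S_{\rm CO}\Delta v\,\nu_{\rm obs}^{-2}\,D_L^2\,(1+z)^{-3}$. A minimal sketch; the integrated flux used below is an illustrative value back-computed from the quoted luminosity, not the measured flux:

```python
# L'_CO and M_gas for an integrated CO(1-0) flux, following the standard
# relation of Solomon & Vanden Bout (2005):
#   L'_CO = 3.25e7 * S dv * nu_obs^-2 * D_L^2 * (1+z)^-3   [K km/s pc^2]
# The flux below is illustrative (it roughly reproduces the quoted L'_CO).

NU_REST = 115.271            # CO(1-0) rest frequency [GHz]
Z, D_L = 0.03063, 138.7      # redshift and luminosity distance [Mpc]
ALPHA_CO = 4.3               # Msun (K km/s pc^2)^-1, Galactic value

def lco_prime(sdv_jy_kms, z=Z, d_l=D_L):
    """CO line luminosity [K km/s pc^2] for an integrated flux [Jy km/s]."""
    nu_obs = NU_REST / (1.0 + z)          # observed frequency [GHz]
    return 3.25e7 * sdv_jy_kms * nu_obs ** -2 * d_l ** 2 * (1.0 + z) ** -3

lco = lco_prime(24.0)                     # assumed flux of 24 Jy km/s
m_gas = ALPHA_CO * lco                    # M_gas = alpha_CO * L'_CO
print(f"L'_CO = {lco:.2e} K km/s pc^2, M_gas = {m_gas:.2e} Msun")
```

With these inputs the sketch recovers $L'_{\rm CO} \approx 1.1 \times 10^9$ K km s$^{-1}$ pc$^2$ and $M_{\rm gas} \approx 4.7\times10^9\,M_\odot$, consistent with the quoted values.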
The conversion factor can vary with different environments (see, e.g., \cite{bola13} for a review). In particular, it is thought to depend on gas-phase metallicity, with $\alpha_{\rm CO}$ increasing with decreasing metallicity (e.g., \cite{wils95, bola13}). Because the host galaxy has a metallicity close to the solar value, we adopt a Galactic conversion factor of $\alpha_{\rm CO} = 4.3$ $M_{\odot}$~(K~km~s$^{-1}$~pc$^2$)$^{-1}$ (with 30\% uncertainty; \cite{bola13}). The derived physical quantities are presented in Table~\ref{tab:results}. Note that the errors take into account only flux measurement uncertainties. The molecular gas mass fraction ($\mu_{\rm gas} = M_{\rm gas}/M_*$) is 0.095, which is comparable to those of local star-forming galaxies with a similar stellar mass \citep{sain11, sain17, both14}. The molecular gas mass is compared with the SFR in Figure \ref{fig:mgas-sfr}. Because the SFR of the host galaxy ranges from 5 to 15 $M_{\odot}$~yr$^{-1}$ in the literature \citep{stol13, nich17, chen17a}, we show this range as a vertical bar in the plot. The host galaxy lies in the region occupied by local galaxies, on the sequence of normal star-forming galaxies. The gas depletion timescale ($\tau_{\rm gas} = M_{\rm gas}$/SFR) is 0.32--0.95 Gyr, which is comparable to those of local star-forming galaxies with a similar stellar mass \citep{both14, sain17}. The gas depletion timescale is also comparable to those of the host galaxies of PTF10tpz (SLSN-II; \cite{arab19}) and SN~2009bb (broad-line Ic SN; \cite{mich18}).
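The quoted depletion-timescale and efficiency ranges follow directly from the gas mass and the literature SFR range; a one-line check (the quoted 0.95 Gyr upper value reflects the unrounded gas mass):

```python
# Gas depletion timescale and star-formation efficiency from the derived
# molecular gas mass and the SFR range (5-15 Msun/yr) quoted in the text.

M_GAS = 4.8e9                                  # molecular gas mass [Msun]
SFR_LO, SFR_HI = 5.0, 15.0                     # literature SFR range [Msun/yr]

tau_hi = M_GAS / SFR_LO / 1e9                  # longest depletion time [Gyr]
tau_lo = M_GAS / SFR_HI / 1e9                  # shortest depletion time [Gyr]
sfe_lo, sfe_hi = 1.0 / tau_hi, 1.0 / tau_lo    # SFE = SFR / M_gas [1/Gyr]

print(f"tau = {tau_lo:.2f}-{tau_hi:.2f} Gyr, SFE = {sfe_lo:.1f}-{sfe_hi:.1f} /Gyr")
```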
\begin{table} \tbl{Derived properties of the host galaxy and of the SN~2017egm and PTF10bgl sites}{ \begin{tabular}{lll} \hline Host galaxy & $L'_{\rm CO}$ (K km s$^{-1}$~pc$^2$) & $(1.1 \pm 0.1) \times 10^9$ \\ & $M_{\rm gas}$ ($M_{\odot}$) & $(4.8 \pm 0.3) \times 10^9$ \\ & $\mu_{\rm gas}$\footnotemark[$*$] & 0.095 \\ & $\tau_{\rm depl}$\footnotemark[$\dag$] (Gyr) & 0.32--0.95 \\ & SFE\footnotemark[$\dag$] (Gyr$^{-1}$) & 1.0--3.1 \\ \hline SN~2017egm site & $N({\rm H_2})$ (cm$^{-2}$) & $(1.6 \pm 0.3) \times 10^{21}$ \\ & $\Sigma_{\rm gas}$ ($M_{\odot}$~pc$^{-2}$) & $35 \pm 6$ \\ \hline PTF10bgl site & $N({\rm H_2})$ (cm$^{-2}$) & $(5.6 \pm 2.7) \times 10^{20}$ \\ & $\Sigma_{\rm gas}$ ($M_{\odot}$~pc$^{-2}$) & $12 \pm 6$ \\ \hline \end{tabular}}\label{tab:results} \begin{tabnote} Errors take into account only flux measurement uncertainty (1$\sigma$). A Galactic CO-to-H$_2$ conversion factor of $\alpha_{\rm CO}= 4.3$ $M_{\odot}$~(K~km~s$^{-1}$~pc$^2$)$^{-1}$ is assumed. \\ \footnotemark[$*$] Molecular gas fraction ($M_{\rm gas}/M_*$). \\ \footnotemark[$\dag$] Gas depletion timescale ($\tau_{\rm depl} = M_{\rm gas}$/SFR) and star-formation efficiency (SFE $=$ SFR/$M_{\rm gas}$) assuming SFR = 5--15 $M_{\odot}$~yr$^{-1}$ based on the measurements in previous studies. \\ \end{tabnote} \end{table} \begin{figure} \begin{center} \includegraphics[width=.92\linewidth]{fig2.eps} \end{center} \caption{ Comparison of molecular gas mass and SFR. The vertical bar for the SN~2017egm host shows the range of SFR in the literature, while the horizontal bar shows the error caused by flux measurement uncertainty (1$\sigma$).
For comparison, we plot the PTF10tpz (SLSN-II) host galaxy \citep{arab19}, the SN~2009bb (broad-line Ic) host galaxy \citep{mich18}, the host galaxies of long-duration GRBs (arrows are upper limits) compiled by \citet{hats20}, local galaxies \citep{sain11, sain17, both14}, $z \sim 1$--2 main-sequence galaxies \citep{tacc13, seko16}, and submillimeter galaxies \citep{both13}. The solid and dashed lines represent gas depletion times of 0.1 and 1 Gyr, respectively. } \label{fig:mgas-sfr} \end{figure} \subsection{SLSN Site} \label{sec:site} The metallicity at the SN~2017egm site measured in previous studies is controversial. \citet{nich17} and \citet{chen17a} showed a (super-)solar metallicity of $12+\log{\rm (O/H)} = 8.8$ and $9.11$, respectively, using the $R_{23}$ diagnostic with the \citet{kobu04} calibration. \citet{bose18} also found a super-solar metallicity of $12+\log{\rm (O/H)} = 9.0$ using the [{\sc Nii}]/H$\alpha$ diagnostic with the \citet{naga06} calibration. On the other hand, \citet{izzo18} found a sub-solar metallicity of $12+\log{\rm (O/H)} = 8.49$ and 8.45 using the N2 and O3N2 diagnostics, respectively, based on the calibrations of \citet{mari13}. It is known that metallicity diagnostics are uncertain (e.g., \cite{kewl08}), and the differences among the previous studies can be due to the different diagnostics \citep{chen17a, izzo18}. In order to see the effect of metallicity on $\alpha_{\rm CO}$, we apply the relation between metallicity and $\alpha_{\rm CO}$ of \citet{genz15}, who took the geometric mean of the empirical relations of \citet{genz12} and \citet{bola13} and derived the relation for local and high-redshift samples. To apply the relation, we convert the metallicity to the calibration of \citet{pett04} by using the metallicity conversion of \citet{kewl08}. The derived metallicity-dependent $\alpha_{\rm CO}$ is 3.4--6.6 $M_{\odot}$~(K~km~s$^{-1}$~pc$^2$)$^{-1}$.
In the following discussions, we assume a Galactic $\alpha_{\rm CO}$ of 4.3 $M_{\odot}$~(K~km~s$^{-1}$~pc$^2$)$^{-1}$ (the corresponding $X_{\rm CO}$ is $2 \times 10^{20}$ cm$^{-2}$ (K km s$^{-1}$)$^{-1}$), which is in the range of the metallicity-dependent conversion factor and is used in previous studies on the host galaxies of SNe \citep{galb17, mich18, arab19}\footnote{\citet{mich18} assumed a Galactic conversion factor of $\alpha_{\rm CO} = 5$ $M_{\odot}$~(K~km~s$^{-1}$~pc$^2$)$^{-1}$.}. The column densities of molecular gas at the positions of SN~2017egm and PTF10bgl are $N({\rm H_2}) = (1.6 \pm 0.3) \times 10^{21}$ cm$^{-2}$ and $(5.6 \pm 2.7) \times 10^{20}$ cm$^{-2}$, respectively. Here we adopt the same $\alpha_{\rm CO}$ for both SN sites, because \citet{izzo18} found that the metallicities at the sites are similar. We note that even if we assumed the higher $\alpha_{\rm CO}$, the following discussions and conclusions would not change. The column density at the SN~2017egm site is found to be higher than that at the PTF10bgl site by a factor of three. We compare the column densities with the results of spatially resolved CO(1--0) observations of host galaxies of Type Ia, Ibc/IIb, and II SNe in \citet{galb17}. Figure~\ref{fig:nH2} shows the cumulative distributions of $N({\rm H_2})$ for the SNe. The vertical lines represent the values for SN~2017egm and PTF10bgl obtained in this study. We find that the column density at the SN~2017egm site is higher than those of SNe Ia and II, suggesting that SLSN-I progenitors have a preference for a higher-molecular-gas-density environment. A high surface density of molecular gas was also reported for PTF10tpz, a SLSN-II, by \citet{arab19}. This appears to suggest that a dense molecular gas environment is an important factor for producing SLSN progenitors.
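The site quantities follow from the velocity-integrated CO intensity through $X_{\rm CO}$ (column density) and $\alpha_{\rm CO}$ (surface density). A sketch in which the intensity is an illustrative value back-computed from the quoted SN~2017egm numbers, not the measured one:

```python
# Column density and gas surface density at a SN site from the CO(1-0)
# velocity-integrated intensity.  The intensity below is illustrative
# (back-computed to reproduce the quoted SN 2017egm values).

X_CO = 2.0e20        # cm^-2 (K km/s)^-1, Galactic
ALPHA_CO = 4.3       # Msun (K km/s pc^2)^-1 (includes helium)

def site_properties(i_co_k_kms):
    """Return (N_H2 [cm^-2], Sigma_gas [Msun/pc^2]) for I_CO in K km/s."""
    return X_CO * i_co_k_kms, ALPHA_CO * i_co_k_kms

n_h2, sigma_gas = site_properties(8.0)    # assumed site intensity [K km/s]
print(f"N(H2) = {n_h2:.1e} cm^-2, Sigma_gas = {sigma_gas:.0f} Msun/pc^2")
```

Note that $\Sigma_{\rm gas}$ includes helium through $\alpha_{\rm CO}$ while $N({\rm H_2})$ does not; with a helium factor of 1.36 the two numbers are mutually consistent.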
\begin{figure} \begin{center} \includegraphics[width=.86\linewidth]{fig3.eps} \end{center} \caption{ Cumulative distribution of molecular column density $N({\rm H_2})$ for the three SN types (Ia, Ibc/IIb, and II) including upper limits derived from spatially resolved observations of SN hosts by \citet{galb17}. Vertical lines represent the column densities at the positions of SN~2017egm and PTF10bgl. Shaded regions show errors caused by flux measurement uncertainty (1$\sigma$). } \label{fig:nH2} \end{figure} \begin{figure} \begin{center} \includegraphics[width=.92\linewidth]{fig4.eps} \end{center} \caption{ Map of star-formation efficiency (SFR/$M_{\rm gas}$) in the host galaxy derived from the molecular gas surface density map based on the CO(1--0) map and the SFR map based on the MaNGA H$\alpha$ observations \citep{chen17a}. Only the region where the CO(1--0) velocity-integrated intensity map is above $2\sigma$ is presented. The spatial resolution is shown in the lower-left corner. } \label{fig:sfe} \end{figure} On the other hand, the column density at the SN~2017egm site is comparable to the median value of Type Ibc/IIb SNe ($N({\rm H_2}) = 1.5 \times 10^{21}$ cm$^{-2}$ for six CO-detected SN sites; \cite{galb17}). The molecular gas surface density is an order of magnitude lower than at the PTF10tpz site ($\Sigma_{\rm gas} \sim 700$ $M_{\odot}$~pc$^{-2}$ over $\sim$350 pc scale; \cite{arab19}), where the SLSN occurred near the intersection region of gas lanes and the inner structure in the host galaxy. Note that although the gas surface density at the PTF10tpz site is corrected for the inclination of the host galaxy \citep{arab19}, its large inclination angle of $68^{\circ}$ makes it difficult to estimate the actual column density. Figure~\ref{fig:sfe} shows the map of the star-formation efficiency (SFE $=$ SFR/$M_{\rm gas}$) in the host galaxy.
The map is created from the molecular gas surface density map based on our CO(1--0) observations and the SFR map based on the MaNGA H$\alpha$ observations by \citet{chen17a}. Each map is convolved with the beam of the other to match the spatial resolutions. The SFE at the location of SN~2017egm does not appear to be special within the host galaxy. This is illustrated in Figure~\ref{fig:sigmaGas-sigmaSFR}, which compares the surface densities of the molecular gas and the SFR. The pixel-by-pixel variations within the host galaxy are plotted. We used the region where the CO(1--0) velocity-integrated intensity map is above $2\sigma$. We also compare with the results of spatially resolved (kpc-scale) observations of local star-forming galaxies. The location of SN~2017egm in Figure~\ref{fig:sigmaGas-sigmaSFR} is consistent with the kpc-scale properties of local spiral galaxies and with the Schmidt--Kennicutt relation. This suggests that SLSNe can occur in environments that follow the same star-formation law as normal star-forming galaxies. It is not known whether the environment of SN~2017egm can be regarded as representative of SLSNe. The stellar mass of the host is atypical among SLSN hosts, but is comparable to those of the hosts of Type Ib or Ic SNe (excluding the broad-line type) (e.g., \cite{kell12}). A similarity between the environments of Type Ibc SNe and SN~2017egm is also found in this study in terms of the molecular hydrogen column density. This could indicate that the progenitors of SLSNe-I are an extension of Type Ibc SNe. Because observations of molecular gas in the environments of SLSNe are very limited, it is important to increase the sample size to achieve a better understanding of SLSNe. \begin{figure} \begin{center} \includegraphics[width=.92\linewidth]{fig5.eps} \end{center} \caption{ Comparison of molecular gas mass surface density and SFR surface density.
The data points for the SN~2017egm host galaxy are measured at pixels where the CO(1--0) velocity-integrated intensity map is above $2\sigma$. For comparison, we plot other types of galaxies from the literature for which size measurements are available: local spirals \citep{kenn98a, bigi10} and local LIRGs \citep{kenn98a}. The dashed line represents the relation of \citet{kenn98b}. } \label{fig:sigmaGas-sigmaSFR} \end{figure} \begin{ack} We thank the referee for helpful comments and suggestions. We would like to acknowledge Patricia Schady and Janet Ting-Wan Chen for providing their MaNGA data. We are grateful to the PDJ collaboration for providing opportunities for fruitful discussions. BH is supported by JSPS KAKENHI Grant Number 19K03925. This work is supported by the ALMA Japan Research Grant of NAOJ Chile Observatory (NAOJ-ALMA-239). This paper makes use of the following ALMA data: ADS/JAO.ALMA\#2018.1.00370.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. The Pan-STARRS1 Surveys (PS1) and the PS1 public science archive have been made possible through contributions by the Institute for Astronomy, the University of Hawaii, the Pan-STARRS Project Office, the Max-Planck Society and its participating institutes, the Max Planck Institute for Astronomy, Heidelberg and the Max Planck Institute for Extraterrestrial Physics, Garching, The Johns Hopkins University, Durham University, the University of Edinburgh, the Queen's University Belfast, the Harvard-Smithsonian Center for Astrophysics, the Las Cumbres Observatory Global Telescope Network Incorporated, the National Central University of Taiwan, the Space Telescope Science Institute, the National Aeronautics and Space Administration under Grant No.
NNX08AR22G issued through the Planetary Science Division of the NASA Science Mission Directorate, the National Science Foundation Grant No. AST-1238877, the University of Maryland, Eotvos Lorand University (ELTE), the Los Alamos National Laboratory, and the Gordon and Betty Moore Foundation. \end{ack}
\section{INTRODUCTION} A high-resolution spectrograph in the near-infrared (NIR) wavelength range is a powerful tool to explore a variety of astronomical objects from planets to cosmological objects by measuring chemical abundances and gas dynamics with atomic and/or molecular lines. We are developing a NIR high-resolution spectrograph WINERED (Warm Near infrared Echelle spectrograph to Realize Extreme Dispersion; Ikeda~et~al. 2006\cite{Ikeda+2006}, Yasui~et~al. 2006\cite{Yasui+2006}, 2008\cite{Yasui+2008}). The primary objective of WINERED is to realize a {\it NIR high-resolution spectrograph with high sensitivity} by achieving a high throughput ($>25-30\%$), which is about twice as high as those of conventional high-resolution spectrographs. WINERED has a wide wavelength coverage mode, ``Wide-Mode'', with a normal reflective echelle grating, which can simultaneously cover a wide wavelength range (0.9-1.35\,${\rm \mu m}$) with a resolving power that is comparable to those of many IR high-resolution spectrographs (R$\sim$29,000; Yasui~et~al. 2006\cite{Yasui+2006}). WINERED also aims for a very high spectral resolution (R$\sim$100,000) with a ZnSe or ZnS immersion grating (``High-Resolution-Mode''), which is currently under development\cite{Ikeda+2009,Ikeda+2010}. Because the wavelength range of WINERED is limited to 0.9-1.35 ${\rm \mu m}$, where the ambient thermal background is very small, a warm optical system with no cold stop can be realized. Because of the compact design (the size is 1.8m $\times$ 1.0m $\times$ 0.5m and the total weight is $\sim$250 kg), WINERED, which is now located at the Nasmyth platform of the 1.3 m Araki Telescope at Koyama Astronomical Observatory of Kyoto-Sangyo University, can be moved to larger telescopes as a PI-type instrument. This paper is structured as follows: \S2 shows the optical performance of WINERED from the engineering observations. \S3 briefly presents our science grade array and its cassette.
\S4 shows the results of the ambient background measurement. \S5 presents the detection limits of WINERED. In \S6, we comment on our future plan. \section{Optical performances} \subsection{Overview} WINERED has two observational modes: one is the wide-wavelength-coverage mode (``Wide-Mode'') covering 0.90-1.35\,${\rm \mu m}$ in one exposure with R=28,300 using a reflective echelle grating. The other is the high-resolution mode (``High-Resolution-Mode''), which has two settings, $Y$ and $J$, covering 0.96-1.13\,${\rm \mu m}$ and 1.12-1.35\,${\rm \mu m}$, respectively, with R=103,000 using a ZnSe or ZnS immersion grating. The optical configuration of WINERED is shown in Figure 2 of Yasui et al. (2008)\cite{Yasui+2008}. Overall specifications and optical parameters are summarized in Tables \ref{tab:spec} and \ref{tab:optical_para}, respectively. At present, WINERED has been completed except for the immersion grating. WINERED is mounted on the Nasmyth focus of the 1.3\,m Araki Telescope of Koyama Astronomical Observatory (KAO) at Kyoto-Sangyo University in Kyoto, Japan, and has started engineering and science observations with Wide-Mode (Figure \ref{fig:winered_araki}). Almost all optical components are in the ambient environment at room temperature except for the camera lenses and the infrared array, which are operated at $\sim$90 K and 70 K in a cryostat, respectively. \begin{figure}[!hbt] \begin{center} \includegraphics[scale=0.28]{./SBSH0095.ps} \includegraphics[scale=0.8]{./fig_cryostat.eps} \caption{The 1.3\,m Araki Telescope at KAO (left), and WINERED installed on the Nasmyth platform of the telescope (right). The cover of WINERED is removed for viewing purposes. The slit, the collimator, the echelle grating, and the cross-disperser (VPH) are in the ambient environment at room temperature.
The inset in the top-left corner shows WINERED with its cover installed.\label{fig:winered_araki}} \end{center} \end{figure} \begin{table}[!ht] \begin{center} \small \begin{tabular} {ccc} \hline \hline &High-Resolution-Mode&Wide-Mode\\ \hline Maximum spectral resolution & 103,000 (2-pix sampling)&28,300 (2-pix sampling)\\ Wavelength coverage&$Y$: 0.96-1.13\,${\rm \mu m}$ & 0.90-1.35\,${\rm \mu m}$\\ &$J$: 1.12-1.35\,${\rm \mu m}$&\\ Volume&\multicolumn{2}{c}{1800 mm(L) $\times$ 1000 mm(W) $\times$ 500 mm(H)}\\ \hline \end{tabular} \caption{WINERED basic specifications.\label{tab:spec}} \end{center} \end{table} \begin{table}[!hbt] \begin{center} \small \begin{tabular} {ccccc} \hline \hline & &High-Resolution-Mode& Wide-Mode\\\hline Slit &Width& \multicolumn{2}{c}{100, 200, 400 $\mu$m}\\ &Length &\multicolumn{2}{c}{3.12 mm} \\ \hline Collimator &Focal length& \multicolumn{2}{c}{770 mm}\\ &Clear aperture&\multicolumn{2}{c}{84 mm}\\ \hline Echelle&Type&ZnSe or (ZnS) immersion grating&classical echelle grating&\\ &Blaze angle& 70 deg.& 63.9 deg.\\ &Groove density & 31.80 gr/mm&31.60 gr/mm\\\hline Cross-disperser &Frequency &710 lines/mm ($Y$)& 280 lines/mm\\ &&510 lines/mm ($J$)&\\ &Bragg angle&20.8 deg. ($Y$)&9.3 deg.\\ &&17.9 deg.
($J$)&\\ \hline Camera&Focal length& \multicolumn{2}{c}{266.80 mm}\\ &Clear aperture&\multicolumn{2}{c}{128.25 mm}\\ \hline Detector &Array format& \multicolumn{2}{c}{2k$\times$2k (Teledyne, HAWAII-2RG)}\\ &Pixel size&\multicolumn{2}{c}{${\rm 18\, \mu m \times 18\, \mu m}$}\\ &Cut-off wavelength&\multicolumn{2}{c}{1.76 ${\rm \mu m}$}\\ \hline Slit viewer&FOV&\multicolumn{2}{c}{${\rm 4.8'\times 3.5'}$ (w/ the 1.3\, m Araki Telescope)}\\ &Wavelength region&\multicolumn{2}{c}{${\rm 0.6-0.9\, \mu m }$}\\ \hline Artificial light source&&\multicolumn{2}{c}{Th-Ar (for wavelength calibration)}\\ &&\multicolumn{2}{c}{Halogen lamp}\\ \hline \end{tabular} \caption{Optical parameters of WINERED.\label{tab:optical_para}} \end{center} \end{table} \subsection{Coverage} Figure \ref{fig:echelle_format} shows the echellograms of $\alpha$ Boo (Arcturus) and a flat-lamp obtained with Wide-Mode. We confirmed that the entire wavelength range, 0.90-1.35 ${\rm \mu m}$ (m=41-61), is covered in a single exposure by investigating the echellogram of the Th-Ar comparison lamp. Figure \ref{fig:winered_spectrum} shows the Wide-Mode spectra of an A0V star (HIP 58001) and P Cyg, which show broad hydrogen absorption lines and strong emission lines, respectively. This wide wavelength range of about 4,500 {\r A} can be covered in one exposure, which should enable extensive classification of a variety of astronomical objects. \begin{figure}[!h] \begin{center} \includegraphics[scale=1.5]{arcturus_obj-sky_20120704_mini.eps} \includegraphics[scale=1.5]{flat_20120704_mini.eps} \caption{Echellogram of $\alpha$ Boo (left) and a flat-lamp (right). The faint spectra seen between low orders are the ghosts from the 2nd-order lines of the VPH cross-disperser (HAWAII-2RG is also sensitive to optical wavelengths). However, because the ghosts are well separated from the object spectrum, they do not cause any critical problem.
\label{fig:echelle_format}} \end{center} \end{figure} \begin{figure}[!ht] \begin{center} \includegraphics[scale=0.8]{std_all_v3.eps}\\ \vspace{0.3cm} \includegraphics[scale=0.8]{pcyg_v3.eps} \caption{Top panel: the spectrum of the star HIP 58001 (A0V). Broad Pa$\beta,\gamma,\delta$ absorption lines are clearly seen. The strong telluric absorption features due to water vapor are seen between the $z$, $Y$, and $J$-bands. Bottom panel: the spectrum of P Cyg. Pa$\beta,\gamma,\delta$ emission lines as well as very strong HeI emission show clear P Cygni profiles. \label{fig:winered_spectrum}} \end{center} \end{figure} \clearpage \subsection{Spectral Resolution} We measured the spectral resolution of Wide-Mode using the Th-Ar lamp. The spectral resolution is derived from the FWHM of single Th-Ar emission lines. Figure \ref{fig:resolution} shows the obtained spectral resolution as a function of wavelength. We confirmed that the designed spectral resolving power ($R=28,300$) is achieved throughout the entire wavelength range. \begin{figure}[!hbt] \begin{center} \includegraphics[scale=1.3]{fig_resolution.eps} \caption{Measured spectral resolution for Wide-Mode. The black points show the measured values. The solid line shows the target spectral resolution, which is defined by 2-pixel sampling.\label{fig:resolution}} \end{center} \end{figure} \subsection{Throughput} In order to estimate the throughput of WINERED, we observed a photometric standard star (HD87822), which is listed in the IRTF Spectral Library\cite{Rayner+2009}, with the 400~${\rm \mu m}$ (=6$^{\prime\prime}$.6) wide slit to avoid flux loss at the slit, during an engineering observation using our engineering-grade (EG) array.
We assumed the telescope efficiency, determined by the reflectance of the mirrors and by vignetting at the baffle of the secondary mirror and at the pupil aperture (the latter because WINERED is designed for f/11 telescopes, while the f-number of the Araki Telescope is 10), to be about 0.5 based on past measurements of the telescope. The atmospheric absorption at the KAO site is calculated with the LBLRTM code (Clough et al. 2005\cite{Clough+2005}) accessing the HITRAN database (Figure \ref{fig:winered_throughput}: bottom panel). The obtained throughput of the optics as a function of wavelength is shown in Figure \ref{fig:winered_throughput}. The black curve shows the throughput of the optics alone, while the red curve additionally assumes the QE of the SG array. The throughput including the array QE is found to be over 40\% in the $J$-band, as designed. However, the throughput at shorter wavelengths is unexpectedly degraded (down to 20\% in the $z$-band). We suspect that aerosol scattering is more efficient in the actual urban environment than assumed in our calculation, but further investigation is necessary. \begin{figure}[!hbt] \begin{center} \includegraphics[scale=1]{throughput_all_edit0715.eps} \caption{Estimated throughput of WINERED for Wide-Mode using the EG array. The top panel shows the throughput (Black: WINERED optics only; Red: WINERED optics times the QE of the SG array; Dark gray: as observed with the EG array, whose QE is 30-60\% according to Teledyne Inc.). The bottom panel shows the assumed telluric absorption spectrum for estimating the throughput.\label{fig:winered_throughput}} \end{center} \end{figure} \section{Infrared array} We use a 1.7 ${\rm \mu m}$ cut-off 2k$\times$2k HAWAII-2RG array\cite{Beletic+2008} to suppress the ambient thermal background at wavelengths beyond the $H$-band, and a SIDECAR ASIC and JADE2\cite{Loose+2007} for the readout electronics. A science grade (SG) array has been installed.
\subsection{Array Cassette} Figure \ref{fig:winered_fig_mecha} shows the new design of our array cassette. We designed this cassette for safe assembly, relief of thermal stress, and efficient cooling to the target temperature. \begin{figure}[!h] \begin{center} \includegraphics[scale=0.4]{./cassette_nakanishi2012_0710_edit_sk4edit.eps} \caption{Array cassette.\label{fig:winered_fig_mecha}} \end{center} \end{figure} \subsection{Performance of the SG array} The performances of the SG array are summarized in Table \ref{tab:detector}. The quantum efficiency (QE) was measured by Teledyne Inc. Readout noise was measured from the variance of dark frames with a short integration time (15 sec), for which Poisson noise from the dark electrons is negligible. With Fowler sampling, the readout noise decreases from 19.2$\pm$ 2.9 $[{\rm e^{-}}]$ (NDR=1) to 5.3$\pm$ 1.0 $[{\rm e^{-}}]$ (NDR=32). The dark current was estimated from the ramp sampling over 1,500 sec and is found to be ${\rm 7.6\pm 0.2\times 10^{-3}\, [e^{-}/s]}$. We conclude that this SG array meets our specifications. The conversion gain is set to be 2.27 ${\rm e^{-}/ADU}$ for a detector bias of 0.25 V. Readout time is about 1.45 sec per frame for the 32-ch output operation mode with a 100 kHz pixel rate. The detector is reset 4 times before readout, so it takes at least 10 sec to obtain one frame even for the minimum integration time. The counts of the output frame are corrected with those of the reference pixels. To reduce readout noise, we use Fowler sampling with 2, 4, 8, or 16 non-destructive reads depending on the integration time during actual observations.
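The dark-frame variance method can be sketched with synthetic frames (numpy, not WINERED data): differencing two frames cancels fixed-pattern structure, and the per-frame noise is the standard deviation of the difference divided by $\sqrt{2}$, converted to electrons with the gain:

```python
import numpy as np

# Sketch of the read-noise estimate from short dark frames (synthetic data).
# Subtracting two frames cancels the fixed-pattern offsets; the per-frame
# CDS noise is std(difference) / sqrt(2) * gain.

rng = np.random.default_rng(0)
GAIN = 2.27                  # conversion gain [e-/ADU], as quoted in the text
RN_E = 19.2                  # injected CDS read noise [e-] (the NDR=1 value)
SHAPE = (1024, 1024)         # reduced from 2k x 2k to keep the sketch light

fixed_pattern = rng.normal(1000.0, 50.0, size=SHAPE)   # pixel offsets [ADU]

def dark_frame():
    """One synthetic short dark: fixed pattern plus Gaussian read noise."""
    return fixed_pattern + rng.normal(0.0, RN_E / GAIN, size=SHAPE)

diff = dark_frame() - dark_frame()
rn_est = diff.std() / np.sqrt(2.0) * GAIN
print(f"estimated readout noise: {rn_est:.1f} e-")
```

For reference, ideal Fowler sampling would scale this as $1/\sqrt{N_{\rm read}}$ ($19.2/\sqrt{32} \approx 3.4\,{\rm e^-}$), so the measured 5.3 ${\rm e^-}$ at NDR=32 sits somewhat above the ideal scaling.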
\begin{table}[h] \begin{center} \small \begin{tabular} {cccccc} \hline \hline QE [\%]& Readout noise (NDR=1) [${\rm e^-}$]& Readout noise (NDR=32) [${\rm e^-}$] &Dark [${\rm e^-/s}$]& Full well [${\rm e^-}$]& \\ \hline 63-114 & 19.2$\pm$2.9& 5.3$\pm$1.0& $7.6\pm 0.2 \times 10^{-3}$& $1.4\times 10^5$& \\ \hline \end{tabular} \caption{Science grade array performance. QE is provided by Teledyne Inc. The uncertainty of the QE is probably over 10\% (from Teledyne Inc.). \label{tab:detector}} \end{center} \end{table} \section{Ambient thermal background} All optical components except for the camera lens and the infrared array are placed at ambient temperature. To block the ambient thermal background beyond 1.35\,${\rm \mu m}$ as much as possible, a thermal cut filter is coated on the cold camera lens in front of the array (Yasui et al. 2008\cite{Yasui+2008}), and additional thermal blockers (PK50 and a custom thermal cut filter) are installed. When the ambient temperature is sufficiently low, the noise from the ambient thermal background is expected to be less than the readout noise $({\rm \sim 5\,e^-})$ by the combination of the thermal cut filter, the thermal blockers, and the 1.7\,${\rm \mu m}$ cut-off array. We measured the ambient thermal background by putting a black cover on the window of the cryostat so that the detector looks at a black body at room temperature. We confirmed that light leakage is negligible for this measurement. A cold mask with two holes at the center/edge was installed at a distance of 4 mm from the array. The hole diameter is 3.2 mm, chosen so that there is no vignetting over the full FOV of the camera lens. We measured the ambient thermal background in the bright regions and simultaneously estimated the dark current and detector bias in the regions shadowed by the mask, which were found to be negligible.
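The strong temperature dependence of this background is what the Planck law predicts for the residual leak band between the 1.35 ${\rm \mu m}$ blocking edge and the 1.76 ${\rm \mu m}$ array cutoff. A rough sketch in relative units (absolute rates would require the actual filter blocking and pixel étendue, which are not given here):

```python
import math

# Relative in-band blackbody photon rate for the residual leak band
# between the 1.35 um blocking edge and the 1.76 um array cutoff.
# Only the 299 K / 287 K ratio is meaningful in this sketch.

H_C_OVER_K = 14387.77          # hc/k_B [um K]

def band_photon_integral(temp_k, lam_lo=1.35, lam_hi=1.76, steps=2000):
    """Midpoint integration of the photon form of Planck's law (arb. units)."""
    dlam = (lam_hi - lam_lo) / steps
    total = 0.0
    for i in range(steps):
        lam = lam_lo + (i + 0.5) * dlam
        total += dlam / (lam ** 4 * (math.exp(H_C_OVER_K / (lam * temp_k)) - 1.0))
    return total

ratio = band_photon_integral(299.0) / band_photon_integral(287.0)
print(f"predicted background ratio, 299 K vs 287 K: {ratio:.1f}")
```

The predicted factor of roughly three between 299 K and 287 K is of the same order as the measured increase of the background over that range.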
\begin{figure}[!htb] \begin{center} \includegraphics[scale=0.8]{plot_background_all_2014_12_18.eps} \caption{Measured ambient thermal backgrounds. The black points are the measured values. The black line is the expected ambient thermal background in the optimum case. The red lines show an equivalent readout noise for 1800 sec and the dark current. The hatched region shows the nominal operating temperature of WINERED. \label{fig:temp-count}} \end{center} \end{figure} We measured the ambient thermal background only at lab temperatures (287--299 K), which are higher than the typical operation temperatures we expect on the telescopes. The relation between the ambient temperature and photon counts is shown in Figure \ref{fig:temp-count}. This figure shows that the measured ambient background counts are well correlated with the ambient temperatures (${\rm \sim 0.06\, [e^-/sec/pix]}$ at 287 K and ${\rm \sim 0.14\, [e^-/sec/pix]}$ at 299 K) and are roughly consistent with those expected at these temperatures. There are some differences in counts between the two holes, but their ratio is almost constant at all temperatures. The noise from the ambient thermal background is expected to be less than the readout noise below $\sim$280 K if the ambient thermal background decreases with decreasing temperature as in the optimal case. To further verify this expectation, we will measure ambient thermal backgrounds at low temperatures ($<$ 280 K) in the telescope dome during the cold winter months. \section{Detection Limit} Table \ref{tab:detection_limit} summarizes the estimated limiting magnitudes of WINERED for various telescopes. The ambient background count depends strongly on the environment. For the Araki Telescope, we adopt the ambient thermal background at around 290 K, which is roughly the average ambient temperature throughout the year. For the other telescopes, we adopt the ambient thermal background at 273 K.
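The 273 K values rely on extrapolating the measured backgrounds under the assumption that their logarithm is linear in temperature. A two-point sketch using only the endpoint measurements quoted above (the actual extrapolation presumably fits all measured points, so this is indicative only):

```python
import math

# Log-linear extrapolation of the measured ambient background
# (~0.06 e-/s/pix at 287 K, ~0.14 e-/s/pix at 299 K) down to 273 K,
# assuming log10(background) is linear in temperature.  Two-point
# version; indicative only.

T1, B1 = 287.0, 0.06
T2, B2 = 299.0, 0.14

slope = (math.log10(B2) - math.log10(B1)) / (T2 - T1)   # dex per K

def background(temp_k):
    return 10.0 ** (math.log10(B1) + slope * (temp_k - T1))

print(f"extrapolated background at 273 K: {background(273.0):.3f} e-/s/pix")
```

A two-point fit like this gives a few times $10^{-2}$ ${\rm e^-/s/pix}$ at 273 K; a fit to all measured points can yield a lower value, so the sketch only illustrates the scaling.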
The value is extrapolated from our measured thermal background at 287--299 K, assuming that the logarithm of the ambient thermal background decreases linearly with decreasing temperature. Table \ref{tab:detection_limit} shows that goal magnitudes are achieved if ambient backgrounds decrease as we expect. If WINERED is installed on a 10 meter telescope, the limiting magnitude is expected to be ${\rm m_J}$=18-19, which can provide high-resolution spectra with high quality even for faint objects. \begin{table}[!h] \begin{center} \small \begin{tabular} {ccccc} \hline \hline &Araki1.3m&WHT4.2m&Magellan6.5m&Keck10m\\ \hline Location &KAO&Roque de los Muchachos&Las Campanas&Mauna Kea\\ &Kyoto, Japan&La Palma, Spain&Chile&Hawaii\\ \hline Seeing& $3^{\prime\prime}.0$&$0^{\prime\prime}.8$&$0^{\prime\prime}.6$&$0^{\prime\prime}.4\ (0^{\prime\prime}.2)$\\ \hline Pixel scale (${\rm /pix}$)&$0^{\prime\prime}.82$&0$^{\prime\prime}$.23&0$^{\prime\prime}$.15&0$^{\prime\prime}$.098\\ Slit width for $R_{max}$&1$^{\prime\prime}$.65&0$^{\prime\prime}$.47&0$^{\prime\prime}$.30&0$^{\prime\prime}$.20\\ \hline Ambient Temperature (K)&290&273&273&273\\ Thermal Background (${\rm e^- /s /pixel}$) &0.08&0.01&0.01&0.01\\ \hline Goal $m_{J}$&12.8&15.9&16.5&17.6 (18.4)\\ $m_{J}$&13.1&16.6&17.4&18.3 (19.1)\\ \hline \hline \end{tabular} \caption{Estimated detection-limit of WINERED of Wide-Mode in $J$-band for the total integration time of 8 hrs (1800 sec$\times$16) and S/N=30. Goal $m_J$, and $m_J$ are the magnitudes when the parameters (e.g., throughput, QE of a detector, and the ambient background) of Ikeda et al. (2006)\cite{Ikeda+2006} and of this paper are assumed, respectively. For the case with the Keck telescope, the use of a focal reducer from f/15 to f/11 is assumed. 
In the Keck column, the values in parentheses are for observations with AO.\label{tab:detection_limit}} \end{center} \end{table} \section{Current status and future plan} Since the first light on May 23, 2012, we have conducted engineering and science observations four times and obtained $\sim$200 spectra of a variety of astronomical objects. The ZnSe or ZnS immersion grating is being developed, and the details will be reported elsewhere. We plan to fabricate the final large immersion grating (probably with ZnSe) and to install it to complete the High-Resolution-Mode of WINERED. \acknowledgments We are grateful to Y. Shinnaka for providing us with the atmospheric absorption spectrum at the KAO site. We would like to thank the staff of KAO, Kyoto-Sangyo University. This study was financially supported by KAKENHI (16684001) Grant-in-Aid for Young Scientists (A), KAKENHI (20340042) Grant-in-Aid for Scientific Research (B), KAKENHI (26287028) Grant-in-Aid for Scientific Research (B), and KAKENHI (21840052) Grant-in-Aid for Young Scientists (Start-up). This study has also been financially supported by the MEXT-Supported Program for the Strategic Research Foundation at Private Universities, 2008-2012 (No. S0801061) and 2014-2018 (No. S1411028). S.H. is supported by Grant-in-Aid for JSPS Fellows Grant Number 13J10504.
\subsection{Signal} \begin{figure*}[bpt] \centering \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[width=\columnwidth]{Figures/charge/inPxl_charge_epi_202202.pdf} \caption{Epitaxial layer} \label{fig:clsChargeMap_epi} \end{subfigure}% \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[width=\columnwidth]{Figures/charge/inPxl_charge_cz_202202.pdf} \caption{High-resistivity Czochralski} \label{fig:clsChargeMap_cz} \end{subfigure}% \caption{In-pixel representation of the seed pixel charge at the minimum detection threshold for a sensor with (a) epitaxial layer and (b) high-resistivity Czochralski substrate. Both sensors have a segmented n-implant and are biased at -6\,V/-6\,V at p-wells/substrate.} \label{fig:chargeMap} \end{figure*} The in-pixel representation of the cluster seed charge is presented in Fig.~\ref{fig:chargeMap} for a sensor with epitaxial layer and high-resistivity Czochralski substrate. The seed pixel charge exhibits a maximum in the pixel centre and decreases towards the pixel corners due to increased charge sharing. The larger signal of the sensor with high-resistivity Czochralski substrate is distinguishable in the entire pixel cell. 
\subsection[Cluster Size]{Cluster Size} \begin{figure*}[bpt] \centering \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[width=\columnwidth]{Figures/charge_sharing/sizeVsThd_contThinned_202204.pdf} \caption{Continuous n-implant} \label{fig:meanSizeThd_modThinning} \end{subfigure}% \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[width=\columnwidth]{Figures/charge_sharing/sizeVsThd_thinnedwCz_202204.pdf} \caption{Segmented n-implant} \label{fig:meanSizeThd_gapThinning} \end{subfigure}% \caption{Mean cluster size as a function of the detection threshold using sensors with different sensor thicknesses and wafer materials for the pixel flavour with (a) continuous and (b) segmented n-implant using a bias voltage of -6\,V/-6\,V at the p-well/substrate.} \label{fig:meanSizeThd} \end{figure*} \begin{figure*}[bpt] \centering \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[width=\columnwidth]{Figures/charge_sharing/clsSizeMap_epi_rgb_202204.pdf} \caption{Epitaxial layer} \label{fig:clsSizeMap_epi} \end{subfigure}% \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[width=\columnwidth]{Figures/charge_sharing/clsSizeMap_cz_rgb_202204.pdf} \caption{Czochralski substrate} \label{fig:clsSizeMap_cz} \end{subfigure}% \caption{In-pixel representation of the total cluster size at the minimum operation threshold for a sensor with (a) epitaxial layer and (b) Czochralski substrate. Both sensors have a segmented n-implant and are biased at -6\,V/-6\,V at p-wells/substrate.} \label{fig:sizeMap} \end{figure*} The cluster size is sensitive to both the total amount of induced charge and its distribution among adjacent pixel cells, making it a useful observable for comparing different sensors. The mean cluster size for the two pixel flavours as a function of the detection threshold is presented in Fig.~\ref{fig:meanSizeThd} and the mean size at the minimum detection threshold is listed in Table~\ref{tab:performance_all}. 
The shaded band represents the uncertainties discussed in the previous section. For both pixel flavours, the mean cluster size is the same within the uncertainties for sensor thicknesses between \SI{50}{\micro \meter} and \SI{300}{\micro \meter}. The results imply that only the fraction of the low-resistivity substrate from which charge carriers do not contribute to the measured signal is removed. Thus, thinning the sensor to \SI{50}{\micro \meter} still leaves the active sensor material intact. On the other hand, the mean cluster size for the \SI{40}{\micro \meter} thick sensor is reduced by approximately \SI{10}{\percent} at the minimum operation threshold. As the \SI{40}{\micro \meter} thick sensor consists of approximately \SI{10}{\micro \meter} of metal layers and \SI{30}{\micro \meter} of sensor material, it can be assumed that the substrate is fully removed. Damage to the epitaxial layer by the thinning procedure~\cite{mizushima2014impact} is expected to affect the signal as well, resulting in a smaller cluster size. The decrease in mean cluster size for the \SI{40}{\micro \meter} sensors is more pronounced for the pixel flavour with segmented n-implant (cf. Fig.~\ref{fig:meanSizeThd_gapThinning}), which is consistent with the reduced charge sharing expected for this flavour. A high degree of charge sharing leads to the distribution of the total signal to several adjacent pixel cells, thus reducing the amount of charge collected per pixel. In particular, charge carriers generated at the lower border of the active sensor region are subject to intense charge sharing, since their longer propagation path allows for a stronger contribution of diffusion processes. If the induced signal on a given pixel is not enough to surpass the threshold, the charge carriers that propagated to this cell are effectively lost. 
Therefore, this phenomenon is particularly important for the flavour with continuous n-implant and affects mostly charge carriers from the lower part of the active sensor volume. A removal of this volume is thus less severe, since a fraction of the charge carriers is lost anyway due to sub-threshold effects. The stronger concentration of charge carriers for the pixel flavour with segmented n-implant mitigates the charge-sharing-induced signal loss and this flavour is consequently more sensitive to the thinning. The mean cluster size for a \SI{100}{\micro \meter} thick sensor fabricated on a Czochralski substrate is shown in Fig.~\ref{fig:meanSizeThd_gapThinning}. At the minimum threshold, the mean cluster size is increased by approximately \SI{30}{\percent} compared to sensors with epitaxial layer. The in-pixel representation of the cluster size allows for a detailed investigation of the cluster size difference, as presented in Fig.~\ref{fig:sizeMap}. In this representation, the cluster size is depicted as a function of the particle incident position within the pixel cell by folding data from a full CLICTD pixel matrix into a single cell. The largest clusters originate from the pixel corners owing to geometrical effects and the low electric field in this region, which results in a high contribution from charge carrier diffusion. For the sensor fabricated on Czochralski substrate, the cluster size is larger regardless of the incident position. Especially in the pixel centre, the map exhibits mean cluster size values well above one, even though the lowest degree of charge sharing is expected from this region. The results are thus indicative of an overall higher signal resulting from a larger active sensor volume. The depletion region within the Czochralski substrate is not expected to extend to the sensor backside at a bias voltage of -6\,V/-6\,V, which still limits the active sensor depth. 
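The in-pixel maps discussed above are obtained by folding track-intercept positions from the full pixel matrix into a single cell via the modulo of the pixel pitch and averaging the observable per position bin. A minimal sketch (pitch and binning are illustrative values, not taken from this paper):

```python
import numpy as np

PITCH_X, PITCH_Y = 30.0, 30.0   # illustrative pixel pitch [um]
NBINS = 15

def in_pixel_map(x_um, y_um, values):
    """Fold global track-intercept positions into a single pixel cell and
    return the per-bin mean of `values` (e.g. cluster size or seed charge)."""
    fx = np.mod(x_um, PITCH_X)   # position within the pixel cell
    fy = np.mod(y_um, PITCH_Y)
    sums, _, _ = np.histogram2d(fx, fy, bins=NBINS,
                                range=[[0, PITCH_X], [0, PITCH_Y]],
                                weights=values)
    counts, _, _ = np.histogram2d(fx, fy, bins=NBINS,
                                  range=[[0, PITCH_X], [0, PITCH_Y]])
    with np.errstate(invalid="ignore"):
        return sums / counts     # NaN where no entries fall in a bin

# Toy usage: uniform illumination over a 100x100 pixel matrix.
rng = np.random.default_rng(0)
x = rng.uniform(0, 100 * PITCH_X, 10000)
y = rng.uniform(0, 100 * PITCH_Y, 10000)
size = np.ones(10000)            # placeholder per-track cluster sizes
m = in_pixel_map(x, y, size)
```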
An increase in substrate bias voltage increases the depletion depth and therefore also the active depth, as illustrated in Fig.~\ref{fig:sizeVsBias_gap_202111}, where the mean cluster size as a function of the substrate bias voltage is displayed for the pixel flavour with segmented n-implant. The p-well voltage is fixed to -6\,V and a higher detection threshold of 348\,e is applied to the sensor due to the different front-end operation settings as explained before. \begin{figure}[tbp] \centering \includegraphics[width=\columnwidth]{Figures/charge_sharing/sizeVsBias_gap_202203.pdf}% \caption{Mean cluster size as a function of the substrate bias voltage at a threshold of 348\,e for a Czochralski sensor with segmented n-implant. The p-well voltage is fixed to -6\,V.} \label{fig:sizeVsBias_gap_202111} \end{figure} \subsection{Sensor Design} The CLICTD sensor is fabricated in a modified 180\,nm CMOS imaging process~\cite{tj-modified} using two different pixel flavours, as shown schematically in Fig.~\ref{fig:clictdFlavours}. The sensor is characterised by a small n-type collection electrode on top of a \SI{30}{\micro \meter} thin high-resistivity (few k$\Omega$cm) epitaxial layer, which is grown on a low-resistivity ($\sim 10^{-2}\SI{}{\ohm \centi \metre}$) p-type substrate. The on-channel front-end electronics are shielded by p-wells at the pixel edges. A low-dose n-type implant below the p-wells allows for full lateral depletion of the epitaxial layer~\cite{tj-modified}. In the second pixel flavour, the n-implant is segmented at the pixel edges, which causes an increase in the lateral electric field. As a consequence, an accelerated charge collection and reduced charge sharing are achieved with this design. In the CLICTD sensor, the segmentation is only introduced in the column direction. In the row direction, a high degree of charge sharing is desired in order to improve the spatial resolution. 
A reverse bias voltage is applied to nodes in the p-wells and the substrate. The bias voltage at the p-wells is limited to -6\,V to prevent breakdown of the on-channel NMOS transistors~\cite{thesis-jacobus}. \subsection{Sensor Material} CLICTD sensors with different thicknesses were produced using backside grinding. The total device thickness ranges from \SI{40}{\micro \meter} to \SI{300}{\micro \meter}, including a metal stack of approximately \SI{10}{\micro \meter} on top of the sensor~\cite{prabket2019resistivity}. The size of the active sensor volume is limited by the thickness of the \SI{30}{\micro \meter} epitaxial layer. To increase the active volume, an alternative substrate material is studied, which consists of high-resistivity (few \SI{}{\kilo \ohm \cm}) p-type Czochralski silicon~\cite{pernegger2021radiation}. The implants are introduced directly on the Czochralski substrate and no additional epitaxial layer is grown on top. The advantages of the Czochralski substrate are twofold: Firstly, the isolation between p-well and substrate bias nodes is improved, allowing for a larger difference between the two voltages. Secondly, the depletion can evolve further in depth owing to the larger size of the high-resistivity volume. The benefits of the larger active volume depend on the aspect ratio of the pixel cell and the target applications of the sensor. For instance, sensors with a comparably large pixel pitch that aim for a good spatial resolution profit from a larger active depth by tuning the depth such that an optimal degree of charge sharing and an enhanced signal are achieved. Sensors with a pixel pitch considerably smaller than the active depth are less suited for the Czochralski substrate, since the cluster size increases considerably, which typically does not lead to additional improvement in performance. 
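The depth argument can be made quantitative with the standard one-sided abrupt-junction approximation relating depletion depth to bulk resistivity and reverse bias. This is a textbook estimate, not the sensor simulation used for CLICTD: the built-in voltage is neglected and a room-temperature hole mobility is assumed.

```python
import math

Q = 1.602e-19                 # elementary charge [C]
EPS_SI = 11.9 * 8.854e-14     # permittivity of silicon [F/cm]
MU_P = 450.0                  # hole mobility [cm^2/Vs], rough value

def depletion_depth_um(rho_ohm_cm, v_bias):
    """One-sided abrupt-junction depletion depth [um] in a p-type bulk of
    resistivity rho [Ohm cm] at reverse bias v_bias [V], built-in voltage
    neglected: W = sqrt(2*eps*V / (q*N_A)) with N_A = 1/(q*mu_p*rho)."""
    n_a = 1.0 / (Q * MU_P * rho_ohm_cm)           # acceptor density [cm^-3]
    w_cm = math.sqrt(2.0 * EPS_SI * v_bias / (Q * n_a))
    return w_cm * 1e4

# Few-kOhm*cm Czochralski bulk: depletion grows with the square root of bias.
for v in (6.0, 20.0):
    print(f"{v:4.0f} V -> {depletion_depth_um(2e3, v):5.1f} um")
```

With these assumptions a 2\,k$\Omega$cm bulk depletes a few tens of microns at -6\,V, consistent with the statement that the depletion does not reach the backside of the \SI{100}{\micro \meter} Czochralski sensor at that bias.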
It should be noted that the availability of high-resistivity Czochralski substrates for silicon sensor fabrication depends on foundry specifications, since it is not a standard material for the investigated CMOS process. \subsection{Analogue and Digital Front-End} Each sub-pixel has an analogue front-end that consists of a voltage amplifier connected to a discriminator, where the input pulses are compared to an adjustable detection threshold. Effective threshold variations are corrected using a 3-bit threshold-tuning DAC. The discriminator outputs of the eight sub-pixels in a detection channel are combined with a logical \textit{OR} in the on-channel digital front-end. The binary hit pattern of the sub-pixels is recorded as well as the 8-bit Time-of-Arrival (ToA) and the 5-bit Time-over-Threshold (ToT) for time and energy measurements, respectively. As a consequence of combining the sub-pixel discriminator outputs, the ToA is set by the earliest sub-pixel timestamp and the ToT is determined by the number of clock cycles in which at least one sub-pixel is above the detection threshold. No conversion from ToT to physical units is applied for the measurements shown in this paper, since the conversion was found to have a limited precision owing to non-linearities in the analogue front-end~\cite{clictdTestbeam}. \subsection{Sensor Operation} The front-end and operation settings were optimised in laboratory studies detailed elsewhere~\cite{clictd_design_characterization, clictdTestbeam}. Most importantly, for each sensor a minimum operation threshold is defined as the lowest possible threshold at which a noise-free operation ($< 1 \times 10^{-3}$\,hits/s for the full pixel matrix) is achievable with up to 10 noisy pixels masked, which is less than one per mille of the entire matrix. The sensors presented in this paper are compared at their respective minimum operation threshold. 
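The minimum-operation-threshold definition above amounts to a scan for the lowest threshold whose matrix-wide noise rate stays below $1 \times 10^{-3}$\,hits/s. A sketch with a hypothetical threshold-to-rate mapping (the real procedure additionally masks up to 10 noisy pixels before evaluating the rate):

```python
def minimum_operation_threshold(noise_rate, thresholds, max_rate=1e-3):
    """Return the lowest threshold at which the matrix-wide noise rate
    (hits/s, after masking noisy pixels) stays below `max_rate`.
    `noise_rate` is a measured threshold -> rate mapping (hypothetical here)."""
    for thd in sorted(thresholds):
        if noise_rate(thd) < max_rate:
            return thd
    raise ValueError("no noise-free threshold found in scan range")

# Toy noise model: the rate falls steeply with threshold (illustrative only).
rates = {100: 5.0, 120: 0.2, 140: 5e-4, 160: 1e-5}
thd_min = minimum_operation_threshold(rates.get, rates)
print(thd_min)  # -> 140
```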
It should be noted that measurements below the minimum operation threshold are nevertheless feasible, since a small noise contribution can be tolerated. The difference between the substrate and p-well bias voltages is limited by the punch-through between the two nodes. Whereas this requirement constrains the difference to a few volts for sensors with epitaxial layer, for the Czochralski sensors, the difference can easily exceed tens of volts. For the sensors with epitaxial layer, a high substrate bias voltage has a negligible impact, since the depletion depth is limited by the thickness of the epitaxial layer itself. Therefore, the bias voltage is fixed to -6\,V/-6\,V at the p-well/substrate nodes for measurements presented in the following sections. For the Czochralski sensors, the depletion region can evolve further into the substrate, thus justifying measurements with increased substrate bias voltage. \paragraph{Front-End Optimisation for Large Substrate Voltages} The CLICTD front-end is optimised for sensors with a \SI{30}{\micro \meter} epitaxial layer. Sensors fabricated on Czochralski substrates are subject to a higher sensor leakage current, if the difference between p-well and substrate voltage exceeds 5\,V. The increased current can saturate the leakage current compensation circuit, which renders parts of the pixel matrix insensitive to incoming particles. To counteract the saturation, the front-end settings are adapted such that a faster return to baseline at the input node is achieved. With these settings, the sensor can be operated up to -20\,V substrate and -6\,V p-well bias voltage before any saturation effects set in. However, the adaptations reduce the signal gain, which leads to coarser steps in the threshold settings and a larger minimum operation threshold, since the front-end is operated in conditions it was not designed for. 
The higher thresholds have important implications for the sensor performance, as presented in Section~\ref{sec:performance}. \subsection{Hit-Detection Efficiency} \begin{figure*}[bpt] \centering \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[width=\columnwidth]{Figures/efficiency/20220421_effThdScan_cont.pdf} \caption{Continuous n-implant} \label{fig:effVsThd_modThinning} \end{subfigure}% \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[width=\columnwidth]{Figures/efficiency/effVsThd_thinnedWCz_202204.pdf} \caption{Segmented n-implant} \label{fig:effVsThd_gapThinning} \end{subfigure}% \caption{Hit-detection efficiency as a function of the detection threshold using sensors with different sensor thicknesses and wafer materials for the pixel flavour with (a) continuous and (b) segmented n-implant using a bias voltage of -6\,V/-6\,V at the p-well/substrate.} \label{fig:effVsThd} \end{figure*} The hit-detection efficiency is closely related to the maximum single-pixel charge (\textit{seed charge}) in a cluster and is thus correlated with the total signal and the degree of charge sharing. The efficiency is determined as a function of the detection threshold as presented in Fig.~\ref{fig:effVsThd} for both pixel flavours. While efficiencies well above \SI{99}{\percent} are achieved at low detection thresholds, the efficiency deteriorates for values greater than 500\,e, since all single-pixel signals in a cluster can fall below the detection threshold. The degradation is stronger for the pixel flavour with continuous n-implant due to the enhanced charge sharing, which leads to a smaller charge per pixel, as discussed in detail in~\cite{clictdTestbeam}. For high thresholds, inefficient regions start to form at the pixel borders, as illustrated in Fig.~\ref{fig:effMap_contN}, where the in-pixel efficiency is shown at a threshold of 1950\,e for a \SI{300}{\micro \meter} thick sensor with segmented n-implant and epitaxial layer. 
As the diffusion of charge carriers to neighbouring pixels is enhanced at the edges, a smaller seed signal and consequently a lower efficiency are associated with these regions. For the \SI{40}{\micro \meter} thick sensors, the high-efficiency plateau is noticeably reduced compared to the thicker sensors. In agreement with the smaller cluster size observed in the previous section, the degraded efficiency indicates an overall reduction in signal compared to the thicker sensors. These results support the assumption of a smaller active depth due to the removal of sensitive sensor volume. The degradation in efficiency is less severe for the pixel flavour with continuous n-implant as discussed above. A slight trend towards smaller efficiencies is also visible for \SI{50}{\micro \meter} thick sensors, although it is within the systematic uncertainties. The results indicate that parts of the active material are potentially already damaged in the \SI{50}{\micro \meter} thick sensors. The sensor fabricated on a Czochralski substrate exhibits a larger efficiency at high detection thresholds compared to sensors with epitaxial layer as a direct consequence of the higher signal. The in-pixel representation of the efficiency is depicted in Fig.~\ref{fig:effMap_cz} at a detection threshold of approximately 1950\,e and confirms that the efficiency is larger especially in the pixel edges, where the highest degree of charge sharing is expected. The impact of the substrate voltage for samples with Czochralski substrate is illustrated in Fig.~\ref{fig:effVsSubstrate_gapN}, where the detection threshold corresponding to an efficiency of \SI{80}{\percent} is presented as a function of the substrate bias voltage. The threshold increases by about \SI{30}{\percent} from -6\,V to -20\,V. At -20\,V, the value is about \SI{85}{\percent} higher than the corresponding threshold for samples with epitaxial layer, which evaluates to approximately 1400\,e. 
\begin{figure}[tbp] \centering \includegraphics[width=\columnwidth]{Figures/efficiency/202204_effOver80VsBias_2.pdf}% \caption{Detection threshold corresponding to an efficiency of \SI{80}{\percent} as a function of the substrate bias voltage for a Czochralski sample with segmented n-implant. The p-well voltage is fixed to -6\,V.} \label{fig:effVsSubstrate_gapN} \end{figure} \begin{figure*}[bpt] \centering \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[width=\columnwidth]{Figures/efficiency/effMap_epi_rgb_202204.pdf} \caption{Epitaxial layer} \label{fig:effMap_contN} \end{subfigure}% \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[width=\columnwidth]{Figures/efficiency/effMap_cz_rgb_202204.pdf} \caption{Czochralski substrate} \label{fig:effMap_cz} \end{subfigure}% \caption{In-pixel representation of the hit-detection efficiency at a threshold of 1950\,e for a sensor with (a) epitaxial layer and (b) Czochralski substrate. Both sensors have a segmented n-implant and are biased at -6\,V/-6\,V at p-wells/substrate.} \label{fig:effMap} \end{figure*} \section{Introduction} \label{sec:introduction} \input{introduction} \section{The CLICTD Sensor} \label{sec:clictd} \input{clictd} \section{Test-Beam and Analysis Setup} \label{sec:setup} \input{setup} \section{Performance for Perpendicular Particle Tracks} \label{sec:performance} \input{charge_sharing} \input{efficiency} \input{position_resolution} \input{time_resolution} \section{Studies with Inclined Particle Tracks} \input{performance_inclined} \subsection{Determination of Active Sensor Depth} \input{active_depth} \section{Summary \& Outlook} \input{summary} \section*{Acknowledgements} \label{sec:acknowledgements} \input{acknowledgements} \section*{CRediT authorship statement} \input{credit_statement} \subsection{Performance} In many HEP applications, particles enter the sensor under an oblique angle, due to e.g. 
mechanical rotation of detector modules or helical particle trajectories in a magnetic field. Therefore, the sensor performance for inclined particle tracks merits detailed investigation. Here, a \SI{300}{\micro \meter} thick sensor with epitaxial layer and continuous n-implant is used to exemplify the effects of the inclination angle on the sensor performance. \paragraph{Cluster Size} \begin{figure*}[bpt] \centering \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[width=\columnwidth]{Figures/rotation/cSVsRotAngle_row_202202.pdf} \caption{Cluster size in row direction} \label{fig:meanSizeRowThd_rotation} \end{subfigure}% \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[width=\columnwidth]{Figures/rotation/cSVsRotAngle_col_202202.pdf} \caption{Cluster size in column direction} \label{fig:meanSizeColThd_rotation} \end{subfigure}% \caption{Cluster size as a function of the detection threshold for different rotation angles for a \SI{300}{\micro \meter} thick sensor with epitaxial layer and continuous n-implant tilted in row direction. A bias voltage of -6\,V/-6\,V is applied to the p-well/substrate.} \label{fig:meanSizeThd_rotation} \end{figure*} The amount of active silicon traversed by particles is varied by inclining the sensor relative to the beam axis. For high inclination angles, particle tracks cross several adjacent pixel cells, giving rise to a larger cluster size as illustrated in Fig.~\ref{fig:meanSizeThd_rotation} for a sensor tilted in row direction. The mean cluster size at the minimum detection threshold is listed in Table~\ref{tab:inclined_sensor_performance}. A considerable increase in cluster size in row direction is distinguishable principally due to the geometrical effect of charge deposition in several pixel cells. Between $0^\circ$ and $70^\circ$, the increase is as high as \SI{250}{\percent} at the minimum operation threshold. 
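The geometric part of this increase can be estimated by projecting the track path onto the tilt direction: the number of traversed pixels grows roughly as $1 + t\tan\theta/p$ for an active depth $t$ and pitch $p$. A sketch using the 30 micron epitaxial-layer thickness quoted earlier and an assumed pitch of 30 microns (illustrative; the pitch value is not taken from the text):

```python
import math

def geometric_cluster_size(active_depth_um, pitch_um, angle_deg):
    """Purely geometric estimate of the cluster size along the tilt
    direction: projected track length over the pitch, plus one."""
    return 1.0 + active_depth_um * math.tan(math.radians(angle_deg)) / pitch_um

# 30 um active depth (epitaxial layer); 30 um pitch is an assumed value.
for angle in (0, 50, 70):
    print(f"{angle:2d} deg -> {geometric_cluster_size(30.0, 30.0, angle):.2f}")
```

Diffusion and threshold effects modify the measured values, but the trend with angle follows this geometric scaling.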
The simultaneous increase in cluster size in column direction is consistent with an overall increase in the number of liberated charge carriers, whose undirected diffusion also affects charge sharing in column direction. At the minimum operation threshold, the mean cluster size in column direction is approximately \SI{6}{\percent} larger at $70^\circ$ compared to perpendicular incidence. \paragraph{Efficiency} With increasing inclination angle, the total energy deposition in the sensor increases due to the longer particle path in the active sensor region. As a result, a higher signal is detected, which leads to an appreciable increase in efficiency at high thresholds, as depicted in Fig.~\ref{fig:effVsThd_rot_202110}, where the efficiency as a function of the detection threshold is shown for three different rotation angles. At a threshold of 2300\,e, the efficiency increases from about \SI{38}{\percent} at 0$^\circ$ to \SI{70}{\percent} at 70$^\circ$. \begin{figure}[tbp] \centering \includegraphics[width=\columnwidth]{Figures/rotation/effVsRotAngle_202202.pdf}% \caption{Detection efficiency as a function of the detection threshold for different rotation angles for a \SI{300}{\micro \meter} thick sensor with epitaxial layer and continuous n-implant. The sensor was tilted in row direction and the p-well/substrate was biased at -6\,V/-6\,V.} \label{fig:effVsThd_rot_202110} \end{figure} \paragraph{Spatial Resolution} The spatial resolution in row direction improves with increasing rotation angle until approximately $40^\circ$, where it evaluates to $3.6 \pm \SI{0.2}{\micro \meter}$ after $\eta$-correction, as illustrated in Fig.~\ref{fig:spatialResVsAngle_202110}. The $\eta$-correction allows for an improvement in spatial resolution for rotation angles below $40^\circ$. At higher angles, the increasing fraction of clusters with size $\geq 3$ complicates the application of the reconstruction algorithms and no improvement with respect to the centre-of-gravity algorithm is achievable. 
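The $\eta$-correction referenced above maps the measured charge-sharing fraction of two-pixel clusters onto a corrected intra-pixel position via the normalised cumulative $\eta$ distribution (standard $\eta$-formalism). A minimal sketch, with illustrative binning and toy data:

```python
import numpy as np

def eta_correction(eta_samples, pitch_um, nbins=100):
    """Build an eta-correction from data: eta = Q_right/(Q_left + Q_right)
    for two-pixel clusters. The corrected intra-pixel position is the pitch
    times the normalised cumulative eta distribution."""
    hist, _ = np.histogram(eta_samples, bins=nbins, range=(0.0, 1.0))
    cdf = np.cumsum(hist) / hist.sum()

    def position(eta):
        """Map a measured eta value to a corrected position [um]."""
        i = min(int(eta * nbins), nbins - 1)
        return pitch_um * cdf[i]

    return position

# Toy data: non-linear charge sharing concentrates eta near 0 and 1.
rng = np.random.default_rng(1)
eta = rng.beta(0.5, 0.5, 50000)
pos = eta_correction(eta, pitch_um=30.0)
```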
\begin{figure}[tbp] \centering \includegraphics[width=\columnwidth]{Figures/rotation/sRVsRotAngle_etaCog_202202.pdf}% \caption{Spatial resolution as a function of the rotation angle using a charge-weighted centre-of-gravity algorithm (CoG) and an $\eta$-correction (ETA) to reconstruct the cluster position on the DUT. A bias voltage of -6\,V/-6\,V was applied to the p-well/substrate.} \label{fig:spatialResVsAngle_202110} \end{figure} \begin{table}[bpt] \centering \caption{Cluster size (CS) for different rotation angles (RA) using a sensor with epitaxial layer and continuous n-implant operated at a threshold of approximately 150\,e.} \label{tab:inclined_sensor_performance} \begin{tabular}{ccc} \toprule \textbf{RA [$^\circ$]} & \textbf{CS (col.)} & \textbf{CS (row)} \\ \midrule 0 & $1.46 \pm 0.01$ & $1.38 \pm 0.01$ \\ 50 & $2.19 \pm 0.01$ & $1.41 \pm 0.01$ \\ 70 & $3.78 \pm 0.01$ & $1.46 \pm 0.01$ \\ \bottomrule \end{tabular} \end{table} \subsection[Spatial Resolution]{Spatial Resolution} \begin{figure*}[bpt] \centering \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[width=\columnwidth]{Figures/position_resolution/20220421_spatResThdScan_cont.pdf} \caption{Continuous n-implant} \label{fig:sRThd_contThinning} \end{subfigure}% \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[width=\columnwidth]{Figures/position_resolution/sRVsThd_thinnedWCz_202202_2.pdf} \caption{Segmented n-implant} \label{fig:sRThd_gapThinning} \end{subfigure}% \caption{Spatial resolution as a function of the detection threshold using sensors with different thicknesses and wafer materials for the pixel flavour with (a) continuous and (b) segmented n-implant using a bias voltage of -6\,V/-6\,V at the p-well/substrate.} \label{fig:sRThd} \end{figure*} \begin{figure}[tbp] \centering \includegraphics[width=\columnwidth]{Figures/position_resolution/sRVsThd_thinnedWCz_202203.pdf}% \caption{Spatial resolution as a function of the substrate bias voltage at a threshold 
of 348\,e for a Czochralski sample with segmented n-implant. The p-well voltage is fixed to -6\,V.} \label{fig:sRThd_CzBias_gapN} \end{figure} The spatial resolution in row direction as a function of the detection threshold is presented in Fig.~\ref{fig:sRThd} for both pixel flavours and the results at the minimum threshold are listed in Table~\ref{tab:performance_all}. For thresholds above 1200\,e, no $\eta$-correction is applied, since the application of the algorithm becomes challenging due to the small number of two-pixel clusters. As the modifications to the n-implant are not applied in row direction, the charge sharing behaviour is similar for both pixel flavours and the spatial resolution is thus in good agreement within the uncertainties. Although the resolution degrades with increasing threshold due to the decrease in cluster size, the binary resolution of \SI{8.7}{\micro \meter} is never exceeded. For high threshold values, an improvement of the spatial resolution is caused by the formation of inefficient regions at the pixel edges, as displayed in Fig.~\ref{fig:effMap_contN}. These inefficiencies lead to an effectively smaller pixel pitch that results in an artificial improvement in spatial resolution. Within the uncertainties, the spatial resolutions for the $\geq \SI{50}{\micro \meter}$ thick sensors are in good agreement owing to the similar cluster size at a given threshold. The spatial resolution of the \SI{40}{\micro \meter} thick sensor degrades for thresholds smaller than $1000$\,e owing to the smaller cluster size at a given threshold (cf. Fig.~\ref{fig:meanSizeThd}). For the flavour with continuous n-implant, the degradation is as high as \SI{7}{\percent} at the minimum detection threshold. The difference vanishes at high thresholds, where single-pixel clusters dominate for all sensor thicknesses. The higher signal from the Czochralski sensors leads to a larger cluster size and consequently an improved spatial resolution. 
The difference is particularly noticeable at small threshold values in accordance with the larger difference in cluster size that was presented in Fig.~\ref{fig:meanSizeThd_gapThinning}. At the minimum operation threshold listed in Table~\ref{tab:performance_all}, the resolution improves by about \SI{15}{\percent}. At high thresholds, the mean cluster size converges to one, resulting in an identical resolution within the uncertainties. With increasing substrate bias voltage, the depleted region expands, evoking a higher signal that leads to a larger cluster size and consequently an improved spatial resolution, as illustrated in Fig.~\ref{fig:sRThd_CzBias_gapN} for a Czochralski sensor with segmented n-implant at a comparably high threshold of 348\,e. Between -6\,V and -20\,V, the spatial resolution improves by approximately \SI{13}{\percent}. While the comparably high threshold limits the absolute performance improvement, the potential of the Czochralski substrate is still distinguishable. \subsection{Reconstruction and Analysis} The software framework Corryvreckan~\cite{corry_paper, corry_manual} is used to perform offline reconstruction and analysis of the test-beam data. Individual events are defined by CLICTD readout frames. The Timepix3 hit timestamp and the TLU trigger timestamp associated with MIMOSA-26 hits determine their allocation to a specific event by requiring that the timestamp is within a CLICTD frame. The subsequent analysis is performed on an event-by-event basis. For each telescope plane and the DUT, adjacent pixel hits are combined into clusters and the cluster position is calculated by a ToT-weighted centre-of-gravity algorithm. For the CLICTD sensor, the cluster position in row direction is corrected using the $\eta$-formalism to take non-linear charge sharing between pixel cells into account~\cite{Akiba:2011vn, clictdTestbeam}. In addition, \textit{split clusters} are considered for measurements with a rotated DUT, i.e., 
a gap of one pixel is permitted between pixel hits in a cluster. Track candidates are formed from clusters on each of the seven telescope planes. For track fitting the General Broken Lines (GBL) formalism~\cite{Blobel:2006yi} is used to account for multiple scattering in the material. The telescope alignment is performed by minimising the track $\chi^2$ distribution. Tracks with a $\chi^2$ per degree of freedom larger than three are discarded. The telescope track resolution at the position of the DUT is \SI{2.4}{\micro \meter} for the close telescope plane spacing and \SI{5.6}{\micro \meter} for the wide rotation configuration, as estimated from analytical calculations based on~\cite{resolutionSimulator, Jansen:2016bkd}. A reconstructed track is associated with a CLICTD cluster by requiring a spatial distance of less than 1.5 pixel pitches between the global track intercept position on the DUT and the reconstructed cluster position as well as a track timestamp within the same CLICTD frame as the cluster. It has been verified that the spatial cut is sufficiently large even for the larger track resolution at the position of the DUT in the wide telescope-plane configuration. Clusters adjacent to the edge of the pixel matrix are rejected to exclude edge effects. The following observables are considered to characterise the DUT: \paragraph{Cluster size} The cluster size is defined as the number of pixels in a given cluster. Correspondingly, the cluster size in column/row direction is given by the size of the cluster projected onto the respective axis. The systematic uncertainty on the cluster size arises from uncertainties in the threshold calibration, as detailed in~\cite{clictdTestbeam}. At the minimum operation threshold, the systematic uncertainty evaluates to $\pm 0.01$ for the mean cluster size and the statistical uncertainty is of the order of $10^{-4}$. 
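The ToT-weighted centre-of-gravity used for the cluster positions can be sketched as follows (the hit-tuple format is an assumed toy representation, not Corryvreckan's data model):

```python
def centre_of_gravity(hits):
    """ToT-weighted centre-of-gravity of a cluster.
    `hits` is a list of (column, row, tot) tuples for adjacent pixels."""
    total = sum(tot for _, _, tot in hits)
    col = sum(c * tot for c, _, tot in hits) / total
    row = sum(r * tot for _, r, tot in hits) / total
    return col, row

# A two-pixel cluster sharing charge 3:1 along the row direction.
print(centre_of_gravity([(10, 5, 30), (10, 6, 10)]))  # -> (10.0, 5.25)
```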
\paragraph{Hit-detection efficiency} The hit-detection efficiency is calculated as the number of associated tracks divided by the total number of tracks. The considered tracks are required to pass through the acceptance region of the DUT, excluding one column/row at the pixel edge as well as masked pixels and their direct neighbours. The statistical uncertainty is calculated using a Clopper-Pearson interval of one sigma~\cite{clopper_pearson} and the systematic uncertainty arises from the threshold calibration as mentioned above. \paragraph{Spatial resolution} The unbiased spatial residuals are calculated as the difference between the reconstructed cluster position and the track intercept on the DUT. The RMS of the central $3\,\sigma$ of the distribution is extracted and the spatial telescope track resolution of \SI{2.4}{\micro \meter} for the close and \SI{5.6}{\micro \meter} for the wide telescope configuration is quadratically subtracted, which yields the spatial resolution of the DUT. At the minimum operation threshold, the statistical uncertainty on the spatial resolution is of the order of \SI{e-2}{\micro \meter}. The systematic uncertainties result from uncertainties in the telescope single-plane resolution given in~\cite{Jansen:2016bkd}. In addition, the plane positions in z-direction are shifted independently by $\pm \SI{1}{\milli \meter}$ and the calculation of the track resolution at the position of the DUT is repeated. Propagating the deviations to the spatial resolution yields an uncertainty of $\pm \SI{0.1}{\micro \meter}$. The propagated threshold uncertainty evaluates to $\pm \SI{0.1}{\micro \meter}$ as well and the total systematic uncertainty is given by the quadratic sum of the two. \paragraph{Time resolution} Similar to the spatial residuals, the time residuals are defined as the difference between the DUT timestamp and the track timestamp. Signal-dependent time-walk effects are corrected by exploiting the ToT information.
The mean time difference between the DUT and the track timestamp is subtracted for each ToT bin separately. After correction, the RMS of the central $3\,\sigma$ of the time residuals distribution is calculated and the track time resolution of 1.1\,ns~\cite{Pitters:2019yzg} is quadratically subtracted. The statistical uncertainties are of the order of 0.01\,ns. The systematic uncertainties are composed of the threshold uncertainty evaluating to $\pm 0.1$\,ns and variations between individual sub-pixels. To quantify the latter, the analysis is repeated for every sub-pixel in a detection channel individually and the spread of the time resolution is used to define the systematic uncertainty, which yields $\pm 0.1$\,ns at the minimum operation threshold. \paragraph{Studies with inclined particle tracks} The inclination angle of the DUT with respect to the beam is taken from the alignment procedure. The angle agrees with the nominal rotation angle set for the rotation stage apart from a constant offset. It was confirmed that the alignment has converged by manually modifying the plane orientation by $\pm 0.5^\circ$ and repeating the alignment. A deviation of less than $\pm 0.01^\circ$ is found with respect to the initial alignment.
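The per-ToT-bin time-walk correction described above can be sketched as follows (a simplified stand-in assuming uniform ToT binning; names are hypothetical):

```python
import numpy as np

def timewalk_correct(tot, dt, n_bins=32):
    """Subtract the mean time residual in each ToT bin (time-walk correction)."""
    tot = np.asarray(tot, dtype=float)
    dt = np.asarray(dt, dtype=float)
    edges = np.linspace(tot.min(), tot.max() + 1, n_bins + 1)
    idx = np.digitize(tot, edges) - 1
    corrected = np.empty_like(dt)
    for b in np.unique(idx):
        sel = idx == b
        corrected[sel] = dt[sel] - dt[sel].mean()  # remove the bin-wise offset
    return corrected
```

After the correction, the signal-dependent offset of each ToT population is removed and only the residual jitter remains.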
\subsection{Time Resolution} \begin{figure*}[bpt] \centering \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[width=\columnwidth]{Figures/timing/20220421_timeVsThd_cont.pdf} \caption{Continuous n-implant} \label{fig:tS_modThinning_202112} \end{subfigure}% \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[width=\columnwidth]{Figures/timing/timeVsThd_thinnedWCz_202202.pdf} \caption{Segmented n-implant} \label{fig:timeVsThd_thinnedWCz} \end{subfigure}% \caption{Time resolution as a function of the detection threshold using sensors with different thicknesses and wafer materials for the pixel flavour with (a) continuous and (b) segmented n-implant using a bias voltage of -6\,V/-6\,V at the p-well/substrate.} \label{fig:imeVsThd} \end{figure*} The time resolution after time-walk correction is depicted in Fig.~\ref{fig:imeVsThd} as a function of the detection threshold for both pixel flavours. The results at the minimum operation threshold are listed in Table~\ref{tab:performance_all}. With increasing threshold, the time resolution degrades owing to a stronger contribution of amplitude noise causing a time jitter. The jitter is inversely proportional to the slope of the signal at the threshold-crossing point, which flattens towards the peak of the signal. It has been shown that the time resolution is mostly dominated by the front-end of the device~\cite{clictdTestbeam}, which overshadows sensor effects related to the device thickness. Nevertheless, a \SI{14}{\percent} improvement is visible for the Czochralski sensor owing to a larger seed signal, which suppresses time jitter. An increase in substrate bias voltage leads to an additional improvement in time resolution, as presented in Fig.~\ref{fig:tS_CzBias_gapN} at a threshold of 348\,e. Between -6\,V and -20\,V, the time resolution improves by approximately \SI{9}{\percent}.
\begin{figure}[tbp] \centering \includegraphics[width=\columnwidth]{Figures/timing/timeVsBias_CzW15_202203.pdf}% \caption{Time resolution as a function of the substrate bias voltage at a threshold of 348\,e for a Czochralski sensor with segmented n-implant. The p-well voltage is fixed to -6\,V.} \label{fig:tS_CzBias_gapN} \end{figure}
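The resolution extraction used for both the spatial and time residuals (RMS of the central $3\,\sigma$, with the telescope or track contribution subtracted in quadrature) can be sketched as follows; this is one plausible reading of the truncation step, with hypothetical names:

```python
import numpy as np

def extract_resolution(residuals, reference, n_sigma=3.0, n_iter=5):
    """RMS of the central n_sigma of the residual distribution, with the
    reference (telescope/track) resolution subtracted in quadrature."""
    r = np.asarray(residuals, dtype=float)
    r = r - r.mean()
    for _ in range(n_iter):              # iterate the truncation until stable
        r = r[np.abs(r) <= n_sigma * r.std()]
    rms = r.std()
    return np.sqrt(max(rms**2 - reference**2, 0.0))
```

The same helper would be applied to the time residuals with the track time resolution as reference.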
\section{Introduction} The question of how a background medium affects the motion of a charge carrier is one of the most heavily debated issues in solid state physics. In this connection the background may typify a great variety of situations. It could, for example, represent a simple deformable lattice~\cite{La33,Pe46a}, or a highly correlated Mott insulator~\cite{LNW06,Be09,WOH09}. In the former case, the mutual interaction between the charge carrier and the lattice deformation may constitute a new quasiparticle, the so-called (lattice) polaron~\cite{Fi75}, an electron dressed by a phonon cloud. Here, depending on the nature of the particle-phonon coupling~\cite{SS93}, non-polar short-ranged or polar long-ranged, (small) Holstein~\cite{Ho59a,Ho59b} or (large) Fr\"ohlich~\cite{Fr54,Fr74} polarons will form, with distinct transport and optical properties~\cite{Alex07,IRF06,FT07,AD10}. In the latter case, the undoped (insulating) parent compounds develop magnetic, orbital, or charge ordered phases at low temperatures~\cite{TNFS00}. Prominent examples are the three-dimensional (3D) ferromagnetic (colossal magnetoresistive) manganites~\cite{JTMFRC94}, the quasi-2D antiferromagnetic (high-temperature superconducting) cuprates~\cite{BM86}, the 2D transition metal dichalcogenides (competition between unconventional superconductivity and charge-density-wave order)~\cite{KLVT14}, or the 1D halogen-bridged (charge-density-wave) transition-metal complexes~\cite{BS93}. Upon doping such systems, the charge carriers, electrons or holes, cannot propagate freely as their motion normally disturbs the spin-, orbital-, or charge-order of the background. Nevertheless, coherent particle transfer may occur on a reduced energy scale: this time the particles have to carry a cloud of background (e.g., spin or orbital) excitations. The corresponding quasiparticles are frequently called in the literature spin~\cite{KLR89,MH91a} or orbital polarons~\cite{WOH09}.
The Edwards fermion-boson model constitutes a paradigmatic model to describe quantum transport and polaron formation for such situations~\cite{Ed06}. It is based only on a few, very plausible assumptions: (i) as a charge carrier moves along a transport path in a solid, it creates an excitation with a certain energy in the background medium at the site it leaves or annihilates an existing excitation at the site it enters, (ii) because of quantum fluctuations, excitations in the background may appear and disappear spontaneously, and (iii) the (de)excitations of the background can be parameterized as bosonic degrees of freedom. In this way, the model captures, to varying degrees, some of the basic aspects of the Holstein-, $t$-$J$-, Hubbard- or Falicov-Kimball-model physics. The Edwards Hamiltonian reads \begin{equation} H=H_{b} -\lambda\sum_i(b_i^{\dagger}+b_i^{}) +\omega_0\sum_i b_i^{\dagger}b_i^{}\,, \label{model1} \end{equation} where the first term, $H_{b}$=$-t_{b}\sum_{\langle i, j \rangle} f_j^{\dagger}f_{i}^{} (b_i^{\dagger}+b_j^{})$, describes a boson-affected nearest-neighbor hopping $(\propto t_b)$ of spinless fermionic particles $(f_i^{(\dagger)})$, the second term allows for the relaxation $(\propto \lambda)$ of the bosons ($b_{i}^{(\dagger)}$), and the third term gives the energy $(\propto \omega_0)$ of the bosonic background excitations. We note that in the Edwards model, the coupling between fermions and bosons notably differs from that in the Su-Schrieffer-Heeger (SSH) model~\cite{SSH79,CSG97,MDCBNPMS10} where the modulation of the electronic hopping is given by the difference of the on-site lattice displacements $(\hat{X_i}-\hat{X}_j)$, with $\hat{X_i}\propto (b_i^\dagger + b_i^{})$ and $b_i^\dagger$ creating a phonon at site $i$. Self-evidently, the Edwards fermion-boson coupling also differs from the local Holstein electron-phonon interaction~\cite{Ho59a,Ho59b}, which couples the local particle density to a quantized (dispersionless) optical normal mode of lattice vibration.
In the Edwards model, the (Einstein) boson simply accounts for the (de)excitation of the background, through which the fermion is moving. So far, the Edwards model could be solved exactly only in 1D, namely by numerical approaches like exact diagonalization and density-matrix renormalization group (DMRG) techniques. There, for a single particle, quasi-free, diffusive, or correlated transport emerges~\cite{AEF07}. The latter sets in at small $\lambda$ and large $\omega_0$ when the background becomes ``stiff'', a case that resembles the motion of a hole in an antiferromagnetic spin background~\cite{Tr88,KLR89,MH91a,EEAF10}. At half-filling, $\tfrac{1}{N} \sum_{i}\langle f_i^{\dagger}f_{i}^{}\rangle$=$1/2$, a metal-insulator quantum-phase transition has been proven to exist: Entering the strongly correlated regime a repulsive Tomonaga-Luttinger liquid gives way to a charge-density-wave ground state~\cite{WFAE08,EHF09}. Off half-filling, attractive Tomonaga-Luttinger-liquid phases and regions with phase separation have been detected~\cite{ESBF12}. In 2D, the Edwards model has been treated only approximately. In the single-particle sector, using the momentum-average approach~\cite{BF10}, the quasiparticle dispersion throughout the Brillouin zone has been calculated. Very recently, employing the projective renormalization method~\cite{CBFBS16}, a tendency towards unconventional superconductivity has been observed for the 2D half-filled band case. In this paper we focus on Edwards polaron formation in the single-particle sector. Since the microscopic structure of the Edwards polaron is rather diverse, with---depending on the model parameters---lattice polaron or spin polaron characteristics, we utilize a self-consistent variational numerical diagonalization technique~\cite{CCM07,CDC11,CMCD12,CM13,CTM14} to address this issue in one to three spatial dimensions.
Due to the huge bosonic Hilbert space, the dimensionality effects on the Edwards polaron problem have not been studied before. The proposed method is capable of computing the band dispersion, the quasiparticle weight, the effective mass, the Drude weight and the spatial particle-boson correlations of the polaron in 1D to 3D. Thereby we particularly investigate how the new energy scale of ``coherent'' particle transport develops. That the Edwards model actually captures {\it two} transport channels, a free-fermion hopping channel on a reduced energy scale and the original boson-affected one, already becomes visible when performing a unitary transformation, $b_i\to b_i +\lambda/\omega_0$, which eliminates the boson relaxation term. Omitting the constant energy shift $N\lambda^2/\omega_0$ ($N$ is the number of lattice sites), we obtain \begin{equation} H\to H=H_{f} +H_{b} +\omega_0\sum_i b_i^{\dagger}b_i^{}\,, \label{model2} \end{equation} where $H_{f}$=$-t_f\sum_{\langle i, j \rangle} f_j^{\dagger}f_{i}^{}$ with $t_f$=$2\lambda t_b/\omega_0$. The physics of the Edwards model is governed by two parameter ratios: $t_f/t_b$ (relative strength of free and boson-affected hopping) and $(\omega_0/t_b)^{-1}$ (rate of bosonic fluctuations). In this way $H$ perfectly describes the interplay of ``coherent'' and ``incoherent'' transport channels realized in many condensed matter systems. In what follows we measure all energies in units of $t_b$. The paper is organized as follows. Section II briefly introduces our numerical approach. In Sec. III, we determine the ground-state and spectral properties of the Edwards model and discuss several issues of the Edwards polaron problem, especially the dimensionality effect. Section IV gives a brief summary and contains our conclusions.
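For completeness, the shift transformation introduced above can be spelled out; it is a standard displacement of the boson operators, sketched here for convenience. Substituting $b_i^{(\dagger)}\to b_i^{(\dagger)}+\lambda/\omega_0$ in the relaxation and boson-energy terms gives
\begin{equation*}
\omega_0\sum_i \Big(b_i^\dagger+\frac{\lambda}{\omega_0}\Big)\Big(b_i^{}+\frac{\lambda}{\omega_0}\Big)
 -\lambda\sum_i\Big(b_i^\dagger+b_i^{}+\frac{2\lambda}{\omega_0}\Big)
 =\omega_0\sum_i b_i^\dagger b_i^{}-\frac{N\lambda^2}{\omega_0}\,,
\end{equation*}
i.e., the relaxation term is eliminated at the price of a constant shift, while the boson-affected hopping acquires a free-fermion channel,
\begin{equation*}
-t_b\sum_{\langle i,j\rangle} f_j^{\dagger}f_{i}^{}\Big(b_i^\dagger+b_j^{}+\frac{2\lambda}{\omega_0}\Big)
 = H_b-\frac{2\lambda t_b}{\omega_0}\sum_{\langle i,j\rangle} f_j^{\dagger}f_{i}^{}
 = H_b+H_f\,,
\end{equation*}
with $t_f=2\lambda t_b/\omega_0$, as stated.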
\section{Numerical Approach} \begin{table}[b] \begin{tabular}{|c|c|c|cc|c|} \hline \hline $t_f$ & $\omega_0$ & $k$ & $E_0$(SC-VED) &[basis size]& $E_0$(VED) \\ \hline 20 & 0.5 & $0$ & -40.5922 &(2000000) & -40.591 \\ \hline 20 & 0.5 & $\pi$ & -40.05 &(2000000) & -40.01 \\ \hline 2 & 0.5 & $0$ & -5.427354 &(1250000) & -5.42734 \\ \hline 2 & 0.5 & $\pi$ & -5.020042 &(1500000) & -5.02 \\ \hline 5 & 2.0 & $0$ & -10.388823488 & (1250000) & -10.388823488 \\ \hline 5 & 2.0 & $\pi$ & -8.386998 &(1250000) & -8.38 \\ \hline 1 & 2.0 & $0$ & -2.59317697703908 & (750000) & -2.59317697704 \\ \hline 1 & 2.0 & $\pi$ & -0.8637159668 &(1500000) & -0.86371596 \\ \hline \hline \end{tabular} \caption {Ground-state energy in a certain $k$ sector for the single-particle Edwards model in 1D. SC-VED results are compared with data obtained by the VED approach (which is basically the same as used in Ref.~\onlinecite{AEF07}). Within VED, a variational basis of $18054141$ states is used. The numerical accuracy is specified in such a way that the ground-state energies of the $(N_h-1)$-th shell and the $N_h$-th shell match up to the digit presented. For the SC-VED this means that these digits do not change in going from the penultimate to the final iteration cycle. Given the dimension of the basis and the computational effort, the accessible precision of the data strongly depends on the model parameters and the momentum.} \label{t1} \end{table} A variational basis is constructed by diagonalizing the Edwards fermion-boson model numerically, starting with a state of a bare electron and adding new states by repeated application of the Hamiltonian, say $N_h$ times. All translations of these states on the infinite lattice are included. Hereafter, we refer to such variational approaches based on exact diagonalization as VED~\cite{BTB99,BT01,CCM07,CDC11,CMCD12,CM13,CTM14,PCT16}.
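The basis-generation procedure just described can be illustrated by a small sketch for the 1D model in the strongly correlated limit (only the boson-affected hopping $H_b$ taken as off-diagonal, i.e., $\lambda=0$; the production code additionally applies the relaxation term). Translations are factored out by storing boson occupations relative to the fermion position:

```python
def shift(bosons, d):
    # re-centre the boson offsets after the fermion hops by d sites
    return tuple(sorted((x - d, n) for x, n in bosons.items()))

def neighbors(state):
    """States reached from `state` by one boson-affected hop.

    `state` is a sorted tuple of (offset, occupation) pairs, the fermion
    sitting at offset 0.  A hop in direction d either creates a boson at
    the departed site (offset 0) or absorbs one at the entered site (d).
    """
    bosons = dict(state)
    out = set()
    for d in (+1, -1):
        created = dict(bosons)
        created[0] = created.get(0, 0) + 1
        out.add(shift(created, d))
        if bosons.get(d, 0) > 0:
            absorbed = dict(bosons)
            absorbed[d] -= 1
            if absorbed[d] == 0:
                del absorbed[d]
            out.add(shift(absorbed, d))
    return out

def build_basis(n_h):
    # breadth-first application of H_b to the bare-fermion root state
    basis, frontier = {()}, {()}
    for _ in range(n_h):
        new = set()
        for s in frontier:
            new |= neighbors(s)
        frontier = new - basis
        basis |= new
    return basis
```

Two generation steps already recover the bare-fermion state (the trivial two-hop retracing), and the basis grows from 1 to 3 to 7 states.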
We will also apply a self-consistent VED (SC-VED) scheme, which has successfully been used to investigate the (extended) Holstein model~\cite{CM13,CTM14}. In the SC-VED framework, we first generate a relatively small basis set and calculate the ground-state energy and the wave function. Then the states with the highest probability are identified and the basis is optimized by applying the Hamiltonian only to these (highly probable) states. Accordingly, the size of the basis is increased. Then the ground-state energy and the wave function are calculated again. This process is continued self-consistently by increasing the basis size at each cycle till the desired accuracy in the ground-state energy is obtained. To test the accuracy and efficiency of the SC-VED method, we have recalculated the ground-state energy for a single electron in the 1D Edwards model, a problem that has been solved previously~\cite{AEF07}. Table~I demonstrates the high precision of the SC-VED data, in spite of using a much smaller basis space. For comparison, the VED results included in Table~I were obtained within a variational space of $18054141$ states, corresponding to $N_h=18$. Actually, the SC-VED scheme even gives a lower ground-state energy. Note that, keeping the computational effort constant, the numerical accuracy of our data depends on the model parameters, as well as on the momentum. Similarly to the Holstein model~\cite{BTB99}, a higher precision can be reached with fewer resources if the number of phonons involved is smaller. For the Edwards (Holstein) model this is the strong-correlation (weak-coupling) case, realized at small $\lambda$ and large $\omega_0$ (small polaron-binding-energy phonon-frequency ratio). That one achieves a lesser accuracy for large momenta was observed for the Holstein polaron model as well~\cite{BTB99}.
The reason is the extent of the lattice deformation (size of the polaron), which increases as $k$ approaches the Brillouin-zone boundary, thereby making any finite-cluster calculation more susceptible to finite-size effects. To ensure that the basis contains an adequate number of bosons for a given parameter set, the weight of the $m$-boson state in the ground state, $|C_{0}^{m}|^{2}$ (for definition, see Refs.~\onlinecite{BWF98,FLW00}), has to be monitored. The main panel of Fig.~\ref{f1} illustrates the convergence of $|C_{0}^{m}|^{2}$ in the course of the VED iteration process. Beyond that, one recognizes that most bosons are required in the limit of small boson frequencies $\omega_0$ (see inset). We note at this point that the limit of $\omega_0\to 0$ substantially differs from the adiabatic limit of the Holstein polaron model~\cite{AFT10}, in that the fermions in the Edwards model do not couple to an (optical) phonon which leads to a static lattice displacement as the frequency of the vibrational mode goes to zero. \begin{figure}[b] \includegraphics[scale=0.4]{fig1.pdf} \caption{ (Color online) Main panels: Pointwise convergence of the weight of the $m$-boson state in the ground state, $|C_{0}^{m}|^{2}$, for $k$=$0$ (top) and $k$=$\pi$ (bottom), where $t_f=20$, $\omega_{0}=0.5$ (left) and $t_f=1$, $\omega_0=2$ (right). $\Delta$ specifies the absolute value of the difference between the first (red triangles up), second (green diamonds), third (blue squares), fourth (violet circles) iteration step and the final result obtained---with the requested accuracy---after five iterations. Insets: Converged values of $|C_{0}^{m}|^{2}$ (filled black circles).} \label{f1} \end{figure} \begin{figure}[b] \vskip -0.0cm \includegraphics[scale=0.33]{fig12.pdf} \caption{(Color online) Comparison of polaron dispersion with VED and VED-PBC basis set for 3D and 1D (inset) Edwards polaron for $t_b=1$ and $\omega_0=1$ for small values of $t_f$.
$N_h= 8$ basis set has been used for 3D polaron with basis sizes $1755748$ for VED and $1500868$ for VED-PBC (on a $9\times 9$ lattice). The basis size for 1D Edwards with VED is $18054141$ ($N_h$=$18$) and for VED-PBC is $41485$ ($N_h$=$11$). The basis size for $N_h$=$11$ with VED is $41488$. } \label{fcom} \end{figure} Let us emphasize that the VED method allows for a (de facto) continuous variation of the momentum $k$. This is because all translations of the basis states, generated by ``acting'' $N_h$ times with the ``off-diagonal'' hopping and fermion-boson coupling terms of the Hamiltonian on an initial root state, on an infinite lattice are included~\cite{BTB99,FT07}. Treating the 1D Edwards model with $N_h=18$ means that a single bosonic excitation 18 lattice sites away from the fermion is still taken into account. That is why a small Edwards (Holstein or SSH) polaron never feels the boundary in reality. This advantage of the VED persists in the SC-VED scheme~\cite{CM13}. What happens if we now apply periodic boundary conditions (PBC)? Generating the VED basis set on a 1D lattice with $35$ sites and PBC, the Edwards polaron will be unaffected by the boundary conditions until $N_h$=$17$. The PBC first comes into play at the $18^{th}$ basis generation step, but even here, states having 18 bosons but no bosonic excitation at the boundary remain unaffected. This argument holds also in higher dimensions, albeit to a weaker extent. Constructing an $N_h=9$ basis set on a 2D $9\times 9$ lattice with open and periodic boundary conditions, a difference arises at $N_h=5$. To substantiate our reasoning, Fig.~\ref{fcom} shows the Edwards polaron band dispersion for the 3D and 1D (inset) cases, comparing the VED and VED-PBC schemes. Apparently, the data match very well: In 3D (1D) the first 3 (9) decimals agree.
Since the physically most important processes take place in the immediate vicinity of the polaronic quasiparticle, the smaller the radius of the Edwards polaron the better is the agreement of the approaches. Hence the VED-PBC based on small lattices becomes highly efficient, whenever the Edwards polaron is rather small, i.e., the background medium is strongly correlated. Next, just to show that our numerical scheme also admits the calculation of excited states and spectral properties, Fig.~\ref{f2} displays the dispersion $E_{1}(k)$ of the first excited state [besides those of the ground state $E_{0}(k)$], and the single-particle spectral function, \begin{equation} A(k,\omega)=\sum_n |\langle \psi_n^{(1)} | f_k^\dagger |0\rangle |^2 \delta(\omega-E_n)\,, \label{ako} \end{equation} in the strongly correlated regime. In Eq.~\eqref{ako}, $|\psi_n^{(1)}\rangle $ is the wave function of the $n$-th excited state in the one-particle sector, $E_n$ is its energy, and $|0\rangle$ is the vacuum. Since the particle motion in this parameter regime is essentially determined by the boson-assisted hopping, we find well separated peaks in the spectrum of all the selected $k$ sectors~\cite{AEF07}. Of course, the ground-state band dispersion follows that of the first peak in $A(k,\omega)$. Note that the peak corresponding to the first excited state has only tiny spectral weight, and therefore is hardly visible in the spectral function. \begin{figure}[h] \vskip -0.0cm \includegraphics[scale=0.4]{fig2.pdf} \caption{(Color online) Spectral function $A(k,\omega)$ (top) and band dispersion (bottom) of the 1D Edwards model in the first Brillouin zone. Results are given for $t_f=0.1$ and $\omega_0=2$. In the upper panels, vertical red lines indicate the position of the first excited state (located at $\omega\simeq0.63$).
All calculations were performed with a VED basis set of $18054141$ states ($N_h=18$).} \label{f2} \end{figure} Concerning the computational resources, our 1D VED single-electron calculation takes a basis constructed with $N_h$=$18$. Then, for a lattice with $37$ sites, the matrix dimension is of the order of $18$ million. For comparison: In 2D (3D), we will take $N_h$=$10$ (8), which means a dimension of about $11$ million for a $9\times 9$ ($5\times 5 \times 5$) lattice. In what follows we employ the SC-VED scheme to obtain a better convergence (in the $k$=$0$ sector) for all spatial dimensions. \section{Results and Discussion} \subsection{Polaron band dispersion} \begin{figure}[b] \vskip 0.5cm \includegraphics[scale=0.4]{fig3.pdf} \caption{(Color online) Polaronic band dispersion ($E({\bf k})-E_0$) of the 1D, 2D, and 3D Edwards model in the small-$t_f$ large-$\omega_0$ regime.} \label{f3} \end{figure} \begin{figure}[t] \vskip -0.0cm \includegraphics[scale=0.39]{fig13a.pdf} \hspace*{3.3cm}{\includegraphics[scale=0.50]{fig13b.pdf}} \caption{ (Color online)\label{fCL} Sketch of the lowest order vacuum-restoring processes in the Edwards model. The top panel illustrates the three-boson, three-site sequence of processes that gives rise to an effective second nearest-neighbor fermion hopping in 1D~\cite{AEF07}. The site occupied by a fermion is blue and the arrow indicates the direction of the next $t_b$-hopping. Any fermion hopping either creates a boson (drawn as a red asterisk) at the site the particle leaves or absorbs a boson from the site it enters. While only collinear hops are allowed in 1D, collinear, noncollinear (``round the corner''~\cite{BF10}), and closed-loop processes are allowed in 2D (middle panel). Note that, in 3D, any hopping process to the next nearest-neighbor body-diagonal site contains an odd number of hops and therefore is not vacuum restoring (both points belong to disconnected vacuum states).
Vacuum-restoring hopping processes are only possible to the second nearest-neighbor body diagonal site, see bottom panel. Again these processes are composed of 1D collinear, noncollinear, and/or 2D closed loop hops; they are of much higher ($18^{th}$) order in $t_b$, however.} \end{figure} We first explore the quasiparticle energies $E({\bf k})$ in the Edwards model. Figure~\ref{f3} gives the polaron band dispersion in the regime where strong correlations in the background hinder the particle motion. Such a situation is realized at large values of $\omega_0$, where the bosonic excitations that are inherently connected to particle hopping are costly in terms of energy, and at small $t_f$, i.e., at small $\lambda$, when the ability of the background to relax is low. As a result the (coherent) bandwidth, defined by the difference between the maximum and minimum of $E({\bf k})$ in the first Brillouin zone, is strongly reduced compared to that of the free particle. Clearly a ``true'' polaron band $E({\bf k})$ becomes only apparent if its bandwidth is smaller than the distance to the polaron-plus-one-boson continuum. In other words, the (lowest) quasiparticle band is well-separated from the incoherent part of the spectrum (or higher quasiparticle bands). This is obviously achieved in the parameter regime used for Fig.~\ref{f3}. We furthermore see that the polaron's bandwidth becomes larger as the dimensionality of the system increases from 1D to 3D. This is not difficult to understand because a string of bosonic excitations tends to bind the particle to the place where it starts its excursion. In higher dimensions, there are more ways to unwind such a string. Interestingly, coherent motion is nevertheless possible in 1D, and even for $t_f$=$0$, because there exists a six-step vacuum-restoring process~\cite{AEF07} which is a 1D representative of the 2D ``Trugman path''~\cite{Tr88} observed in a 2D N\'{e}el-ordered spin background, see Fig.~\ref{fCL}.
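The hop counting behind these vacuum-restoring paths can be checked by brute force for the 1D boson-affected hopping (a small sketch under the $t_f$=$0$ assumption; the state encoding is ad hoc):

```python
from itertools import product

def apply_hop(state, d, mode):
    """One boson-affected hop: move the fermion by d, either creating a
    boson at the departed site ('c') or absorbing one at the entered
    site ('a'); returns None if there is no boson to absorb."""
    x, bosons = state
    b = dict(bosons)
    if mode == 'c':
        b[x] = b.get(x, 0) + 1
    else:
        if b.get(x + d, 0) == 0:
            return None
        b[x + d] -= 1
        if b[x + d] == 0:
            del b[x + d]
    return (x + d, tuple(sorted(b.items())))

def vacuum_restoring_displacements(n_hops):
    """Fermion displacements reachable in exactly n_hops hops that leave
    the background in the boson vacuum again."""
    moves = [(+1, 'c'), (+1, 'a'), (-1, 'c'), (-1, 'a')]
    found = set()
    for seq in product(moves, repeat=n_hops):
        state = (0, ())
        for d, m in seq:
            state = apply_hop(state, d, m)
            if state is None:
                break
        if state is not None and state[1] == ():
            found.add(state[0])
    return found
```

Odd hop numbers never restore the vacuum, two hops only retrace the path (displacement zero), and a next-nearest-neighbor transfer first appears at six hops, in line with the effective second-neighbor hopping discussed above.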
Since any hop of the particle changes the boson number by one, any vacuum-restoring process has to involve an even number of hopping events. It is worth noting that the strong correlations in the background medium give rise to a boson-modulated hopping that triggers a doubling of the unit cell (halving of the Brillouin zone). When $t_f=0$ (i.e., $\lambda=0$, and only vacuum-restoring hopping processes are allowed) the backfolding becomes perfect. This has been previously observed in 1D~\cite{AEF07} and 2D~\cite{BF10}. Figure~\ref{f3} demonstrates that $E({\bf k})$ is $(\pi, \pi, \pi)$-periodic also in 3D. The resulting dispersion reflects the developing many-body correlations in the background medium. Since the coherent bandwidth (and the effective mass, see Sec.~III.C) is dominated by the sequence of vacuum-restoring closed-loop hopping processes (which are two-dimensional in 3D as well, cf. Fig.~\ref{fCL}), the 2D and 3D bandwidths do not differ much. The new periodicity of the Brillouin zone at $t_f$=$0$ is illustrated by the contour plots in Figs.~\ref{f4} and \ref{f5} for the 2D and 3D Edwards model, respectively. Of course, any finite $t_f$ will weaken the backfolding of the polaron band dispersion (see Fig.~\ref{f4}, right panel). \begin{figure}[t] \hspace{-2.00cm}\includegraphics[scale=0.45]{fig4.pdf}\hspace{-1.5cm} \caption{ (Color online) Ground-state energy shift, $E(k_x,k_y)$-$E_0$, as a function of $(k_x,k_y)$ for the 2D Edwards model with $\omega_0$=$1$ and $t_f$=$0$ [note the band folding along the $(1,1)$-direction in reciprocal space] (left), $t_f$=$0.1$ (right).} \vskip 0.5cm \label{f4} \end{figure} \begin{figure}[t] \vskip -0.0cm \includegraphics[scale=0.5]{fig5.pdf} \caption{ (Color online) Contour plot of $E(k_x,k_y,k_z)$-$E_0$ as a function of $(k_x,k_y)$ at $k_z$=$0, \pm \pi$ for the 3D Edwards model.
Again, $t_f$=$0$ and $\omega_0$=$2$ [note the band folding along the $(1,1,1)$-direction in reciprocal space].} \label{f5} \end{figure} \subsection{Quasiparticle weight} Further information about the nature of the polaronic quasiparticle in the Edwards model can be obtained by computing the quasiparticle residue, \begin{eqnarray} Z({\bf k})=\vert\langle \psi^{(1)}_k \vert f_{k}^{\dag} \vert 0\rangle\vert^{2}\,, \end{eqnarray} which measures the overlap (squared) between the bare particle's band state $f_{k}^{\dag} \vert 0\rangle$ and the polaron ground-state wave function $|\psi^{(1)}_k\rangle$~\cite{BTB99,BF10}. Figure~\ref{f6} gives $Z({\bf k})$ along lines of high symmetry in the Brillouin zone. First, we note that $Z({\bf k})$ is significantly reduced compared to the free-particle value of one. That is, the Edwards polaron is heavily dressed by a cloud of bosonic excitations. Even so, it is much less renormalized than the Holstein polaron~\cite{WRF96,WF98a,FT07}. Obviously, in the strong correlation regime, the Edwards polaron rather behaves as a spin polaron. Second, while $Z({\bf k})$ has a similar profile as $E({\bf k})$ throughout the Brillouin zone (cf. Fig.~\ref{f3}), it changes very little in real terms. This has been already observed for the 2D case within the momentum-average approximation~\cite{BF10}, and retains its validity, as the exact data of Fig.~\ref{f6} indicate, in 1D and 3D as well. We note that at finite $t_f$, the quasiparticle weight is larger [smaller] at (0,[0,0]) [$(\pi,[\pi,\pi])$] than the corresponding $t_f$=$0$-value. This is because the effective next-nearest-neighbor vacuum-restoring hopping process becomes less important if $t_f$$>$$0$. \begin{figure}[t] \includegraphics[scale=0.4]{fig6.pdf} \caption{(Color online) Quasiparticle weight, $Z({\bf k})$, along the major directions of the Brillouin zone for the 1D (top), 2D (middle), and 3D (bottom) Edwards model at $\omega_0$=$1$, and $t_f$=$0$ (red), $t_f$=$0.01$ (blue).
The inset gives $Z({k})$ at $\omega_0$=$2$ for the 1D case.} \label{f6} \end{figure} \subsection{Effective mass} Being able to calculate $E({\bf k})$ with high precision for continuously varying ${\bf k}$, we can compute the effective mass of the Edwards polaron for a $d$-dimensional hypercubic lattice, using the standard formula, \begin{equation} \frac{m}{m^*}=\frac{1}{d}\left[\sum_{i=1}^{d}\frac{\partial^2E(\vec{k})}{\partial {k_{i}}^2}\right]_{k_i=0}\;, \label{rem} \end{equation} where $m$ denotes the ``reference'' mass, describing a situation when both hopping channels are of equal importance (i.e., $t_f=t_b=1$). \begin{figure}[t] \includegraphics[scale=0.4]{fig7.pdf} \caption{ (Color online) Effective mass $m^*/m$ in dependence on $t_f$ for the 1D, 2D, and 3D Edwards model (from top to bottom). Insets magnify the region of very small $t_f$.} \label{f7} \end{figure} Figure~\ref{f7} displays the results obtained for the Edwards polaron's effective mass in 1D, 2D, and 3D. We first note that a finite $m^*$ results even if the ``free particle'' has an infinite mass ($t_f$=$0$). The Edwards polaron transition is always continuous. By contrast, in the SSH model, a sharp transition might appear when the coupling depends not only on phonon momentum but also on the electron momentum~\cite{MDCBNPMS10}. Considerable differences are also observed compared to the Holstein model. For example, the dimensionality affects the polaron crossover in a different manner (cf. the results given for the Holstein model in Ref.~\onlinecite{KTB02}). While the crossover becomes more pronounced in higher dimensions for the Holstein case, the opposite tendency is observed for the Edwards polaron. Moreover, for the small Holstein polaron, the inverse effective mass obtained from Eq.~\eqref{rem} differs from $Z({\bf k=0})$ by less than 1\%~\cite{BTB99}.
As can be seen by comparing Figs.~\ref{f6} and~\ref{f7}, this difference is much larger (up to a factor of 100) for the Edwards polaron when $t_f\to 0$ in the strongly correlated regime. In the case of boson-assisted transport, the dynamical generation of the effective mass is dominated by contributions from closed loops, which are comparably important in 2D and 3D (we already discussed in Sec.~III~A that, in 3D, the lowest-order vacuum-restoring processes are basically the same as in 2D). Two more comments are in order here. First, in the ``diffusive'' or ``fluctuation-dominated'' transport regimes~\cite{AEF07} of small $\omega_0$, the mass enhancement is considerably smaller. In this regime, the quasiparticle band picture may even break down for the Edwards model (mainly because $E({\bf k})$ is no longer separated from the polaron-plus-one-boson continuum). Second, once $t_f$ considerably exceeds $t_b$, we enter the quasi-free transport regime. Of course, for $t_f \to \infty$, $m^*$ (measured in units of the reference mass $m$) tends to zero. \begin{figure}[b] \includegraphics[scale=0.4]{fig8.pdf} \caption{(Color online) Drude weight $D$ scaled to the kinetic energy $E_{kin}$ as a function of $t_f$ for the 1D, 2D, and 3D Edwards model. Insets magnify the small-$t_f$ regime.} \label{f8} \end{figure} \subsection{Drude weight} In situations where electrical transport differs entirely from free-particle motion, the Drude weight is typically used to characterize transport~\cite{Ko64,AEF10}. The Drude weight $D$ serves as a measure of coherent, free-particle-like transport, and fulfills the $f$-sum rule. We have $-D/E_{kin}$=$1/2$ for a free particle, where $E_{kin}$ is the kinetic energy. By contrast, $-D/E_{kin}\ll 1/2$ for diffusive transport. 
For our fermion-boson system, the Drude weight can be obtained by adding the same phase factor to the hopping matrix elements along the spatial directions of the hypercubic lattice ($t_{f} \rightarrow t_{f} e^{i \phi}$, $t_{b} \rightarrow t_{b} e^{i \phi}$, which breaks time-reversal symmetry), and then exploiting the relation~\cite{Ko64,PCT16}: \begin{equation} D=\left. \frac{\partial^2E_0(\phi)}{\partial \phi^2} \right|_{\phi=0} \end{equation} (in units of $\pi e^2$), where $E_0(\phi)$ is the ground-state energy in the presence of a non-vanishing phase $\phi$. Figure~\ref{f8} shows the dependence of $-D/E_{kin}$ on $t_f$ at different values of $\omega_0$. The 1D results are in excellent agreement with those of Ref.~\onlinecite{AEF07}. Here, the data for $\omega_0$=$2$ indicate that transport is quasi-free with $-D/E_{kin}\lesssim 1/2$ in a wide range of $t_f$. For $\omega_0$=$2$ and $t_f$=$0$, $D$ increases by about a factor of two (three) in going from 1D to 2D (3D), which is basically due to the increasing coordination number of the corresponding hypercubic lattices. When $\omega_0$ decreases, the particle is strongly scattered by background fluctuations, and $-D/E_{kin}$ tends to its asymptotic value 1/2 as $t_f\to \infty$ much more slowly. This characterizes the incoherent regime. On the other hand, for very small $t_f$, boson-assisted hopping is the dominant transport channel. Here $D$ increases with decreasing $\omega_0$ (see insets). Interestingly, for $t_f$=$0$, it can be shown analytically~\cite{AEF10} that $D$ remains finite as $\omega_0\to 0$. These overall trends persist in 2D and 3D. However, there are subtle distinctions relative to the 1D case, for instance, in the regime of small $\omega_0$: while $-D/E_{kin}$ stays almost constant for $t_f\ll 1$ when going from 1D to 3D, in higher dimensions it significantly exceeds its 1D value for larger $t_f$ (note the different scales of the abscissae in Fig.~\ref{f8}). 
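The phase-factor prescription above is easy to check in the trivial free-particle limit. The following minimal sketch (our own illustration, not the variational diagonalization used for the data in the figures; the ring size, hopping amplitude, and step size are arbitrary choices) diagonalizes a single-particle tight-binding ring with a Peierls phase $\phi$ on every bond and evaluates $\partial^2 E_0/\partial\phi^2|_{\phi=0}$ by a central finite difference. Note that overall normalization conventions for $D$ (the units of $\pi e^2$, possible factors of $1/2$ or $1/N$) vary in the literature, so only the second derivative itself is computed here.

```python
import numpy as np

def ring_hamiltonian(L, t, phi):
    """Single-particle tight-binding ring with a Peierls phase on each bond."""
    H = np.zeros((L, L), dtype=complex)
    for i in range(L):
        j = (i + 1) % L
        H[j, i] += -t * np.exp(1j * phi)   # hop i -> j picks up e^{+i phi}
        H[i, j] += -t * np.exp(-1j * phi)  # Hermitian conjugate, e^{-i phi}
    return H

def ground_energy(L, t, phi):
    """Lowest single-particle eigenvalue E_0(phi)."""
    return np.linalg.eigvalsh(ring_hamiltonian(L, t, phi)).min()

# Central finite difference for d^2 E_0 / d phi^2 at phi = 0
L, t, h = 8, 1.0, 1e-3
D = (ground_energy(L, t, h) - 2 * ground_energy(L, t, 0.0)
     + ground_energy(L, t, -h)) / h**2
E_kin = ground_energy(L, t, 0.0)   # ground-state energy, here -2t
print(D, E_kin)
```

For the free ring one finds $\partial^2E_0/\partial\phi^2|_{\phi=0}=2t$, the curvature of the band bottom; in the interacting model the same finite difference is applied to the ground-state energy of the full Hamiltonian with the phase attached to both $t_f$ and $t_b$.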
This means that, as more hopping channels open, the system approaches the free-electron value much faster in the diffusive regime (e.g., $D$ increases by a factor of 7--8 when going from 1D to 3D at $\omega_0$=$0.5$, $t_f$=$2$). \begin{figure}[b] \includegraphics[scale=0.4]{fig9.pdf} \caption{ (Color online) Particle-boson density-density correlation function $\chi(i-j)$ for the 1D Edwards model.} \label{f9} \end{figure} \subsection{Particle-boson correlations} The ground-state expectation value \begin{eqnarray} \chi({\bf r})=\langle \psi_0 \vert f_i ^\dagger f_i^{} (b_{i+{\bf r}}^{\dagger} b_{i+{\bf r}}^{}) \vert \psi_0\rangle \end{eqnarray} captures the density-density correlation between the fermionic particle located at a certain site $i$ and the bosons in its proximity. Figures~\ref{f9}, \ref{f10}, and~\ref{f11} show $\chi({\bf r})$ for the one-, two-, and three-dimensional cases, respectively. In the incoherent, diffusive transport regime (i.e., at rather small $\omega_0$, $t_f > 1$), the bosons form a cloud surrounding the fermion. Here, the maximum of $\chi$ coincides with the position of the fermionic particle and the bosons are only weakly correlated. In total, many bosons are excited at the fermion site and in its neighborhood. To a certain extent, this resembles the situation for a large Holstein polaron. By contrast, in the boson-assisted transport regime, realized at large $\omega_0$ and very small or zero $t_f$, the particle-boson correlations are large at the nearest-neighbor sites. A boson existing on a site next to the particle triggers transport because, according to the second term in $H_b$, the particle can hop to this site and thereby lower the total energy of the system by annihilating the bosonic excitation in the background. The same mechanism strengthens hopping processes along the coordinate directions in higher dimensions too, whereas, in 3D, transport along the body diagonal is not supported. 
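The correlation function $\chi({\bf r})$ is straightforward to evaluate on a very small cluster. The sketch below is purely illustrative (a 4-site ring with at most two bosons per site, far smaller than the systems behind the figures): it builds the 1D Edwards Hamiltonian $H=-t_b\sum_{\langle i,j\rangle} f_j^\dagger f_i^{}(b_i^\dagger+b_j^{}) - t_f\sum_{\langle i,j\rangle} f_j^\dagger f_i^{} + \omega_0\sum_i b_i^\dagger b_i^{}$ in the one-fermion sector and accumulates $\chi(r)$, summed over the fermion position, from the exact ground state.

```python
import itertools
import numpy as np

# Minimal 1D Edwards model: one fermion on an L-site ring, boson cutoff nb per site.
# Parameters illustrate the boson-assisted regime (t_f = 0, omega_0 = 2).
L, nb = 4, 2
t_b, t_f, w0 = 1.0, 0.0, 2.0

# Basis: (fermion site p, boson occupation numbers n_0 ... n_{L-1})
basis = [(p, occ) for p in range(L)
         for occ in itertools.product(range(nb + 1), repeat=L)]
index = {state: n for n, state in enumerate(basis)}

H = np.zeros((len(basis), len(basis)))
for n, (p, occ) in enumerate(basis):
    H[n, n] = w0 * sum(occ)                        # boson energy omega_0 * sum_i n_i
    for d in (1, -1):                              # fermion hop p -> q
        q = (p + d) % L
        H[index[(q, occ)], n] += -t_f              # free hopping channel
        if occ[p] < nb:                            # hop leaving a boson behind at p
            new = list(occ); new[p] += 1
            H[index[(q, tuple(new))], n] += -t_b * np.sqrt(occ[p] + 1)
        if occ[q] > 0:                             # hop absorbing a boson at q
            new = list(occ); new[q] -= 1
            H[index[(q, tuple(new))], n] += -t_b * np.sqrt(occ[q])

E, V = np.linalg.eigh(H)
psi = V[:, 0]                                      # ground state

# chi(r): boson density at distance r from the fermion, summed over its position
chi = np.zeros(L)
for n, (p, occ) in enumerate(basis):
    for r in range(L):
        chi[r] += psi[n]**2 * occ[(p + r) % L]
print(E[0], chi)
```

Even this tiny cluster reproduces, at $t_f=0$ and $\omega_0=2$, the dominant nearest-neighbor particle-boson correlation discussed above: the weight of $\chi$ at $r=\pm 1$ exceeds that at the fermion site itself.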
The absence of body-diagonal transport reveals once more the importance of closed loops for the dynamical generation of the effective mass in the strongly correlated regime (cf. the results of Ref.~\onlinecite{BF10} for the 2D case). We would like to point out that the nearest-neighbor particle-boson correlations are even more pronounced in 3D (and 2D) than in 1D (cf. the discussion of Fig.~\ref{f8} in Sec.~III~B). \begin{figure}[t] \includegraphics[scale=0.5]{fig10.pdf} \caption{(Color online) Particle-boson density-density correlation function $\chi(x,y)$ for the 2D Edwards model with $\omega_0$=$0.5$, $t_f$=$2$ (top), and $\omega_0$=$2$, $t_f$=$0$ (bottom).} \label{f10} \end{figure} \begin{figure}[t] \includegraphics[width=0.9\linewidth]{fig11.pdf} \caption{(Color online) Particle-boson density-density correlation function for the 3D Edwards model with $\omega_0$=$0.5$, $t_f$=$2$ (left), and $t_f$=$0$ and $\omega_0$=$2$ (right). The distance from the particle site is measured in lattice spacings along the (1,0,0) [black circles], (1,1,0) [red squares], and (1,1,1) [blue diamonds] directions.} \label{f11} \end{figure} \section{Conclusions} To summarize, we have investigated the formation of polarons in the Edwards fermion-boson model, placing special emphasis on transport and dimensionality effects. The Edwards model features two transport channels, a coherent and an incoherent one. Exploiting unbiased (variational) diagonalization techniques, we presented numerically exact results for the Edwards model, including correlation functions and quantities that characterize transport, in spatial dimensions one through three. It turned out that an Edwards polaron mainly develops when the background is stiff (highly correlated). Then coherent particle transport takes place on a strongly reduced energy scale. 
Entirely different from the Holstein- and SSH-type models, where the bosons are phonons and only (small) lattice polarons, comprising many phonons, will be formed (in D$>$1)~\cite{KTB02,FT07}, the Edwards polaron is a few-boson state in the regime of boson-assisted transport~\cite{AEF07}, where vacuum-restoring processes play a dominant role. In that case, the Edwards polaron is confined to a few lattice sites with pronounced nearest-neighbor particle-boson correlations. Edwards polaron formation requires a sizable mass enhancement, just as in the case of Holstein or SSH polarons. Likewise, the Edwards polaron transition is always continuous, i.e., a crossover, triggered---in a self-induced way---by the strength of the background correlations. Interestingly, the inverse effective mass of the Edwards polaron substantially differs from the quasiparticle weight, which is, of course, reduced from one, but only moderately so compared to the Holstein polaron. For the dynamical generation of the Edwards polaron's effective mass, closed loops are important in all spatial dimensions. In the opposite limit, when the background heavily fluctuates, the particle is strongly scattered by the bosonic fluctuations. This might enable transport when the ``free'' hopping channel ($\propto t_f$) is absent, but at the same time it limits transport. In either case the Drude weight is finite, even if the energy of the background excitations ($\propto \omega_0$) tends to zero. We note that the limit $\omega_0\to0$ thoroughly differs from the adiabatic limit of the Holstein model~\cite{AFT10} (for the SSH model the polaron crossover is unaffected by the adiabaticity ratio~\cite{CSG97}). If, at small values of $\omega_0$, the ``free'' hopping channel is well developed, the Drude weight (scaled to the kinetic energy) approaches its free-particle limiting value more readily in higher dimensions. Here, the boson cloud around the particle is spread out but weakly correlated. 
Obviously, the Edwards model captures very different transport regimes, and the dimensionality noticeably affects the properties of the system. Since the charge carriers in a rich variety of materials with strong electronic correlations, including 1D MX chains, 2D high-$T_c$ cuprates, and 3D colossal-magnetoresistive manganites, feature polaronic properties, our results contribute, at least qualitatively, to a better understanding of lattice, spin, or orbital polaron formation in these materials, where particles move through an ordered insulator. \acknowledgements M.C. and H.F. would like to thank A. Alvermann for useful discussions. The authors appreciate access to the computing facilities of the DST-FIST (phase-II) project installed in the Department of Physics, IIT Kharagpur, India. M.C. would like to acknowledge funding from the NRF South Korea (No. 2009-0079947), the POSTECH Physics BK21 fund, as well as the computational facility at the Department of Solid State Physics, Indian Association for the Cultivation of Science, Kolkata, India. B.I.M. acknowledges support from the National Research Foundation of Korea (Grant No. 2015R1A2A1A15053564). Work in Greifswald was supported by the Deutsche Forschungsgemeinschaft through SFB 652, project B5.
\section{Introduction} Quantum computation is a subject that has been strongly attracting the interest of the physics community in recent times \cite{qc-nas1,qc-nas2}, mainly because of the vast potential it has for extremely high-performance calculations. The main reason for that derives from the fact that, instead of employing a binary system as the basic computing unit, it uses the infinitely many quantum states obtained by coherent linear combinations of certain base states. Loss of quantum coherence through interaction with the environment, however, is a fatal threat for a quantum computer, since it implies complete loss of information. For this reason it is important to find systems that are capable of performing the required operations of quantum computation while being at the same time protected against the process of decoherence. Systems presenting non-Abelian statistics can provide the required stability through the mechanism known as topological quantum computation \cite{qc-nas2}. For these systems, exchanging particles in a many-particle state entangles them, thus making the stored information robust against decoherence. An intense search for many-particle systems presenting such peculiar behavior under the particle exchange -- or braiding -- operation has thus started. In this work, we consider the non-Abelian Chern-Simons field in 2+1 D, minimally coupled to a Higgs field in the adjoint representation of the SU(2) group (the CSH-theory). The Higgs field potential is such that there are two phases, according to whether the vacuum expectation value of this field vanishes or not. We study in detail the quantum magnetic vortex excitations of this system in the ordered phase. In order to accomplish the full quantization of such excitations, we apply to the CSH-theory the method of quantization of topological excitations, which is based on the concept of order-disorder duality \cite{marswi,marino-topexc1,marinoap,marino-topexc2}. 
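To see how a monodromy (a double exchange of two anyons) can act as a logic gate, consider the following toy illustration. It assumes a two-dimensional fusion space on which the monodromy acts as a rotation by the angle $4\pi s$, where $s$ is the anyon spin; this is a schematic assumption made only to fix ideas, not the monodromy matrices derived from the CSH vortex operators later in the paper. For $s=1/4$ the rotation angle is $\pi$, and the resulting operation is NOT up to a global phase:

```python
import numpy as np

def rotation(theta):
    """Rotation about the x-axis of the Bloch sphere by angle theta."""
    return np.array([[np.cos(theta / 2), -1j * np.sin(theta / 2)],
                     [-1j * np.sin(theta / 2), np.cos(theta / 2)]])

s = 0.25                   # anyon spin
theta = 4 * np.pi * s      # monodromy = double exchange, phase angle 2 * (2 pi s)
M = rotation(theta)        # toy monodromy matrix
NOT = np.array([[0, 1], [1, 0]])
print(np.allclose(M, -1j * NOT))   # True: NOT up to a global phase
```

In the same spirit, a controlled version of such a monodromy acting on a two-qubit space gives a CNOT up to phases; the actual construction for the CSH vortices requires the non-Abelian monodromy matrices obtained from their quantum correlation functions.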
We obtain, in particular, the explicit form of the creation operator of quantum excitations carrying both magnetic flux and charge, as well as their Euclidean correlation functions. These electrically charged magnetic vortices may be bosons, fermions or, more generally, anyons. We show that special self-adjoint combinations of vortices and anti-vortices possess non-Abelian statistics whenever the vortices are anyonic. Furthermore, for a specific value of the anyon spin, namely $s=1/4$, we show that we can construct the NOT and CNOT logic gates, required in quantum computation, from the corresponding monodromy matrices. Our results, based on a fully quantized approach to topological excitations, therefore show that the CSH model with an SU(2) group is an excellent example of a system exhibiting the requisites needed for the operation of a quantum computer. Related results have been reported in the literature. For instance, non-Abelian statistics has been obtained for certain quasi-particle excitations of the quantum Hall liquid \cite{qhl} and also for Ising anyons \cite{NAS,IA}. Models inspired by non-Abelian anyons have been proposed in \cite{naa}. \section{The Chern-Simons-Higgs Theory} \subsection{The Theory} Let us consider the SU(2) non-Abelian Chern-Simons theory to which we couple a Higgs field in the adjoint representation: \begin{eqnarray} \label{CSH} S_{CSH}[A,\Phi] = \frac{\kappa}{4\pi}\int d^{3}z\,\epsilon^{\mu\nu\rho}\left(A^{a}_{\mu} \partial_{\nu}A^{a}_{\rho} +\frac{2}{3}\,\epsilon^{abc}A^{a}_{\mu} A^{b}_{\nu}A^{c}_{\rho}\right) + \mbox{Tr}D_{\mu}\Phi D^{\mu}\Phi - V(\vert \Phi\vert, \eta) \end{eqnarray} where the Higgs self-interaction potential is given by \cite{hong1990multivortex,jackiw1990self,navarro2009non} \begin{eqnarray} V = (4\lambda)^{2}\mbox{Tr}\Phi^{2}(\eta^{2} + \Phi^{2})^{2}. 
\end{eqnarray} The Euler-Lagrange equation is \begin{eqnarray} \label{EL} \frac{\kappa}{2\pi}\,\epsilon^{\mu\nu\rho}D^{ac}_{\nu}A^{c}_{\rho} = J^{\mu a} \end{eqnarray} where $J^{\mu a} = -2[\Phi, D^{\mu}\Phi]^{a}$. The theory presents ordered or disordered phases, according to whether $\eta^2<0$ or $\eta^2>0$, where, respectively, we have $\langle \Phi \rangle \neq 0$ and $\langle \Phi \rangle = 0$. \subsection{Charge and Magnetic Flux Carrying Operators} We are now going to obtain the operators $\sigma$ and $\mu$, which create states carrying, respectively, charge and magnetic flux. These two quantities are given respectively by \begin{eqnarray} Q = \int d^{2}x\,J^{0 a}n^{a} \quad \mbox{and} \quad \Phi_{M} = \int d^{2}x\,B^{a}n^{a} \end{eqnarray} where $n^{a}$ is a unit vector, subject to the action of the group (for instance $n^a=\frac{\phi^a}{\vert \phi \vert}$ where $\phi^a \equiv \langle \Phi^a \rangle$) and \begin{eqnarray} J^{0a} = \frac{\kappa}{2\pi}\epsilon^{ij}D^{ac}_{i}A^{c}_{j}(x) \qquad \mbox{and}\qquad B^{a}(x) =\frac{1}{2} \epsilon^{ij}\,F_{ij}^{a} \end{eqnarray} In order to construct the local operators $\sigma$ and $\mu$, we will follow the method for the quantization of topological excitations that was developed on the basis of order-disorder duality \cite{marswi,marino-topexc1,marinoap,marino-topexc2}. According to this, the $\sigma$ and $\mu$ operators act, respectively, as order and disorder operators and therefore satisfy the corresponding dual algebra. Then, correlation functions determined by this algebra are obtained by coupling certain special external fields to the dynamical fields of the system. 
In the case of the $\mu$-operator these external fields are given by $\bar{A}^{b}_{\mu}(z;x_{1},\ldots, y_{M}) = \bar{A}^{b}_{\mu}(z; x_{1},\ldots,x_{N}) - \bar{A}^{b}_{\mu}(z; y_{1},\ldots,y_{M})$, whereas, for the $\sigma$-operator, by $\bar{C}_{\mu}^{ d}(z; x_{1},\ldots,y_{M}) = \bar{C}_{\mu}^{ d}(z; x_{1},\ldots,x_{N}) - \bar{C}_{\mu}^{ d}(z; y_{1},\ldots,y_{M})$ where \begin{eqnarray} \bar{A}^{\mu b}(z; x_{1},\ldots,x_{N}) = a\sum_{i = 1}^{N}\mbox{arg}(z - x_{i})n^{b} \int_{S_{x_{i}}} d^{2}\xi^{\mu}\,\delta^{3}(z -\xi), \label{campoexterno1} \end{eqnarray} and \begin{eqnarray} \bar{C}^{\mu d}(z; x_{1},\ldots,x_{N}) = b\sum_{i = 1}^{N}\mbox{arg}(z - x_{i})n^{d}\int_{S_{x_{i}}} d^{2}\xi^{\lambda}\,\epsilon^{\lambda\mu\nu}\partial_{\nu}\delta^{3}(z - \xi). \label{campoexterno2} \end{eqnarray} In the above expressions $d^{2}\xi^{\mu} = \frac{1}{2} \epsilon^{\mu\alpha\beta}(d \xi_\alpha d \zeta_\beta-d \xi_\beta d \zeta_\alpha)$ is the covariantized vector surface integration element, perpendicular to the integration surface $S_{x_{i}}$. This consists of the complex plane, excluding the singularities at $x_i$ and along the cut of the function $\mbox{arg}(z - x_{i})$. 
It turns out that the mixed multicorrelation function is given by the vacuum functional in the presence of these external fields: \begin{eqnarray} \label{ff} & & \langle \sigma(x^{a}_{1})\mu_{R}(x^{b}_{1})\ldots \sigma(x^{a}_{N})\mu_{R}(x^{b}_{N})\mu^{\dagger}_{R}(y^{b}_{M})\sigma^{\dagger}(y^{a}_{M}) \ldots\mu^{\dagger}_{R}(y^{b}_{1})\sigma^{\dagger}(y^{a}_{1})\rangle = \nonumber\\ & & {\cal Z}^{-1}\int {\cal D} A^{a}_{\mu}{\cal D}\Phi^{b}{\cal D}\eta {\cal D} \bar{\eta}\,\exp\Bigg\{ -\int d^{3}z\Bigg[ \frac{\kappa}{4\pi}\epsilon^{\mu\nu\rho}\Big[[A^{d}_{\mu} + \bar{A}^{d}_{\mu}(x^{b}_{1}, \ldots, y^{b}_{M}) + \bar{C}^{d}_{\mu}(x^{a}_{1}, \ldots, y^{a}_{M})] \partial_{\nu}[ \nonumber\\ & & A^{d}_{\rho} + \bar{A}^{d}_{\rho}(x^{b}_{1}, \ldots, y^{b}_{M}) + \bar{C}^{d}_{\rho}(x^{a}_{1}, \ldots, y^{a}_{M})] +\frac{2}{3}\,\epsilon^{def}[A^{d}_{\mu} + \bar{A}^{d}_{\mu}(x^{b}_{1}, \ldots, y^{b}_{M}) + \bar{C}^{d}_{\mu}(x^{a}_{1}, \ldots, y^{a}_{M})] \nonumber\\ & & [A^{e}_{\nu} + \bar{A}^{e}_{\nu}(x^{b}_{1}, \ldots, y^{b}_{M}) + \bar{C}^{e}_{\nu}(x^{a}_{1}, \ldots, y^{a}_{M})] [A^{f}_{\rho} + \bar{A}^{f}_{\rho}(x^{b}_{1}, \ldots, y^{b}_{M}) + \bar{C}^{f}_{\rho}(x^{a}_{1}, \ldots, y^{a}_{M})]\Big] +\mbox{Tr}D_{\mu}\Phi D^{\mu}\Phi \nonumber\\ & & - (4\lambda)^{2}\mbox{Tr}\Phi^{2}(\eta^{2} + \Phi^{2})^{2} +{\cal L}_{GF}[A] + {\cal L}_{gh}[A]\Bigg]\Bigg\}, \end{eqnarray} where ${\cal L}_{GF}$ and ${\cal L}_{gh}$ are respectively the gauge-fixing and ghost Lagrangians. We may obtain an equivalent expression for the $\sigma\mu$-correlation function, by shifting the $A^{\mu}$ functional integration variable in the above equation as \begin{eqnarray} A^{\mu} \rightarrow A^{\mu} - \bar{A}^{\mu} - \bar{C}^{\mu}. 
\end{eqnarray} This produces the equivalent expression \begin{eqnarray} \label{fff} & & \langle \sigma(x^{a}_{1})\mu_{R}(x^{b}_{1})\ldots \sigma(x^{a}_{N})\mu_{R}(x^{b}_{N})\mu^{\dagger}_{R}(y^{b}_{M})\sigma^{\dagger}(y^{a}_{M}) \ldots\mu^{\dagger}_{R}(y^{b}_{1})\sigma^{\dagger}(y^{a}_{1})\rangle = \nonumber\\ & & {\cal Z}^{-1}\int {\cal D} A^{a}_{\mu}{\cal D}\Phi^{b}{\cal D}\eta {\cal D} \bar{\eta}\,\exp\Bigg\{ -\frac{\kappa}{4\pi}\int d^{3}z\,\epsilon^{\mu\nu\rho}\left( A^{a}_{\mu}\partial_{\nu}A^{a}_{\rho} +\frac{2}{3}\,\epsilon^{abc}A^{a}_{\mu}A^{b}_{\nu} A^{c}_{\rho}\right) + \mbox{Tr}\bar{D}_{\mu}\Phi \bar{D}^{\mu}\Phi \nonumber\\ & & - (4\lambda)^{2}\mbox{Tr}\Phi^{2}(\eta^{2} + \Phi^{2})^{2} +{\cal L}_{GF}[A \rightarrow A^{\mu} - \bar{A}^{\mu} - \bar{C}^{\mu}] + {\cal L}_{gh}[A \rightarrow A^{\mu} - \bar{A}^{\mu} - \bar{C}^{\mu}]\Bigg\}, \end{eqnarray} where \begin{eqnarray} \bar{D}_{\mu} = 1\partial_{\mu} + [A_{\mu} - \bar{A}_{\mu} - \bar{C}_{\mu}]. \end{eqnarray} From this form of the correlation function we can extract the operators carrying, respectively, charge and magnetic flux for the non-Abelian Chern-Simons-Higgs theory. This is easily done, because, according to (\ref{fff}), a 2-point correlator of the $\mu$-operator, for instance, is expressed as a functional integral having an integrand of the form: \begin{eqnarray} \label{mu-op} \exp\left\{-\int d^3x \left [ \frac{1}{2} W_\mu W^\mu + W_\mu \bar{A}^\mu +\frac{1}{2} \bar{A}_\mu \bar{A}^\mu \right] \right\} \end{eqnarray} where $W_\mu$ is given in terms of the dynamical fields and $\bar{A}_\mu$ is given by (\ref{campoexterno1}). The last term clearly does not involve dynamical fields, being therefore a kind of renormalization factor. 
The first term is the measure weight, which is used for computing averages, namely \begin{eqnarray} \label{mu-av} \langle\mu(x) \mu^\dagger (y)\rangle = \int {\cal D} A^{a}_{\mu}{\cal D}\Phi^{b}{\cal D}\eta {\cal D}\bar{\eta} \exp\left\{-\int d^3x \left [ \frac{1}{2} W_\mu W^\mu \right] \right\} \mu(x) \mu^\dagger (y) \end{eqnarray} It follows that, apart from the (c-number) renormalization factor, the $\mu(x) \mu^\dagger(y)$ operators must correspond to the second term in (\ref{mu-op}), which has an exponent linear in $\bar{A}_\mu (z;x,y)= \bar{A}_\mu (z;x)-\bar{A}_\mu (z;y)$. The first one will give $\mu(x)$, whereas the second, $\mu^\dagger(y)$. The $\sigma$-operators, conversely, must be exponentials of those terms that are linear in the external fields $\bar{C}_\mu (z;x)$, given by (\ref{campoexterno2}). Following the previously described inspection procedure, we obtain \begin{eqnarray} \label{mu-Higgs} \mu(x_{i}) = \exp\left\{-n^{b} a\int_{x_{i}, L}^{+\infty} d\xi_{\mu}\epsilon^{\mu\alpha\nu} \partial_{\nu}\,\frac{J_{\alpha}^{b}(\xi)}{(-\Box)}\right\} \end{eqnarray} and \begin{eqnarray} \sigma(x_{i}) = \exp\left\{n^{a}b\int_{x_{i}, L}^{+\infty} d\xi^{\mu}\,J^{a}_{\mu}(\xi)\right\} \end{eqnarray} with $J^{a}_{\mu}$ given by (\ref{EL}). In the above equations, $ d\xi^{\mu}$ is the covariantized vector line integration element along the integration line $L$. This is a line going from $x_{i}$ to $\infty$ along the cut of the $\mbox{arg}(z - x_{i})$ function. We now investigate the commutation rules of the $\sigma$ and $\mu$ operators obtained above with the charge and magnetic flux operators. Let us first evaluate the equal-time commutator of $\mu$ with the magnetic flux operator. Using the field equation, we may cast the $\mu$-operator in the form \cite{emijmp} \begin{eqnarray} \label{operator-mu} \mu(x_{i}) = \exp\left\{\kappa a n^{b}\int_{x_{i}, L}^{+\infty}d\xi^{\mu}A^{b}_{\mu}(\xi)\right\}. 
\end{eqnarray} From this, we get \begin{eqnarray} [\mu(x_{i}), \Phi_{M}] &=&\frac{1}{2} \mu(x_{i})\kappa a\,n^{b}n^{c}\int d^{2}y\int_{x_{i}, L}^{+\infty} d\xi^k [A^{b}_{k}(\xi), \epsilon^{jl}F_{jl}^{c}] \nonumber\\ &=&- 2\pi a \mu(x_{i})\,n^{b}n^{c}\int d^{2}y\int_{x_{i}, L}^{+\infty} d\xi^k \partial_k^{(\xi)} \delta^{bc} \delta(\xi - y) \nonumber\\ & & +\,\mu(x_{i})\kappa a\,n^{b}n^{c}\int d^{2}y\int_{x_{i}, L}^{+\infty} d\xi^k \epsilon^{jl} \epsilon^{ced}[A^{b}_{k}(\xi), A^{e}_{j}(y)A^{d}_{l}(y)] \nonumber\\ &=& 2\pi a\mu(x_{i}) + \mu(x_{i}) \kappa a n^{b}n^{c}\int d^{2}y\int_{x_{i}, L}^{+\infty}d\xi^k \epsilon^{jl} \epsilon^{ced}\left(A^{e}_{j}(y)[A^{b}_{k}(\xi), A^{d}_{l}(y)] + [A^{b}_{k}(\xi), A^{e}_{j}(y)]A^{d}_{l}(y)\right) \nonumber\\ &=& 2\pi a\mu(x_{i}) \nonumber\\ & & + \mu(x_{i}) \kappa a n^{b}n^{c} \int d^{2}y\int_{x_{i}, L}^{+\infty} d\xi^k \epsilon^{jl} \epsilon^{ced}\Big(A^{e}_{j}(y)\frac{2\pi}{\kappa}\epsilon_{kl}\delta^{bd}\delta^{2}(\xi - y) + \frac{2\pi}{\kappa}\epsilon_{kj}\delta^{be}\delta^{2}(\xi - y)A^{d}_{l}(y)\Big) \nonumber\\ &=& 2\pi a\mu(x_{i}) \end{eqnarray} where we used the equal-time commutator $[A^{a}_{i}(x), A^{b}_{j}(y)] = 2\pi/\kappa\, \epsilon_{ij}\delta^{ab}\delta^{2}(\vec{x} - \vec{y})$ and the fact that $n^{a}\delta^{ab} n^{b} = 1$. The second term in the rhs above vanishes because it is proportional to $n^{b}n^{c} \epsilon^{bcd}$. This result shows that $\mu(x_{i})$ creates states bearing a magnetic flux $2\pi a$, being, therefore, a magnetic vortex creation operator. Notice that $2\pi$ is the quantum of magnetic flux for $\hbar=c=e=1$, hence the free parameter $a$ determines the number of flux units created by $\mu$. A natural choice would therefore be $a=1$. In order to evaluate the commutator of $\sigma$ with the matter charge operator $Q$, we must consider the current-current commutator. 
This is given in general by the current algebra relation \cite{itzykson1980quantum} \begin{eqnarray} [J^{0 a}(\vec{x}, t), J^{i b}(\vec{y}, t)] = {\cal M}\delta^{ab}\partial^{i}\delta^{(2)}( \vec{x} - \vec{y}) \end{eqnarray} where ${\cal M}$ is a functional of the spectral density of the theory. Using this, we find \begin{eqnarray} [J^{0 a}(\vec{y}, t), \sigma(\vec{x}, t)] = b {\cal M}\sigma(\vec{x}, t) \delta^{(2)}(\vec{x} - \vec{y}) \end{eqnarray} or \begin{eqnarray} [Q, \sigma(\vec{x}, t)] = b {\cal M}\sigma(\vec{x}, t), \end{eqnarray} indicating that $\sigma$ bears a charge $b {\cal M}$. From this, we can see that the choice $b^{-1} = {\cal M}$ would imply that the operator $\sigma$ carries one unit of electric charge. We see that the $\mu$ and $\sigma$ operators obtained above carry, respectively, magnetic flux and charge. We therefore expect their products will be, in general, anyon fields \cite{fw}. It is precisely out of combinations of these that we will construct the fields with non-Abelian statistics. \subsection{Broken and Symmetric phases} \subsubsection{Symmetric phase} In the symmetric phase, $\eta^{2} > 0$ in (\ref{CSH}), we have to add to (\ref{CSH}) a gauge-fixing term ${\cal L}_{GF}$ along with the corresponding ghost term ${\cal L}_{gh}$. For the gauge-fixing term, we are going to choose a Lorentz-type gauge. Then, we add to (\ref{CSH}), in the symmetric phase, the terms \begin{eqnarray} \label{gauge-symmetric} {\cal L}^{S}_{GF} &=& -\frac{\xi}{2}\,(\partial_{\mu}A^{\mu a})^{2} \nonumber\\ {\cal L}^{S}_{gh} &=& [\partial_{\mu}\bar{\eta}^{a}][D_{\mu}^{\tiny{adj}}\eta]^{a} \end{eqnarray} where $\eta^{a}$ are ghost fields and $\xi$ is the gauge parameter. \subsubsection{Broken phase} In the broken phase, $\eta^{2} < 0$ in (\ref{CSH}), the potential has a minimum at $\Phi^{a} = \phi^{a}_{0}$, with $\phi^{2}_{0} = \vert \eta^{2}\vert$. 
Taking the vacuum to point along the third direction, that is, $\phi^{a} = \phi_{0}\delta^{a3}$, we can see that the fields will be given by ($\Phi^{1}, \Phi^{2}, \chi$), with $\chi = \Phi^{3} - \phi_{0}$. The fields $\Phi_{1}$, $\Phi_{2}$, and $\chi$ have a zero vacuum expectation value. Then, in the broken phase we choose a 't Hooft gauge, in which the quadratic mixed terms involving ($A_{\mu}, \Phi$) in ${\cal L}^{B}$ disappear. These unwanted terms are removed by adding a gauge-fixing term of the form \begin{eqnarray} \label{gauge-assymmetric} {\cal L}^{B}_{GF} = -\frac{\xi}{2}\left[\partial_{\mu}A^{\mu a} +\frac{2M}{\xi}\,\epsilon^{ab3}\Phi_{b} \right]^{2}. \end{eqnarray} where $M$ is the vacuum expectation value of the Higgs field. From this gauge-fixing we have \begin{eqnarray} \label{ghosts-assymmetric} {\cal L}^{B}_{gh} = [\partial_{\mu}\bar{\eta}^{a}][D_{\mu}^{\tiny{adj}}\eta]^{a} - \bar{\eta}^{a}\left[\frac{2M}{\xi}\Phi_{3} + \frac{4M^{2}}{\xi}\right] (\delta^{ab} - \delta^{a3}\delta^{b3})\eta^{b}. \end{eqnarray} From Eqs. (\ref{CSH}) and (\ref{gauge-symmetric}), we have the Lagrangian density in the symmetric phase, ${\cal L}_{eff}^{S} = {\cal L}^{S} + {\cal L}_{GF}^{S} + {\cal L}_{gh}^{S}$, while in the broken phase, the Lagrangian density is ${\cal L}_{eff}^{B} = {\cal L}^{B} + {\cal L}_{GF}^{B} + {\cal L}_{gh}^{B}$. From the quadratic terms in ${\cal L}_{eff}^{S}$ and ${\cal L}_{eff}^{B}$ we obtain the propagators for the fields. 
In Euclidean space these are \begin{eqnarray} \Delta_{(i)}(x) &=& \int\frac{d^{3}k}{(2\pi)^{3}}\frac{e^{ik\cdot x}}{k^{2} + m_{i}^{2}},\quad i = 1, 2, 3, \nonumber\\ D^{\mu\nu}_{(1)}(x) &=& D^{\mu\nu}_{(2)}(x) = \int \frac{d^{3}k}{(2\pi)^{3}} \frac{e^{ik\cdot x}}{2(\alpha^{2}k^{2} + M^{4})}\left[\alpha\epsilon^{\mu\lambda\nu}k_{\lambda} +M^{2}\left(\delta^{\mu\nu} - \frac{(\xi - 2\alpha^{2}/M^{2})k^{\mu}k^{\nu}}{(\xi k^{2} + 2M^{2})}\right)\right], \nonumber\\ D^{\mu\nu}_{(3)}(x) &=& \int \frac{d^{3}k}{(2\pi)^{3}} e^{ik\cdot x}\left[\frac{1}{2\alpha}\epsilon^{\mu\lambda\nu}\frac{k_{\lambda}}{k^{2}} + \frac{1}{\xi}\frac{k^{\mu}k^{\nu}}{k^{4}}\right], \nonumber\\ \Delta_{gh}^{(i)}(x) &=& \int\frac{d^{3}k}{(2\pi)^{3}}\frac{e^{ik\cdot x}}{k^{2} + m_{gh_{(i)}}^{2}},\quad i = 1, 2, 3, \label{propagators} \end{eqnarray} where $\alpha = \kappa/4\pi$ and $\Delta_{(i)}$ are the propagators for the Higgs-field components, $\Phi_{1}$, $\Phi_{2}$, and $\Phi_{3}$ ($\chi$ in the broken phase), $D^{\mu\nu}_{(a)}(x)$ are the propagators for the gauge fields $A^{a}_{\mu}$, and $\Delta^{(i)}_{gh}(x)$ are the propagators for the ghost-field components. In the symmetric phase we have $M = 0$, $m_{i}^{2} = (4\lambda)^{2}\vert \eta^{2}\vert^{2}$, and $m^{2}_{gh_{(i)}} = 0$ ($i = 1, 2, 3$). In the broken phase we have $m_{1}^{2} = m_{2}^{2} = 4M^{2}/\xi$, $m_{3}^{2} = m_{\chi}^{2}$, $m_{gh_{(1)}}^{2} = m_{gh_{(2)}}^{2} = 4M^{2}/\xi$, and $m^{2}_{gh_{(3)}}= 0$. \section{Vortex Correlation Functions} \subsection{Introducing the external field in ${\cal L}_{eff} = {\cal L} + {\cal L}_{GF} + {\cal L}_{gh}$} Let us write the exponent in (\ref{fff}) as $S_{eff} = \int d^{3}z[{\cal L}^{Eucl}_{eff} + \bar{{\cal L}}^{Eucl}_{eff}(\bar{A}_{\mu} + \bar{C}_{\mu})]$, where $\bar{{\cal L}}^{Eucl}_{eff}(\bar{A}_{\mu} + \bar{C}_{\mu})$ contains all the dependence on the external fields $\bar{A}_{\mu}(z; x_{1},\ldots, y_{M})$ and $\bar{C}_{\mu}(z; x_{1},\ldots, y_{M})$. 
In the symmetric phase, from (\ref{fff}) and (\ref{gauge-symmetric}), we obtain \begin{eqnarray} \hspace{-0.2cm} \bar{{\cal L}}^{S}_{eff}(\bar{A}_{\mu} + \bar{C}_{\mu}) &=& -2\epsilon^{abc}(\bar{A}_{\mu}^{b} + \bar{C}_{\mu}^{b})\Phi^{c}\partial_{\mu}\Phi^{a} + (\bar{A}_{\mu}^{a} + \bar{C}_{\mu}^{a})(\bar{A}_{\mu}^{a} + \bar{C}_{\mu}^{a})\Phi^{b}\Phi^{b} - 2(\bar{A}_{\mu}^{a} + \bar{C}_{\mu}^{a})A_{\mu}^{a}\Phi^{b}\Phi^{b} \nonumber\\ & & - (\bar{A}_{\mu}^{a} + \bar{C}_{\mu}^{a})(\bar{A}_{\mu}^{b} + \bar{C}_{\mu}^{b})\Phi^{a}\Phi^{b} + 2 (\bar{A}_{\mu}^{a} + \bar{C}_{\mu}^{a})A_{\mu}^{b}\Phi^{a}\Phi^{b} -\frac{\xi}{2} [[\partial_{\mu}(\bar{A}_{\mu}^{a} + \bar{C}_{\mu}^{a})]^{2} \nonumber\\ & & - 2\partial_{\mu}(\bar{A}_{\mu}^{a} + \bar{C}_{\mu}^{a})\partial_{\nu}A_{\nu}^{a}] -\bar{\eta}^{a}[\epsilon^{abc}\partial^{\mu}(\bar{A}_{\mu}^{c} + \bar{C}_{\mu}^{c}) + \epsilon^{abc}(\bar{A}_{\mu}^{c} + \bar{C}_{\mu}^{c})\partial^{\mu}]\eta^{b} \end{eqnarray} while in the broken phase, we obtain \begin{eqnarray} \hspace{-0.8cm} \bar{{\cal L}}^{B}_{eff}(\bar{A}_{\mu} + \bar{C}_{\mu}) &=&\frac{ M^{2}}{2}[(\bar{A}^{\mu}_{1} + \bar{C}^{\mu}_{1})^{2} - 2A^{\mu}_{1}(\bar{A}^{\mu}_{1} + \bar{C}^{\mu}_{1}) + (\bar{A}^{\mu}_{2} + \bar{C}^{\mu}_{2})^{2} - 2A^{\mu}_{2}(\bar{A}^{\mu}_{2} + \bar{C}^{\mu}_{2})] \nonumber\\ & & - [(\bar{A}^{\mu}_{1} + \bar{C}^{\mu}_{1})(\Phi_{2}\partial_{\mu}\chi - \chi\partial_{\mu}\Phi_{2}) + (\bar{A}^{\mu}_{2} + \bar{C}^{\mu}_{2})(\chi\partial_{\mu}\Phi^{1} - \Phi^{1}\partial_{\mu}\chi) \nonumber\\ & & + (\bar{A}^{\mu}_{3} + \bar{C}^{\mu}_{3})(\Phi^{1}\partial_{\mu}\Phi^{2} - \Phi^{2}\partial_{\mu}\Phi^{1})] + [(\bar{A}^{\mu}_{1} + \bar{C}^{\mu}_{1})^{2} - 2A^{\mu}_{1}(\bar{A}^{\mu}_{1} + \bar{C}^{\mu}_{1})] (\Phi_{2}^{2} \nonumber\\ & & + \chi^{2} + 2\phi_{0}\chi) + [(\bar{A}^{\mu}_{2} + \bar{C}^{\mu}_{2})^{2} - 2A^{\mu}_{2}(\bar{A}^{\mu}_{2} + \bar{C}^{\mu}_{2})](\Phi_{1}^{2} + \chi^{2} + 2\phi_{0}\chi) \nonumber\\ & & 
+[(\bar{A}^{\mu}_{3} + \bar{C}^{\mu}_{3})^{2} - 2A^{\mu}_{3}(\bar{A}^{\mu}_{3} + \bar{C}^{\mu}_{3})] (\Phi^{2}_{1} + \Phi^{2}_{2}) - 2[(\bar{A}^{\mu}_{1} + \bar{C}^{\mu}_{1})(\bar{A}^{\mu}_{2} + \bar{C}^{\mu}_{2}) \nonumber\\ & & - A^{\mu}_{1}(\bar{A}^{\mu}_{2} + \bar{C}^{\mu}_{2}) - A^{\mu}_{2}(\bar{A}^{\mu}_{1} + \bar{C}^{\mu}_{1})] \Phi_{1}\Phi_{2} - 2[(\bar{A}^{\mu}_{1} + \bar{C}^{\mu}_{1})(\bar{A}^{\mu}_{3} + \bar{C}^{\mu}_{3}) \nonumber\\ & & - A^{\mu}_{1}(\bar{A}^{\mu}_{3} + \bar{C}^{\mu}_{3}) - A^{\mu}_{3}(\bar{A}^{\mu}_{1} + \bar{C}^{\mu}_{1})] (\Phi_{1}\chi + \phi_{0}\Phi_{1}) - 2[(\bar{A}^{\mu}_{2} + \bar{C}^{\mu}_{2})(\bar{A}^{\mu}_{3} + \bar{C}^{\mu}_{3}) \nonumber\\ & & - A^{\mu}_{2}(\bar{A}^{\mu}_{3} + \bar{C}^{\mu}_{3}) - A^{\mu}_{3}(\bar{A}^{\mu}_{2} + \bar{C}^{\mu}_{2})] (\Phi_{2}\chi + \phi_{0}\Phi_{2}) - \frac{\xi}{2} [[\partial_{\mu}(\bar{A}_{\mu}^{a} + \bar{C}_{\mu}^{a})]^{2} \nonumber\\ & & - 2\partial_{\mu}(\bar{A}_{\mu}^{a} + \bar{C}_{\mu}^{a})\partial_{\nu}(\bar{A}_{\nu}^{a} + \bar{C}_{\nu}^{a})] -\bar{\eta}^{a}[\epsilon^{abc}\partial^{\mu}(\bar{A}_{\mu}^{c} + \bar{C}_{\mu}^{c}) + \epsilon^{abc}(\bar{A}_{\mu}^{c} + \bar{C}_{\mu}^{c})\partial^{\mu}]\eta^{b}. \end{eqnarray} From $\bar{{\cal L}}^{S}_{eff}(\bar{A}_{\mu} + \bar{C}_{\mu})$ and $\bar{{\cal L}}^{B}_{eff}(\bar{A}_{\mu} + \bar{C}_{\mu})$, we can extract the Feynman rules involving the external field. The relevant vertices are shown in Fig. \ref{Figura-1}. \begin{figure}[h!!]
\begin{center} \begin{picture}(440,130)(0,0) \Gluon(50,20)(20,50){4}{5} \Vertex(50,20){4} \Gluon(80,50)(50,20){4}{5} \put(85,45){$a$} \put(10,45){$a$} \put(100,30){$-\frac{\xi}{2}[\partial_{\mu}(\bar{A}^{\mu a} + \bar{C}^{\mu a})]^{2}$} \put(241,105){$a$} \Gluon(280,80)(250,110){4}{5} \BCirc(280,80){4} \Gluon(310,110)(280,80){4}{5} \put(315,105){$a$} \put(330,90){$-M^{2}\,(\bar{A}^{\mu a} + \bar{C}^{\mu a})^{2}$} \Gluon(50,80)(20,110){4}{5} \Vertex(50,80){4} \Photon(50,80)(90,80){4}{5} \put(10,105){$a$} \put(81,68){$a$} \put(100,90){$\xi \partial_{\mu}(\bar{A}^{\mu a} + \bar{C}^{\mu a})\partial_{\nu}A^{\nu a}$} \Gluon(280,20)(250,50){4}{5} \BCirc(280,20){4} \Photon(280,20)(320,20){4}{5} \put(241,45){$a$} \put(311,8){$a$} \put(330,30){$2M^{2}A^{\mu a}(\bar{A}^{\mu a} + \bar{C}^{\mu a})$} \end{picture} {\caption{\small{Vertices involving the external field $\bar{A}^{\mu a} + \bar{C}^{\mu a}$ (curly line) relevant for the evaluation of the mixed multi-correlation function. Those proportional to $M^2$ only occur in the broken phase.}} \label{Figura-1}} \end{center} \end{figure} \subsection{The mixed correlation function} The mixed correlation function can be expressed as \begin{eqnarray} \label{exponencial} \langle \sigma(x^{a}_{1})\mu(x^{b}_{1})\ldots \sigma(x^{a}_{N})\mu(x^{b}_{N})\sigma^{\dagger}(y^{a}_{N})\mu^{\dagger}(y^{b}_{N}) \ldots\sigma^{\dagger}(y^{a}_{1})\mu^{\dagger}(y^{b}_{1})\rangle = e^{-\Lambda(x^{a}_{1}, x^{b}_{1},\ldots, x^{a}_{N},x^{b}_{N}; y^{a}_{1}, y^{b}_{1},\ldots, y^{a}_{M},y^{b}_{M})}. \end{eqnarray} It has been shown \cite{marino1992mass} that only the two-leg graphs containing the external field $\bar{A}_{\mu} + \bar{C}_{\mu}$ will contribute to the large distance behavior of $\Lambda(x^{a}_{1}, x^{b}_{1},\ldots, x^{a}_{N},x^{b}_{N}; y^{a}_{1}, y^{b}_{1},\ldots, y^{a}_{M},y^{b}_{M})$. At tree level the relevant graphs are depicted in Fig.
\ref{figura-5}. In the symmetric phase only the first two graphs in Fig.~\ref{figura-5} would contribute. Their sum, however, actually vanishes, as we can see by using the gauge field propagators given in Eq. (\ref{propagators}). This result leads us to the conclusion that in the symmetric phase we have \begin{eqnarray} \langle \sigma(x^{a}_{1})\mu_{R}(x^{b}_{1})\ldots \sigma(x^{a}_{N})\mu_{R}(x^{b}_{N})\mu^{\dagger}_{R}(y^{b}_{M})\sigma^{\dagger}(y^{a}_{M}) \ldots\mu^{\dagger}_{R}(y^{b}_{1})\sigma^{\dagger}(y^{a}_{1})\rangle_{S} \stackrel{\vert \mathbf{x} - \mathbf{y}\vert \rightarrow \infty}{\sim} 1. \end{eqnarray} \begin{figure}[h!!] \begin{center} \begin{picture}(400,130)(0,0) \put(10,105){$a$} \Gluon(50,80)(20,110){4}{5} \Vertex(50,80){4} \Gluon(80,110)(50,80){4}{5} \put(85,105){$a$} \put(110,85){$+$} \put(135,105){$a$} \Gluon(175,80)(145,110){4}{5} \Vertex(175,80){4} \Photon(175,80)(225,80){4}{5} \Gluon(255,110)(225,80){4}{5} \Vertex(225,80){4} \put(260,105){$a$} \put(285,85){$+$} \put(306,105){$a$} \Gluon(345,80)(315,110){4}{5} \BCirc(345,80){4} \Gluon(375,110)(348,80){4}{5} \put(380,105){$a$} \put(35,30){$+$} \put(60,45){$a$} \Gluon(100,20)(70,50){4}{5} \Vertex(100,20){4} \Photon(100,20)(150,20){4}{5} \Gluon(180,50)(150,20){4}{5} \BCirc(150,20){4} \put(185,45){$a$} \put(210,30){$+$} \put(232,45){$a$} \Gluon(270,20)(240,50){4}{5} \BCirc(270,20){4} \Photon(274,20)(320,20){4}{5} \Gluon(350,50)(320,20){4}{5} \BCirc(320,20){4} \put(355,45){$a$} \end{picture} {\caption{\small{Leading graphs contributing to the long distance behavior of (\ref{exponencial}) in the non-Abelian CS-H theory. In the symmetric phase only the first two appear.}} \label{figura-5}} \end{center} \end{figure} On the other hand, the last three graphs of Fig. \ref{figura-5} only occur in the broken symmetry phase where the Higgs field possesses a nonzero vacuum expectation value, $M$.
From these, using the gauge propagators $D^{\mu\nu}_{de}(z)$ given in (\ref{propagators}) we can write explicitly \begin{eqnarray} & & \Lambda(x^{a}_{1}, x^{b}_{1},\ldots, x^{a}_{N},x^{b}_{N}; y^{a}_{1}, y^{b}_{1},\ldots, y^{a}_{M},y^{b}_{M}) = -M^{4}\sum_{a = 1}^{2}\int d^{3}zd^{3}z'\, [\bar{A}^{d}_{\mu}(z; x^{b}_{1},\ldots, y^{b}_{M}) \label{26} \\ & & \hspace{-0.8cm} + \bar{C}^{d}_{\mu}(z; x^{a}_{1},\ldots, y^{a}_{M})]\Big[i\alpha\epsilon^{ \mu\lambda\nu}\partial_{\lambda} +\frac{\alpha^{2}}{M^{2}}\left(-\Box\delta^{\mu\nu} + \partial^{\mu}\partial^{\nu}\right)\Big]F(z - z') [\bar{A}^{d}_{\mu}(z; x^{b}_{1},\ldots, y^{b}_{M}) + \bar{C}^{d}_{\mu}(z; x^{a}_{1},\ldots, y^{a}_{M})], \nonumber \end{eqnarray} where \begin{eqnarray} F(z - z') = \int\frac{d^{3}k}{(2\pi)^{3}}\frac{e^{ik(z - z')}}{\alpha^{2}k^{2} + M^{4}}. \end{eqnarray} Expression (\ref{26}) has been evaluated in \cite{tese} giving \begin{eqnarray} & & \langle \sigma(x^{a}_{1})\mu_{R}(x^{b}_{1})\ldots \sigma(x^{a}_{N})\mu_{R}(x^{b}_{N})\mu^{\dagger}_{R}(y^{b}_{N})\sigma^{\dagger}(y^{a}_{N}) \ldots\mu^{\dagger}_{R}(y^{b}_{1})\sigma^{\dagger}(y^{a}_{1})\rangle \stackrel{\vert \mathbf{x} - \mathbf{y}\vert \rightarrow \infty}{\sim} \nonumber\\ & & \exp\Bigg\{ - \pi a^{2}M^{2}\sum_{i, j= 1}^{N,N}\left(\vert x^{b}_{i} - y^{a}_{j}\vert - \vert x^{b}_{i} - x^{a}_{j}\vert - \vert y^{b}_{i} - y^{a}_{j}\vert + \vert y^{b}_{i} - x^{a}_{j}\vert\right) \nonumber\\ & & -4\pi iabM^{2}\sum_{i, j = 1}^{N,N}[\mbox{arg}({\bf x}^{b}_{i} - {\bf y}^{a}_{j}) + \mbox{arg}({\bf y}^{b}_{i} - {\bf x}^{a}_{j}) - \mbox{arg}({\bf x}^{b}_{i} - {\bf x}^{a}_{j}) - \mbox{arg}({\bf y}^{b}_{i} - {\bf y}^{a}_{j})]\Bigg\} \label{mcf} \end{eqnarray} Now, we can introduce a composite operator $\Psi(x)$ bearing charge and magnetic flux, through \begin{eqnarray} & & \Psi(x) = \lim_{x^{a}, x^{b}\rightarrow x} \sigma(x^{a})\mu(x^{b}) \exp\Big\{-4\pi iabM^{2}\mbox{arg}({\bf x}^{b} - {\bf x}^{a})\Big\}. 
\nonumber \end{eqnarray} From this and (\ref{mcf}) we obtain the large distance behavior of the composite operator correlation function: \begin{eqnarray} \label{phiphi} & & \langle \Psi(x_{1})\ldots\Psi(x_{N})\Psi^{\dagger}(y_{N})\ldots\Psi^{\dagger}(y_{1})\rangle \stackrel{\vert \mathbf{x} - \mathbf{y}\vert \rightarrow \infty}{\sim} \exp\Bigg\{ -\pi a^{2}M^{2}\sum_{i, j= 1}^{N}\left(\vert x_{i} - y_{j}\vert + \vert y_{i} - x_{j}\vert\right) \nonumber\\ & & + \pi a^{2}M^{2}\sum_{i \neq j= 1}^{N}\left( \vert x_{i} - x_{j}\vert + \vert y_{i} - y_{j}\vert\right) -4\pi iabM^{2} \sum_{i, j = 1}^{N}[ \mbox{arg}({\bf x}_{i} - {\bf y}_{j}) + \mbox{arg}({\bf y}_{i} - {\bf x}_{j})] \nonumber\\ & & \hspace{6.5cm} + 4\pi iabM^{2} \sum_{i \neq j = 1}^{N}[\mbox{arg}({\bf x}_{i} - {\bf x}_{j}) + \mbox{arg}({\bf y}_{i} - {\bf y}_{j})]\Bigg\} \end{eqnarray} where $\mbox{arg(z)} = \mbox{Arg(z)}+ 2\pi n $ and we choose the cuts of the ${\mbox{Arg}}$ functions as $-\pi \leqslant \mbox{Arg(z)} < \pi$ and $0 \leqslant \mbox{Arg(-z)} < 2\pi$, in such a way that we may write $ \mbox{Arg(- z)} = \mbox{Arg(z)} + \pi$. The composite field $\Psi$ carries magnetic flux and charge, which are both conserved quantities in the broken phase, hence the only non-vanishing functions are the ones with the same number of operators and their Hermitean conjugates. This selection rule appears naturally in the calculation leading to (\ref{phiphi}) \cite{tese,marinoap}. The first term in (\ref{phiphi}) produces the exponential decay of the vortex correlation function. This implies the energy spectrum of the quantum vortices $\Psi$ possesses a gap proportional to the vacuum expectation value of the Higgs field squared: $M^2$. The fact that the above two-point function vanishes asymptotically at large distances means the quantum vortex states $|\Psi \rangle$ are orthogonal to the vacuum. They are also orthogonal to isolated charge and magnetic flux states $|\sigma \rangle$ and $|\mu \rangle$.
These properties indicate that the $\Psi$-states are genuine and stable quantum excitations of the system. \subsection{Analytic Properties of the Correlation Functions} The analytic structure of the euclidean correlation functions is closely related to the spin value. Except for the case of bosons, the euclidean correlators are multivalued, each sheet corresponding to a different ordering of operators in the vacuum expectation values that correspond to these euclidean functions \cite{marswi}. Observe that the composite field $\Psi$ (Euclidean) correlation function above is multivalued whenever $4\pi abM^{2} $ is not an integer. The composite field $\Psi$, indeed, has spin $s=4\pi abM^{2} $ and, as expected, is in general an anyon. Notice that $\Phi_M=2\pi a$ and $Q =b {\cal M}$ are respectively the magnetic flux and the charge carried by the vortex operator. The spin, consequently, can be written in a more physical way as $s= 2\Phi_M QM^{2} {\cal M}^{-1}$. The values of the euclidean functions on adjacent sheets differ by a phase $e^{i 2 \pi s}$, hence, we have the following property for the real time vacuum expectation values of fields \cite{marswi}. \begin{eqnarray} \label{braids} \langle \Psi(x)\Psi^{\dagger}(y)\rangle^{(1)} = e^{i 2 \pi s} \langle \Psi^{\dagger}(y) \Psi(x)\rangle^{(1)} = e^{i 4 \pi s} \langle \Psi(x)\Psi^{\dagger}(y)\rangle^{(2)} = e^{i 6 \pi s} \langle \Psi^{\dagger}(y) \Psi(x)\rangle^{(2)} = ... \end{eqnarray} Notice first that whenever $2 s = $ {\it integer} (bosons or fermions), $e^{i 2 \pi s} = \pm 1$. It follows that each of the vev's $\langle \Psi(x)\Psi^{\dagger}(y)\rangle $ and $\langle \Psi^{\dagger}(y) \Psi(x)\rangle$ is univalued in this case. Notice now that, conversely, in the case of anyons, $2 s \neq$ {\it integer} and consequently the previous vev's are themselves multivalued, the particular sheet being indicated by the superscript.
Observe that the values of the function in two adjacent sheets differ by a factor $e^{i 4 \pi s}$. This means the vev's of field operators have a branch cut, the number of sheets being determined by the spin. For $ s = 1/N$ ($2 s \neq$ {\it integer}), for instance, there are $N$ sheets. For irrational spin, the vev's above would have an infinite number of sheets. This analytic structure will be the basis for our construction of states with Non-Abelian statistics. \section{Fields with Non-Abelian Statistics} \subsection{$2-$point correlation functions} In this section, we show how to construct states with non-Abelian statistics out of the composite anyon vortex fields. We start by considering the $2-$point correlation function of this field, namely \begin{eqnarray} \label{doispontos} \langle \Psi(x)\Psi^{\dagger}(y)\rangle &=& \exp\left\{ -4\pi iabM^{2}[\mbox{Arg}(\bf{x} - \bf{y}) + \mbox{Arg}(\bf{y} - \bf{x})]\right\}\,e^{-D_{2}} \nonumber\\ &=& e^{-2is\mbox{Arg}(\bf{x} - \bf{y})}e^{-is\pi}\,e^{-D_{2}} \end{eqnarray} in which $s = 4\pi abM^{2}$ and $D_{2} = 2\pi a^{2}M^{2}\vert x - y\vert $. We now introduce the fields that will present non-Abelian statistics. For that purpose, let us consider the combined fields \begin{eqnarray} \label{29} \Psi_{\pm}(x) = \frac{1}{2}(\Psi(x) \pm \Psi^{\dagger}(x)), \end{eqnarray} which are, respectively, self-adjoint and anti-self-adjoint. Using the fact that $\langle \Psi(x)\Psi(y)\rangle = \langle \Psi^{\dagger}(x)\Psi^{\dagger}(y)\rangle = 0$ we conclude that their correlation functions satisfy \begin{eqnarray} \langle\Psi_{+}(x)\Psi_{+}(y)\rangle = -\langle \Psi_{-}(x)\Psi_{-}(y)\rangle = \frac{1}{4}\left(\langle \Psi(x)\Psi^{\dagger}(y)\rangle + \langle \Psi^{\dagger}(x)\Psi(y)\rangle\right) \end{eqnarray} and \begin{eqnarray} \langle \Psi_{-}(x)\Psi_{+}(y)\rangle = - \langle \Psi_{+}(x)\Psi_{-}(y)\rangle = \frac{1}{4}\left(\langle \Psi(x)\Psi^{\dagger}(y)\rangle - \langle \Psi^{\dagger}(x)\Psi(y)\rangle\right).
\end{eqnarray} Using Eq. (\ref{doispontos}) we can write \begin{eqnarray} \label{mm} \langle \Psi_{+}(x)\Psi_{+}(y)\rangle &=& \frac{1}{4}\left[e^{-2is\mbox{Arg}(\bf{x} - \bf{y})}e^{-is\pi}e^{-D_{2}} + e^{-2is\mbox{Arg}(\bf{y} - \bf{x})}e^{-is\pi}e^{-D_{2}}\right] \nonumber\\ \langle \Psi_{-}(x)\Psi_{+}(y)\rangle &=& \frac{1}{4}\left[e^{-2is\mbox{Arg}(\bf{x} - \bf{y})}e^{-is\pi}e^{-D_{2}} - e^{-2is\mbox{Arg}(\bf{y} - \bf{x})}e^{-is\pi}e^{-D_{2}}\right] \end{eqnarray} Let us see what the braiding properties of the states associated with the fields $\Psi_\pm$ are. Using the properties of $\mbox{Arg (z)}$ in (\ref{mm}) we obtain \begin{eqnarray} & & \left[e^{-2is\mbox{Arg}(\bf{x} - \bf{y})}e^{-is\pi}e^{-D_{2}} + e^{-2is\mbox{Arg}(\bf{y} - \bf{x})}e^{-is\pi}e^{-D_{2}}\right] \stackrel{\longrightarrow}{x \leftrightarrow y} \Big[e^{-2\pi is}e^{-2is\mbox{Arg}(\bf{x} - \bf{y})}e^{-is\pi}e^{-D_{2}} \nonumber\\ & & \hspace{10.0cm} +\,\,e^{2\pi is}e^{-2is\mbox{Arg}(\bf{y} - \bf{x})}e^{-is\pi}e^{-D_{2}}\Big] \nonumber \end{eqnarray} and \begin{eqnarray} & & \left[e^{-2is\mbox{Arg}(\bf{x} - \bf{y})}e^{-is\pi}e^{-D_{2}} - e^{-2is\mbox{Arg}(\bf{y} - \bf{x})}e^{-is\pi}e^{-D_{2}}\right] \stackrel{\longrightarrow}{x \leftrightarrow y} \Big[e^{-2\pi is}e^{-2is\mbox{Arg}(\bf{x} - \bf{y})}e^{-is\pi}e^{-D_{2}} \nonumber\\ & & \hspace{10.0cm} -\,\,e^{2\pi is}e^{-2is\mbox{Arg}(\bf{y} - \bf{x})}e^{-is\pi}e^{-D_{2}}\Big]. \nonumber \end{eqnarray} Observe that, whenever the operator $\Psi$ is bosonic or fermionic, the phases generated by braiding the $\Psi_{\pm}$-particles are identical, i. e., $e^{2\pi is} = e^{-2\pi is} = \pm 1$. This implies \begin{eqnarray} \langle \Psi_{\pm}(y)\Psi_{\pm}(x)\rangle = e^{2\pi is} \langle \Psi_{\pm}(x)\Psi_{\pm}(y)\rangle. \nonumber \end{eqnarray} In this case, the above expression shows that whenever the charged vortex operator $\Psi$ is bosonic or fermionic, then the self-adjoint operators $\Psi_{\pm}$ are also bosonic or fermionic.
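As a quick sanity check of this abelian limit, the following sketch (illustrative code, not part of the original derivation) verifies numerically that for $2s$ integer the two exchange phases $e^{\pm 2\pi i s}$ coincide and equal $\pm 1$, so the two terms of the correlators above are rescaled by a common sign, while for a generic anyonic spin the two phases differ.

```python
import cmath

def exchange_phases(s):
    # phases picked up by the two terms of the two-point functions under x <-> y
    return cmath.exp(-2j * cmath.pi * s), cmath.exp(2j * cmath.pi * s)

# bosonic (s = 1) and fermionic (s = 1/2) cases: the two phases coincide
for s, sign in [(1.0, 1.0), (0.5, -1.0)]:
    p_minus, p_plus = exchange_phases(s)
    assert abs(p_minus - p_plus) < 1e-12   # identical phases: abelian statistics
    assert abs(p_minus - sign) < 1e-12     # and equal to +1 (boson) or -1 (fermion)

# anyonic case (2s not an integer): the two phases differ, so the terms mix
p_minus, p_plus = exchange_phases(0.25)
assert abs(p_minus - p_plus) > 1.0
```

For $s = 1/4$ the two phases are $\mp i$, which is precisely the situation exploited below.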
On the other hand, when the vortex field is an anyon, namely, for $2s \neq $ {\it integer}, the $\Psi_{\pm}(x)$ fields have non-Abelian braiding given by \begin{eqnarray} \langle \Psi_{+}(y)\Psi_{+}(x)\rangle &=& \frac{1}{4}\Big[\alpha^{*}\langle (\Psi_{+}(x) + \Psi_{-}(x))(\Psi_{+}(y) - \Psi_{-}(y))\rangle \nonumber\\ & & \quad +\,\, \alpha\langle (\Psi_{+}(x) - \Psi_{-}(x))(\Psi_{+}(y) + \Psi_{-}(y))\rangle\Big] \nonumber\\ &=& \frac{1}{2}\Big[(\alpha + \alpha^{*})\langle \Psi_{+}(x)\Psi_{+}(y)\rangle - (\alpha - \alpha^{*})\langle \Psi_{-}(x)\Psi_{+}(y)\rangle\Big] \nonumber\\ &=& \cos \delta \langle \Psi_{+}(x)\Psi_{+}(y)\rangle - i\sin\delta\langle \Psi_{-}(x)\Psi_{+}(y)\rangle \end{eqnarray} and \begin{eqnarray} \langle \Psi_{-}(y)\Psi_{+}(x)\rangle &=& \frac{1}{4}\Big[\alpha^{*}\langle (\Psi_{+}(x) + \Psi_{-}(x))(\Psi_{+}(y) - \Psi_{-}(y))\rangle \nonumber\\ & & \quad \,\, -\,\, \alpha\langle (\Psi_{+}(x) - \Psi_{-}(x))(\Psi_{+}(y) + \Psi_{-}(y))\rangle\Big] \nonumber\\ &=& \frac{1}{2}\Big[-(\alpha - \alpha^{*})\langle \Psi_{+}(x)\Psi_{+}(y)\rangle + (\alpha + \alpha^{*})\langle \Psi_{-}(x)\Psi_{+}(y)\rangle\Big] \nonumber\\ &=& -i\sin\delta\langle \Psi_{+}(x)\Psi_{+}(y)\rangle + \cos\delta\langle \Psi_{-}(x)\Psi_{+}(y)\rangle \end{eqnarray} where in the above expression $\alpha = e^{i \delta}$ and $\delta = 2\pi s$.
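The mixing above rests on the elementary identity $e^{-i\delta}A + e^{i\delta}B = \cos\delta\,(A+B) - i\sin\delta\,(A-B)$, applied to the two terms of (\ref{mm}). The sketch below (illustrative code with arbitrary complex amplitudes, not part of the original text) checks this identity and verifies that the resulting $2\times 2$ mixing matrix is unitary for any $\delta$.

```python
import numpy as np

rng = np.random.default_rng(0)
delta = 2 * np.pi * 0.3                    # delta = 2*pi*s for an arbitrary anyonic spin s
# stand-ins for the two terms e^{-2is Arg(x-y)}... and e^{-2is Arg(y-x)}... of (mm)
A, B = rng.normal(size=2) + 1j * rng.normal(size=2)

# identity: e^{-i delta} A + e^{i delta} B = cos(delta)(A + B) - i sin(delta)(A - B)
lhs = np.exp(-1j * delta) * A + np.exp(1j * delta) * B
rhs = np.cos(delta) * (A + B) - 1j * np.sin(delta) * (A - B)
assert abs(lhs - rhs) < 1e-12

def rho(delta):
    # 2x2 matrix mixing <Psi+ Psi+> and <Psi- Psi+> under the exchange x <-> y
    return np.array([[np.cos(delta), -1j * np.sin(delta)],
                     [-1j * np.sin(delta), np.cos(delta)]])

assert np.allclose(rho(delta).conj().T @ rho(delta), np.eye(2))   # unitarity

# delta = pi/2 (s = 1/4): the mixing matrix reduces to -iX, a NOT gate up to a phase
assert np.allclose(rho(np.pi / 2), np.array([[0, -1j], [-1j, 0]]))
```

Note that for $\sin\delta \neq 0$ the off-diagonal entries are nonzero, so the two correlators genuinely mix rather than acquiring a common phase.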
We conclude that when the composite vortex field $\Psi$ is an anyon it follows that the $\Psi_{\pm}$ fields will have non-Abelian braiding given by \begin{eqnarray} \left( \begin{array}{c} \langle \Psi_{+}(y)\Psi_{+}(x)\rangle \\ \langle \Psi_{-}(y)\Psi_{+}(x)\rangle \\ \end{array} \right) = \left( \begin{array}{cc} \cos\delta & -i\sin\delta \\ -i\sin\delta & \cos\delta \\ \end{array} \right) \left(\begin{array}{c} \langle \Psi_{+}(x)\Psi_{+}(y)\rangle \\ \langle \Psi_{-}(x)\Psi_{+}(y)\rangle \\ \end{array} \right) \nonumber \end{eqnarray} The braiding matrix and its Hermitean adjoint \begin{eqnarray} \rho(M)=\left(\begin{array}{cc} \cos\delta & -i\sin\delta \\ -i\sin\delta & \cos\delta \\ \end{array} \right) \qquad \rho(M)^{\dagger} = \left(\begin{array}{cc} \cos\delta & i\sin\delta \\ i\sin\delta & \cos\delta \\ \end{array} \right) \nonumber \end{eqnarray} satisfy $\rho(M)^{\dagger}\rho(M) = 1$, being therefore unitary. We now come to one of our most interesting results: observe that a NOT gate can be obtained out of the braiding matrix $M$ (up to a factor of $-i$) by making $\delta = \pi/2$ or, equivalently, $s=1/4$, namely, \begin{eqnarray} M = -i X, \qquad \mbox{in which}\qquad X = \left( \begin{array}{cc} 0 & 1 \\ 1 & 0 \\ \end{array} \right) \end{eqnarray} \subsection{$4$-point correlation functions} Let us consider here the 4-point function of the vortex operator in the broken phase. From Eq.
(\ref{phiphi}) we can extract the following expression \begin{eqnarray} \langle \Psi(x_{1})\Psi(x_{2})\Psi^{\dagger}(x_{3})\Psi^{\dagger}(x_{4})\rangle &=& \langle \Psi^{\dagger}(x_{1})\Psi^{\dagger}(x_{2})\Psi(x_{3})\Psi(x_{4})\rangle \nonumber\\ &=&\exp\Big\{2is[\mbox{Arg}(\vec{x}_{1} - \vec{x}_{2}) - \mbox{Arg}(\vec{x}_{1} - \vec{x}_{3}) - \mbox{Arg}(\vec{x}_{1} - \vec{x}_{4}) \nonumber\\ & & - \mbox{Arg}(\vec{x}_{2} - \vec{x}_{3}) - \mbox{Arg}(\vec{x}_{2} - \vec{x}_{4}) + \mbox{Arg}(\vec{x}_{3} - \vec{x}_{4})] - 2\pi is + C_{4a}\Big\} \nonumber\\ \langle \Psi^{\dagger}(x_{1})\Psi(x_{2})\Psi^{\dagger}(x_{3})\Psi(x_{4})\rangle &=& \langle \Psi(x_{1})\Psi^{\dagger}(x_{2})\Psi(x_{3})\Psi^{\dagger}(x_{4})\rangle \nonumber\\ &=&\exp\Big\{2is[-\mbox{Arg}(\vec{x}_{1} - \vec{x}_{2}) + \mbox{Arg}(\vec{x}_{1} - \vec{x}_{3}) - \mbox{Arg}(\vec{x}_{1} - \vec{x}_{4}) \nonumber\\ & & - \mbox{Arg}(\vec{x}_{2} - \vec{x}_{3}) + \mbox{Arg}(\vec{x}_{2} - \vec{x}_{4}) - \mbox{Arg}(\vec{x}_{3} - \vec{x}_{4})] - 2\pi is + C_{4b}\Big\} \nonumber\\ \langle \Psi^{\dagger}(x_{1})\Psi(x_{2})\Psi(x_{3})\Psi^{\dagger}(x_{4})\rangle &=& \langle \Psi(x_{1})\Psi^{\dagger}(x_{2})\Psi^{\dagger}(x_{3})\Psi(x_{4})\rangle \nonumber\\ &=&\exp\Big\{2is[-\mbox{Arg}(\vec{x}_{1} - \vec{x}_{2}) - \mbox{Arg}(\vec{x}_{1} - \vec{x}_{3}) + \mbox{Arg}(\vec{x}_{1} - \vec{x}_{4}) \nonumber\\ & & + \mbox{Arg}(\vec{x}_{2} - \vec{x}_{3}) - \mbox{Arg}(\vec{x}_{2} - \vec{x}_{4}) - \mbox{Arg}(\vec{x}_{3} - \vec{x}_{4})] - 2\pi is + C_{4c}\Big\} \nonumber\\ \label{1a} \end{eqnarray} where \begin{eqnarray} C_{4a} &=& -\pi a^{2}M^{2}\left(\vert x_{1} - x_{3}\vert + \vert x_{1} - x_{4}\vert + \vert x_{2} - x_{3}\vert + \vert x_{2} - x_{4}\vert \right) \nonumber\\ & & +\,\pi a^{2}M^{2}\left(\vert x_{1} - x_{2}\vert + \vert x_{3} - x_{4}\vert\right) \nonumber\\ C_{4b} &=& -\pi a^{2}M^{2}\left(\vert x_{4} - x_{3}\vert + \vert x_{4} - x_{1}\vert + \vert x_{2} - x_{3}\vert + \vert x_{2} - x_{1}\vert \right) \nonumber\\ & & 
+\,\pi a^{2}M^{2}\left(\vert x_{4} - x_{2}\vert + \vert x_{3} - x_{1}\vert\right) \nonumber\\ C_{4c} &=& -\pi a^{2}M^{2}\left(\vert x_{3} - x_{1}\vert + \vert x_{3} - x_{4}\vert + \vert x_{2} - x_{1}\vert + \vert x_{2} - x_{4}\vert \right) \nonumber\\ & & +\,\pi a^{2}M^{2}\left(\vert x_{3} - x_{2}\vert + \vert x_{1} - x_{4}\vert\right). \nonumber \end{eqnarray} The correlation functions of the new fields given by (\ref{29}) may be expressed in terms of the correlation functions above as \begin{eqnarray} \label{JKJ} & & \hspace{-0.5cm} \langle\Psi_{+}(x_{1})\Psi_{+}(x_{2})\Psi_{+}(x_{3})\Psi_{+}(x_{4})\rangle = \langle \Psi_{-}(x_{1})\Psi_{-}(x_{2})\Psi_{-}(x_{3})\Psi_{-}(x_{4})\rangle = \nonumber \\ & & \hspace{-0.5cm} 2[\langle\Psi(x_{1})\Psi(x_{2})\Psi^{\dagger}(x_{3})\Psi^{\dagger}(x_{4})\rangle + \langle\Psi^{\dagger}(x_{1})\Psi(x_{2})\Psi^{\dagger}(x_{3})\Psi(x_{4})\rangle + \langle\Psi^{\dagger}(x_{1})\Psi(x_{2})\Psi(x_{3})\Psi^{\dagger}(x_{4})\rangle] \nonumber\\ \nonumber\\ & & \hspace{-0.5cm} \langle \Psi_{+}(x_{1})\Psi_{+}(x_{2})\Psi_{-}(x_{3})\Psi_{-}(x_{4})\rangle = \langle \Psi_{-}(x_{1})\Psi_{-}(x_{2})\Psi_{+}(x_{3})\Psi_{+}(x_{4})\rangle = \nonumber\\ & & \hspace{-0.5cm} 2[\langle\Psi(x_{1})\Psi(x_{2})\Psi^{\dagger}(x_{3})\Psi^{\dagger}(x_{4})\rangle - \langle\Psi^{\dagger}(x_{1})\Psi(x_{2})\Psi^{\dagger}(x_{3})\Psi(x_{4})\rangle - \langle\Psi^{\dagger}(x_{1})\Psi(x_{2})\Psi(x_{3})\Psi^{\dagger}(x_{4})\rangle] \nonumber\\ \nonumber\\ & & \hspace{-0.5cm} \langle \Psi_{-}(x_{1})\Psi_{+}(x_{2})\Psi_{-}(x_{3})\Psi_{+}(x_{4})\rangle = \langle \Psi_{+}(x_{1})\Psi_{-}(x_{2})\Psi_{+}(x_{3})\Psi_{-}(x_{4})\rangle = \nonumber\\ & & \hspace{-0.5cm} 2[-\langle\Psi(x_{1})\Psi(x_{2})\Psi^{\dagger}(x_{3})\Psi^{\dagger}(x_{4})\rangle + \langle\Psi^{\dagger}(x_{1})\Psi(x_{2})\Psi^{\dagger}(x_{3})\Psi(x_{4})\rangle - \langle\Psi^{\dagger}(x_{1})\Psi(x_{2})\Psi(x_{3})\Psi^{\dagger}(x_{4})\rangle] \nonumber\\ \nonumber\\ & & \hspace{-0.5cm} \langle 
\Psi_{-}(x_{1})\Psi_{+}(x_{2})\Psi_{+}(x_{3})\Psi_{-}(x_{4})\rangle = \langle \Psi_{+}(x_{1})\Psi_{-}(x_{2})\Psi_{-}(x_{3})\Psi_{+}(x_{4})\rangle = \nonumber\\ & & \hspace{-0.5cm} 2[-\langle\Psi(x_{1})\Psi(x_{2})\Psi^{\dagger}(x_{3})\Psi^{\dagger}(x_{4})\rangle - \langle\Psi^{\dagger}(x_{1})\Psi(x_{2})\Psi^{\dagger}(x_{3})\Psi(x_{4})\rangle + \langle\Psi^{\dagger}(x_{1})\Psi(x_{2})\Psi(x_{3})\Psi^{\dagger}(x_{4})\rangle] \nonumber\\ \label{2a} \end{eqnarray} Let us see what are the braiding properties of the above functions. Using (\ref{1a}) and the expression above, we get \begin{eqnarray} & & \langle \Psi_{+}(x_{1})\Psi_{+}(x_{2})\Psi_{+}(x_{3})\Psi_{+}(x_{4})\rangle \stackrel{\longrightarrow}{x_{1} \leftrightarrow x_{2}} \quad 2\Big[e^{2\pi is}\langle \Psi(x_{1})\Psi(x_{2})\Psi^{\dagger}(x_{3})\Psi^{\dagger}(x_{4})\rangle \nonumber\\ & & \hspace{1.0cm} +\, e^{-2\pi is}\langle \Psi^{\dagger}(x_{1})\Psi(x_{2})\Psi(x_{3})\Psi^{\dagger}(x_{4})\rangle + e^{-2\pi is}\langle \Psi^{\dagger}(x_{1})\Psi(x_{2})\Psi^{\dagger}(x_{3})\Psi(x_{4})\rangle\Big] \nonumber\\ \nonumber\\ & & \langle \Psi_{+}(x_{1})\Psi_{+}(x_{2})\Psi_{-}(x_{3})\Psi_{-}(x_{4})\rangle \stackrel{\longrightarrow}{x_{1} \leftrightarrow x_{2}} \quad 2\Big[ e^{2\pi is}\langle \Psi(x_{1})\Psi(x_{2})\Psi^{\dagger}(x_{3})\Psi^{\dagger}(x_{4})\rangle \nonumber\\ & & \hspace{1.0cm} - e^{-2\pi is}\langle \Psi^{\dagger}(x_{1})\Psi(x_{2})\Psi(x_{3})\Psi^{\dagger}(x_{4})\rangle - e^{-2\pi is}\langle \Psi^{\dagger}(x_{1})\Psi(x_{2})\Psi^{\dagger}(x_{3})\Psi(x_{4})\rangle\Big] \nonumber\\ \nonumber\\ & & \langle \Psi_{-}(x_{1})\Psi_{+}(x_{2})\Psi_{-}(x_{3})\Psi_{+}(x_{4})\rangle \stackrel{\longrightarrow}{x_{1} \leftrightarrow x_{2}} \quad 2\Big[- e^{2\pi is}\langle \Psi(x_{1})\Psi(x_{2})\Psi^{\dagger}(x_{3})\Psi^{\dagger}(x_{4})\rangle \nonumber\\ & & \hspace{1.0cm} +\, e^{-2\pi is}\langle \Psi^{\dagger}(x_{1})\Psi(x_{2})\Psi(x_{3})\Psi^{\dagger}(x_{4})\rangle - e^{-2\pi is}\langle 
\Psi^{\dagger}(x_{1})\Psi(x_{2})\Psi^{\dagger}(x_{3})\Psi(x_{4})\rangle\Big] \nonumber\\ \nonumber\\ & & \langle \Psi_{-}(x_{1})\Psi_{+}(x_{2})\Psi_{+}(x_{3})\Psi_{-}(x_{4})\rangle \stackrel{\longrightarrow}{x_{1} \leftrightarrow x_{2}} \quad 2\Big[- e^{2\pi is}\langle \Psi(x_{1})\Psi(x_{2})\Psi^{\dagger}(x_{3})\Psi^{\dagger}(x_{4})\rangle \nonumber\\ & & \hspace{1.0cm} - e^{-2\pi is}\langle \Psi^{\dagger}(x_{1})\Psi(x_{2})\Psi(x_{3})\Psi^{\dagger}(x_{4})\rangle + e^{-2\pi is}\langle \Psi^{\dagger}(x_{1})\Psi(x_{2})\Psi^{\dagger}(x_{3})\Psi(x_{4})\rangle\Big]. \label{E1} \end{eqnarray} Now with the help of the Eq. (\ref{2a}) we can write the right-hand side of Eq. (\ref{E1}) in terms of correlators of the new fields $\Psi_{+}$ and $\Psi_{-}$, namely \begin{eqnarray} \langle \Psi_{+}(x_{1})\Psi_{+}(x_{2})\Psi_{+}(x_{3})\Psi_{+}(x_{4})\rangle \stackrel{\longrightarrow}{x_{1} \leftrightarrow x_{2}} & & \cos\delta\langle\Psi_{+}(x_{1})\Psi_{+}(x_{2})\Psi_{+}(x_{3})\Psi_{+}(x_{4})\rangle \nonumber\\ & & +\, i\sin\delta\langle \Psi_{+}(x_{1})\Psi_{+}(x_{2})\Psi_{-}(x_{3})\Psi_{-}(x_{4})\rangle \nonumber\\ \langle \Psi_{+}(x_{1})\Psi_{+}(x_{2})\Psi_{-}(x_{3})\Psi_{-}(x_{4})\rangle \stackrel{\longrightarrow}{x_{1} \leftrightarrow x_{2}} & & i\sin\delta\langle\Psi_{+}(x_{1})\Psi_{+}(x_{2})\Psi_{+}(x_{3})\Psi_{+}(x_{4})\rangle \nonumber\\ & & +\, \cos\delta\langle \Psi_{+}(x_{1})\Psi_{+}(x_{2})\Psi_{-}(x_{3})\Psi_{-}(x_{4})\rangle \nonumber\\ \langle \Psi_{-}(x_{1})\Psi_{+}(x_{2})\Psi_{-}(x_{3})\Psi_{+}(x_{4})\rangle \stackrel{\longrightarrow}{x_{1} \leftrightarrow x_{2}} & & i\sin\delta\langle\Psi_{-}(x_{1})\Psi_{+}(x_{2})\Psi_{-}(x_{3})\Psi_{+}(x_{4})\rangle \nonumber\\ & & +\, \cos\delta\langle \Psi_{-}(x_{1})\Psi_{+}(x_{2})\Psi_{+}(x_{3})\Psi_{-}(x_{4})\rangle \nonumber\\ \langle \Psi_{-}(x_{1})\Psi_{+}(x_{2})\Psi_{+}(x_{3})\Psi_{-}(x_{4})\rangle \stackrel{\longrightarrow}{x_{1} \leftrightarrow x_{2}} & & 
\cos\delta\langle\Psi_{-}(x_{1})\Psi_{+}(x_{2})\Psi_{-}(x_{3})\Psi_{+}(x_{4})\rangle \nonumber\\ & & +\, i\sin\delta\langle \Psi_{-}(x_{1})\Psi_{+}(x_{2})\Psi_{+}(x_{3})\Psi_{-}(x_{4})\rangle \label{85} \end{eqnarray} From (\ref{85}) we can determine the unitary matrix corresponding to the braiding operation (monodromy matrix) $M_{12}$. Indeed, we may write the above equation as \begin{eqnarray} \left(\begin{array}{c} \langle \Psi_{+}(x_{1})\Psi_{+}(x_{2})\Psi_{+}(x_{3})\Psi_{+}(x_{4})\rangle \\ \langle \Psi_{+}(x_{1})\Psi_{+}(x_{2})\Psi_{-}(x_{3})\Psi_{-}(x_{4})\rangle \\ \langle \Psi_{-}(x_{1})\Psi_{+}(x_{2})\Psi_{-}(x_{3})\Psi_{+}(x_{4})\rangle \\ \langle \Psi_{-}(x_{1})\Psi_{+}(x_{2})\Psi_{+}(x_{3})\Psi_{-}(x_{4})\rangle \\ \end{array} \right) \stackrel{\longrightarrow}{x_{1} \leftrightarrow x_{2}} \rho(\mbox{M}_{12}) \left(\begin{array}{c} \langle \Psi_{+}(x_{1})\Psi_{+}(x_{2})\Psi_{+}(x_{3})\Psi_{+}(x_{4})\rangle \\ \langle \Psi_{+}(x_{1})\Psi_{+}(x_{2})\Psi_{-}(x_{3})\Psi_{-}(x_{4})\rangle \\ \langle \Psi_{-}(x_{1})\Psi_{+}(x_{2})\Psi_{-}(x_{3})\Psi_{+}(x_{4})\rangle \\ \langle \Psi_{-}(x_{1})\Psi_{+}(x_{2})\Psi_{+}(x_{3})\Psi_{-}(x_{4})\rangle \\ \end{array} \right)\nonumber \end{eqnarray} where \begin{eqnarray} \rho(\mbox{M}_{12}) = \left(\begin{array}{cccc} \cos\delta & i\sin\delta & 0 & 0 \\ i\sin\delta & \cos\delta & 0 & 0\\ 0 & 0 & i\sin\delta & \cos\delta\\ 0 & 0 & \cos\delta & i\sin\delta \end{array} \right)\nonumber \end{eqnarray} and $\delta = 2\pi s$. We see that it satisfies $\rho(\mbox{M}_{12})^{\dagger}\rho(\mbox{M}_{12}) = 1$, being therefore unitary. Using the same procedure we get the monodromy matrices that correspond to the braiding operations $M_{13}$, $M_{14}$, $M_{23}$, $M_{24}$ and $M_{34}$.
These are given by \begin{eqnarray} \rho(\mbox{M}_{34}) &=& \rho(\mbox{M}_{12}) \nonumber \\ \nonumber \\ \rho(\mbox{M}_{13}) &=& \rho(\mbox{M}_{24}) = \left(\begin{array}{cccc} \alpha^{*} & 0 & 0 & 0 \\ 0 & 0 & 0 & \alpha^{*}\\ 0 & 0 & \alpha^{*} & 0\\ 0 & \alpha^{*} & 0 & 0 \end{array} \right)\nonumber \end{eqnarray} \begin{eqnarray} \rho(\mbox{M}_{14}) = \frac{1}{2}\left(\begin{array}{cccc} \alpha^{*} + \beta^{*} & 0 & 0 & -\alpha^{*} + \beta^{*} \\ 0 & -\alpha^{*} + \beta^{*} & \alpha^{*} + \beta^{*} & 0\\ 0 & \alpha^{*} + \beta^{*} & -\alpha^{*} + \beta^{*} & 0\\ -\alpha^{*} + \beta^{*} & 0 & 0 & \alpha^{*} + \beta^{*} \end{array} \right)\nonumber \end{eqnarray} {\small{\begin{eqnarray} \rho(\mbox{M}_{23}) = \left(\begin{array}{cccc} \cos\delta & 0 & 0 & i\sin\delta \\ 0 & i\sin\delta & \cos\delta & 0\\ 0 & \cos\delta & i\sin\delta & 0\\ i\sin\delta & 0 & 0 & \cos\delta \end{array} \right) \quad \nonumber \end{eqnarray}}} where $\alpha = e^{i\delta}$ and $\beta = e^{3i\delta}$. The only commuting braiding matrices are $\rho(\mbox{M}_{14})$ and $\rho(\mbox{M}_{23})$, i. e. \begin{eqnarray} [\rho(\mbox{M}_{14}), \rho(\mbox{M}_{23})] = 0 \nonumber \end{eqnarray} It can be easily verified that the unitary monodromy braiding matrices satisfy the Yang-Baxter relations, \begin{eqnarray} \rho(\mbox{M}_{12})\rho(\mbox{M}_{23})\rho(\mbox{M}_{12}) = \rho(\mbox{M}_{23})\rho(\mbox{M}_{12})\rho(\mbox{M}_{23}). \end{eqnarray} or, equivalently \begin{eqnarray} \rho(\mbox{M}_{23})\rho(\mbox{M}_{34})\rho(\mbox{M}_{23}) = \rho(\mbox{M}_{34})\rho(\mbox{M}_{23})\rho(\mbox{M}_{34}). \end{eqnarray} We now come to another of our most interesting results. Again we will see that for a particular choice of the spin, the monodromy matrices become logic gates, which are essential for quantum computation algorithms. 
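The matrix statements quoted above lend themselves to a direct numerical check. The sketch below (illustrative code, not part of the original text) builds the four distinct monodromy matrices for an arbitrary $\delta$ and verifies their unitarity, the commutator $[\rho(\mbox{M}_{14}), \rho(\mbox{M}_{23})] = 0$, and the Yang-Baxter relation.

```python
import numpy as np

def braid_matrices(delta):
    """Monodromy matrices rho(M_12), rho(M_13), rho(M_14), rho(M_23) (sketch)."""
    c, s = np.cos(delta), 1j * np.sin(delta)
    alpha_c = np.exp(-1j * delta)            # alpha^* = e^{-i delta}
    beta_c = np.exp(-3j * delta)             # beta^*  = e^{-3i delta}
    m12 = np.array([[c, s, 0, 0],
                    [s, c, 0, 0],
                    [0, 0, s, c],
                    [0, 0, c, s]])
    m13 = alpha_c * np.array([[1, 0, 0, 0],  # rho(M_13) = rho(M_24)
                              [0, 0, 0, 1],
                              [0, 0, 1, 0],
                              [0, 1, 0, 0]])
    p, m = (alpha_c + beta_c) / 2, (-alpha_c + beta_c) / 2
    m14 = np.array([[p, 0, 0, m],
                    [0, m, p, 0],
                    [0, p, m, 0],
                    [m, 0, 0, p]])
    m23 = np.array([[c, 0, 0, s],
                    [0, s, c, 0],
                    [0, c, s, 0],
                    [s, 0, 0, c]])
    return m12, m13, m14, m23

delta = 2 * np.pi * 0.15                     # arbitrary anyonic spin s = 0.15
m12, m13, m14, m23 = braid_matrices(delta)

for m in (m12, m13, m14, m23):               # all monodromy matrices are unitary
    assert np.allclose(m.conj().T @ m, np.eye(4))

# rho(M_14) and rho(M_23) are the only commuting pair
assert np.allclose(m14 @ m23, m23 @ m14)

# Yang-Baxter relation
assert np.allclose(m12 @ m23 @ m12, m23 @ m12 @ m23)

# at delta = pi/2 (s = 1/4), rho(M_12) becomes i times a NOT-type permutation
g12 = braid_matrices(np.pi / 2)[0]
gate = 1j * np.array([[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])
assert np.allclose(g12, gate)
```

The last check reproduces the logic-gate form discussed next; with $\rho(\mbox{M}_{34}) = \rho(\mbox{M}_{12})$ the second Yang-Baxter relation follows from the first.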
Indeed, a simple controlled-NOT operation (CNOT gate) can then be obtained by choosing $s = 1/4$ or $\delta = \pi/2$ in $\rho(\mbox{M}_{12})$ (or $\rho(\mbox{M}_{34})$), namely \begin{eqnarray} \rho(\mbox{M}_{12}) = i\left(\begin{array}{cccc} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1 \end{array} \right). \nonumber \end{eqnarray} More logic gates may be obtained accordingly by a straightforward generalization. At this point, one should inquire more precisely about what ultimately determines the value of the spin of the vortices. We have seen that $s= 2\Phi_M Q M^{2} {\cal M}^{-1}$. Assuming the vortices carry one unit of magnetic flux ($a=1$), we have $\Phi_M= \frac{hc}{Q}$, hence the spin is \begin{eqnarray} s=2hc M^{2} {\cal M}^{-1}, \label{spin} \end{eqnarray} where we retrieved the physical units of magnetic flux. We conclude that the spin is determined by the ratio of the squared Higgs vacuum expectation value to the current algebra scalar functional. The latter is a fixed number, determined by the spectral density of the theory. The former is determined in principle by the Higgs potential parameters; however, in any concrete associated condensed matter system, the Higgs expectation value is a physically adjustable parameter, which will depend on the temperature of the system. This means, therefore, that the value of the spin $s$ is ultimately determined by the temperature and could be adjusted to the value $1/4$ or to any other value by tuning the temperature appropriately. \section{Conclusion} We have shown that the 2+1 D Chern-Simons-Higgs theory in the broken phase contains quantum states with non-Abelian statistics. These states are created by operators that are combinations of electrically charged magnetic vortex fields and their Hermitean adjoints.
The Euclidean correlation functions of these composite vortex operators have been obtained by the method of quantization of topological excitations, which is based on the idea of order-disorder duality. All properties of such states may be derived from these correlation functions. For instance, we may infer from their large distance behavior that in the ordered phase the quantum vortices are in excited quantum states carrying both nonzero magnetic flux and charge and having a gap proportional to the vacuum expectation value of the Higgs field squared. We may also show, from the behavior of these correlation functions, that the quantum vortices are in general anyons, as one should expect from a field carrying both magnetic flux and charge. In special cases, however, they can be fermions or bosons, depending on the value of some input parameters. The analytic properties of the Euclidean vortex correlation functions, in turn, have been used in order to show that proper (self-adjoint or anti-self-adjoint) combinations of vortices and anti-vortices possess non-Abelian statistics, whenever these electrically charged vortices are anyons. The unitary matrices corresponding to each of the non-commuting braiding operations have been explicitly determined as a function of the spin $s$ for the case of the two- and four-point correlation functions. For a special value of the spin, namely $s=1/4$, we have shown that the monodromy matrices, which result from the exchange of the (anti) self-adjoint vortex states, become the basic logic gates (NOT, CNOT, and so on) required by the algorithm of a quantum computer. The spin value $s$, being proportional to the square of the Higgs field vacuum expectation value, can be tuned by the temperature in any associated condensed matter system. It would be nice to find a concrete material realization for the system studied here.
Pure non-Abelian Chern-Simons theory has been claimed to describe the state corresponding to the plateau $\nu=5/2$ of Quantum Hall systems \cite{qc-nas2,fradkin1998chern,moore1991nonabelions,naa1,naa3,naa4}. It should be investigated, for that matter, whether the coupling of a Higgs field could have any physical meaning in this or in any related system. \bigskip \noindent{\bf Acknowledgments} This work was supported in part by CNPq and FAPERJ.
\section{Introduction}\label{intro} For businesses that want to promote their items and services, running online advertisements on advertising platforms is an effective way to achieve their marketing goals. To attract users to know more about the displayed items, advertisers design ad creatives (such as text, images and videos). Figure~\ref{fig:example} shows the creative of an ad in a news feed, which contains a text and an image. An appropriate creative design that captures user interest accurately can improve the ad's click-through rate (CTR). CTR is a key metric that quantifies the effect of an ad, because a click is the precondition for any further user actions such as sharing and purchasing. Thus, designing ad creatives that achieve higher CTR is crucial for ad delivery. Traditionally, advertisers need to manually design the creative of each ad, and then resort to online A/B test results to continually refine the initial creative to catch user interest. Such a trial-and-error process is labor-intensive and usually inefficient. In terms of the text in a creative, due to the variability of language expressions, it may need to be polished multiple times to obtain an ideal version. To improve the efficiency of ad delivery for advertisers, especially small advertisers that cannot afford to hire professional writers, this paper focuses on automatically generating the text for an ad, with the goal that the generated text captures user interest and achieves higher CTR. \begin{figure}[t] \centering \centerline{\includegraphics[width=0.95\columnwidth]{title_case_ppt_update.pdf}} \caption{An illustration that shows the creative of an online advertisement in a news feed on mobile.} \label{fig:example} \end{figure} There are several challenges in achieving this goal. \textbf{(I)} First, it is important to choose a suitable source for generating ad texts.
A straightforward source is the corresponding item's title on the landing page. However, a title is usually a mixture of item attributes and may not reflect user preferences. In contrast, an ad text should contain insightful and informative contents that can arouse users' purchasing desire. \textbf{(II)} Second, most current natural language generation (NLG) models are optimized with the cross-entropy criterion, which is inconsistent with the CTR metric we care about. To encourage the model to generate texts achieving higher CTR, there is a great need to incorporate the CTR objective into training. \textbf{(III)} Last but not least, a well-trained NLG model usually needs a large amount of paired data. However, it is costly to collect sufficient human-written ad texts, especially for small advertisers; thus we face a low-resource problem. In this paper we propose \textsc{Creater}, a CTR-driven advertising text generation approach, to address the above challenges. \textbf{(I)} First, we choose high-quality user reviews as the input source for generation. Compared to titles, user reviews intuitively contain contents that reflect real experience after purchasing. We also introduce an aspect term as an input control code to improve the informativeness of the generated text. \textbf{(II)} Second, to explicitly incorporate the CTR objective when optimizing NLG models, we make use of user feedback collected through online A/B tests. Advertisers often perform online A/B tests to compare two different texts of the same ad, where the online CTR metric reflects the distinction between a relatively ``good'' text and a ``bad'' one. We employ contrastive learning for model optimization, which encourages our model to generate texts that can achieve high CTR. \textbf{(III)} Finally, to alleviate the low-resource problem, we make use of large-scale unpaired reviews to perform pre-training that provides warm-starting.
We design a novel self-supervised objective customized to our scenario, which reduces the gap between pre-training and fine-tuning. \textsc{Creater} has been deployed online in a leading advertising platform and achieves significant improvement on core online metrics. The main contributions of this work are summarized as follows: $\bullet$ We propose \textsc{Creater} for generating ad texts that capture user interest based on high-quality user reviews. We make use of online A/B test data to perform contrastive learning, which encourages the model to generate texts that achieve higher CTR. $\bullet$ We propose a novel self-supervised objective to provide warm-starting with unpaired reviews, which is customized to our scenario and reduces the gap between pre-training and fine-tuning. $\bullet$ Experiments on industrial datasets show that \textsc{Creater} outperforms previous approaches on both automatic and human evaluation, and online results verify that it brings significant uplift on core metrics. \begin{figure*}[t] \centering \centerline{\includegraphics[width=2\columnwidth]{overall_update.pdf}} \caption{Overview of our proposed approach \textsc{Creater} for CTR-driven advertising text generation. } \label{fig:overview} \end{figure*} \section{Problem Formulation}\label{method:formulation} Given a source $x$ and a control code $c$ for an ad, where the source is a high-quality user review of the ad item and the control code is an aspect term of that review used to guide generation, we aim to learn a generation model $p_{\Theta}(y\mid x,\ c)$ that can produce an appropriate ad text $y$ (where $\Theta$ denotes the trainable parameters of the model). Our goal is that the generated ad text can capture user interest and attract users to know more about the ad item. \section{Proposed Approach: \textsc{Creater}} Figure~\ref{fig:overview} illustrates the workflow of \textsc{Creater}, which consists of two stages. The first stage is \textit{Controlled Pre-Training}, which learns from unpaired user reviews to provide warm-starting for the low-resource scenario. The second stage is \textit{Contrastive Fine-Tuning}, which further learns from online A/B test data that reflects user feedback, aiming to encourage the model to generate ad texts that can achieve higher CTR. \subsection{Stage 1: Controlled Pre-Training}\label{method:stage1} We construct a large set of user reviews as the pre-training corpus $\mathcal D_x$. Based on $\mathcal D_x$, we extract a set of aspect terms $\mathcal D_c$ using an off-the-shelf unsupervised model \textsc{Abae}~\cite{he2017unsupervised}, where each aspect term is typically represented as a word. Recall that we aim to learn a generation model $p_{\Theta}(y\mid x,\ c)$, while the pre-training stage only makes use of \textit{unpaired} user reviews $\mathcal D_x$. To ensure that the model benefits from pre-training, we propose a novel self-supervised objective customized to our scenario, which reduces the gap between pre-training and fine-tuning. The core idea is that, for each review $x\in\mathcal D_x$, we construct an aspect-based \textit{pseudo-target} $\tilde y$ from the review $x$ and mask this segment in $x$. The self-supervised objective is to perform aspect-controlled generation, which aims to recover the segment $\tilde y$ given the masked review with the guidance of the corresponding aspect term.
\paragraph{Aspect-Controlled Masking}\label{method:stage1_masking} For a review $x\in\mathcal D_x$, we tokenize it as a list of segments $[x_\mathrm{seg\_1}, x_\mathrm{seg\_2}, \ldots]$ based on punctuation and a dependency parser, where each segment $ x_\mathrm{seg\_i}$ is a sub-sequence of $x$. Given an aspect term $c\in\mathcal D_c$ that appears in the review $x$, we compute the matching score between $c$ and each $ x_\mathrm{seg\_i}$ with a matching function $f\left(c,\ x_\mathrm{seg\_i}\right)$.\footnote{The function $f(\cdot, \cdot)$ can either be a lexical-based one (such as similarity of sparse TF-IDF vectors) or an embedding-based one (such as similarity of averaged word embeddings).} We then select the segment with the highest matching score as the \textit{pseudo-target} ${\tilde y}$ for the given pair of (source $x$, control code $c$): \begin{equation} \label{method:matching} {\tilde y}=\arg\max\limits_{x_\mathrm{seg\_i}\ \in\ x} f\left(c,\ x_\mathrm{seg\_i}\right) \end{equation} For each triple (source $x$, control code $c$, pseudo-target $\tilde y$), our aspect-controlled masking strategy masks the review $x$ by replacing its pseudo-target $\tilde y$ with a special word ``\texttt{[MASK]}''. Thus we transform each triple $(x,\ c,\ \tilde y)$ into a masked one $(\tilde x,\ c,\ \tilde y)$, where the masked review $\tilde x$ is specific to the aspect term $c$. \paragraph{\textbf{Aspect-Controlled Generation}} Given a masked review $\tilde x$ with an aspect term $c$, our self-supervised objective is to recover the masked segment (i.e., the pseudo-target $\tilde y$) of the original review $x$ under the control of $c$: \begin{equation}\label{method:stage1-objective} \min_{\Theta}\ -\log\ p_{\Theta}\left(\tilde y\mid \tilde x,\ c\right)\,. \end{equation} Such aspect-controlled generation encourages the model to better understand the context of the masked input review.
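The masking procedure above can be sketched as follows. This is a simplified illustration: the punctuation-only segmenter and the bag-of-words cosine stand in for the punctuation/dependency-based segmentation and the TF-IDF matching function $f$ described in the text.

```python
import re
from collections import Counter
from math import sqrt

def split_segments(review):
    # Simplified segmenter: split on punctuation only
    # (the paper additionally uses a dependency parser).
    return [s.strip() for s in re.split(r"[,.;!?]", review) if s.strip()]

def bow_cosine(a, b):
    # Bag-of-words cosine similarity, a stand-in for the
    # TF-IDF-based matching function f(c, x_seg_i).
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = sqrt(sum(v * v for v in ca.values()))
    nb = sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def aspect_controlled_mask(review, aspect, f=bow_cosine):
    # Select the best-matching segment as the pseudo-target,
    # then replace it with [MASK] in the review.
    segments = split_segments(review)
    pseudo_target = max(segments, key=lambda seg: f(aspect, seg))
    masked_review = review.replace(pseudo_target, "[MASK]", 1)
    return masked_review, aspect, pseudo_target

masked, c, target = aspect_controlled_mask(
    "The fruit is fresh, the taste is sweet, will buy again", "taste")
assert target == "the taste is sweet"
assert masked == "The fruit is fresh, [MASK], will buy again"
```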
Compared to general pretraining models~\cite{zhang20pegasus,lewis2020bart,raffel2020exploring}, the proposed objective is customized to our scenario. The input information $\tilde x$ does not contain the content to be generated, improving the model's ability to generate abstractive contents rather than simply copying from the input. Formally, we first prepend the control code $c$ to the masked source $\tilde x$, and add a special word \texttt{``[SEP]''} between them. We then feed the concatenated sequence $[c, {\small\texttt{[SEP]}},\ \tilde x]$ into \textsc{Creater} to generate the pseudo-target $\tilde y=[\tilde y_1,\tilde y_2,\ldots,\tilde y_{\tilde T}]$ (where $\tilde T$ denotes the length), where the model architecture is a Transformer encoder-decoder~\cite{vaswani2017attention} and it is optimized via teacher-forcing: \begin{equation} \begin{aligned} \bm{{\tilde h}}_1, \bm{{\tilde h}}_2, \ldots, \bm{{\tilde h}}_{\tilde N}\ =\ \mathsf{Enc}\left([c, {\small\texttt{[SEP]}},\ \tilde x]\right) \\ p\left(\tilde y_t\mid \tilde y_{0:t-1},\ \tilde x,\ c\right) \sim \mathsf{Dec}\left(\tilde y_{0:t-1},\ \bm{{\tilde h}}_{1:{\tilde N}} \right)\\ \min_{\Theta} \sum_{t=1}^{\tilde T} -\log\ p_{\Theta}\left(\tilde y_t\mid \tilde y_{0:t-1},\ \tilde x,\ c\right) \end{aligned} \end{equation} where $\tilde N$ is the length of the sequence $[c, {\small\texttt{[SEP]}},\ \tilde x]$, and $\bm{{\tilde h}}_i$ is the $i$-th word's representation. \subsection{Stage 2: Contrastive Fine-Tuning}\label{method:stage2} To incorporate the CTR objective during generation, we make use of existing online A/B test data that reflects user preference. Specifically, we construct a dataset $\mathcal D$, where each sample is a tuple (source $x$, control code $c$, positive target $y^+$, negative target $y^-$). Both $y^+$ and $y^-$ are human-written ad texts (given $x$ and $c$), while $y^+$ achieves higher CTR than $y^-$ during the online A/B test.
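A minimal sketch of how one fine-tuning tuple could be assembled from an A/B test outcome; the `Sample` container, field names, and CTR values are illustrative, not the paper's actual data schema.

```python
from collections import namedtuple

# One fine-tuning sample: (source x, control code c, positive y+, negative y-).
Sample = namedtuple("Sample", ["source", "control", "pos", "neg"])

def build_sample(source, control, text_a, ctr_a, text_b, ctr_b):
    # The ad text with the higher observed CTR becomes the positive
    # target y+; the other becomes the negative target y-.
    if ctr_a >= ctr_b:
        return Sample(source, control, text_a, text_b)
    return Sample(source, control, text_b, text_a)

s = build_sample("the fruit is fresh ...", "taste",
                 "Sweet and thirst-quenching", 0.052,
                 "Tastes well", 0.031)
assert s.pos == "Sweet and thirst-quenching"
assert s.neg == "Tastes well"
```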
Next, we first describe a vanilla fine-tuning objective that only considers $y^+$. We then introduce two contrastive fine-tuning objectives that take full advantage of online A/B test data. \paragraph{Vanilla Fine-Tuning} A straightforward objective is to maximize the generation probability of the positive target $y^+$: \begin{equation} \mathcal L_{ft}\ = \ -\log\ p_{\Theta}(y^+ \mid x,\ c)\,. \end{equation} Obviously, this learning objective ignores the utility of negative targets. To enhance the model's ability to discriminate between ad texts with different CTR, we propose to expose the decoder to both positive and negative ad texts by modeling their distinctness. Specifically, we leverage the paradigm of contrastive learning, where the positive/negative target (i.e., the ad text with higher/lower CTR) is used to construct a positive/negative paired instance, and introduce two contrastive learning based objectives to fine-tune the pre-trained model. \paragraph{i. Margin-based Contrastive Fine-Tuning} We first propose to directly maximize the margin of generation probabilities between the positive target $y^+$ and the negative target $y^-$. This yields the following loss function: \begin{equation}\label{method:margin} \small \mathcal L_{cont}= \max\left\{0, -\left(\log\ p_{\Theta}(y^+ \mid x,\ c) - \log\ p_{\Theta}(y^- \mid x,\ c)\right) +\gamma \right\} \end{equation} where the margin $\gamma$ is a hyperparameter. Through this loss, the optimization procedure is encouraged to maximize the probability gap between ad texts with distinct CTR. \paragraph{ii.
InfoNCE-based Contrastive Fine-Tuning} From the perspective of representation learning, we propose a contrastive loss based on InfoNCE~\cite{oord2018representation}, which maximizes the similarity between the source and the positive target, and minimizes that between the source and the negative target: \begin{equation}\label{method:infonce} \small \mathcal L_{cont}=- \log\frac{ \exp\left({\mathrm{sim}\left((c,x),\ y^+\right)}/{\tau}\right) }{ \exp\left({\mathrm{sim}\left((c,x),\ y^+\right)}/{\tau}\right) + \exp\left({\mathrm{sim}\left((c,x),\ y^-\right)}/{\tau}\right) } \end{equation} where $\tau$ is the temperature and $\mathrm{sim}(\cdot, \cdot)$ is a similarity function over encoder and decoder representations. We apply mean-pooling over the top layer of the encoder/decoder to obtain their representations. Let $\bm h$, ${\bm z}^+$ and ${\bm z}^-$ denote the encoder representation and the decoder representations of the positive and negative targets, respectively. We then add two fully-connected layers on the encoder and the decoder side respectively, transforming them into the same vector space. An inner product operation is then used to obtain the similarity scores: \begin{equation} \small \begin{aligned} \operatorname{sim}\left((c, x),\ y^{+}\right) &=\left(\mathbf{W}_{e} \boldsymbol{h}\right)^{\top}\left(\mathbf{W}_{d} \boldsymbol{z}^{+}\right) \\ \operatorname{sim}\left((c, x),\ y^{-}\right) &=\left(\mathbf{W}_{e} \boldsymbol{h}\right)^{\top}\left(\mathbf{W}_{d} \boldsymbol{z}^{-}\right)\ \end{aligned} \end{equation} where $\mathbf{W}_{e}$ and $\mathbf{W}_{d}$ are learnable parameters. \paragraph{\textbf{Objective}} The final loss of the contrastive fine-tuning stage is the sum of $ \mathcal{L}_{ft}$ and the contrastive loss: \begin{equation} \mathcal{L}_{ft}(y^+)\ + \mathcal{L}_{ft}(y^-)\ +\ \alpha \mathcal{L}_{cont} \end{equation} where $\alpha$ is a trade-off hyperparameter, and $\mathcal{L}_{cont}$ can be either margin-based or InfoNCE-based.
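The two contrastive objectives can be sketched numerically as follows. These are scalar toy versions of the margin-based and InfoNCE-based losses above, operating on hypothetical sequence log-probabilities and similarity scores rather than on actual model outputs:

```python
from math import exp, log

def margin_contrastive_loss(logp_pos, logp_neg, gamma=1.0):
    # Hinge on the gap of sequence log-probabilities: the loss is zero
    # once log p(y+|x,c) exceeds log p(y-|x,c) by at least gamma.
    return max(0.0, -(logp_pos - logp_neg) + gamma)

def infonce_contrastive_loss(sim_pos, sim_neg, tau=1.0):
    # Two-way softmax over the similarity scores sim((c,x), y+/-).
    num = exp(sim_pos / tau)
    den = num + exp(sim_neg / tau)
    return -log(num / den)

# A well-separated pair incurs no margin loss ...
assert margin_contrastive_loss(-1.0, -3.0, gamma=1.0) == 0.0
# ... while a reversed ranking is penalized.
assert margin_contrastive_loss(-3.0, -1.0, gamma=1.0) == 3.0
# InfoNCE decreases as the positive similarity grows.
assert infonce_contrastive_loss(2.0, 0.0) < infonce_contrastive_loss(0.0, 0.0)
```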
\paragraph{\textbf{Comparison}} The advantage of the margin-based loss is that it does not add extra parameters, directly incorporating the CTR objective into generation probabilities. The InfoNCE-based loss considers encoder representations to learn better decoder representations. Although it adds a few parameters (i.e., two fully-connected layers), they are pruned at inference. The construction of positive-negative pairs in \textsc{Creater} is designed for the CTR objective via user feedback, unlike recent work tackling other issues~\cite{cao2021cliff,pan2021contrastive,lee2021contrastive}. \section{Experiments} \subsection{Experimental Setup} \paragraph{\textbf{Datasets}}\label{experiment:dataset} To our knowledge, there is no publicly available dataset that contains ad texts coupled with CTR information; thus we collected data from a leading advertising platform. We construct a dataset $\mathcal D$, in Chinese, through online A/B tests, where each sample is a tuple of (user review, aspect term, positive ad text, negative ad text). The user reviews are ensured to be of high quality based on rules and filtering models. Each ad text is written by human editors given the review and aspect term, covering 4,047 advertisers. More details about data preprocessing and filtering can be found in \textbf{Appendix}~\ref{appendix:dataset}. We also produce a large-scale review corpus $\mathcal D_x$ for constructing the pre-training dataset via aspect-controlled masking (§~\ref{method:stage1_masking}). Table~\ref{table:datasets} lists the statistics. We split $\mathcal D$ with a 7:1:2 ratio to obtain the training/development/test sets. \begin{table}[t] \centering \scriptsize \begin{tabular}{ccc} \toprule \textbf{Dataset} & Pre-training ($\mathcal D_x$) & Fine-tuning ($\mathcal D$)\\ \midrule \# Samples & 1,471,106 & 43,985 \\ Avg. length of reviews & 25.05 & 25.31\\ Avg. length of ad texts & N/A & 13.06\\ \bottomrule \end{tabular} \caption{Statistics of the datasets used in our experiments. ``Avg.
length'' means the average number of characters in a sequence (review or ad text).} \label{table:datasets} \end{table} \paragraph{\textbf{Comparative Approaches}} We choose two types of comparative approaches in our experiments. The first type contains \textit{non-CTR-driven approaches}: (1) {\underline{\textsc{SegExt}}} (Segment extraction)\quad employs unsupervised aspect-controlled masking (§~\ref{method:stage1_masking}) to return a segment of the source as the ad text. {If the returned segment is too short to display, we add its left or right segment based on matching score.} (2) {\underline{\textsc{PGNet}}} (Pointer-generator)\quad is an RNN-based approach with a copying mechanism~\cite{see2017get}; (3) {\underline{\textsc{C-PGNet}}}\quad improves \textsc{PGNet} by adding the control code during decoding, which acts on the generation gate; (4) {\underline{\textsc{Trm}}} (Transformer)\quad is the state-of-the-art architecture for text generation; (5) {\underline{\textsc{C-Trm}}}\quad improves \textsc{Trm} by adding the control code at both the encoder and decoder sides, with the help of fusion layers; (6) {\underline{\textsc{C-Trm-RL}}}\quad fine-tunes \textsc{C-Trm} with reinforcement learning (RL), where an extra CTR regression model (trained on $\mathcal D$) serves as the reward estimator that produces the click probability of a generated text~\cite{hughes2019generating}. {Negative targets are used to train the reward estimator, and are not explicitly used for optimizing the generation model.} The second type contains \textit{CTR-driven approaches}. They exploit the negative target $y^-$ during training to explicitly incorporate CTR information: (1) {\underline{\textsc{QualityModel}}}\quad employs click behavior as a quality measure for paired samples~\cite{wang2019quality}. It first builds a CTR latent space to represent the source and target, and then computes the cosine similarity between them as the quality score of the sample.
Quality scores are used to weight the cross-entropy objective and reduce the probability of generating low-quality texts; (2) {\underline{\textsc{ContraModel}}} \quad is a variant of \textsc{Creater} that removes the controlled pre-training stage; (3) {\underline{\textsc{Bart+ContraModel}}}\quad performs pre-training from scratch using the self-supervised objective of \textsc{Bart} rather than our proposed one, and then performs fine-tuning with \textsc{ContraModel}. \subsection{Implementation Details}\label{appendix:implementation} Both the encoder and the decoder of \textsc{Creater} contain four layers, and the dimension of the hidden representations produced by each layer is set to 512. For fair comparison, all comparative approaches based on the Transformer employ the above architecture. For text preprocessing, we tokenize sources and targets into word sequences, and thus \textsc{Creater} generates ad texts at the word level. We restrict the maximum input length to 128 words. The overall parameter size is 129M. At the pre-training stage, we employ the Adafactor optimizer~\cite{shazeer2018adafactor}, with a mini-batch size of 4096 for training 10 epochs. Models are trained on 8 Tesla V100 32GB GPUs. We implement our approach with \textit{PyTorch}\footnote{\url{https://github.com/pytorch/pytorch}} and \textit{Transformers}\footnote{\url{https://github.com/huggingface/transformers}}. In terms of the model for extracting the aspect term set, during early experiments we found that the performance of \textsc{Creater} is not sensitive to it, and thus we employ the representative model \textsc{Abae}. For the matching function $f(\cdot, \cdot)$ (Equation~\ref{method:matching}) used in aspect-controlled masking for building the pre-training data, we tried a lexical-based one (similarity of sparse TF-IDF vectors) and an embedding-based one (similarity of averaged word embeddings), and found that the performance of the fine-tuned model is not sensitive to this choice. Thus we choose the former for simplicity.
At the fine-tuning stage, we set the mini-batch size to 1024 for 20 epochs. When the margin-based contrastive loss is used, the margin parameter $\gamma$ is set to 1.0. If we use the InfoNCE-based contrastive loss, the temperature parameter $\tau$ is set to 1.0. We set the trade-off hyperparameter $\alpha$ to 1e-3 (searched from \{1e-2, 1e-3, 1e-5\}). We choose the checkpoint with the lowest perplexity on the validation set as the final model. At inference time, we use the beam search algorithm to generate texts, with a beam size of 5. The BLEU metric is evaluated using \textit{NLTK}\footnote{\url{https://github.com/nltk/nltk}}, and the ROUGE metric is evaluated using \textit{pyrouge}\footnote{\url{https://github.com/bheinzerling/pyrouge}}. All reported results of different approaches are run based on the same random seed. \begin{table}[t] \centering \scriptsize \begin{tabular}{lcccc} \toprule \textbf{Approach} & BLEU-4 & RG-1 & RG-2 & RG-L \\ \midrule \multicolumn{5}{l}{\textit{Non-CTR-driven Approaches}}\\ \textsc{SegExt} & 13.54 & 31.11 & 7.71 & 23.66 \\ \textsc{PGNet} & 24.85 & 44.79 & 16.76 & 35.21\\ \textsc{C-PGNet} & 37.69 & 55.09 & 31.70 & 46.62\\ \textsc{Trm} & 33.36 & 50.58 & 26.23 & 42.44 \\ \textsc{C-Trm} & 48.66 & 61.73 & 42.43 & 54.82\\ \textsc{C-Trm-RL} & 50.11 & 62.59 & 42.26 & 55.43\\ \midrule \multicolumn{5}{l}{\textit{CTR-driven Approaches}}\\ \textsc{QualityModel} & 49.89 & 62.67 & 43.85 & 55.84\\ \hdashline[2pt/1.2pt] \textsc{ContraModel} & {51.47} & {63.47} & {43.94} & {56.93} \\ \textsc{Bart+ContraModel} & 53.35 & 65.04 & 46.20 & 58.51\\ \textsc{Creater} & \textbf{54.56} & \textbf{65.93} & \textbf{47.44} & \textbf{59.77}\\ \bottomrule \end{tabular} \caption{Main results. ``RG'' stands for ROUGE.
Both BLEU and ROUGE scores are multiplied by 100.} \label{results:main} \end{table} \subsection{Performance Comparison} Table~\ref{results:main} shows the comparison results, and we report BLEU-4 and ROUGE-1/2/L (positive targets are regarded as the gold standard).\footnote{Our \textsc{Creater} performs significantly better than the second-best comparative approach at the level of $p<0.05$.} It is natural that the approaches considering aspect terms outperform those without controlling. CTR-driven approaches usually outperform non-CTR-driven ones, demonstrating that exposing the model to both positive and negative targets improves generation quality. \textsc{QualityModel} and \textsc{ContraModel} represent two paradigms for incorporating CTR information. \textsc{ContraModel} is superior to \textsc{QualityModel}, which indicates that directly modeling the distinctness as an auxiliary objective is more effective than weighting the original loss. \textsc{Bart+ContraModel} performs better than \textsc{ContraModel} by adding a pre-training stage. \textsc{Creater} adopts a customized controlled pre-training objective and achieves the best results. This verifies that designing a suitable self-supervised objective is crucial for improving generation. \begin{table}[t] \centering \scriptsize \begin{tabular}{lcccc} \toprule \textbf{Variants of Pre-training} & BLEU-4 & RG-1 & RG-2 & RG-L \\ \midrule {\textsc{Creater}} ($p(\tilde y\mid \tilde x,c)$) & \textbf{54.56} & \textbf{65.93} & \textbf{47.44} & \textbf{59.77} \\ {\quad w/o masking} ($p(\tilde y\mid x,c)$) & 51.24 & 63.65 & 43.74 & 56.94\\ {\quad w/o control code} ($p(\tilde y\mid \tilde x)$) & 53.11 & 64.64 & 45.91 & 58.28\\ {\quad w/o whole pre-training} & 49.92 & 62.20 & 41.91 & 55.09 \\ \bottomrule \end{tabular} \caption{Comparison of pre-training objectives.
} \label{results:discussion_stage1} \end{table} \subsection{Discussion} \paragraph{Effect of Aspect-Controlled Masking} During pre-training, aspect-controlled masking ensures the ability to generate abstractive contents rather than simply copying from the source. Besides, the model takes aspect terms as control codes to generate the masked contents (pseudo-targets). Both mechanisms reduce the gap between pre-training and fine-tuning. We verify their effectiveness by removing one of the two mechanisms, while the fine-tuning stage remains unchanged. Results are shown in Table~\ref{results:discussion_stage1}. The two variants are inferior to the full model, demonstrating that both of them improve pre-training to provide better warm-starting. Aspect-controlled masking brings improvements of over 3 BLEU points and 2 ROUGE points. Thus, our novel controlled pre-training objective indeed enhances the performance of advertising text generation via effective self-supervised learning on an unpaired corpus. \begin{figure}[t] \centering \centerline{\includegraphics[width=0.65\columnwidth]{creater_update.pdf}} \caption{Results with limited fine-tuning data. Dashed lines are the two strongest baselines trained on the whole data.} \label{fig:lowresource} \end{figure} \paragraph{Benefit in Low-Resource Scenario} We further verify the effect of controlled pre-training when only limited paired data are available for fine-tuning. We change the size of the data (from 25\% to 100\% of the whole training set), and compare to the two strongest baselines (\textsc{QualityModel} and \textsc{ContraModel}, without pre-training) trained on the whole training set. As shown in Figure~\ref{fig:lowresource}, with only half of the fine-tuning data, \textsc{Creater} performs on par with \textsc{QualityModel}, verifying the benefit of our controlled pre-training in the low-resource scenario.
\paragraph{Analysis of Contrastive Fine-Tuning} Our \textsc{Creater} exposes the model to both positive and negative targets to incorporate CTR information. Table~\ref{results:discussion_stage2} shows the comparison of the two contrastive objectives. Both with and without pre-training, the best-performing model is based on contrastive learning. An interesting point is that when we perform pre-training, the InfoNCE-based model achieves the best performance, while the margin-based model outperforms the other variants if we do not pre-train the model. We suggest that this is because the InfoNCE-based loss is designed from the perspective of representation learning, and pre-training provides better text representations than no pre-training; thus, in this situation, the utility of the InfoNCE-based model is highlighted. \begin{table}[t] \centering \scriptsize \begin{tabular}{cccccc} \toprule \multicolumn{2}{c}{\textbf{Variants of Contrastive Loss}} & \multirow{2}*{BLEU-4} & \multirow{2}*{RG-1} & \multirow{2}*{RG-2} & \multirow{2}*{RG-L} \\ \cmidrule(lr){1-2} Pre-Train & Contrastive Loss & \\ \midrule \checkmark & InfoNCE-based & \textbf{54.56} & \textbf{65.93} & \textbf{47.44} & \textbf{59.77} \\ \checkmark & Margin-based & 54.26 & 65.93 & 47.23 & 59.57 \\ \checkmark & No & 53.70 & 65.38 & 46.57 & 58.94 \\ \cmidrule(lr){1-6} $\times$ & InfoNCE-based & 49.92 & 62.20 & 41.91 & 55.09 \\ $\times$ & Margin-based & \textbf{51.47} & \textbf{63.47} & \textbf{43.94} & \textbf{56.93} \\ $\times$ & No & 50.37 & 62.27 & 42.19 & 55.36 \\ \bottomrule \end{tabular} \caption{Comparison of contrastive learning objectives. } \label{results:discussion_stage2} \end{table} \begin{table}[t] \centering \scriptsize \begin{tabular}{ccccc} \toprule \textbf{Approach} & Gram. & Info. & Suit. & Avg.
Rank ($\downarrow$) \\ \midrule \textsc{SegExt} & \textbf{4.97} & 2.19 & 1.92 & 4.53\\ \textsc{C-Trm} & 4.95 & 2.69 & 2.44 & 3.65\\ \textsc{QualityModel} & 4.96 & 2.81 & 2.49 & 3.19 \\ \textsc{Creater} & 4.96 & \textbf{3.21} & \textbf{3.05} & \textbf{2.09}\\ \cmidrule(lr){1-5} Human-written (high-quality) & 4.99 & 3.60 & 3.22 &1.48\\ \bottomrule \end{tabular} \caption{Human evaluation results. ``Gram.'', ``Info.'', ``Suit.'' and ``Avg. Rank'' stand for grammaticality, informativeness, suitability and average rank, respectively. } \label{results:human} \end{table} \subsection{Human Evaluation}\label{experiment:human} Each ad text is measured from three views: grammaticality, informativeness (whether its content reflects the key points of the aspect term and the source) and suitability (whether it is suitable to be displayed). Each view is rated from 1 to 5 (5 is the best). We randomly choose fifty samples and invite three human judges. Table~\ref{results:human} shows that \textsc{Creater} performs well on most views and achieves the best ranking among the four comparative approaches, possessing the ability to generate fluent, informative and suitable ad texts. The reason why the informativeness and suitability of \textsc{Creater} are not as high as those of human-written texts is that the faithfulness of the generated texts is not always ideal. We leave this improvement to future work. \subsection{Case Analysis} We further show the ad texts generated by different approaches for case analysis. Table~\ref{results:case} shows a case in which the input contains a source review with an aspect term. By comparing the generated results, we can see that the ad text generated by \textsc{Creater} is more suitable for attracting users. The generated phrase \texttt{``sweet, quenching your thirst''} is more attractive than other results like \texttt{``tastes well''}.
On the whole, the overall quality of the ad texts generated by \textsc{Creater} is better than that of the other competitive approaches. \begin{table}[t] \centering \scriptsize \begin{tabular}{p{6em}p{23em}} \toprule \multirow{2}*{\textbf{Approach}} & {\textbf{Source}: \begin{CJK*}{UTF8}{gbsn}水果很新鲜, 口感很好吃着非常甜, 价格优惠, 下次还会光顾\end{CJK*} (The fruit is fresh, and it tastes delicious and sweet. The price is favorable. Will buy it next time.)}\\ & \textbf{Control code}: \begin{CJK*}{UTF8}{gbsn}口感\end{CJK*} (taste) \\ \midrule \textsc{SegExt} & \begin{CJK*}{UTF8}{gbsn}水果很新鲜, 口感很好吃着非常甜\end{CJK*} (The fruit is fresh, and it tastes delicious and sweet.) \\ \textsc{C-Trm} & \begin{CJK*}{UTF8}{gbsn}他家的水果挺新鲜, 口感挺值的\end{CJK*} (The fruit in this shop is really fresh, and the taste is worth the price.)\\ \cmidrule(lr){1-2} \textsc{QualityModel} & \begin{CJK*}{UTF8}{gbsn}超喜欢他家水果, 品质好, 口感很好\end{CJK*} (Really like the fruit in this shop, which is of good quality and tastes well.)\\ \textsc{Creater} & \begin{CJK*}{UTF8}{gbsn}份量很足, 水果新鲜, 口感{\bluehl{\uwave{甘甜很解渴}}}\end{CJK*} (The fruit is a big portion and fresh. It tastes {\bluehl{\uwave{sweet, quenching your thirst}}}.)\\ \bottomrule \end{tabular} \caption{Case analysis. Texts in parentheses are the corresponding contents translated to English.} \label{results:case} \end{table} \begin{table}[t] \centering \scriptsize \begin{tabular}{ccc} \toprule \textbf{Approach} & CTR ($\uparrow$) & CPC ($\downarrow$) \\ \midrule \textsc{Base} & - & -\\ \textsc{QualityModel} & +4.5\% & -4.1\%\\ \textsc{Creater} & \textbf{+6.9\%} & \textbf{-6.1\%}\\ \bottomrule \end{tabular} \caption{Online results (relative improvement).} \label{results:online} \end{table} \subsection{Online Experiments}\label{experiment:online} We have deployed \textsc{Creater} on a leading advertising platform. Our online experiment is conducted for one week, and all ads are displayed in mobile news feeds.
For ads containing more than one generated text (since there may be multiple control codes), we randomly choose one to display. The experiment traffic covers over 12,000 advertisers, and results are computed based on over ten million impressions to ensure the confidence of the online metrics. We compare performance among the ad texts generated by \textsc{Creater}, \textsc{QualityModel}, and those provided by advertisers (as \textsc{Base}). Core metrics are CTR and cost per click (CPC): CTR $=\frac{\text{\#click}}{\text{\#impression}}$ reveals attractiveness; CPC $=\frac{\text{total cost of advertisers}}{\text{\#click}}$ reflects ad delivery efficiency. Table~\ref{results:online} shows that \textsc{Creater} achieves significant improvements on both CTR and CPC, verifying its effectiveness in improving delivery efficiency. \section{Related Work} Most studies focus on generating ad texts given landing page contents~\cite{thomaidou2013automated}. \citet{hughes2019generating} employ a CTR model as a reward estimator with self-critical RL, and~\citet{kamigaito2021empirical} consider fluency, relevance and quality rewards to capture the characteristics of effective ad texts. \citet{kanungo2021ad} incorporate masked language modeling with self-critical learning to improve generation for multiple products. \citet{wang2021reinforcing} design a model-based RL system that mimics real user feedback. To model user click behavior, \citet{wang2020evolutionary} take clicks as a measure of text fitness and design a click-based reward, and \citet{wang2019quality} build a CTR space to obtain sample quality scores that weight the cross-entropy loss. Unlike these works, we directly model the distinctness of positive and negative targets, and propose a customized pre-training objective. \section{Conclusion} We propose \textsc{Creater} for generating ad texts, which employs contrastive learning to encourage the model to generate texts achieving higher CTR.
We design a novel self-supervised objective customized to our scenario, reducing the gap to the subsequent fine-tuning. Experiments verify that \textsc{Creater} brings significant uplift on core metrics. In future work we will take the next step of improving faithfulness, and extend the model to handle multiple aspects~\cite{chan2021controllable} and multiple reviews (which may be conflicting) with graph neural networks~\cite{wei2021graph}. \section*{Acknowledgments} We thank all the anonymous reviewers for their valuable comments on improving this work. \section*{Ethical Considerations} When we apply large-scale corpora from the Web, alleviating bias issues is necessary. We make efforts from three perspectives: (1) For input reviews, we have filtering steps to remove harmful contents, and ensure that they do not contain user privacy information such as age and gender (``Data Collection and Filtering'' of §~\ref{appendix:datacollection}); (2) For output ad texts, we are cautious before online deployment and follow a risk control procedure (``Post-Processing before Deployment'' of §~\ref{appendix:datacollection}); (3) Our model does not use user privacy information such as age and gender.
\section{Introduction} Two-dimensional (2D) materials have been extensively studied for applications in optoelectronics, thermoelectrics, sensing, catalysis, etc. While the catalogue of available 2D materials is vast \cite{Nicolosi13_Sci,Lebegue13_PRX,Mounet18_NNano}, it may be difficult to find a material that perfectly suits the desired specifications. In such cases, alloying can be used to further tune the material properties. Taking the transition metal dichalcogenide (TMD) family of 2D materials as an example, alloying the prototypical member MoS$_2$ with WS$_2$ or MoSe$_2$ leads to straightforward modification of the electrical conductivity \cite{Revolinsky64_JAP,Srivastava97_SM}, band gap and band edges \cite{Chen13_ACSNano,Komsa12_JPCL,Kang13_JAP,Li14_JACS,Mann14_AM,Rigosi16_PRB}, and spin-orbit splitting \cite{Wang15_NComm}. More interestingly, alloying can even provide properties that were not present in the constituent phases. For instance, alloying can lead to a dramatic reduction of the thermal conductivity \cite{Gu16_PRB,Qian18_APL} or passivation of defect levels \cite{Huang15_PRL,Yao16_ACSAMI}. The beneficial role of alloying has already been demonstrated in a few applications: the response characteristics of a (Mo,W)S$_2$-based photodetector \cite{Yao16_ACSAMI} and the catalytic activities of Mo(S,Se)$_2$ alloys \cite{Kiran14_Nanos,Wang15_AM} were found to be better than those of their parent compounds. Among TMD alloys, a particularly curious case is the (Mo,W)Te$_2$ alloy, since MoTe$_2$ is more stable in the H-phase and WTe$_2$ is more stable in the T'-phase, although the energy differences between the two phases are small for both parent materials and, in fact, MoTe$_2$ can also be grown in the T'-phase. The phase tunability is particularly interesting for these materials, as they have drastically different electronic properties in the different phases.
In the H-phase, these materials are semiconductors, while in the T'-phase they are semimetals or topological insulators depending on the number of layers \cite{Cazalilla14_PRL,Qian14_Sci,Sun15_PRB,Huang16_NMat}. Due to the similar energies, coexistence of H/T' phase regions has been predicted in Ref.\ \onlinecite{Duerloo16_ACSNano}, and it was also proposed that the H/T'-transition in (Mo,W)Te$_2$ could be promoted by gating \cite{Zhang16_ACSNano}. Moreover, 2D ferroelectricity was recently demonstrated in T'-WTe$_2$ even in the monolayer limit \cite{Fei18_Nat}. Raman spectroscopy is an important and versatile tool for characterizing the composition of 2D alloys and assessing their overall crystal quality, but it is not always straightforward to assign new peaks (as compared to the spectra of the parent systems) to the structural features from which they originate. Several TMD alloys have already been extensively studied by Raman spectroscopy, providing datasets covering the full composition range in many alloy systems, such as (Mo,W)S$_2$ \cite{Chen14_Nanos,Liu14_Nanos,Park18_ACSNano}, (Mo,W)Se$_2$ \cite{Tongay14_APL,Zhang14_ACSNano}, Mo(S,Se)$_2$ \cite{Mann14_AM,Feng15_ACSNano,Su14_Small,Li14_JACS}, and Re(S,Se)$_2$ \cite{Wen17_Small}. Similar studies have also been carried out for bulk alloys \cite{Dumcenco10_JAC}; for T'-(Mo,W)Te$_2$, however, we are only aware of studies of bulk alloys \cite{Oliver17_2DM,Lv17_SRep}, not of monolayer alloys. There also exist many computational studies of the Raman spectra of the pristine constituent phases \cite{MolinaSanchez11_PRB,Zhang15_CSR,Saito16_JPCM}, and even a few reports for defective MoS$_2$ \cite{Parkin16_ACSNano,Bae17_PRA}. Despite the importance of Raman spectroscopy in understanding the alloy composition and the structural order, computational studies for alloys are missing.
The reason is that, within the conventional computational approach, these calculations are computationally significantly more challenging due to the larger supercells involved and the dramatic scaling of the computational cost with the supercell size. Since the largest computationally feasible supercells are often 3$\times$3 or at most 6$\times$6 primitive cells, it is clear that (i) the impurity/defect concentration is necessarily high, and (ii) the defects are ordered, and thus the simulated spectra for a given alloy are unlikely to correctly mimic those of the randomly disordered system. These issues need to be tackled before computational Raman spectra for alloys can be calculated in a way that can be reliably compared to experiments and even holds predictive power. In this paper, we propose a computational method to simulate Raman spectra of alloys using large supercells. The method relies on the projection of the vibrational eigenvectors of the supercell onto those of the primitive cell, which are then used to weight the Raman tensors of the pristine system. When the lattice constants and the bonding chemistry of the two components are similar, as is the case in the systems considered here, the supercell eigenvectors can be efficiently solved using the mass approximation. We benchmark our method against the full DFT approach in small supercells as well as against experimental results. We first apply our method to the (Mo,W)S$_2$ alloy, for which extensive experimental results are available. We analyze the modes and, in particular, try to distinguish between one-mode and two-mode behavior, and visualize the eigenmodes that contribute to the most prominent Raman peaks. Next, we consider the T'-phase MoWTe$_2$ alloy, which is much more involved due to the lower symmetry, larger supercell, and (semi-)metallic electronic structure, although the mass approximation is expected to hold equally well.
Finally, we consider dilute concentrations of impurities in MoS$_2$, both in the Mo site and in the chalcogen site, and look for characteristic Raman signatures. \section{Methods} \subsection{Theoretical framework} As mentioned in the introduction, first-principles Raman calculations for large unit cells are computationally challenging. They involve two steps: (i) determination of the vibrational modes of the system and (ii) calculation of the Raman activity for each mode. In step (i), the vibrational modes (eigenmodes) are solutions to \begin{align} \label{eq:motion} M_k \omega^2 \mathbf{v}(k0) &= \sum_{k',l} \Phi(k'l,k0) \mathbf{v}(k'l) \\ &= \sum_{k',l} \Phi(k'l,k0) \exp(-i\mathbf{q}\cdot \mathbf{R}_l) \mathbf{v}(k'0). \end{align} where $\mathbf{v}(kl)$ are the eigenvectors for the displacement of atom $k$ with mass $M_k$ located in cell $l$ specified by the lattice vector $\mathbf{R}_l$. The elements of force constant (FC) matrix $\Phi$ are defined by the change of potential energy, $U$, with respect to the atomic displacements $$ \Phi_{\alpha\beta}(k'l,k0) = \frac{\partial^2 U}{\partial u_\alpha(k'l)\partial u_\beta(k0)} . $$ Above, $u_{\alpha}(kl)$ denotes the displacement of the $k$th atom in the $l$th unit cell in the cartesian direction $\alpha$. Constructing the force constant matrix in the case of alloys, without any symmetry, essentially requires performing $3N$ DFT total energy calculations in which each of the $N$ atoms is displaced in each of the three cartesian directions. In step (ii), the Raman intensity can be written as \begin{equation}\label{eq:Rint} I \sim \rvert \mathbf{e}_s \cdot R \cdot \mathbf{e}_i \rvert^2 \end{equation} where $\mathbf{e}_i$ and $\mathbf{e}_s$ denote the polarization vectors of the incident and scattered light and $R$ is the Raman tensor. 
In the case of nonresonant first-order Raman scattering, it is obtained from the change of the polarizability $\chi$ with respect to the phonon eigenvectors $\mathbf{v}$, and in first-principles calculations it can be evaluated using the macroscopic dielectric constant $\varepsilon_{\rm mac}$ as \begin{equation}\label{eq:Rpol} R \sim \frac{\partial \chi}{\partial \mathbf{v}} = \frac{\partial \varepsilon_{\rm mac}}{\partial \mathbf{v}}. \end{equation} This derivative needs to be evaluated at both positive and negative displacements for each of the $3N$ eigenvectors $\mathbf{v}$, yielding a total of $6N$ calculations. Moreover, regardless of the approach, evaluating $\varepsilon_{\rm mac}$ is generally significantly more time-consuming than DFT total energy calculations. While step (ii) is the more time-consuming one, already step (i) becomes challenging in large low-symmetry systems. In the case of MoS$_2$, the limits are currently at around a 10$\times$10 supercell for step (i) and a 6$\times$6 supercell for step (ii). In order to properly account for the random distribution of atoms and the resulting broadening of the spectra, large supercells or averaging over several configurations is required. Herein, we adopt two approximations to tackle each of these issues: a mass approximation for step (i) and a projection onto the primitive cell Raman-active eigenmodes for step (ii). In the mass approximation (MA), only the masses are changed in Eq.\ \ref{eq:motion}, whereas the force-constant matrix remains untouched \cite{Baroni90_PRL,Menendez}. Naturally, this can only be applied in cases where the nature of the bonding and the atomic structure remain very similar, as for instance in Al$_x$Ga$_{1-x}$As \cite{Baroni90_PRL}. Due to the small momentum of the photons commonly used in Raman spectroscopy, and especially in non-resonant Raman where the photon energy needs to be less than the band gap, first-order Raman scattering can only involve a single phonon near $q=0$.
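As an illustration of step (i) under the mass approximation, the sketch below solves the $q=0$ version of Eq.\ \ref{eq:motion} through the mass-weighted dynamical matrix $D = M^{-1/2}\Phi M^{-1/2}$. The two-site force-constant matrix is a toy stand-in for a DFT-computed $\Phi$, and all names are our own:

```python
import numpy as np

def gamma_modes(Phi, masses):
    # Solve M w^2 v = Phi v at q = 0 via the symmetric dynamical matrix
    # D = M^(-1/2) Phi M^(-1/2); returns (squared frequencies, eigenvectors).
    m = np.repeat(np.asarray(masses, dtype=float),
                  Phi.shape[0] // len(masses))  # one mass per degree of freedom
    D = Phi / np.sqrt(np.outer(m, m))
    return np.linalg.eigh(D)

# Toy force-constant matrix: two sites coupled by a single spring k.
k = 1.0
Phi = np.array([[k, -k],
                [-k, k]])
w2_host, _ = gamma_modes(Phi, [1.0, 1.0])  # "host" masses
# Mass approximation: keep Phi, substitute a heavier mass on site 2.
w2_ma, _ = gamma_modes(Phi, [1.0, 2.0])
```

For this toy chain the optical frequency obeys $\omega^2 = k(1/m_1 + 1/m_2)$, so the mass substitution shifts it down while leaving the force constants untouched, exactly the behavior the MA imposes on the supercell spectrum.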
For pristine materials the $q=0$ phonons are trivially obtained as the $\Gamma$-point solutions of Eq.\ \ref{eq:motion} in the primitive cell (PC). If we consider a supercell (SC) of the pristine material, the $\Gamma$-point contains several modes arising from the folding of the phonon bands. In an explicit calculation of Raman intensities using Eq.\ \ref{eq:Rpol} the intensities of the folded modes will be zero and thus the Raman spectrum remains the same. Alternatively, the folded modes in the supercell $\Gamma$-point could be unfolded back to the primitive cell Brillouin zone (BZ) through projection onto plane waves $g(\mathbf{q}) = \exp(i\mathbf{q} \cdot \mathbf{R})$, where $\mathbf{q}$ corresponds to one of the PC q-points that fold into the $\Gamma$-point of the SC. Adopting the notation where $\mathbf{v}^{\rm SC}(kl)$ refers to the $l$th primitive cell within the supercell and $k$ indexes the atoms in the unit cell, the projection is written out as \begin{equation}\label{eq:proj} \inner{g(\mathbf{q})}{\mathbf{v}^{SC}}_{\alpha,k} = \sum_l \exp(-i\mathbf{q} \cdot \mathbf{R}_{l}) v_{\alpha}^{SC}(kl). \end{equation} While we could use this equation to unfold to any $\mathbf{q}$, we are here primarily interested in the $\Gamma$-point, which fortuitously also yields a particularly simple expression, since the exponent in Eq.\ \ref{eq:proj} is always unity and thus one ends up with a straightforward sum over the eigenvectors. The total $\Gamma$-point weight can be obtained by taking the square of the projections and summing over $k$ and $\alpha$. Finally, we sum over all the SC states $i$ with frequencies $\omega_i$ to obtain the $\Gamma$-point weighted density of states \begin{equation}\label{eq:proj2} n(\omega) = \sum_i \sum_{\alpha,k} \rvert \inner{g(\mathbf{q})}{\mathbf{v}^{SC,i}}_{\alpha,k} \rvert^2 \delta(\omega-\omega_i) \end{equation} which we here denote as GDOS{}.
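Eqs.\ \ref{eq:proj} and \ref{eq:proj2} translate almost directly into array operations. A minimal sketch, assuming the SC eigenvectors are stored as arrays indexed by (cell $l$, atom $k$, direction $\alpha$); the names and the Gaussian broadening of the delta functions are our own choices:

```python
import numpy as np

def q_weight(v_sc, q, R):
    # Eq. (proj): project one SC eigenvector, shape (n_cells, n_atoms, 3),
    # onto the plane wave exp(-i q.R_l); return the total squared weight.
    phase = np.exp(-1j * (R @ q))               # one phase per primitive cell l
    proj = np.einsum('l,lka->ka', phase, v_sc)  # sum over cells
    return float(np.sum(np.abs(proj) ** 2))

def gdos(freqs, vecs, q, R, grid, sigma=1.0):
    # Eq. (proj2): q-weighted DOS, with Gaussian-broadened delta functions.
    n = np.zeros_like(grid)
    for w, v in zip(freqs, vecs):
        n += q_weight(v, q, R) * np.exp(-(grid - w) ** 2 / (2 * sigma ** 2))
    return n

# Two-cell toy supercell: a uniform (Gamma-folded) mode keeps full weight,
# while an alternating-phase (zone-boundary) mode has zero Gamma weight.
R = np.array([[0., 0., 0.], [1., 0., 0.]])
q_gamma = np.zeros(3)
v_uniform = np.ones((2, 1, 3))
v_alternating = v_uniform * np.array([1., -1.])[:, None, None]
```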
Since each mode in a pristine supercell has non-zero weight at only a single q-point of the PC BZ, the true $q=0$ modes can easily be found. In alloys or defective systems, where the translational symmetry is broken, the unfolding/projection procedure still works, but leads to each SC mode having contributions from q-points throughout the PC BZ with different weights. This type of unfolding procedure has already been used in the past to analyze both the electronic and phonon band structures of alloys \cite{Allen2013,Zheng2016, Huang2014, Gordienko17_PSSB}. Baroni et al. found that the GDOS of the primitive cell can be used to closely approximate the Raman spectra of alloys \cite{Baroni90_PRL}. The modes that were inactive due to the momentum-conservation law can gain weight at $q=0$ and start to show up in the Raman spectra and, vice versa, the modes that were originally purely $q=0$ modes can leak weight to other q-points and thereby lose Raman intensity. Such an analysis is straightforward when the frequencies of the Raman-active and -inactive modes are clearly separated. If they are close, it is no longer clear which part of the GDOS would be Raman-active. To solve this issue, we here propose to project the SC modes not onto plane waves but onto the PC eigenmodes at the $\Gamma$-point. That is, adopting the same notation for $\mathbf{v}^{SC}(kl)$ as above, \begin{equation}\label{eq:projeig} w_{ij} = \inner{\mathbf{v}^{PC,i}}{\mathbf{v}^{SC,j}} = \sum_{\alpha,k,l} v_{\alpha}^{PC,i}(k0) v_{\alpha}^{SC,j}(kl). \end{equation} Here, due to the mass approximation, the atoms are in the same positions in both the alloy and the pristine cells. However, the projection appears to work well also with the DFT-relaxed structures. Since the projection is onto the PC modes at the $\Gamma$-point, we simultaneously obtain the $\Gamma$-point projection (or unfolding).
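Eq.\ \ref{eq:projeig} is a plain inner product once the SC and PC degrees of freedom are aligned. A sketch, assuming a cell-major ordering of the SC eigenvector components (this ordering, like the function name, is an assumption of the illustration, not a statement about the authors' implementation):

```python
import numpy as np

def pc_mode_weights(V_pc, V_sc, n_cells):
    # Eq. (projeig): w_ij = <v_PC,i | v_SC,j>. The PC eigenvectors
    # (columns of V_pc, shape (n_dof_pc, n_pc_modes)) are repeated over the
    # n_cells primitive cells; V_sc has shape (n_dof_pc*n_cells, n_sc_modes)
    # with degrees of freedom ordered cell by cell.
    V_tiled = np.tile(V_pc, (n_cells, 1))  # repeat over cells l
    return V_tiled.T @ V_sc                # weight matrix w_ij

# Pristine check: a Gamma-folded SC mode projects onto exactly one PC mode.
V_pc = np.eye(2)                           # two trivial PC "modes"
v_sc = np.tile(V_pc[:, 0], 2)[:, None] / np.sqrt(2.0)
W = pc_mode_weights(V_pc, v_sc, n_cells=2)
```

With normalized eigenvectors, a pristine folded $\Gamma$ mode acquires the weight $\sqrt{n_{\rm cells}}$ on a single PC mode and zero on the others under this convention, consistent with the completeness argument in the text.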
We note that the summation of the projections $w_{ij}^2$ over all $k$ and $\alpha$ yields the same GDOS as via the plane wave projections (Eq.\ \ref{eq:proj2}), since both constitute a complete basis set. The Raman tensor of the SC mode is obtained by multiplying the PC mode projection by the respective Raman tensors of the pristine system, i.e., \begin{equation}\label{eq:Rsum} R^{\rm SC,j} = \sum_{i} w_{ij} R^{\rm PC,i} \end{equation} where the sum goes over the PC modes $i$ and clearly only Raman-active modes contribute. Finally, the Raman intensity of the SC mode $j$ is obtained using Eq.\ \ref{eq:Rint}, which yields \begin{align} I^{\rm SC,j} &\sim \rvert \mathbf{e}_s \cdot R^{\rm SC,j} \cdot \mathbf{e}_i \rvert^2 \label{eq:RGDOS} \\ &= \sum_i w_{ij}^2 \rvert \mathbf{e}_s \cdot R^{\rm PC,i} \cdot \mathbf{e}_i \rvert^2 \nonumber \\ &+ \sum_{i\neq k} ( \mathbf{e}_s \cdot w_{ij} R^{\rm PC,i} \cdot \mathbf{e}_i )^* ( \mathbf{e}_s \cdot w_{kj} R^{\rm PC,k} \cdot \mathbf{e}_i ) \label{eq:RGDOS2} \\ &\approx \sum_i w_{ij}^2 I^{PC,i}. \label{eq:RGDOS3} \end{align} Squaring the sum over PC modes leads to $i=k$ and $i\neq k$ terms, which have been separated in the second step. These cross terms can be important if the SC mode has appreciable weight arising from several PC modes. In the last step, we have assumed that they are negligible. While this is indeed not always a good assumption, the advantage is that we can then sum over intensities rather than Raman tensors. This is useful because we could then, e.g., use experimentally determined intensities instead of the calculated ones. We denote the total Raman-intensity-weighted GDOS as RGDOS. When the contributions from each mode to the total Raman spectra are shown in the Results section, these correspond only to the first term in Eq.\ \ref{eq:RGDOS2}.
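The assembly of Eqs.\ \ref{eq:Rsum}--\ref{eq:RGDOS3} then amounts to a weighted sum of $3\times3$ tensors. Below is a sketch of both the full intensity (with cross terms) and the diagonal approximation of Eq.\ \ref{eq:RGDOS3}; array shapes and names are our own assumptions:

```python
import numpy as np

def rgdos_intensity(w_j, R_pc, e_i, e_s):
    # Eqs. (Rsum) and (RGDOS): Raman intensity of SC mode j from its PC-mode
    # weights w_j (n_pc_modes,) and PC Raman tensors R_pc (n_pc_modes, 3, 3),
    # cross terms included.
    R_sc = np.einsum('i,iab->ab', w_j, R_pc)     # Eq. (Rsum)
    return float(np.abs(e_s @ R_sc @ e_i) ** 2)  # Eq. (Rint)

def rgdos_intensity_diag(w_j, R_pc, e_i, e_s):
    # Eq. (RGDOS3): drop the cross terms and sum weighted PC intensities.
    amp = np.einsum('a,iab,b->i', e_s, R_pc, e_i)  # per-PC-mode amplitude
    return float(np.sum(w_j ** 2 * np.abs(amp) ** 2))
```

When an SC mode draws weight from a single Raman-active PC mode the two expressions coincide; with several contributing PC modes they differ by exactly the cross terms of Eq.\ \ref{eq:RGDOS2}.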
We note that in some previous works the Raman tensor of alloy/defective supercells has been decomposed using the Raman tensors of the different symmetries of the pristine host for analysis purposes \cite{Ikeda17_PRB,Qian18_Langm}. Here, we essentially proceed in the opposite direction in order to construct the final Raman tensor. Moreover, our approach is in principle more general, as it can distinguish between different modes of the same symmetry. To sum up, the main ingredients of the method are the projection of the supercell vibrational eigenmodes onto the pristine-system eigenmodes (Eq.\ \ref{eq:projeig}) and the use of those projections as weights when summing over the primitive cell Raman tensors (Eq.\ \ref{eq:Rsum}). The general applicability of our method is mostly limited by the eigenmode projection, which essentially requires that there be a reasonable mapping between the atomic structures of the non-pristine and pristine systems. Extension of the method to simulate second-order non-resonant scattering should be fairly straightforward. To simulate resonant Raman scattering, in principle one can simply insert the resonant Raman tensors into Eq.\ \ref{eq:Rsum}. In practice, the modifications of the electronic structure also need to be carefully considered, the details of which strongly depend on the system. \subsection{Computational details and benchmarking} All first-principles calculations are carried out with VASP \cite{VASP}. Exchange-correlation contributions are treated with the PBEsol functional \cite{PBEsol}. A plane wave basis with a cutoff energy of 550 eV is employed to represent the electronic wave functions. The geometry optimization is continued until the energy differences and ionic forces are converged to less than $10^{-6}$ eV and 1 meV/\AA, respectively.
The first Brillouin zone of the primitive cell is sampled by a 12$\times$12 mesh for H-MoS$_2$/WS$_2$ and by a 12$\times$24 mesh for T'-MoTe$_2$/WTe$_2$, scaled down in proportion to the supercell size. The polarizability tensors for the Raman calculations are determined within the framework of the finite displacement method \cite{Raman_unpolarized}. The phonon spectra are assessed using the PHONOPY code \cite{PHONOPY} with a 6$\times$6 supercell for MoS$_2$/WS$_2$ and a 4$\times$4 supercell for MoTe$_2$/WTe$_2$. The Raman intensity is calculated as an average over the XX and XY configurations for the light polarization ($\mathbf{e}_i$$\mathbf{e}_s$). \begin{figure}[!ht] \begin{center} \includegraphics[width=8cm]{bands.pdf} \end{center} \caption{\label{fig:valid} (a) Phonon dispersion curves of pristine MoS$_2$ (left) and WS$_2$ (right) calculated either self-consistently using DFT (solid lines) or using the mass-approximation (dashed lines). Dots denote experimental values obtained from Raman spectroscopy \cite{Zhang15_CSR,Livneh15_2DM}. (b) Schematic representation of the vibrations of Mo (green) and S (yellow) atoms in the different optical phonon modes. (c) GDOS from the 3$\times$3 SQS of Mo$_{0.56}$W$_{0.44}$S$_2$, either calculated fully with DFT or using the mass approximation. } \end{figure} We start by benchmarking our computational scheme with respect to the mass approximation. We show in Fig.\ \ref{fig:valid}(a) the phonon dispersion curves of MoS$_2$ and WS$_2$ calculated with DFT and with the mass-approximated versions (i.e., using the MoS$_2$ FC matrix but substituting the mass of Mo by that of W and vice versa). The dispersions of the bands are captured very well with the MA, as are the acoustic mode frequencies.
There is a nearly constant downshift of the optical mode frequencies of WS$_2$ by about 10 cm$^{-1}$ with respect to the self-consistent WS$_2$ calculation, and vice versa an upshift of the MoS$_2$ frequencies when using the WS$_2$ FC with the Mo mass, suggesting that W-S bonds are slightly stronger than Mo-S bonds. In the remainder of this work, we have chosen to use the MoS$_2$ force constants. With this choice, when comparing to the experimental values for the two Raman-active modes, E$'$ and A$_1'$, our calculated frequencies are slightly overestimated for MoS$_2$ and slightly underestimated for WS$_2$ when compared to the full DFT calculation. The effect of the MA is further illustrated in Fig.\ \ref{fig:valid}(c) in the case of the (Mo,W)S$_2$ alloy supercell. The structural models used in the alloy calculations are constructed using the special quasirandom structures (SQS) method \cite{Zunger1990}. As seen in Fig.\ \ref{fig:valid}(c) for the 3$\times$3 Mo$_{0.56}$W$_{0.44}$S$_2$ SQS, the MA frequencies are downshifted throughout the spectrum, similar to the pristine systems. To allow for a better comparison with the DFT results, we also show a spectrum shifted up by 4.5 cm$^{-1}$ (from the alloy composition times 10 cm$^{-1}$), after which the main peaks (E$'$, A$_1'$) and the high-frequency part of the E$'$ feature (from 350 to 400 cm$^{-1}$) agree very well. The low-frequency part of the E$''$ features still has a too low frequency, which is due to the fact that these modes are localized on the W atoms, as will be seen later. \begin{figure}[!ht] \begin{center} \includegraphics[width=8cm]{demo.pdf} \end{center} \caption{\label{fig:valid2} (a) GDOS from 20 random atomic configurations for Mo$_{0.56}$W$_{0.44}$S$_2$ (gray lines) together with their average (blue, solid line). GDOS for the 12$\times$12 SQS supercell is also shown for comparison. Inset: Comparison of results when the averaging is done over 4, 10, or 20 configurations.
(b) Primitive cell eigenmode projected GDOS for the SQS cell [same as in panel (a)]. (c) Raman spectra from a full DFT calculation for the 3$\times$3 SQS supercell of Mo$_{0.56}$W$_{0.44}$S$_2$ compared to its RGDOS, and the contributions to it from the two primitive cell eigenmodes. } \end{figure} Next, we inspect the importance of statistical sampling. We use a supercell comprising 12$\times$12 primitive cells and 20 different random configurations (not SQS) for each composition. Fig.\ \ref{fig:valid2}(a) shows the spectra from all 20 configurations together with the averaged spectrum. The large variation among the individual spectra indicates that the 12$\times$12 supercell is still not quite large enough to correctly describe the alloy with a single supercell. As shown in the inset, averaging over just 4 configurations already yields a spectrum quite similar to that from 20 configurations. In addition, we compare the averaged spectrum to that of an SQS model created within the 12$\times$12 supercell. We consider pairs up to 8 {\AA} [three effective cluster interaction (ECI) parameters] and three-body clusters up to 4 {\AA} (two ECI). The SQS performs better than the individual random configurations, but fails to correctly capture the smooth broadening of the main peaks, instead yielding spikier features. This originates from the coarseness of the mesh of k-points that fold into the $\Gamma$-point in small supercell calculations. Note that the A$_1'$ mode is in practice completely unaffected by the mixing, as it only involves movement of the chalcogen atoms while the metal atoms remain fixed [see Fig.\ \ref{fig:valid}(b)]. Finally, we benchmark the eigenmode-projection scheme. First, we illustrate in Fig.\ \ref{fig:valid2}(b) the eigenmode contributions in the case of the 12$\times$12 SQS. In the H-MoWS$_2$ alloy, the modes remain fairly separated in frequency and thus the resulting Raman spectra could fairly safely be evaluated from just the GDOS.
On the other hand, the projection scheme provides further insight into the origin of the spectral features. For instance, the bump at around 400 cm$^{-1}$ originates from the E$'$ mode and not from the A$_1'$ mode. Also, at large W concentrations, the A$_2''$ features start to overlap with the E$'$/A$_1'$ features, as will be seen in the Results section. Moreover, we need to compare how well the approximated Raman spectra match explicit Raman calculations. For this we need to adopt a smaller system, and since this is only for benchmarking purposes we can take a 3$\times$3 supercell, again created using the SQS scheme. The RGDOS captures surprisingly well all the features of the full Raman calculation, as shown in Fig.\ \ref{fig:valid2}(c). In particular, the peak shapes/structures are correctly reproduced, even if some intensities differ, with the most significant discrepancy occurring near 385 cm$^{-1}$. From the comparison of the spectra in Figs.\ \ref{fig:valid2}(b) and (c) it is again obvious that a 3$\times$3 SQS cannot properly describe the Raman spectrum of the random alloy. We have demonstrated that large supercells are needed to properly describe the phonon spectra of random alloys and that the RGDOS can be used to give a good estimate of the Raman spectrum. While the mass approximation may produce some inaccuracies in the peak positions, we feel that this is an acceptable tradeoff for the ability to correctly describe the random alloy. In the following, the results for the alloys are obtained by averaging over 20 configurations of the 12$\times$12 supercell and using the eigenmode projection. In a few cases, the analysis of the results is done using the SQS structure, which greatly simplifies the analysis. \section{Results} \subsection{H-(Mo,W)S$_2$} \begin{figure*}[!ht] \begin{center} \includegraphics[width=16cm]{bigH.pdf} \end{center} \caption{\label{fig:MoWH} (a) RGDOS for the Mo$_{1-x}$W$_{x}$S$_2$ alloy, with $x$ ranging from 0 to 1.
The total RGDOS is shown with a solid black line. The contributions from the E$'$, A$_1'$, and A$_2''$ modes are shown by yellow, blue, and grey shaded areas, respectively. (b) Experiments, adapted from Ref.\ \onlinecite{Chen14_Nanos}. (c) The calculated compositional dependence of the Raman peak frequencies vs. the experimental counterparts. (d) Illustration of selected eigenmodes (i - iv) from (a). The blue, red, and yellow symbols correspond to Mo, W, and S atoms, respectively. The atoms are positioned in the supercell and the size of the symbols is proportional to the amplitude of the vibrations.} \end{figure*} The simulated Raman spectra for the H-(Mo,W)S$_2$ monolayer as a function of the composition are shown in Fig.\ \ref{fig:MoWH}(a), and can be compared to the experimental Raman spectra shown in Fig.\ \ref{fig:MoWH}(b) (from Ref.\ \onlinecite{Chen14_Nanos}). The calculated A$_2''$ mode, although not Raman-active, is also shown, since it is infrared active and shows large changes with the composition. To make it visible in the simulated spectra we use the same Raman tensor as for A$_1'$. The experimental and calculated peak positions are collected in Fig.\ \ref{fig:MoWH}(c). The A$_1'$ mode consists of only chalcogen movement and thus, in our mass approximation approach, this mode remains strictly constant. Also E$''$ is unaffected by the MA and thus not shown in the calculated spectra, although its activation due to disorder is visible in the experimental spectra. Overall, good agreement with experiment is observed for the number of peaks as well as their positions: (i) For the E$'$ mode, we confirm pronounced two-mode behavior with separate MoS$_2$- and WS$_2$-derived peaks. (ii) There is a clear downshift of the E$'$(MoS$_2$) peak, whereas the E$'$(WS$_2$) peak remains nearly constant in energy.
In experiment, at large W concentrations the MoS$_2$-derived peak broadens and possibly mixes with the d feature (marked d, as it was denoted the ``disorder-related mode'' in Ref.\ \onlinecite{Chen14_Nanos}). (iii) There are two additional features around the WS$_2$ peak: one at about 345 cm$^{-1}$ (marked \#) and one at about 360 cm$^{-1}$ (unmarked) in the calculations. The latter is difficult to observe in Fig.\ \ref{fig:MoWH}(b), but evident in the line shape fits in Ref.\ \onlinecite{Chen14_Nanos}. (iv) Both in experiment and theory, at small W concentrations, the W-derived features form a broad plateau below the E$'$(MoS$_2$) peak with no particularly distinct peaks. (v) A small bump develops between the E$'$(MoS$_2$) and A$_1'$ peaks, which originates fully from the E$'$ mode. While in the calculations it prevails at intermediate concentrations, in experiments it is only clearly visible on the W-rich side, and thus it is not clear whether their origin is the same. In order to understand the atomic origin of these peaks, we illustrate the eigenvectors for selected cases in Fig.\ \ref{fig:MoWH}(d), where the sizes of the circles at the position of atom $k$ correspond to the eigenvector weighted by the $\Gamma$-point projection, $|\mathbf{v}(k0)|^2 \cdot w^2$, summed over all modes within the selected range of frequencies marked in Fig.\ \ref{fig:MoWH}(a). As expected, the modes corresponding to the MoS$_2$- and WS$_2$-derived peaks are localized around Mo and W atoms, respectively. The broader feature between E$'$ and A$_1'$ appears to be localized at the edges of the Mo regions (panel iii). The ``disorder-related mode'' is not very visible at $x=0.5$, but at $x=0.875$ our analysis clearly shows that it is localized on isolated Mo atoms (panel iv). The smaller peaks around it, on the other hand, are localized on Mo clusters (not shown), whose density in W-rich samples is naturally small.
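The per-atom weighting $|\mathbf{v}(k0)|^2 \cdot w^2$ behind these visualizations reduces to a short sum over the selected modes. A sketch with assumed array shapes (real eigenvectors; for complex ones, replace the product of eigenvector components by $|v|^2$):

```python
import numpy as np

def atom_weights(vecs, w2, mode_sel):
    # Per-atom localization measure: |v(k)|^2 * w^2 summed over the modes
    # in the selected frequency window.
    # vecs: (n_modes, n_atoms, 3); w2: squared Gamma projections (n_modes,).
    sel = np.asarray(mode_sel)
    return np.einsum('jka,jka,j->k', vecs[sel], vecs[sel], w2[sel])

# Toy case: one mode fully localized on atom 0 with full Gamma weight.
vecs = np.array([[[1.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0]]])
w2 = np.array([1.0])
```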
The peaks denoted by $\#$-modes appear visually very similar to the main WS$_2$-derived modes and thus we think that this shoulder just originates from asymmetrical broadening of the WS$_2$ peak. On the other hand, this mode was assigned to 2LA(M) in Ref.\ \onlinecite{Chen14_Nanos}. Our calculated LA(M) frequency for WS$_2$ is 177 cm$^{-1}${}, yielding 2LA(M) at 354 cm$^{-1}${}, and thus lies slightly above the E$'$(WS$_2$)-peak in our calculations, but could also be slightly below the E$'$(WS$_2$)-peak in experiments. Since we here only simulate first-order Raman scattering, we know that the shoulder in calculations contains no 2LA(M) contribution, but naturally we cannot exclude such an additional contribution in the experimental spectra. \subsection{T'-(Mo,W)Te$_2$} We next study the T'-(Mo,W)Te$_2$ alloy, which is computationally a significantly more challenging case, since (i) the unit cell is larger and has lower symmetry than the H-phase, thus leading to a larger number of displacements in the pristine system, and (ii) it is (semi-)metallic, necessitating the use of large k-point meshes. The latter also means that the Raman spectra will necessarily be resonant, whereas the evaluation of the Raman tensor from the change of the macroscopic dielectric constant assumes non-resonant conditions. Resonant Raman tensors can be used just as well in our approach for simulated Raman spectra (Eq.\ \ref{eq:RGDOS}), but their evaluation from first principles is again a step up in computational complexity and moreover makes the tensors frequency-dependent. To avoid these problems, we here use the non-resonant Raman tensors, which are moreover normalized in order to better highlight all the Raman-active features, although this means that the relative intensities of the peaks are not correctly captured. The classification of the $\Gamma$-point vibrations, $\Gamma_{C_{i}}$ = 9 A$_{g}$ + 9 A$_{u}$, shows that half (A$_g$) of the modes are Raman-active.
These modes can be arranged in two groups: modes vibrating along the direction of the zigzag Mo/W chain, denoted by A$_g^{z}$, and modes vibrating perpendicular to the zigzag chain, denoted by A$_g^{a}$. \begin{figure}[!ht] \begin{center} \includegraphics[width=8.5cm]{bandsTe.pdf} \end{center} \caption{\label{fig:massbandsTe} Phonon dispersion curves of pristine MoTe$_2$ (left) and WTe$_2$ (right) calculated either self-consistently using DFT (solid lines) or using the mass approximation (dashed lines). } \end{figure} Phonon dispersion curves calculated by DFT and by the mass approximation are shown in Fig.\ \ref{fig:massbandsTe}. We again observe that frequencies from MA are shifted down by about 10 cm$^{-1}$ in WTe$_2$, but the order and dispersion of the bands are captured well. The only clear deviation occurs for WTe$_2$ around 220 cm$^{-1}$ at the $\Gamma$-point, where the quasi-degenerate Raman-active modes from the DFT calculation break into two modes at 200 cm$^{-1}$ and 212 cm$^{-1}$ in the MA calculation, echoing the splitting observed in MoTe$_2$ at 250 cm$^{-1}$ and 280 cm$^{-1}$. This feature is observed in experiment for bulk (Mo,W)Te$_2$ \cite{Joshi2016}. It is worth noting that the lattice constants of MoTe$_2$ (3.37 \AA, 7.15 \AA) and WTe$_2$ (3.42 \AA, 7.12 \AA) are not quite as close as those of the parent compounds in H-(Mo,W)S$_2$. \begin{figure*}[ht!] \begin{center} \includegraphics[width=17cm]{bigTe.pdf} \end{center} \caption{\label{fig:MoWTe} (a) RGDOS for T'-(Mo,W)Te$_2$. The total RGDOS is shown with a solid black line. The shaded areas show contributions from the projection to eigenmodes of the pristine T$'$-MoTe$_2$. The modes are colored sequentially (and the color sequence loops once). (b) Evolution of the peak maxima positions for Mo$_{1-x}$W$_x$Te$_2$ alloys. Experimental data (red open circles) are taken from Refs.\ \cite{Jiang16_SRep,Chen17_ACSNano}. (c) Illustration of selected SC eigenmodes from the $x=0.75$ case, as indicated in (a).
The SC eigenmodes within the frequency range are weighted by the projection to the dominant PC eigenmode. The blue, red, and yellow symbols correspond to Mo, W, and Te atoms, respectively. } \end{figure*} The calculated RGDOS for the monolayer T'-(Mo,W)Te$_2$ alloy as a function of composition are shown in Fig.\ \ref{fig:MoWTe}(a) and the peak positions are collected in Fig.\ \ref{fig:MoWTe}(b). We recall that while in H-(Mo,W)S$_2$ the alloy modes could easily be assigned to the pristine modes from which they originated thanks to the large separation in frequency, here, due to the large number of modes, the mixing is more complicated and thus the eigenmode projection is necessary to distinguish between the Raman-active and -inactive features. The projection scheme allows us to distinguish the origins of each peak in terms of the primitive cell eigenmodes, revealing that the ordering of the modes is retained throughout the alloys. The eigenvectors of these modes in the parent phases have been illustrated in several previous works \cite{Jiang16_SRep,Kim16_Nanos,Beams16_ACSNano,Chen16_NL,Grzeszczyk16_2DM,Zhang16_NComm,Wang17_AFM,Chen17_ACSNano}, and are not repeated here. Nevertheless, they show that the six lowest frequency modes are mostly localized to Te atoms, and the three high frequency modes to Mo/W atoms. Consequently, the six lowest frequency modes exhibit single-mode behavior and the three high frequency modes two-mode behavior, reflecting the fact that alloying is carried out in the metal sublattice. Among the six lowest frequency modes that exhibit the single-mode behavior, the third one is silent in the metal sublattice and the fifth one nearly silent \cite{Chen17_ACSNano}, and thus they show very little change upon alloying.
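A minimal sketch of such a projection of supercell (SC) eigenvectors onto primitive-cell (PC) $\Gamma$-point eigenmodes is given below. It assumes real, orthonormal eigenvectors and a supercell atom ordering in which the PC atom pattern simply repeats cell by cell; the function and variable names are ours, not from the paper:

```python
import numpy as np

def pc_mode_projections(v_sc, pc_modes, n_cells):
    """Project one supercell eigenvector onto the Gamma-point
    eigenmodes of the primitive cell.

    v_sc     : (n_cells * n_pc_atoms, 3) supercell eigenvector
    pc_modes : (n_modes, n_pc_atoms, 3) orthonormal PC eigenvectors
    Returns the weight w_m^2 = |<v_pc(m), v_sc>|^2 per PC mode,
    where v_pc(m) is tiled over all cells and renormalized.
    """
    # Tile each PC eigenvector over the cells; 1/sqrt(n_cells) keeps
    # the tiled vectors normalized.
    tiled = np.tile(pc_modes, (1, n_cells, 1)) / np.sqrt(n_cells)
    v = v_sc.reshape(-1)
    return np.array([abs(t.reshape(-1) @ v) ** 2 for t in tiled])
```

Because the tiled $\Gamma$-point eigenvectors form an orthonormal set, the weights of a given SC mode sum to at most one, with the remainder corresponding to off-$\Gamma$ character.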
There are also clear differences in the degree of the alloying-induced broadening of the other four peaks, with the first one showing the least broadening, the second one the strongest broadening, and the fourth and sixth modes falling in between. Fig.\ \ref{fig:MoWTe}(c) illustrates the second and fourth modes of the $x=0.75$ alloy. The fourth mode (panel ii) is localized very clearly only on the Te atoms and mostly on the rows with long metal-metal distance, whereas the second mode has also weight on the metal atoms and is mostly localized on the rows with short metal-metal distance. The last three modes in Fig.\ \ref{fig:MoWTe}(a) show a very clear two-mode behavior with splitting into MoTe$_2$- and WTe$_2$-like modes at intermediate alloy concentrations. The eigenvectors in Fig.\ \ref{fig:MoWTe}(c) show that these modes are localized almost completely on the metal atoms and the two-mode behavior reflects the localization around Mo and W atoms. The eigenmode projections illustrated in Fig.\ \ref{fig:MoWTe}(c) are found to provide additional insight into the peak origins. For instance, there is a mode at 200 cm$^{-1}${} in both the MoTe$_2$ and WTe$_2$ phases, but the projections reveal that they correspond to different modes. Somewhat similarly, the 160 cm$^{-1}${} peak in WTe$_2$ is seen to contain two modes, which in the MoTe$_2$ region are located at 160 cm$^{-1}${} and 200 cm$^{-1}${}. Comparison to experimental results is hindered by the fact that, to the best of our knowledge, all the experimental T'-Mo$_{1-x}$W$_x$Te$_2$ alloy results are from bulk samples \cite{Revolinsky64_JAP,Oliver17_2DM,Lv17_SRep,Rhodes17_NL}. Monolayer data are only available for pure MoTe$_2$ and WTe$_2$ \cite{Chen17_ACSNano,Jiang16_SRep,Kim16_Nanos}. Naturally, there exists also a large body of data for pure bulk or few-layer phases \cite{Joshi16_APL,Beams16_ACSNano,Ma16_PRB,Chen16_NL,Grzeszczyk16_2DM,Wang17_AFM,Zhang16_NComm}.
Although the bulk and monolayer frequencies are generally fairly close, to facilitate a proper comparison, in Fig.\ \ref{fig:MoWTe}(b) we only show the available monolayer results for MoTe$_2$ and WTe$_2$. For the low-frequency modes in MoTe$_2$ and WTe$_2$, the calculated and experimental frequencies agree very well. The agreement deteriorates for the high-frequency modes, but the experimental and calculated peaks can still be mapped. Also the ordering of the A$_g^{a}$ and A$_g^{z}$ modes is correctly reproduced. When comparing to the bulk alloy results, our calculations indicate that the reported disorder-activated modes around 180 cm$^{-1}$ and 202 cm$^{-1}$ \cite{Oliver17_2DM} can be a mix of the last three high-frequency modes and can be tuned by varying the composition. Our calculations produce a large number of small peaks at these frequencies, with contributions from all three high-frequency modes, but we do not obtain one or two prominent peaks. This might be caused by the normalization of the Raman tensors in our simulated spectra. The peak at 130 cm$^{-1}${} in MoTe$_2$ was found to split into two peaks separated by about 3 cm$^{-1}${} upon increasing the W concentration \cite{Oliver17_2DM,Lv17_SRep}, and was assigned to mixing in Ref.\ \onlinecite{Oliver17_2DM} and to a phase change from monoclinic to orthorhombic lattice in Refs.\ \cite{Lv17_SRep,Chen16_NL}. Since this peak is silent in the metal sublattice, it shows no alloying-induced splitting nor even any broadening in our calculations, and thus our calculations do not support the assignment to mixing. For the highest frequency mode, our calculations correctly capture the broadening toward higher frequencies in both the MoTe$_2$ and WTe$_2$ regions \cite{Oliver17_2DM}. \subsection{Impurities in H-MoS$_2$} The Raman signatures can be used to identify impurities at small concentrations (small with respect to alloying, i.e., within a few percent).
In some instances, as seen also in the previous sections, impurities can produce very distinct new peaks, broaden existing peaks, or result in very broad features. In this section, we insert a small number of impurity atoms into the lattice and examine the trends in the changes of the Raman spectra. The mass approximation limits our study to cases where the chemical bonding upon substitution is expected to remain fairly similar. To this end, we either replace the Mo atom by another transition metal element or the S atom by an atom from the nitrogen, oxygen, or fluorine groups. Clearly, this is expected to work best for the elements in the same column of the periodic table and to worsen with increasing distance from it. The small impurity concentration helps to avoid problems with large strain. For the calculations, we here adopt a slightly simplified procedure, where we simply take the 5$\times$5 supercell with a single impurity. This is sufficiently large to describe the localized modes, and while the peak broadenings would not be correctly described, there is very little change in the position and broadening of the main peaks in these dilute cases. \begin{figure*}[!ht] \begin{center} \includegraphics[width=\textwidth]{bigimpuM.pdf} \end{center} \caption{\label{fig:impuM} (a-c) RGDOS for impurities in the Mo site in MoS$_2$, grouped by the rows in the periodic table. (d) Positions of the peak maxima extracted from panels (a-c). (e) Selected eigenmodes from the Cr case. The blue, red, and yellow symbols correspond to Mo, Cr, and S atoms, respectively. } \end{figure*} The RGDOS for the Mo-site impurities are shown in Fig.\ \ref{fig:impuM}(a-c). One impurity in 25 lattice sites corresponds to a $4\%$ impurity concentration. The behavior is clearly different for 3d, 4d, and 5d transition metal impurities.
Following the impurity masses, the additional impurity-induced peaks are at the highest frequencies for the 3d elements and at the lowest frequencies for the 5d elements, whereas the 4d impurities show very few new features. In the case of the 3d elements, there is a pronounced splitting between the E$'$ and A$_2''$ modes and an additional, mostly E$'$-derived, mode between the two. We note again that the A$_2''$ mode is not Raman-active, and is only shown here for reference. The eigenmodes are shown in Fig.\ \ref{fig:impuM}(e). Not surprisingly, the main peak is localized in the MoS$_2$ regions (panel i). The second E$'$ feature is localized around the impurity (panel ii) and the last one strictly at the impurity (panel iii). This last E$'$ peak should have appreciable Raman intensity and a frequency that sensitively depends on the transition metal impurity, and thus seems to provide the most effective impurity signature. For the two A$_2''$-derived peaks, the lower frequency mode is localized in the MoS$_2$ regions (panel iv; the Cr atom appears intense due to its small mass, but all Mo atoms are also active) and the higher frequency one around the impurity (panel v). Very little happens with the 4d impurities, only a small shift of the main E$'$ mode together with slight broadening, stemming from the small (relative) change of the mass. All the 5d impurities show features similar to the (Mo,W)S$_2$ alloy considered previously: a broad set of weak features at 350--400 cm$^{-1}${} and one peak between the E$'$ and A$_1'$ peaks. The two eigenmodes shown in Fig.\ \ref{fig:MoWH}(d) (panels v,vi), despite having clearly different frequencies, have fairly similar eigenvectors. Since the MoS$_2$ E$''$, A$_1'$, and A$_2''$ modes at the K and M points largely fall at frequencies between 350 and 400 cm$^{-1}$, we think these impurity modes have large contributions from the off-$\Gamma$ k-points and only a small $\Gamma$-point, Raman-active contribution.
In essence, these impurities lead to mixing of the vibrational modes at different q-points of the primitive cell BZ. No pronounced features are observed at low frequencies, and there are no gap states. Overall, it appears that it should be possible to resolve the presence of even a fairly dilute concentration of 3d transition metal impurities in MoS$_2$ from the splitting of the E$'$ peak, possibly even with elemental precision, although the absolute values given here may suffer from the limitations of the mass approximation. Dilute concentrations of 4d impurities are expected to be largely invisible in Raman, whereas 5d impurities might show up in Raman but their identification can be difficult. \begin{figure}[!ht] \begin{center} \includegraphics[width=8.5cm]{bigimpu.pdf} \end{center} \caption{\label{fig:impu} (a) RGDOS for impurities in the S site in MoS$_2$. The LA(M) frequency from pristine MoS$_2$ and the gap in the phonon structure are also indicated. (b) Selected eigenmodes for the O and Se impurity systems. The blue, red, and yellow symbols correspond to Mo, O/Se, and S atoms, respectively. } \end{figure} The RGDOS for the S-site impurities in MoS$_2$ are shown in Fig.\ \ref{fig:impu}(a). One impurity in 50 lattice sites corresponds to a $2\%$ impurity concentration. Again, lighter impurities lead to additional peaks at higher frequencies and heavier impurities at lower frequencies, but the features that are most likely to be observed in experiments are those falling above the A$_2''$ mode or inside the gap between the E$''$ mode and the LA(M) edge. In fact, such features have been reported in the literature for MoS$_2$ with light Se alloying at about 270 cm$^{-1}$ \cite{Li14_JACS,Su14_FER,Mann14_AM,Feng15_ACSNano} and with light Te alloying at about 243 cm$^{-1}$ \cite{Yin18_Nanot}, agreeing well with our calculations. O and Se impurities in MoS$_2$ are chosen as representative examples to be discussed in more detail.
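The qualitative rule above, that light substitutional atoms push localized modes above the host bands while heavy ones shift spectral weight downward, can be illustrated with a toy one-dimensional chain carrying a single mass defect. This is a textbook-style sketch of ours (arbitrary units), not the DFT setup used in the paper:

```python
import numpy as np

def chain_frequencies(masses, k=1.0):
    """Eigenfrequencies of a periodic 1D chain of point masses
    coupled by identical nearest-neighbour springs k."""
    n = len(masses)
    phi = 2 * k * np.eye(n)                 # force-constant matrix
    for i in range(n):
        phi[i, (i + 1) % n] = phi[(i + 1) % n, i] = -k
    m = np.asarray(masses, float)
    dyn = phi / np.sqrt(np.outer(m, m))     # dynamical matrix
    w2 = np.linalg.eigvalsh(dyn)
    return np.sqrt(np.clip(w2, 0.0, None))

host = chain_frequencies([32.0] * 50)             # pristine "S-like" chain
defect = chain_frequencies([32.0] * 49 + [16.0])  # one light impurity
band_top = host.max()
local_mode = defect.max()
```

With the twice-lighter defect, one mode splits off above the pristine band edge, in analogy to the O-induced feature above A$_2''$; a heavier defect instead shifts weight to lower frequencies.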
Selected eigenvectors of these impurity systems are presented in Fig.\ \ref{fig:impu}(b). In the case of the O impurity, the feature (ii) just above A$_2''$ is mainly derived from E$'$ with a small E$''$ contribution and should thus be visible in Raman measurements. The high frequency feature (iii) is mostly of A$_2''$ type, but it also contains an appreciable A$_1'$ contribution and thus could also be visible. In the case of the Se impurity, there are two features in the gap, with the lower one (iv) derived mostly from E$''$ with some E$'$, and the higher one (v) mostly from the pristine A$_1'$ mode with some A$_2''$ character. Finally, we mention the features (i) and (vi), which are localized mostly at the S atom on the opposite side of the layer from the impurity atom and thus have the same frequency, independent of the impurity element. While this feature is barely visible in the simulated spectrum, it is derived mostly from the pristine A$_1'$ mode and thus could be observable. \section{Conclusions} We have devised an efficient computational method to simulate Raman spectra of large systems, which is especially applicable to alloys as well as to systems with a small number of defects. The method is based on the projection of the vibrational eigenvectors of the supercell onto the eigenvectors of the primitive cell and using them as weights in summing over the Raman tensors calculated for the primitive cell. We moreover used the mass approximation to rapidly evaluate the vibrational modes in the supercell. We applied the method to two different transition metal dichalcogenide monolayer alloys, H-(Mo,W)S$_2$ and T'-(Mo,W)Te$_2$, and to impurities in H-MoS$_2$. The accuracy of the method was validated in the case of the H-(Mo,W)S$_2$ alloy through comparison to the available experimental reports. The T'-(Mo,W)Te$_2$ and impurity cases are used to (i) demonstrate the wider applicability of the method and (ii) provide predictions for a few technologically relevant systems.
We note that in addition to yielding the simulated Raman spectra, the projection scheme also provides a powerful tool for analyzing the origin of the Raman-active features. The method presented here is not limited to 2D materials, and is applicable to various other bulk and low-dimensional systems. \section*{Acknowledgments} We thank Prof. Liming Xie for providing us with the experimental data. We are grateful to the Academy of Finland for the support under Projects No.~286279 and 311058. We also thank CSC--IT Center for Science Ltd. and the Aalto Science-IT project for generous grants of computer time.
\section{Definitions} \begin{definition} \label{def:weak_masking_game_graph} Given two transition systems $A=\langle S, \Sigma, E, s_0\rangle$ and $A'=\langle S', \Sigma_{\mathcal{F}}, E_W', s'_0 \rangle$ (where $\Sigma$ and $\Sigma_{\mathcal{F}}$ possibly contain the distinguished \textit{silent} action $\tau$), we define the \emph{weak masking game graph} $\mathcal{G}^W_{A^M,A'} = \langle S^G, S_R, S_V, \Sigma^G, E_W^G, {s_0}^G \rangle$ for two players as follows: \begin{itemize} \item $\Sigma^G = \Sigma_{\mathcal{M}} \cup \Sigma_{\mathcal{F}} \cup \{\tau\}$ \item $S^G = (S \times ( \Sigma_{\mathcal{M}}^1 \cup \Sigma_{\mathcal{F}}^2 \cup \{\tau\} \cup\{\#\}) \times S' \times \{ R, V \}) \cup \{s_{err}\}$ \item The initial state is $s_0^G = \langle s_0, \#, s'_0, R \rangle$, where the refuter starts playing \item The refuter's states are $S_R = \{ (s, \#, s', R) \mid s \in S \wedge s' \in S' \} \cup \{s_{err}\}$ \item The verifier's states are $S_V = \{ (s, \sigma, s', V) \mid s \in S \wedge s' \in S' \wedge \sigma \in \Sigma^G\setminus\{M\}\}$ \end{itemize} and $E_W^G$ is the minimal set satisfying: \begin{itemize} \item $\{ (s, \#, s', R) \xrightarrow{\sigma} (t, \sigma^{1}, s', V) \mid \exists\;\sigma \in \Sigma: s \xRightarrow{\sigma} t \in E_W \} \subseteq E_W^G$, \item $\{ (s, \#, s', R) \xrightarrow{\sigma} (s, \sigma^{2}, t', V) \mid \exists\;\sigma \in \Sigma_{\mathcal{F}}: s' \xRightarrow{\sigma} t' \in E_W' \} \subseteq E_W^G$, \item $\{ (s, \sigma^2, s', V) \xrightarrow{\sigma} (t, \#, s', R) \mid \exists\;\sigma \in \Sigma: s \xRightarrow{\sigma} t \in E_W \} \subseteq E_W^G$, \item $\{ (s, \sigma^1, s', V) \xrightarrow{\sigma} (s, \#, t', R) \mid \exists\;\sigma \in \Sigma: s' \xRightarrow{\sigma} t' \in E_W' \} \subseteq E_W^G$, \item $\{ (s, F_i^2, s', V) \xrightarrow{M} (t, \#, s', R) \mid \exists\;s \xRightarrow{M} t \in E^M \} \subseteq E_W^G$, for any $F_i \in \mathcal{F}$ \item If there is no outgoing transition from some state $s$ then
transitions $s \xrightarrow{\sigma} s_{err}$ and $s_{err} \xrightarrow{\sigma} s_{err}$ for every $\sigma \in \Sigma$, are added. \end{itemize} \end{definition} \section{Proofs of Properties} \noindent \textbf{Proof of Lemma \ref{lemma:RefWinStrat}.} The refuter has a winning strategy in $\mathcal{G}_{A^M, A'}$ (or $\mathcal{G}^W_{A^M, A'}$) iff $s_{init} \in U^k$, for some $k$. \begin{proof} $\Rightarrow$) Suppose that the Refuter has a winning strategy, namely $\pi$, and that $s_{init} \notin U^k_i$ for any $i$ and $k$. This means that $\pi(s_{init})$ returns a node $v$ such that $v \notin U^k_i$ (for any $i$ and $k$) (by definition of $U^k_i$), and from there the Verifier can select a node $v' \notin U^k_i$ (for any $i$ and $k$), and again this can be repeated forever. Therefore, the play never reaches $s_{err}$, which means that the Verifier wins, a contradiction. $\Leftarrow$) Consider $s_{init} \in U^k$ for some $k$; then $s_{init} \in U^k_i$ for some $i$ by definition. A winning strategy for the refuter is simple: for any $v \in U^j_i$, $\pi(v) = v'$, where $v'$ is some node in $U^{j-1}_{i}$, which exists by definition. Since $s_{init} \in U^k$ and the Refuter has to play, the play will reach the set $U^1$, i.e., the $s_{err}$ state, in $k-1$ steps. \\ \noindent The proof also applies to the weak masking game graph $\mathcal{G}^W_{A^M, A'}$. \end{proof}\\ \noindent \textbf{Proof of Theorem \ref{thm:wingame_strat}.} Let $A=\langle S, \Sigma, E, s_0\rangle$ and $A'=\langle S', \Sigma_{\mathcal{F}}, E', s_0' \rangle$. $A \preceq_{m} A'$ iff the verifier has a winning strategy for the strong masking game graph $\mathcal{G}_{A^M,A'}$.\\ \begin{proof} $\Rightarrow)$ Suppose that $A \preceq_{m} A'$; then there exists a masking simulation $\M \subseteq S \times S'$ by Definition \ref{def:masking_rel}.
Then, the strategy of the verifier is constructed as follows: for states $(s, \sigma^i, s', V)$ with $s \; \M \; s'$ and $\sigma \notin \mathcal{F}$, the strategy selects either a transition $s \xrightarrow{\sigma} w$ or $s' \xrightarrow{\sigma} w'$, depending on whether $i=2$ or $i=1$, respectively. In case $\sigma \in \mathcal{F}$, the strategy returns the transition $(s, F_j^2, s', V) \xrightarrow{M} (s, \#, s', R)$, for any $F_j \in \mathcal{F}$. This can be done since $s \; \M \; s'$. Furthermore, in any case we have $w \; \M \; s'$, $s\;\M\;w'$ or $s\;\M\; s'$, respectively. Thus, the strategy can be applied again for any movement of the Refuter. Summing up, the Verifier can play forever, and thus the strategy is winning for her. Hence, the strategy is winning for the game $\mathcal{G}_{A^M, A'}$ since $s_{init}\;\M\;s'_{init}$. $\Leftarrow)$ Suppose that the verifier has a winning strategy from the initial state. Then, we define a masking simulation relation as $\M = \{(s,s') \mid \text{\emph{V} has a winning strategy for } (s, \#, s', R) \}$. It is simple to see that it is a masking simulation. Furthermore, $s_{init} \; \M\; s'_{init}$, and then $A \preceq_{m} A'$. \\ \noindent The proof is similar for the theorem stating that $A \preceq^w_{m} A'$ iff the verifier has a winning strategy for the weak masking game graph $\mathcal{G}^W_{A^M, A'}$, but using $\mathcal{G}^W_{A^M, A'}$ instead of $\mathcal{G}_{A^M, A'}$ and Theorem~\ref{thm:weak_thm}. \end{proof}\\ \noindent \textbf{Proof of Theorem \ref{thm:mask_game_det}.} For any quantitative strong masking game $\mathcal{Q}_{A^M, A'}$ with payoff function $f_{m}$,% \[\textstyle \inf_{\pi_V \in \Pi_V} \; \sup_{\pi_R \in \Pi_R} f_{m}(out(\pi_R, \pi_V)) = \sup_{\pi_R \in \Pi_R} \; \inf_{\pi_V \in \Pi_V} f_{m}(out(\pi_R, \pi_V))\] \begin{proof} In order to prove that the masking payoff function $f_{m}$ is determined, we have to prove that it is bounded and Borel measurable (Martin's theorem \cite{Martin98}).
First, $f_{m}$ is bounded by definition. Second, to see that $f_{m}$ is Borel measurable, note that $f_{m}(\Omega) \subseteq [0,1]$, and then it is sufficient to prove that, for every rational $x$, $f_{m}^{-1}((-\infty, x])$ is Borel in the Cantor topology of infinite executions. Consider $f_{m}^{-1}((-\infty,x])$ for an arbitrary $x$; this is the same as $f_{m}^{-1}([0, \frac{1}{a}])$ for some $a$. But, $f_{m}^{-1}([0, \frac{1}{a}]) = \bigcup_{b \geq a} A_b$ where $A_b = \bigcup_{i >0} A^i_b$ for $A^i_b = \{ \rho_0 \sigma_0 \rho_1 \sigma_1 \dots \mid \rho_i = s_{err} \wedge \sum^{i-1}_{j=0} \chi_{\mathcal{F}}(\sigma_j) =b\}$. Note that $A^i_b = \{ C_{\rho_0 \sigma_0 \dots \rho_i} \mid \sum^{i-1}_{j=0} \chi_{\mathcal{F}}(\sigma_j) =b\}$ where $C_{\rho_0 \sigma_0 \dots \rho_i}$ is the cone corresponding to the initial segment $\rho_0 \sigma_0 \dots \rho_i$, which is Borel measurable, and so $A^i_b$, $A_b$ and $f_{m}^{-1}((-\infty, x])$ are Borel measurable. \end{proof} \\ \noindent \textbf{Proof of Theorem \ref{thm:quant_game}.} Let $\mathcal{Q}_{A^M,A'}$ be a quantitative strong masking game. Then, $\mathop{\textup{val}}(\mathcal{Q}_{A^M,A'}) = \frac{1}{w}$, with $w = \min \{ i \mid \exists j : s_{init} \in U^j_i \}$, whenever $s_{init} \in U$, and $\mathop{\textup{val}}(\mathcal{Q}_{A^M,A'})=0$ otherwise, where the sets $U^j_i$ and $U$ are defined in equation~(\ref{def:of:Uji}).\\ \begin{proof} First, note that any play avoiding state $s_{err}$ has value $0$. By definition of the game, each transition performed by the Refuter must be followed by a transition selected by the Verifier. These transitions (the matches performed by the Verifier) have cost $(1,0)$ since the target of any of these transitions is different from $s_{err}$. Because we have an infinite number of these matches, when state $s_{err}$ is not reached, the valuation of these plays is $\lim_{n\rightarrow \infty} \frac{0}{1+ \sum^n_{i=0} v_i} = 0$.
Otherwise, if $s_{init} \in U^j_i$ for some $j \geq 1$, we denote by $\Pi$ the set of Refuter's strategies satisfying the following: if $v \in U^j_i$ for $i,j>1$ and $post(v) \cap U^{j-1}_{i} \neq \emptyset$, then $\pi(v) = v'$, for some $v' \in post(v) \cap U^{j-1}_{i}$. Note that $\Pi \neq \emptyset$, since any Refuter's node in a set $U^j_i$ has a successor belonging to $U^{j-1}_i$. Now, any play from $s_{init}$ following a strategy in $\Pi$ contains at most $i$ faults, since the only way of decreasing $i$ is by performing a masking after a fault, and $i \leq j$ always. That is, for any $\pi_V \in \Pi_{V}$ and $\pi_R \in \Pi$ we have that $\mathop{\textup{val}}(\pi_V, \pi_R) = \frac{1}{i}$. Thus, $\mathop{\textup{val}}(\mathcal{Q}_{A^M, A'}) \geq \frac{1}{i}$. Hence, $\mathop{\textup{val}}(\mathcal{Q}_{A^M, A'})\geq \frac{1}{w}$ for $w = \min \{ i \mid \exists j : s_{init} \in U^j_i \} $. Now, note that for those nodes $s \notin U^j_i$ for every $i$ and $j$, the Verifier has strategies $\pi_V$ such that $\mathop{\textup{val}}(\pi_V, \pi_R)=0$ for any Refuter's strategy $\pi_R$. Then, for any Refuter's strategy $\pi_R \notin \Pi$ we have that $\inf_{\pi_V \in \Pi_V} \mathop{\textup{val}}(\pi_V, \pi_R) = 0$. That is, for any Refuter's strategy we have $\inf_{\pi_V \in \Pi_V} \mathop{\textup{val}}(\pi_V, \pi_R) \leq \frac{1}{w}$ for $w=\min \{ i \mid \exists j : s_{init} \in U^j_i\}$. Therefore, $\sup_{\pi_R \in \Pi_R} \inf_{\pi_V \in \Pi_V} \mathop{\textup{val}}(\pi_V, \pi_R) \leq \frac{1}{w}$. That is, $\mathop{\textup{val}}(\mathcal{Q}_{A^M,A'}) \leq \frac{1}{w}$, i.e., $\mathop{\textup{val}}(\mathcal{Q}_{A^M,A'}) = \frac{1}{w}$. \end{proof} \\ \noindent \textbf{Proof of Theorem \ref{thm:triang_ineq}. } Let $A = \langle S, \Sigma, E, s_0 \rangle$, $A' = \langle S', \Sigma_{\mathcal{F'}}, E', s'_0 \rangle$, and $A'' = \langle S'', \Sigma_{\mathcal{F''}}, E'', s''_0 \rangle$ be transition systems such that $\mathcal{F}' \subseteq \mathcal{F}''$.
Then $\delta_{m}(A,A'') \leq \delta_{m}(A,A') + \delta_{m}(A', A'')$ and $\delta_{m}^W(A,A'') \leq \delta_{m}^W(A,A') + \delta_{m}^W(A', A'').$\\ \noindent \begin{proof} Let us consider any node $(s, \#, s'', R)$ of the game $\mathcal{Q}_{A^M,A''}$ belonging to $U^j_i$ with $j \geq 2$. Note that this node cannot be the error state, and so $j \neq 1$; moreover, after the movement of the Refuter we have at least one movement from the Verifier. In addition, for all nodes $(s,\#, s', R)$ in $\mathcal{Q}_{A^M,A'}$ with $(s,\#, s', R) \in U_{i'}^{k'}$ and $(s',\#,s'', R)$ of the game $\mathcal{Q}_{{A'}^M,A''}$ with $(s',\#,s'', R) \in U^{k''}_{i''}$ it holds that $\frac{1}{i} \leq \frac{1}{i'} + \frac{1}{i''}$. For the sake of convenience, when a node $s$ does not belong to any $U^k_i$, we assume $s \in U^{\infty}_{\infty}$. Then, we just define $\frac{1}{\infty}=0$. The result follows from this fact and Theorem~\ref{thm:quant_game}. The proof is by induction on $i$. \\ \noindent \emph{Base Case:} For $i=1$, we perform an induction on $j$. Let $j=2$ and suppose that $(s, \#, s'', R) \in U^2_1$. This means that we have a transition $(s, \#, s'', R) \xrightarrow{\sigma} (w, \sigma^{t}, w'', V)$, where $t \in \{1,2\}$, that cannot be matched by the Verifier. In case $t=1$, this move corresponds to a transition $(s, \#, s'', R) \xrightarrow{\sigma^t} (w, \sigma^t, s'', V)$ coming from $A$. Now, let $(s, \#, s', R)$ and $(s', \#, s'', R)$ be a pair of nodes of $A$ and $A'$, respectively. By definition, we have a transition $(s, \#, s', R) \xrightarrow{\sigma} (w, \#, s', R)$ in $\mathcal{Q}_{A^M,A'}$. In case that the Verifier cannot match this play in that game we have that $(s, \#, s', R) \in U^2_1$. This finishes the proof since $1 \leq 1 + k''$, regardless of the value of $k''$. Otherwise, we have a play by the refuter $(w, \#, s', R) \xrightarrow{\sigma^1} (w, \sigma^1, w', V)$ and we also have a transition $(s', \#, s'', R) \xrightarrow{\sigma} (w', \sigma, s'', V)$.
But, this cannot be matched by our initial assumption, that is, $(s', \#, s'', R) \in U^2_1$. This finalizes the base case for $j$. For $t=2$, the reasoning is similar, using the transitions of $A''$. Now, for the inductive case of the second induction, consider $j>2$ and $i=1$, that is, $(s, \#, s'', R) \in U^j_1$. This means that we have a transition $(s, \#, s'', R) \xrightarrow{\sigma^t} (w, \sigma^t, w'', V)$ with $t \in \{1,2\}$. Consider now any pairs of states $(s, \#, s', R)$ in $\mathcal{Q}_{A,A'}$ and $(s', \#, s'', R)$ in $\mathcal{Q}_{A',A''}$. In case $t=1$, we have a transition $(s, \#, s'', R) \xrightarrow{\sigma^1} (w, \sigma^1, s'', V)$ where $post((w, \sigma^1, s'', V)) \subseteq \bigcup_{k \leq j} U^k_1$. By definition of the game $\mathcal{Q}_{A^M, A'}$, we have a transition $(s, \#, s', R) \xrightarrow{\sigma} (w, \sigma, s', V)$. In case that it cannot be matched, the result follows. Otherwise, we have transitions $(w, \sigma, s', V) \xrightarrow{\sigma} (w, \#, w', R)$ for $w' \in S'$. Therefore, there must also be a transition $(s', \#, s'', R) \xrightarrow{\sigma} (w', \sigma, s'', V)$. Similarly, in case that this cannot be matched we have $(s', \#, s'', R) \in U^2_1$ and the proof finishes. Otherwise, we have a collection of transitions $(w', \sigma, s'', V) \xrightarrow{\sigma} (w', \#, w'', R)$ for $w'' \in S''$. Note that for any of these pairs $(w, \#, w', R)$ and $(w', \#, w'', R)$, we have that $(s, \#, s'', R) \xrightarrow{\sigma} (w,\sigma^1, s'', V) \xrightarrow{\sigma} (w, \#, w'', R)$. Then, $(w,\#,w'', R) \in U^{j-2}_1$ and by the inductive hypothesis for all of these pairs we have $(w, \#, w', R) \in U^{j'}_1$ and $(w', \#, w'', R) \in U^{j''}_1$. Now, taking $k'$ as the maximum of all these $j'$ and $k''$ as the maximum of all these $j''$, we obtain that $(w, \sigma, s', V)\in U^{k'+1}_1$ and $(w', \sigma, s'', V)\in U^{k''+1}_1$.
This implies that $(s, \sigma, s', V)\in U^{k'+2}_1$ and $(s', \sigma, s'', V)\in U^{k''+2}_1$, which finishes the proof. \\ \noindent \emph{Inductive Case:} For $i>1$ the proof is as follows. Assume that $(s, \#, s'', R) \in U^j_i$. Since $1<i < j$, we have a transition $(s, \#, s'', R) \xrightarrow{\sigma^t} (w, \sigma^t, w'', V)$. In case that $\sigma^t = F^2$ for some $F \in \mathcal{F}''$, we must have a transition $(s, F^2, w'', V) \xrightarrow{M} (s, \#, w'', R)$ and $(s, \#, w'', R) \in U^{j-2}_{i-1}$. On the other hand, in the game $\mathcal{Q}_{A',A''}$ we must have a transition $(s', \#, s'', R) \xrightarrow{F} (s', F^2, w'', V)$ by definition. In case that $F \in \mathcal{F}'$, then $F \in \Sigma'$. If, instead, it cannot be matched, then the result follows. Otherwise, we have a collection of transitions $(s', F^2, w'', V) \xrightarrow{F} (w', \#, w'', R)$. So, in the game $\mathcal{Q}_{A,A'}$ we have at least an edge $(s, \#, s', R) \xrightarrow{F} (s, F^2, w', V)$. By the initial assumption, this can be masked, and then there is a transition $(s, F^2, w', V) \xrightarrow{M} (s, \#, w', R)$. By induction, we have $(s, \#, w', R) \in U^{j'}_{i'}$ and $(w', \#, w'', R) \in U^{j''}_{i''}$ such that $\frac{1}{i-1} \leq \frac{1}{i'} + \frac{1}{i''}$. Note that $(s, F^2, w', V) \in U^{j'-1}_{i'-1}$, since we have a unique (masking) transition from $(s, F^2, w', V)$. Now, let us define $k'' = \max \{i'' \mid \text{for all states}~(s', \#, w'', R) \in U^{j''}_{i''} \}$. Then, $(s', F^2, s'', V) \in U^{k''}_{i''}$ and we have that $(s, \#, s', R) \in U^{j'}_{i'+1}$ and $(s', \#, s'', R) \in U^{k''}_{i''}$ with $\frac{1}{i} \leq \frac{1}{i'+1} + \frac{1}{i''}$ by the definition of the sets $U^j_i$. This finishes the proof for this case. In case that $\sigma^t \neq F^2$, the proof proceeds by induction on $j$ as in the second induction of the base case.
\\ \noindent The proof for $\delta_{m}^W$ is similar to that for $\delta_{m}$, but using $\mathcal{Q}^W_{A^M,A'}$ instead of $\mathcal{Q}_{A^M,A'}$ and appealing to Theorem~\ref{thm:weak_thm}. \end{proof}

\section{Models for Case Studies}
\label{sec:case_studies}

In this section we provide models for some instances of the case studies presented in Section 4; these, together with models for other instances, can be found in the tool repository.

\subsection{Memory Cell (3 bits)}

Here we give a basic model of a 3-bit redundant memory cell. There is a single process, Memory, with actions for reading and writing a value. The process may fail by flipping one or more bits.

\begin{lstlisting}[basicstyle=\tiny]
Process Memory {
  w: BOOL;  // the last value written
  r: BOOL;  // the value we can read from the memory
  c0: BOOL; // the first bit
  c1: BOOL; // the second bit
  c2: BOOL; // the third bit
  Initial: w && c0 && c1 && c2 && r;
  Normative: (c0==c1) && (c1==c2) && (c0==c2) && w==r;
  [write] true -> w=!w, c0=!c0, c1=!c1, c2=!c2, r=!r;
  [read0] !r -> r = r;
  [read1] r -> r = r;
  [fail1] faulty true -> c0=!c0, r=(!c0&&c1)||(c1&&c2)||(!c0&&c2);
  [fail2] faulty true -> c1=!c1, r=(c0&&!c1)||(!c1&&c2)||(c0&&c2);
  [fail3] faulty true -> c2=!c2, r=(c0&&c1)||(c1&&!c2)||(c0&&!c2);
}
Main(){
  m1: Memory;
  run m1();
}
\end{lstlisting}

\subsection{N-Modular Redundancy (3 modules)}

This is a model of 3-Modular Redundancy. There are three processes: Module, Voter, and Environment. Modules can fail by flipping the input signal, the Voter outputs the majority value of the signals received, and the Environment can reset the input to 0 or 1. 
\begin{lstlisting}[basicstyle=\tiny]
Global i0,i1,i2:BOOL; // inputs for each module
Process Module(out:BOOL) {
  Initial: !i0 && !i1 && !i2;
  Normative: true;
  [fail] faulty true -> out = !out;
}
Process Voter{
  Initial: !i0 && !i1 && !i2;
  Normative: true;
  [vote] (i0&&i1)||(i1&&i2)||(i0&&i2) -> i0 = i0; // if majority then skip
}
Process Environment{
  Initial: !i0 && !i1 && !i2;
  Normative: true;
  [input0] true -> i0 = false, i1 = false, i2 = false;
  [input1] true -> i0 = true, i1 = true, i2 = true;
}
Main(){
  m0: Module;
  m1: Module;
  m2: Module;
  v0: Voter;
  e0: Environment;
  run m0(i0);
  run m1(i1);
  run m2(i2);
  run v0();
  run e0();
}
\end{lstlisting}

\subsection{Byzantine Agreement (4 generals)}

This is a model of Byzantine Agreement with 4 generals. We distinguish the commander as a separate process, Commander; the other generals are instances of a Lieutenant process. There are two rounds of messages: the first is an order from the commander to the lieutenants to attack or retreat, and in the second round each lieutenant forwards the commander's order to all other lieutenants. Lieutenants may become traitors at any moment and send conflicting messages. 
\begin{lstlisting}[basicstyle=\tiny]
Global g1g2A,g1g3A,g1g4A: BOOL; //Commander(g1) attack messages
Global g2g3A,g2g4A: BOOL;       //Lieutenant1(g2) attack messages
Global g3g2A,g3g4A: BOOL;       //Lieutenant2(g3) attack messages
Global g4g2A,g4g3A: BOOL;       //Lieutenant3(g4) attack messages
Global g1g2R,g1g3R,g1g4R: BOOL; //Commander(g1) retreat messages
Global g2g3R,g2g4R: BOOL;       //Lieutenant1(g2) retreat messages
Global g3g2R,g3g4R: BOOL;       //Lieutenant2(g3) retreat messages
Global g4g2R,g4g3R: BOOL;       //Lieutenant3(g4) retreat messages
Global A2,A3,A4: BOOL;          //The Attack decision of each lieutenant
Global R2,R3,R4: BOOL;          //The Retreat decision of each lieutenant
Process Commander{
  s0,s1: BOOL;
  Initial: s0 && !s1;
  Normative: true;
  [sA] s0 -> g1g2A = true, g1g3A = true, g1g4A = true, s0 = false, s1 = true;
  [sR] s0 -> g1g2R = true, g1g3R = true, g1g4R = true, s0 = false, s1 = true;
}
Process Lieutenant(attack: BOOL, retreat: BOOL, fw1A: BOOL, fw2A: BOOL,
                   fw1R: BOOL, fw2R: BOOL, a1: BOOL, a2: BOOL,
                   r1: BOOL, r2: BOOL, dA: BOOL, dR: BOOL){
  // PARAMS: attack: attack order from commander, fw1A and fw2A: messages sent
  // to other lieutenants, a1 and a2: messages received from other lieutenants,
  // dA: decide to attack; the rest of the params are similar but with retreat
  s0,s1,s2, isBetrayer: BOOL;
  Initial: s0 && !s1 && !s2 && !isBetrayer;
  Normative: true;
  [fA] s0 && attack && !isBetrayer -> fw1A = true, fw2A = true, s0 = false, s1 = true;
  [fR] s0 && retreat && !isBetrayer -> fw1R = true, fw2R = true, s0 = false, s1 = true;
  [fA] s0 && attack && isBetrayer -> fw1R = true, fw2R = true, s0 = false, s1 = true;
  [fR] s0 && retreat && isBetrayer -> fw1A = true, fw2A = true, s0 = false, s1 = true;
  [Betray] faulty s0 && !isBetrayer -> isBetrayer = true;
  [Attack] s1 && !isBetrayer && ((attack && a1)||(attack && a2)||(a1 && a2))
           -> s1 = false, s2 = true, dA = true;
  [Retreat] s1 && !isBetrayer && ((retreat && r1)||(retreat && r2)||(r1 && r2))
           -> s1 = false, s2 = true, dR = true;
}
Main(){
  g1:Commander;
  g2:Lieutenant;
  g3:Lieutenant;
  g4:Lieutenant;
  run g1();
  run g2(g1g2A,g1g2R,g2g3A,g2g4A,g2g3R,g2g4R,g3g2A,g4g2A,g3g2R,g4g2R,A2,R2);
  run g3(g1g3A,g1g3R,g3g2A,g3g4A,g3g2R,g3g4R,g2g3A,g4g3A,g2g3R,g4g3R,A3,R3);
  run g4(g1g4A,g1g4R,g4g3A,g4g2A,g4g3R,g4g2R,g2g4A,g3g4A,g2g4R,g3g4R,A4,R4);
}
\end{lstlisting}

\subsection{Dining Philosophers (3 philosophers)}

Here we show a model of the Dining Philosophers problem for 3 philosophers, 2 of which take the right fork first while the other takes the left one first; these are modeled as processes EvenPhil and OddPhil, respectively. We incorporate a faulty behaviour on EvenPhil that makes it behave as OddPhil, i.e., it takes the left fork first.

\begin{lstlisting}[basicstyle=\tiny]
// !s0!s1 == thinking
// !s0s1 == hungry
// s0!s1 == eating
Global fork0,fork1,fork2:BOOL;
Process OddPhil(forkL:BOOL, forkR:BOOL){
  s0,s1 : BOOL;
  hasL, hasR : BOOL;
  Initial: !s0 && !s1 && !hasL && !hasR && forkR && forkL;
  Normative: !(hasR && !hasL);
  [hungry] !s0 && !s1 -> s1 = true;
  [getLeft] !s0 && s1 && forkL && !hasL && !hasR -> forkL=false, hasL=true;
  [getRight] !s0 && s1 && hasL && forkR && !hasR -> forkR=false, hasR=true;
  [eating] !s0 && s1 && hasL && hasR -> s1 = false, s0 = true;
  [thinking] s0 && !s1 -> s0 = false, forkL=true, forkR=true, hasR=false, hasL=false;
}
Process EvenPhil(forkL:BOOL, forkR:BOOL){
  s0,s1 : BOOL;
  hasL, hasR : BOOL;
  Initial: !s0 && !s1 && !hasL && !hasR && forkR && forkL;
  Normative: !(hasL && !hasR);
  [hungry] !s0 && !s1 -> s1 = true;
  [getRight] !s0 && s1 && forkR && !hasL && !hasR -> forkR=false, hasR=true;
  [getLeft] !s0 && s1 && hasR && forkL && !hasL -> forkL=false, hasL=true;
  [eating] !s0 && s1 && hasL && hasR -> s1 = false, s0 = true;
  [thinking] s0 && !s1 -> s0 = false, forkL=true, forkR=true, hasR=false, hasL=false;
  [getLeft] faulty !s0 && s1 && !hasR && forkL && !hasL -> forkL=false, hasL=true;
}
Main(){
  phil1:OddPhil;
  phil2:EvenPhil;
  phil3:EvenPhil;
  run phil1(fork2, fork0);
  run phil2(fork0, fork1);
  run phil3(fork1, fork2);
}
\end{lstlisting}

\subsection{Bounded Retransmission Protocol (1 chunk, 3 retransmissions)}

The BRP protocol sends a file in a number of chunks, but allows only a bounded number of retransmissions of each chunk; here we model an instance of this problem with 3 retransmissions of a single chunk. There are two processes, the Sender and the Receiver; the former has a set of internal actions that represent the case when a message is lost (i.e., a fault has occurred) and has to be retransmitted.

\begin{lstlisting}[basicstyle=\tiny]
// N==1 number of chunks
// MAX==3 number of retransmissions
Global fs,ls,bs: BOOL;
Global flagK,flagL: BOOL;
Process Sender {
  s0,s1,s2: BOOL;    // state variables idle(000),nextframe(001),waitack(010),
                     // retransmit(011),success(100),error(101),waitsync(110)
  srep0,srep1: BOOL; // srep variables bottom(00),nok(01),dk(10),ok(11)
  sab: BOOL;
  rt0,rt1: BOOL;     // firstattempt(00),retransmission1(01),retransmission2(10),
                     // retransmission3(11)
  Initial: !s0 && !s1 && !s2 && !srep0 && !srep1 && !bs && !sab && !fs &&
           !ls && !rt0 && !rt1 && !flagK && !flagL;
  Normative: true;
  // idle
  [NewFile] !s0 && !s1 && !s2 -> s2 = true, srep0 = false, srep1 = false;
  // next frame
  [sendChunk] !s0 && !s1 && s2 && !flagK -> s1 = true, s2 = false, fs = true,
              ls = true, bs = sab, rt0 = false, rt1 = false, flagK = true;
  // wait ack
  [receiveAck] !s0 && s1 && !s2 && !flagK && flagL -> s0 = true, s1 = false,
               sab = !sab, flagL = false;
  [TOMsg] faulty !s0 && s1 && !s2 && flagK -> s2 = true, flagK = false;
  // retransmit
  [sendChunk] internal !s0 && s1 && s2 && !rt0 && !rt1 && !flagK -> s2 = false,
              fs = true, ls = true, bs = sab, rt1 = true, flagK = true;
  [sendChunk] internal !s0 && s1 && s2 && !rt0 && rt1 && !flagK -> s2 = false,
              fs = true, ls = true, bs = sab, rt0 = true, rt1 = false, flagK = true;
  [sendChunk] internal !s0 && s1 && s2 && rt0 && !rt1 && !flagK -> s2 = false,
              fs = true, ls = true, bs = sab, rt1 = true, flagK = true;
  [error] internal !s0 && s1 && s2 && rt0 && rt1 -> s0 = true, s1 = false,
          srep0 = false, srep1 = true;
  [error] internal !s0 && s1 && s2 && rt0 && rt1 -> s0 = true, s1 = false,
          srep0 = true, srep1 = false;
  // success
  [success] s0 && !s1 && !s2 -> s0 = false, srep0 = true, srep1 = true;
  // error
  [restart] s0 && !s1 && s2 -> s0 = false, s2 = false;
}
Process Receiver {
  r0,r1,r2: BOOL;          // newfile(000), fstsafe(001), framereceived(010),
                           // framereported(011), idle(100), finish(101)
  rrep0,rrep1,rrep2: BOOL; // bottom(000), fst(001), inc(010), ok(011), nok(100)
  fr,lr,br,rab,recv: BOOL;
  Initial: !r0 && !r1 && !r2 && !rrep0 && !rrep1 && !rrep2 && !fr && !lr &&
           !br && !rab && !recv && !fs && !ls && !bs && !flagK && !flagL;
  Normative: true;
  // new_file
  [receiveFirstChunk] !r0 && !r1 && !r2 && flagK && !flagL -> r2 = true,
                      fr = fs, lr = ls, br = bs, recv = true, flagK = false;
  // fst_safe_frame
  [e] !r0 && !r1 && r2 && !flagL -> r1 = true, r2 = false, rab = br;
  // frame_received
  [setIndication] !r0 && r1 && !r2 && rab==br && fr && !lr && !flagL ->
                  r2 = true, rrep0 = false, rrep1 = false, rrep2 = true;
  [setIndication] !r0 && r1 && !r2 && rab==br && !fr && !lr && !flagL ->
                  r2 = true, rrep0 = false, rrep1 = true, rrep2 = false;
  [setIndication] !r0 && r1 && !r2 && rab==br && !fr && lr && !flagL ->
                  r2 = true, rrep0 = false, rrep1 = true, rrep2 = true;
  [sendAck] !r0 && r1 && !r2 && !(rab==br) && !flagL -> r0 = true, r1 = false,
            flagL = true;
  // frame_reported
  [sendAck] !r0 && r1 && r2 && !flagL && !lr -> r0 = true, r1 = false,
            r2 = false, rab = !rab, flagL = true;
  [sendAck] !r0 && r1 && r2 && !flagL && lr -> r0 = true, r1 = false,
            r2 = true, rab = !rab, flagL = true;
  // idle
  [receiveChunk] r0 && !r1 && !r2 && flagK && !flagL -> r0 = false, r1 = true,
                 fr = fs, lr = ls, br = bs, recv = true, flagK = false;
  // finish
  [restart] r0 && !r1 && r2 -> r1 = false, r2 = false;
}
Main(){
  s: Sender;
  r: Receiver;
  run s();
  run r();
}
\end{lstlisting}

\section{Preliminaries}
\label{sec:background}

Let us introduce some basic definitions and results on game 
theory that will be necessary throughout the paper; the interested reader is referred to \cite{AptG11}. A \emph{transition system} (TS) is a tuple $A =\langle S, \Sigma, E, s_0\rangle$, where $S$ is a finite set of states, $\Sigma$ is a finite alphabet, $E \subseteq S \times \Sigma \times S$ is a set of labelled transitions, and $s_0$ is the initial state. In the following we use $s \xrightarrow{e} s' \in E$ to denote $(s,e,s') \in E$. Let $|S|$ and $|E|$ denote the number of states and edges, respectively. We define $post(s) = \{s' \in S \mid s \xrightarrow{e} s' \in E\}$ as the set of successors of $s$. Similarly, $pre(s') = \{s \in S \mid s \xrightarrow{e} s' \in E \}$ is the set of predecessors of $s'$. Moreover, $post^{*}(s)$ denotes the set of states which are reachable from $s$. Without loss of generality, we require that every state $s$ has a successor, i.e., $\forall s \in S : post(s) \neq \emptyset$. A run in a transition system $A$ is an infinite path $\rho = \rho_0 \sigma_0 \rho_1 \sigma_1 \rho_2 \sigma_2 \dots \in (S \cdot \Sigma)^{\omega}$ where $\rho_0 = s_0$ and for all $i$, $\rho_i \xrightarrow{\sigma_i} \rho_{i+1} \in E$. From now on, given a tuple $(x_0,\dots,x_n)$, we denote by $\pr{i}{(x_0,\dots,x_n)}$ its $i$-th projection. A \emph{game graph} $G$ is a tuple $G = \langle S, S_1, S_2, \Sigma, E, s_0 \rangle$ where $S$, $\Sigma$, $E$ and $s_0$ are as in transition systems and $(S_1, S_2)$ is a partition of $S$. The choice of the next state is made by Player $1$ (Player $2$) when the current state is in $S_1$ (respectively, $S_2$). A weighted game graph is a game graph along with a weight function $v^G$ from $E$ to $\mathbb{Q}$. A run in the game graph $G$ is called a \emph{play}. The set of all plays is denoted by $\Omega$. 
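The operators $post$, $pre$ and $post^{*}$ defined above can be made concrete with a small sketch (the states, labels and edges below are invented for illustration; this is not the paper's tool):

```python
# Illustrative transition system A = (S, Sigma, E, s0) with the
# post/pre/post* operators from the text.
from collections import deque

S = {"s0", "s1", "s2"}
Sigma = {"a", "b"}
E = {("s0", "a", "s1"), ("s0", "b", "s0"),
     ("s1", "b", "s2"), ("s2", "a", "s2")}
s0 = "s0"

def post(s):
    # successors of s
    return {t for (u, e, t) in E if u == s}

def pre(t):
    # predecessors of t
    return {u for (u, e, s) in E if s == t}

def post_star(s):
    # states reachable from s (post^*), by breadth-first search
    seen, frontier = {s}, deque([s])
    while frontier:
        u = frontier.popleft()
        for t in post(u):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

# every state has a successor, as required in the text
assert all(post(s) for s in S)
```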
Given a game graph $G$, a \emph{strategy} for Player $1$ is a function $\pi: (S \cdot \Sigma)^{*} S_1 \rightarrow \Sigma \times S$ such that for all $\rho_0 \sigma_0 \rho_1 \sigma_1 \dots \rho_i~\in~(S \cdot \Sigma)^{*} S_1$, we have that if $\pi(\rho_0 \sigma_0 \rho_1 \sigma_1 \dots \rho_i) = (\sigma, \rho)$, then $\rho_i \xrightarrow{\sigma} \rho \in E$. A strategy for Player $2$ is defined in a similar way. The set of all strategies for Player $p$ is denoted by $\Pi_{p}$. A strategy for Player $p$ is said to be memoryless (or positional) if it can be defined by a mapping $f:S_p \rightarrow E$ such that for all $s \in S_p$ we have that $\pr{0}{f(s)}=s$; that is, these strategies do not need memory of the past history. Furthermore, a play $\rho_0 \sigma_0 \rho_1 \sigma_1 \rho_2 \sigma_2 \dots$ conforms to a Player $p$ strategy $\pi$ if $\forall i \geq 0: (\rho_i \in S_p) \Rightarrow (\sigma_{i}, \rho_{i+1}) = \pi(\rho_0 \sigma_0 \rho_1 \sigma_1 \dots \rho_i)$. The \emph{outcome} of a Player $1$ strategy $\pi_{1}$ and a Player $2$ strategy $\pi_2$ is the unique play, named $out(\pi_1, \pi_2)$, that conforms to both $\pi_1$ and $\pi_2$. A \emph{game} consists of a game graph and a boolean or quantitative objective. A \emph{boolean objective} is a function $\Phi: \Omega \rightarrow \{0, 1\}$ and the goal of Player $1$ in a game with objective $\Phi$ is to select a strategy so that the outcome maps to $1$, independently of what Player $2$ does. On the contrary, the goal of Player $2$ is to ensure that the outcome maps to $0$. Given a boolean objective $\Phi$, a play $\rho$ is \emph{winning} for Player $1$ (resp. Player $2$) if $\Phi(\rho) = 1$ (resp. $\Phi(\rho) = 0$). A strategy $\pi$ is a \emph{winning strategy} for Player $p$ if every play conforming to $\pi$ is winning for Player $p$. We say that a game with boolean objective is \emph{determined} if some player has a winning strategy, and we say that it is memoryless determined if that winning strategy is memoryless. 
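A memoryless strategy and the unique play it induces together with the opponent's strategy can be sketched as follows (the game graph and both strategies are invented for illustration, not taken from the paper):

```python
# Illustrative only: a tiny game graph and two memoryless (positional)
# strategies f1, f2, with a prefix of the unique play out(f1, f2).
S1 = {"p0", "p2"}   # Player 1 states
S2 = {"p1"}         # Player 2 states
E = {("p0", "a", "p1"), ("p1", "b", "p2"),
     ("p2", "a", "p0"), ("p1", "a", "p0")}

# f : S_p -> E with pr_0(f(s)) = s, as in the text
f1 = {"p0": ("p0", "a", "p1"), "p2": ("p2", "a", "p0")}
f2 = {"p1": ("p1", "b", "p2")}

def outcome_prefix(start, steps):
    # first `steps` moves of the unique play conforming to f1 and f2
    rho, s = [start], start
    for _ in range(steps):
        edge = (f1 if s in S1 else f2)[s]
        assert edge in E and edge[0] == s  # strategies pick real edges
        rho += [edge[1], edge[2]]
        s = edge[2]
    return rho

# the play cycles p0 -a-> p1 -b-> p2 -a-> p0 ...
assert outcome_prefix("p0", 3) == ["p0", "a", "p1", "b", "p2", "a", "p0"]
```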
Reachability games are those games whose objective functions are defined as $\Phi(\rho_0 \sigma_0 \rho_1 \sigma_1 \rho_2 \sigma_2 \dots) = (\exists i : \rho_i \in V)$ for some set $V \subseteq S$. A standard result states that reachability games are memoryless determined. A \emph{quantitative objective} is given by a \emph{payoff} function $f: \Omega \rightarrow \mathbb{R}$, and the goal of Player $1$ is to maximize the value $f$ of the play, whereas the goal of Player $2$ is to minimize it. For a quantitative objective $f$, the value of the game for a Player $1$ strategy $\pi_1$, denoted by $v_1(\pi_1)$, is defined as the infimum over all the values resulting from Player $2$ strategies, i.e., $v_1(\pi_1) = \inf_{\pi_2 \in \Pi_2} f(out(\pi_1, \pi_2))$. The value of the game for Player $1$ is defined as the supremum of the values of all Player $1$ strategies, i.e., $\sup_{\pi_1 \in \Pi_1} v_1(\pi_1)$. Analogously, the value of the game for a Player $2$ strategy $\pi_2$ and the value of the game for Player $2$ are defined as $v_2(\pi_2) = \sup_{\pi_1 \in \Pi_1} f(out(\pi_1, \pi_2))$ and $\inf_{\pi_2 \in \Pi_2} v_2(\pi_2)$, respectively. We say that a game is determined if both values are equal, that is: $\sup_{\pi_1 \in \Pi_1} v_1(\pi_1) = \inf_{\pi_2 \in \Pi_2} v_2(\pi_2)$. In this case we denote by $val(\mathcal{G})$ the value of game $\mathcal{G}$. The following result from \cite{Martin98} characterizes a large set of determined games. \begin{theorem} Any game with a quantitative function $f$ that is bounded and Borel measurable is determined. \end{theorem} \section{Conclusions and Future Work}\label{sec:conclusions} In this paper, we presented a notion of masking fault-tolerance distance between systems built on a characterization of masking tolerance via simulation relations and a corresponding game representation with quantitative objectives. Our framework is well-suited to supporting engineers in the analysis and design of fault-tolerant systems. 
More precisely, we have defined a computable masking distance function such that an engineer can measure the masking tolerance of a given fault-tolerant implementation, i.e., the number of faults that can be masked. Thereby, the engineer can measure and compare the masking fault-tolerance distance of alternative fault-tolerant implementations, and select the one that best fits her preferences. There are many directions for future work. We have only defined a notion of fault-tolerance distance for masking fault-tolerance; similar notions of distance can be defined for other levels of fault-tolerance, such as failsafe and non-masking. We leave this as future work. \section{Experimental Evaluation} \label{sec:experimental_eval} The approach described in this paper has been implemented in a \textsf{Java} tool called \textsf{MaskD}: Masking Distance Tool \cite{MaskD}. \textsf{MaskD}~takes as input a nominal model and its fault-tolerant implementation, and produces as output the masking distance between them. The input models are specified using the guarded command language introduced in \cite{AroraGouda93}, a simple programming language commonly used for describing fault-tolerant algorithms. More precisely, a program is a collection of processes, where each process is composed of a collection of actions of the style $Guard \rightarrow Command$, where $Guard$ is a boolean condition over the current state of the program and $Command$ is a collection of basic assignments. These syntactic constructions are called actions. The language also allows the user to label an action as internal (i.e., a $\tau$ action). Moreover, usually some actions are used to represent faults. The tool has several additional features; for instance, it can print the traces to the error state or start a simulation from the initial state. We report in Table~\ref{table:results} the results of the masking distance for multiple instances of several case studies. 
These are: a Redundant Cell Memory (our running example), N-Modular Redundancy (a standard example of a fault-tolerant system \cite{ShoomanBook}), a variation of the Dining Philosophers problem \cite{Dijkstra71}, the Byzantine Generals problem introduced by Lamport et al. \cite{LamportSP82}, and the Bounded Retransmission Protocol (a well-known example of a fault-tolerant protocol \cite{GrooteP96}). A few words are useful to interpret the results. For the case of a $3$-bit memory the masking distance is $0.333$; the main reason for this is that, in the worst case, the faulty model is only able to mask $2$ faults (in this example, a fault is an unexpected change of a bit value) before failing to replicate the nominal behaviour (i.e., reading the majority value); the result then follows from the definition of masking distance, taking into account the occurrence of two faults. The situation is similar for the other instances of this problem with more redundancy. N-Modular Redundancy consists of $N$ modules that perform a process, whose results are processed by a majority-voting system to produce a single output. Assuming a single perfect voter, we have evaluated this case study for different numbers of modules. Note that the distance measures for this case study are similar to those of the memory example. For the dining philosophers problem we have adopted the odd/even philosophers implementation (which prevents deadlock), i.e., there are $n-1$ \emph{even} philosophers that pick the right fork first, and $1$ \emph{odd} philosopher that picks the left fork first. The fault we consider in this case occurs when an even philosopher behaves as an odd one; this could be the case of a byzantine fault. For two philosophers the masking distance is $0{.}5$, since a single fault leads to a deadlock; as more philosophers are added, this distance becomes smaller. 
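The reported values for these examples are consistent with the reading $\delta_{m} = 1/(f+1)$, where $f$ is the number of faults masked in the worst case before the nominal behaviour can no longer be replicated (the text states $f=2$ for the 3-bit cell and $f=1$ for two philosophers; the other values of $f$ are inferred from the table). A quick sanity check of this reading, which is our interpretation of the numbers rather than a definition from the paper:

```python
# Sanity check: reported distances vs. the reading delta = 1/(f+1),
# f = number of faults masked in the worst case (our interpretation).
def masking_distance(faults_masked: int) -> float:
    return 1.0 / (faults_masked + 1)

checks = [
    (2, 0.333),  # 3-bit memory / 3 modules: masks 2 faults
    (3, 0.25),   # 5-bit memory / 5 modules (inferred)
    (1, 0.5),    # 2 philosophers: a single fault leads to deadlock
    (5, 0.167),  # 9-bit memory / 9 modules (inferred)
]
for f, reported in checks:
    # reported values are rounded to three decimals
    assert abs(masking_distance(f) - reported) < 5e-4
```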
Another interesting example of a fault-tolerant system is the Byzantine generals problem, introduced originally by Lamport et al. \cite{LamportSP82}. This is a consensus problem, where we have a general with $n-1$ lieutenants. The communication between the general and his lieutenants is performed through messengers. The general may decide to attack an enemy city or to retreat; then, he sends the order to his lieutenants. Some of the lieutenants might be traitors. We assume that the messages are delivered correctly and all the lieutenants can communicate directly with each other. In this scenario they can recognize who is sending a message. Faults can convert loyal lieutenants into traitors (byzantine faults). As a consequence, traitors might deliver false messages or they may avoid sending a message that they received. The loyal lieutenants must agree on attacking or retreating after $m + 1$ rounds of communication, where $m$ is the maximum number of traitors. The Bounded Retransmission Protocol (BRP) is a well-known industrial case study in software verification. While all the other case studies were treated as toy examples and analyzed with $\delta_{m}$, the BRP was modeled closer to the implementation following~\cite{GrooteP96}, considering the different components (sender, receiver, and models of the channels). To analyze such a complex model we have used instead the weak masking distance $\delta_{m}^W$. We have calculated the masking distance for the bounded retransmission protocol with $1$, $3$ and $5$ chunks, denoted BRP(1), BRP(3) and BRP(5), respectively. We observe that the distance values are not affected by the number of chunks to be sent by the protocol. This is expected because the masking distance depends on the redundancy added to mask the faults, which, in this case, depends on the number of retransmissions. We have run our experiments on a MacBook Air with a 1.3 GHz Intel Core i5 processor and 4 GB of memory. 
The tool and case studies for reproducing the results are available in the tool repository.

\begin{table}[t]
\centering\noindent%
\caption{Results of the masking distance for the case studies.}
\label{table:results}
\scalebox{0.9}{
\begin{tabular}{|c|c|c|c|}
\hline
\multirow{2}{*}{Case Study} & \multirow{2}{*}{Redundancy} & Masking & \multirow{2}{*}{Time} \\
 & & Distance & \\ \hline
\multirow{4}{*}{Memory} & $3$ bits & $0.333$ & $0.7s$ \\ \cline{2-4}
 & $5$ bits & $0.25$ & $1.5s$ \\ \cline{2-4}
 & $7$ bits & $0.2$ & $27s$ \\ \cline{2-4}
 & $9$ bits & $0.167$ & $34m33s$ \\ \cline{2-4} \hline
\multirow{4}{*}{\parbox{6em}{\centering N-Modular Redundancy}} & $3$ modules & $0.333$ & $0.3s$ \\ \cline{2-4}
 & $5$ modules & $0.25$ & $0.5s$ \\ \cline{2-4}
 & $7$ modules & $0.2$ & $31.7s$ \\ \cline{2-4}
 & $9$ modules & $0.167$ & $115m$ \\ \cline{2-4} \hline
\multirow{4}{*}{Philosophers} & $2$ phils & $0.5$ & $0.3s$ \\ \cline{2-4}
 & $3$ phils & $0.333$ & $0.6s$ \\ \cline{2-4}
 & $4$ phils & $0.25$ & $7.1s$ \\ \cline{2-4}
 & $5$ phils & $0.2$ & $13m53s$ \\ \cline{2-4} \hline
\multirow{2}{*}{Byzantines} & $3$ generals & $0.5$ & $0.5s$ \\ \cline{2-4}
 & $4$ generals & $0.333$ & $2s$ \\ \cline{2-4} \hline
\end{tabular}
\qquad
\begin{tabular}{|c|c|c|c|}
\hline
\multirow{2}{*}{Case Study} & \multirow{2}{*}{Redundancy} & Masking & \multirow{2}{*}{Time} \\
 & & Distance & \\ \hline
\multirow{4}{*}{BRP(1)} & $1$ retransm. & $0.333$ & $1.2s$ \\ \cline{2-4}
 & $3$ retransm. & $0.2$ & $1.4s$ \\ \cline{2-4}
 & $5$ retransm. & $0.143$ & $1.5s$ \\ \cline{2-4}
 & $7$ retransm. & $0.111$ & $2.1s$ \\ \cline{2-4} \hline
\multirow{4}{*}{BRP(3)} & $1$ retransm. & $0.333$ & $5.5s$ \\ \cline{2-4}
 & $3$ retransm. & $0.2$ & $14.9s$ \\ \cline{2-4}
 & $5$ retransm. & $0.143$ & $1m28s$ \\ \cline{2-4}
 & $7$ retransm. & $0.111$ & $4m40s$ \\ \cline{2-4} \hline
\multirow{4}{*}{BRP(5)} & $1$ retransm. & $0.333$ & $6.7s$ \\ \cline{2-4}
 & $3$ retransm. & $0.2$ & $32s$ \\ \cline{2-4}
 & $5$ retransm.
& $0.143$ & $1m51s$ \\ \cline{2-4}
 & $7$ retransm. & $0.111$ & $6m35s$ \\ \cline{2-4} \hline
\multicolumn{4}{c}{}\\
\multicolumn{4}{c}{}\\
\end{tabular}}
\end{table}

\section{Introduction}
\label{sec:intro}

Fault-tolerance allows for the construction of systems that are able to overcome the occurrence of faults during their execution. Examples of fault-tolerant systems can be found everywhere: communication protocols, hardware circuits, avionic systems, cryptographic currencies, etc. 
So, the increasing relevance of critical software in everyday life has led to a renewed interest in the automatic verification of fault-tolerant properties. However, one of the main difficulties when reasoning about these kinds of properties is given by their quantitative nature, which is true even for non-probabilistic systems. A simple example is given by the introduction of redundancy in critical systems. This is, by far, one of the most used techniques in fault-tolerance. In practice, it is well-known that adding more redundancy to a system increases its reliability. Measuring this increment is a central issue for evaluating fault-tolerant software, protocols, etc. On the other hand, the formal characterization of fault-tolerant properties can be an involved task; usually these properties are encoded using \emph{ad-hoc} mechanisms as part of a general design. The usual flow for the design and verification of fault-tolerant systems consists in defining a nominal model (i.e., the ``fault-free'' or ``ideal'' program) and afterwards extending it with faulty behaviors that deviate from the normal behavior prescribed by the nominal model. This extended model represents the way in which the system operates under the occurrence of faults. There are different ways of extending the nominal model; the typical approach is \emph{fault injection} \cite{HsuehTI97,IyerNGK10}, that is, the automatic introduction of faults into the model. An important property that any extended model has to satisfy is the preservation of the normal behavior in the absence of faults. In \cite{DemasiCMA17}, we proposed an alternative formal approach for dealing with the analysis of fault-tolerance. This approach allows for a fully automated analysis and appropriately distinguishes faulty behaviors from normal ones. Moreover, this framework is amenable to fault-injection. 
In that work, three notions of simulation relations are defined to characterize \emph{masking}, \emph{nonmasking}, and \emph{failsafe} fault-tolerance, as originally defined in \cite{Gartner99}. During the last decade, significant progress has been made towards defining suitable metrics or distances for diverse types of quantitative models including real-time systems \cite{HenzingerMP05}, probabilistic models \cite{DesharnaisGJP04}, and metrics for linear and branching systems \cite{CernyHR12,AlfaroFS09,Henzinger13,LarsenFT11,ThraneFL10}. Some authors have already pointed out that these metrics can be useful to reason about the robustness of a system, a notion related to fault-tolerance. Particularly, in \cite{CernyHR12} the traditional notion of simulation relation is generalized and three different simulation distances between systems are introduced, namely \emph{correctness}, \emph{coverage}, and \emph{robustness}. These are defined using quantitative games with \emph{discounted-sum} and \emph{mean-payoff} objectives. In this paper we introduce a notion of fault-tolerance distance between labelled transition systems. Intuitively, this distance measures the degree of fault-tolerance exhibited by a candidate system. As mentioned above, there exist different levels of fault-tolerance; we restrict ourselves to the analysis of \emph{masking fault-tolerance} because it is often classified as the most benign kind of fault-tolerance and it is a highly desirable property for critical systems. Roughly speaking, a system is masking fault-tolerant when it is able to completely mask the faults, not allowing these faults to have any observable consequences for the users. Formally, the system must preserve both the safety and liveness properties of the nominal model \cite{Gartner99}. 
In contrast to the robustness distance defined in \cite{CernyHR12}, which measures how many unexpected errors are tolerated by the implementation, we consider a specific collection of faults given in the implementation and measure how many of these faults can be tolerated and masked by the implementation. We also require that the normal behavior of the specification has to be preserved by the implementation when no faults are present. In this case, we have a bisimulation between the specification and the non-faulty behavior of the implementation. Otherwise, the distance is $1$. That is, $\delta_{m}(N,I)=1$ if and only if the nominal model $N$ and $I\backslash F$ are not bisimilar, where $I\backslash F$ behaves like the implementation $I$ where all actions in $F$ are forbidden ($\backslash$ is Milner's restriction operator). Thus, we effectively distinguish between the nominal model, its fault-tolerant version, and the set of faults taken into account. In order to measure the degree of masking fault-tolerance of a given system, we start by characterizing masking fault-tolerance via simulation relations between two systems, as defined in \cite{DemasiCMA17}. The first one acts as a specification of the intended behavior (i.e., the nominal model), and the second one as the fault-tolerant implementation (i.e., the extended model with faulty behavior). The existence of a masking relation implies that the implementation masks the faults. Afterwards, we introduce a game characterization of masking simulation and we enrich the resulting games with quantitative objectives to define the notion of \emph{masking fault-tolerance distance}, where the possible values of the game belong to the interval $[0,1]$. The fault-tolerant implementation is masking fault-tolerant if the value of the game is $0$. Furthermore, the bigger the number, the farther the masking distance between the fault-tolerant implementation and the specification. 
Accordingly, a bigger distance indicates a lower degree of fault-tolerance. Thus, for a given nominal model $N$ and two different fault-tolerant implementations $I_1$ and $I_2$, our distance ensures that $\delta_{m}(N,I_1)<\delta_{m}(N,I_2)$ whenever $I_1$ tolerates more faults than $I_2$. We also provide a weak version of masking simulation, which makes it possible to deal with complex systems composed of several interacting components. We prove that masking distance is a directed semimetric, that is, it satisfies two basic properties of any distance: reflexivity and the triangle inequality. Finally, we have implemented our approach in a tool that takes as input a nominal model and its fault-tolerant implementation and automatically computes the masking distance between them. We have used this tool to measure the masking tolerance of multiple instances of several case studies such as a redundant cell memory, a variation of the dining philosophers problem, the bounded retransmission protocol, N-Modular-Redundancy, and the Byzantine generals problem. These are typical examples of fault-tolerant systems. The remainder of the paper is structured as follows. In Section \ref{sec:background}, we introduce preliminary notions used throughout this paper. In Section \ref{sec:masking_dist}, we present the formal definition of masking distance, built on quantitative simulation games, and we prove its basic properties. In Section \ref{sec:experimental_eval}, we describe the experimental evaluation on some well-known case studies. In Section \ref{sec:related_work} we discuss the related work. Finally, we discuss in Section \ref{sec:conclusions} some conclusions and directions for further work. Full details and proofs can be found in \cite{CastroDDP18}. \section{Masking Distance} \label{sec:masking_dist} We start by defining masking simulation. 
In \cite{DemasiCMA17}, we have defined a state-based simulation for masking fault-tolerance; here we recast this definition using labelled transition systems. First, let us introduce some concepts needed for defining masking fault-tolerance. For any vocabulary $\Sigma$ and set of labels $\mathcal{F} = \{F_0, \dots, F_n\}$ not belonging to $\Sigma$, we consider $\Sigma_{\mathcal{F}} = \Sigma \cup \mathcal{F}$, where $\mathcal{F} \cap \Sigma = \emptyset$. Intuitively, the elements of $\mathcal{F}$ indicate the occurrence of a fault in a faulty implementation. Furthermore, sometimes it will be useful to consider the set $\Sigma^i = \{ e^i \mid e \in \Sigma\}$, containing the elements of $\Sigma$ indexed with superscript $i$. Moreover, for any vocabulary $\Sigma$ we consider $\Sigma_{\mathcal{M}} = \Sigma \cup \{M\}$, where $M \notin \Sigma$; intuitively, this label is used to identify masking transitions. Given a transition system $A = \langle S, \Sigma, E, s_0 \rangle$ over a vocabulary $\Sigma$, we denote $A^M = \langle S, \Sigma_{\mathcal{M}}, E^M, s_0 \rangle$ where $E^M = E \cup \{s \xrightarrow{M} s \mid s \in S\}$. \subsection{Strong Masking Simulation} \begin{definition} \label{def:masking_rel} Let $A =\langle S, \Sigma, E, s_0\rangle$ and $A' =\langle S', \Sigma_{\mathcal{F}}, E', s_0' \rangle$ be two transition systems. 
$A'$ is \emph{strong masking fault-tolerant} with respect to $A$ if there exists a relation $\M \subseteq S \times S'$ (considering $A^M =\langle S, \Sigma_{\mathcal{M}}, E^M, s_0\rangle$ instead of $A$) such that: \begin{enumerate}[(A)] \item $s_0 \mathrel{\M} s'_0$, and \item for all $s \in S, s' \in S'$ with $s \mathrel{\M} s'$ and all $e \in \Sigma$ the following holds: \begin{enumerate}[(1)] \item if $(s \xrightarrow{e} t) \in E$ then $\exists\; t' \in S': (s' \xrightarrow{e} t' \wedge t \mathrel{\M}t')$; \item if $(s' \xrightarrow{e} t') \in E'$ then $\exists \; t \in S: (s \xrightarrow{e} t \wedge t \mathrel{\M} t')$; \item if $(s' \xrightarrow{F} t')$ for some $F \in \mathcal{F}$ then $\exists\; t \in S: (s \xrightarrow{M} t \wedge t \mathrel{\M} t').$ \end{enumerate} \end{enumerate} If such a relation exists, we denote it by $A \preceq_{m} A'$ and say that $A'$ is a \emph{strong masking fault-tolerant implementation} of $A$. \end{definition} We say that state $s'$ is masking fault-tolerant for $s$ when $s~\M~s'$. Intuitively, the definition states that, starting in $s'$, faults can be masked in such a way that the behavior exhibited is the same as that observed when starting from $s$ and executing transitions without faults. In other words, a masking relation ensures that every faulty behavior in the implementation can be simulated by the specification. Let us explain the above definition in more detail. First, note that conditions $A$, $B.1$, and $B.2$ imply that we have a bisimulation when $A$ and $A'$ do not exhibit faulty behavior. Particularly, condition $B.1$ says that the normal execution of $A$ can be simulated by an execution of $A'$. On the other hand, condition $B.2$ says that the implementation does not add normal (non-faulty) behavior. Finally, condition $B.3$ states that every outgoing faulty transition ($F$) from $s'$ must be matched by an outgoing masking transition ($M$) from $s$. 
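This definition can be checked effectively by a greatest-fixpoint computation: start from the full relation $S \times S'$ and repeatedly discard pairs violating conditions $B.1$--$B.3$. The following Python sketch illustrates this; the encoding of transition systems as sets of (state, label, state) triples is our own and not part of the formal development:

```python
from itertools import product

def strong_masking_simulation(S, E, s0, Sp, Ep, sp0, faults):
    """Greatest-fixpoint computation of a strong masking simulation.
    E and Ep are sets of (state, label, state) triples for A and A'.
    The masking self-loops of A^M (s --M--> s) are left implicit, so
    condition (B.3) reduces to keeping (s, t') in the relation."""
    M = set(product(S, Sp))          # start from the full relation
    changed = True
    while changed:
        changed = False
        for (s, sp) in list(M):
            ok = (
                # (B.1) every move of the specification is matched by A'
                all(any((sp, e, tp) in Ep and (t, tp) in M for tp in Sp)
                    for (x, e, t) in E if x == s)
                # (B.2) every non-faulty move of A' is matched by A
                and all(any((s, e, t) in E and (t, tp) in M for t in S)
                        for (x, e, tp) in Ep if x == sp and e not in faults)
                # (B.3) every fault of A' is masked by an M self-loop of A^M
                and all((s, tp) in M
                        for (x, f, tp) in Ep if x == sp and f in faults)
            )
            if not ok:
                M.discard((s, sp))
                changed = True
    return M if (s0, sp0) in M else None
```

For the memory cell discussed as a running example below, this computation returns exactly the relation $\{(\text{s}_0,\text{t}_0), (\text{s}_1,\text{t}_1), (\text{s}_0,\text{t}_2)\}$.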
\subsection{Weak Masking Simulation} For analysing nontrivial systems, a weak version of the masking simulation relation is needed; the main idea is that weak masking simulation abstracts away from internal behaviour, which is modeled by a special action $\tau$. Note that internal transitions are common in fault-tolerance: the actions performed as part of a fault-tolerant procedure in a component are usually not observable by the rest of the system. The \textit{weak transition relation} ${\Rightarrow} \subseteq S \times (\Sigma \cup \{\tau\} \cup \{M\} \cup \mathcal{F}) \times S$, also denoted as $E_W$, considers the \emph{silent} step $\tau$ and is defined as follows: \\ \[\xRightarrow{e} = \begin{cases} (\xrightarrow{\tau})^{*}\circ\xrightarrow{e}\circ(\xrightarrow{\tau})^{*} & \text{if } e \in \Sigma, \\ (\xrightarrow{\tau})^{*} & \text{if } e = \tau, \\ \xrightarrow{e} & \text{if } e \in \{M\} \cup \mathcal{F}.\\ \end{cases}\] The symbol $\circ$ stands for composition of binary relations and $(\xrightarrow{\tau})^{*}$ is the reflexive and transitive closure of the binary relation $\xrightarrow{\tau}$. Intuitively, if $e \notin \{\tau,M\}\cup\mathcal{F}$, then $s\xRightarrow{e}s'$ means that there is a sequence of zero or more $\tau$ transitions starting in $s$, followed by one transition labelled by $e$, followed again by zero or more $\tau$ transitions eventually reaching $s'$. $s \xRightarrow{\tau} s'$ states that $s$ can transition to $s'$ via zero or more $\tau$ transitions. In particular, $s \xRightarrow{\tau} s$ for every $s$. For the case in which $e\in\{M\}\cup\mathcal{F}$, $s\xRightarrow{e}s'$ is equivalent to $s\xrightarrow{e}s'$ and hence no $\tau$ step is allowed before or after the $e$ transition. \begin{definition} \label{def:weak_mask} Let $A =\langle S, \Sigma, E, s_0\rangle$ and $A' =\langle S', \Sigma_{\mathcal{F}}, E', s_0' \rangle$ be two transition systems with $\Sigma$ possibly containing $\tau$. 
$A'$ is \emph{weak masking fault-tolerant} with respect to $A$ if there is a relation $\M \subseteq S \times S'$ (considering $A^M$ instead of $A$) such that: \begin{enumerate}[(A)] \item $s_0 \mathrel{\M} s'_0$, and \item for all $s \in S, s' \in S'$ with $s \mathrel{\M} s'$ and all $e \in \Sigma \cup \{\tau\}$ the following holds: \begin{enumerate}[(1)] \item if $(s \xrightarrow{e} t) \in E$ then $\exists\; t' \in S': (s' \xRightarrow{e} t' \in E_W' \wedge t\mathrel{\M}t')$; \item if $(s' \xrightarrow{e} t') \in E'$ then $\exists \; t \in S: (s \xRightarrow{e} t \in E_W \wedge t \mathrel{\M} t')$; \item if $(s' \xrightarrow{F} t') \in E'$ for some $F \in \mathcal{F}$ then $\exists\; t \in S: (s \xrightarrow{M} t \in E^M \wedge t \mathrel{\M} t').$ \end{enumerate} \end{enumerate} If such a relation exists, we denote it by $A \preceq^w_{m} A'$ and say that $A'$ is a \emph{weak masking fault-tolerant implementation} of $A$. \end{definition} The following theorem establishes a strong connection between strong and weak masking simulation. It states that weak masking simulation becomes strong masking simulation whenever the transition relation $\xrightarrow{}$ is replaced by $\xRightarrow{}$ in the original automata. \begin{theorem} \label{thm:weak_thm} Let $A =\langle S, \Sigma, E, s_0\rangle$ and $A' =\langle S', \Sigma_{\mathcal{F}}, E', s_0' \rangle$. 
$\M \subseteq S \times S'$ (considering $A^M$ instead of $A$) is a weak masking simulation if and only if: \begin{enumerate}[(A)] \item $s_0 \mathrel{\M} s'_0$, and \item for all $s \in S, s' \in S'$ with $s \mathrel{\M} s'$ and all $e \in \Sigma \cup \{\tau\}$ the following holds: \begin{enumerate}[(1)] \item if $(s \xRightarrow{e} t) \in E_W$ then $\exists\; t' \in S': (s' \xRightarrow{e} t' \in E_W' \wedge t\mathrel{\M}t')$; \item if $(s' \xRightarrow{e} t') \in E_W'$ then $\exists \; t \in S: (s \xRightarrow{e} t \in E_W \wedge t \mathrel{\M} t')$; \item if $(s' \xRightarrow{F} t') \in E_W'$ for some $F \in \mathcal{F}$ then $\exists\; t \in S: (s \xRightarrow{M} t \in E_W \wedge t \mathrel{\M} t')$ \end{enumerate} \end{enumerate} \end{theorem} The proof of this theorem is straightforward, following the same ideas as Milner in \cite{Milner89}. A natural way to check weak bisimilarity is to \emph{saturate} the transition system \cite{FernandezM91,Milner89} and then check strong bisimilarity on the saturated transition system. Similarly, Theorem~\ref{thm:weak_thm} allows us to compute weak masking simulation by reducing it to strong masking simulation. Note that $\xRightarrow{e}$ can be alternatively defined by: \[ \dfrac{p \xrightarrow{e} q}{p\xRightarrow{e} q} \hspace{2cm} \dfrac{}{p\xRightarrow{\tau} p} \hspace{2cm} \dfrac{p\xRightarrow{\tau} p_1 \xRightarrow{e} q_1 \xRightarrow{\tau} q}{p\xRightarrow{e} q}~(e \notin \{M\} \cup \mathcal{F}) \] As a running example, we consider a memory cell that stores a bit of information and supports reading and writing operations, presented in a state-based form in \cite{DemasiCMA17}. A state in this system maintains the current value of the memory cell ($m=i$, for $i=0,1$), writing allows one to change this value, and reading returns the stored value. Obviously, in this system the result of a reading depends on the value stored in the cell. 
Thus, a property that one might associate with this model is that the value read from the cell coincides with that of the last writing performed in the system. A potential fault in this scenario occurs when a cell unexpectedly loses its charge, and its stored value turns into another one (e.g., it changes from $1$ to $0$ due to charge loss). A typical technique to deal with this situation is \emph{redundancy}: use three memory bits instead of one. Writing operations are performed simultaneously on the three bits. Reading, on the other hand, returns the value that is repeated at least twice in the memory bits; this is known as \emph{voting}. We take the following approach to model this system. Labels $\text{W}_0, \text{W}_1, \text{R}_0,$ and $\text{R}_1$ represent writing and reading operations: $\text{W}_0$ (resp. $\text{W}_1$) writes a zero (resp. one) to the memory, and $\text{R}_0$ (resp. $\text{R}_1$) reads a zero (resp. one) from the memory. Figure~\ref{figure:exam_1_mem_cell} depicts four transition systems. The leftmost one represents the nominal system for this example (denoted as $A$). The second one from the left characterizes the nominal transition system augmented with masking transitions, i.e., $A^M$. The third and fourth transition systems are fault-tolerant implementations of $A$, named $A'$ and $A''$, respectively. Note that $A'$ contains one fault, while $A''$ considers two faults. Both implementations use triple redundancy; intuitively, state $\text{t}_0$ contains the three bits with value zero and $\text{t}_1$ contains the three bits with value one. Moreover, state $\text{t}_2$ is reached when one of the bits was flipped (either $001$, $010$ or $100$). In $A''$, state $\text{t}_3$ is reached after a second bit is flipped (either $011$ or $101$ or $110$) starting from state $\text{t}_0$. 
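The effect of the two faults on the voting mechanism can be sketched directly; this is a plain illustration of the informal description above, and the function names are ours:

```python
def read(bits):
    # voting: return the value held by at least two of the three bits
    return 1 if sum(bits) >= 2 else 0

def write(value):
    # writing sets all three redundant bits simultaneously
    return [value, value, value]

def fault(bits, i):
    # bit i unexpectedly loses its charge and flips
    flipped = list(bits)
    flipped[i] ^= 1
    return flipped

cell = write(0)          # state t0: 000
cell = fault(cell, 0)    # state t2: one bit flipped (100)
assert read(cell) == 0   # the fault is masked by voting
cell = fault(cell, 1)    # state t3: a second bit flipped (110)
assert read(cell) == 1   # reading now returns 1: the fault is observable
```

A single flip is thus invisible to any reader, while two flips change the outcome of voting, which is precisely why $A'$ can be masked and $A''$ cannot.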
\begin{figure}[h] \begin{center} \includegraphics[scale=0.45]{example_1_cell_mem.eps} \vspace{-0.8cm} \caption{Transition systems for the memory cell.} \vspace{-0.5cm} \label{figure:exam_1_mem_cell} \end{center} \end{figure} It is straightforward to see that there exists a relation of masking fault-tolerance between $A^M$ and $A'$, as witnessed by the relation $\M = \{(\text{s}_0, \text{t}_0), (\text{s}_1, \text{t}_1), (\text{s}_0, \text{t}_2)\}$. It is routine to check that $\M$ satisfies the conditions of Definition \ref{def:masking_rel}. On the other hand, there does not exist a masking relation between $A^M$ and $A''$ because state $\text{t}_3$ needs to be related to state $\text{s}_0$ in any masking relation. This state can only be reached by executing faults, which are necessarily masked with $M$-transitions. However, note that, in state $\text{t}_3$, we can read a $1$ (transition $\text{t}_3 \xrightarrow{\text{R}_1} \text{t}_3$) whereas, in state $\text{s}_0$, we can only read a $0$. \subsection{Masking Simulation Game} \label{subsec:mask_sim_game} We define a masking simulation game for two transition systems (the specification of the nominal system and its fault-tolerant implementation) that captures masking fault-tolerance. We first define the masking game graph, where we have two players named, by convenience, the \emph{refuter} ($R$) and the \emph{verifier} ($V$). \begin{definition} \label{def:strong_masking_game_graph} Let $A=\langle S, \Sigma, E, s_0\rangle$ and $A'=\langle S', \Sigma_{\mathcal{F}}, E', s'_0 \rangle$. 
The \emph{strong masking game graph} $\mathcal{G}_{A^M,A'} = \langle S^G, S_R, S_V, \Sigma^G, E^G, {s_0}^G \rangle$ for two players is defined as follows: \begin{itemize} \item $\Sigma^G = \Sigma_{\mathcal{M}} \cup \Sigma_{\mathcal{F}}$ \item $S^G = (S \times ( \Sigma_{\mathcal{M}}^1 \cup \Sigma_{\mathcal{F}}^2 \cup\{\#\}) \times S' \times \{ R, V \}) \cup \{s_{err}\}$ \item The initial state is $s_0^G = \langle s_0, \#, s'_0, R \rangle$, where the refuter starts playing \item The refuter's states are $S_R = \{ (s, \#, s', R) \mid s \in S \wedge s' \in S' \} \cup \{s_{err}\}$ \item The verifier's states are $S_V = \{ (s, \sigma, s', V) \mid s \in S \wedge s' \in S' \wedge \sigma \in \Sigma^G\setminus\{M\}\}$ \end{itemize} and $E^G$ is the minimal set satisfying: \begin{itemize} \item $\{ (s, \#, s', R) \xrightarrow{\sigma} (t, \sigma^{1}, s', V) \mid \exists\;\sigma \in \Sigma: s \xrightarrow{\sigma} t \in E \} \subseteq E^G$, \item $\{ (s, \#, s', R) \xrightarrow{\sigma} (s, \sigma^{2}, t', V) \mid \exists\;\sigma \in \Sigma_{\mathcal{F}}: s' \xrightarrow{\sigma} t' \in E' \} \subseteq E^G$, \item $\{ (s, \sigma^2, s', V) \xrightarrow{\sigma} (t, \#, s', R) \mid \exists\;\sigma \in \Sigma: s \xrightarrow{\sigma} t \in E \} \subseteq E^G$, \item $\{ (s, \sigma^1, s', V) \xrightarrow{\sigma} (s, \#, t', R) \mid \exists\;\sigma \in \Sigma: s' \xrightarrow{\sigma} t' \in E' \} \subseteq E^G$, \item $\{ (s, F_i^2, s', V) \xrightarrow{M} (t, \#, s', R) \mid \exists\;s \xrightarrow{M} t \in E^M \} \subseteq E^G$, for any $F_i \in \mathcal{F}$ \item If there is no outgoing transition from some state $s$ then transitions $s \xrightarrow{\sigma} s_{err}$ and $s_{err} \xrightarrow{\sigma} s_{err}$ for every $\sigma \in \Sigma$, are added. \end{itemize} \end{definition} The intuition of this game is as follows. 
The refuter chooses transitions of either the specification or the implementation to play, and the verifier tries to match her choice; this is similar to the bisimulation game \cite{Stirling99}. However, when the refuter chooses a fault, the verifier must match it with a masking transition ($M$). The intuitive reading of this is that the fault-tolerant implementation masks the fault in such a way that the occurrence of this fault cannot be noticed from the users' side. $R$ wins if the game reaches the error state, i.e., $s_{err}$. On the other hand, $V$ wins when $s_{err}$ is not reached during the game. (This is basically a reachability game \cite{Jurd11}.) We say $\text{Ver}(v)$ (resp. $\text{Ref}(v)$) if $v$ is a verifier's node (resp. refuter's node). A \emph{weak masking game graph} $\mathcal{G}^W_{A^M,A'}$ is defined in the same way as the strong masking game graph in Def.~\ref{def:strong_masking_game_graph}, with the exception that $\Sigma_{\mathcal{M}}$ and $\Sigma_{\mathcal{F}}$ may contain $\tau$, and the set of labelled transitions (denoted as $E_W^G$) is now defined using the weak transition relations (i.e., $E_W$ and $E_W'$) from the respective transition systems. Figure~\ref{figure:exam_2_mem_cell_gg_two_faults} shows a part of the strong masking game graph for the running example, considering the transition systems $A^M$ and $A''$. We can clearly observe in the game graph that the verifier cannot mimic the transition $(s_0, \#, t_3, R) \xrightarrow{R_1^2} (s_0, R_1^2, t_3, V)$ selected by the refuter, which reads a $1$ at state $t_3$ in the fault-tolerant implementation. This is because the verifier can only read a $0$ at state $s_0$. Then, $s_{err}$ is reached and the refuter wins. As expected, there is a strong masking simulation between $A$ and $A'$ if and only if the verifier has a winning strategy in $\mathcal{G}_{A^M,A'}$. 
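The construction of the game graph can be sketched operationally as follows. This is a simplified rendering of Definition~\ref{def:strong_masking_game_graph}: transition systems are encoded as sets of (state, label, state) triples (an encoding of our own), and, for brevity, only verifier states without an answer are wired to the error state:

```python
def masking_game_edges(S, E, Sp, Ep, faults):
    """Edges of the strong masking game graph for A = (S, E) and
    A' = (Sp, Ep).  Nodes are tuples (s, pending, s', player), where
    `pending` is '#' or a pair (label, side) with side 1 (spec moved)
    or 2 (implementation moved)."""
    ERR = "err"
    edges = set()
    for s in S:
        for sp in Sp:
            r = (s, "#", sp, "R")
            for (x, e, t) in E:            # refuter plays a spec move
                if x == s:
                    edges.add((r, e, (t, (e, 1), sp, "V")))
            for (x, e, tp) in Ep:          # refuter plays an impl. move
                if x == sp:
                    edges.add((r, e, (s, (e, 2), tp, "V")))
    verifiers = {v for (_, _, v) in edges}
    for (s, pending, sp, _) in verifiers:
        v = (s, pending, sp, "V")
        e, side = pending
        if side == 2 and e in faults:
            # a fault must be answered by the masking self-loop of A^M
            edges.add((v, "M", (s, "#", sp, "R")))
        elif side == 2:                    # match an impl. move in the spec
            for (x, lbl, t) in E:
                if x == s and lbl == e:
                    edges.add((v, lbl, (t, "#", sp, "R")))
        else:                              # match a spec move in the impl.
            for (x, lbl, tp) in Ep:
                if x == sp and lbl == e:
                    edges.add((v, lbl, (s, "#", tp, "R")))
    # verifier states with no possible answer lose: wire them to the error
    answered = {u for (u, _, _) in edges if u[-1] == "V"}
    for v in verifiers - answered:
        edges.add((v, "no-match", ERR))
    return edges
```

On the memory cell example, the error state is unreachable from the initial node for the single-fault implementation $A'$, while for $A''$ the refuter can reach it by playing two faults followed by the unanswerable read of a $1$.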
\begin{theorem} \label{thm:wingame_strat} Let $A=\langle S, \Sigma, E, s_0\rangle$ and $A'=\langle S', \Sigma_{\mathcal{F}}, E', s_0' \rangle$. $A \preceq_{m} A'$ iff the verifier has a winning strategy for the strong masking game graph $\mathcal{G}_{A^M,A'}$. \end{theorem} By Theorems~\ref{thm:weak_thm} and~\ref{thm:wingame_strat}, this result carries over to the weak masking game. \begin{theorem} \label{thm:weak_wingame_strat} Let $A=\langle S, \Sigma \cup \{\tau\}, E, s_0\rangle$ and $A'=\langle S', \Sigma_{\mathcal{F}} \cup \{\tau\}, E', s_0' \rangle$. $A \preceq^w_{m} A'$ iff the verifier has a winning strategy for the weak masking game graph $\mathcal{G}^W_{A^M,A'}$. \end{theorem} Using the standard properties of reachability games, we get the following property. \begin{theorem} For any $A$ and $A'$, the strong (resp.\ weak) masking game on $\mathcal{G}_{A^M, A'}$ (resp.\ $\mathcal{G}^W_{A^M, A'}$) is determined. Furthermore, its winner can be computed in time $O(|E^G|)$ (resp.\ $O(|E_W^G|)$). \end{theorem} \begin{figure} [h] \begin{center} \vspace{-0.6cm} \includegraphics[scale=0.49]{ex1_cell_mem_game_graph_two_faults.eps} \vspace{-0.7cm} \caption{Part of the masking game graph for the memory cell model with two faults.} \label{figure:exam_2_mem_cell_gg_two_faults} \vspace{-0.8cm} \end{center} \end{figure} The set of winning states for the refuter can be defined in a standard way from the error state \cite{Jurd11}. We adapt ideas in \cite{Jurd11} to our setting. 
For $i,j\geq 0$, sets $U^j_i$ are defined as follows: \begin{align} U^0_i =& U^j_0 = \emptyset \label{def:of:Uji}\\ U_1^1 =& \{s_{err}\}, \hspace{10cm} {}\notag \end{align} \begin{align*} U_{i+1}^{j+1} &= \{v' \mid Ref(v') \wedge post(v') \cap U_{i+1}^j \neq \emptyset\} \\ &\textstyle \hspace{-1em}{}\cup \{v' \mid Ver(v') \wedge post(v') \subseteq \bigcup_{j'\leq j} U_{i+1}^{j'} \wedge post(v') \cap U^j_{i+1} \neq \emptyset \wedge \pi_2(v') \notin \mathcal{F} \} \\ &\textstyle \hspace{-1em}{}\cup \{v' \mid Ver(v') \wedge post(v') \subseteq \bigcup_{i'\leq i, j' \leq j}U_{i'}^{j'} \wedge post(v') \cap U^j_{i} \neq \emptyset \wedge \pi_2(v') \in \mathcal{F} \} \end{align*} then $U^k = \bigcup_{i \geq 0} U^k_i$ and $U = \bigcup_{k \geq 0} U^k$. Intuitively, the subindex $i$ in $U^k_i$ indicates that $s_{err}$ is reached after at most $i-1$ faults have occurred. The following lemma can be proven straightforwardly using standard techniques of reachability games \cite{AlfaroHK07}. \begin{lemma} \label{lemma:RefWinStrat} The refuter has a winning strategy in $\mathcal{G}_{A^M, A'}$ (or $\mathcal{G}^W_{A^M, A'}$) iff $s_{init} \in U^k$, for some $k$. \end{lemma} \subsection{Quantitative Masking} In this section, we extend the strong masking simulation game introduced above with quantitative objectives to define the notion of masking fault-tolerance distance. Note that we use the attribute ``quantitative'' in a non-probabilistic sense. 
\begin{definition} For transition systems $A$ and $A'$, the \emph{quantitative strong masking game graph} $\mathcal{Q}_{A^M, A'} = \langle S^G, S_R, S_V, \Sigma^G, E^G, s_{0}^G, v^G \rangle$ is defined as follows: \begin{itemize} \item $\mathcal{G}_{A^M, A'}=\langle S^G, S_R, S_V, \Sigma^G, E^G, s_{0}^G \rangle$ is defined as in Definition~\ref{def:strong_masking_game_graph}, \item $ v^G(s \xrightarrow{e} s') = (\chi_{\mathcal{F}} (e), \chi_{s_{err}}(s'))$ \end{itemize} where $\chi_{\mathcal{F}}$ is the characteristic function over set $\mathcal{F}$, returning $1$ if $e \in \mathcal{F}$ and $0$ otherwise, and $\chi_{s_{err}}$ is the characteristic function over the singleton set $\{s_{err}\}$. \end{definition} Note that the cost function returns a pair of numbers instead of a single number. It is straightforward to encode this pair as a single number, but we refrain from doing so for the sake of clarity. We remark that the \emph{quantitative weak masking game graph} $\mathcal{Q}^W_{A^M, A'}$ is defined in the same way as the game graph defined above but using the weak masking game graph $\mathcal{G}^W_{A^M, A'}$ instead of $\mathcal{G}_{A^M, A'}$. Given a quantitative strong masking game graph with the weight function $v^G$ and a play $\rho = \rho_0 \sigma_0 \rho_1 \sigma_1 \rho_2 \ldots$, for all $i \geq 0$, let $v_i = v^G(\rho_i \xrightarrow{\sigma_i} \rho_{i+1})$. We define the \emph{masking payoff function} as follows: \[ f_{m}(\rho) = \lim_{n \rightarrow \infty} \frac{\pr{1}{v_n}}{1+ \sum^{n}_{i=0} \pr{0}{v_i}}, \] which is proportional to the inverse of the number of masking movements made by the verifier. To see this, note that the numerator of $\frac{\pr{1}{v_n}}{1+ \sum^{n}_{i=0} \pr{0}{v_i}}$ will be $1$ only when we reach the error state; that is, on those paths not reaching the error state this formula returns $0$. Furthermore, if the error state is reached, then the denominator will count the number of fault transitions taken until the error state. 
All of them, except the last one, were masked successfully. The last fault, instead, though the verifier attempts to mask it, eventually leads to the error state. That is, the transitions with value $1$ are those corresponding to faults. The others are mapped to $0$. Notice also that if $s_{err}$ is reached in $v_n$ without the occurrence of any fault, the nominal part of the implementation does not match the nominal specification, in which case $\frac{\pr{1}{v_n}}{1+ \sum^{n}_{i=0} \pr{0}{v_i}}=1$. Then, the refuter wants to maximize the value of any run; that is, she will try to execute faults leading to the state $s_{err}$. In contrast, the verifier wants to avoid $s_{err}$, and thus she will try to mask faults with actions that take her away from the error state. More precisely, the value of the quantitative strong masking game for the refuter is defined as $val_R(\mathcal{Q}_{A^M,A'}) = \sup_{\pi_R \in \Pi_R} \; \inf_{\pi_V \in \Pi_V} f_{m}(out(\pi_R, \pi_V))$. Analogously, the value of the game for the verifier is defined as $val_V(\mathcal{Q}_{A^M,A'}) = \inf_{\pi_V \in \Pi_V} \; \sup_{\pi_R \in \Pi_R} f_{m}(out(\pi_R, \pi_V))$. Then, we define the value of the quantitative strong masking game, denoted by $\mathop{\textup{val}}(\mathcal{Q}_{A^M,A'})$, as the value of the game either for the refuter or the verifier, i.e., $\mathop{\textup{val}}(\mathcal{Q}_{A^M,A'}) = val_R(\mathcal{Q}_{A^M,A'}) = val_V(\mathcal{Q}_{A^M,A'})$. This can be done because quantitative strong masking games are determined, as we prove below in Theorem~\ref{thm:mask_game_det}. \begin{definition} \label{def:mask_dist} Let $A$ and $A'$ be transition systems. 
The \emph{strong masking distance} between $A$ and $A'$, denoted by $\delta_{m}(A, A')$, is defined as: $\delta_{m}(A, A') = \mathop{\textup{val}}(\mathcal{Q}_{A^M,A'}).$ \end{definition} We would like to remark that the \emph{weak masking distance} $\delta_{m}^W$ is defined in the same way for the quantitative weak masking game graph $\mathcal{Q}^W_{A^M,A'}$. Roughly speaking, we are interested in measuring the number of faults that can be masked. The value of the game is essentially determined by the faulty and masking labels on the game graph and by how the players can find a strategy that leads to (or avoids) the state $s_{err}$, independently of whether silent actions are present. In the following, we state some basic properties of this kind of games. As already anticipated, quantitative strong masking games are determined: \begin{theorem} \label{thm:mask_game_det} For any quantitative strong masking game $\mathcal{Q}_{A^M, A'}$ with payoff function $f_{m}$, \[\textstyle \inf_{\pi_V \in \Pi_V} \; \sup_{\pi_R \in \Pi_R} f_{m}(out(\pi_R, \pi_V)) = \sup_{\pi_R \in \Pi_R} \; \inf_{\pi_V \in \Pi_V} f_{m}(out(\pi_R, \pi_V))\] \end{theorem} The value of the quantitative strong masking game can be calculated as stated below. \begin{theorem} \label{thm:quant_game} Let $\mathcal{Q}_{A^M,A'}$ be a quantitative strong masking game. Then, $\mathop{\textup{val}}(\mathcal{Q}_{A^M,A'}) = \frac{1}{w}$, with $w = \min \{ i \mid \exists j : s_{init} \in U^j_i \}$, whenever $s_{init} \in U$, and $\mathop{\textup{val}}(\mathcal{Q}_{A^M,A'})=0$ otherwise, where the sets $U^j_i$ and $U$ are defined in equation~(\ref{def:of:Uji}). \end{theorem} Note that the sets $U^j_i$ can be calculated using a bottom-up breadth-first search from the error state. Thus, the strategies for the refuter and the verifier can be defined using these sets, without taking into account the history of the play. 
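To illustrate how this value can be computed in practice, the following sketch (our own rendering, not the tool of Section~\ref{sec:experimental_eval}; the edge encoding as (source, label, target) triples and the player tag in the last node component are assumptions of the sketch) determines $1/w$ by a min-max value iteration in which the refuter minimises the number of fault transitions played before reaching $s_{err}$ and the verifier maximises it, the value being $0$ when the verifier can avoid $s_{err}$ forever:

```python
import math

def masking_game_value(edges, init, err, faults):
    """Value of the quantitative masking game: 1/(1 + k), where k is
    the optimal number of faults played before reaching `err`, and 0
    if the verifier can keep the play away from `err`.  Nodes other
    than `err` are tuples whose last component is 'R' (refuter,
    minimiser) or 'V' (verifier, maximiser)."""
    nodes = {u for (u, _, _) in edges} | {v for (_, _, v) in edges}
    succ = {}
    for (u, lbl, v) in edges:
        succ.setdefault(u, []).append((lbl, v))
    w = {v: math.inf for v in nodes}
    w[err] = 0
    changed = True
    while changed:                 # decreasing value iteration
        changed = False
        for u in nodes:
            if u == err or u not in succ:
                continue
            costs = [(1 if lbl in faults else 0) + w[v]
                     for (lbl, v) in succ[u]]
            best = min(costs) if u[-1] == "R" else max(costs)
            if best < w[u]:
                w[u] = best
                changed = True
    return 0.0 if math.isinf(w[init]) else 1.0 / (1.0 + w[init])
```

On a chain where the first fault is masked and a second fault leaves the verifier without an answer, this yields $1/3$, matching the distance of the memory cell example with two faults.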
Indeed, we have the following theorem: \begin{theorem} \label{thm:memoryless} Players $R$ and $V$ have memoryless winning strategies for $\mathcal{Q}_{A^M,A'}$. \end{theorem} Theorems~\ref{thm:mask_game_det}, \ref{thm:quant_game}, and~\ref{thm:memoryless} apply as well to $\mathcal{Q}^W_{A^M,A'}$. The following theorem states the complexity of determining the value of the two types of games. \begin{theorem} The quantitative strong (resp.\ weak) masking game can be determined in time $O(|S^G| + |E^G|)$ (resp.\ $O(|S^G| + |E_{W}^{G}|)$). \end{theorem} The next theorem states that, if $A$ and $A'$ are at distance $0$, there is a strong (or weak) masking simulation between them. \begin{theorem}\label{theorem:ref} For any transition systems $A$ and $A'$, \begin{inparaenum}[(i)] \item $\delta_{m}(A,A') = 0$ iff $A \preceq_{m} A'$, and \item $\delta_{m}^W(A,A') = 0$ iff $A \preceq^w_{m} A' $. \end{inparaenum} \end{theorem} This follows from Theorem \ref{thm:quant_game}. Noting that $A \preceq_{m} A$ (and $A \preceq^w_{m} A$) for any transition system $A$, we obtain that $\delta_{m}(A,A)=0$ (resp.\ $\delta_{m}^W(A,A)=0$) by Theorem \ref{theorem:ref}, i.e., both distances are reflexive. For our running example, the masking distance is $1/3$ with a redundancy of $3$ bits and considering two faults. This means that only one fault can be masked by this implementation. We can prove a version of the triangle inequality for our notion of distance. \begin{theorem} \label{thm:triang_ineq} Let $A = \langle S, \Sigma, E, s_0 \rangle$, $A' = \langle S', \Sigma_{\mathcal{F}'}, E', s'_0 \rangle$, and $A'' = \langle S'', \Sigma_{\mathcal{F}''}, E'', s''_0 \rangle$ be transition systems such that $\mathcal{F}' \subseteq \mathcal{F}''$. 
Then $\delta_{m}(A,A'') \leq \delta_{m}(A,A') + \delta_{m}(A', A'')$ and $\delta_{m}^W(A,A'') \leq \delta_{m}^W(A,A') + \delta_{m}^W(A', A'').$ \end{theorem} Reflexivity and the triangle inequality imply that both masking distances are directed semi-metrics \cite{CharikarMM06,AlfaroMRS08}. Moreover, it is interesting to note that the triangle inequality property has practical applications. When developing critical software, it is quite common to develop a first version of the software taking into account some possible anticipated faults. Later, after testing and running the system, further plausible faults could be observed. Consequently, the system is modified with additional fault-tolerant capabilities to be able to overcome them. Theorem \ref{thm:triang_ineq} states that incrementally measuring the masking distance between these different versions of the software provides an upper bound to the actual distance between the nominal system and its last fault-tolerant version. That is, if the sum of the distances obtained between the different versions is a small number, then we can ensure that the final system will exhibit an acceptable masking tolerance to faults w.r.t. the nominal system. \section{Related Work} \label{sec:related_work} In recent years, there has been a growing interest in quantitative generalizations of the boolean notion of correctness and the corresponding quantitative verification questions \cite{BokerCHK14,CernyHR12,Henzinger10,Henzinger13}. The framework described in \cite{CernyHR12} is the closest related work to our approach. The authors generalize the traditional notion of simulation relation to three different versions of simulation distance: \emph{correctness}, \emph{coverage}, and \emph{robustness}. These are defined using quantitative games with \emph{discounted-sum} and \emph{mean-payoff} objectives, two well-known cost functions. 
Similarly to that work, we also consider distances between purely discrete (non-probabilistic, untimed) systems. Correctness and coverage distances are concerned with the nominal part of the systems, and so faults play no role in them. On the other hand, robustness distance measures how many unexpected errors can be performed by the implementation in such a way that the resulting behavior is tolerated by the specification. So, it can be used to analyze the resilience of the implementation. Note that robustness distance can only be applied to correct implementations, that is, implementations that preserve the behavior of the specification but perhaps do not cover all of it. As noted in~\cite{CernyHR12}, bisimilarity sometimes implies a distance of $1$. In this sense, a greater degree of robustness (as defined in~\cite{CernyHR12}) is achieved by pruning critical points from the specification. Furthermore, the errors considered in that work are transitions mimicking the original ones but with different labels. In contrast to this, in our approach we consider that faults are injected into the fault-tolerant implementation, where their behavior is not restricted by the nominal system. This follows the idea of model extension in fault-tolerance, where faulty behavior is added to the nominal system. Further, note that when no faults are present, the masking distance between the specification and the implementation is $0$ when they are bisimilar, and it is $1$ otherwise. It is useful to note that the robustness distance of~\cite{CernyHR12} is not reflexive. We believe that all these definitions of distance between systems capture different notions useful for software development, and they can be used together, in a complementary way, to obtain an in-depth evaluation of fault-tolerant implementations.
The four depicted interfaces account for all the possible configurations of three hexagons meeting at a vertex, up to rotations and reflections.} \label{fig:fig1.2} \end{figure} \section{Magnetic Spin analogy in mechanical metamaterials} As a particular example of our general strategy, consider the anisotropic hexagonal building block with hinging facets presented in Fig.~\ref{fig:fig1}(a). Its soft deformation mode, in which the constituent links do not change in length, consists of deformations along the six symmetry directions such that a 2-in-4-out or 4-in-2-out rule applies, as indicated by the yellow arrows, see Fig.~\ref{fig:fig1}(b,c). The rule reduces its symmetry from six-fold to two-fold, around a director line marked in red in Fig.~\ref{fig:fig1}(d,e), so that $\pi/3$ rotations of the building block change its mechanical functionality. The resulting combinatorial metamaterial comprises an array of such blocks positioned with arbitrary orientations in a honeycomb lattice. This lattice is bipartite, see Fig.~\ref{fig:fig1}(d), with neighboring vertices alternating between sublattices A (blue) and B (orange). We map a deformation of a facet, indicated by an arrow in Fig.~\ref{fig:fig1} to a $+1$ ($-1$) spin if it winds anticlockwise (clockwise) around an A vertex, and conversely for a B vertex. We identify ferromagnetic or antiferromagnetic interactions between neighboring spins according to their states in the building block's lowest-energy deformation, as shown in Fig.~\ref{fig:fig1}(d,e). These bonds are determined by the mutual winding direction of the arrows around the vertices of the honeycomb lattice: ferromagnetic (dashed line) when both displacements wind in the same direction, and antiferromagnetic (solid line) when displacements wind in opposite directions, as indicated by the circular arrows in Fig.~\ref{fig:fig1}(d,e). 
Thus, a metamaterial specified by the orientations of all its building blocks maps to an Ising model of mixed ferromagnetic and antiferromagnetic bonds, thereby defining a bond distribution on the dual lattice. Here, the displacement arrows in Fig.~\ref{fig:fig1}, which sit on the facets of the hexagonal building blocks, constitute the sites of the kagome lattice, the dual of the honeycomb, and each metamaterial maps to a different bond distribution on the kagome lattice. Mechanical compatibility of a vertex in the hexagonal metamaterial is hence determined by the parity of antiferromagnetic bonds in the corresponding triangular plaquette of the kagome lattice, which can be inferred from the parity of director lines meeting at the central vertex, see Fig.~\ref{fig:fig1.2}; For an even number of antiferromagnetic bonds, as shown in Fig.~\ref{fig:fig1.2}(a), all three building blocks meeting at the vertex can simultaneously deform to their lowest-energy soft mode; If there is an odd number of antiferromagnetic bonds, as shown in Fig.~\ref{fig:fig1.2}(b), the spins are frustrated, meaning that the displacements cannot be assigned in a way that satisfies all interactions simultaneously, thus generating a topological mechanical defect, which is indicated with a black circle in Fig.~\ref{fig:fig1.2}(b). \section{Compatible metastructures} Lack of frustration in each plaquette implies that the entire emergent Ising model is described by what we call an even bond distribution and is thus unfrustrated, and the corresponding mechanical system is globally compatible. Note that in this system compatible configurations exhibit holographic order in the soft mode maintained by the alternating displacements of each pair of opposing facets. 
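The vertex-compatibility rule above admits a very compact computational statement. The following sketch is our own illustration, not code from the paper: encoding ferromagnetic bonds as $+1$ and antiferromagnetic bonds as $-1$, a triangular plaquette of the dual kagome lattice is frustrated, and hence hosts a topological mechanical defect, exactly when the product of its bond signs is $-1$.

```python
# Illustrative sketch (not the authors' code): a plaquette is frustrated iff
# it carries an odd number of antiferromagnetic (-1) bonds, i.e. iff the
# product of its +/-1 bond signs equals -1.

def is_frustrated(bonds):
    """bonds: iterable of +1/-1 signs on the edges of a plaquette."""
    parity = 1
    for b in bonds:
        parity *= b
    return parity == -1

# The two vertex types of Fig. 2: an even antiferromagnetic count is
# compatible, an odd count produces a topological mechanical defect.
compatible_vertex = (+1, -1, -1)   # two antiferromagnetic bonds
defective_vertex  = (+1, +1, -1)   # one antiferromagnetic bond
```

Rotating a hexagon permutes the bond signs around its six vertices, which is why local rotations can only shift defects around, never change their parity in isolation.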
A global soft mode can thus be uniquely determined by the deformations along the boundary of the metamaterial; In a rhombic metamaterial consisting of $N = L \times L$ building blocks, the soft mode of a compatible architecture can be described using the $4L-1$ principal axes running through it, see Fig.~\ref{fig:fig_holographic}(a), and written in the form: \begin{align} \begin{split} d_{\hat{\text{a}}}\left(i,j\right)&=\left(-1\right)^{a_{j}+i}\\ d_{\hat{b}}\left(i,j\right)&=\left(-1\right)^{b_{i}+j}\\ d_{\hat{c}}\left(i,j\right)&=\left(-1\right)^{c_{i+j-1}+s_{ij}}\\ s_{ij}&=\begin{cases} j & i+j\leq L+1\\ L+1-i & i+j\geq L+1 \end{cases} \end{split} \label{holographic_equations} \end{align} where $d_{\hat{\text{k}}}\left(i,j\right)$ denotes the displacement along direction $\hat{\text{k}}$ of the building block in the row $i$ and column $j$, where $k=a,b,c$, and $a_j, b_i, c_\ell$ describe the deformation along the boundary, see Fig.~\ref{fig:fig_holographic}(a). Hence, the number of compatible architectures $\Omega_0$ scales sub-extensively with the system size, $\ln \Omega_0 \sim \sqrt{N}$, with $N$ denoting the total number of hexagons, see Fig.~\ref{fig:fig_holographic}(b). We can bound $\Omega_0$ by $2^{2L-1} \leq \Omega_0 \leq 3^{2L-1}$, see Appendix~\ref{app_compatible_2D} for details. This is in contrast, for example, to the 2D combinatorial metamaterials studied in Ref.~\cite{Meeussen2020topological}, in which the freedom to individually orient the constituent triangles leads to an extensive number of compatible configurations. The scarcity of such configurations in the hexagonal case highlights the importance of studying architectures beyond the compatible scope. \begin{figure}[t] \includegraphics[width=0.98\columnwidth]{boundary_and_bounds_w_31_f.pdf} \caption{The deformation field of a global soft mode is described according to holographic order and set by the deformations along the boundary, e.g., the yellow hexagons. 
The holographic order defines $4L-1$ axes along which deformations alternate; $\text{a}_{1}\ldots\text{a}_{L},\text{b}_{1}\ldots\text{b}_{L},\text{c}_{1}\ldots\text{c}_{2L-1}$, with $L=4$ in this drawing. (b) The number of compatible rhombic $L \times L$ structures, exactly counted up to $L=10$ (black dots), falls between the lower and upper bounds (blue region), and is very close to the lower bound, where the leading order is $2^{2L-1}$.} \label{fig:fig_holographic} \end{figure} \begin{figure}[t] \includegraphics[width=0.98\columnwidth]{figure2_27.pdf} \caption{(a) Single defect, and (b) two defects (black circles), where director lines terminate or branch and where triangular plaquettes of the kagome lattice have an odd number of antiferromagnetic bonds. Loops of interaction bonds consisting of an even (green) or odd (blue) number of antiferromagnetic bonds. (c-f) Displacement conditions at the left and right boundaries (red arrows) lead to displacements of the facets (black arrows) and to finite elastic energy stored in each building block (color-coded hexagons). The color bar indicates the percentile of the stored energy, separately calculated for each case. Single defect (c,d): Compatible actuation on each one of the boundaries concentrates the stresses (strains) at the top (bottom) or bottom (top) half of the metamaterial. Two defects (e,f): Compatible actuation on opposing boundaries concentrates the stresses either between the defects (e) or around them (f), whereas the strains concentrate in the complementary region.} \label{fig:fig3} \end{figure} \section{Mechanical consequences of defects in 2D metamaterials} To understand defects from a global perspective, consider arbitrarily long loops of bonds in the kagome lattice. The compatibility of such loops is determined by the parity of antiferromagnetic interactions along the loop~\cite{Villain_1977}, which in turn, is set by the number of defects it contains, see Appendix~\ref{app:local_rot}. 
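The statement that loop parity counts enclosed defects can be checked mechanically; the sketch below is ours, under the stated sign convention, and not from the paper. The Wilson-loop sign is the product of the $\pm1$ bond signs along the loop, and when two plaquettes sharing an edge are merged, the shared bond enters both plaquette products and cancels, so the outer loop inherits the product of the enclosed plaquette signs.

```python
from functools import reduce
from itertools import product

# Hedged sketch (ours): Wilson-loop sign of a bond loop is the product of
# its +/-1 bond signs.
def loop_sign(bonds):
    return reduce(lambda p, b: p * b, bonds, 1)

# Two triangular plaquettes {a, b, e} and {c, d, e} glued along the shared
# edge e; the outer loop is {a, b, c, d}. Since e * e == 1, the outer-loop
# sign equals the product of the two plaquette signs for ANY bond assignment,
# so a loop is frustrated iff it encloses an odd number of defects.
for a, b, c, d, e in product((+1, -1), repeat=5):
    assert loop_sign((a, b, c, d)) == loop_sign((a, b, e)) * loop_sign((c, d, e))
```

This composition property is exactly the gauge invariance of the connection: deforming a loop across an unfrustrated plaquette multiplies its sign by $+1$.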
For example, any loop surrounding the defect in Fig.~\ref{fig:fig3}(a) will consist of an odd number of antiferromagnetic interactions, whereas any loop surrounding the two defects in Fig.~\ref{fig:fig3}(b) will consist of an even such number. This topological characterization is related to Wilson loops, also known as holonomies of a connection, which were previously studied in the context of frustrated spin systems. The connection is defined as the product of bonds along a line; $+1$ for ferromagnetic bonds, and $-1$ for antiferromagnetic bonds. The connection along a closed loop is gauge invariant, and tells us whether there is frustration or not~\cite{fradkin1978gauge}. There is a remarkable similarity between the pattern formed by the red director lines around mechanical defects, see Fig.~\ref{fig:fig3}(a,b), and the point defects present in 2D nematic liquid crystals, which possess a topological charge of winding number $\pm1/2$~\cite{Smalyukh_2020, alexander2012, shankar2020topological}. However, the discrete orientations and positions of the building blocks in the mechanical system do not allow for a definition of a winding number, and indeed the two types of mechanical defects are indistinguishable. Locally rotating building blocks changes the number of defects by an even amount, suggesting that in our metamaterials, the parity of the defects is the topologically protected quality, see Appendix~\ref{app:local_rot}. We study the mechanics of the metamaterial by means of a coarse-grained model, in which we describe the complex deformation field by scalar normal displacements defined for each facet, and by assigning harmonic interactions between these scalar displacements at each hexagonal building block. 
The deformations of the facets serve as continuous mechanical degrees of freedom, and we can therefore write the elastic energy in the metamaterial in the following way: \begin{align} E=\frac{1}{2}k_{ij}u_{i}u_{j}=\frac{1}{2}\mathbf{u^{T}}\mathbf{K}\mathbf{u}, \label{Omega0} \end{align} where $\mathbf{u}$ is a vector containing the displacements of all the facets in the metamaterial and $\mathbf{K}$ is a matrix containing the elastic interaction constants $k_{ij}$ between the facets $i$ and $j$. Symmetries reduce $k_{ij}$ to eight independent interaction constants $k_n$, see Fig.~\ref{fig:fig_interaction_constants}. If the arrangement of the hexagons leads to a compatible structure, the ground state of the corresponding unfrustrated Ising model describes the deformations of the global soft mode. However, if the system is incompatible, the lowest energy configurations of the corresponding Ising system do not necessarily describe its elastic deformations. A distinction can be made based on the different nature of the physical degrees of freedom; discrete spin degrees of freedom result in high energetic cost locally concentrated at specific (frustrated) interaction bonds, whereas continuous deformation degrees of freedom reduce the energetic cost by spreading the deviations from the local soft mode over the sample, see also Ref.~\cite{Merrigan2020}. \begin{figure}[t] \includegraphics[width=\columnwidth]{interaction_constants_42.pdf} \caption{(a) The coarse-grained variables $u_{i}$ describing the displacements of the facets. (b) For a hexagonal building block, symmetries allow eight different interaction constants between pairs of facets. The interacting facets are indicated by a connecting line, or by a circle for the diagonal terms.} \label{fig:fig_interaction_constants} \end{figure} In realistic metamaterials, the softest deformation mode of the building block generally has finite rigidity. 
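The quadratic form of Eq.~(\ref{Omega0}) can be evaluated directly. The sketch below is our own illustration with a small stand-in stiffness matrix, not the architecture-dependent eight-constant matrix of Fig.~\ref{fig:fig_interaction_constants}.

```python
import numpy as np

# Hedged sketch (ours): elastic energy of the coarse-grained model,
# E = (1/2) u^T K u, for a facet-displacement vector u and stiffness matrix K.
def elastic_energy(K, u):
    K = np.asarray(K, dtype=float)
    u = np.asarray(u, dtype=float)
    return 0.5 * u @ K @ u

# Toy example: two coupled facets with unit self-stiffness and coupling -0.5;
# the real K would encode the interaction constants k_1..k_8 of each hexagon.
K_toy = [[1.0, -0.5],
         [-0.5, 1.0]]
```

In practice $\mathbf{K}$ is assembled block by block from the per-hexagon constants, and the response to a boundary actuation follows from minimizing $E$ over the unconstrained facets.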
For simplicity, we ascribe zero energy cost to the deformation mode described in Fig.~\ref{fig:fig1}(b,c). This translates to the condition of a vanishing net force acting on the facets, and results in two independent equations describing the interaction constants $k_n$, \begin{align} \begin{split} k_{1}&=2k_{4}+2k_{7}-k_{3},\\ k_{2}&=k_{4}-k_{5}-k_{6}+k_{7}-k_{8}, \label{ki} \end{split} \end{align} see Appendix~\ref{app:mech_model} for further details on selecting the values of the interaction constants and solving the mechanical response. Our model and calculations can be easily adjusted for finite rigidity of the softest mode, and we do not expect qualitative differences as a result. To understand how defects can be harnessed to steer the stress distribution, note that actuating a facet of a building block defines the compatible actuation of any of its neighboring facets, given by satisfying the interaction bond between the two facets. Compatible actuation can therefore be defined along any path in the metamaterial, but can only be defined along loops containing an even number of antiferromagnetic bonds, i.e., surrounding an even number of defects. Consider first an architecture with a single defect as portrayed in Fig.~\ref{fig:fig3}(a); any loop winding around it would have an odd number of antiferromagnetic interactions and thus cannot be actuated compatibly. By setting compatible actuations along the opposing left and right boundaries of the metamaterial, we can control the location of the compatible and incompatible regions, thereby steering the stresses and strains to complementary parts of the system: when the actuation along the left boundary can be compatibly extended towards the actuation along the right boundary using a path below the defect, stresses concentrate above the defect, coinciding with a region of vanishing deformations, see Fig.~\ref{fig:fig3}(c). 
If we then flip the actuation of one of the boundaries, so that the left and right boundaries can now be compatibly connected via a path above the defect, stresses and vanishing deformations concentrate below the defect, see Fig.~\ref{fig:fig3}(d). In a similar manner, when the system contains multiple defects, as shown in Fig.~\ref{fig:fig3}(b), the regions between the defects and the boundaries can be made stressed or strained in an alternating manner, depending on the chosen compatible boundary actuation, see Fig.~\ref{fig:fig3}(e,f). Note that the topological signature of a defect in our system, an odd number of antiferromagnetic interactions along a loop seemingly seeking to invert the deformation at the loop's origin, is reminiscent of the topological structure of nonorientable ribbons~\cite{Bartolo2019}. Therefore, it is instructive to compare their mechanical response: both systems feature a region of vanishing deformations and a region of vanishing stresses. The latter is maximally separated from the applied boundary actuations, whereas the location of the former is system-dependent. In elastic ribbons, the linear constitutive relations between stress and strain dictate that the region of vanishing deformation coincides with that of vanishing stresses. In our system, however, the local soft (floppy) mode violates these simple relations, and finite deformations persist in the region of vanishing stresses that compatibly connects the two boundaries. \begin{figure}[t] \includegraphics[width=0.98\columnwidth]{figure3_26.pdf} \caption{(a) 3D cubic building block and its soft deformation mode (reproduced from Ref.~\cite{coulais2016combinatorial}). (b) Deformation sign between two adjacent facets is preserved (flipped) if deformations wind in the same (opposite) direction with respect to the common edge between them. Ferromagnetic sign-preserving (antiferromagnetic sign-flipping) interactions are indicated by dashed (solid) lines connecting the two facets. 
A red cross drawn perpendicular to the sign-flipping interactions designates the orientation of the building block. (c) Top view of a compatible and (d) an incompatible edge consisting of an even (odd) number of antiferromagnetic interactions, as indicated by white (black) circle. The number of antiferromagnetic interactions can be inferred from the parity of red lines meeting at the central edge.} \label{fig:fig4} \end{figure} \section{Mechanical consequences of defects in 3D metamaterials} We can extend our simple approach to 3D systems, which are usually much harder to analyze. Consider the class of combinatorial metamaterials presented in Ref.~\cite{coulais2016combinatorial}, where cubic building blocks possess the anisotropic soft mode of deformation shown in Fig.~\ref{fig:fig4}(a). Similar to our 2D hexagonal metamaterials, the holographic order maintained by the building block's soft deformation mode results in sub-extensive scaling of compatible architectures with system size, see Appendix~\ref{sec:nonper3d}. Here too, we define ferromagnetic and antiferromagnetic bonds between adjacent arrows describing deformations in the building block, according to whether or not they maintain the same winding direction around the shared lattice edge between them, as depicted by the dashed and solid lines in Fig.~\ref{fig:fig4}(b). Again, compatibility is associated with parity of antiferromagnetic interactions along closed loops. However, simple connectedness is removed by point defects in 2D, but by line defects in 3D. This has well known consequences in materials: for instance, dislocations are point defects in 2D, but line defects in 3D. Similarly, incompatibilities are described as line defects in this 3D system while they are point defects in the 2D system~\cite{alexander2012}, see Fig.~\ref{fig:fig1.2}(b). In 2D, our elementary loops on the dual lattice wind around lattice vertices. 
In 3D, they wind around the shared edge of four cubes, see Fig.~\ref{fig:fig4}(c,d), which is identified as a defect if the number of antiferromagnetic bonds surrounding it is odd, as shown in Fig.~\ref{fig:fig4}(d). Because the parity of antiferromagnetic interactions along a 3D loop must remain unchanged as it morphs between the facets and over the non-frustrated lattice edges of an even bond distribution, defected edges must join to form defect lines. These must either close into loops, or extend between the boundaries of the system~\cite{baardink2018}. Starting from a compatible configuration and rotating a single building block leads to two parallel loops of frustrated edges. In that sense, mechanical defects in 3D are reminiscent of the topologically neutral disclination loops seen in 3D active nematics~\cite{Duclos1120}. To study the mechanics of the system, we apply the coarse-grained model described in Eq.~(\ref{Omega0}) to the metamaterial comprised of the cubic building blocks described in Fig.~\ref{fig:fig4}(a). We can identify the facets of the cubic building blocks with those of the hexagonal building block, and use Eq.~(\ref{ki}) together with $k_4=k_7$ and $k_5=k_6$ to describe the interaction constants (a total of six independent interaction constants). Textured actuation along the boundary of the metamaterial can steer strains and stresses around the complex lines of frustrated edges in different fashions, giving rise to different mechanical functionalities for a given structure. Consider first the simple extension from 2D to 3D, namely frustrated edges connected to form a straight defect line terminating on opposing faces of the metacube. The metacube can be compatibly actuated on the opposing defect-free faces (parallel to the $(y,z)$ plane) in such a way that stresses can be steered around the defect line, and can be localized in one half of the material, whereas the strains are larger in the other half, cf.\ Fig.~\ref{fig:fig5}(a). 
While this scenario is reminiscent of 2D stress steering, the structure's extra dimension offers a richer plethora of possibilities. For instance, by actuating the same metacube through its incompatible faces (those parallel to the $(x,z)$ plane), we can generate more complex response patterns such as a twisted stressed region, as shown in Fig.~\ref{fig:fig5}(b). In this case, since we cannot force the entire $(x,z)$ faces in a compatible manner, we introduce a cut running from the location of the defect to the system's boundary, and do not actuate along this cut. When the remaining face is actuated in a compatible manner, stresses concentrate along the designated cut. We set the cuts on two opposing faces to be orthogonal to one another, thus causing a 3D twist in the stress concentration inside the metamaterial, cf. Fig.~\ref{fig:fig5}(b). The other fundamental defect topology we consider is a closed defect loop, see Fig.~\ref{fig:fig5}(c,d). Here, by compatibly actuating opposing facets parallel to the $(x,z)$ plane, we can concentrate the stresses outside or inside the loop. \begin{figure}[t] \includegraphics[width=0.98\columnwidth]{figure4_41.pdf} \caption{(a) Compatible actuation on the back and front faces can concentrate stresses beneath the defect line. (b) Twisting the stressed region through actuation on the incompatible left and right faces. (c,d) Compatible actuation on the front and back faces concentrates stresses outside (c) or inside (d) a defect loop. (e) Compatible actuation on the front and back faces concentrates stresses on two separated quadrants. Color bar indicates the percentile of the stored energy, separately calculated for each case. The faces on which the actuation is applied are indicated by red frames. (f) Partial cross section close to the centered defect line of the structures in (a,b), showing the non-periodic internal architecture. In this top view, building blocks oriented along the $y$ axis are represented by a red cross (cf. 
Fig.~\ref{fig:fig4}(c,d)), whereas building blocks oriented along the $x$ and $z$ axes are represented by horizontal and vertical rectangles, respectively. All calculations are for metacubes of dimension $35 \times 35 \times 35$.} \label{fig:fig5} \end{figure} Finally, in complex topologies featuring multiple defect lines, such as the defect cross arrangement presented in Fig.~\ref{fig:fig5}(e), stress and strain concentrate in complementary regions that alternate around the defect lines with respect to the boundary conditions. Therefore, by compatibly actuating the facets opposing the defect cross (parallel to the $(x,z)$ plane), we can concentrate the stresses in two separate quadrants. Note that taking cross sections of the stress concentration maps through planes rotated around the $y$ axis results in images reminiscent of Fig.~\ref{fig:fig3}(e,f). Also note that our combinatorial approach allows us to generate these different defect patterns both with periodic and with non-periodic structures, cf. Fig.~\ref{fig:fig5}(f); however, the described features of the mechanical response remain unchanged, as shown in Appendix~\ref{app:incomp_3d}. \section{Discussion} The framework we present maps the soft modes of deformable building blocks to ferromagnetic and antiferromagnetic interactions on the underlying dual lattice of the metamaterial that is formed by these blocks. The orientations of all blocks in the structure define a bond distribution on this lattice, and that, in turn, dictates the compatibility, frustration, and topological defects of the combinatorial metamaterial. We provide detailed demonstrations for such combinatorial metamaterials constructed of two specific hexagonal and cubic building blocks. However, our framework is suitable for many types of metamaterials made of deformable blocks with arbitrary internal interaction rules. 
It also provides a platform to describe metamaterials with vacancies, or constructed by mixing different types of building blocks. Our approach enables programming metamaterials with complex defect patterns, as well as devising spatially textured actuations that yield different mechanical functionalities from a single sample. Controlling and steering the mechanical response in the bulk of 3D metamaterials could enable adaptive failure control, could potentially be implemented in nematic elastomers~\cite{White2015}, and may also lead to additional applications such as steering waves~\cite{Jin2020} or driving active matter~\cite{Peng2016, Zhang2019, Norton2020}. \begin{acknowledgments} We thank Corentin Coulais, Martin van Hecke, Roni Ilan, Yoav Lahini, Ron Lifshitz, Anne S. Meeussen, Carl Merrigan, Ivan Smalyukh, and Eial Teomy for fruitful discussions. This research was supported in part by the Israel Science Foundation Grants No. 968/16 and 1899/20, by the Israeli Ministry of Science and Technology, and by the National Science Foundation Grant No. NSF PHY-1748958. Y.S. thanks the Center for Nonlinear Studies at Los Alamos National Laboratory for its hospitality. The work of C.N. was carried out under the auspices of the U.S. DoE through the Los Alamos National Laboratory, operated by Triad National Security, LLC (Contract No. 892333218NCA000001). \end{acknowledgments}
\section{Introduction and main results} In this paper, we assume the reader is familiar with standard notations and basic results of Nevanlinna's value distribution theory; see \cite{[Goldberg],Hayman,Laine,YL,YY}. Some basic knowledge of complex dynamics of meromorphic functions is also needed; see \cite{[Berg93],ZhengJH2}. Let $f$ be a meromorphic function in the whole complex plane. We use $\sigma(f)$ and $\mu(f)$ to denote the order and lower order of $f$ respectively; see \cite[p.10]{YY} for the definitions.\\ Let $f^n$, $n\in \mathbb{N}$, denote the $n$th iterate of $f$. The Fatou set $F(f)$ of a transcendental meromorphic function $f$ is the subset of the plane $\mathbb{C}$ where the iterates $f^n$ of $f$ form a normal family. The complement of $F(f)$ in $\mathbb{C}$ is called the Julia set $J(f)$ of $f$. It is well known that $F(f)$ is open and completely invariant under $f$, and that $J(f)$ is closed and non-empty.\\ We denote $\Omega(\alpha,\beta)=\{z\in\mathbb{C}|\arg z\in(\alpha,\beta)\}$, where $0<\alpha<\beta<2\pi$. Given $\theta\in[0,2\pi)$, if $\Omega(\theta-\varepsilon,\theta+\varepsilon)\cap J(f)$ is unbounded for any $\varepsilon>0$, then we say that $J(f)$ has radial distribution with respect to the ray $\arg z=\theta$. Define $$\Delta(f)=\{\theta\in[0,2\pi)|J(f) \rm {\ has\ the\ radial\ distribution\ with\ respect\ to} \arg z=\theta\}.$$ Obviously, $\Delta(f)$ is closed and so measurable. We use $meas\Delta(f)$ to denote the linear measure of $\Delta(f)$. Many important results on the radial distribution of Julia sets of transcendental meromorphic functions have been obtained; see, for example, \cite{B3,Qiao1,Qiao2, QiuWu,WangS,ZhengJH3}. Qiao \cite{Qiao1} proved that $meas\Delta(f)=2\pi$ if $\mu(f)<1/2$ and $meas\Delta(f)\geq \pi/\mu(f)$ if $\mu(f)\geq1/2$, where $f(z)$ is a transcendental entire function of finite lower order. Recently, Huang et al.\ \cite{Huang1, Huang2} considered the radial distribution of Julia sets of entire solutions of linear complex differential equations. 
Their results are stated as follows.\\ \noindent{\bf Theorem A} \cite{Huang1} {\it Let $\{f_1,f_2,\ldots,f_n\}$ be a solution base of \begin{eqnarray}\label{1.1} f^{(n)}+A(z)f=0,\end{eqnarray} where $A(z)$ is a transcendental entire function with finite order, and denote $E=f_1f_2\ldots f_n$. Then $meas\Delta(E)\geq \min\{2\pi,\pi/\sigma(A)\}$}.\\ \noindent{\bf Theorem B} \cite{Huang2} {\it Let $A_i(z)(i=0,1,\ldots,n-1)$ be entire functions of finite lower order such that $A_0$ is transcendental and $m(r,A_i)=o(m(r,A_0)),(i=1,2,\ldots,n-1)$ as $r\rightarrow\infty$. Then every non-trivial solution $f$ of the equation \begin{eqnarray}\label{1.2} f^{(n)}+A_{n-1}f^{(n-1)}+\ldots+A_0f=0\end{eqnarray} satisfies $meas\Delta(f)\geq \min\{2\pi,\pi/\mu(A_0)\}$.} \\ For entire functions and their derivatives, the differences between their local properties can be astonishing, because a small disturbance of the parameter may cause a gigantic change of the dynamics for some given entire functions. So no one seems to believe that there is any neat relation between them in dynamical properties. However, Qiao \cite{Qiao3,Qiao2} proved that the Julia set of a transcendental entire function of \emph{finite lower order} and that of its derivative have a large amount of common radial distribution, and that their distribution densities influence each other. A natural question is what happens to the radial distribution of the Julia set between an entire function with \emph{infinite lower order} and its derivative. \\ It is easy to see that, by the logarithmic derivative lemma, the non-trivial entire solutions of equations \eqref{1.1} and \eqref{1.2} have infinite lower order; see details in \cite{Huang1} and \cite{Huang2}. In the present paper, we study the radial distribution of the Julia sets of the derivatives of entire solutions of equations \eqref{1.1} and \eqref{1.2} and try to answer the above question partially. 
Indeed, we obtain the following results.\\ \noindent{\bf Theorem 1.1} {\it Let $A_i(z)(i=0,1,\ldots,n-1)$ be entire functions of finite lower order such that $A_0$ is transcendental and $m(r,A_i)=o(m(r,A_0)),(i=1,2,\ldots,n-1)$ as $r\rightarrow\infty$. Then every non-trivial solution $f$ of the equation \eqref{1.2} satisfies $meas(\Delta(f)\cap \Delta(f^{(k)})) \geq \min\{2\pi,\pi/\mu(A_0)\}$, where $k$ is a positive integer.}\\ \noindent{\bf Corollary 1.1} {\it Under the hypothesis of Theorem 1.1, we have $meas(\Delta(f^{(k)})) \geq \min\{2\pi,\pi/\mu(A_0)\}$, where $k$ is a positive integer.}\\ Obviously, Theorem B is a corollary of Theorem 1.1. For entire solutions of equation \eqref{1.1}, we have\\ \noindent{\bf Corollary 1.2} {\it Assume that $f$ is any non-trivial solution of equation \eqref{1.1}. Then $meas(\Delta(f^{(k)})) \geq \min\{2\pi,\pi/\mu(A)\}$, where $k$ is a positive integer.}\\ Furthermore, we obtain the following.\\ \noindent{\bf Theorem 1.2} {\it Under the hypothesis of Theorem A, we have $meas(\Delta(E^{(k)})) \geq \min\{2\pi,\pi/\sigma(A)\}$, where $k$ is a positive integer.}\\ By Theorem 1.1, we moreover obtain the following corollary. \\ \noindent{\bf Corollary 1.3} {\it Suppose that $A_i(z)(i=0,1,\ldots,n-1)$ are entire functions satisfying $\sigma(A_j)<\mu(A_0)(j=1,2,\ldots,n-1)$ and $\mu(A_0)$ is finite. Then every non-trivial solution $f$ of the equation \eqref{1.2} satisfies $meas(\Delta(f)\cap \Delta(f^{(k)})) \geq \min\{2\pi,\pi/\mu(A_0)\}$, where $k$ is a positive integer.}\\ \setcounter{section}{1} \section{Preliminary lemmas} At first, we recall the Nevanlinna characteristic in an angle; see \cite{[Goldberg]}. We set $$\Omega(\alpha,\beta,r)=\{z:z\in\Omega(\alpha,\beta),|z|<r\};$$ $$\Omega(r,\alpha,\beta)=\{z:z\in\Omega(\alpha,\beta),|z|\geq r\}$$ and denote by $\overline{\Omega}(\alpha,\beta)$ the closure of $\Omega(\alpha,\beta)$. 
Let $g(z)$ be meromorphic on the angle $\overline{\Omega}(\alpha,\beta)$, where $\beta-\alpha\in(0,2\pi]$. Following \cite{[Goldberg]}, we define \begin{eqnarray} A_{\alpha,\beta}(r,g)&=&\frac{w}{\pi}\int_1^r\left(\frac{1}{t^w}-\frac{t^w}{r^{2w}}\right)\{\log^+|g(te^{i\alpha})|+\log^+|g(te^{i\beta})|\}\frac{dt}{t};\nonumber\\ B_{\alpha,\beta}(r,g)&=&\frac{2w}{\pi r^w}\int_{\alpha}^{\beta}\log^+|g(re^{i\theta})|\sin w(\theta-\alpha)d\theta;\nonumber\\ C_{\alpha,\beta}(r,g)&=&2\sum_{1<|b_n|<r}\left(\frac{1}{|b_n|^w}-\frac{|b_n|^w}{r^{2w}}\right)\sin w(\beta_n-\alpha),\nonumber \end{eqnarray} where $w=\pi/(\beta-\alpha)$, and $b_n=|b_n|e^{i\beta_n}$ are the poles of $g(z)$ in $\overline{\Omega}(\alpha,\beta)$, appearing according to their multiplicities. The Nevanlinna angular characteristic is defined as $$S_{\alpha,\beta}(r,g)=A_{\alpha,\beta}(r,g)+B_{\alpha,\beta}(r,g)+C_{\alpha,\beta}(r,g).$$ In particular, we denote the order of $S_{\alpha,\beta}(r,g)$ by $$\sigma_{\alpha,\beta}(g)=\limsup_{r\rightarrow\infty}\frac{\log S_{\alpha,\beta}(r,g)}{\log r}.$$\\ We call $W$ a hyperbolic domain if $\overline{\mathbb{C}}\backslash W$ contains at least three points, where $\overline{\mathbb{C}}$ is the extended complex plane. For $a\in\mathbb{C}\backslash W$, define $$C_{W}(a)=\inf\{\lambda_W(z)|z-a|: \forall z\in W\},$$ where $\lambda_W(z)$ is the hyperbolic density on $W$. It is well known that, if every component of $W$ is simply connected, then $C_W(a)\geq 1/2$.\\ \noindent{\bf Lemma 2.1.}\ (\cite[Lemma 2.2]{ZhengJH3}) {\it Let $f(z)$ be analytic in $\Omega(r_0,\theta_1,\theta_2)$, let $U$ be a hyperbolic domain, and let $f:\Omega(r_0,\theta_1,\theta_2)\rightarrow U$. If there exists a point $a\in \partial U\backslash \{\infty\}$ such that $C_U(a)>0$, then there exists a constant $d>0$ such that, for sufficiently small $\varepsilon>0$, we have $$|f(z)|=O(|z|^d),\ \ z\rightarrow\infty,\ z\in \Omega(r_0,\theta_1+\varepsilon,\theta_2-\varepsilon).
$$} The next lemma gives some estimates for the logarithmic derivatives of functions analytic in an angle. Before this, we recall the definition of an R-set; for reference, see \cite{Laine}. Set $B(z_n,r_n)=\{z:|z-z_n|<r_n\}$. If $\sum_{n=1}^{\infty}r_n<\infty$ and $z_n\rightarrow\infty$, then $\cup_{n=1}^{\infty}B(z_n,r_n)$ is called an R-set. Clearly, the set $\{|z|:z\in\cup_{n=1}^{\infty}B(z_n,r_n)\}$ is of finite linear measure.\\ \noindent{\bf Lemma 2.2.}\ (\cite[Lemma 2.2]{Huang2}) {\it Let $z=re^{i\psi}, r_0+1<r$ and $\alpha\leq \psi\leq\beta$, where $0<\beta-\alpha\leq2\pi$. Suppose that $n(\geq2)$ is an integer, and that $g(z)$ is analytic in $\Omega(r_0,\alpha,\beta)$ with $\sigma_{\alpha,\beta}(g)<\infty$. Choose $\alpha<\alpha_1<\beta_1<\beta$. Then, for every $\varepsilon_j\in(0,(\beta_j-\alpha_j)/2)(j=1,2,\ldots,n-1)$ outside a set of linear measure zero with $$\alpha_j=\alpha+\sum_{s=1}^{j-1}\varepsilon_s,\ \ \beta_j=\beta-\sum_{s=1}^{j-1}\varepsilon_s,\ \ j=2,3,\ldots,n-1,$$ there exist $K>0$ and $M>0$ depending only on $g$, $\varepsilon_1,\ldots,\varepsilon_{n-1}$ and $\Omega(\alpha_{n-1},\beta_{n-1})$, and not on $z$, such that $$\left|\frac{g'(z)}{g(z)}\right|\leq Kr^M(\sin k(\psi-\alpha))^{-2}$$ and $$\left|\frac{g^{(n)}(z)}{g(z)}\right|\leq Kr^M\left(\sin k(\psi-\alpha)\prod_{j=1}^{n-1}\sin k_{\varepsilon_j}(\psi-\alpha_j)\right)^{-2}$$ for all $z\in \Omega(\alpha_{n-1},\beta_{n-1})$ outside an R-set $D$, where $k=\pi/(\beta-\alpha)$ and $k_{\varepsilon_j}=\pi/(\beta_j-\alpha_j)(j=1,2,\ldots,n-1)$.}\\ \noindent{\bf Lemma 2.3.} (\cite{YL2,ZhengJH2})\ {\it Let $f(z)$ be a transcendental meromorphic function with lower order $\mu(f)<\infty$ and order $0<\sigma(f)\leq\infty$. Then, for any positive number $\lambda$ with $\mu(f)\leq \lambda\leq\sigma(f)$ and any set $H$ of finite measure, there exists a sequence $\{r_n\}$ satisfying \\ (1). $r_n\not\in H, \lim_{n\rightarrow\infty}r_n/n=\infty$;\\ (2).
$\liminf_{n\rightarrow\infty}\log T(r_n,f)/\log r_n\geq\lambda$;\\ (3). $T(r,f)<(1+o(1))(2t/r_n)^{\lambda}T(r_n/2,f), t\in[r_n/n,nr_n]$;\\ (4). $t^{-{\lambda-\varepsilon_n}}T(t,f)\leq 2^{\lambda+1}r_n^{-{\lambda-\varepsilon_n}}T(r_n,f), 1\leq t \leq nr_n, \varepsilon_n=(\log n)^{-2}$.}\\ Such a sequence $\{r_n\}$ is called a sequence of P\'{o}lya peaks of order $\lambda$ outside $H$. The following lemma, which is related to P\'{o}lya peaks, is called the spread relation; see \cite{Baernstein}.\\ \noindent{\bf Lemma 2.4.}\ (\cite{Baernstein}){\it Let $f(z)$ be a transcendental meromorphic function with positive order and finite lower order that has a deficient value $a\in \overline{\mathbb{C}}$. Then, for any sequence of P\'{o}lya peaks $\{r_n\}$ of order $\lambda>0,\ \mu(f)\leq \lambda\leq\sigma(f)$, and any positive function $\Upsilon(r)$ with $\Upsilon(r)\rightarrow0$ as $r\rightarrow\infty$, we have $$\liminf_{r_n\rightarrow\infty}meas D_{\Upsilon}(r_n,a)\geq \min \left\{2\pi,\frac{4}{\lambda}\arcsin\sqrt{\frac{\delta(a,f)}{2}}\right\},$$ where $$D_{\Upsilon}(r,a)=\left\{\theta\in[0,2\pi):\log^+\frac{1}{|f(re^{i\theta})-a|}>\Upsilon(r)T(r,f)\right\},\ a\in\mathbb{C}$$ and $$D_{\Upsilon}(r,\infty)=\left\{\theta\in[0,2\pi):\log^+|f(re^{i\theta})|>\Upsilon(r)T(r,f)\right\}.$$ } \setcounter{section}{2} \section{Proof of Theorems} \noindent{\bf Proof of Theorem 1.1} We know that every non-trivial solution $f$ of the equation is an entire function with infinite lower order. We prove the assertion by contradiction. Assume that \begin{eqnarray}\label{3.1} meas(\Delta(f)\cap\Delta(f^{(k)}))<\nu=\min\{2\pi,\pi/\mu(A_0)\}\end{eqnarray} and so \begin{eqnarray}\label{3.2} \xi:=\nu-meas(\Delta(f)\cap\Delta(f^{(k)}))>0.\end{eqnarray} Applying Lemma 2.3 to $A_0$, we obtain a sequence of P\'{o}lya peaks $\{r_j\}$ of order $\mu(A_0)$ with all $r_j\not\in H$. Since $A_0$ is a transcendental entire function, its Nevanlinna deficiency satisfies $\delta(\infty,A_0)=1$.
By Lemma 2.4, for the P\'{o}lya peaks $\{r_j\}$, we have \begin{eqnarray}\label{3.3} \liminf_{r_j\rightarrow\infty}meas(D_{\Upsilon}(r_j,\infty))\geq\pi/\mu(A_0),\end{eqnarray} where the function $\Upsilon(r)$ is defined by \begin{eqnarray}\label{3.4} \Upsilon(r)=\max \left\{\sqrt{\frac{\log r}{m(r,A_0)}},\sqrt{\frac{m(r,A_i)}{m(r,A_0)}},i=1,2,\ldots,n-1\right\}\end{eqnarray} and $m(r, A_j)$ is the proximity function of $A_j, j=0,1,\ldots,n-1$. Obviously, $\Upsilon(r)$ is positive and $\lim_{r\rightarrow\infty}\Upsilon(r)=0$. For the sake of simplicity, we denote $D_{\Upsilon}(r_j,\infty)$ by $D(r_j)$ in the following. We shall show that there must exist an open interval \begin{eqnarray}\label{3.5} I=(\alpha,\beta)\subset \Delta(f^{(k)})^c,\ \ 0<\beta-\alpha<\nu\end{eqnarray} such that \begin{eqnarray}\label{3.6} \lim_{j\rightarrow\infty} meas( \Delta(f)\cap D(r_j)\cap I)>0, \end{eqnarray} where $\Delta(f^{(k)})^c:=[0,2\pi)\backslash\Delta(f^{(k)})$. In order to achieve this goal, we first prove that \begin{eqnarray}\label{3.7} \lim_{j\rightarrow\infty}meas(D(r_j)\backslash\Delta(f))=0.\end{eqnarray} Otherwise, suppose that there is a subsequence $\{r_{j_k}\}$ such that \begin{eqnarray}\label{3.8} \lim_{k\rightarrow\infty}meas(D(r_{j_k})\backslash \Delta(f))>0,\end{eqnarray} then there exist $\theta_0\in \Delta(f)^c$ and $\eta>0$ satisfying \begin{eqnarray}\label{3.9} \lim_{k\rightarrow\infty}meas((\theta_0-\eta,\theta_0+\eta)\cap (D(r_{j_k})\backslash \Delta(f)))>0.\end{eqnarray} Since $J(f)$ has no radial distribution along $\arg z=\theta_0$, there exists $r_0>0$ such that \begin{eqnarray}\label{3.10} \Omega(r_0,\theta_0-\eta,\theta_0+\eta)\cap J(f)=\emptyset.\end{eqnarray} This implies that there exists an unbounded component $U$ of the Fatou set $F(f)$ such that $\Omega(r_0,\theta_0-\eta,\theta_0+\eta)\subset U$.
Take an unbounded connected set $\Gamma\subset\partial U$; then the mapping $f:\Omega(r_0,\theta_0-\eta,\theta_0+\eta)\rightarrow \mathbb{C}\backslash\Gamma$ is analytic. Since $\mathbb{C}\backslash\Gamma$ is simply connected, for any $a\in \Gamma\backslash\{\infty\}$ we have $C_{\mathbb{C}\backslash\Gamma}(a)\geq 1/2$. Now applying Lemma 2.1 to $f$ in $\Omega(r_0,\theta_0-\eta,\theta_0+\eta)$, for any $\zeta$ with $0<\zeta<\eta$, we have \begin{eqnarray}\label{3.11} |f(z)|=O(|z|^{d_1}),\ \ z\in\Omega(r_0,\theta_0-\eta+\zeta,\theta_0+\eta-\zeta),\ |z|\rightarrow\infty,\end{eqnarray} where $d_1$ is a positive constant. Recalling the definition of $S_{\alpha,\beta}(r,f)$, we immediately get that \begin{eqnarray}\label{3.12} S_{\theta_0-\eta+\zeta,\theta_0+\eta-\zeta}(r,f)=O(1).\end{eqnarray} Therefore, by Lemma 2.2, there exist constants $M>0$ and $K>0$ such that \begin{eqnarray}\label{3.13} \left|\frac{f^{(s)}(z)}{f(z)}\right|\leq Kr^M, \ \ (s=1,2,\ldots,n),\end{eqnarray} for all $z\in\Omega(r_0,\theta_0-\eta+\zeta,\theta_0+\eta-\zeta)$ outside an R-set $H$. Since $\zeta$ can be chosen sufficiently small, from \eqref{3.9} we have \begin{eqnarray}\label{3.14} \lim_{k\rightarrow\infty}meas((\theta_0-\eta+\zeta,\theta_0+\eta-\zeta)\cap D(r_{j_k}))>0.\end{eqnarray} Thus, we can find an infinite sequence $\{r_{j_k}e^{i\theta_{j_k}}\}$ such that for all sufficiently large $k$, \begin{eqnarray}\label{3.15} \log^+|A_0(r_{j_k}e^{i\theta_{j_k}})|>\Upsilon(r_{j_k})T(r_{j_k},A_0)=\Upsilon(r_{j_k})m(r_{j_k},A_0) \end{eqnarray} where $\theta_{j_k}\in F_{j_k}:=(\theta_0-\eta+\zeta,\theta_0+\eta-\zeta)\cap D(r_{j_k})$. Then, for sufficiently large $k$, we have \begin{eqnarray}\label{3.16} \int_{F_{j_k}}\log^+|A_0(r_{j_k}e^{i\theta})|d\theta\geq meas (F_{j_k})\Upsilon(r_{j_k})m(r_{j_k},A_0).
\end{eqnarray} On the other hand, combining \eqref{1.2} and \eqref{3.13} leads to \begin{eqnarray}\label{3.17} \int_{F_{j_k}}\log^+|A_0(r_{j_k}e^{i\theta})|d\theta&\leq& \int_{F_{j_k}}\left(\sum_{s=1}^n \log^+\left|\frac{f^{(s)}(r_{j_k}e^{i\theta})}{f(r_{j_k}e^{i\theta})}\right|+\sum_{i=1}^{n-1}\log^+|A_i(r_{j_k}e^{i\theta})|\right)d\theta+O(1)\nonumber\\ &=& \int_{F_{j_k}}\left(\sum_{i=1}^{n-1}\log^+|A_i(r_{j_k}e^{i\theta})|\right)d\theta+O(\log r_{j_k})\nonumber\\ &\leq&\sum_{i=1}^{n-1}m(r_{j_k},A_i)+O(\log r_{j_k})\nonumber\\ &\leq&c_0\Upsilon^2(r_{j_k})m(r_{j_k},A_0)\end{eqnarray} where $c_0$ is a positive constant. From \eqref{3.16} and \eqref{3.17}, we have \begin{eqnarray}\label{3.18} 0<meas (F_{j_k})\leq c_0\Upsilon(r_{j_k})\end{eqnarray} which contradicts the fact that $\Upsilon(r_{j_k})\rightarrow0$ as $k\rightarrow\infty$. This contradiction implies that \eqref{3.7} is valid. By Theorem B, we know that \begin{eqnarray}\label{3.19} meas \Delta(f)\geq \nu.\end{eqnarray} From Lemma 2.4, we have, for all sufficiently large $j$ and any positive $\varepsilon$, \begin{eqnarray}\label{3.20} meas D(r_j)>\nu-\varepsilon.\end{eqnarray} Combining \eqref{3.7}, \eqref{3.19} and \eqref{3.20}, it follows that, for all sufficiently large $j$, \begin{eqnarray}\label{3.21} meas (\Delta(f)\cap D(r_j))\geq \nu-\xi/4,\end{eqnarray} where $\xi$ is defined in \eqref{3.2}. Since $\Delta(f^{(k)})$ is closed, clearly $\Delta(f^{(k)})^c$ is open, so it consists of at most countably many open intervals.
We can choose finitely many open intervals $I_j\ (j=1,2,\ldots,m)$ satisfying \begin{eqnarray}\label{3.22} I_j\subset \Delta(f^{(k)})^c,\ \ meas(\Delta(f^{(k)})^c\backslash\cup_{i=1}^m I_i)<\xi/4.\end{eqnarray} Since, for sufficiently large $j$, \begin{eqnarray}\label{3.23} & &meas( \Delta(f)\cap D(r_j)\cap(\cup_{i=1}^m I_i))+meas(\Delta(f)\cap D(r_j)\cap \Delta(f^{(k)}))\nonumber\\ &=&meas(\Delta(f)\cap D(r_j)\cap(\Delta(f^{(k)})\cup(\cup_{i=1}^m I_i)))\geq \nu-\xi/2, \end{eqnarray} we have \begin{eqnarray}\label{3.24} meas( \Delta(f)\cap D(r_j)\cap(\cup_{i=1}^m I_i))&\geq& \nu-\xi/2-meas(\Delta(f)\cap D(r_j)\cap \Delta(f^{(k)}))\nonumber\\ &\geq& \nu-\xi/2-meas(\Delta(f)\cap \Delta(f^{(k)}))=\xi/2. \end{eqnarray} Thus, there exists an open interval $I_{i_0}=(\alpha,\beta)\subset \cup_{i=1}^m I_i\subset \Delta(f^{(k)})^c$ such that, for infinitely many sufficiently large $j$, \begin{eqnarray}\label{3.25} meas( \Delta(f)\cap D(r_j)\cap I_{i_0})\geq \frac{\xi}{2m}>0.\end{eqnarray} This proves that \eqref{3.6} holds.\\ From \eqref{3.6}, we know that there are $\widetilde{\theta_0}$ and $\widetilde{\eta}>0$ such that \begin{eqnarray}\label{3.26}(\widetilde{\theta_0}-\widetilde{\eta},\widetilde{\theta_0}+\widetilde{\eta})\subset I\end{eqnarray} and \begin{eqnarray}\label{3.27} \lim_{j\rightarrow\infty} meas( \Delta(f)\cap D(r_j)\cap (\widetilde{\theta_0}-\widetilde{\eta},\widetilde{\theta_0}+\widetilde{\eta}))>0.\end{eqnarray} Then, there exists $\widetilde{r_0}$ such that $\Omega(\widetilde{r_0},\widetilde{\theta_0}-\widetilde{\eta},\widetilde{\theta_0}+\widetilde{\eta})\cap J(f^{(k)})=\emptyset$.
By an argument similar to that between \eqref{3.10} and \eqref{3.11}, for any $\widetilde{\zeta}$ with $0<\widetilde{\zeta}<\widetilde{\eta}$, we have \begin{eqnarray}\label{3.28} |f^{(k)}(z)|=O(|z|^{d_2}),\ \ z\in\Omega(\widetilde{r_0},\widetilde{\theta_0}-\widetilde{\eta}+\widetilde{\zeta},\widetilde{\theta_0}+\widetilde{\eta}-\widetilde{\zeta}),\ |z|\rightarrow\infty,\end{eqnarray} where $d_2$ is a positive constant. By \eqref{3.27} we can choose an unbounded sequence $\{r_je^{i\theta_j}\}$ such that, for all sufficiently large $j$, \begin{eqnarray}\label{3.29} \log^+|A_0(r_je^{i\theta_j})|>\Upsilon(r_j)m(r_j,A_0),\end{eqnarray} where $$\theta_j\in \Delta(f)\cap D(r_j)\cap (\widetilde{\theta_0}-\widetilde{\eta},\widetilde{\theta_0}+\widetilde{\eta}).$$ Fix $r_Je^{i\theta_J}$ and take $r_je^{i\theta_j}\in\{r_je^{i\theta_j}\}$. Take a simple Jordan arc $\gamma$ in $\Omega(\widetilde{r_0},\widetilde{\theta_0}-\widetilde{\eta},\widetilde{\theta_0}+\widetilde{\eta})$ which connects $r_Je^{i\theta_J}$ to $r_Je^{i\theta_j}$ along $|z|=r_J$, and then connects $r_Je^{i\theta_j}$ to $r_je^{i\theta_j}$ along $\arg z=\theta_j$. For any $z\in\gamma$, let $\gamma_z$ denote the part of $\gamma$ connecting $r_Je^{i\theta_J}$ to $z$. Let $L(\gamma)$ be the length of $\gamma$. Clearly, $$L(\gamma)=O(r_j),\ \ j\rightarrow\infty.$$ By \eqref{3.28}, it follows that \begin{eqnarray}|f^{(k-1)}(z)|&\leq& \int_{\gamma_z}|f^{(k)}(w)||dw|+c_k\nonumber \\ &\leq&O(r_j^{d_2}L(\gamma))+c_k\nonumber\\&\leq&O(r_j^{d_2+1}), \ \ j\rightarrow\infty.\nonumber\end{eqnarray} Similarly, we have \begin{eqnarray}\label{3.30}|f^{(k-2)}(z)|&\leq& \int_{\gamma_z}|f^{(k-1)}(w)||dw|+c_{k-1}\nonumber \\ &\leq&O(r_j^{d_2+2}), \ \ j\rightarrow\infty.\nonumber\\ &\vdots&\nonumber\\ |f(z)|&\leq& \int_{\gamma_z}|f'(w)||dw|+c_1\nonumber \\ &\leq&O(r_j^{d_2+k}), \ \ j\rightarrow\infty.\end{eqnarray} where $c_1,c_2,\ldots,c_k$ are constants independent of $j$.
Therefore,\begin{eqnarray}\label{3.31} S_{\widetilde{\theta_0}-\widetilde{\eta}+\widetilde{\zeta},\widetilde{\theta_0}+\widetilde{\eta}-\widetilde{\zeta}}(r,f)=O(1).\end{eqnarray} By Lemma 2.2, we know that \eqref{3.13} also holds for all $z\in\Omega(\widetilde{r_0},\widetilde{\theta_0}-\widetilde{\eta}+\widetilde{\zeta},\widetilde{\theta_0}+\widetilde{\eta}-\widetilde{\zeta})$ outside an R-set $H$. Combining \eqref{3.13} and \eqref{3.29}, and applying arguments similar to \eqref{3.16} and \eqref{3.17}, we can deduce a contradiction. Therefore, it follows that \begin{eqnarray}\label{3.33} meas(\Delta(f)\cap\Delta(f^{(k)}))\geq\min\{2\pi,\pi/\mu(A_0)\}.\end{eqnarray} The proof is complete.\\ \noindent{\bf Proof of Theorem 1.2}\ \ The main idea of this proof comes from that of the proof of Theorem 1.1 in \cite{Huang1}, but it needs some changes. We assume that $meas(\Delta(E^{(k)}))<\min\{2\pi,\pi/\sigma(A)\}$. By a similar argument as in \cite{Huang1}, there exists an angular domain $\Omega(\alpha,\beta)$ such that \begin{eqnarray} (\alpha,\beta)\cap \Delta(E^{(k)})=\emptyset,\ \ \Omega(r_0,\alpha,\beta)\cap J(E^{(k)})=\emptyset \end{eqnarray} for sufficiently large $r_0$. Then by the same method as between \eqref{3.10} and \eqref{3.11}, we have \begin{eqnarray} |E^{(k)}(z)|=O(|z|^{d}),\ \ z\in\Omega(r_0,\alpha,\beta),\ |z|\rightarrow\infty,\end{eqnarray} where $d$ is a positive constant. Take a simple Jordan arc $\gamma$ connecting the points $z_0$ and $z$ and satisfying $\gamma\subset \Omega(r_0,\alpha,\beta)$. Applying the method used in \eqref{3.30}, we obtain \begin{eqnarray} |E(z)|=O(|z|^{d+k}),\ \ z\in\Omega(r_0,\alpha,\beta),\ |z|\rightarrow\infty.\end{eqnarray} Therefore, Theorem 1.2 can be proved word for word following the proof of Theorem 1.1 in \cite{Huang1}.
\section{Introduction} For a Riemann integrable function $f:[0,1)^s \to \mathbb{R}$, we consider the integral $\int_{[0,1)^s} f(\mathbf{x}) \textrm{d} \mathbf{x}$ and its approximation by quasi-Monte Carlo integration: \begin{eqnarray} \label{eqn:monte carlo} \int_{[0,1)^s} f(\mathbf{x}) \textrm{d} \mathbf{x} \approx \frac{1}{N}\sum_{k = 0}^{N-1} f(\mathbf{x}_k), \end{eqnarray} where the point set $P :=\{ \mathbf{x}_0, \ldots, \mathbf{x}_{N-1} \} \subset [0,1)^s$ is chosen deterministically. A typical quasi-Monte Carlo point set $P$ is a low-discrepancy point set based on the $t$-value of a $(t, m, s)$-net. Thus, the $t$-value is probably the most important criterion of quasi-Monte Carlo point sets \cite{MR3038697, MR2683394, MR1172997}. Matsumoto, Saito, and Matoba \cite{MSM} recently proposed the Walsh figure of merit (WAFOM) as another criterion of quasi-Monte Carlo point sets \textcolor{blue}{to ensure} higher order convergence for function classes of very high smoothness. WAFOM is also quickly computable, and this efficiency enables us to search for quasi-Monte Carlo point sets using a random search. \textcolor{blue}{By analogy with} coding theory, where a random search is often easier than an explicit mathematical construction (e.g., the success of low-density parity-check codes), Matsumoto et al.\ also searched for point sets at random by minimizing WAFOM. In the same spirit, Harase and Ohori \cite{HO} searched for low-WAFOM point sets with extensibility (i.e., the number of points may be increased while the existing points are retained). In numerical experiments, these point sets are significantly effective for low-dimensional smooth functions. However, as shown later (in Remark~\ref{Remark: t-value}), low-WAFOM point sets based on a simple random search do not always have small $t$-values in the framework of $(t, m, s)$-nets, and such point sets are sometimes inferior to classical $(t, m, s)$-nets for non-smooth functions.
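As a side illustration (ours, not part of the original text), the equal-weight rule in (\ref{eqn:monte carlo}) is simply an average of $f$ over a deterministic point set:

```python
import numpy as np

def qmc_estimate(f, points):
    """Equal-weight QMC cubature: (1/N) * sum_k f(x_k) over a
    deterministically chosen point set, as in the approximation above."""
    return float(np.mean([f(x) for x in points]))

# toy usage: integrate f(x, y) = x * y over [0,1)^2 with a 4x4 midpoint grid
grid = [(i / 4 + 1 / 8, j / 4 + 1 / 8) for i in range(4) for j in range(4)]
estimate = qmc_estimate(lambda x: x[0] * x[1], grid)  # exact value is 1/4
```

The midpoint grid here is only a placeholder for the low-discrepancy sets discussed below.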
In this paper, we search for point sets whose $t$-value and WAFOM are both small, so as to be effective for a wider range of function classes, i.e., point sets combining the advantages of good $(t, m, s)$-nets and low-WAFOM point sets. For this, we fix suitable digital $(t, m, s)$-nets (e.g., Sobol' or Niederreiter--Xing nets) in advance and apply random linear scrambling with non-singular lower triangular matrices that preserves \textcolor{blue}{the} $t$-values. \textcolor{blue}{The key to our approach} is to select good point sets from the scrambled digital $(t, m, s)$-nets in terms of WAFOM. Our numerical experiments show that the obtained point sets improve the rates of convergence for smooth functions and are robust for non-smooth functions. The rest of this paper is organized as follows. In Section~2, we briefly recall the definitions of digital $(t, m, s)$-nets and WAFOM. Section~3 is devoted to our main result: a search for low-WAFOM point sets with small $t$-values using linear scrambling. In Section~4, we \textcolor{blue}{compare our new point sets with} other quasi-Monte Carlo point sets by using the Genz test function package \cite{Genz1984,Genz1987}. Section~5 concludes the paper with some directions for future research. \section{Notations} \subsection{Digital $(t, m, s)$-nets} \textcolor{blue}{We briefly recall the definition of digital $(t, m, s)$-nets. Throughout this paper, we consider only the digital $(t, m, s)$-nets in base $2$.} Let $s$ and $n$ be positive integers. Let $\mathbb{F}_2:=\{ 0,1\}$ be the two-element field, and $V := \mathbb{F}_2^{s \times n}$ the set of $s \times n$ matrices. Let us denote $\mathbf{x} \in V$ by $\mathbf{x}:=(x_{i, j})_{1 \leq i \leq s, 1 \leq j \leq n}$ with $x_{i, j} \in \mathbb{F}_2$. We identify $\mathbf{x} \in V$ with the $s$-dimensional point \[ (\sum_{j=1}^{n} x_{1, j}2^{-j}+2^{-n-1}, \ldots, \sum_{j=1}^{n} x_{s, j}2^{-j}+2^{-n-1}) \in [0,1)^s.\] Note that $n$ corresponds to the precision. 
Note also that the points are shifted by $2^{-n-1}$ because we will later consider WAFOM (see \cite[Remark 2.2]{MSM}). To construct $P:=\{ \mathbf{x}_0, \mathbf{x}_1,\ldots, \mathbf{x}_{2^m-1} \} \subset [0,1)^s$, we often use the following construction scheme called the {\it digital net}. \textcolor{blue}{ \begin{definition}[Digital net] Consider $n \times m$ matrices $C_1, \ldots, C_s \in \mathbb{F}_2^{n \times m}$. For $h = 0, 1, \ldots, 2^m-1$, let $h = \sum_{l = 0}^{m-1} h_l 2^l$ with $h_l \in \mathbb{F}_2$ be the expansion of $h$ in base $2$. We set $\mathbf{h} := {}^t(h_0, \ldots, h_{m-1}) \in \mathbb{F}_2^m$, where ${}^t$ represents the transpose. We set $\mathbf{x}_h := {}^t(C_1 \mathbf{h},\ldots, C_s \mathbf{h}) \in V$. Then, the point set $P:= \{ \mathbf{x}_0, \ldots, \mathbf{x}_{2^m-1} \}$ is called a {\it digital net} over $\mathbb{F}_2$ and $C_1, \ldots, C_s$ are the {\it generating matrices} of the digital net $P$. \end{definition}} Throughout this paper, we assume that $P$ is a digital net. Note that $P \subset V$ is an $\mathbb{F}_2$-linear subspace of $V$. \begin{definition}[$(t, m, s)$-net] Let $s \geq 1$, \textcolor{blue}{and let} $0 \leq t \leq m$ be integers. Then, a point set $P$ consisting of $2^m$ points in $[0, 1)^s$ is called a {\it $(t, m, s)$-net} (in base $2$) if every subinterval $J = \prod_{i = 1}^s[a_i 2^{-d_i}, (a_i +1) 2^{-d_i})$ in $[0, 1)^s$ with integers $d_i \geq 0$ and $0 \leq a_i < 2^{d_i}$ for $1 \leq i \leq s$ and of volume $2^{t-m}$ contains exactly $2^t$ points of $P$. \end{definition} \textcolor{blue}{ \begin{definition}[$t$-value] If $t$ is the smallest value such that $P$ is a $(t,m, s)$-net, then we call it the $t$-value (or exact quality parameter). \end{definition} \begin{definition}[Digital $(t, m, s)$-net] If $P$ is a digital net and a $(t, m, s)$-net, it is called a {\it digital $(t, m, s)$-net}. \end{definition} } In terms of this criterion, $P$ is well distributed if the $t$-value is small.
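To make the construction concrete, the following Python sketch (our illustration, not code from the paper) generates the $2^m$ points of a digital net from given generating matrices, including the shift by $2^{-n-1}$; the identity generating matrix in the usage line is an arbitrary example, which reproduces the first points of the van der Corput sequence.

```python
import numpy as np

def digital_net_points(C, m, n):
    """Points of a digital net over F_2.

    C : list of s generating matrices, each an (n x m) 0/1 array.
    Returns a (2^m, s) array of points in [0,1)^s; each coordinate is
    sum_j x_{i,j} 2^{-j} + 2^{-n-1}, matching the identification above.
    """
    s = len(C)
    pts = np.empty((2 ** m, s))
    for h in range(2 ** m):
        # digit vector h = (h_0, ..., h_{m-1}) with h = sum_l h_l 2^l
        hvec = np.array([(h >> l) & 1 for l in range(m)])
        for i in range(s):
            digits = C[i].dot(hvec) % 2  # x_{i,1}, ..., x_{i,n}
            pts[h, i] = sum(int(d) * 2.0 ** -(j + 1)
                            for j, d in enumerate(digits)) + 2.0 ** -(n + 1)
    return pts

# usage: s = 1 with an identity generating matrix -> shifted van der Corput points
P = digital_net_points([np.eye(3, dtype=int)], m=3, n=3)
```

With $C_1$ the identity, the least significant bit of $h$ becomes the leading digit of the point, which is exactly the radical-inverse construction.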
In this framework, from the Koksma--Hlawka inequality and estimation of star-discrepancies, the upper bound on the absolute error of (\ref{eqn:monte carlo}) is $O(2^t(\log N)^{s-1}/N)$ (see \cite{MR2683394, MR1172997} for details). There are many studies on \textcolor{blue}{the} generating matrices of digital $(t, m, s)$-nets, e.g., Sobol' nets \cite{MR0219238}, Niederreiter nets \cite{MR1172997}, and Niederreiter--Xing nets \cite{MR1358190}. There are also some algorithms for computing \textcolor{blue}{the $t$-value} of digital nets \cite{MR3085113, MR1881672}. \subsection{WAFOM} Matsumoto et al.\ \cite{MSM} proposed WAFOM as a computable criterion of quasi-Monte Carlo point sets constructed by digital nets $P$. WAFOM has the \textcolor{blue}{potential} to ensure higher order convergence than $O(N^{-1})$ for function classes of very high smoothness (so-called {\it $n$-smooth functions}). In a recent talk, Yoshiki \cite{Yoshiki2014} modified the definition of WAFOM, resulting in a more explicit upper bound on integration errors \textcolor{blue}{(see also Section~7 of \cite{MO2014})}. Thus, throughout this paper, we adopt \textcolor{blue}{his new result as our WAFOM value with some abuse of notation}. \textcolor{blue}{ \begin{definition}[WAFOM] Let $P \subset V$ be a digital net. For $A = (a_{i, j}), B = (b_{i, j})\in V$, we define the inner product as $\langle A ,B \rangle := \sum_{1 \leq i \leq s, 1 \leq j \leq n} a_{i, j} b_{i,j} \in \mathbb{F}_2$. For an $\mathbb{F}_2$-linear subspace $P$, let us define its perpendicular space by $P^{\perp} := \{ A \in V \ | \ \langle B, A \rangle = 0 \mbox{ for all } B \in P \}$. The WAFOM (Walsh figure of merit) of $P$ is defined by \[ \mbox{WAFOM}(P) := \sum_{A \in P^{\perp} \backslash \{ \mathbf{0} \} } 2^{-\mu'(A)}, \] where we set the weight \begin{eqnarray} \label{eqn:Yoshiki_weight} \mu'(A) := \sum_{1 \leq i \leq s, 1 \leq j \leq n} (j + 1) \times a_{i,j} \quad \mbox{ for } A = (a_{i,j}) \in P^{\perp}.
\end{eqnarray} \end{definition} In the original definition of WAFOM, Matsumoto et al.\ \cite{MSM} considered the weight $\mu(A) := \sum_{1 \leq i \leq s, 1 \leq j \leq n} j \times a_{i, j}$ instead of (\ref{eqn:Yoshiki_weight}). (The weight $\mu$ was originally proposed by Dick \cite{MR2346374, MR2391005} and is now called the Dick weight.) Further, by replacing $c(A) := 2^{-\mu(A)}$ by $c(A) := 2^{-\mu'(A)}$ in Theorem~4.1 and Corollary~4.2 of \cite{MSM} and their proofs, we obtain the following efficiently computable formula: \begin{eqnarray} \label{eqn:WAFOM} \mbox{WAFOM}(P) = \frac{1}{|P|} \sum_{\mathbf{x} \in P} \left\{ \prod_{1 \leq i \leq s} \prod_{1 \leq j \leq n} (1+(-1)^{x_{i, j}} 2^{-(j+1)}) -1 \right\}. \end{eqnarray} Thus, this criterion is computable in $O(nsN)$ arithmetic operations, where $N := |P|$, and is computable in $O(sN)$ steps when using look-up tables (see \cite{HO}). } Next, we recall the $n$-digit discretization $f_n$ of $f$ by following \cite[Section~2]{MSM}. For $\mathbf{x}=(x_{i, j})_{1 \leq i \leq s, 1 \leq j \leq n} \in V$, we define the $s$-dimensional subinterval $\mathbf{I}_{\mathbf{x}} \subset [0,1)^s$ by \[ \mathbf{I}_\mathbf{x} := [\sum_{j=1}^{n} x_{1, j}2^{-j}, \sum_{j=1}^{n} x_{1, j}2^{-j} + 2^{-n}) \times \cdots \times [\sum_{j=1}^{n} x_{s, j}2^{-j}, \sum_{j=1}^{n} x_{s, j}2^{-j} + 2^{-n}).\] For a Riemann integrable function $f: [0,1)^s \to \mathbb{R}$, we define its $n$-digit discretization $f_n: V \to \mathbb{R}$ by $f_n(\mathbf{x}) := (1/{\rm Vol}(\mathbf{I}_{\mathbf{x}})) \int_{\mathbf{I}_{\mathbf{x}}} f(\mathbf{y}) \textrm{d} \mathbf{y}$. This is the average value of $f$ over $\mathbf{I}_{\mathbf{x}}$. When $f$ is Lipschitz continuous, it can be shown \cite{MSM} that the discretization error between $f$ and $f_n$ on $\mathbf{I}_{\mathbf{x}}$ is negligible if $n$ is sufficiently large (e.g., when $n \geq 30$).
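As an aside, the product formula (\ref{eqn:WAFOM}) is straightforward to implement; the following Python sketch (our own illustration, not the authors' code; the look-up-table speedup is omitted) evaluates it directly from the digit arrays $x_{i,j}$:

```python
import numpy as np

def wafom(bits):
    """WAFOM(P) via the product formula with Yoshiki's weight mu'.

    bits : 0/1 array of shape (N, s, n) holding the digits x_{i,j}
           of every point of the digital net P.
    """
    N, s, n = bits.shape
    j = np.arange(1, n + 1)  # digit index j = 1, ..., n
    # factor 1 + (-1)^{x_{i,j}} 2^{-(j+1)} for each digit of each point
    factors = 1.0 + np.where(bits == 0, 1.0, -1.0) * 2.0 ** -(j + 1)
    return float(np.mean(np.prod(factors, axis=(1, 2)) - 1.0))
```

As a sanity check, for the trivial net $P=\{\mathbf{0}\}$ with $s=1$, $n=2$, the perpendicular space is all of $V$, and both the defining sum $2^{-2}+2^{-3}+2^{-5}$ and the product formula give $0.40625$.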
Thus, for such $f: [0, 1)^s \to \mathbb{R}$ and large $n$, we may consider $({1}/|P|) \sum_{\mathbf{x} \in P} f(\mathbf{x}) \approx (1/{|P|}) \sum_{\mathbf{x} \in P} f_n(\mathbf{x})$. Here, we assume that $f$ is an $n$-smooth function (see \cite{MR2391005} and \cite[Ch.~14.6]{MR2683394} for the definition). Yoshiki \cite{Yoshiki2014} gave the following Koksma--Hlawka type inequality by improving Dick's inequality (\cite[Section~4.1]{MR2743889} and \cite[(3.7)]{MSM}): \begin{eqnarray} \label{ineq:yoshiki} \left| \int_{[0,1)^s} f(\mathbf{x}) \textrm{d} \mathbf{x}-\frac{1}{|P|} \sum_{\mathbf{x} \in P} f_n(\mathbf{x}) \right| \leq \sup_{0 \leq N_1, \ldots, N_s \leq n} || f^{(N_1, \ldots, N_s)}||_{\infty} \cdot \mbox{WAFOM}(P), \end{eqnarray} where $||f||_{\infty}$ is the infinity norm of $f$ and $f^{(N_1, \ldots, N_s)} := \partial^{N_1 + \cdots + N_s}f / \partial x_1^{N_1} \cdots \partial x_s^{N_s}$. \textcolor{blue}{ \begin{remark} More precisely, Yoshiki \cite{Yoshiki2014} proved an upper bound on the Walsh coefficient of wavenumber $\mathbf{k} :=(k_1, \ldots, k_s)$ as follows: \[ | \hat{f} (\mathbf{k}) | \leq 2^{-\mu' (\mathbf{k})} || f^{(N_1, \ldots, N_s)} ||_{\infty}, \] where $f$ is an $n$-smooth function and $k_i = \sum_{j = 1}^{N_i} 2^{a_{i, j}}$ such that $a_{i, 1} > \ldots > a_{i, N_i}$ for each $i$. By an argument similar to that used in the proof of Theorem~3.4 and formula (3.5) in \cite{MSM}, the discretized upper bound (\ref{ineq:yoshiki}) is obtained. \end{remark} } \begin{remark} Following the discussions in \cite{MR3145585, Ohori2015, Suzuki2014, Yoshiki}, the best (i.e., smallest) value of $\log({\rm WAFOM}(P))$ is $O(-m^2/s)$ for $P$ with $|P| = 2^m$. Thus, WAFOM can be used to search \textcolor{blue}{for} a digital net $P$ with higher order convergence than $O(N^{-1})$ for $n$-smooth functions.
\end{remark} \section{Scrambling methods} \label{Sec:scrambling} In previous works, Matsumoto et al.\ \cite{MSM} and Harase and Ohori \cite{HO} searched for low-WAFOM point sets using only WAFOM as a criterion. In fact, the point sets obtained in these ways do not always have small $t$-values as $(t, m, s)$-nets. In this section, we take into account the \textcolor{blue}{$t$-value}, and search for low-WAFOM point sets with small $t$-values. For this, we consider the following transformation, known as {\it linear scrambling} \textcolor{blue}{\cite{MR1659004}, which is a subclass of (non-linear) {\it scrambling} with general permutations proposed by Owen \cite{MR1445791}.} \begin{proposition} Let $C_1, \ldots, C_s \in \mathbb{F}_2^{n \times m}$ be generating matrices of a digital $(t, m, s)$-net. Let $L_1, \ldots, L_s \in \mathbb{F}_2^{n \times n}$ be non-singular lower triangular matrices. Then, the digital net with generating matrices $L_1C_1, \ldots, L_sC_s \in \mathbb{F}_2^{n \times m}$ is also a $(t, m, s)$-net. \end{proposition} \textcolor{blue}{The proof is easily obtained from Theorem~4.28 in \cite{MR1172997} or Theorem~4.52 in \cite{MR2683394}.} Linear scrambling preserves the \textcolor{blue}{$t$-value}, so we cannot distinguish whether \textcolor{blue}{the} scrambled nets are good using \textcolor{blue}{the $t$-value itself}. Here, WAFOM can be applied to \textcolor{blue}{assess the} linearly scrambled digital $(t, m, s)$-nets. Our algorithm proceeds as follows: \begin{enumerate} \item Fix a digital $(t, m, s)$-net with a small $t$-value in advance. \item Generate $L_1, \ldots, L_s$ at random $M$ times, and construct $P$ from $L_1C_1, \ldots, \textcolor{blue}{L_s C_s}$. \item Select the point set $P$ with the smallest $\mbox{WAFOM}(P)$. \end{enumerate} In this case, note that the point sets $P$ are not extensible. 
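The three steps above can be sketched in a few lines of Python (an illustration under our own conventions; here `wafom_of` stands for any evaluation of $\mbox{WAFOM}(P)$ from the scrambled generating matrices, e.g., via formula (\ref{eqn:WAFOM})):

```python
import numpy as np

def random_lower_triangular(n, rng):
    """Random non-singular lower triangular matrix over F_2:
    free entries strictly below the diagonal, ones on the diagonal."""
    L = np.tril(rng.integers(0, 2, size=(n, n)), k=-1)
    np.fill_diagonal(L, 1)
    return L

def scramble_search(C, n, M, wafom_of, seed=0):
    """Fix generating matrices C of a digital (t,m,s)-net, scramble them
    M times with random L_i, and keep the net with the smallest WAFOM."""
    rng = np.random.default_rng(seed)
    best, best_val = None, np.inf
    for _ in range(M):
        scrambled = [(random_lower_triangular(n, rng) @ Ci) % 2 for Ci in C]
        val = wafom_of(scrambled)
        if val < best_val:
            best, best_val = scrambled, val
    return best, best_val
```

Since each $L_i$ has unit diagonal, it is automatically non-singular over $\mathbb{F}_2$, so the $t$-value of the scrambled net is preserved by the proposition above.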
As an example, we set $(s , n, M) = (5, 32, 100000)$ and compare the WAFOM values of the following point sets $P$: \begin{enumerate} \renewcommand{\labelenumi}{(\alph{enumi})} \item {\bf Niederreiter--Xing} nets \cite{MR1358190} implemented by Pirsic \cite{MR1958872}. \item {\bf Sobol'} nets with better two-dimensional projections \cite{MR2429482}. \item {\bf Naive} low-WAFOM point sets based on a random search \cite{HO}. \item {\bf Scrambled Niederreiter--Xing} nets given by the above procedure. \item {\bf Scrambled Sobol'} nets given by the above procedure. \end{enumerate} Figure~\ref{fig:WAFOM} plots the WAFOM values. This shows that (c)--(e) have similar values. \textcolor{blue}{The WAFOM values of the Sobol' nets (without linear scrambling) are rather large. Roughly speaking, the slope of the Sobol' nets is $O(N^{-1})$. In most cases, we can expect to improve their efficiency by using linear scrambling. We explain these phenomena intuitively in terms of WAFOM. In (\ref{eqn:WAFOM}), ${\rm WAFOM}(P)$ increases if the proportion of ${x_{i, j}} = 0$ is large. (Conversely, ${\rm WAFOM}(P)$ decreases if the proportion of ${x_{i, j}} = 1$ is large.) The generating matrices $C_1, \ldots, C_s \in \mathbb{F}_2^{n \times m}$ of the Sobol' nets are non-singular upper triangular, and hence the first $2^m$ points always have ${x_{i, j}} = 0$ for $m < j \leq n$. In other words, these least significant bits of the first $2^m$ output points with $n$-digit precision are all zero. As a result, ${\rm WAFOM}(P)$ tends to be large in (\ref{eqn:WAFOM}). When we apply linear scrambling to the Sobol' nets, these least significant bits change from $0$ to $1$ (at random) and the WAFOM values decrease. Hence, the rate of convergence is expected to improve. On the other hand, the generating matrices of the Niederreiter--Xing nets are (almost) dense, and the WAFOM values are already small, so we obtain higher order convergence rates using non-scrambled Niederreiter--Xing nets.
However, by selecting suitable scrambling matrices, further improvements can be obtained for large values of $m$. We conduct additional numerical experiments on these topics in Remark~\ref{remark:high_wafom}. } \begin{figure} \centering \includegraphics[width=11.5cm]{wafom_comp5_Y32_col.eps} \caption{WAFOM (in $\log_{10}$ scale) for $s = 5$ and $m = 1, \ldots, 25$.} \label{fig:WAFOM} \end{figure} \begin{remark} \label{Remark: t-value} Low-WAFOM point sets based on a simple random search do not always possess small $t$-values, particularly for larger $s$ and $m$. Table~\ref{table:t-values} gives a summary of the $t$-values of the above point sets for $s = 5$. As described in \cite{HO}, the naive low-WAFOM point sets were found by inductively determining the column vectors of $C_1, \ldots, C_s$ in terms of WAFOM, thus allowing extensibility. Because we did not consider the \textcolor{blue}{$t$-values} in advance, the $t$-values are rather large. Matsumoto--Saito--Matoba (non-extensible) sequential generators \cite{MSM} exhibit a similar tendency. Nevertheless, such low-WAFOM point sets are effective for smooth functions (see the next section for details). \end{remark} \begin{remark} In two pioneering papers, Dick \cite{MR2346374, MR2391005} proposed {\it higher order digital nets} and {\it sequences} that achieve a convergence rate of $O(N^{-\alpha} (\log N )^{\alpha s})$ for $\alpha$-smooth functions ($\alpha \geq 1$) by considering the decay of the Walsh coefficients. For this, he described an explicit construction for generating matrices, called {\it interlacing}. \textcolor{blue}{First}, we prepare $s \alpha$ generating matrices $C_1, \ldots, C_{s \alpha} \in \mathbb{F}_2^{m \times m}$ \textcolor{blue}{for} a digital $(t, m, s\alpha)$-net in advance. These are converted to the matrices $C_1^{(\alpha)}, \ldots, C_s^{(\alpha)} \in \mathbb{F}_2^{{m \alpha} \times m}$ by rearranging the row vectors of $\alpha$ successive generating matrices.
Then, the digital net with $C_1^{(\alpha)}, \ldots, C_s^{(\alpha)}$ achieves a convergence rate of $O(N^{-\alpha} (\log N )^{\alpha s})$. From \cite[Proposition~15.8]{MR2683394}, such a digital net is a classical digital $(t', m, s)$-net with $t' \leq t$. However, when $\alpha$ or $s$ is large, the exact quality parameter $t'$ might become large compared with the best possible $t$-value in the framework of classical $(t, m, s)$-nets. The last two rows of Table~\ref{table:t-values} give the $t$-values of interlaced Niederreiter--Xing nets for $\alpha = 2$ and $3$. Our scrambling approach has the advantages that \textcolor{blue}{the exact quality parameter $t$ does} not increase and higher order convergence can be expected. \end{remark} \begin{table} \scalebox{0.7}{ % \begin{tabular}{l||c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c} \hline $m$ & $1$ & $2$ & $3$ & $4$ & $5$ & $6$ & $7$ & $8$ & $9$ & $10$ & $11$ & $12$ & $13$ & $14$ & $15$ & $16$ & $17$ & $18$ & $19$ & $20$ & $21$ & $22$ & $23$ & $24$ & $25$ \\ \hline \hline {Sobol'} & $0$ & $1$ & $2$ & $2$ & $2$ & $3$ & $3$ & $3$ & $3$ & $3$ & $4$ & $4$ & $5$ & $4$ & $4$ & $5$ & $4$ & $5$ & $5$ & $5$ & $5$ & $5$ & $5$ & $5$ & $5$ \\ \hline {Niederreiter--Xing} & $1$ & $2$ & $1$ & $2$ & $2$ & $2$ & $2$ & $2$ & $2$ & $2$ & $2$ & $2$ & $2$ & $2$ & $2$ & $2$ & $2$ & $2$ & $2$ & $2$ & $2$ & $2$ & $2$ & $2$ & $2$ \\ \hline {Naive} & $0$ & $1$ & $2$ & $1$ & $2$ & $3$ & $4$ & $4$ & $4$ & $5$ & $6$ & $7$ & $5$ & $6$ & $6$ & $6$ & $7$ & $7$ & $8$ & $9$ & $9$ & $10$ & $8$ & $9$ & $9$ \\ \hline {Interlacing $(\alpha = 2)$} & $1$ & $2$ & $3$ & $4$ & $4$ & $3$ & $4$ & $4$ & $4$ & $5$ & $6$ & $6$ & $7$ & $6$ & $5$ & $6$ & $7$ & $7$ & $6$ & $6$ & $6$ & $7$ & $6$ & $6$ & $7$ \\ \hline {Interlacing $(\alpha = 3)$} & $1$ & $2$ & $3$ & $2$ & $3$ & $3$ & $4$ & $5$ & $5$ & $5$ & $6$ & $7$ & $6$ & $6$ & $7$ & $8$ & $9$ & $9$ & $7$ & $8$ & $8$ & $9$ & $8$ & $8$ & $8$ \\ \hline \end{tabular} } \caption{The exact quality parameters $t$
for $m = 1, \ldots, 25$ and $s = 5$.} \label{table:t-values} \end{table} \begin{remark} Goda, Ohori, Suzuki, and Yoshiki \cite{GOSY} proposed a variant of WAFOM from the \textcolor{blue}{viewpoint} of the mean square error for digitally shifted digital nets. They defined the criterion by replacing $2$ in (\ref{eqn:WAFOM}) with $4$. Thus, it is similarly applicable to our approach. \end{remark} \section{Numerical results} \label{Sec:integration} To evaluate the point sets (a)--(e) described in Section~\ref{Sec:scrambling}, we applied the Genz test package \cite{Genz1984,Genz1987}. This has been used in many studies (e.g., \cite{MR1417864, MR1958872,mSLO94a, MR1849865}), and was also analyzed from a theoretical perspective in \cite{MR1963917}. Specifically, we investigate six test functions defined over $[0,1)^s$: \begin{eqnarray*} \begin{array}{ll} \mbox{Oscillatory:} & f_1(\mathbf{x}) = \cos (2 \pi u_1 + \sum_{i = 1}^{s} a_i x_i),\\ \mbox{Product Peak:} & f_2(\mathbf{x}) = \prod_{i = 1}^{s} \left[ a_i^{-2} + (x_i - u_i)^2 \right]^{-1},\\ \mbox{Corner Peak:} & f_3(\mathbf{x}) = (1 + \sum_{i = 1}^{s} a_i x_i)^{-(s+1)},\\ \mbox{Gaussian:} & f_4(\mathbf{x}) = \exp (-\sum_{i=1}^s a_i^2 (x_i - u_i)^2),\\ \mbox{Continuous:} & f_5(\mathbf{x}) = \exp (-\sum_{i=1}^s a_i |x_i - u_i|),\\ \mbox{Discontinuous:} & f_6(\mathbf{x}) = \left\{ \begin{array}{ll} 0, & \mbox{if $x_1 > u_1$ or $x_2 > u_2$},\\ \exp (\sum_{i = 1}^s a_i x_i), & \mbox{otherwise.} \end{array} \right. \end{array} \end{eqnarray*} These functions involve two sets of parameters: the difficulty parameters $\mathbf{a} = (a_1, \ldots, a_s)$ and the shift parameters $\mathbf{u} = (u_1, \ldots, u_s)$. We generate $\mathbf{a}$ and $\mathbf{u}$ as uniform random vectors in $[0, 1]^s$, and renormalize $\mathbf{a}$ to satisfy the condition \begin{eqnarray*} \label{eqn:Genz condition} \sum_{i = 1}^s a_i = h_j, \end{eqnarray*} where $h_j$ depends on the family $f_j$.
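For concreteness, a minimal Python sketch of two of these families and the renormalization step might look as follows (the function names and the plain Monte Carlo point set standing in for a digital net are our own illustrative choices):

```python
import math
import random

def oscillatory(xs, a, u1):          # f_1
    return math.cos(2 * math.pi * u1 + sum(ai * xi for ai, xi in zip(a, xs)))

def corner_peak(xs, a):              # f_3
    return (1 + sum(ai * xi for ai, xi in zip(a, xs))) ** (-(len(xs) + 1))

def renormalize(a, h):
    """Rescale the difficulty vector so that sum_i a_i = h."""
    total = sum(a)
    return [ai * h / total for ai in a]

random.seed(1)
s = 5
a = renormalize([random.random() for _ in range(s)], h=0.925)  # h_3, for f_3
u = [random.random() for _ in range(s)]
assert abs(sum(a) - 0.925) < 1e-12

# QMC estimator I_N(f) = (1/|P|) sum_{x in P} f(x); a plain random point
# set stands in here for the digital nets compared in the experiments.
P = [[random.random() for _ in range(s)] for _ in range(2 ** 10)]
I_N = sum(corner_peak(xs, a) for xs in P) / len(P)
assert 0 < I_N < 1   # f_3 takes values in (0, 1] on [0, 1)^s
I_N_f1 = sum(oscillatory(xs, a, u[0]) for xs in P) / len(P)
assert -1 <= I_N_f1 <= 1
```

Replacing `P` by one of the digital nets (a)--(e) and repeating over random $(\mathbf{a}, \mathbf{u})$ reproduces the experimental setup described next.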
By varying $\mathbf{a}$ and $\mathbf{u}$, we formed quantitative examples based on 20 random samples for each function class. For any sample size $|P|=2^m$ and any function $f_j$, we computed the median of the relative errors (in $\log_{10}$ scale) \[ \log_{10} \frac{|I(f_j) - I_N(f_j)|}{|I(f_j)|} \] over the varying parameters, where $I(f_j) := \int_{[0,1)^s} f_j \textcolor{blue}{(\mathbf{x})} \textrm{d} \mathbf{x}$, $N := |P|$, and $I_N(f_j) := ({1}/{|P|}) \sum_{\mathbf{x} \in P} \textcolor{blue}{f_j(\mathbf{x})}$. Figure~\ref{fig:Genz} shows a summary of the medians of the relative errors for $s = 5$, $m = 1, \ldots, 23$, and $(h_1, \ldots, h_6) = (4.5, 3.625, 0.925, 3.515, \textcolor{blue}{10.2}, 2.15)$, which are \textcolor{blue}{the settings used} in \cite{HO}. \begin{figure} \includegraphics[width=8cm]{f1_dim5_Y32_col.eps} \includegraphics[width=8cm]{f2_dim5_Y32_col.eps} \includegraphics[width=8cm]{f3_dim5_Y32_col.eps} \includegraphics[width=8cm]{f4_dim5_Y32_col.eps} \includegraphics[width=8cm]{f5_dim5_Y32_col_2.eps} \includegraphics[width=8cm]{f6_dim5_Y32_col.eps} \caption{Median of relative errors for Genz functions.} \label{fig:Genz} \end{figure} For $f_1$ and $f_3$, the low-WAFOM point sets are clearly superior to the Niederreiter--Xing nets. In particular, the scrambled Sobol' nets represent a drastic improvement over the original Sobol' nets. Note that the slopes are similar to those in Figure~\ref{fig:WAFOM}. Additionally, for $f_2$ and $f_4$, the low-WAFOM point sets are competitive with the Niederreiter--Xing nets. For these smooth functions, the WAFOM criterion seems to work very well. In the case of non-smooth functions, the situation \textcolor{blue}{is} different. For the continuous but non-differentiable function $f_5$, the naive low-WAFOM point sets are inferior to the Niederreiter--Xing nets. However, when we take into account the \textcolor{blue}{$t$-value} of $(t, m, s)$-nets, the low-WAFOM point sets preserve the rate of convergence.
For $f_6$, the naive low-WAFOM point sets are also inferior to the other point sets with small $t$-values. These results imply that the \textcolor{blue}{$t$-value} is important for non-smooth functions. \textcolor{blue}{Finally, we note that, as the dimension $s$ increases, the WAFOM values tend to have only slight differences (see Section~4.2 of \cite{MO2014} for details). In this case, the rates of convergence weaken, but the point sets obtained in this paper seem to be at worst comparable to the original non-scrambled Niederreiter--Xing or Sobol' nets, especially for highly smooth functions. (To save space, we omit the figures.)} \begin{remark} \label{remark:high_wafom} \textcolor{blue}{ There are some experimental reports that random linear scrambling improves the rates of convergence in numerical integration. To investigate the effect of WAFOM and scrambling, we conduct further experiments comparing scrambled nets with small WAFOM and those with large WAFOM. For this purpose, using an algorithm similar to the one in Section~\ref{Sec:scrambling}, we searched for linearly scrambled digital $(t, m, s)$-nets $P$ with small $t$-values but with the largest ${\rm WAFOM}(P)$: \begin{enumerate} \renewcommand{\labelenumi}{(\alph{enumi})} \setcounter{enumi}{5} \item {\bf Scrambled Niederreiter--Xing (worst)} nets with the largest ${\rm WAFOM}(P)$. \item {\bf Scrambled Sobol' (worst)} nets with the largest ${\rm WAFOM}(P)$. \end{enumerate} Figure~\ref{fig:Genz2} plots the WAFOM values and the medians of relative errors of the Genz function packages for the point sets (a), (b), and (d)--(g) in the same settings as in Figure~\ref{fig:Genz}.
{\bf Scrambled Niederreiter--Xing (best)} and {\bf Scrambled Sobol' (best)} are copies of (d) and (e) in Figure~\ref{fig:Genz} (with the smallest ${\rm WAFOM}(P)$), respectively.} \textcolor{blue}{ We can summarize our experimental results as follows: \begin{itemize} \item The largest WAFOM values of the scrambled Sobol' nets are comparable to or slightly better than the WAFOM values of the non-scrambled Sobol' nets. Thus, most scrambled Sobol' nets have WAFOM values that are smaller than those of the non-scrambled Sobol' nets (as pointed out in Section~\ref{Sec:scrambling}), and hence we can expect that the simple application of ``random'' linear scrambling improves the rate of convergence for the Sobol' nets from the viewpoint of WAFOM. In Figure~\ref{fig:Genz2}, the scrambled Sobol' nets with the largest WAFOM are better than the non-scrambled Sobol' nets for all the smooth functions, especially $f_2$ and $f_4$, but the scrambled Sobol' nets with the smallest WAFOM seem to be the best choices. \item The WAFOM values of the Niederreiter--Xing nets are already small, and the WAFOM values of the scrambled Niederreiter--Xing nets given by inappropriate lower triangular matrices become larger than those of the non-scrambled Niederreiter--Xing nets. Indeed, the scrambled Niederreiter--Xing nets with the largest WAFOM are worse than the non-scrambled Niederreiter--Xing nets for all the smooth Genz functions. \end{itemize} Overall, WAFOM is a good criterion for ensuring higher order convergence for highly smooth functions.
} \begin{figure} \begin{center} \includegraphics[width=7cm]{high_wafom_comp5_Y32_col.eps}\\ \end{center} \includegraphics[width=7cm]{f1_dim5_Y32_HW_col.eps} \includegraphics[width=7cm]{f2_dim5_Y32_HW_col.eps} \includegraphics[width=7cm]{f3_dim5_Y32_HW_col.eps} \includegraphics[width=7cm]{f4_dim5_Y32_HW_col.eps} \includegraphics[width=7cm]{f5_dim5_Y32_HW_col_2.eps} \includegraphics[width=7cm]{f6_dim5_Y32_HW_col.eps} \caption{\textcolor{blue}{Comparison of scrambled digital nets with small WAFOM and those with large WAFOM for $s = 5$. The top figure shows WAFOM values (in $\log_{10}$ scale) for $m = 1, \ldots, 23$. The other figures show the median of relative errors for the Genz functions for $m = 1, \ldots, 23$. }} \label{fig:Genz2} \end{figure} \end{remark} \section{Conclusions and future directions} In this paper, we have searched for point sets whose $t$-value and WAFOM are both small so as to be effective for a wider range of function classes. For this, we fixed digital $(t, m, s)$-nets in advance and applied random linear scrambling. The key technique was \textcolor{blue}{the selection} of linearly scrambled $(t, m, s)$-nets in terms of WAFOM. Numerical experiments showed that the point sets obtained by our method have improved convergence rates for smooth functions and are robust for non-smooth functions. Finally, we discuss some directions for future research. In our approach, $m$ was fixed and the extensibility was discarded. We also attempted to search for extensible point sets, but the WAFOM values tended to be worse than the current ones for large $m$. Thus, an efficient search algorithm for extensible scrambling matrices is one area of future work. As another direction, the quasi-Monte Carlo method is an important tool in computational finance (e.g., \cite{MR1999614, MR2519835}). However, many applications encounter integrands with boundary singularities. 
Such integrands are not included in a suitable class of functions, i.e., $n$-smooth functions, so we might not expect higher order convergence from the simple application of low-WAFOM point sets. There will probably be a need for some kind of transformation to force the integrand to be included in a suitable class of functions, such as periodization in lattice rules. The study of WAFOM is still in its infancy, so a number of unsolved problems remain. \subsection*{Acknowledgments} \textcolor{blue}{ The author is thankful to the anonymous referees for their valuable comments and suggestions. The author also wishes to express his gratitude to Professor Makoto Matsumoto at Hiroshima University and Professor Syoiti Ninomiya at Tokyo Institute of Technology for continuous encouragement and many helpful comments. The author was partially supported by Grant-in-Aid for JSPS Fellows 24$\cdot$7985, Young Scientists (B) 80610576, and Scientific Research (B) 70231602.} \bibliographystyle{model1b-num-names}
\section{Introduction} After the recent success of the LHC Run I experiment, and the discovery of the Higgs boson in particular, the LHC Run II is pushing the limits of higher-order calculations in QCD even further. In this context, analytical calculations play a crucial role as a background for the numerical methods and phenomenological analyses in QCD. In this paper, we focus on the analytical calculation of phase-space master integrals for $1 \to n$ decay processes with massless particles in the final state. Such decays, within electron-positron annihilation reactions, have given us much information about the properties of quarks and gluons and the nature of their interactions as described by QCD. Moreover, they will play an outstanding role in further precision studies of QCD at upcoming $e^+ e^-$ colliders at even higher energies. The classical example is jet production in $e^+e^-$ annihilation, which can be used to extract values of the strong coupling constant $\alpha_s$ from the three-jet rate and related event shape observables. In the past decade, next-to-next-to-leading order (NNLO), i.e., $\Order(\as^3)$, contributions to the three-jet rate from the process $\gamma^* \to 3 \text{ partons}$ in $e^+ e^-$ annihilation were calculated \cite{FGK89,BDK97,GGGKR01,MUW02}. Further improvements to these calculations at N$^3$LO inevitably require analytical expressions for the integrals we consider in this work. For example, three-loop splitting functions are an indispensable ingredient for numerical calculations of N$^3$LO contributions to the three-jet rate from the $\gamma^* \to$ 6 partons process. Splitting functions for initial-state radiation, i.e., space-like, are known exactly at NNLO \mbox{\cite{MVV04a,MVV04b}}. In contrast, for final-state radiation, i.e., time-like, they are known at NNLO only approximately \cite{MMV06,MV08,AMV11}.
Although those uncertainties are numerically irrelevant for phenomenological applications, e.g., for the evolution of fragmentation functions \cite{ARS15}, the exact results are still needed, as mentioned before, for performing numerical integration in various subtraction schemes when local counterterms have to be integrated \cite{GGG05,STD06,Cza10}. At the same time, huge progress has been made in the development of tools and methods for higher-order calculations in field theory, and in perturbative QCD in particular. Integration-by-parts (IBP) reduction of Feynman integrals~\cite{CT81,Tka81}, together with differential equations for master integrals~\cite{Smi04}, proved, by state-of-the-art calculations, e.g. \cite{GMTW14,MSZ14,ADDHM15,AHHHKS15,BBDMS15}, to form a powerful framework for calculating high-order Feynman diagrams. Although this approach is usually applied to virtual integrals at the level of amplitudes, it can be used with the same success for the analytical calculation of real phase-space integrals at the level of matrix elements, where standard approaches are usually applied instead, i.e., parametrizing the phase space explicitly and proceeding with Feynman-parameter integration and similar methods~\cite{RN96,GGH03}, or alternatively working in Mellin space with recursion relations \cite{MV99} or a system of difference equations \cite{MM06}. During the past few years, the method of differential equations has become very popular due to the fact that a good choice of the basis of master integrals leads to significant simplifications of the differential equations \cite{Henn13}. Although, in general, finding an appropriate basis is not easy, an approach based on the Moser algorithm \cite{Mos60} was discussed in~\cite{Henn14}, which allows one to reduce the system at one singular point, but not globally.
A global extension of the Moser algorithm, which shows how to adjust the transformations in such a way that they do not spoil the behavior at any other point, was presented in \cite{Lee14}. It allows for a systematic simplification of differential equations. Unfortunately, no computer implementation of these methods is yet available for public use, which would be very desirable for automating the process. In this work we propose an alternative method for calculating phase-space master integrals from differential equations and show how to fix the boundary conditions. The algorithm is self-consistent, in the sense that if all the prerequisites are fulfilled, the proof reduces simply to verifying that the proposed solutions satisfy the initial equations. The main advantage of the proposed approach is that it is relatively simple, can be easily implemented as computer code, and at the same time gives a complete solution for the masters to any power in $\eps$. Although it may not be as general as other methods, it can be successfully applied to the calculation of splitting functions, and is not limited to that case. As an example of practical use, we perform a detailed calculation of the master integrals for the NLO contribution to time-like splitting functions and discuss possible extensions to NNLO accuracy. The paper is organized as follows: in Section \ref{sec:2} we introduce the notation and show how to calculate splitting functions from the $e^+e^-$ annihilation process. In Section \ref{sec:deq} we formulate a solution for the system of differential equations for phase-space master integrals of the $1\to n$ topology, derived from IBP reduction rules in $x$-space. In Section \ref{sec:4} we calculate master integrals for the NLO splitting functions of the $1 \to 3$ and $1 \to 4$ topologies. Finally, we discuss properties of the solutions obtained with our approach and its possible extensions to higher orders.
\section{\boldmath Splitting functions in QCD} \label{sec:2} Let us briefly review the main facts on splitting functions in the collinear factorization formalism of QCD, mainly for notation consistency. For a more detailed review we refer the reader to \cite{MM06,GM15}. Splitting functions govern the collinear evolution in hard scattering processes with hadrons in the initial (space-like) or final (time-like) state. For processes with identified hadrons in the final state the parton-to-hadron transition is described by the parton fragmentation distributions $D_{f}^{h}(x,q^2)$, where $x$ represents the fractional momentum of the final-state parton $f$ transferred to the outgoing hadron $h$ and $q^2 \ge 0$ is a time-like hard scale. The scale dependence of the fragmentation distributions is controlled by the so-called time-like splitting functions $P^{T}_{ba}(x)$\footnote{Further in the text we omit the superscript $T$ and assume all splitting functions to be time-like.}, and is given by \begin{eqnarray} \label{eq:Devol} {d \over d \ln q^2} \; D_{a}^{h} (x,q^2) & = & \int_x^1 {dz \over z} \,P^{T}_{ba} \left( z, \alpha_s(q^2) \right) D_{b}^{h} \Big(\, {x \over z},\, q^2 \Big) \; , \end{eqnarray} where the summation runs over the number $n_f$ of effectively massless quark flavors and the gluon, $b = q_i,\bar{q}_i, g$ for $i = 1, \ldots, n_f$. The splitting functions $P_{ba}$ can be computed in perturbation theory in powers of the strong coupling $\alpha_s$, \begin{eqnarray} \label{eq:PTexp} P_{ba} \left( x,\alpha_s (q^2) \right) & = & a_s \, P_{ba}^{(0)}(x) + a_s^{2}\, P_{ba}^{(1)}(x) + a_s^{3}\, P_{ba}^{(2)}(x) + \ldots \, , \end{eqnarray} where we normalize the expansion parameter as $a_s = \alpha_s/ (4\pi)$. 
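A useful property of the convolution on the right-hand side of \eqref{eq:Devol} is that it factorizes into an ordinary product under Mellin moments $\int_0^1 dx \, x^{N-1}$. The following \texttt{sympy} sketch verifies this for the toy choices $P(z) = z(1-z)$ and $D(x) = x^2$ (purely illustrative stand-ins of our own, not actual splitting functions or fragmentation distributions):

```python
import sympy as sp

x, z = sp.symbols('x z', positive=True)
P = z * (1 - z)     # toy stand-in for a splitting function P(z)
D = x ** 2          # toy stand-in for a fragmentation distribution D(x)

# Mellin convolution appearing on the rhs of the evolution equation:
# (P (*) D)(x) = Int_x^1 dz/z  P(z) D(x/z)
conv = sp.integrate(P * D.subs(x, x / z) / z, (z, x, 1))

# Under Mellin moments Int_0^1 dx x^(N-1) the convolution factorizes
# into the product M[P](N) * M[D](N); check the first few moments.
for N in (1, 2, 3):
    lhs = sp.integrate(x ** (N - 1) * conv, (x, 0, 1))
    rhs = (sp.integrate(z ** (N - 1) * P, (z, 0, 1))
           * sp.integrate(x ** (N - 1) * D, (x, 0, 1)))
    assert sp.simplify(lhs - rhs) == 0
```

This moment-space factorization is also why Mellin moments are a natural tool for fixing boundary conditions later on.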
As discussed at length in \cite{GM15}, splitting functions can be extracted using the mass factorization formalism from the electron-positron annihilation processes \begin{equation} \label{eq:process} e^+ + e^- \to \gamma^*(q) \to p(k_0) + \langle\text{$n$ partons}\rangle \end{equation} and \begin{equation} \label{eq:process-phi} e^+ + e^- \to \phi^*(q) \to p(k_0) + \langle\text{$n$ partons}\rangle \end{equation} with photon ($\gamma$) exchange and Higgs ($\phi$) boson exchange in the effective theory and a tagged parton $p = q,{\bar q},g$ with momentum $k_0$. For the photon-exchange process~(\ref{eq:process}), following the notation in \cite{NW93}, the unpolarized differential cross-section in $m=4-2\eps$ dimensions is given by \begin{equation} \frac{1}{\sigma_\text{tot}} \frac{\D^2 \sigma}{\D x \, \D \cos\theta} = \frac{3}{8}(1+\cos^2\theta)\, \Ft(x,\eps) + \frac{3}{4}\sin^2\theta\, \Fl(x,\eps) + \frac{3}{4}\cos\theta\, \Fa(x,\eps), \end{equation} where $\theta$ denotes the angle between the beam axis and the parton momentum $k_0$. The scaling variable $x$ is defined as \begin{align} x=\frac{2\sd{q}{k_0}}{q^2}, && q^2 = s > 0, && 0<x\le1\, . \end{align} To demonstrate the method for calculating master integrals, described in detail in Section \ref{sec:deq}, let us consider the time-like $q \to g$ splitting function at NLO. It can be written as \begin{equation} \label{eq:pqg} P_{qg}^{(2)}(x) = \delta(1-x)\;P_{qg}^{(0 \times 2)} + P_{qg}^{(1 \times 1)}(x) + P_{qg}^{(2 \times 0)}(x), \end{equation} where $P_{qg}^{(n_r \times n_v)}$ denotes the contribution from diagrams with $n_r$ real and $n_v$ virtual legs, as illustrated in figure~\ref{fig:pqg-nlo}.
\begin{figure}[h] \centering \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=\textwidth]{img/g4r1v10.eps} \caption{real-virtual $P_{qg}^{(1 \times 1)}$} \label{fig:amp2a} \end{subfigure} ~ \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=\textwidth]{img/g4r2v00.eps} \caption{real-real $P_{qg}^{(2 \times 0)}$} \label{fig:amp2b} \end{subfigure} \caption{Contributions to the time-like splitting functions at NLO.} \label{fig:pqg-nlo} \end{figure} In particular, we are interested in the $1/\eps$ contribution to the transverse fragmentation function, as discussed in~\cite{GM15}, \begin{equation} \label{eq:Ft} {\cal F}_T^{(2)}(x,\eps) = \frac{2}{2-m} \left( \frac{\sd{q}{k_0}}{q^2} g^{\mu\nu} + \frac{k_0^\mu k_0^\nu}{\sd{q}{k_0}} \right) W_{\mu\nu}^{(2)} , \end{equation} where the hadronic tensor $W_{\mu\nu}^{(2)}(x,\eps)$ for the real-virtual and real-real cases becomes \begin{equation} W_{\mu\nu}^{(1\times 1)}(x,\eps) = \frac{x^{m-3}}{4\pi} \int \D \mathrm{PS}(2) \, \D l \; M_\mu (3) \: M_\nu^* (3) \end{equation} and \begin{equation} W_{\mu\nu}^{(2\times 0)}(x,\eps) = \frac{x^{m-3}}{4\pi} \int \D \mathrm{PS}(3) \; M_\mu (4) \: M_\nu^* (4) , \end{equation} where $M(3)$ and $M(4)$ are the amplitudes for the processes depicted in figures~\ref{fig:amp2a} and \ref{fig:amp2b}, respectively, $l$ is the loop momentum, and $\Dps{n}$ denotes an $n$-particle phase-space integral \begin{equation} \label{eq:psn} \int \D \mathrm{PS}(n) = \int \prod_{i=0}^{n} \D^m\! k_i \; \delta^+(k_i^2) \; \delta\Big(x-\frac{2\sd{q}{k_0}}{q^2}\Big) \; \delta\Big(q-\sum_{j=0}^{n} k_j\Big) . \end{equation} In Section~\ref{sec:deq} we provide a method for calculating this kind of integral, with some detailed examples. Below, however, we would like to present a general plan of the calculation: \begin{enumerate} \item Generate the amplitudes in figure~\ref{fig:pqg-nlo} with \QGRAF~\cite{Nog91} and construct from them the fragmentation function $\mathcal{F}_T(x,\eps)$ using \FORM~\cite{Ver00}.
\item Generate integration-by-parts rules for phase-space integrals with the \LiteRed package~\cite{Lee12,Lee13}. \item Find master integrals by solving differential equations in $x$-space, as described in the next section. \end{enumerate} \section{Master integrals from differential equations} \label{sec:deq} We consider a homogeneous system of differential equations, which in the most general case takes the form \begin{equation} \label{eq:f} \frac{\partial f_i}{\partial x} = \sum_{j=1}^{n} a_{ij}(x,\eps) \, f_j(x,\eps), \end{equation} where the coefficients $a_{ij}(x,\eps)$ (or the $n\times n$ matrix $A(x,\eps)$) are known, $f_i(x,\eps)$ are unknown functions, and $\eps$ is an infinitesimally small parameter (playing the role of a dimensional regulator in $m=4-2\eps$ dimensions). Assuming that the coefficients $a_{ij}(x,\eps)$ are rational functions of $\eps$, without loss of generality they can be written in the form \begin{equation} \label{eq:a_eps} A(x,\eps) = \sum_{k=r_\eps}^\infty \eps^k \, A^{(k)}(x), \end{equation} where $r_\eps$ is an integer (possibly negative), which we call the {\em $\eps$-rank} of the matrix $A(x,\eps)$. On the other hand, we restrict the matrix $A(x,\eps)$ to have the form \begin{equation} \label{eq:a_x} A(x,\eps) = \sum_{i} \frac{A_i(x,\eps)}{(x-x_i)^{1+p_i}}, \end{equation} where $i$ runs over some finite set, $p_i$ is the {\em Poincar\'e rank} of $A_i(x,\eps)$ at the singular point $x_i$, and $A_i(x,\eps)$ is regular at $x=x_i$, i.e., polynomial. Such a form is imposed exclusively for a practical reason, since calculations of the splitting functions are bound to the case $x_i \in \{-1,0,1\}$, which is exactly the alphabet of Harmonic Polylogarithms (HPLs)~\cite{RV99}.
In the case of a more complex structure of the denominators in the expansion \eqref{eq:a_x}, the same arguments can be extended to the more general case of Generalized Harmonic Polylogarithms (GHPLs), introduced in~\cite{AB04}, which maintain the structure and properties of HPLs~\cite{BDV10,ABS13}. Keeping all the above considerations in mind, we proceed to provide a solution of~\eqref{eq:f} as an $\eps$-series. Taking into account the recursive definition of (G)HPLs, we show that such a series can be found to any order in~$\eps$ at a low computational price. \subsection{Solutions for $\eps$-rank $>0$} \label{sec:eps>0} We are looking for the solution of the system \eqref{eq:f} in the form \begin{equation} f_i(x,\eps) = \sum_{k=0}^\infty \eps^k f_i^{(k)}(x). \end{equation} Keeping in mind the expansion \eqref{eq:a_eps}, it is easy to show that the expansion coefficients calculated from the recursive formula \begin{equation} \label{eq:f_sol} f_i^{(k)}(x) = c_i^{(k)} + \sum_{m=1}^k \int \D x \, a_{ij}^{(m)}(x) f_j^{(k-m)}(x) \end{equation} lead to the desired solution, where $c_i^{(k)}$ are integration constants determined from boundary conditions, as described in Section~\ref{sec:boundary}. \subsection{Solutions for $\eps$-rank $=0$} There is no general solution for a system with $\eps\text{-rank}=0$; however, in some special cases it is possible to write one down. In this paper we consider {\em weakly coupled} systems, i.e., systems for which $a_{ij}^{(0)}(x)$ is a triangular matrix: \begin{equation} a_{ij}^{(0)}(x) = 0, \quad\text{ for }\quad i<j. \end{equation} In such a case it is possible to choose a new basis so that the new system has $\eps$-rank $>0$ and can be solved using the method of Section~\ref{sec:eps>0}.
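As a minimal illustration of the recursion \eqref{eq:f_sol}, consider a single equation of $\eps$-rank $1$, $\partial_x f = (\eps/x)\, f$, whose exact solution with the boundary condition $f(1) = 1$ is $x^\eps$ (a toy example of our own). The following \texttt{sympy} sketch reconstructs its $\eps$-expansion order by order:

```python
import sympy as sp

x, eps = sp.symbols('x epsilon')
a1 = 1 / x      # a(x, eps) = eps * a1(x): one equation of eps-rank 1
K = 4           # expansion depth in eps

# Recursion (f_sol) for a single function: f^(k) = c_k + Int a1 f^(k-1) dx,
# with constants fixed by the boundary values f^(0)(1) = 1, f^(k)(1) = 0.
f = [sp.Integer(1)]
for k in range(1, K + 1):
    F = sp.integrate(a1 * f[-1], x)
    f.append(sp.expand(F - F.subs(x, 1)))

series = sum(eps ** k * fk for k, fk in enumerate(f))

# compare with the eps-expansion of the exact solution x^eps = exp(eps log x)
exact = sp.series(sp.exp(eps * sp.log(x)), eps, 0, K + 1).removeO()
assert sp.expand(exact - series) == 0
assert f[2] == sp.log(x) ** 2 / 2   # the expected iterated logarithms
```

Each order only requires one integration of the previous order, which is exactly what makes the recursion cheap when the integrands stay within the (G)HPL alphabet.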
In the remaining part of this section we provide a procedure for accomplishing this task, which consists of finding new bases such that: \begin{enumerate}[label=\roman*)] \item the diagonal elements of $a_{ij}^{(0)}(x)$ are zero, i.e., $a_{ij}^{(0)}(x)=0$ for $i=j$; and \item the off-diagonal elements of $a_{ij}^{(0)}(x)$ are zero, i.e., $a_{ij}^{(0)}(x)=0$ for $i>j$. \end{enumerate} \subsubsection{Zero-diagonal form} \label{sec:321} It is easy to verify that the system of differential equations for a new basis defined as \begin{equation} \label{eq:g-f} g_i(x,\eps) = b_{ii}(x,\eps) f_i(x,\eps), \end{equation} where \begin{equation} b_{ii}(x,\eps) = \exp\left(-\int \D x \; a_{ii}(x,\eps)\right), \end{equation} contains zeroes as diagonal elements and has the new form \begin{equation} \label{eq:g} \frac{\partial g_i}{\partial x} = \sum_{j=1}^{n} \tilde{a}_{ij}(x,\eps) \, g_j(x,\eps), \quad \text{where} \quad \tilde{a}_{ij}(x,\eps) = \frac{b_{ii}(x,\eps)}{b_{jj}(x,\eps)} \, a_{ij}(x,\eps) \;\;\; (i \neq j), \qquad \tilde{a}_{ii}(x,\eps) = 0. \end{equation} \subsubsection{Zero-triangular form} \label{sec:322} Next, following the same strategy, we find a new basis $h_i(x,\eps)$ which leads to the zero-triangular form of the differential equations: \begin{equation} \label{eq:h_ij} h_i(x,\eps) = g_i(x,\eps) + \sum_{j=1}^{i-1} b_{ij}(x,\eps) g_j(x,\eps), \end{equation} where \begin{equation} \label{eq:h_b_ij} b_{ij}(x,\eps) = - \int \D x \bigg( \tilde{a}_{ij}^{(0)}(x) + \sum_{k=j+1}^{i-1} b_{ik}(x,\eps) \tilde{a}_{kj}^{(0)}(x) \bigg). \end{equation} The complete form of the new system is rather complex, and it is of no practical use to write it down here. However, it can easily be obtained from \eqref{eq:h_ij} once the coefficients \eqref{eq:h_b_ij} are explicitly calculated. Let us show that such a choice indeed provides the desired zero-triangular system of equations.
Taking the derivative of \eqref{eq:h_ij}, keeping in mind \eqref{eq:g} and \eqref{eq:h_b_ij}, and neglecting higher-order terms in $\eps$, we obtain \begin{equation} \label{eq:part_h} \frac{\partial h_i}{\partial x} = \sum_{j=1}^{i-1} \bigg( \tilde{a}_{ij}^{(0)} g_j - \bigg( \tilde{a}_{ij}^{(0)} g_j + \sum_{k=j+1}^{i-1} b_{ik} \tilde{a}_{kj}^{(0)} g_j \bigg) +\sum_{k=1}^{j-1} b_{ij} \tilde{a}_{jk}^{(0)} g_k \bigg) . \end{equation} It is easy to check, by carefully switching summation variables in one of the nested sums, that the right-hand side of \eqref{eq:part_h} vanishes. At first sight, it may seem that the nested integrals in \eqref{eq:h_b_ij} are too complicated for practical calculations. In fact, they are very easy to compute, taking into account the recursive nature of (G)HPLs, as discussed earlier in this section. For our examples, discussed in the next section, we have used the \texttt{HPL} package \cite{Maitre05}. \subsection{Solutions for $\eps$-rank $<0$} \label{sec:33} As a rule, when one chooses a basis of master integrals as provided directly by an IBP rules generator, like \FIRE \cite{Smi08}, \Reduze \cite{MS12}, or \LiteRed \cite{Lee12,Lee13}, the system \eqref{eq:f} has a negative $\eps$-rank. In this situation we cannot proceed with the procedure described above. To overcome this issue it is usually enough to adjust the $\eps^{n}$ factors in the masters; see, for example, \eqref{eq:v_i_1} in Appendix~\ref{app:a}. To get a hint on how to choose $n$, we analyze Mellin moments of the corresponding masters, which leads to several possibilities: \begin{enumerate} \item In the presence of factors $x^{-1+a\eps}(1-x)^{-1+b\eps}$ we choose $n = r_\eps^{(1)} - 1$, where $r_\eps^{(i)}$ is the $\eps$-rank of the $i^\text{th}$ Mellin moment. The reason is that the logarithmic singularity in $x$ is canceled by the Mellin moment, while the second one, in $1-x$, introduces an additional $\eps$ pole.
For an illustration, see the masters $V_6, R_7, R_8$ in the Appendices. \item In the presence of factors $x^{-1+a\eps}$ we choose $n = r_\eps^{(0)} - 1$. \item Otherwise we choose $n = r_\eps^{(0)}$. \end{enumerate} \subsection{Boundary conditions} \label{sec:boundary} The final step of the method is to find the integration constants $c_i^{(k)}$ emerging in \eqref{eq:f_sol}. On the one hand, in the case of phase-space integrals we can do that by calculating Mellin moments of the solution \eqref{eq:f_sol}. On the other hand, the same moments can be taken from the literature or calculated directly by performing the integration over the entire $n$-particle phase space, i.e., \begin{equation} \int \prod_{i=0}^{n} \D^m\! k_i \; \delta^+(k_i^2) \; \delta\Big(q-\sum_{j=0}^{n} k_j\Big). \end{equation} As in the case of the phase-space integrals with the $x$-space projection \eqref{eq:psn}, we can generate IBP rules for the inclusive integrals as well. This allows us to reduce the set of inclusive masters that have to be calculated explicitly. Another simplification is related to the Mellin moments, which can be extracted from difference equations. These equations in turn can be derived from the differential equations \eqref{eq:f}; hence only one Mellin moment needs to be computed for each inclusive master. \section{\boldmath Master integrals for NLO splitting functions} \label{sec:4} Finally, we demonstrate the practical application of the method described in the previous section. We choose to calculate the two-loop contributions to the time-like splitting function $P_{qg}^{(2)}(x)$, since its three-loop contribution is still not known exactly; however, it will be possible to obtain it by a future extension of this example to NNLO. We follow the plan described at the end of Section~\ref{sec:2}. After the IBP reduction, done with the help of \LiteRed, we obtain 6 real-virtual (figure~\ref{fig:1}) and 8 real-real (figure~\ref{fig:2}) masters for the contributions depicted in figure~\ref{fig:pqg-nlo}.
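Before listing the explicit masters, the two changes of basis from Sections~\ref{sec:321} and~\ref{sec:322} can be checked symbolically on a toy weakly coupled $2 \times 2$ system of $\eps$-rank $0$ (our own example). Differentiating $g_i = b_{ii} f_i$ shows that the off-diagonal coupling transforms as $\tilde a_{21} = (b_{22}/b_{11})\, a_{21}$, which is what the sketch uses:

```python
import sympy as sp

x, eps = sp.symbols('x epsilon')
f1, f2 = sp.Function('f1')(x), sp.Function('f2')(x)

# toy weakly coupled system of eps-rank 0 (lower triangular at order eps^0):
#   f1' = (1/x) f1 ,   f2' = 1/(1-x) f1 + (eps/x) f2
a11, a21, a22 = 1 / x, 1 / (1 - x), eps / x

# Step 1 (zero-diagonal form): g_i = b_ii f_i, b_ii = exp(-Int a_ii dx)
b11 = sp.exp(-sp.integrate(a11, x))        # = 1/x
b22 = sp.exp(-sp.integrate(a22, x))        # = x^(-eps)
at21 = sp.simplify(b22 / b11 * a21)        # coupling of the new system

# Step 2 (zero-triangular form): h2 = g2 + b21 g1, b21 = -Int at21^(0) dx
b21 = -sp.integrate(at21.subs(eps, 0), x)
h2 = b22 * f2 + b21 * b11 * f1

# differentiate h2, insert the original equations, and check that the
# eps^0 part of the resulting system vanishes (positive eps-rank)
rules = {sp.Derivative(f1, x): a11 * f1,
         sp.Derivative(f2, x): a21 * f1 + a22 * f2}
h2p = sp.diff(h2, x).subs(rules)
assert sp.simplify(sp.expand(h2p.subs(eps, 0))) == 0
```

After both steps the $\eps^0$ part of the system vanishes, so the recursive solution for positive $\eps$-rank applies.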
\subsection{Real-virtual contribution} We define real-virtual master integrals depicted in figure~\ref{fig:1} as \begin{equation} V_i(x,\eps) = \{a_1,\ldots,a_n\} = \int \D \mathrm{PS}(2) \, \D l \frac{1}{D_{a_1} \ldots D_{a_n}} , \end{equation} where the phase-space measure of the real final state is defined by \eqref{eq:psn}, $l$ is the loop momentum, and the denominators $D_j$ are defined in \eqref{eq:Drv}. \begin{equation} \begin{aligned} D_1 & = l^2 & D_2 & = (l+k_1-q)^2 & D_3 & = (l-q)^2 & D_4 & = (l+k_1+k_2)^2 \\ D_5 & = (l-k_2)^2 & D_6 & = (l+k_1+k_2-q)^2 & D_7 & = (k_2-q)^2. \end{aligned} \label{eq:Drv} \end{equation} \begin{figure}[h] \centering \begin{subfigure}[b]{0.250\textwidth} \includegraphics[width=\textwidth]{img/v1.eps} \caption*{$V_1 = \{1,2\}$} \end{subfigure} ~ \begin{subfigure}[b]{0.250\textwidth} \includegraphics[width=\textwidth]{img/v2.eps} \caption*{$V_2 = \{1,3\}$} \end{subfigure} ~ \begin{subfigure}[b]{0.250\textwidth} \includegraphics[width=\textwidth]{img/v3.eps} \caption*{$V_3 = \{1,4\}$} \end{subfigure} \vspace{4mm} \begin{subfigure}[b]{0.250\textwidth} \includegraphics[width=\textwidth]{img/v4.eps} \caption*{$V_4 = \{1,2,3\}$} \end{subfigure} ~ \begin{subfigure}[b]{0.250\textwidth} \includegraphics[width=\textwidth]{img/v5.eps} \caption*{$V_5 = \{1,2,3,5\}$} \end{subfigure} ~ \begin{subfigure}[b]{0.250\textwidth} \includegraphics[width=\textwidth]{img/v6.eps} \caption*{$V_6 = \{1,2,3,6,7\}$} \end{subfigure} \vspace{4mm} \caption{Master integrals for the real-virtual NLO contribution to the time-like splitting function.} \label{fig:1} \end{figure} {\bf Step 1.} In order to obtain a system of differential equations with non-negative $\eps$-rank we choose $\eps^n$ factors as described in Section~\ref{sec:33} (see \eqref{eq:v_i_1}). The resulting system is given by \eqref{eq:mrv}.
{\bf Step 2.} We change the basis to obtain a zero-diagonal system, as described in Section~\ref{sec:321}: \begin{equation} \label{eq:v_i_1} \begin{aligned} V_1 & = \eps x^{-1+2\eps} (1-x)^{2\eps} V_1 \\ V_2 & = \eps x^{-1+3\eps} (1-x)^\eps V_2 \\ V_3 & = \eps x^{-1+2\eps} (1-x)^\eps V_3 \\ V_4 & = \eps^2 (1-x)^{2\eps} V_4 \\ V_5 & = \eps^3 x^{1+2\eps} (1-x)^{1+2\eps} V_5 \\ V_6 & = \eps^3 x^{1+4\eps} V_6 \end{aligned} \end{equation} {\bf Step 3.} We make the last change of the basis in order to obtain a zero-triangular system as described in Section~\ref{sec:322}: \begin{equation} \begin{aligned} V_1 & = \eps\, x^{-1+2\eps} (1-x)^{2\eps} V_1 \\ V_2 & = \eps\, x^{-1+3\eps} (1-x)^\eps V_2 \\ V_3 & = \eps\, x^{-1+2\eps} (1-x)^\eps V_3 \\ V_4 & = - \eps\, x^{-1+3\eps} (1-x)^{\eps} \HPL_{1} V_2 + \eps\, x^{-1+2\eps} (1-x)^{\eps} \HPL_{1} V_3 + \eps^2 (1-x)^{2\eps} V_4 \\ V_5 & = \eps^3 x^{1+2\eps} (1-x)^{1+2\eps} V_5 \\ V_6 & = \eps^3 x^{1+4\eps} V_6 \end{aligned} \end{equation} {\bf Step 4.} We solve the resulting equations with the help of \eqref{eq:f_sol} as described in Section~\ref{sec:eps>0} and return to the initial basis.\\ {\bf Step 5.} We find the final result by fixing boundary conditions using Mellin moments given in Appendix~\ref{app:a}. \subsection{Real-real contribution} By analogy with the real-virtual case we proceed with the real-real contribution with final results given in Appendix~\ref{app:b}. We define master integrals depicted in figure~\ref{fig:2} as \begin{equation} R_i(x,\eps) = \{a_1,\ldots,a_n\} = \int \D \mathrm{PS}(3) \frac{1}{D_{a_1} \ldots D_{a_n}} , \end{equation} where denominators $D_j$ are defined in \eqref{eq:Drr}. 
\begin{equation} \label{eq:Drr} \begin{aligned} D_1 & = k_1^2 & D_2 & = (q-k_1)^2 & D_3 & = (q-k_2)^2 & D_4 & = (q-k_1-k_3)^2 \\ D_5 & = (q-k_2-k_3)^2 & D_6 & = (k_2+k_3)^2 & D_7 & = (k_1+k_3)^2 \end{aligned} \end{equation} \begin{figure}[h] \centering \begin{subfigure}[b]{0.200\textwidth} \includegraphics[width=\textwidth]{img/r1.eps} \caption*{$R_1 = \{\}$} \end{subfigure} ~ \begin{subfigure}[b]{0.200\textwidth} \includegraphics[width=\textwidth]{img/r2.eps} \caption*{$R_2 = \{2\}$} \end{subfigure} ~ \begin{subfigure}[b]{0.200\textwidth} \includegraphics[width=\textwidth]{img/r3.eps} \caption*{$R_3 = \{3,6\}$} \end{subfigure} ~ \begin{subfigure}[b]{0.200\textwidth} \includegraphics[width=\textwidth]{img/r4.eps} \caption*{$R_4 = \{4,5,6,7\}$} \end{subfigure} \vspace{4mm} \begin{subfigure}[b]{0.200\textwidth} \includegraphics[width=\textwidth]{img/r5.eps} \caption*{$R_5 = \{1,2,3\}$} \end{subfigure} ~ \begin{subfigure}[b]{0.200\textwidth} \includegraphics[width=\textwidth]{img/r6.eps} \caption*{$R_6 = \{2,3\}$} \end{subfigure} ~ \begin{subfigure}[b]{0.200\textwidth} \includegraphics[width=\textwidth]{img/r7.eps} \caption*{$R_7 = \{2,3,6,7\}$} \end{subfigure} ~ \begin{subfigure}[b]{0.200\textwidth} \includegraphics[width=\textwidth]{img/r8.eps} \caption*{$R_8 = \{2,3,4,5\}$} \end{subfigure} \vspace{4mm} \caption{Master integrals for the real-real NLO contributions to the time-like splitting function.} \label{fig:2} \end{figure} We tune the $\eps^n$ factors in order to obtain a matrix with non-negative $\eps$-rank, which is given by \eqref{eq:mrr}. The corresponding powers of $\eps$ can be seen in \eqref{eq:rr-zd}.
Next, a zero-diagonal basis reads \begin{equation} \label{eq:rr-zd} \begin{aligned} R_1 & = x^{-1+2\eps} (1-x)^{-1+2\eps} R_1 \\ R_2 & = x^{-1+3\eps} R_2 \\ R_3 & = \eps (1-x)^{2\eps} R_3 \\ R_4 & = \eps^3 x^{1+2\eps} (1-x)^{1+2\eps} R_4 \\ R_5 & = \eps^2 x^{4\eps} (1-x)^{2\eps} (1+x)^{-4\eps} R_5 \\ R_6 & = \eps^2 (1+x)^{-1+6\eps} R_6 \\ R_7 & = \eps^3 x^{1+2\eps} (1-x)^{1+2\eps} R_7 \\ R_8 & = \eps^2 x^{1+2\eps} (1+x)^{1+2\eps} R_8 \end{aligned} \end{equation} Finally, a zero-triangular basis reads \begin{equation} \begin{aligned} R_1 & = x^{-1+2\eps} (1-x)^{-1+2\eps} R_1 \\ R_2 & = x^{-1+3\eps} R_2 + 2 x^{-1+2\eps} (1-x)^{-1+2\eps} \HPL_0 R_1 \\ R_3 & = \eps (1-x)^{2\eps} R_3 - x^{-1+3\eps} \HPL_1 R_2 + 2 x^{-1+2\eps} (1-x)^{-1+2\eps} \HPL_2 R_1 \\ R_4 & = \eps^3 x^{1+2\eps} (1-x)^{1+2\eps} R_4 \\ R_5 & = \eps^2 x^{4\eps} (1-x)^{2\eps} (1+x)^{-4\eps} R_5 \\ R_6 & = 2 x^{-1+2\eps} (1-x)^{-1+2\eps} (1+x)^{-1} R_1 + 2 \eps^2 x^{4\eps} (1-x)^{2\eps} (1+x)^{-1-4\eps} R_5 \\ & + \eps^2 (1+x)^{-1+6\eps} R_6 \\ R_7 & = \eps^3 x^{1+2\eps} (1-x)^{1+2\eps} R_7 \\ R_8 & = 4 x^{-1+2\eps} (1 - x)^{-1+2\eps} (1 + x)^{-1} ((1+2x)\HPL_0 + (1-x) \HPL_{-1}) R_1 \\ & - 4\eps^2 x^{4\eps} (1-x)^{2\eps} (1+x)^{-1-4\eps} (\HPL_0 - (1-x) \HPL_{-1}) R_5 \\ & + \eps^2 (1+x)^{-1+6\eps}(-2\HPL_0 + 4 \HPL_{-1}) R_6 + \eps^2 x^{1+2\eps} (1+x)^{1+2\eps} R_8 \end{aligned} \end{equation} \begin{landscape} \vspace*{\fill} \begin{equation} \label{eq:mrv} \hspace{-8mm} \begin{pmatrix} \frac{1-x - \eps (2-4x)}{x(1-x)} & 0 & 0 & 0 & 0 & 0 \\ 0 & \frac{1-x-\eps (3-4x)}{x(1-x)} & 0 & 0 & 0 & 0 \\ 0 & 0 & \frac{1-x-\eps (2-3 x)}{x(1-x)} & 0 & 0 & 0 \\ 0 & -\frac{1-5\eps+6\eps^2}{x(1-x)} & \frac{(1-2 \eps)^2}{x(1-x)} & \frac{2 \eps}{1-x} & 0 & 0 \\ 0 & \frac{\eps(3-x)\left(1-5\eps+6\eps^2\right)}{x^3(1-x)^2} & -\frac{2\eps(1-2\eps)^2}{x^2(1-x)^2} & -\frac{2 \eps^2}{x(1-x)^2} & -\frac{(1+2\eps) (1-2x)}{(1-x) x} & 0 \\ 0 & -\frac{2 \eps \left(1-5\eps+6\eps^2\right)}{x^2(1-x)} & 0 & -\frac{4 \eps^2}{x(1-x)} 
& 0 & -\frac{1+4\eps}{x} \\ \end{pmatrix} \end{equation} \vspace*{\fill} \begin{equation} \label{eq:mrr} \hspace{-8mm} \begin{pmatrix} \frac{(1-2\eps)(1-2x)}{x(1-x)} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ -\frac{2-3 \eps}{x(1-x)} & \frac{1-3 \eps}{x} & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & -\frac{1-5\eps+6\eps^2}{(1-x) x} & \frac{2 \eps}{1-x} & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -\frac{(1+2\eps) (1-2x)}{x(1-x)} & 0 & 0 & 0 & 0 \\ -\frac{\eps \left(2-13\eps+27\eps^2-18\eps^3\right)}{x^2(1+x)} & -\frac{\eps^2 \left(1-5\eps+6\eps^2\right)}{x^2} & 0 & 0 & -\frac{2 \eps \left(2-3x-x^2\right)}{x(1-x)(1+x)} & -\frac{2 \eps (1-6\eps)}{x (1+x)} & 0 & 0 \\ \frac{2-13\eps+27\eps^2-18\eps^3}{x(1-x)(1+x)} & 0 & 0 & 0 & \frac{2}{1+x} & \frac{1-6\eps}{1+x} & 0 & 0 \\ \frac{4 \eps \left(2-13\eps+27\eps^2-18\eps^3\right)}{x^3(1-x)^3(1+x)} & \frac{2 \eps^2 \left(1-5\eps+6\eps^2\right) (2-x)}{x^3(1-x)^2} & -\frac{4 \eps^3}{x(1-x)^2} & 0 & \frac{4 \eps (1+x^2)}{x^2(1-x)^2(1+x)} & \frac{2 \eps (1-6\eps)}{x^2(1-x)(1+x)} & -\frac{(1+2\eps) (1-2x)}{x(1-x)} & 0 \\ -\frac{2 \left(2-13\eps+27\eps^2-18\eps^3\right) (1+4x+x^2)}{x^3(1-x)(1+x)^3} & -\frac{2\eps \left(1-5\eps+6\eps^2\right) (1-x)}{x^3 (1+x)^2} & 0 & 0 & \frac{4 (1+x^2)}{x^2 (1+x)^3} & \frac{2(1-6\eps)(1-x)}{x^2 (1+x)^3} & 0 & -\frac{(1+2\eps)(1+2x)}{x(1+x)} \\ \end{pmatrix} \end{equation} \vspace*{\fill} \end{landscape} In summary, with the help of the method described in Section~\ref{sec:deq} we have found the master integrals of figures \ref{fig:1} and \ref{fig:2}. The solutions are presented in Appendices~\ref{app:a} and~\ref{app:b} as partly-expanded series in the dimensional regulator $\eps$ with at least three leading terms, i.e., up to HPLs of weight 2. That is enough for our purpose, i.e., to extract splitting functions as discussed in Section~\ref{sec:2}. Furthermore, the higher-order $\eps$-terms of the presented solutions are easy to obtain, provided the corresponding $\eps$-terms of the Mellin moments, required to fix the boundary conditions, are known.
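To illustrate that order-by-order structure, the following sketch (assuming \texttt{sympy}; the single equation $f'(x) = \tfrac{2\eps}{x} f(x)$ with $f(1)=1$ is a toy stand-in for one diagonal entry of the systems above, with exact solution $x^{2\eps}$) generates successive $\eps$-orders as iterated integrals:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
M = 2/x                  # toy kernel: f'(x) = eps*M(x)*f(x), f(1) = 1
order = 3

terms = [sp.Integer(1)]  # eps^0 term, fixed by the boundary condition
for k in range(1, order + 1):
    # each eps-order is an explicit iterated integral of the previous one,
    # i.e. an HPL of one higher weight (here: powers of log(x))
    terms.append(sp.integrate(M*terms[-1], (x, 1, x)))

# terms == [1, 2*log(x), 2*log(x)**2, 4*log(x)**3/3],
# the eps-expansion of the exact solution x**(2*eps)
```

In the actual computation the kernels are the matrix entries of \eqref{eq:mrv} and \eqref{eq:mrr}, and the integration constant at each order comes from a Mellin moment.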
\section{Conclusions} In this paper we proposed a method for calculating phase-space integrals for the decay process \mbox{$1 \to n$} of massless partons in QCD using integration-by-parts and differential-equation techniques. The key idea of our approach is to choose a basis of master integrals which leads to a significant simplification of the differential equations. As the main result of this work, we describe an algorithm for constructing such a basis and for solving the resulting differential equations. The advantage of our approach compared to available techniques is that it is relatively simple to automate for execution on a computer without loss of generality of the final solution, which is obtained to any order in the dimensional regulator $\eps$ in terms of (generalized) harmonic polylogarithms. This, however, requires knowledge of at least one Mellin moment for every master integral in order to determine the boundary conditions of the final solution. In order to demonstrate how our method works in practice, we calculate master integrals for the decay processes $1 \to 4$ and $1 \to 3$ with a projection to $x$-space, needed to extract NLO time-like splitting functions from the $e^+e^-$ annihilation process. Analyzing this example, we notice that another asset of the proposed method is that the resulting master integrals are explicitly regulated at the singular points with the help of the dimensional regulator $\eps$, as manifested by the overall factors $x^{-1+a\eps}$ and $(1-x)^{-1+b\eps}$ in the final result. The generalization of the results to NNLO topologies with loop insertions, needed to obtain the missing $n_f^2$ pieces of the off-diagonal time-like splitting functions, is particularly straightforward due to the factorizability of the phase-space. In addition, master integrals with various types of projectors, not only to $x$-space as in the case of splitting functions, can be obtained with the described method as well.
\acknowledgments I gratefully acknowledge the hospitality of the Theory Group of the University of Hamburg, where a major part of this research was done. In particular, I am thankful to Prof.\ Sven-Olaf Moch for numerous discussions, his support and guidance. I also acknowledge useful discussions and comments from Johannes Henn and Roman Lee. The Feynman diagrams were drawn with the help of \texttt{JaxoDraw}~\cite{BCKT08} and \texttt{Axodraw}~\cite{Ver94}. This work has been supported by the Research Executive Agency (REA) with the European Union grant PITN-GA-2010-264564 (LHCPhenoNet) and by Narodowe Centrum Nauki with the Sonata Bis grant DEC-2013/10/E/ST2/00656. \newpage
\section{Multi-Step Methods for Mechanical Engineers} Multi-step methods \cite{Butcher08,HairerWanner91} are numerical schemes that are used to approximate solutions for systems of ODEs which commonly arise in engineering practice. Because the intended readers of this document are my students, who will become Mechanical Engineers upon graduation, I present these methods using variables that are intuitive to them: time $t$ is the independent variable of integration, and position $\mathbf{x} = \{ x_1 , x_2 , x_3 \}^{\mathsf{T}}$ is the dependent variable of integration (plus, sometimes, velocity), while velocity $\mathbf{v} = \{ v_1 , v_2 , v_3 \}^{\mathsf{T}} = \mathbf{v}(t, \mathbf{x})$ and acceleration $\mathbf{a} = \{ a_1 , a_2 , a_3 \}^{\mathsf{T}} = \mathbf{a}(t, \mathbf{x}, \mathbf{v})$ are functions of these independent and dependent variables. The time rate-of-change of acceleration is jerk $\dot{\mathbf{a}} = \{ \dot{a}_1 , \dot{a}_2 , \dot{a}_3 \}^{\mathsf{T}}$, which is introduced as a means by which improvements in solution accuracy can be made. Problems like these commonly arise in applications within disciplines like kinematics, dynamics, thermo\-dynamics, vibrations, controls, process kinetics, etc. The methods presented in this document apply to systems of any dimension; it is just that $t$, $\mathbf{x}$, $\mathbf{v}$ and $\mathbf{a}$ are physical notions for which my students have an intuitive understanding. Current engineering curricula expose students to some basic methods like Euler's method (you should never use forward-Euler by itself), a simple Euler predictor with a trapezoidal corrector, often called Heun's method, and \textit{the\/} Runge-Kutta method. Kutta \cite{Kutta01} derived \textit{the\/} Runge-Kutta method (Runge played no part here). This method, likely the most popular of all ODE solvers, was not the method Kutta actually advocated for use.
He derived a more accurate fourth-order method in his paper---a method that has sadly become lost to the obscurity of dusty shelves. The intent of this note is to inform my students about the existence and utility of a whole other class of ODE solvers that have great value in many applications. These are called multi-step methods. A multi-step method makes an informed decision on the direction in which its solution will advance into the future based upon where it has been in the recent past. In contrast, Runge-Kutta methods sample multiple paths in the present to make an informed decision on the direction in which their solutions will advance into the future. The past does not enter into the Runge-Kutta process. These two classes of numerical methods are fundamentally different in this regard. There is an emerging field within computational mathematics where these two approaches are being melded into one. They are called general linear methods, and two such methods can be found in Appendix~D of my textbook \cite{Freed14}. We will not address them here. \section{The Objective} Throughout this document we shall consider an interval in time $[0,T]$ over which $N$ solutions are to be extracted at nodes $n = 1,2, \ldots, N$ spaced at uniform intervals in time with a common step size of $h = T/N$ separating them. This is referred to as the global step size. A local step size will be introduced later, which will be the actual step size that an integrator uses to advance along its solution path. This size dynamically adjusts to maintain solution accuracy, and is under the control of a proportional-integral (PI) controller. Node $n$ is located at the current time. Here is where the solution front resides. Node $n \! - \! 1$ is where the previous solution was acquired, while node $n \! + \! 1$ is where the next solution is to be calculated.
In this regard, information storage required by these methods is compatible with memory strategies and coding practices adopted by many industrial codes like finite elements. This requirement of working solely with nodes $n \! - \! 1$, $n$, $n \! + \! 1$ will limit the accuracy that one can achieve with these methods. Higher-order multi-step methods require more nodes, and as such, more information history. Our objective is to construct a collection of numerical methods that resemble the popular, second-order, backward-difference formula \cite{HairerWanner91}, denoted as BDF2 in the literature and software packages. BDF2 is described by \begin{equation*} \mathbf{x}_{n+1} = \tfrac{1}{3} \bigl( 4 \mathbf{x}_n - \mathbf{x}_{n-1} \bigr) + \tfrac{2}{3} \, h \mathbf{v}_{n+1} + \mathcal{O} (h^3) \end{equation*} and is an implicit method in that $\mathbf{v} = \mathbf{v} (t, \mathbf{x})$, typically, and therefore $\mathbf{x}_{n+1}$ appears on both sides of the equals sign. There are good reasons for selecting this numerical model upon which to construct other methods; specifically, BDF2 is a convergent method in that it is consistent and A~stable \cite{Butcher08}. These are noble properties to aspire to, but their discussion lies beyond the scope of this document. Here your professor seeks to provide techniques that address three questions: \textit{i\/}) How can one apply an implicit multi-step method, where you need to know the solution to get the solution? \textit{ii\/}) How can one start up a multi-step method, given that at the initial condition there is no solution history? and \textit{iii\/}) Numerical ODE solvers typically solve first-order systems, but Newton's Laws of Motion are described with a second-order system. How can one construct an ODE solver designed to handle these types of problems?
An answer to the first question is: We will introduce a predictor to get an initial solution estimate; specifically, predict\slash evaluate\slash correct\slash evaluate (PECE) schemes are developed. An answer to the second question is: A single-step method can be used to start up a two-step method. And an answer to the third question is: We will use the natural features of multi-step methods and Taylor series expansions to construct solvers for second-order ODEs. Several of the methods found in this document are not found in the literature. Your professor created them just for you! \subsection{Strategy} The strategy used to construct multi-step algorithms is to expand an appropriate linear combination of Taylor series for displacement $\mathbf{x}$ taken about solution nodes at discrete times. In our case, expansions are taken about times $t_{n-1}$, $t_n$ and $t_{n+1}$ such that their sum replicates the general structure of the BDF2 method. Specifically, we seek two-step methods with constituents $\mathbf{x}_{n+1} = \tfrac{1}{3} (4 \mathbf{x}_n - \mathbf{x}_{n-1}) + \cdots$ that are common betwixt them. Each Taylor series is expanded out to include acceleration $\mathbf{a}$ for methods that solve first-order ODEs, and each Taylor series is expanded out to include jerk $\dot{\mathbf{a}}$ for methods that solve second-order ODEs.
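The bookkeeping behind this strategy can be checked symbolically. The sketch below (assuming \texttt{sympy}) verifies that the BDF2 combination from the previous section is third-order accurate locally, by substituting Taylor polynomials with symbolic coefficients for position, velocity, acceleration and jerk:

```python
import sympy as sp

h, x0, v0, a0, j0 = sp.symbols('h x0 v0 a0 j0')

# Taylor polynomials about t_n, truncated at jerk
def pos(s):   # x(t_n + s)
    return x0 + v0*s + a0*s**2/2 + j0*s**3/6

def vel(s):   # v(t_n + s) = x'(t_n + s)
    return v0 + a0*s + j0*s**2/2

# residual of BDF2: x_{n+1} - [ (4 x_n - x_{n-1})/3 + (2h/3) v_{n+1} ]
res = pos(h) - (sp.Rational(4, 3)*pos(0) - sp.Rational(1, 3)*pos(-h)
                + sp.Rational(2, 3)*h*vel(h))

# everything up to h^2 cancels; the leading error is -(2/9) j0 h^3
print(sp.expand(res))   # -> -2*h**3*j0/9
```

Running the same check on each candidate linear combination is how the predictor and corrector weights derived below can be validated.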
The pertinent series for displacement include \begin{subequations} \label{TaylorDisplacements} \begin{align} \mathbf{x}_{n+1} & = \mathbf{x}_n + h \mathbf{v}_n + \tfrac{1}{2} h^2 \mathbf{a}_n + \tfrac{1}{6} h^3 \dot{\mathbf{a}}_n + \cdots \label{displacementA} \\ \mathbf{x}_n & = \mathbf{x}_{n+1} - h \mathbf{v}_{n+1} + \tfrac{1}{2} h^2 \mathbf{a}_{n+1} - \tfrac{1}{6} h^3 \dot{\mathbf{a}}_{n+1} + \cdots \label{displacementB} \\ \mathbf{x}_n & = \mathbf{x}_{n-1} + h \mathbf{v}_{n-1} + \tfrac{1}{2} h^2 \mathbf{a}_{n-1} + \tfrac{1}{6} h^3 \dot{\mathbf{a}}_{n-1} + \cdots \label{displacementC} \\ \mathbf{x}_{n-1} & = \mathbf{x}_n - h \mathbf{v}_n + \tfrac{1}{2} h^2 \mathbf{a}_n - \tfrac{1}{6} h^3 \dot{\mathbf{a}}_n + \cdots \label{displacementD} \end{align} \end{subequations} where the set of admissible expansions only involve nodes $n \! - \! 1$, $n$ and $n \! + \! 1$. Once these are in place, like Taylor expansions for the velocity are secured \begin{subequations} \label{TaylorVelocities} \begin{align} \mathbf{v}_{n+1} & = \mathbf{v}_n + h \mathbf{a}_n + \tfrac{1}{2} h^2 \dot{\mathbf{a}}_n + \cdots \label{velocityA} \\ \mathbf{v}_n & = \mathbf{v}_{n+1} - h \mathbf{a}_{n+1} + \tfrac{1}{2} h^2 \dot{\mathbf{a}}_{n+1} + \cdots \label{velocityB} \\ \mathbf{v}_n & = \mathbf{v}_{n-1} + h \mathbf{a}_{n-1} + \tfrac{1}{2} h^2 \dot{\mathbf{a}}_{n-1} + \cdots \label{velocityC} \\ \mathbf{v}_{n-1} & = \mathbf{v}_n - h \mathbf{a}_n + \tfrac{1}{2} h^2 \dot{\mathbf{a}}_n + \cdots . \label{velocityD} \end{align} \end{subequations} These series are solved for acceleration for the first-order ODE solvers, and for jerk for the second-order ODE solvers. These solutions for acceleration\slash jerk are then inserted back into the original series for displacement. The net effect is to incorporate contributions for acceleration\slash jerk by approximating them in terms of velocities and, possibly, accelerations, thereby increasing the order of accuracy for the overall method by one order, e.g. 
from second-order, i.e., $\mathcal{O}(h^3)$, to third-order, viz., $\mathcal{O}(h^4)$, for the second-order ODE methods. This is accomplished without the solver explicitly needing any information about jerk from the user, which would be hard to come by in practice. We speak of a method being, say, second-order accurate, and designate this with the notation $\mathcal{O}(h^3)$. There may seem to be a discrepancy between the order of a method and the exponent of $h$. This arises because the `order' of a method represents the global order of accuracy of a solution, whereas the exponent on the $h$-term represents the local order of accuracy over a single step of integration, i.e., the order of its local error estimate. Our objective, viz., $\mathbf{x}_{n+1} = \mathbf{x}_n + \cdots$ for one-step (startup) methods and $\mathbf{x}_{n+1} = \tfrac{1}{3} (4 \mathbf{x}_n - \mathbf{x}_{n-1}) + \cdots$ for two-step methods, is achieved by applying the following linear combinations of Taylor series \begin{displaymath} \textrm{predictors} \Leftarrow \begin{cases} 1 (\ref{displacementA}) & \textrm{one-step} \\ 1 (\ref{displacementA}) - \tfrac{1}{6} (\ref{displacementC}) + \tfrac{1}{6} (\ref{displacementD}) & \textrm{two-step} \end{cases} \end{displaymath} and \begin{displaymath} \textrm{correctors} \Leftarrow \begin{cases} \tfrac{1}{2} (\ref{displacementA}) - \tfrac{1}{2} (\ref{displacementB}) & \textrm{one-step} \\ \tfrac{4}{3} (\ref{displacementA}) + \tfrac{1}{3} (\ref{displacementB}) + \tfrac{1}{3} (\ref{displacementD}) & \textrm{two-step } \# 1 \\ \tfrac{4}{3} (\ref{displacementA}) + \tfrac{1}{3} (\ref{displacementB}) - \tfrac{1}{6} (\ref{displacementC}) + \tfrac{1}{6} (\ref{displacementD}) & \textrm{two-step } \# 2 \end{cases} \end{displaymath} with \begin{displaymath} \textrm{truncation errors} \Leftarrow \begin{cases} \tfrac{1}{2} \| (\ref{displacementA}) + (\ref{displacementB}) \| & \textrm{one-step} \\ \tfrac{1}{6} \| 2
(\ref{displacementA}) + 2 (\ref{displacementB}) + 1 (\ref{displacementC}) + 1 (\ref{displacementD}) \| & \textrm{two-step } \# 1 \\ \tfrac{1}{3} \| (\ref{displacementA}) + (\ref{displacementB}) \| & \textrm{two-step } \# 2 \end{cases} \end{displaymath} wherein the parenthetical numbers refer to the sub-equations listed in Eq.~(\ref{TaylorDisplacements}) and where the coefficients out front designate the weight applied to that formula. To be a corrector requires expansion (\ref{displacementB}), which must not appear in a predictor. There are two ways to construct a corrector that satisfy our conjecture, and both will be used. A design objective is to come up with a predictor\slash corrector pair that weigh their contributions the same; specifically, their displacements are weighted the same, their velocities are weighted the same, and when present, their accelerations are weighted the same, too. \section{PECE Methods for First-Order ODEs} \label{Sec:firstOrder} The following algorithm is suitable for numerically approximating solutions to stiff systems of ODEs, which engineers commonly encounter. The idea of mathematical stiffness is illustrated through an example in \S\ref{Sec:Brusselator}. For this class of problems it is assumed that velocity is described as a function in time and displacement, e.g., at step $n$ a formula would give $\mathbf{v}_n = \mathbf{v} (t_n , \mathbf{x}_n)$. An initial condition $\mathbf{x} (0) = \mathbf{x}_0$ is required to start an analysis. The objective is to solve this ODE for displacement $\mathbf{x}_{n+1}$ evaluated at the next moment in time $t_{n+1}$, wherein $n$ sequences as $n = 0, 1, \ldots , N \! - \! 1$. Heun's method is used to take the first integration step. 
Begin by applying a predictor (it is a forward Euler step) \begin{subequations} \label{startUp1stOrderODEs} \begin{align} \mathbf{x}_1^p & = \mathbf{x}_0 + h \mathbf{v}_0 + \mathcal{O} (h^2) \label{startUp1stOrderPredictor} \\ \intertext{which is to be followed with an evaluation for velocity $\mathbf{v}^p_1 = \mathbf{v} (t_1 , \mathbf{x}_1^p)$ using this predicted estimate for displacement. A corrector is then applied (it is the trapezoidal rule)} \mathbf{x}_1 & = \mathbf{x}_0 + \tfrac{1}{2} h \bigl( \mathbf{v}_1^p + \mathbf{v}_0 \bigr) + \mathcal{O} (h^3) \label{startUp1stOrderCorrector} \end{align} \end{subequations} after which a final re-evaluation for velocity $\mathbf{v}_1 = \mathbf{v} (t_1 , \mathbf{x}_1)$ is made and the first step comes to a close. In this case, using another Taylor series to subtract out the influences from acceleration did not bring about any change to the formula. This is not unexpected, as the trapezoidal method is already second-order accurate, i.e., it has a truncation error on the order of $\mathcal{O}(h^3)$. The step counter is assigned a value of $n = 1$, after which control of the solution process is passed over to the following method. For entering step counts that lie within the interval $n=1$ to $n = N \! - \! 1$, numeric integration continues by employing a predictor \begin{subequations} \label{1stOrderODEs} \begin{align} \mathbf{x}_{n+1}^p & = \tfrac{1}{3} \bigl( 4 \mathbf{x}_n - \mathbf{x}_{n-1} \bigr) + \tfrac{2}{3} h \bigl( 2\mathbf{v}_n - \mathbf{v}_{n-1} \bigr) + \mathcal{O} (h^3) \label{1stOrderPredictor} \\ \intertext{followed by an evaluation for velocity via $\mathbf{v}^p_{n+1} = \mathbf{v} (t_{n+1} , \mathbf{x}_{n+1}^p)$ using this predicted estimate for displacement. 
Here including correction terms for acceleration changed $\tfrac{1}{6} h ( 5 \mathbf{v}_n - \mathbf{v}_{n-1})$ to $\tfrac{2}{3} h ( 2 \mathbf{v}_n - \mathbf{v}_{n-1} )$ and in the process improved its accuracy from $\mathcal{O}(h^2)$ to $\mathcal{O}(h^3)$. The corrector obtained according to our recipe for a type \#1 method is} \mathbf{x}_{n+1} & = \tfrac{1}{3} \bigl( 4 \mathbf{x}_n - \mathbf{x}_{n-1} \bigr) + \tfrac{2}{3} h \mathbf{v}^p_{n+1} + \mathcal{O} (h^3) \label{1stOrderCorrector} \end{align} \end{subequations} which culminates with a re-evaluation for $\mathbf{v}_{n+1} = \mathbf{v} ( t_{n+1}, \mathbf{x}_{n+1})$. This corrector is the well-known BDF2 formula, the method we are generalizing around. Including correction terms for acceleration changed $- \tfrac{1}{3} h ( \mathbf{v}^p_{n+1} - 3 \mathbf{v}_n)$ to $\tfrac{2}{3} h \mathbf{v}^p_{n+1}$ and in the process improved its accuracy from $\mathcal{O}(h^2)$ to $\mathcal{O}(h^3)$. For both integrators, displacement has weight 1, while velocity has weight $\tfrac{2}{3} h$. The predictor and corrector are consistent in this regard, a required design objective when deriving an admissible PECE method. Variables are to be updated according to $n \! - \! 1 \leftarrow n$ and $n \leftarrow n \! + \! 1$ after which counter $n$ gets incremented. After finishing with the data management, the solution is ready for advancement to the next integration step, with looping continuing until $n = N$ whereat the solution becomes complete. \section{PECE Methods for Second-Order ODEs} \label{Sec:secondOrder} For this class of problems it is assumed that the velocity is described as a function of time and displacement, e.g., $\mathbf{v}_n = \mathbf{v} (t_n , \mathbf{x}_n)$, and likewise, the acceleration is also a prescribed function in terms of time, displacement and velocity, e.g., $\mathbf{a}_n = \mathbf{a} (t_n , \mathbf{x}_n , \mathbf{v}_n)$. 
An initial condition is to be supplied by the user, viz., $\mathbf{x} (0) = \mathbf{x}_0$. The objective of this method is to solve this second-order ODE for displacement $\mathbf{x}_{n+1}$, which is to be evaluated at the next moment in time $t_{n+1}$, wherein $n=0,1, \ldots , N \! - \! 1$. Like the previous method, this is a two-step method so, consequently, it is not self starting. To take a first step, apply the predictor (a straightforward Taylor series expansion) \begin{subequations} \label{startup} \begin{align} \mathbf{x}_1^p & = \mathbf{x}_0 + h \mathbf{v}_0 + \tfrac{1}{2} h^2 \mathbf{a}_0 + \mathcal{O} (h^3) \label{startupPredictor} \\ \intertext{followed by evaluations $\mathbf{v}^p_1 = \mathbf{v} (t_1, \mathbf{x}^p_1)$ and $\mathbf{a}^p_1 = \mathbf{a} (t_1, \mathbf{x}^p_1, \mathbf{v}^p_1)$ to prepare for executing its corrector} \mathbf{x}_1 & = \mathbf{x}_0 + \tfrac{1}{2} h \bigl( \mathbf{v}^p_1 + \mathbf{v}_0 \bigr) - \tfrac{1}{12} h^2 \bigl( \mathbf{a}^p_1 - \mathbf{a}_0 \bigr) + \mathcal{O} (h^4) \label{startupCorrector} \end{align} \end{subequations} after which one re-evaluates $\mathbf{v}_1 = \mathbf{v} (t_1, \mathbf{x}_1)$ and $\mathbf{a}_1 = \mathbf{a} (t_1, \mathbf{x}_1, \mathbf{v}_1)$. Including correction terms for jerk changed $- \tfrac{1}{4} h^2 ( \mathbf{a}^p_{n+1} - \mathbf{a}_n)$ to $- \tfrac{1}{12} h^2 ( \mathbf{a}^p_{n+1} - \mathbf{a}_n)$ and in the process improved its accuracy from $\mathcal{O}(h^3)$ to $\mathcal{O}(h^4)$. After this integrator has been run once, a switch is made to employ the two-step PECE method described below to finish up. For entering step counts that lie within the interval $n=1$ to $n = N \! - \! 
1$, numeric integration continues by employing a predictor \begin{subequations} \label{PECE} \begin{align} \mathbf{x}_{n+1}^p & = \tfrac{1}{3} \bigl( 4 \mathbf{x}_n - \mathbf{x}_{n-1} \bigr) + \tfrac{1}{6} h \bigl( 3 \mathbf{v}_n + \mathbf{v}_{n-1} \bigr) \notag \\ \mbox{} & \hspace{4.5cm} + \tfrac{1}{36} h^2 \bigl( 31 \mathbf{a}_n - \mathbf{a}_{n-1} \bigr) + \mathcal{O} (h^4) \label{predictor} \\ \intertext{followed by evaluations $\mathbf{v}^p_{n+1} = \mathbf{v} (t_{n+1}, \mathbf{x}^p_{n+1})$ and $\mathbf{a}^p_{n+1} = \mathbf{a} (t_{n+1}, \mathbf{x}^p_{n+1}, \mathbf{v}^p_{n+1})$ to be made sequentially. Here including correction terms for jerk changed $\tfrac{1}{6} h ( 5 \mathbf{v}_n - \mathbf{v}_{n-1} )$ $+$ $\tfrac{1}{12} h^2 ( 7 \mathbf{a}_n - \mathbf{a}_{n-1} )$ to $\tfrac{1}{6} h ( 3 \mathbf{v}_n + \mathbf{v}_{n-1} )$ $+$ $\tfrac{1}{36} h^2 ( 31 \mathbf{a}_n - \mathbf{a}_{n-1} )$ and in the process improved its accuracy from $\mathcal{O}(h^3)$ to $\mathcal{O}(h^4)$. A corrector that is consistent with the above predictor is} \mathbf{x}_{n+1} & = \tfrac{1}{3} \bigl( 4 \mathbf{x}_n - \mathbf{x}_{n-1} \bigr) + \tfrac{1}{24} h \bigl( \mathbf{v}^p_{n+1} + 14 \mathbf{v}_n + \mathbf{v}_{n-1} \bigr) \notag \\ \mbox{} & \hspace{4.5cm} + \tfrac{1}{72} h^2 \bigl( 10 \mathbf{a}^p_{n+1} + 51 \mathbf{a}_n - \mathbf{a}_{n-1} \bigr) + \mathcal{O} (h^4) \label{corrector} \end{align} \end{subequations} whose derivation follows below in \S\ref{Sec:derivation}. With the corrector having been run, finish by re-evaluating $\mathbf{v}_{n+1} = \mathbf{v} (t_{n+1}, \mathbf{x}_{n+1})$ and $\mathbf{a}_{n+1} = \mathbf{a} (t_{n+1}, \mathbf{x}_{n+1}, \mathbf{v}_{n+1})$. Variables are to be updated according to $n \! - \! 1 \leftarrow n$, $n \leftarrow n \! + \! 1$, plus the counter $n$ gets incremented. After that the solution is ready for advancement to the next integration step, with looping continuing until $n = N$ whereat the solution becomes complete. 
\subsection{Derivation of the Corrector} \label{Sec:derivation} The corrector obtained via our recipe for a type \#1 corrector is \begin{displaymath} \mathbf{x}_{n+1} = \tfrac{1}{3} \bigl( 4 \mathbf{x}_n - \mathbf{x}_{n-1} \bigr) + \tfrac{1}{9} h \bigl( \mathbf{v}^p_{n+1} + 5 \mathbf{v}_n \bigr) + \tfrac{2}{9} h^2 \bigl( \mathbf{a}^p_{n+1} + 3 \mathbf{a}_n \bigr) + \mathcal{O} (h^4) \end{displaymath} where inclusion of correction terms for jerk changed $\tfrac{1}{3} h ( -\mathbf{v}^p_{n+1} + 3 \mathbf{v}_n )$ $+$ $\tfrac{1}{6} h^2 ( \mathbf{a}^p_{n+1} + 5 \mathbf{a}_n )$ to $\tfrac{1}{9} h ( \mathbf{v}^p_{n+1} + 5 \mathbf{v}_n )$ $+$ $\tfrac{2}{9} h^2 ( \mathbf{a}^p_{n+1} + 3 \mathbf{a}_n )$ and in the process improved its accuracy from $\mathcal{O}(h^3)$ to $\mathcal{O}(h^4)$. The corrector obtained via our recipe for a type \#2 corrector is \begin{displaymath} \begin{aligned} \mathbf{x}_{n+1} & = \tfrac{1}{3} \bigl( 4 \mathbf{x}_n - \mathbf{x}_{n-1} \bigr) + \tfrac{1}{36} h \bigl( - \mathbf{v}^p_{n+1} + 22 \mathbf{v}_n + 3 \mathbf{v}_{n-1} \bigr) \\ \mbox{} & \hspace{4.5cm} + \tfrac{1}{36} h^2 \bigl( 2 \mathbf{a}^p_{n+1} + 27 \mathbf{a}_n - \mathbf{a}_{n-1} \bigr) + \mathcal{O} (h^4) \end{aligned} \end{displaymath} where the correction terms for jerk changed $-\tfrac{1}{6} h ( -2 \mathbf{v}_{n+1}^p + 7 \mathbf{v}_n - \mathbf{v}_{n-1} )$ $+$ $\tfrac{1}{12} h^2 ( 2 \mathbf{a}^p_{n+1} + 9 \mathbf{a}_n - \mathbf{a}_{n-1} )$ to $\tfrac{1}{36} h ( - \mathbf{v}^p_{n+1} + 22 \mathbf{v}_n + 3 \mathbf{v}_{n-1} )$ $+$ $\tfrac{1}{36} h^2 ( 2 \mathbf{a}^p_{n+1} + 27 \mathbf{a}_n - \mathbf{a}_{n-1} )$ and in the process improved its accuracy from $\mathcal{O}(h^3)$ to $\mathcal{O}(h^4)$. Unfortunately, neither of these two correctors is consistent with the predictor in Eq.~(\ref{predictor}). This predictor has a weight imposed on displacement of 1, a weight imposed on velocity of $\tfrac{2}{3} h$, and a weight imposed on acceleration of $\tfrac{5}{6} h^2$. 
It is desirable to seek a corrector with these same weights. This would imply that if a field, say acceleration, were uniform over a time interval, say $[t_{n-1}, t_{n+1}]$, then both the predictor and corrector would produce the same numeric value for acceleration's contribution to the overall result at this location in time. The correctors derived from types~\#1 and \#2 are consistent with this predictor for all contributions except acceleration. In terms of acceleration, the predictor has a weight of $\tfrac{5}{6} h^2$, while corrector~\#1 has a weight of $\tfrac{8}{9} h^2$ and corrector \#2 has a weight of $\tfrac{7}{9} h^2$. Curiously, averaging correctors~\#1 and \#2 does produce the correct weight. There is consistency between the predictor and this `averaged' corrector, which is the corrector put forward in Eq.~(\ref{corrector}). \subsection{When Only Acceleration is Controlled} \label{Sec:Newton} There is an important class of problems that is similar to the above class in that acceleration is described through a function of state; however, velocity is not. Velocity, like displacement, is a response function for this class of problems. Acceleration is still described by a function of time, displacement and velocity, e.g., $\mathbf{a}_n = \mathbf{a} (t_n , \mathbf{x}_n , \mathbf{v}_n)$; however, instead of the velocity being given as a function, it, like displacement, is to be solved for through integration. Two initial conditions must be supplied, viz., $\mathbf{x} (0) = \mathbf{x}_0$ and $\mathbf{v} (0 , \mathbf{x}_0) = \mathbf{v}_0$. This is how Newton's Second Law usually presents itself for analysis. Beeman \cite{Beeman76} constructed a different set of multi-step methods that can also be used to get solutions for this class of problems. The method put forward here is a two-step method; therefore, it will require a one-step method to start up an analysis. 
To start integration, take the first step using predictors \begin{subequations} \label{pairedStartUp} \begin{align} \mathbf{x}_1^p & = \mathbf{x}_0 + h \mathbf{v}_0 + \tfrac{1}{2} h^2 \mathbf{a}_0 + \mathcal{O} (h^3) \label{startupDisplacementPredictor} \\ \mathbf{v}^p_1 & = \mathbf{v}_0 + h \mathbf{a}_0 + \mathcal{O} (h^3) \label{startUpVelocityPredictor} \\ \intertext{followed by an evaluation for $\mathbf{a}^p_1 = \mathbf{a} (t_1, \mathbf{x}^p_1, \mathbf{v}^p_1)$. Their paired correctors are} \mathbf{x}_1 & = \mathbf{x}_0 + \tfrac{1}{2} h \bigl( \mathbf{v}^p_1 + \mathbf{v}_0 \bigr) - \tfrac{1}{12} h^2 \bigl( \mathbf{a}^p_1 - \mathbf{a}_0 \bigr) + \mathcal{O} (h^4) \label{startupDisplacementCorrector} \\ \mathbf{v}_1 & = \mathbf{v}_0 + \tfrac{1}{2} h \bigl( \mathbf{a}_1^p + \mathbf{a}_0 \bigr) + \mathcal{O} (h^4) \label{startUpVelocityCorrector} \end{align} \end{subequations} followed with a re-evaluation for $\mathbf{a}_1 = \mathbf{a} (t_1, \mathbf{x}_1, \mathbf{v}_1)$. With the first step of integration taken, one can switch to the PECE algorithm described below. For entering step counts that lie within the interval $n=1$ to $n = N \! - \! 1$, numeric integration continues by employing predictors \begin{subequations} \label{pairedMethods} \begin{align} \mathbf{x}_{n+1}^p & = \tfrac{1}{3} \bigl( 4 \mathbf{x}_n - \mathbf{x}_{n-1} \bigr) + \tfrac{1}{6} h \bigl( 3 \mathbf{v}_n + \mathbf{v}_{n-1} \bigr) \notag \\ \mbox{} & \hspace{3.175cm} + \tfrac{1}{36} h^2 \bigl( 31 \mathbf{a}_n - \mathbf{a}_{n-1} \bigr) + \mathcal{O} (h^4) \label{displacementPredictor} \\ \mathbf{v}_{n+1}^p & = \tfrac{1}{3} \bigl( 4 \mathbf{v}_n - \mathbf{v}_{n-1} \bigr) + \tfrac{2}{3} h \bigl( 2\mathbf{a}_n - \mathbf{a}_{n-1} \bigr) + \mathcal{O} (h^4) \label{velocityPredictor} \\ \intertext{followed with an evaluation of $\mathbf{a}^p_{n+1} = \mathbf{a} (t_{n+1}, \mathbf{x}^p_{n+1}, \mathbf{v}^p_{n+1})$. 
The paired correctors belonging with these predictors are} \mathbf{x}_{n+1} & = \tfrac{1}{3} \bigl( 4 \mathbf{x}_n - \mathbf{x}_{n-1} \bigr) + \tfrac{1}{24} h \bigl( \mathbf{v}^p_{n+1} + 14 \mathbf{v}_n + \mathbf{v}_{n-1} \bigr) \notag \\ \mbox{} & \hspace{3.175cm} + \tfrac{1}{72} h^2 \bigl( 10 \mathbf{a}^p_{n+1} + 51 \mathbf{a}_n - \mathbf{a}_{n-1} \bigr) + \mathcal{O} (h^4) \label{displacementCorrector} \\ \mathbf{v}_{n+1} & = \tfrac{1}{3} \bigl( 4 \mathbf{v}_n - \mathbf{v}_{n-1} \bigr) + \tfrac{2}{3} h \mathbf{a}^p_{n+1} + \mathcal{O} (h^4) \label{velocityCorrector} \end{align} \end{subequations} which are followed with a re-evaluation for $\mathbf{a}_{n+1} = \mathbf{a} (t_{n+1}, \mathbf{x}_{n+1}, \mathbf{v}_{n+1})$. Variables are to be updated according to $n \! - \! 1 \leftarrow n$ and $n \leftarrow n \! + \! 1$, after which counter $n$ gets incremented. Upon finishing the data management, a solution is ready for advancement to the next integration step, with looping continuing until $n = N$ whereat the solution becomes complete. \section{Error and Step-Size Control} \label{Sec:PI} To be able to control the local truncation error one must first have an estimate for its value. Here error is defined as a norm in the difference between predicted and corrected values. A recipe for computing this is stated in the \textit{Strategy\/} section. These expressions, although informative, cannot be used as stated because Taylor expansions for velocity have been applied to remove the next higher-order term in the Taylor series for displacement to improve accuracy. An estimate for truncation error is simply \begin{equation} \varepsilon_{n+1} = \frac{ \| \mathbf{x}_{n+1} - \mathbf{x}^p_{n+1} \| } {\max (1 , \| \mathbf{x}_{n+1} \| )} \label{truncationError} \end{equation} which can be used to control the size of a time step applied to an integrator, i.e., a local time step. 
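The per-step recipe for this Newton's-law solver, together with the error estimate of Eq.~(\ref{truncationError}), can be sketched in code. This is a minimal sketch, not part of the paper; the function name \texttt{pece\_step} and the argument layout are assumptions.

```python
import numpy as np

def pece_step(a, t_n, h, x_prev, x_n, v_prev, v_n, a_prev, a_n):
    """One PECE step of the two-step solver for x'' = a(t, x, v):
    the quoted predictors, their paired correctors, and the relative
    truncation-error estimate."""
    t_next = t_n + h
    # Predict (explicit):
    xp = (4.0 * x_n - x_prev) / 3.0 + h * (3.0 * v_n + v_prev) / 6.0 \
        + h * h * (31.0 * a_n - a_prev) / 36.0
    vp = (4.0 * v_n - v_prev) / 3.0 + 2.0 * h * (2.0 * a_n - a_prev) / 3.0
    # Evaluate at the predicted state:
    ap = a(t_next, xp, vp)
    # Correct:
    x_next = (4.0 * x_n - x_prev) / 3.0 + h * (vp + 14.0 * v_n + v_prev) / 24.0 \
        + h * h * (10.0 * ap + 51.0 * a_n - a_prev) / 72.0
    v_next = (4.0 * v_n - v_prev) / 3.0 + 2.0 * h * ap / 3.0
    # Re-evaluate at the corrected state:
    a_next = a(t_next, x_next, v_next)
    # Relative local truncation-error estimate:
    err = np.linalg.norm(x_next - xp) / max(1.0, np.linalg.norm(x_next))
    return x_next, v_next, a_next, err
```

The caller shifts the returned values into the history slots and advances the counter, exactly as described in the text.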
Our objective here is to keep $\varepsilon$ below some allowable error, i.e., a user specified tolerance denoted as \textit{tol}, typically set within the range of $[10^{-8}, 10^{-2}]$. At this juncture it is instructive to introduce separate notations for the two time steps that arise in a typical implementation for an algorithm of this type into code. Let $\Delta t$ denote the global time step, and let $h$ denote the local time step. The global time step is considered to be uniformly sized at $\Delta t = T / N$, where $T$ is the time at which analysis stops and $N$ is the number of discrete nodes whereat information is to be passed back from the solver to its driver. Typically $N$ is selected to be dense enough so that a user can create a suitable graphical representation of the result. On the other hand, the local time step $h$ that appears in formul\ae\ (\ref{startUp1stOrderODEs}--\ref{pairedMethods}) is dynamically sized to maintain accuracy. If error $\varepsilon$ becomes too large, then $h$ is reduced, and if it becomes too small, then $h$ is increased. If there is to be a local time step of size $h$ that adjusts dynamically, then the first question one must answer is: What is an acceptable value for $h$ to start an integration with? It has been your professor's experience that the user is not as reliable in this regard as he\slash she would like to believe. The following automated procedure has been found to be useful in this regard \cite{FreedIskovitz96}. 
From the initial conditions, compute \begin{displaymath} h_0 = \frac{ \| \mathbf{x}_0 \| }{\| \mathbf{v}_0 \|} \qquad \text{constrained so that} \qquad \frac{\Delta t}{100} < h_0 < \frac{\Delta t}{10} \end{displaymath} and with this initial estimate for the step size, take an Euler step forward $\mathbf{x}^p_1 = \mathbf{x}_0 + h_0 \mathbf{v}_0$, evaluate $\mathbf{v}^p_1 = \mathbf{v} (h_0, \mathbf{x}^p_1)$, follow with a trapezoidal correction $\mathbf{x}_1 = \mathbf{x}_0 + \tfrac{1}{2} h_0 (\mathbf{v}^p_1 + \mathbf{v}_0)$, and re-evaluate $\mathbf{v}_1 = \mathbf{v} (h_0, \mathbf{x}_1)$. At this juncture, one can get an improved estimate for the initial step size via \begin{displaymath} h_1 = 2 \left| \frac{\| \mathbf{x}_1 \| - \| \mathbf{x}_0 \|} {\| \mathbf{v}_1 \| + \| \mathbf{v}_0 \|} \right| \qquad \text{subject to} \qquad \frac{\Delta t}{1000} < h_1 . \end{displaymath} With this information, one can calculate the number of steps $S$ needed by a local solver to traverse the first step belonging to the global solver whose step size is $\Delta t$; specifically, \begin{equation} S = \max \bigl( 2 , \mathrm{round} ( \Delta t / h_1 ) \bigr) \qquad \text{with} \qquad h = \Delta t / S \label{initialStepSize} \end{equation} and a reasonable value for the initial, local, step size $h$ is now in hand. As a minimum, there are to be two local steps taken for each global step traversed. From here on a discrete PI controller (originally derived from control theory as a senior engineering project at Lund Institute of Technology in Lund, Sweden \cite{Gustafssonetal88}) is employed to automatically manage the size of $h$. The goal of this PI controller is to allow a solution to traverse its path with maximum efficiency, all the while maintaining a specified tolerance on error. The P in PI stands for proportional feedback and accounts for current error, while the I in PI stands for integral feedback and accounts for an accumulation of error. 
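Before turning to the controller itself, the start-up heuristic just described might be coded as follows (a sketch; the function name and the choice to return both $h$ and $S$ are assumptions):

```python
import numpy as np

def initial_step_size(v_field, x0, dt_global):
    """Estimate the initial local step size h per the heuristic above.

    v_field(t, x) evaluates dx/dt; dt_global is the global step, Delta t."""
    v0 = v_field(0.0, x0)
    # First guess, constrained into (dt/100, dt/10):
    h0 = np.linalg.norm(x0) / np.linalg.norm(v0)
    h0 = min(max(h0, dt_global / 100.0), dt_global / 10.0)
    # One Euler-predict / trapezoidal-correct pass:
    xp1 = x0 + h0 * v0
    vp1 = v_field(h0, xp1)
    x1 = x0 + 0.5 * h0 * (vp1 + v0)
    v1 = v_field(h0, x1)
    # Improved estimate, bounded below by dt/1000:
    h1 = 2.0 * abs((np.linalg.norm(x1) - np.linalg.norm(x0))
                   / (np.linalg.norm(v1) + np.linalg.norm(v0)))
    h1 = max(h1, dt_global / 1000.0)
    # At least two local steps per global step, Eq. (initialStepSize):
    S = max(2, round(dt_global / h1))
    return dt_global / S, S
```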
The simplest controller is an I controller. For controlling step size, this I controller adjusts $h$ via \cite{Soderlind02} \begin{displaymath} C = \frac{h_{n+1}}{h_n} = \left( \frac{tol}{\varepsilon_{n+1}} \right)^{k_I} \end{displaymath} wherein $k_I$ designates gain in the integral feedback loop, while $tol$ is the maximum truncation error to be tolerated over a local step of integration. Such controllers have been used by the numerical analysis community for a long time and are known to be problematic \cite[pp.~31--35]{HairerWanner91}. Controls engineers know that PI controllers are superior to I controllers, and for the task of managing $h$, in 1988 a team of students at Lund University derived \cite{Gustafssonetal88} \begin{displaymath} C = \frac{h_{n+1}}{h_n} = \left( \frac{tol}{\varepsilon_{n+1}} \right)^{k_I+k_P} \left( \frac{\varepsilon_n \vphantom{l}}{tol} \right)^{k_P} \end{displaymath} wherein $k_P$ designates gain in the proportional feedback loop. This PI controller has revolutionized how commercial-grade ODE solvers are built today. A strategy for managing error by dynamically adjusting the size of time step $h$ can now be put forward. To do so, it is instructive to introduce a second counter $s$ that decrements from $S$ down to 0. It designates the number of steps left to go before reaching the node located at the end of the global step that the integrator is currently traversing. $S$ needs to be redetermined each time the algorithm advances to its next global step. If there is a discontinuity in step size $h$ across this interface, then the history variables will need to be adjusted using, e.g., a Hermite interpolator \cite{Shampine85}. A suitable algorithm for controlling truncation error by managing step size is described below. \medskip Initialize the controller by setting $\varepsilon_n = 1$. 
\begin{enumerate} \item After completing an integration for displacement $\mathbf{x}_{n+1}$, and possibly velocity $\mathbf{v}_{n+1}$, via any of the integrators given in Eqs.~(\ref{startUp1stOrderODEs}--\ref{pairedMethods}), calculate an estimate for its local truncation error $\varepsilon_{n+1}$ via Eq.~(\ref{truncationError}). \item Calculate a scaling factor $C$ that comes from the controller \begin{displaymath} C = \begin{cases} \left( \frac{\mathit{tol}}{\varepsilon_{n+1}} \right)^{0.7/(p+1)} \left( \frac{\varepsilon_n}{\mathit{tol}} \right)^{0.4/(p+1)} & \text{if} \; \varepsilon_{n} < \mathit{tol} \; \text{and} \; \varepsilon_{n+1} < \mathit{tol} \\ \left( \frac{\mathit{tol}} {\varepsilon_{n+1}} \right)^{1/p} & \text{otherwise} \end{cases} \end{displaymath} wherein \textit{tol\/} is the truncation error that the controller targets and $p$ is the order of the method, e.g., it appears as $\mathcal{O} (h^{p+1})$ in formul\ae\ (\ref{startUp1stOrderODEs}--\ref{pairedMethods}). \item If $C > 2$, $s > 3$, and $s$ is even, then double the step size $h = 2h$, halve the steps to go $s = s / 2$, and continue on to the next step. \item If $1 \leq C \leq 2$, then maintain the step size, decrement the counter, and continue on to the next step. \item If $C < 1$ yet $\varepsilon_{n+1} \leq \mathit{tol}$, then halve the step size $h = h / 2$, double the steps to go $s = 2 s$, and continue on to the next step. \item Otherwise $C < 1$ and $\varepsilon_{n+1} > \mathit{tol}$, so halve the step size $h = h / 2$, double the steps to go $s = 2 s$, and \textit{repeat\/} the integration step from $n$ to $n \! + \! 1$. \end{enumerate} For the I controller, the gain on feedback has been set at $k_I = 1/p$. 
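The enumerated logic can be sketched as a single function (a sketch; the signature and the accept\slash repeat flag are assumptions, and the history adjustment on halving, handled via a Hermite interpolator, is omitted here):

```python
def control_step(err_prev, err_next, h, s, p, tol):
    """One pass of the step-size control logic enumerated above.

    err_prev, err_next -- truncation-error estimates at steps n and n+1
    h, s               -- current local step size and steps-to-go
    p                  -- order of the method
    Returns (h, s, accept); accept == False means repeat the step."""
    if err_prev < tol and err_next < tol:  # PI controller
        C = (tol / err_next) ** (0.7 / (p + 1)) * (err_prev / tol) ** (0.4 / (p + 1))
    else:                                  # fall back on the I controller
        C = (tol / err_next) ** (1.0 / p)
    if C > 2.0 and s > 3 and s % 2 == 0:
        return 2.0 * h, s // 2, True       # double the step size
    if C >= 1.0:
        # maintain the step size (assumption: a C > 2 that cannot be
        # doubled, because s is small or odd, is treated as maintain)
        return h, s - 1, True
    if err_next <= tol:
        return 0.5 * h, 2 * s, True        # halve, but accept the step
    return 0.5 * h, 2 * s, False           # halve and repeat the step
```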
For the PI controller, the gain on I feedback has been set at $k_I = 0.3 / (p+1)$ while the gain on P feedback has been set at $k_P = 0.4 / (p+1)$, wherein factors 0.3 and 0.4 have been selected based upon the developers' experience in working with their controller \cite{Gustafssonetal88,Soderlind02}. By only admitting either a doubling or a halving of the current step size, a built-in mechanism is in play that mitigates the likelihood that wind-up or wind-down instabilities will happen in practice. Whenever a step is to be halved, the displacement at a half step can be approximated via \begin{equation} \mathbf{x}_{n-\scriptfrac{1}{2}} = \tfrac{1}{2} ( \mathbf{x}_n + \mathbf{x}_{n-1} ) - \tfrac{1}{8} \, h ( \mathbf{v}_n - \mathbf{v}_{n-1} ) + \mathcal{O} (h^4) + \mathcal{O} (h^{p+1}) \label{stepHalving} \end{equation} which is a cubic Hermite interpolant \cite{Shampine85} whose accuracy is $\mathcal{O}(h^4)$, with $\mathcal{O}(h^{p+1})$ designating the accuracy of the numerical method used to approximate displacements $\mathbf{x}_{n-1}$ and $\mathbf{x}_n$. For solvers (\ref{pairedStartUp} \& \ref{pairedMethods}), a like interpolation for velocities $\mathbf{v}_{n-1}$ and $\mathbf{v}_n$ will be required, too. As a closing comment, many PECE methods are often implemented as $\text{PE}(\text{CE})^m$ methods with the correct\slash evaluate steps being repeated $m$ times, or until convergence. It has been your professor's experience that PECE, i.e., $m=1$, is usually sufficient whenever the step size $h$ is properly controlled to keep the truncation error in check, provided a reasonable assignment for permissible error has been made, typically $tol \approx 10^{-(p+1)}$. \section{Examples} Examples are provided to illustrate the numerical methods put forward. 
A chemical kinetics problem, popular in the numerical analysis literature \cite[pp.~115--116]{Haireretal93}, is considered for testing the two-step PECE method of Eqs.~(\ref{startUp1stOrderODEs} \& \ref{1stOrderODEs}) used to solve first-order systems of ODEs, including stiff ODEs. The vibrational response of a formula SAE race car is simulated to illustrate a problem belonging to the class of solvers that are appropriate for applications of Newton's Second Law of motion. \subsection{Brusselator} \label{Sec:Brusselator} The Brusselator describes a chemical kinetics problem where six substances are being mixed, and whose evolution through time is characterized by two coupled differential equations in two unknowns $y_1$ and $y_2$, viz., \begin{subequations} \begin{align} \dot{y}_1 & = A + y^2_1 y_2 - (B + 1) y_1 \notag \\ \dot{y}_2 & = B y_1 - y^2_1 y_2 \notag \end{align} \end{subequations} whose Jacobian, evaluated at the steady state $(A, B/A)$, has eigenvalues \begin{displaymath} \lambda = \frac{1}{2} \left( - \left( 1 - B + A^2 \right) \pm \sqrt{( 1 - B + A^2)^2 - 4 A^2} \right) \end{displaymath} where parameters $A$ and $B$ are, to an extent, at the disposal of a chemist. This system exhibits very different behaviors for different values of its parameters. For values $A=1$ and $B=3$ (see Fig.~\ref{fig:brusselator1}) the solution converges to a limit cycle that orbits the (unstable) steady state located at coordinate (1,~3). This limit cycle does not depend upon the initial condition (IC), provided the IC does not reside at the steady state. \begin{figure} {\par\centering \resizebox*{0.6\textwidth}{0.3\textheight} {\includegraphics{limitCycle.png}} \par} \caption{A concentration plot for a Brusselator response with $A=1$ and $B=3$. Solutions are presented for several initial conditions. All solutions approach a limit cycle.} \label{fig:brusselator1} \end{figure} The behavior is very different for parameters $A=100$ and $B=3$. 
Here the solutions rapidly settle in on asymptotic responses (see Fig.~\ref{fig:brusselator2}). Figures \ref{fig:brusselator1} \& \ref{fig:brusselator2} came from the same system of equations, just with different parameters. \begin{figure} {\par\centering \resizebox*{0.9\textwidth}{0.225\textheight} {\includegraphics{stiffY1.png} \includegraphics{stiffY2.png}} \par} \caption{Brusselator response versus time with $A=100$ and $B=3$ for several initial conditions. The response curves have been normalized against their initial values.} \label{fig:brusselator2} \end{figure} The ability of the PI controller discussed in \S\ref{Sec:PI} to manage the local truncation error by adjusting the local step size is illustrated in Fig.~\ref{fig:brusselator3}. Statistics gathered from these runs are reported in Table~\ref{Table:Brusselator}. \begin{figure} {\par\centering \resizebox*{0.9\textwidth}{0.225\textheight} {\includegraphics{limitCycleError.png} \includegraphics{stiffError.png}} \par} \caption{Local truncation error versus time for both Brusselator problems. The error tolerance was set at $10^{-4}$, which is the upper horizontal axis in both plots. The sawtooth response in the right plot was caused by the step size being doubled at those locations. 
Oscillations of error in the left plot arose from the PI controller navigating corners.} \label{fig:brusselator3} \end{figure} \begin{table} \small \begin{center} \begin{tabular}{|c|ccc|ccc|} \hline Initial & \multicolumn{3}{c|}{$A=1$, $B=3$, $t_{\mathrm{end}}=20$~s} & \multicolumn{3}{c|}{$A=100$, $B=3$, $t_{\mathrm{end}}=0.1$~s} \\ \cline{2-7} Condition & \#steps & \#halved & \#doubled & \#steps & \#halved & \#doubled \\ \hline (0.1, 0.1) & 1186 & 6 & 9 & 353 & 0 & 6 \\ (1.5, 3.0) & 1592 & 6 & 9 & 362 & 0 & 4 \\ (2.0, 0.5) & 1332 & 7 & 10 & 467 & 0 & 3 \\ (3.25, 2.5) & 1451 & 6 & 12 & 414 & 0 & 5 \\ \hline \end{tabular} \end{center} \normalsize \caption{Runtime statistics for the results plotted in Figs.~\ref{fig:brusselator1}--\ref{fig:brusselator3}. There were 200 global steps for the limit cycle analyses, and 100 global steps for the stiff analyses. In none of these numerical experiments did the integrator have to restart because of excessive error.} \label{Table:Brusselator} \end{table} The solutions in Fig.~\ref{fig:brusselator1} have an eigenvalue ratio of $| \lambda_{\max} | / | \lambda_{\min} | = 2.6$, whereas the solutions in Fig.~\ref{fig:brusselator2} have a ratio of $| \lambda_{\max} | / | \lambda_{\min} | = 9{,}602$. Although there is no accepted `definition' for stiffness in the numerical analysis literature, some rules of thumb exist. Probably the simplest to apply is the ratio $\Lambda = | \lambda_{\max} | / | \lambda_{\min} |$, with $\Lambda \approx 10$ being the boundary. Systems of ODEs whose ratio of extreme eigenvalues is less than about 10 do not exhibit stiffness, whereas systems of ODEs whose ratio $\Lambda$ exceeds 10, and certainly 100, do exhibit stiffness. Explicit methods, e.g., the predictors presented herein, when used alone, do not fare well when attempting to acquire solutions from systems of ODEs that are mathematically stiff. Implicit methods are needed, e.g., the correctors presented herein. 
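The stiffness measure $\Lambda$ just described can be checked directly from the Brusselator's Jacobian evaluated at its steady state $(y_1, y_2) = (A, B/A)$. This is a sketch, not part of the paper: it uses the standard Brusselator form with the $(B+1)y_1$ term, the form consistent with the eigenvalue formula quoted earlier, and its fixed-point ratios differ somewhat from the along-trajectory values cited for the figures.

```python
import numpy as np

def brusselator_rhs(t, y, A, B):
    """Right-hand side of the Brusselator equations."""
    y1, y2 = y
    return np.array([A + y1**2 * y2 - (B + 1.0) * y1,
                     B * y1 - y1**2 * y2])

def stiffness_ratio(A, B):
    """Lambda = |lambda_max| / |lambda_min| of the Jacobian at (A, B/A)."""
    J = np.array([[B - 1.0, A**2],
                  [-B, -A**2]])
    lam = np.abs(np.linalg.eigvals(J))
    return lam.max() / lam.min()
```

For $A=1$, $B=3$ the fixed-point eigenvalues form a complex pair of equal modulus, so $\Lambda = 1$ there (non-stiff), while $A=100$, $B=3$ yields two widely separated real eigenvalues, well past the $\Lambda \approx 10$ boundary.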
The solutions graphed in Fig.~\ref{fig:brusselator1} are for a non-stiff problem, while the solutions graphed in Fig.~\ref{fig:brusselator2} are for a stiff problem. The implicit two-step method of Eqs.~(\ref{startUp1stOrderODEs} \& \ref{1stOrderODEs}) is a viable integrator for solving stiff systems of ODEs of first order; in contrast, explicit Runge-Kutta methods are not suitable. \subsection{Vibrational Response of a Vehicle} In this example we consider the vibrational response of a car as it travels down a roadway. This response is excited by an unevenness in the roadway, accentuated by the speed of the vehicle. This simulation determines the heave $z$, pitch $\theta$, and roll $\phi$ of a vehicle at its center of gravity excited by its traversal over a roadway. There are three degrees of freedom for this problem with the position $\mathbf{x}$, velocity $\mathbf{v}$, and acceleration $\mathbf{a}$ vectors taking on forms of \begin{displaymath} \mathbf{x} = \left\{ \begin{matrix} z \\ \theta \\ \phi \end{matrix} \right\} , \qquad \mathbf{v} = \left\{ \begin{matrix} \dot{z} \\ \dot{\theta} \\ \dot{\phi} \end{matrix} \right\} , \qquad \mathbf{a} = \left\{ \begin{matrix} \ddot{z} \\ \ddot{\theta} \\ \ddot{\phi} \end{matrix} \right\} \end{displaymath} wherein $\dot{z} = \partial{z} / \partial t$, $\ddot{z} = \partial^2 z / \partial t^2$, etc. In our application of this simulator, we consider a formula SAE race car like the one our seniors design, fabricate and compete with every year in a capstone project here at Texas~A\mbox{\&}M. There are three matrices that establish the vibrational characteristics of a vehicle. There is a mass matrix \begin{equation*} \mathbf{M} = \begin{bmatrix} m & 0 & 0 \\ 0 & J_{\theta} & 0 \\ 0 & 0 & J_{\phi} \end{bmatrix} \end{equation*} where $m$ is the collective mass of the car and its driver, $J_{\theta}$ is the moment of inertia resisting pitching motions, and $J_{\phi}$ is the moment of inertia resisting rolling motions. 
There is also a damping matrix \begin{multline} \mathbf{C} = \left[ \begin{matrix} c_1 + c_2 + c_3 + c_4 \\ -(c_1 + c_2) \ell_f + (c_3 + c_4) \ell_r \\ -(c_1 - c_2) \rho_f + (c_3 - c_4) \rho_r \end{matrix} \right. \notag \\ \left. \begin{matrix} -(c_1 + c_2) \ell_f + (c_3 + c_4) \ell_r & -(c_1 - c_2) \rho_f + (c_3 - c_4) \rho_r \\ (c_1 + c_2) \ell_f^2 + (c_3 + c_4) \ell_r^2 & (c_1 - c_2) \ell_f \rho_f + (c_3 - c_4) \ell_r \rho_r \\ (c_1 - c_2) \ell_f \rho_f + (c_3 - c_4) \ell_r \rho_r & (c_1 + c_2) \rho_f^2 + (c_3 + c_4) \rho_r^2 \end{matrix} \right] \notag \end{multline} and a like stiffness matrix \begin{multline} \mathbf{K} = \left[ \begin{matrix} k_1 + k_2 + k_3 + k_4 \\ -(k_1 + k_2) \ell_f + (k_3 + k_4) \ell_r \\ -(k_1 - k_2) \rho_f + (k_3 - k_4) \rho_r \end{matrix} \right. \notag \\ \left. \begin{matrix} -(k_1 + k_2) \ell_f + (k_3 + k_4) \ell_r & -(k_1 - k_2) \rho_f + (k_3 - k_4) \rho_r \\ (k_1 + k_2) \ell_f^2 + (k_3 + k_4) \ell_r^2 & (k_1 - k_2) \ell_f \rho_f + (k_3 - k_4) \ell_r \rho_r \\ (k_1 - k_2) \ell_f \rho_f + (k_3 - k_4) \ell_r \rho_r & (k_1 + k_2) \rho_f^2 + (k_3 + k_4) \rho_r^2 \end{matrix} \right] \notag \end{multline} wherein $c_1$ and $k_1$ are the effective damping coefficient and spring stiffness for the suspension located at the driver's front, $c_2$ and $k_2$ are located at the passenger's front, $c_3$ and $k_3$ are located at the passenger's rear, and $c_4$ and $k_4$ are located at the driver's rear. Lengths $\ell_f$ and $\ell_r$ measure distance from the front and rear axles to the center of gravity (CG) for the car and driver with their sum being the wheelbase. Lengths $\rho_f$ and $\rho_r$ measure distance from the centerline (CL) of the vehicle out to the center of a tire patch along the front and rear axles, respectively. Typically, $\rho_f > \rho_r$ to allow a driver to take a tighter\slash shorter path into a corner during competition. Interacting with these three matrices is a vector that establishes how a roadway excites a vehicle. 
It is described by \begin{displaymath} \mathbf{f} = \left\{ \begin{matrix} w - c_1 \dot{R}_1 - c_2 \dot{R}_2 - c_3 \dot{R}_3 - c_4 \dot{R}_4 - k_1 R_1 - k_2 R_2 - k_3 R_3 - k_4 R_4 \\ \bigl( c_1 \dot{R}_1 + c_2 \dot{R}_2 + k_1 R_1 + k_2 R_2 \bigr) \ell_f - \bigl( c_3 \dot{R}_3 + c_4 \dot{R}_4 + k_3 R_3 + k_4 R_4 \bigr) \ell_r \\ \bigl( c_1 \dot{R}_1 - c_2 \dot{R}_2 + k_1 R_1 - k_2 R_2 \bigr) \rho_f - \bigl( c_3 \dot{R}_3 - c_4 \dot{R}_4 + k_3 R_3 - k_4 R_4 \bigr) \rho_r \end{matrix} \right\} \end{displaymath} where $w$ is the weight (mass times gravity) of the car and its driver. Functions $R(t)$ and $\dot{R}(t)$ are for displacement and velocity occurring normal to a roadway, measured from smooth. Roadway velocity is proportional to vehicle speed. It is through these functions that time enters into a solution. $R_i$ and $\dot{R}_i$, $i=1,2,3,4$, follow the same numbering scheme as the damping coefficients and spring stiffnesses. To apply our numerical algorithm (\ref{pairedStartUp} \& \ref{pairedMethods}), one simply computes \begin{displaymath} \mathbf{a} (t , \mathbf{x} , \mathbf{v}) = \mathbf{M}^{-1} \cdot \bigl( \mathbf{f}(t) - \mathbf{C} \cdot \mathbf{v} - \mathbf{K} \cdot \mathbf{x} \bigr) \end{displaymath} and assigns a suitable pair of ICs: one for displacement, and the other for velocity, as they pertain to the motion of a vehicle at its center of gravity. Initial conditions can be cast in various ways. The simplest ICs come from either starting at rest, or starting at a constant velocity on a smooth roadway. Either way, one arrives at \begin{displaymath} \mathbf{x}_0 = \mathbf{K}^{-1} \cdot \mathbf{f}_0 \quad \text{and} \quad \mathbf{v}_0 = \left\{ \begin{matrix} 0 \\ 0 \\ 0 \end{matrix} \right\} \quad \text{wherein} \quad \mathbf{f}_0 = \left\{ \begin{matrix} w \\ 0 \\ 0 \end{matrix} \right\} \end{displaymath} because $R_i = 0$ and $\dot{R}_i = 0$, $i=1,2,3,4$, in these two cases. 
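The acceleration evaluation above amounts to one linear solve per call. A sketch follows; the helper name is an assumption, and a trivial single-DOF system stands in for the full $3 \times 3$ matrices:

```python
import numpy as np

def make_accel(M, C, K, f):
    """Wrap a(t, x, v) = M^{-1} (f(t) - C v - K x) for the PECE solver."""
    def accel(t, x, v):
        return np.linalg.solve(M, f(t) - C @ v - K @ x)
    return accel
```

The static initial condition $\mathbf{x}_0 = \mathbf{K}^{-1} \mathbf{f}_0$ is computed the same way, e.g.\ \texttt{x0 = np.linalg.solve(K, f0)}.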
Remember, velocity $\mathbf{v}_0$ is not the speed of your car; rather, it is a change in vehicle motion with respect to its center of gravity. To illustrate the simulator, a roadway was constructed with five gradual waves at a wavelength equal to the wheelbase. To excite roll, the passenger side lagged out of phase with the driver side by a tenth of the wheelbase. Vehicle speed was set at 10~mph. There were 500 global nodes so that the density of output would produce nice graphs; traversing them required 5,422 local integration steps, 8 of which were doubled. No steps were halved, and no steps had to be restarted. The responses are plotted in Fig.~\ref{fig:fsae1}, while the errors are reported in Fig.~\ref{fig:fsae2}. It is apparent that the integrator (\ref{pairedStartUp} \& \ref{pairedMethods}) performs to expectations, and that the PI controller of \S\ref{Sec:PI} does an admirable job of managing the local truncation error. \begin{figure} {\par\centering \resizebox*{0.9\textwidth}{0.225\textheight} {\includegraphics{heave.png} \includegraphics{pitchNroll.png}} \par} \caption{Heave $z$ is plotted against time in the left graphic, while pitch $\theta$ and roll $\phi$ are plotted against time in the right graphic. Heave and pitch have static offsets, whereas roll does not.} \label{fig:fsae1} \end{figure} \begin{figure} {\par\centering \resizebox*{0.6\textwidth}{0.3\textheight} {\includegraphics{fsaeError.png}} \par} \caption{Local truncation error versus time for the FSAE race car driving over a sequence of bumps. The error tolerance was set at $10^{-4}$, which is the upper horizontal axis of the plot. 
} \label{fig:fsae2} \end{figure} Vitals for the car that was simulated include: $m$ = 14~slugs ($w$ = 450~lbs), $J_{\theta} = 45 \text{ ft.lbs/(rad/sec}^2)$, $J_{\phi} = 20 \text{ ft.lbs/(rad/sec}^2)$, $\ell_f$ = 3.2~ft, $\ell_r$ = 1.8~ft, $\rho_f$ = 2.1~ft, $\rho_r$ = 2~ft, the front dampers were set at 10 lbs/(in/sec) and the rears were set at 15 lbs/(in/sec), while the front springs had stiffnesses of 150 lbs/in and the rears were selected at 300~lbs/in. These are reminiscent of a typical FSAE race car. \section{Summary} Two-step methods have been constructed that aspire to the structure of the well-known BDF2 formula. A predictor is derived for each case allowing PECE solution schemes to be put forward. The first method (\ref{startUp1stOrderODEs} \& \ref{1stOrderODEs}) that was introduced solves the classic problem where $\partial \mathbf{x} / \partial t = \mathbf{v} (t , \mathbf{x})$ subject to an IC of $\mathbf{x}(0) = \mathbf{x}_0$. The second method (\ref{startup} \& \ref{PECE}) introduced solves a fairly atypical case where functions for both velocity $\mathbf{v} ( t , \mathbf{x})$ and acceleration $\mathbf{a} (t , \mathbf{x} , \mathbf{v} )$ are given and a solution for the displacement $\mathbf{x}$ is sought, subject to an initial condition $\mathbf{x}(0) = \mathbf{x}_0$. And the third method (\ref{pairedStartUp} \& \ref{pairedMethods}) melds these two algorithms to construct a solver for the case where acceleration is given via a function $\mathbf{a} ( t , \mathbf{x} , \mathbf{v})$ from which solutions for both velocity $\mathbf{v}$ and displacement $\mathbf{x}$ are sought, subject to initial conditions of $\mathbf{x}(0) = \mathbf{x}_0$ and $\mathbf{v}(0, \mathbf{x}_0) = \mathbf{v}_0$. A PI controller is used to manage the local truncation error by dynamically adjusting the size of the local time step. All integrators have been illustrated using non-trivial example problems. 
\newpage \medskip\noindent\textbf{Acknowledgment}\medskip The author is grateful to Prof.\ Kai Diethelm, Institut Computational Mathematics, Technische Universit\"at Braunschweig, Germany, for critiquing this document and for providing instructive comments. \medskip\noindent\textbf{References}\medskip \small \bibliographystyle{spbasic}
\section{Introduction} \label{sec:adv_comp:intro} Recent advances in machine learning and deep neural networks have enabled researchers to solve multiple important practical problems, such as image, video, and text classification. However, most existing machine learning classifiers are highly vulnerable to adversarial examples~\cite{biggio2013evasion,Szegedy-ICLR2014,Goodfellow-2015-adversarial,Papernot-2016-TransferabilityStudy}. An adversarial example is a sample of input data which has been modified very slightly in a way that is intended to cause a machine learning classifier to misclassify it. In many cases, these modifications can be so subtle that a human observer does not even notice the modification at all, yet the classifier still makes a mistake. Adversarial examples pose security concerns because they could be used to perform an attack on machine learning systems, even if the adversary has no access to the underlying model. Moreover, it was discovered~\cite{PhysicalAdversarialExamples,Sharif16AdvML} that it is possible to perform adversarial attacks even on a machine learning system which operates in the physical world and perceives input through inaccurate sensors, instead of reading precise digital data. In the long run, machine learning and AI systems will become more powerful. Machine learning security vulnerabilities similar to adversarial examples could be used to compromise and control highly powerful AIs. Thus, robustness to adversarial examples is an important part of the AI safety problem. Research on adversarial attacks and defenses is difficult for many reasons. One reason is that the evaluation of proposed attacks or proposed defenses is not straightforward. Traditional machine learning, with its assumption of a training set and test set drawn i.i.d., is straightforward to evaluate by measuring the loss on the test set. 
For adversarial machine learning, defenders must contend with an open-ended problem, in which an attacker will send inputs from an unknown distribution. It is not sufficient to benchmark a defense against a single attack or even a suite of attacks prepared ahead of time by the researcher proposing the defense. Even if the defense performs well in such an experiment, it may be defeated by a new attack that works in a way the defender did not anticipate. Ideally, a defense would be provably sound, but machine learning in general and deep neural networks in particular are difficult to analyze theoretically. A competition therefore gives a useful intermediate form of evaluation: a defense is pitted against attacks built by independent teams, with both the defense team and the attack team incentivized to win. While such an evaluation is not as conclusive as a theoretical proof, it is a much better simulation of a real-life security scenario than an evaluation of a defense carried out by the proposer of the defense. In this report, we describe the NIPS 2017 competition on adversarial attack and defense, including an overview of the key research problems involving adversarial examples (\secref{sec:adv_comp:adv_examples}), the structure and organization of the competition (\secref{sec:adv_comp:competition}), and several of the methods developed by the top-placing competitors (\secref{sec:adv_comp:submissions}). \section{Adversarial examples} \label{sec:adv_comp:adv_examples} Adversarial examples are inputs to machine learning models that have been intentionally optimized to cause the model to make a mistake. We call an input example a ``clean example'' if it is a naturally occurring example, such as a photograph from the ImageNet dataset. 
If an adversary has modified an example with the intention of causing it to be misclassified, we call it an ``adversarial example.'' Of course, the adversary may not necessarily succeed; a model may still classify the adversarial example correctly. We can measure the accuracy or the error rate of different models on a particular set of adversarial examples. \subsection{Common attack scenarios} Scenarios of possible adversarial attacks can be categorized along different dimensions. First of all, attacks can be classified by the type of outcome the adversary desires: \begin{itemize} \item \textbf{Non-targeted attack.} In this case, the adversary's goal is to cause the classifier to predict any incorrect label. The specific incorrect label does not matter. \item \textbf{Targeted attack.} In this case, the adversary aims to change the classifier's prediction to some specific target class. \end{itemize} Second, attack scenarios can be classified by the amount of knowledge the adversary has about the model: \begin{itemize} \item \textbf{White box.} In the white box scenario, the adversary has full knowledge of the model, including the model type, model architecture, and the values of all parameters and trainable weights. \item \textbf{Black box with probing.} In this scenario, the adversary does not know very much about the model, but can probe or query it, i.e. feed some inputs and observe the outputs. There are many variants of this scenario: the adversary may know the architecture but not the parameters, or may not know the architecture at all; the adversary may be able to observe the output probabilities for each class, or may only be able to observe the choice of the most likely class. \item \textbf{Black box without probing.} In this scenario, the adversary has limited or no knowledge about the model under attack and is not allowed to probe or query the model while constructing adversarial examples. 
In this case, the attacker must construct adversarial examples that fool most machine learning models. \end{itemize} Third, attacks can be classified by the way the adversary feeds data into the model: \begin{itemize} \item \textbf{Digital attack.} In this case, the adversary has direct access to the actual data fed into the model. In other words, the adversary can choose specific {\tt float32} values as input for the model. In a real world setting, this might occur when an attacker uploads a PNG file to a web service, and intentionally designs the file to be read incorrectly. For example, spam content might be posted on social media, using adversarial perturbations of the image file to evade the spam detector. \item \textbf{Physical attack.} In the case of an attack in the physical world, the adversary does not have direct access to the digital representation provided to the model. Instead, the model is fed input obtained by sensors such as a camera or microphone. The adversary is able to place objects in the physical environment seen by the camera or produce sounds heard by the microphone. The exact digital representation obtained by the sensors will change based on factors like the camera angle, the distance to the microphone, ambient light or sound in the environment, etc. This means the attacker has less precise control over the input provided to the machine learning model. \end{itemize} \subsection{Attack methods} Most of the attacks discussed in the literature are geared toward the white-box digital case. \subsubsection{White box digital attacks} \runinhead{L-BFGS.} One of the first methods to find adversarial examples for neural networks was proposed in~\cite{Szegedy-ICLR2014}. 
The idea of this method is to solve the following optimization problem: \begin{equation} \left\|x^{adv} - x\right\|_2 \rightarrow \text{minimum}, \quad \text{s.t.} \quad f(x^{adv})=y_{target}, \quad x^{adv} \in [0, 1]^m \end{equation} The authors proposed to use the L-BFGS optimization method to solve this problem, hence the name of the attack. One of the main drawbacks of this method is that it is quite slow. The method is not designed to counteract defenses such as reducing the number of bits used to store each pixel. Instead, the method is designed to find the smallest possible attack perturbation. This means the method can sometimes be defeated merely by degrading the image quality, for example, by rounding to an 8-bit representation of each pixel. \runinhead{Fast gradient sign method (FGSM).} To test the idea that adversarial examples can be found using only a linear approximation of the target model, the authors of~\cite{Goodfellow-2015-adversarial} introduced the {\em fast gradient sign method} (FGSM). FGSM works by linearizing the loss function in an $L_{\infty}$ neighbourhood of a clean image and finding the exact maximum of the linearized function via the following closed-form expression: \begin{equation} x^{adv} = x + \epsilon \sign \bigl( \nabla_x J(x, y_{true}) \bigr) \end{equation} \iffalse Ian commented the next part out just because it may be more detail than needed. OK to bring it back in, just seems like it could be longer than people want to read Also this method provides only non-targeted attack. Another problem is that this method uses true labels, which may be not available to attacker. Moreover the fact that method uses true labels may lead to ``label leaking'' phenomenon~\cite{Kurakin-AdversarialMlAtScale} when it actually increases classifier accuracy of adversarially trained network, instead of confusing classifier. Problem with true labels could be solved in several ways. One way is to use network predictions $y_{pred} = arg\,max f(x)$ instead of $y_{true}$. 
Another alternative is to change the method to maximize probability $p(y_{non-true}|x)$ on some non-true class instead of maximizing value of loss function~\cite{Kurakin-PhysicalAdversarialExamples}, in such case it will lead to following formula: \begin{equation} x^{adv} = x - \epsilon \sign \bigl( \nabla_x J(x, y_{non-true}) \bigr) \end{equation} You can either pick $y_{non-true}$ randomly or choose least likely network prediction. All of this variations of FGSM perform roughly similar~\cite{Kurakin-AdversarialMlAtScale}. \fi \runinhead{Iterative attacks.} The L-BFGS attack has a high success rate and high computational cost. The FGSM attack has a low success rate (especially when the defender anticipates it) and low computational cost. A nice tradeoff can be achieved by running iterative optimization algorithms that are specialized to reach a solution quickly, after a small number (e.g. 40) of iterations. One strategy for designing such fast optimization algorithms is to take the FGSM (which can often reach an acceptable solution in one very large step) and run it for several steps but with a smaller step size. Because each FGSM step is designed to go all the way to the edge of a small norm ball surrounding the starting point for the step, the method makes rapid progress even when gradients are small. 
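The sign-gradient step and its clipped small-step iteration described above can be sketched as follows; a toy softmax-regression model stands in for a network here, and all helper names are illustrative assumptions rather than any reference implementation:

```python
import numpy as np

def fgsm_step(x, y_true, W, b, step):
    """One sign-gradient step x + step * sign(grad_x J(x, y_true)),
    for a toy softmax-regression model with weights W and bias b."""
    logits = W @ x + b
    p = np.exp(logits - logits.max())
    p /= p.sum()                      # softmax probabilities
    g_logits = p.copy()
    g_logits[y_true] -= 1.0           # dJ/dlogits for cross-entropy loss
    grad_x = W.T @ g_logits           # chain rule back to the input
    return np.clip(x + step * np.sign(grad_x), 0.0, 1.0)

def iterated_fgsm(x, y_true, W, b, eps, step=0.01, n_steps=10):
    """Run FGSM with a small step size, clipping back into the
    L-infinity eps-ball around the clean input after every step."""
    x_adv = x.copy()
    for _ in range(n_steps):
        x_adv = fgsm_step(x_adv, y_true, W, b, step)
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv
```

For a real network the analytic gradient above would be replaced by back-propagation, but the sign step and the per-iteration clipping are the same.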
This leads to the \textbf{Basic Iterative Method (BIM)} introduced in~\cite{Kurakin-PhysicalAdversarialExamples}, also sometimes called \textbf{Iterative FGSM (I-FGSM)}: \begin{equation} \bm{X}^{adv}_{0} = \bm{X}, \quad \bm{X}^{adv}_{N+1} = Clip_{X, \epsilon}\Bigl\{ \bm{X}^{adv}_{N} + \alpha \sign \bigl( \nabla_X J(\bm{X}^{adv}_{N}, y_{true}) \bigr) \Bigr\} \end{equation} The BIM can be easily made into a targeted attack, called the Iterative Target Class Method: \begin{equation} \bm{X}^{adv}_{0} = \bm{X}, \quad \bm{X}^{adv}_{N+1} = Clip_{X, \epsilon}\left\{ \bm{X}^{adv}_{N} - \alpha \sign \left( \nabla_X J(\bm{X}^{adv}_{N}, y_{target}) \right) \right\} \end{equation} It was observed that with a sufficient number of iterations this attack almost always succeeds in hitting the target class~\cite{Kurakin-PhysicalAdversarialExamples}. \runinhead{Madry et al.'s attack.} \cite{MadryPgd2017} showed that the BIM can be significantly improved by starting from a random point within the $\epsilon$ norm ball. This attack is often called \textbf{projected gradient descent}, but this name is somewhat confusing because (1) the term ``projected gradient descent'' already refers to an optimization method more general than the specific use for adversarial attack, and (2) the other attacks use the gradient and perform projection in the same way (the attack is the same as the BIM except for the starting point), so the name does not differentiate this attack from the others. \runinhead{Carlini and Wagner attack (C\&W).} N. Carlini and D. Wagner followed the path of the L-BFGS attack. They designed a loss function which has smaller values on adversarial examples and higher values on clean examples, and searched for adversarial examples by minimizing it~\cite{CarliniWagnerAttack}. But unlike~\cite{Szegedy-ICLR2014}, they used Adam~\cite{kingma2014adam} to solve the optimization problem and dealt with box constraints either by a change of variables (i.e. 
$x = 0.5(\tanh(w) + 1)$) or by projecting the results onto the box constraints after each step. They explored several possible loss functions and achieved the strongest $L_2$ attack with the following: \begin{equation} \|x^{adv} - x\|_p + c \max\bigl(\max_{i \neq Y}f(x^{adv})_{i} - f(x^{adv})_{Y}, -\kappa \bigr) \rightarrow \text{minimum} \end{equation} where $x^{adv}$ is parametrized as $0.5(\tanh(w) + 1)$; $Y$ is a shorter notation for the target class $y_{target}$; and $c$ and $\kappa$ are method parameters. \runinhead{Adversarial transformation networks (ATN).} Another approach, explored in~\cite{ATN2017}, is to train a generative model to craft adversarial examples. This model takes a clean image as input and generates a corresponding adversarial image. One advantage of this approach is that, if the generative model itself is designed to be small, the ATN can generate adversarial examples faster than an explicit optimization algorithm. In theory, this approach can be faster than even the FGSM, if the ATN is designed to use less computation than is needed for running back-propagation on the target model. (The ATN does of course require extra time to train, but once this cost has been paid, an unlimited number of examples may be generated at low cost.) \runinhead{Attacks on non-differentiable systems.} All attacks mentioned above need to compute gradients of the model under attack in order to craft adversarial examples. However, this may not always be possible, for example if the model contains non-differentiable operations. In such cases, the adversary can train a substitute model and utilize the transferability of adversarial examples to attack the non-differentiable system, similar to the black box attacks described below. \subsubsection{Black box attacks} It was observed that adversarial examples generalize between different models~\cite{szegedy2104intriguing}. 
In other words, a significant fraction of adversarial examples which fool one model are able to fool a different model. This property is called ``transferability'' and is used to craft adversarial examples in the black box scenario. The actual fraction of transferable adversarial examples can vary from a few percent to almost $100\%$ depending on the source model, target model, dataset and other factors. Attackers in the black box scenario can train their own model on the same dataset as the target model, or even train their model on another dataset drawn from the same distribution. Adversarial examples for the adversary's model then have a good chance of fooling an unknown target model. It is also possible to intentionally design models to systematically cause high transfer rates, rather than relying on luck to achieve transfer. If the attacker is not in the complete black box scenario but is allowed to use probes, the probes may be used to train the attacker's own copy of the target model~\cite{Papernot-2016-IntroTransferability,Papernot-2016-TransferabilityStudy}, called a ``substitute.'' This approach is powerful because the input examples sent as probes do not need to be actual training examples; instead, they can be input points chosen by the attacker to find out exactly where the target model's decision boundary lies. The attacker's model is thus trained not just to be a good classifier but to actually reverse engineer the details of the target model, so the two models are systematically driven to have a high amount of transfer. In the complete black box scenario where the attacker cannot send probes, one strategy to increase the rate of transfer is to use an ensemble of several models as the source model for the adversarial examples~\cite{liu2017delving}. The basic idea is that if an adversarial example fools every model in the ensemble, it is more likely to generalize and fool additional models. 
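A minimal sketch of this ensemble idea, assuming the attacker has gradient access to each of their own source models (the `grad_fns` callables are hypothetical stand-ins for per-model loss gradients):

```python
import numpy as np

def ensemble_fgsm(x, grad_fns, eps):
    """Average the loss gradients of several attacker-controlled source
    models, then take one sign-gradient step. An example that fools the
    whole ensemble is more likely to transfer to an unseen target model."""
    avg_grad = np.mean([g(x) for g in grad_fns], axis=0)
    return np.clip(x + eps * np.sign(avg_grad), 0.0, 1.0)
```

Averaging gradients corresponds to attacking the mean of the models' losses; other fusion schemes (e.g. averaging logits before the loss) are also possible.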
Finally, in the black box scenario with probes, it is possible to just run optimization algorithms that do not use the gradient to directly attack the target model~\cite{Brendel2017-DecisionBasedBlackBox,Zoo2017-ZerothOrderBlackBox}. The time required to generate a single adversarial example is generally much higher than when using a substitute, but if only a small number of adversarial examples are required, these methods may have an advantage because they do not have the high initial fixed cost of training the substitute. \iffalse Ian thinks we don't actually need this subsubsection \subsubsection{Physical world attacks} \begin{figure}[h] \centering \includegraphics[width=4in]{figures/PhysicalSystem} \caption{Modelling physical world adversarial attack. Physical system $g(\bullet)$ could be modelled as a combination of a sensor and digital classifier $f(\bullet)$, in such case entire system could be attacked using know techniques to construct adversarial examples for white box and black box systems.} \label{fig:adv_comp:physical_system} \end{figure} Conceptually physical world attack in not much different from already discussed attack scenarios. Physical world attack could be modelled as an attack on digital classifier which is extended by some known or unknown sensor, see Figure~\ref{fig:adv_comp:physical_system}. If digital classifier is known and sensor could be easily modelled then this becomes a form of a white box attack on a classifier~\cite{Athalye2017-TurtleRifle,Sharif16AdvML}. On the other hand if either classifier is unknown or it's hard to model the sensor, then such attack could be considered a form of black box attack~\cite{PhysicalAdversarialExamples,Sharif16AdvML}. \fi \subsection{Overview of defenses} No method of defending against adversarial examples is yet completely satisfactory. This remains a rapidly evolving research area. We give an overview of the (not yet fully successful) defense methods proposed so far. 
Since adversarial perturbations generated by many methods look like high-frequency noise to a human observer\footnote{This may be because the human perceptual system finds the high-frequency components to be more salient; when blurred with a low pass filter, adversarial perturbations are often found to have significant low-frequency components.}, multiple authors have suggested using image preprocessing and denoising as a potential defense against adversarial examples. There is a large variation in the proposed preprocessing techniques, such as JPEG compression~\cite{das2017JpegDefense} or median filtering and reduced precision of the input data~\cite{Weilin2017-FeatureSqueezing}. While such defenses may work well against certain attacks, defenses in this category have been shown to fail in the white box case, where the attacker is aware of the defense~\cite{Warren2017-BreakingEnsembleWeakDefenses}. In the black box case, this defense can be effective in practice, as demonstrated by the winning team of the defense competition. Their defense, described in \secref{sec:adv_comp:def1}, is an example of this family of denoising strategies. Many defenses, intentionally or unintentionally, fall into a category called ``gradient masking.'' Most white box attacks operate by computing gradients of the model and thus fail if it is impossible to compute useful gradients. Gradient masking consists of making the gradient useless, either by changing the model in some way that makes it non-differentiable or gives it zero gradients in most places, or by making the gradients point away from the decision boundary. Essentially, gradient masking means breaking the optimizer without actually moving the class decision boundaries substantially. Because the class decision boundaries are more or less the same, defenses based on gradient masking are highly vulnerable to black box transfer~\cite{Papernot-2016-IntroTransferability}. 
Some defense strategies (like replacing smooth sigmoid units with hard threshold units) are intentionally designed to perform gradient masking. Other defenses, like many forms of adversarial training, are not designed with gradient masking as a goal, but often seem to learn to do gradient masking when applied in practice. Many defenses are based on detecting adversarial examples and refusing to classify the input if there are signs of tampering~\cite{metzen2017detecting}. This approach works as long as the attacker is unaware of the detector or the attack is not strong enough. Otherwise, the attacker can construct an attack which simultaneously fools the detector into thinking an adversarial input is a legitimate input and fools the classifier into making the wrong classification~\cite{Carlini2017-Breaking10Detectors}. Some defenses work but do so at the cost of seriously reducing accuracy on clean examples. For example, shallow RBF networks are highly robust to adversarial examples on small datasets like MNIST \cite{goodfellow2014explaining} but have much worse accuracy on clean MNIST than deep neural networks. Deep RBF networks might be both robust to adversarial examples and accurate on clean data, but to our knowledge no one has successfully trained one. Capsule networks have shown robustness to white box attacks on the SmallNORB dataset, but have not yet been evaluated on other datasets more commonly used in the adversarial example literature \cite{hinton2018matrix}. The most popular defense in current research papers is probably adversarial training~\cite{szegedy2104intriguing,Goodfellow-2015-adversarial,LearningWithStrongAdversary}. The idea is to inject adversarial examples into the training process and train the model either on adversarial examples or on a mix of clean and adversarial examples. 
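As a hedged sketch of this idea (not any particular paper's recipe), one adversarial-training step might look like the following, where `loss_grad` and `make_adversarial` are assumed helpers supplied by the surrounding training code:

```python
import numpy as np

def adversarial_training_step(params, x_batch, y_batch,
                              loss_grad, make_adversarial, lr=0.1):
    """One gradient step on a mix of clean and adversarial examples:
    craft adversarial versions of the batch against the *current* model,
    then update the parameters on the combined batch."""
    x_adv = np.stack([make_adversarial(params, x, y)
                      for x, y in zip(x_batch, y_batch)])
    x_mix = np.concatenate([x_batch, x_adv])
    y_mix = np.concatenate([y_batch, y_batch])
    return params - lr * loss_grad(params, x_mix, y_mix)
```

The important design point is that the adversarial examples are regenerated with the current parameters at every step, so the model is always trained against attacks on its present state.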
The approach was successfully applied to large datasets~\cite{Kurakin-AdversarialMlAtScale}, and can be made more effective by using discrete vector code representations rather than real number representations of the input \cite{thermometer_enconding2018}. One key drawback of adversarial training is that it tends to overfit to the specific attack used at training time. This has been overcome, at least on small datasets, by adding noise prior to starting the optimizer for the attack \cite{MadryPgd2017}. Another key drawback of adversarial training is that it tends to inadvertently learn to do gradient masking rather than to actually move the decision boundary. This can be largely overcome by training on adversarial examples drawn from an ensemble of several models~\cite{Tramer2017-EAT}. A remaining key drawback of adversarial training is that it tends to overfit to the specific constraint region used to generate the adversarial examples (models trained to resist adversarial examples in a max-norm ball may not resist adversarial examples based on large modifications to background pixels \cite{adversarial_sphere2018}, even if the new adversarial examples do not appear particularly challenging to a human observer). \section{Adversarial competition} \label{sec:adv_comp:competition} The phenomenon of adversarial examples creates a new set of problems in machine learning. Studying these problems is often difficult, because when a researcher proposes a new attack, it is hard to tell whether the attack is weak or whether the defense method used for benchmarking was simply not implemented well enough. Similarly, it is hard to tell whether a new defense method works well or whether it has just not been tested against the right attack. To accelerate research in adversarial machine learning and pit many proposed attacks and defenses against each other in order to obtain the most rigorous evaluation possible of these methods, we decided to organize a competition. 
In this competition, participants were invited to submit methods which craft adversarial examples (attacks) as well as classifiers which are robust to adversarial examples (defenses). To evaluate the competition, we ran all attack methods on our dataset to produce adversarial examples and then ran all defenses on all generated adversarial examples. Attacks were ranked by the number of times they were able to fool defenses, and defenses were scored by the number of correctly classified examples. \subsection{Dataset} When making a dataset for this competition we had the following requirements: \begin{enumerate} \item A large enough dataset and a non-trivial problem, so that the competition would be interesting. \item A well-known problem, so that people can potentially reuse existing classifiers. (This ensures that competitors are able to focus on the adversarial nature of the challenge, rather than spending all their time coming up with a solution to the underlying task.) \item Data samples which were never used before, so that participants were unlikely to overfit to a well-known dataset. \end{enumerate} These requirements were satisfied by choosing an image classification problem and creating a dataset of ImageNet-compatible images~\cite{deng2009imagenet}. To do this, we collected a set of images which were never used in publicly available datasets, labelled them using a pretrained ImageNet classifier, and then manually verified that these labels were correct. The original (non-adversarial) ImageNet challenge~\cite{ImageNetChallenge2015} is a complex and interesting problem, and thus satisfies requirement 1. Additionally, there are plenty of existing classifiers for ImageNet data, which satisfies requirement 2. Because we collected new images and labelled them ourselves instead of using an already available dataset, our approach satisfies requirement 3. 
Overall we collected two datasets for this competition: \begin{itemize} \item The \textbf{DEV} dataset was released to participants at the beginning of the competition, so that they could use it to develop their solutions. This dataset contained $1000$ images. \item The \textbf{FINAL} dataset was kept secret and was used to evaluate the final submissions of participants. It contained $5000$ images. \end{itemize} \subsection{Tasks and competition rules} Our competition had three tracks, each with a different task: \begin{itemize} \item \textbf{Non-targeted adversarial attack.} In this track, participants were invited to submit a method which performs a non-targeted black box attack, i.e. given an input image, generates an adversarial image which is likely to be misclassified by an unknown classifier. \item \textbf{Targeted adversarial attack.} In this track, participants were invited to submit a method which performs a targeted black box attack, i.e. given an input image and a target class, generates an adversarial image which is likely to be misclassified as the given target class by an unknown classifier. \item \textbf{Defense against adversarial attacks.} In this track, participants were invited to submit a classifier which is robust to adversarial examples. \end{itemize} In all tracks, participants submitted a program which performed the given task, and we ran this code using our evaluation infrastructure. The submitted program was given a set of images (and target classes in the case of a targeted attack) as input, and had to produce either an adversarial image (for an attack submission) or a classification label (for a defense submission) for each input image. Input images were fed to each submission in batches of 100 images. To limit the computational budget, the rules of the competition specified that each submission had to process a batch of 100 images in no more than 500 seconds. 
If a submission had not finished after 500 seconds, processing of the current batch was interrupted and the submission was fed the next batch. Only successfully processed images were included in the score of the submission (see details below in the evaluation sub-section). \subsection{Evaluation metrics} \label{sec:adv_comp:evaluation_metric} An attack got $1$ point each time it was able to fool a defense on a single image. If an attack was unable to fool a defense, or if it failed to generate an adversarial version of an image within the time limit, then it got $0$ points for that image. A defense got $1$ point for each correctly classified image and $0$ points for an incorrect classification or a failure to produce a classification label. A defense was used to score attacks only if it successfully processed all input images, so that attackers were neither rewarded nor penalized for making images that caused the defender to crash or time out---the competition is purely about classification accuracy. Likewise, an attack was used to score defenses only if it successfully processed all inputs. Points for each submission were added together and then normalized (using the same normalization constant for all submissions), such that the final scores of all submissions lie in the range $[0, 1]$, where $1$ means success on all images and $0$ means failure on all images. 
Specifically, the score of each non-targeted attack $a(Image)$ was computed using the following formula: \begin{equation}\label{eq:adv_comp:score_nontarget} \frac{1}{|D|N} \sum_{d \in D} \sum_{k=1}^{N} [d(a(I_k)) \ne TrueLabel_k] \end{equation} The score of each targeted attack $a(Image, TargetLabel)$ was computed as: \begin{equation}\label{eq:adv_comp:score_target} \frac{1}{|D|N} \sum_{d \in D} \sum_{k=1}^{N} [d(a(I_k, TargetLabel_k)) = TargetLabel_k] \end{equation} The score of each defense $d(Image)$ was computed in the following way: \begin{equation}\label{eq:adv_comp:score_defense} \frac{1}{|A|N} \sum_{a \in A} \sum_{k=1}^{N} [d(a(I_k)) = TrueLabel_k] \end{equation} where: \begin{itemize} \item $A$ is the set of all attacks (targeted and non-targeted) which were used for evaluation of defenses (attacks that crashed on some inputs or ran out of time for some inputs were not used to evaluate defenses); \item $D$ is the set of all defenses which were used for evaluation of attacks (defenses that crashed on some inputs or ran out of time for some inputs were not used to evaluate attacks); \item $N$ is the number of images in the dataset; \item $I_{k}$ is the $k$-th image from the dataset; \item $TrueLabel_{k}$ is the true label of image $I_k$; \item $TargetLabel_{k}$ is the chosen target label for image $I_k$; \item $[P]$ is an indicator function which equals $1$ when $P$ is true, and $0$ when $P$ is false or undefined; \item $d( \bullet )$ is a defending classifier. If the binary fails to complete execution within the time limit, the output of $d( \bullet )$ is a null label that never equals the true label. If $d(\bullet)$ is called on an undefined image, it is defined to always return the true label, so an attacker that crashes receives zero points. \end{itemize} In addition to the metrics used for ranking, after the competition we computed a worst-case score for each submission in the defense and non-targeted attack tracks. 
These scores were useful for understanding how submissions behave in the worst case. To compute the worst-case score of a defense, we computed the accuracy of the defense against each attack and chose the minimum: \begin{equation} \frac{1}{N} \min_{a \in A} \sum_{k=1}^{N} [d(a(I_k)) = TrueLabel_k] \end{equation} To compute the worst-case score of a non-targeted attack, we computed how often the attack caused a misclassification when used against each defense and chose the minimum misclassification rate: \begin{equation} \frac{1}{N} \min_{d \in D} \sum_{k=1}^{N} [d(a(I_k)) \ne TrueLabel_k] \end{equation} The worst-case score of a targeted attack could be computed in a similar way, but it is generally not useful because targeted attacks are much weaker than non-targeted ones, and all worst-case scores of targeted attacks were $0$. \subsection{Competition schedule} The competition was announced in May 2017, launched at the beginning of July 2017, and finished on October 1st, 2017. The competition was run in multiple rounds. There were three development rounds followed by the final round: \begin{itemize} \item August 1, 2017 - first development round \item September 1, 2017 - second development round \item September 15, 2017 - third development round \item October 1, 2017 - deadline for final submission \end{itemize} Development rounds were optional, and their main purpose was to help participants test their solutions. Only the final round was used to compute the final scores of submissions and determine the winners. All rounds were evaluated in a similar way. For the evaluation of a round, we gathered all submissions which were submitted before the round deadline, ran all of them, and computed scores as described in section~\ref{sec:adv_comp:evaluation_metric}. We used the DEV dataset to compute scores in the development rounds and the secret FINAL dataset to compute scores in the final round. 
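The ranking and worst-case formulas above reduce to simple reductions over a boolean results tensor; the following sketch illustrates them (the array layout is an assumption for illustration, not the actual competition code):

```python
import numpy as np

def nontargeted_attack_scores(fooled):
    """fooled[a, d, k] is True when attack a's image k was misclassified
    by defense d. Returns the normalized score (1/(|D|N) * sum) and the
    worst-case score (minimum misclassification rate over defenses)."""
    n_attacks, n_defenses, n_images = fooled.shape
    score = fooled.sum(axis=(1, 2)) / (n_defenses * n_images)
    worst = fooled.mean(axis=2).min(axis=1)
    return score, worst

def defense_scores(correct):
    """correct[d, a, k] is True when defense d classified attack a's
    image k correctly; the worst case is the minimum accuracy over attacks."""
    n_defenses, n_attacks, n_images = correct.shape
    score = correct.sum(axis=(1, 2)) / (n_attacks * n_images)
    worst = correct.mean(axis=2).min(axis=1)
    return score, worst
```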
\subsection{Technical aspects of evaluation} \begin{algorithm}[t] \caption{Operation of the evaluation infrastructure}\label{alg:adv-comp:eval} \begin{algorithmic}[1] \Statex \(\triangleright\) PREPARE DATASET \parState{Split dataset $D = \{I_1, \ldots, I_N\}$ into batches $\{B_1, \ldots, B_k\}$, such that each batch $B_i$ contains $100$ images $\{I_{100(i-1)+1}, \ldots, I_{100i} \}$.} \parState{Assign the maximum allowed perturbation size $\epsilon_i$ to each batch $B_i$. The value of $\epsilon_i$ is randomly chosen from the set $\{\frac{4}{255}, \frac{8}{255}, \frac{12}{255}, \frac{16}{255}\}$.} \Statex \(\triangleright\) RUN ALL ATTACKS \ForAll{$b \in \{1, \ldots, k\}$} \Comment{loop over all batches, $b$ is the batch index} \ForAll{non-targeted attacks $a$} \parState{Run attack $a$ on batch $B_b$ and generate a batch of adversarial images $\hat{B}^{a}_{b}$. The maximum perturbation size $\epsilon_b$ is provided to the attack.} \parState{Project each adversarial image from $\hat{B}^{a}_{b}$ into the $L_{\infty}$ $\epsilon_b$-neighborhood of the corresponding clean image from $B_{b}$.} \EndFor \ForAll{targeted attacks $t$} \parState{Run attack $t$ on batch $B_b$ and generate a batch of adversarial images $\hat{B}^{t}_{b}$. 
The attack is provided with the maximum perturbation size $\epsilon_b$ as well as the target classes for each image from the batch $B_b$.} \parState{Project each adversarial image from $\hat{B}^{t}_{b}$ into the $L_{\infty}$ $\epsilon_b$-neighborhood of the corresponding clean image from $B_{b}$.} \EndFor \EndFor \Statex \(\triangleright\) RUN ALL DEFENSES \ForAll{$b \in \{1, \ldots, k\}$} \Comment{loop over all batches, $b$ is the batch index} \ForAll{defenses $d$} \ForAll{non-targeted attacks $a$} \parState{Run defense $d$ on all images from batch $\hat{B}^{a}_{b}$.} \EndFor \ForAll{targeted attacks $t$} \parState{Run defense $d$ on all images from batch $\hat{B}^{t}_{b}$.} \EndFor \EndFor \EndFor \Statex \(\triangleright\) COMPUTE SCORES \parState{Determine the subset of targeted and non-targeted attacks $A$ which produced all adversarial images.} \parState{Determine the subset of defenses $D$ which produced classification labels for all input images.} \parState{Compute the scores of all submissions using equations~\ref{eq:adv_comp:score_nontarget},~\ref{eq:adv_comp:score_target},~\ref{eq:adv_comp:score_defense}.} \end{algorithmic} \end{algorithm} Competition participants submitted pieces of code, and we ran them ourselves. This approach poses several challenges. First of all, we needed to protect the competition infrastructure from malicious code. Secondly, given the dataset size and the number of submissions, we had to run the evaluation efficiently. We partnered with Kaggle\footnote{\url{www.kaggle.com}} and used their platform as a frontend for the competition. Kaggle hosted the competition website and leaderboard, and participants uploaded their submissions through Kaggle. For the evaluation of each round, we took all submissions from Kaggle and fed them into our evaluation infrastructure. The evaluation infrastructure worked as described in Algorithm~\ref{alg:adv-comp:eval}. As can be seen from the algorithm, attacks can be run independently of each other, and the same holds for defenses. 
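The projection step applied after every attack in Algorithm~\ref{alg:adv-comp:eval} amounts to two clips; a sketch assuming images are float arrays with pixel values in $[0, 1]$:

```python
import numpy as np

def project_linf(adv, clean, eps):
    """Clamp an adversarial image back into the L-infinity
    eps-neighborhood of its clean counterpart, then into the valid
    pixel range [0, 1]."""
    return np.clip(np.clip(adv, clean - eps, clean + eps), 0.0, 1.0)
```

Applying this projection on the infrastructure side guarantees that no attack submission can exceed the per-batch perturbation budget, regardless of what its code actually produced.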
We took advantage of this fact and parallelized the execution of all attacks and all defenses by spreading them across multiple machines. For the final evaluation we used 100 Google Cloud VMs. At any given moment, each VM was running either one attack on one batch from the dataset or one defense on one batch of adversarial images. Submissions were run inside Docker containers to isolate them from our evaluation infrastructure and from the outside world. \section{Competition results} \label{sec:adv_comp:submissions} For the final round we had 91 non-targeted attack submissions, 65 targeted attack submissions and 107 defense submissions. Over the course of the competition, submissions kept improving relative to the baselines, as can be seen from Figure~\ref{fig:adv_comp:improvements}. \begin{figure}[t] \centering \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{figures/improvement_defense} \caption{Defenses} \end{subfigure} \, \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{figures/improvement_nontargeted} \caption{Non-targeted attacks} \end{subfigure} \, \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{figures/improvement_targeted} \caption{Targeted attacks} \end{subfigure} \caption{Difference between the score of the top submission and the best baseline in each round of each track. As can be seen, submissions kept improving every round. }\label{fig:adv_comp:improvements} \end{figure} Final results of the top submissions in each track are provided in Tables~\ref{tab:adv_comp:defenses}, \ref{tab:adv_comp:nontarget} and~\ref{tab:adv_comp:target}. The meaning of the columns is as follows.
\textbf{Rank} is the submission's rank in the final scoring, \textbf{score} is the submission score as described in Section~\ref{sec:adv_comp:evaluation_metric}, \textbf{raw score} is the un-normalized score, i.e., the number of times the submission scored a point on an image, \textbf{worst score} is the submission's worst-case score, and \textbf{median eval time} is the median time needed to evaluate one batch of $100$ images. To put things into perspective, the scores of all submissions in the final round, sorted from best to worst, are plotted together with the provided baselines in Figure~\ref{fig:adv_comp:scores_plot}. \begin{table} \centering \caption{Top-5 defense submissions, best baseline and submission with maximum worst-case score} \label{tab:adv_comp:defenses} \begin{tabular}{r|l|r|r|r|r} \hline\noalign{\smallskip} Rank & Team name or baseline & Score & Raw Score & Worst Score & Median eval time \\ \noalign{\smallskip}\svhline\noalign{\smallskip} 1 & TSAIL & 0.953164 & 691044 & 0.1184 & 51.0 \\ 2 & iyswim & 0.923524 & 669555 & 0.2520 & 126.0 \\ 3 & Anil Thomas & 0.914840 & 663259 & 0.2666 & 97.0 \\ 4 & erko & 0.911961 & 661172 & 0.2920 & 87.0 \\ 5 & Stanford \& Suns & 0.910593 & 660180 & 0.0682 & 129.0 \\ 24 & Rafael Moraes & 0.871739 & 632011 & \textbf{0.5358} & 17.0 \\ 56 & Baseline (Ens. adv.
ir\_v2) & 0.772908 & 560358 & 0.0186 & 17.0 \\ \noalign{\smallskip}\hline\noalign{\smallskip} \end{tabular} \end{table} \begin{table} \centering \caption{Top-5 non-targeted attack submissions, best baseline, and best submission according to worst-case score.} \label{tab:adv_comp:nontarget} \begin{tabular}{r|l|r|r|r|r} \hline\noalign{\smallskip} Rank & Team name or baseline & Score & Raw Score & Worst Score & Median eval time \\ \noalign{\smallskip}\hline\noalign{\smallskip} 1 & TSAIL & 0.781644 & 410363 & 0.1364 & 423.0 \\ 2 & Sangxia & 0.776855 & 407849 & \textbf{0.3412} & 421.0 \\ 3 & Stanford \& Sun & 0.774025 & 406363 & 0.2722 & 497.0 \\ 4 & iwiwi & 0.768981 & 403715 & 0.1352 & 76.0 \\ 5 & toshi\_k & 0.755598 & 396689 & 0.3322 & 448.0 \\ 44 & Baseline (FGSM) & 0.346400 & 181860 & 0.0174 & 17.0 \\ \noalign{\smallskip}\hline\noalign{\smallskip} \end{tabular} \end{table} \begin{table} \centering \caption{Top-5 targeted attack submissions and best baseline.} \label{tab:adv_comp:target} \begin{tabular}{r|l|r|r|r} \hline\noalign{\smallskip} Rank & Team & Score & Raw Score & Median Eval Time \\ \noalign{\smallskip}\hline\noalign{\smallskip} 1 & TSAIL & 0.402211 & 211161.0 & 392.0 \\ 2 & Sangxia & 0.368773 & 193606.0 & 414.0 \\ 3 & FatFingers & 0.368029 & 193215.0 & 493.0 \\ 4 & Anil Thomas & 0.364552 & 191390.0 & 495.0 \\ 5 & WNP & 0.347935 & 182666.0 & 487.0 \\ 24 & Baseline (Iter. T. C.
20) & 0.199773 & 104881.0 & 127.0 \\ \noalign{\smallskip}\hline\noalign{\smallskip} \end{tabular} \end{table} \begin{figure}[t] \centering \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{figures/defenses_scores_plot} \caption{Defenses} \end{subfigure} \, \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{figures/nontarget_scores_plot} \caption{Non-targeted attacks} \end{subfigure} \, \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{figures/target_scores_plot} \caption{Targeted attacks} \end{subfigure} \caption{Scores of submissions in all three tracks. The solid line in each plot shows submission scores as a function of submission rank; the dashed lines show the scores of the baselines we provided. These plots show the gap between the best and worst submissions, as well as how much the top submissions improved on the provided baselines. }\label{fig:adv_comp:scores_plot} \end{figure} As can be seen from the tables, the best defenses achieved more than $90\%$ accuracy on adversarial images from all attacks. At the same time, the worst-case scores of the defenses are much lower; the highest worst-case score among all defenses is only $53.6\%$. This may indicate that while fairly high average-case accuracy against adversarial examples is achievable, such a model is still susceptible to adversarial examples and can be fooled if an adversary is able to find them. A similar observation holds for attacks: the best attacks achieved up to a $78\%$ fooling rate against all defenses, while the worst-case score of attacks was no more than $34\%$. \section{Top scoring submissions} In the remainder of the chapter, we present the solutions of several of the top-scoring teams. To describe the solutions, we use the following notation: \begin{itemize} \item $x$ - input image with label $y_{true}$.
Different images are distinguished by superscripts, for example, images $x^{1}, x^{2}, \ldots$ with labels $y^{1}_{true}, y^{2}_{true}, \ldots$. \item $y_{target}$ is a target class for image $x$ in a targeted adversarial attack. \item Functions with names like $f(\bullet), g(\bullet), h(\bullet), \ldots$ are classifiers which map input images into logits. In other words, $f(x)$ is the logits vector of network $f$ on image $x$. \item $J(f(x), y)$ - cross-entropy loss between logits $f(x)$ and class $y$. \item $\epsilon$ - maximum $L_{\infty}$ norm of the adversarial perturbation. \item $x_{adv}$ - adversarial image. For iterative methods, $x_{adv}^{i}$ is the adversarial example generated on step $i$. \item $Clip_{[a, b]}(\bullet)$ is a function which performs element-wise clipping of the input tensor to the interval $[a, b]$. \item $\mathcal{X}$ is the set of all training examples. \end{itemize} All image values are normalized to the $[0, 1]$ interval. Values of $\epsilon$ are also normalized to the $[0, 1]$ range; for example, $\epsilon = \frac{16}{255}$ corresponds to a uint8 epsilon value of $16$. \input{submission_defense_1_tsail.tex} \input{submission_attack_1_tsail.tex} \input{submission_defense_2_iyswim.tex} \input{submission_attack_2_sangxia.tex} \input{submission_target_3_fatfingers.tex} \input{submission_defense_4_erko.tex} \input{submission_nontarget_4_iwiwi.tex} \section{Conclusion} Adversarial examples are an interesting phenomenon and an important problem in machine learning security. The main goals of this competition were to increase awareness of the problem and to stimulate researchers to propose novel approaches. The competition definitely helped to increase awareness of the problem. The article ``AI Fight Club Could Help Save Us from a Future of Super-Smart Cyberattacks''\footnote{ \url{www.technologyreview.com/s/608288}} was published in MIT Technology Review about the competition, and more than 100 teams competed in the final round.
The competition also pushed participants to explore new approaches and to improve existing methods. In all three tracks, competitors showed significant improvements over the provided baselines by the end of the competition. Additionally, the top submission in the defense track achieved $95\%$ accuracy on all adversarial images produced by all attacks. While the worst-case accuracy was not as good as the average accuracy, these results still suggest that practical applications may be able to achieve a reasonable level of robustness to adversarial examples in the black-box case. \bibliographystyle{abbrv} \subsection{1st place in both attack tracks: team TsAIL} \label{sec:adv_comp:mim} \runinhead{Team members:} Yinpeng Dong, Fangzhou Liao, Ming Liang, Tianyu Pang, Jun Zhu and Xiaolin Hu. In this section, we introduce the momentum iterative gradient-based attack method, which won first place in both the non-targeted and targeted attack tracks. We first describe the algorithm in Sec.~\ref{sec:mim:method}, and then describe our submissions for non-targeted and targeted attacks in Sec.~\ref{sec:mim:non-target} and Sec.~\ref{sec:mim:target}, respectively. A more detailed description can be found in~\cite{dong2017boosting}. \subsubsection{Method} \label{sec:mim:method} The momentum iterative attack method is built upon the basic iterative method~\cite{Kurakin-PhysicalAdversarialExamples} by adding a momentum term, which greatly improves the transferability of the generated adversarial examples. Existing attack methods exhibit low efficacy when attacking black-box models, due to the well-known trade-off between attack strength and transferability~\cite{Kurakin-AdversarialMlAtScale}. In particular, one-step methods (e.g., FGSM) calculate the gradient only once, under the assumption that the decision boundary is linear around the data point.
In practice, however, the linear assumption may not hold when the distortions are large~\cite{liu2017delving}, which makes the adversarial examples generated by one-step methods ``underfit'' the model, limiting attack strength. In contrast, the basic iterative method greedily moves the adversarial example in the direction of the gradient in each iteration. The adversarial example can therefore easily fall into poor local optima and ``overfit'' the model, and such examples are not likely to transfer across models. In order to break this dilemma, we integrate momentum~\cite{Poljak1964Some} into the basic iterative method for the purpose of stabilizing update directions and escaping from poor local optima, which are the common benefits of momentum in the optimization literature~\cite{Korczak1998Optimization,Sutskever2013On}. As a consequence, it alleviates the trade-off between attack strength and transferability, yielding strong black-box attacks. The momentum iterative method for non-targeted attack is summarized as: \begin{equation} g^{t+1} = \mu \cdot g^{t} + \frac{\nabla_{x}J(f(x^{t}_{adv}),y_{true})}{\|\nabla_{x}J(f(x^{t}_{adv}),y_{true})\|_1}, \hspace{1ex} x^{t+1}_{adv} = \mathrm{Clip}_{[0, 1]}(x^{t}_{adv} + \alpha\cdot\mathrm{sign}(g^{t+1})) \end{equation} where $g^{0}=0$, $x^{0}_{adv}=x$, and $\alpha = \frac{\epsilon}{T}$ with $T$ being the number of iterations. Here $g^t$ accumulates the gradients of the first $t$ iterations with a decay factor $\mu$, and the adversarial example $x^t_{adv}$ is perturbed in the direction of the sign of $g^t$ with step size $\alpha$. In each iteration, the current gradient $\nabla_{x}J(f(x^{t}_{adv}),y_{true})$ is normalized to have unit $L_1$ norm (other norms would work too), because we noticed that the scale of the gradients varies in magnitude between iterations. To obtain more transferable adversarial examples, we apply the momentum iterative method to attack an ensemble of models.
If an example remains adversarial for multiple models, it may capture an intrinsic direction that always fools these models and is more likely to transfer to other models at the same time~\cite{liu2017delving}, thus enabling powerful black-box attacks. We propose to attack multiple models whose \textit{logit} activations are fused together, because the logits capture the logarithmic relationships between the probability predictions; an ensemble of models fused by logits aggregates the fine-grained outputs of all models, whose vulnerabilities can then be discovered more easily. Specifically, to attack an ensemble of $K$ models, we fuse the logits as \begin{equation}\label{eq:mim:logits_fusing} f(x) = \sum_{k=1}^K w_k f_k(x) \end{equation} where $f_k(x)$ are the logits of the $k$-th model and $w_k$ is the ensemble weight, with $w_k \geq 0$ and $\sum_{k=1}^K w_k = 1$. This yields a single ensemble model $f(x)$, which we can then attack with the momentum iterative method. \subsubsection{Submission for non-targeted attack} \label{sec:mim:non-target} For the non-targeted attack, we implemented the momentum iterative method attacking an ensemble of the following models: \begin{itemize} \item Normally trained (i.e., without adversarial training) Inception~v3~\cite{Szegedy2016Rethinking}, Inception~v4~\cite{szegedy2017inception}, Inception~Resnet~v2~\cite{szegedy2017inception} and Resnet~v2-101~\cite{he2016identity} models. \item Adversarially trained Inception~v3\textsubscript{adv}~\cite{Kurakin-AdversarialMlAtScale} model. \item Ensemble adversarially trained Inc-v3\textsubscript{ens3}, Inc-v3\textsubscript{ens4} and IncRes-v2\textsubscript{ens} models from~\cite{Tramer2017-EAT}. \end{itemize} Ensemble weights (from Equation~\ref{eq:mim:logits_fusing}) were $\frac{0.25}{7.25}$ for Inception-v3\textsubscript{adv} and $\frac{1}{7.25}$ for all other models. The number of iterations was $10$ and the decay factor $\mu$ was $1.0$.
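The non-targeted update and the logit fusion can be sketched in a few lines of NumPy. This is an illustrative simplification, not the competition code: `grad_fn` is a hypothetical callable returning $\nabla_{x}J(f(x^{t}_{adv}),y_{true})$, which in practice is obtained by backpropagation through the fused ensemble.

```python
import numpy as np

def fuse_logits(logits_list, weights):
    """Ensemble fusion in logit space: f(x) = sum_k w_k f_k(x)."""
    return sum(w * l for w, l in zip(weights, logits_list))

def momentum_iterative_attack(x, grad_fn, eps, T=10, mu=1.0):
    """Non-targeted MIM: accumulate L1-normalized gradients with decay mu,
    step eps/T in the sign direction each iteration, keep pixels in [0, 1]."""
    alpha = eps / T
    g = np.zeros_like(x)
    x_adv = x.copy()
    for _ in range(T):
        grad = grad_fn(x_adv)                     # dJ/dx at the current iterate
        g = mu * g + grad / np.sum(np.abs(grad))  # normalize to unit L1 norm
        x_adv = np.clip(x_adv + alpha * np.sign(g), 0.0, 1.0)
    return x_adv
```

Here `grad_fn` would wrap the fused model of Equation~\ref{eq:mim:logits_fusing}; the targeted variant described in the next subsection replaces the sign step with a rounded and clipped momentum term.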
\subsubsection{Submission for targeted attack}\label{sec:mim:target} For targeted attacks, we used a different form of the momentum iterative method: \begin{align} g^{t+1} & = \mu \cdot g^{t} + \frac{\nabla_{x}J(f(x^{t}_{adv}),y_{target})}{\mathrm{std}\bigl(\nabla_{x}J(f(x^{t}_{adv}),y_{target})\bigr)}\\ x^{t+1}_{adv} & = \mathrm{Clip}_{[0,1]}\Bigl(x^{t}_{adv} - \alpha\cdot \mathrm{Clip}_{[-2, 2]}(\mathrm{round}(g^{t+1})) \Bigr) \end{align} where $\mathrm{std}(\bullet)$ is the standard deviation and $\mathrm{round}(\bullet)$ rounds to the nearest integer. The values of $\mathrm{Clip}_{[-2, 2]}(\mathrm{round}(\bullet))$ lie in the set $\{-2, -1, 0, 1, 2\}$, which enables a larger search space compared to the sign function. We observed no transferability of the generated adversarial examples in the targeted setting, so we applied our method to attack several commonly used models in a white-box manner. We built two versions of the attack. If the perturbation size $\epsilon$ was smaller than $\frac{8}{255}$, we attacked an ensemble of Inception v3 and IncRes-v2\textsubscript{ens} with weights $\frac{1}{3}$ and $\frac{2}{3}$; otherwise we attacked an ensemble of Inception v3, Inception-v3\textsubscript{adv}, Inc-v3\textsubscript{ens3}, Inc-v3\textsubscript{ens4} and IncRes-v2\textsubscript{ens} with ensemble weights $\frac{4}{11}, \frac{1}{11}, \frac{1}{11}, \frac{1}{11}$ and $\frac{4}{11}$. The number of iterations was $40$ and $20$, respectively, and the decay factor $\mu$ was $1.0$. \subsection{2nd place in both attack tracks: team Sangxia} \label{sec:adv_comp:submission_sangxia} \runinhead{Team members:} Sangxia Huang In this section, we present the submission by Sangxia Huang for both non-targeted and targeted attacks. The approach is an iterated FGSM attack against an ensemble of classifiers, with random perturbations and augmentations for increased robustness and transferability of the generated attacks. The source code is available online.
\footnote{\url{https://github.com/sangxia/nips-2017-adversarial}} We also optimize the iteration steps for improved efficiency, as described in more detail below. \runinhead{Basic idea} An intriguing property of adversarial examples observed in many works~\cite{Papernot-2016-IntroTransferability,szegedy2104intriguing,goodfellow2014explaining,Papernot-2016-TransferabilityStudy} is that adversarial examples generated for one classifier transfer to other classifiers. Therefore, a natural approach for effective attacks against unknown classifiers is to generate strong adversarial examples against a large collection of classifiers. Let $f^{1}, \ldots, f^{k}$ be an ensemble of image classifiers that we choose to target. In our solution, we give equal weight to each of them. For notational simplicity, we assume that the inputs to all $f^{i}$ have the same size; otherwise, we first insert a bi-linear scaling layer, which is differentiable. The differentiability ensures that the correct gradient signal is propagated through the scaling layer to the individual pixels of the images. Another idea we use to increase the robustness and transferability of the attacks is image augmentation. Denote by $T_{\theta}$ an image augmentation function with parameter $\theta$. For instance, we can have $\theta \in [0, 2\pi)$ as an angle and $T_{\theta}$ as the function that rotates the input image clockwise by $\theta$. The parameter $\theta$ can also be a vector. For instance, we can have $\theta \in (0,\infty)^{2}$ as scaling factors in the width and height dimensions, and $T_{\theta}$ as the function that scales the input image by $\theta_1$ in the width direction and by $\theta_2$ in the height direction. In our final algorithm, $T_{\theta}$ takes the general form of a projective transformation with $\theta \in \mathbb{R}^8$, as implemented in \verb|tf.contrib.image.transform|. Let $x$ be an input image, and $y_{true}$ be the label of $x$.
Our attack algorithm works to find an $x^{adv}$ that maximizes the expected average cross-entropy loss of the predictions of $f^{1}, \ldots, f^{k}$ over a random input augmentation\footnote{The distribution we use for $\theta$ corresponds to a small random augmentation. See code for details.} \[ \max_{x^{adv}: \|x-x^{adv}\|_{\infty} \le \epsilon} \mathbf{E}_{\theta} \left[\frac{1}{k}\sum_{i=1}^{k} J\left(f^{i}(T_{\theta}(x)), y_{true}\right) \right]\,. \] However, in a typical attack scenario, the true label $y_{true}$ is not available to the attacker, so we substitute it with a pseudo-label $\hat{y}$ generated by an image classifier $g$ that is available to the attacker. The objective of our attack is thus the following \begin{equation} \max_{x^{adv}: \|x-x^{adv}\|_{\infty} \le \epsilon} \frac{1}{k}\sum_{i=1}^{k} \mathbf{E}_{\theta^{i}} \left[ J\left(f^{i}(T_{\theta^{i}}(x)), g(x)\right) \right]\,. \label{eq:adv_comp:sangxia:obj} \end{equation} Using linearity of gradients, we write the gradient of the objective as \[ \frac{1}{k}\sum_{i=1}^{k} \nabla_x \mathbf{E}_{\theta^{i}} \left[ J\left(f^{i}(T_{\theta^{i}}(x)), g(x)\right) \right]\,. \] For typical distributions of $\theta$, such as the uniform or normal distribution, the gradient of the expected cross-entropy loss over a random $\theta$ is hard to compute. In our solution, we replace it with an empirical estimate: an average of the gradients for a few samples of $\theta$. We also adopt the approach of~\cite{Tramer2017-EAT}, where $x$ is first randomly perturbed. The use of random projective transformations seems to be a natural idea, but to the best of our knowledge, it had not been explicitly described in previous works on generating adversarial examples for image classifiers. In the rest of this section, we use $\widehat{\nabla^{i}}(x)$ to denote the empirical gradient estimate for classifier $i$ on input image $x$ as described above.
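The empirical estimate $\widehat{\nabla^{i}}(x)$ can be sketched as follows. All names here are illustrative placeholders, not the submission's code: `sample_theta` draws a random augmentation parameter, `transform` plays the role of $T_{\theta}$, and `grad_of_loss` returns the gradient of $J(f^{i}(T_{\theta}(x)), \hat{y})$ with respect to $x$, with the chain rule through the differentiable augmentation folded in.

```python
import numpy as np

def empirical_grad(x, grad_of_loss, sample_theta, transform, n_samples=4):
    """Monte-Carlo estimate of the gradient of the expected loss over random
    augmentations: average per-sample gradients over a few draws of theta."""
    grads = [grad_of_loss(transform(x, sample_theta())) for _ in range(n_samples)]
    return np.mean(grads, axis=0)
```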
Let $x^{0}_{adv} := x$, $x^{min} = \max(x-\epsilon,0)$, $x^{max}=\min(x+\epsilon,1)$, and let $\alpha^{1}, \alpha^{2}, \ldots$ be a sequence of pre-defined step sizes. In the $i$-th step of the iteration, we update the image by \[ x^{i}_{adv} = \mathrm{clip}\left(x^{i-1}_{adv} + \alpha^{i}\, \mathrm{sign}\left( \frac{1}{k} \sum_{j=1}^{k} \widehat{\nabla^{j}}(x^{i-1}_{adv}) \right), x^{min}, x^{max}\right)\,. \] \runinhead{Optimization} We noticed in our experiments that non-targeted attacks against pre-trained networks without defense (white-box and black-box) typically succeed in 3--4 iterations, whereas attacks against adversarially trained networks take more iterations. We also observed that in later iterations, there is little benefit in including in the ensemble undefended networks that have already been successfully attacked. In the final solution, each iteration is defined by a step size $\alpha^{i}$ as well as the set of classifiers included in the ensemble for that iteration. These parameters were found through trial and error on the official development dataset of the competition. \runinhead{Experiments: non-targeted attack} We randomly selected 18,000 images from ImageNet~\cite{ImageNetChallenge2015} which Inception V3~\cite{szegedy2015inceptionv3} classified correctly. The classifiers in the ensemble are: Inception V3~\cite{szegedy2015inceptionv3}, ResNet 50~\cite{he2015resnetv1}, ResNet 101~\cite{he2015resnetv1}, Inception ResNet V2~\cite{szegedy2017inception}, Xception~\cite{chollet2016xception}, ensemble adversarially trained Inception ResNet V2 (EnsAdv Inception ResNet V2)~\cite{Tramer2017-EAT}, and adversarially trained Inception V3 (Adv Inception V3)~\cite{Kurakin-AdversarialMlAtScale}. We held out a few models to evaluate the transferability of our attacks.
The holdout models listed in Table~\ref{tab:adv_comp:sangxia:non_targeted} are: Inception V4~\cite{szegedy2017inception}, and ensemble adversarially trained Inception V3 with 2 (and 3) external models (Ens-3-Adv Inception V3 and Ens-4-Adv Inception V3, respectively)~\cite{Tramer2017-EAT}. \begin{table}[h] \centering \caption{Success rate --- non-targeted attack} \label{tab:adv_comp:sangxia:non_targeted} \begin{tabular}{p{4cm} r} \hline Classifier & Success rate \\ \hline Inception V3 & $96.74\%$ \\ ResNet 50 & $92.78\%$ \\ Inception ResNet V2 & $92.32\%$ \\ EnsAdv Inception ResNet V2 & $87.36\%$ \\ Adv Inception V3 & $83.73\%$ \\ \hline Inception V4 & $91.69\%$ \\ Ens-3-Adv Inception V3 & $62.76\%$ \\ Ens-4-Adv Inception V3 & $58.11\%$ \\ \hline \end{tabular} \end{table} Table~\ref{tab:adv_comp:sangxia:non_targeted} lists the success rates for non-targeted attacks with $\epsilon=16/255$. The performance for $\epsilon=12/255$ is similar, and somewhat worse for smaller $\epsilon$. We see that a decent fraction of the generated attacks transfer to the two holdout adversarially trained networks Ens-3-Adv Inception V3 and Ens-4-Adv Inception V3. The transfer rate for many other publicly available pretrained networks without defense is close to or above $90\%$. For brevity, we only list the performance on Inception V4 for comparison. \runinhead{Targeted attack} Our targeted attack follows a similar approach as the non-targeted attack. The main differences are: \begin{enumerate} \item For the objective, we now \emph{minimize} the loss with respect to a target label $y_{target}$, instead of maximizing it with respect to $\hat{y}$ as in Equation (\ref{eq:adv_comp:sangxia:obj}). \item Our experiments show that random image augmentation severely decreases the success rate even for \emph{white-box} attacks, so no augmentation is performed for targeted attacks. Note that here success is defined as making the classifier output the target class.
The attacks with image augmentation typically managed to cause the classifiers to output some wrong label other than the target class. \end{enumerate} Our conclusion is that if the success criterion is to trick the classifier into outputting a specific target class, then our targeted attack does not transfer well and is not robust. \subsection{1st place in defense track: team TsAIL} \label{sec:adv_comp:def1} \runinhead{Team members:} Yinpeng Dong, Fangzhou Liao, Ming Liang, Tianyu Pang, Jun Zhu and Xiaolin Hu. In this section, we introduce the high-level representation guided denoiser~(HGD) method, which won first place in the defense track. The idea is to train a neural-network-based denoiser to remove the adversarial perturbation. \subsubsection{Dataset} \label{sec:hgd:dataset} To prepare the training set for the denoiser, we first extracted 20K images from the ImageNet training set (20 images per class). We then used a variety of adversarial attacks to distort these images and form a training set. The attacking methods included FGSM and I-FGSM and were applied to many models and their ensembles to simulate weak and strong attacks. \subsubsection{Denoising U-net} The denoising autoencoder (DAE) \cite{vincent2008extracting} is a potential choice for the denoising network, but the DAE has a bottleneck for the transmission of fine-scale information between the encoder and decoder. This bottleneck structure may not be capable of carrying the multi-scale information contained in the images, which is why we used a denoising U-net (DUNET). \begin{figure*} \includegraphics[width=\linewidth]{figures/tsail/detail_daunet3} \caption{ Details of the DUNET. The numbers inside each cube stand for width $\times$ height, and the number outside the cube stands for the number of channels.
In all C3 blocks of the feedforward path, the stride of the first C is $2\times2$.} \label{fig_daunet} \end{figure*} Compared with the DAE, the DUNET adds lateral connections from encoder layers to their corresponding decoder layers of the same resolution (Figure \ref{fig_daunet}). In this way, the network learns to predict only the adversarial noise ($d\hat{x}$ in Figure \ref{fig_daunet}), which is more relevant to denoising and easier than reconstructing the whole image \cite{zhang2017beyond}. The clean image can then be readily obtained by subtracting the noise (adding $-d\hat{x}$) from the corrupted input. \subsubsection{Loss function} \label{sec:hgd:method} \begin{figure} \centering \includegraphics[width=0.6\columnwidth]{figures/tsail/hgd} \caption{Diagrams of the various denoisers. Upper-left: the vanilla denoiser. Upper-right: the feature guided denoiser. Lower-left: the logits guided denoiser. Lower-right: the class-label guided denoiser.} \label{fig:hgd} \end{figure} The vanilla denoiser uses the reconstruction distance as the loss function, but we found a better approach. Given a target neural network, we extract its representation at the $l$-th layer for $x$ and $\hat{x}$, and calculate the loss function as: \begin{equation} L=\|f_l(\hat{x}) - f_l(x)\|_1. \end{equation} The corresponding models are called HGDs, because the supervision signal comes from certain high-level layers of the classifier and carries guidance information related to image classification. We propose two HGDs with different choices of $l$. For the first HGD, we define $l=-2$ as the index of the topmost convolutional layer; this denoiser is called the feature guided denoiser (FGD). For the second HGD, we use the logits layer, so it is called the logits guided denoiser (LGD). Another kind of HGD uses the classification loss of the target model as the denoising loss function, which is supervised learning, as ground-truth labels are needed. This model is called the class label guided denoiser (CGD) (see Fig. \ref{fig:hgd}).
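As a stand-alone sketch of the guidance loss, assuming a `feature_fn` placeholder for the target model's fixed $f_l$ (only the denoiser's parameters are trained against this loss):

```python
import numpy as np

def hgd_loss(x_denoised, x_clean, feature_fn):
    """Guidance loss: L1 distance between the target model's representations
    of the denoised and clean images. For FGD, feature_fn is the topmost
    convolutional layer; for LGD, it is the logits layer."""
    return np.sum(np.abs(feature_fn(x_denoised) - feature_fn(x_clean)))
```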
Please refer to our full-length paper \cite{liao2017defense} for more information. \subsection{2nd place in defense track: team iyswim} \label{sec:adv_comp:mitigating_adv} \runinhead{Team members:} Cihang Xie, Jianyu Wang, Zhishuai Zhang, Zhou Ren and Alan Yuille In this submission, we propose to utilize randomization as a defense against adversarial examples. Specifically, we propose a randomization-based method, as shown in Figure~\ref{Fig:pipline}, which adds a random resizing layer and a random padding layer to the beginning of the classification networks. Our method enjoys the following advantages: (1) no additional training or fine-tuning; (2) very few additional computations; (3) compatibility with other adversarial defense methods.
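A minimal NumPy sketch of the two randomization layers follows (an illustration under stated assumptions: nearest-neighbor index selection stands in for a proper image resize, and the size bounds are illustrative):

```python
import numpy as np

def random_resize(x, lo=299, hi=331, rng=None):
    """Resize an HxWxC image to a random size in [lo, hi)
    (nearest-neighbor index selection as a stand-in for real resizing)."""
    rng = rng or np.random.default_rng()
    h, w = rng.integers(lo, hi), rng.integers(lo, hi)
    rows = np.arange(h) * x.shape[0] // h
    cols = np.arange(w) * x.shape[1] // w
    return x[rows][:, cols]

def random_pad(x, out=331, rng=None):
    """Pad with zeros to out x out, placing the image at a random offset."""
    rng = rng or np.random.default_rng()
    h, w = x.shape[:2]
    top = rng.integers(0, out - h + 1)
    left = rng.integers(0, out - w + 1)
    padded = np.zeros((out, out) + x.shape[2:], dtype=x.dtype)
    padded[top:top + h, left:left + w] = x
    return padded
```

At test time, the padded output would be fed to the classifier, and the randomness is re-drawn for every input, which is what denies the attacker knowledge of the exact transformation.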
By combining the proposed randomization method with an adversarially trained model, our submission ranked \textbf{No.2} in the NIPS adversarial defense challenge. \begin{figure}[h!] \centering \vspace{-0.2cm} \includegraphics[width=1.0\columnwidth]{figures/iyswim/pipeline.pdf} \caption{The pipeline of the proposed defense method. The input image $x$ first goes through the random resizing layer with \textbf{a} random scale applied. Then the random padding layer pads the resized image $x^{\prime}$ in \textbf{a} random manner. The resulting padded image $x^{\prime\prime}$ is used for classification.} \label{Fig:pipline} \vspace{-0.5cm} \end{figure} \subsubsection{Randomization as defense} Intuitively, the adversarial perturbations generated by iterative attacks may easily become over-fitted to the specific network parameters, and thus be less transferable. Due to this weak generalization ability, we hypothesize that low-level image transformations, e.g., resizing, padding, compression, etc., may destroy the specific structure of adversarial perturbations, thus making them a good defense. They can even defend against white-box iterative attacks if random transformations are applied, because each test image goes through a random transformation and the attacker does not know the specific transformation when generating the adversarial noise. \subsubsection{Randomization layers} The first randomization layer is a random resizing layer, which resizes the original input image $x$ of size $W \times H \times 3$ to a new image $x^{\prime}$ with random size $W^\prime \times H^\prime \times 3$. Note that $|W^\prime - W|$ and $|H^\prime - H|$ should be within a reasonably small range, otherwise the network performance on clean images would drop significantly. Taking the Inception-ResNet network~\cite{szegedy2017inception} as an example, the original data input size is $299 \times 299 \times 3$.
Empirically we found that the network accuracy hardly drops if we control the height and width of the resized image $x^\prime$ to be within the range $\left[299, 331\right)$. The second randomization layer is the random padding layer, which pads zeros around the resized image in a random manner. Specifically, by padding the resized image $x^\prime$ into a new image $x^{\prime\prime}$ with the size $W^{\prime\prime} \times H^{\prime\prime} \times 3$, we can choose to pad $w$ zero pixels on the left, $W^{\prime\prime} - W^{\prime} - w$ zero pixels on the right, $h$ zero pixels on the top and $H^{\prime\prime} - H^{\prime} - h$ zero pixels on the bottom. This results in a total number of $(W^{\prime\prime} - W^{\prime} + 1) \times (H^{\prime\prime} - H^{\prime} + 1)$ different possible padding patterns. During implementation, the original image first goes through two randomization layers, and then we pass the transformed image to the original CNN for classification. The pipeline is illustrated in figure \ref{Fig:pipline}. \subsubsection{Randomization layers + adversarial training} \label{sec:random+adv_train} Recently, adversarial training~\cite{Kurakin-AdversarialMlAtScale, Tramer2017-EAT} was developed as an effective defense for single-step attacks. Thus by adding the proposed random transformations as additional layers to an adversarially trained model~\cite{Tramer2017-EAT}, it is expected that this method is able to effectively defend against both single-step and iterative attacks, including both black-box and white-box settings. \subsubsection{Submission details and results} An adversarially trained model appended with randomization layers was submitted as our defense model to the challenge. 
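As an illustration, the two randomization layers can be sketched in a few lines of NumPy (a hypothetical minimal version with nearest-neighbour resizing; the actual submission was implemented with TensorFlow ops):

```python
import numpy as np

def random_resize_pad(x, out_size=331, low=310, high=331, rng=None):
    """Sketch of the two randomization layers: nearest-neighbour random
    resize to a size drawn from [low, high), then random zero-padding to
    out_size x out_size. Illustrative only, not the submission's code."""
    rng = np.random.default_rng() if rng is None else rng
    h, w, c = x.shape
    # -- random resizing layer --
    new_h = int(rng.integers(low, high))
    new_w = int(rng.integers(low, high))
    rows = np.arange(new_h) * h // new_h
    cols = np.arange(new_w) * w // new_w
    resized = x[rows][:, cols]                       # nearest-neighbour resize
    # -- random padding layer --
    top = int(rng.integers(0, out_size - new_h + 1))
    left = int(rng.integers(0, out_size - new_w + 1))
    padded = np.zeros((out_size, out_size, c), dtype=x.dtype)
    padded[top:top + new_h, left:left + new_w] = resized
    return padded
```

Each call draws a fresh size and offset, so an attacker cannot anticipate the exact transformation applied at test time.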
Specifically, we (1) set the resizing range to be $[310, 331)$; (2) set the padding size to be $331 \times 331 \times 3$; (3) choose the adversarially trained model to be \textit{ens-adv-Inception-Resnet-v2}\footnote{\url{https://download.tensorflow.org/models/ens_adv_inception_resnet_v2_2017_08_18.tar.gz}}; (4) average the prediction results over $30$ randomization patterns for each image; (5) flip the input image with probability $0.5$ for each randomization pattern. The whole implementation is publicly available\footnote{\url{https://github.com/cihangxie/NIPS2017_adv_challenge_defense}}. By evaluating our model against $156$ different attacks, it reaches a normalized score of $0.924$ (ranked No.$2$ among $107$ defense models), which is far better than using ensemble adversarial training~\cite{Tramer2017-EAT} alone with a normalized score of $0.773$. This result further demonstrates that the proposed randomization method can effectively make deep networks much more robust to adversarial attacks. \subsubsection{Attackers with more information} When we submitted the proposed defense method to the NIPS competition, the randomization layers remained an unknown network module to the attackers. We thus test the robustness of this defense method further by assuming that the attackers are aware of the existence of the randomization layers. Extensive experiments are performed in \cite{xie2017mitigating}, and they show that the attackers still cannot break this defense completely in practice. Interested readers can refer to \cite{xie2017mitigating} for more details. \subsection{4th place in defense track: team erko} \label{sec:adv_comp:submission_erko} \runinhead{Team members:} Yerkebulan Berdibekov In this section, I describe a very simple defense solution against adversarial attacks using spatial smoothing on the input of adversarially trained models. This solution took 4th place in the final round.
Using spatial smoothing, in particular median filtering with a 2-by-2 window on images, and processing them only with adversarially trained models, we can achieve a simple and decent defense against black-box attacks. Additionally, this approach can work alongside other defense solutions that use randomization (data augmentation and other types of defenses). Adversarially trained models are models trained on adversarial examples along with a given original dataset. In the usual procedure for adversarial training, during the training phase half of each mini-batch of images is replaced with adversarial examples generated on the model itself (white-box attacks). This can provide robustness against future white-box attacks. However, as described in~\cite{Tramer2017-EAT}, \emph{gradient masking} makes finding adversarial examples on the model itself a challenging task. Due to this, adversarially trained models cannot guarantee robustness against black-box attacks. Many other techniques have been developed to overcome these problems. \subsubsection{Architecture of Defense Model} \label{sec:adv_comp:submission_erko:architecture} Figure~\ref{fig:adv_comp:submission_erko:architecture} below shows the architecture of my simple defense model: an input image is median filtered, and the filtered image is then fed to an ensemble of adversarially trained models. The resulting predictions are averaged. However, as described in the sections below, many other variations of ensembles and single models were tested. The best results were achieved using an ensemble of all adversarially trained models with median filtering.
\begin{figure}[h] \centering \includegraphics[scale=.25]{figures/erko/erko_architecture} \caption{Architecture of simple defense model, using median filtering with only adversarially trained models.} \label{fig:adv_comp:submission_erko:architecture} \end{figure} \subsubsection{Spatial smoothing: median filtering.} \label{sec:adv_comp:submission_erko:spatial} Median filtering is often used in image/photo pre-processing to reduce noise while preserving edges and other features. It is robust against random high-magnitude perturbations resembling salt-and-pepper noise. Photographers also use median filtering to increase photo quality, so ImageNet may contain many median-filtered images. Other major advantages of median filtering include: \begin{itemize} \item{Median filtering does not harm classification accuracy on clean examples, as shown in the experiments in Section~\ref{sec:adv_comp:submission_erko:experiments}.} \item{It does not require additional expensive training procedures beyond the adversarially trained models themselves.} \end{itemize} \subsubsection{Experiments} \label{sec:adv_comp:submission_erko:experiments} I have experimentally observed that median filtering alone cannot defend against strong adversarial attacks such as those described by Carlini and Wagner~\cite{Carlin-Wagner-attack}. However, I have also observed that combining median filtering with adversarially trained models yields a robust defense against adversarial attacks. In my experiments I used the dataset provided by the competition organizers and a modified C\&W L2 attack to generate adversarial examples. These examples were later used to calculate the adversarial example misclassification ratio (the number of wrong classifications divided by the number of all examples) and to rank defenses. To generate adversarial examples I used either a single model or an ensemble of models (a list of multiple models is indicated in each cell).
In all experiments I used a hold-out \verb|inception_v4| model that was not used to generate adversarial examples (see Table~\ref{tab:adv_comp:submission_erko:table1}, Table~\ref{tab:adv_comp:submission_erko:table2}). This allowed me to test the transferability of attacks and the effects of spatial smoothing. \subsubsection{Effects of median filtering} \label{sec:adv_comp:submission_erko:effects} The holdout \verb|inception_v4| model performs nearly the same with and without median filtering, and the same holds for other non-adversarially trained models: with or without median filtering, the differences in misclassification ratio are small. Adversarially trained models with median filtering, however, show a good defense against attacks. An ensemble of these adversarially trained models with median-filtered images is robust against black-box attacks and against attacks generated by an ensemble containing the same models (see Table~\ref{tab:adv_comp:submission_erko:table1}, Table~\ref{tab:adv_comp:submission_erko:table2}). This is not exactly a white-box attack, because we generate adversarial examples on a model without the filtering layer. For example, we use a pre-trained \verb|ens3_adv_inception_v3| model to generate adversarial examples. These images are median filtered and fed to the model again to check the misclassification ratios. All these attacks were generated using a maximum pixel perturbation of $\epsilon = 16$.
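For concreteness, the 2-by-2 median filtering step can be sketched in NumPy (a minimal per-channel version with edge replication; illustrative only, not the submission's actual code):

```python
import numpy as np

def median_filter_2x2(img):
    """2x2 median filtering of an H x W x C image (minimal sketch).
    Pads by edge replication so the output keeps the input shape."""
    p = np.pad(img, ((0, 1), (0, 1), (0, 0)), mode='edge')
    h, w = img.shape[:2]
    # the four pixels of each 2x2 window, stacked along a new axis
    windows = np.stack([p[:h, :w], p[1:h + 1, :w],
                        p[:h, 1:w + 1], p[1:h + 1, 1:w + 1]])
    return np.median(windows, axis=0)
```

A single high-magnitude pixel (salt-and-pepper-like noise) is removed entirely, while constant regions are left untouched.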
In the case of the best ensemble defense against the best ensemble attacker, I tested other values of $\epsilon$ and plotted Figure~\ref{fig:adv_comp:submission_erko:graph1}, showing that for lower $\epsilon$ values this defense approach is more robust against attacks (exact values in Table~\ref{tab:adv_comp:submission_erko:table3}): \begin{figure}[h] \centering \includegraphics[scale=.4]{figures/erko/erko_graph1} \caption{Adversarial examples misclassification ratio, percentage} \label{fig:adv_comp:submission_erko:graph1} \end{figure} \newcommand{\specialcell}[2][c]{% \begin{tabular}[#1]{@{}l@{}}#2\end{tabular}} \begin{table}[h] \centering \caption{Misclassification ratio without filtering, percentage. Rows are defenders; columns are attackers. Even ensembles of adversarially trained models are not robust against good attackers.} \label{tab:adv_comp:submission_erko:table1} \setlength\tabcolsep{6pt} \begin{tabular}{p{4.9cm}cccc} \hline\noalign{\smallskip} Defenders\textbackslash{}Attackers & inception\_v3 & $A$ & $B$ & $C$ \\ \noalign{\smallskip}\svhline\noalign{\smallskip} inception\_v3 & 100.00 & 100.00 & 26.25 & 99.38\\\hline inception\_v4 & 42.50 & 80.63 & 21.88 & 62.50\\\hline adv\_inception\_v3 & 20.62 & 41.25 & 100.00 & 100.00\\\hline ens3\_adv\_inception\_v3 & 15.62 & 38.13 & 100.00 & 99.38\\\hline ens\_adv\_inception\_resnet\_v2 & 10.62 & 23.75 & 94.38 & 95.00\\\hline \specialcell[t]{adv\_inception\_v3\\ens3\_adv\_inception\_v3} & 15.00 & 36.25 & 100.00 & 100.00\\\hline \specialcell[t]{adv\_inception\_v3\\ens3\_adv\_inception\_v3\\ens4\_adv\_inception\_v3} & 16.25 & 33.13 & 100.00 & 99.38\\\hline \specialcell[t]{adv\_inception\_v3\\ens3\_adv\_inception\_v3\\ens\_adv\_inception\_resnet\_v2\\ens4\_adv\_inception\_v3} & 12.50 & 28.75 & 100.00 & 99.38\\ \noalign{\smallskip}\hline\noalign{\smallskip} \end{tabular} \\ \raggedright Where $A$ is an ensemble of inception\_v3, inception\_resnet\_v2, resnet\_v1\_101, resnet\_v1\_50, resnet\_v2\_101,
resnet\_v2\_50, vgg\_16;\\ $B$ is an ensemble of adv\_inception\_v3, ens3\_adv\_inception\_v3, ens\_adv\_inception\_resnet\_v2, ens4\_adv\_inception\_v3;\\ $C$ is an ensemble of inception\_v3, adv\_inception\_v3, ens3\_adv\_inception\_v3, ens\_adv\_inception\_resnet\_v2, ens4\_adv\_inception\_v3, inception\_resnet\_v2, resnet\_v1\_101, resnet\_v1\_50, resnet\_v2\_101. \end{table} \begin{table}[h] \centering \caption{Misclassification ratio with filtering, percentage. Adversarially trained models with median filtering show better robustness against many kinds of attacks within these experiments. The inception\_v4 model with median filtering performs nearly the same on all attacks as without filtering, and the same holds for other non-adversarially trained models. Therefore, I speculate that median filtering by itself is not cleaning or mitigating adversarial examples.} \label{tab:adv_comp:submission_erko:table2} \setlength\tabcolsep{6pt} \begin{tabular}{p{4.9cm}cccc} \hline\noalign{\smallskip} Defenders\textbackslash{}Attackers & inception\_v3 & $A$ & $B$ & $C$ \\ \noalign{\smallskip}\svhline\noalign{\smallskip} inception\_v3 & 100.00 & 97.50 & 27.50 & 95.63\\\hline inception\_v4 & 40.00 & 75.63 & 22.50 & 57.50\\\hline adv\_inception\_v3 & 21.88 & 43.13 & 33.13 & 40.00\\\hline ens3\_adv\_inception\_v3 & 21.88 & 43.75 & 57.50 & 58.13\\\hline ens\_adv\_inception\_resnet\_v2 & 13.13 & 30.63 & 30.63 & 39.38\\\hline \specialcell[t]{adv\_inception\_v3\\ens3\_adv\_inception\_v3} & 17.50 & 40.00 & 43.75 & 47.50\\\hline \specialcell[t]{adv\_inception\_v3\\ens3\_adv\_inception\_v3\\ens4\_adv\_inception\_v3} & 17.50 & 38.75 & 43.75 & 48.75\\\hline \specialcell[t]{adv\_inception\_v3\\ens3\_adv\_inception\_v3\\ens\_adv\_inception\_resnet\_v2\\ens4\_adv\_inception\_v3} & 14.38 & 35.00 & 39.38 & 43.13\\ \noalign{\smallskip}\hline\noalign{\smallskip} \end{tabular} \\ \raggedright Where $A$ is an ensemble of inception\_v3, inception\_resnet\_v2, resnet\_v1\_101, resnet\_v1\_50, resnet\_v2\_101, resnet\_v2\_50,
vgg\_16;\\ $B$ is an ensemble of adv\_inception\_v3, ens3\_adv\_inception\_v3, ens\_adv\_inception\_resnet\_v2, ens4\_adv\_inception\_v3;\\ $C$ is an ensemble of inception\_v3, adv\_inception\_v3, ens3\_adv\_inception\_v3, ens\_adv\_inception\_resnet\_v2, ens4\_adv\_inception\_v3, inception\_resnet\_v2, resnet\_v1\_101, resnet\_v1\_50, resnet\_v2\_101. \end{table} \begin{table}[h] \centering \caption{Misclassification ratio for several $\epsilon$ values, percentage. At smaller $\epsilon$ values, median filtering shows even better robustness to adversarial attacks.} \label{tab:adv_comp:submission_erko:table3} \setlength\tabcolsep{6pt} \begin{tabular}{p{6cm}cccc} \hline\noalign{\smallskip} Defenders & $\epsilon$=16 & $\epsilon$=8 & $\epsilon$=4 & $\epsilon$=2 \\ \noalign{\smallskip}\svhline\noalign{\smallskip} Ensemble of adversarial models, non-filtered input & 99.375 & 98.125 & 96.875 & 91.875\\ Ensemble of adversarial models, filtered input & 43.125 & 27.500 & 17.500 & 10.625\\ \noalign{\smallskip}\hline\noalign{\smallskip} \end{tabular} \end{table} \subsubsection{Submission results} \label{sec:adv_comp:submission_erko:results} The competition results confirmed that adversarially trained models with median filtering are indeed robust to most types of attacks. These results suggest further study of this effect in adversarially trained models. During the competition, new types of attacks were developed with smoothed adversarial examples that can fool spatially smoothed defenses with misclassification ratios as high as 50--60\% and with high transferability. These were the best attackers developed in the non-targeted and targeted adversarial attack competition tracks. Additional study is needed to defend against these new types of attacks. \subsection{4th place in non-targeted attack track: team iwiwi} \runinhead{Team members:} Takuya Akiba, Seiya Tokui, and Motoki Abe In this section, we explain the submission from team \emph{iwiwi} to the non-targeted attack track.
The approach is quite different from that of other teams: training fully convolutional networks (FCNs) that convert clean examples into adversarial examples. The team took 4th place. \subsubsection{Basic Framework} Given a clean input image $x$, we generate an adversarial example as follows: \begin{equation*} x^{adv} = {Clip}_{[0, 1]}( x + a(x; \theta_a) ). \end{equation*} Here, $a$ is a differentiable function represented by an FCN with parameters $\theta_a$. We call $a$ an \emph{attack FCN}. It outputs $c \times h \times w$ tensors, where $c$, $h$, and $w$ are the number of channels, the height, and the width of $x$. The values of the output are in the range $[-\epsilon, +\epsilon]$. During the training of the attack FCN, to confuse image classifiers, we maximize the loss $J(f(x^{adv}), y)$, where $f$ is a pre-trained image classifier. We refer to $f$ as the target model. Specifically, we optimize $\theta_a$ to maximize the following value: \begin{equation*} \sum_{x \in \mathcal{X}} J \left(f \left( {Clip}_{[0, 1]} \left( x + a \left(x; \theta_a \right) \right) \right), y \right). \end{equation*} This framework has some commonality with the work by Baluja and Fischer~\cite{ATN2017}, who also propose to train neural networks that produce adversarial examples. However, while we impose a hard constraint on the distance between clean and adversarial examples, they treat the distance as one of the optimization objectives to minimize. In addition, we used a much larger FCN model and stronger computation power, together with several new ideas such as multi-target training, multi-task training, and gradient hints, which are explained in the next subsection. \subsubsection{Empirical Enhancement} \myparagraph{Multi-Target Training} To obtain adversarial examples that generalize to different image classifiers, we use multiple target models to train the attack FCN. We maximize the sum of the losses of all models.
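The generation step of the basic framework can be sketched as follows (a minimal NumPy illustration; \texttt{attack\_fcn} and the $\tanh$ bounding are illustrative assumptions, not the team's actual code):

```python
import numpy as np

def generate_adv(x, attack_fcn, eps=16 / 255):
    """x_adv = Clip_[0,1](x + a(x; theta_a)), with the perturbation a(x)
    bounded to [-eps, +eps] here via a tanh nonlinearity (one simple way
    to enforce the hard distance constraint)."""
    perturbation = eps * np.tanh(attack_fcn(x))   # values in [-eps, +eps]
    return np.clip(x + perturbation, 0.0, 1.0)
```

Since $a$ is differentiable, the classifier loss $J(f(x^{adv}), y)$ can be backpropagated through this expression to train $\theta_a$.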
In this competition, we used eight models: \textit{(1)} ResNet50, \textit{(2)} VGG16, \textit{(3)} Inception v3, \textit{(4)} Inception v3 with adversarial training, \textit{(5)} Inception v3 with ensemble adversarial training (EAT) using three models, \textit{(6)} Inception v3 with EAT using four models, \textit{(7)} Inception ResNet v2, and \textit{(8)} Inception ResNet v2 with EAT. All of these classifier models are available online. \myparagraph{Multi-Task Training} A naive approach to constructing an FCN so that it outputs values in the range $[-\epsilon, +\epsilon]$ is to apply the $\text{tanh}$ function to the last output and then multiply it by $\epsilon$. However, in this way the FCN cannot finely control the magnitude of the perturbation, as $\epsilon$ is not given to the FCN. To cope with this issue, we take advantage of discreteness. In this competition, $\epsilon$ can take 13 values: $\frac{4}{256}, \frac{5}{256}, \ldots, \frac{16}{256}$. We consider adversarial attacks with different $\epsilon$ values as different tasks and employ multi-task training. Specifically, the FCN outputs a tensor with shape $13 \times c \times h \times w$, where the first dimension corresponds to the $\epsilon$ value. \myparagraph{Gradient Hints} Attack methods that use the gradients on image pixels work well. Therefore, these gradients are useful signals for generating adversarial examples. Thus, in addition to clean examples, we also use these gradients as input to the FCN. In this competition, we used gradients from Inception ResNet v2 with EAT, which was the strongest defense model publicly available. \subsubsection{Results and Discussion} The team ranked 4th among about one hundred teams. In addition, the team ranked 1st in a 3rd-party PageRank-like analysis\footnote{\url{https://www.kaggle.com/anlthms/pagerank-ish-scoring}}, which shows that this attack method is especially effective against strong defense methods.
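The multi-task output head can be sketched as follows (hypothetical names; it assumes the FCN already emits a $13 \times c \times h \times w$ tensor, one slice per admissible $\epsilon$):

```python
import numpy as np

# the 13 admissible eps values of the competition
EPS_VALUES = [e / 256 for e in range(4, 17)]

def perturbation_for_eps(fcn_output, eps):
    """Select the output slice trained for this eps and bound it to
    [-eps, +eps] with tanh. fcn_output has shape (13, c, h, w)."""
    task = EPS_VALUES.index(eps)
    return eps * np.tanh(fcn_output[task])
```

Each $\epsilon$ value thus gets its own output channel group, so the network can shape the perturbation differently for small and large budgets.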
In addition to its effectiveness, the generated attack images have an interesting appearance (Figure~\ref{fig:iwiwi}; more examples are available online\footnote{\url{https://github.com/pfnet-research/nips17-adversarial-attack}}). We observe two properties in the generated images: detailed textures are canceled out, and jigsaw-puzzle-like patterns are added. These properties deceive image classifiers into answering the jigsaw puzzle class. \begin{figure}[t] \centering \includegraphics[width=.32\hsize]{figures/iwiwi/clean.png} \includegraphics[width=.32\hsize]{figures/iwiwi/attack.png} \includegraphics[width=.32\hsize]{figures/iwiwi/perturbation.png} \caption{A clean example (left), an adversarial example generated by our method (middle), and their difference (right), where $\epsilon = \frac{16}{255}$.} \label{fig:iwiwi} \end{figure} \subsection{3rd place in targeted attack track: team FatFingers} \label{sec:adv_comp:submission_t3} \runinhead{Team members:} Yao Zhao, Yuzhe Zhao, Zhonglin Han and Junjiajia Long We propose a dynamic iterative ensemble targeted attack method, which builds iterative attacks on a loss ensemble of neural networks, focusing on the classifiers that are harder to perturb. Our method was tested among 65 attackers against 107 defenders in the NIPS-Kaggle competition and achieved 3rd place in the targeted attack ranking.
\subsubsection{Targeted Attack Model Transfer} In our experiments, we compared variants of single-step attack methods and iterative attack methods, including the two basic forms of these attacks: the fast gradient sign (FGS) method \begin{equation}\label{eq:adv_comp:submission_t3:fgsm} \textbf{x}^{adv} =\textbf{x} + \epsilon \cdot sign \bigl( \nabla_x J(f(\textbf{x}), y_{true}) \bigr) \end{equation} and iterative sign attacks: \begin{equation}\label{eq:adv_comp:submission_t3:isa} \textbf{x}^{adv}_{t+1} = {clip}_{\epsilon, \textbf{x}} \left \{ \textbf{x}^{adv}_t + \alpha \cdot sign \bigl( \nabla_x J(f(\textbf{x}^{adv}_t), y_{true}) \bigr)\right \} \end{equation} To evaluate the ability of black-box targeted attacks, we built iterative attack methods (10 iterations) using single models against many single-model defenders individually on 1000 images. Fig.~\ref{fig:adv_comp:submission_t3:hittarget} shows the matrix of target hitting for 10 attacking models, while Fig.~\ref{fig:adv_comp:submission_t3:defense} shows their defense capabilities. White-box targeted adversarial attacks are generally successful, even against adversarially trained models. Though targeted adversarial attacks built on single models lower the accuracy of defenders based on a different model, the hit rates are close to zero. \begin{figure}[h] \begin{center} \includegraphics[width=\textwidth]{figures/fatfingers/hittarget} \end{center} \caption{Target Hitting Matrix} \label{fig:adv_comp:submission_t3:hittarget} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[width=\textwidth]{figures/fatfingers/defense} \end{center} \caption{Defender Accuracy Matrix} \label{fig:adv_comp:submission_t3:defense} \end{figure} \subsubsection{Ensemble Attack Methods} Since targeted attacks against unknown models have very low hit rates, it is important to combine a larger number of known models more efficiently to attack a pool of unknown models or their ensembles.
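The iterative sign attack of Eq.~(\ref{eq:adv_comp:submission_t3:isa}) can be sketched numerically as follows (a minimal NumPy version; \texttt{grad\_fn} stands in for $\nabla_x J(f(\textbf{x}^{adv}_t), y_{true})$):

```python
import numpy as np

def iterative_sign_attack(x, grad_fn, eps=16 / 255, alpha=1 / 255, steps=10):
    """Iterative sign attack: ascend the loss along the sign of the
    gradient, clipping to the eps-ball around x and to [0, 1]."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)   # project to the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)
    return x_adv
```

With step size $\alpha$ and budget $\epsilon$, the perturbation grows by at most $\alpha$ per iteration until the $\epsilon$-ball boundary is reached.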
A probability ensemble is a common way to combine a number of classifiers (sometimes called majority vote). However, the resulting loss function is usually hard to optimize because the parameters of the different classifiers are coupled inside the logarithm. \begin{equation} \label{eq:adv_comp:submission_t3:prob_ensemble} J_{prob}\left ( \textbf{x}, y \right ) = -\sum \limits_{j}^N y_j \log\left ( \frac{1}{M} \sum \limits_{i}^M p_{ij} \left ( \textbf{x} \right ) \right ) \end{equation} By Jensen's inequality, an upper bound is obtained for this loss function. Instead of minimizing $J_{prob} (\textbf{x}, y)$, we propose to optimize the upper bound. This way of combining classifiers is called a loss ensemble. With the following new loss function, the parameters of the different neural networks are decoupled, which helps the optimization. \begin{equation} \label{eq:adv_comp:submission_t3:loss_ensemble} J_{prob}\left ( \textbf{x}, y \right ) \le - \frac{1}{M} \sum \limits_{j}^N \sum \limits_{i}^M y_{j} \log \left ( p_{ij} \left (\textbf{x} \right ) \right ) = J_{loss} \left ( \textbf{x}, y \right ) \end{equation} \begin{figure}[h] \begin{center} \includegraphics[width=\textwidth]{figures/fatfingers/loss_vs_prob_larger_fonts} \end{center} \caption{Loss ensemble vs.\ probability ensemble. Targeted attacks using the loss ensemble method outperform those using the probability ensemble at a given number of iterations.} \label{fig:adv_comp:submission_t3:loss_prob} \end{figure} Comparisons between targeted attacks using the loss ensemble and the probability ensemble at given numbers of iterations are shown in Fig.~\ref{fig:adv_comp:submission_t3:loss_prob}. In general, the loss ensemble is superior to the probability ensemble for targeted attacks. \subsubsection{Dynamic Iterative Ensemble Attack} The difficulty of attacking each individual neural network model within an ensemble can be quite different.
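As a quick numerical check, the Jensen bound $J_{prob} \le J_{loss}$ from the previous subsection can be verified on random classifier outputs (a toy NumPy sketch):

```python
import numpy as np

def ensemble_losses(p, y):
    """Cross-entropy of the probability ensemble (J_prob) and of the loss
    ensemble (J_loss) for per-classifier class probabilities p of shape
    (M, N) and a one-hot label y; Jensen's inequality gives
    J_prob <= J_loss."""
    j_prob = -np.sum(y * np.log(p.mean(axis=0)))
    j_loss = -np.mean(np.sum(y * np.log(p), axis=1))
    return j_prob, j_loss
```

Equality holds when all classifiers agree; the gap grows with their disagreement on the labeled class.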
We compared iterative attack methods with different parameters and found that the number of iterations is the most crucial, as shown in Fig.~\ref{fig:adv_comp:submission_t3:diea}. For example, attacking an adversarially trained model with a high success rate takes significantly more iterations than attacking normal models. \begin{figure}[h] \begin{center} \includegraphics[width=\textwidth]{figures/fatfingers/iterations} \end{center} \caption{Dynamic iterative ensemble attack results for three selected models} \label{fig:adv_comp:submission_t3:diea} \end{figure} \begin{equation}\label{eq:adv_comp:submission_t3:diea} \textbf{x}^{adv}_{t+1} = {clip}_{\epsilon, \textbf{x}} \left \{ \textbf{x}^{adv}_t + \alpha \cdot sign \bigl( \frac{1}{M} \sum \limits_{k}^M \delta_{tk} \nabla_x J_k(f(\textbf{x}^{adv}_t), y_{true}) \bigr)\right \} \end{equation} For tasks where computation is limited, we implemented a method that pre-assigns the number of iterations for each model or dynamically adjusts whether to include a model in each step of the attack by observing whether the loss function for that model is small enough. As shown in Eq.~\ref{eq:adv_comp:submission_t3:diea}, $\delta_{tk} \in \{0, 1\}$ determines whether the loss for model $k$ is included in the total loss at time step $t$.
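One update step of this dynamic scheme can be sketched as follows (a hypothetical NumPy illustration of the $\delta_{tk}$ gating; the names and the threshold rule are illustrative, not the team's actual code):

```python
import numpy as np

def dynamic_ensemble_step(x_adv, x, grad_fns, loss_fns, eps, alpha, thresh):
    """One update of the dynamic iterative ensemble attack: model k is
    included (delta_k = 1) only while its loss is still above `thresh`,
    i.e. while it is not yet sufficiently attacked."""
    delta = [1.0 if loss(x_adv) > thresh else 0.0 for loss in loss_fns]
    grad = sum(d * g(x_adv) for d, g in zip(delta, grad_fns)) / len(grad_fns)
    x_adv = x_adv + alpha * np.sign(grad)
    x_adv = np.clip(x_adv, x - eps, x + eps)   # stay in the eps-ball around x
    return np.clip(x_adv, 0.0, 1.0)
```

Models whose loss has already crossed the threshold stop contributing gradient, so the remaining iteration budget concentrates on the classifiers that are harder to perturb.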
\section{Introduction} Although lightly doped antiferromagnets are not yet high-temperature superconductors, understanding them is a great challenge in condensed matter physics. A lot is known about hole- and electron-doped systems both from experiments and from studies of microscopic Hubbard or $t$-$J$-type models \cite{Bri70,Hal83,And87,Gro87,Shr88,Cha89,Kan89,Sac89,Wen89,Sha90,Mon91,Kue93,Kuc93,Fla93,Sus94,Dag94,Goo94,Bel95,Kuc95,Yam99,Bru00,Kus02,Arm02,Yam03,Sac03,Mar03,Lee03,Kus03,Sen04,Yua04,Toh04,Guo06,Aic06}. Based on the work of Haldane \cite{Hal83} and of Chakravarty, Halperin, and Nelson \cite{Cha89}, who described the low-energy magnon physics by a $(2+1)$-d $O(3)$-invariant nonlinear $\sigma$-model, several attempts have been made to include charge carriers in the effective theory \cite{Shr88,Wen89,Sha90,Kue93}. However, conflicting results have been obtained. For example, the various approaches differ in the fermion field content of the effective theory and in how various symmetries are realized on those fields. In particular, it has not yet been established that any of the effective theories proposed so far indeed correctly describes the low-energy physics of the underlying microscopic systems in a quantitative manner. In analogy to chiral perturbation theory for the pseudo-Nambu-Goldstone pions of QCD \cite{Wei79,Gas85}, the $(2+1)$-d $O(3)$-invariant nonlinear $\sigma$-model has been established as a systematic and quantitatively correct low-energy effective field theory in the pure magnon sector \cite{Cha89,Neu89,Fis89,Has90,Has91,Has93,Chu94,Leu94,Hof99}. In analogy to baryon chiral perturbation theory \cite{Geo84,Gas88,Jen91,Ber92,Bec99} --- the effective theory for pions and nucleons --- we have recently extended the pure magnon effective theory by including charge carriers \cite{Kae05,Bru05,Bru06}.
The effective theory provides a powerful theoretical framework in which the low-energy physics of magnons and charge carriers can be addressed in a systematic manner. The predictions of the effective theory are universal and apply to a large class of doped antiferromagnets. This is in contrast to calculations in microscopic models, which usually suffer from uncontrolled approximations and are limited to just one underlying system. While some results obtained with the effective theory can be obtained directly from microscopic systems, the effective field theory treatment allows us to derive such results in a systematic and more transparent manner, and it puts them on a solid theoretical basis. In order not to obscure the basic physics of magnons and charge carriers, the effective theory has been based on microscopic systems that share the symmetries of Hubbard or $t$-$J$-type models. In particular, effects of impurities, long-range Coulomb forces, anisotropies, or small couplings between different $CuO_2$ layers have so far been neglected, but can be added whenever this becomes desirable. Before such effects have been included, one should be aware of the fact that the effective theory does not describe the actual materials in all details. Still, for systems that share the symmetries of the Hubbard or $t$-$J$ model, the effective theory makes predictions that are exact, order by order in a systematic low-energy expansion. Hole-doped cuprates have hole pockets centered at lattice momenta $(\pm \frac{\pi}{2a},\pm \frac{\pi}{2a})$. The location of the hole pockets has important consequences for the fermion field content of the effective theory and for the realization of the various symmetries of these fields. In electron-doped cuprates the charged quasiparticles reside in momentum space pockets centered at $(\frac{\pi}{a},0)$ or $(0,\frac{\pi}{a})$ \cite{Lee03,Kus03,Sen04,Yua04,Toh04,Guo06,Aic06}.
We have computed the single-electron dispersion relation in the $t$-$t'$-$J$ model shown in figure 1. The energy $E(\vec p)$ of an electron is indeed minimal when its lattice momentum $\vec p = (p_1,p_2)$ is located in an electron pocket centered at $(\frac{\pi}{a},0)$ or $(0,\frac{\pi}{a})$. \begin{figure}[t] \begin{center} \vspace{-2.4cm} \epsfig{file=electron_pockets_NEW2.eps,width=15cm} \end{center} \vspace{-1cm} \caption{\it The dispersion relation $E(\vec p)$ of a single electron in the $t$-$t'$-$J$ model (on a $32 \times 32$ lattice for $J = 0.4 t$ and $t' =-0.3 t$) with electron pockets centered at $(\frac{\pi}{a},0)$ and $(0,\frac{\pi}{a})$.} \end{figure} The location of these pockets again has important effects on the electron dynamics, which turns out to be quite different from that of the holes. In particular, in contrast to hole-doped systems, in electron-doped antiferromagnets the magnon-mediated forces between two electrons depend on the total momentum $\vec P$ of the pair. For $\vec P = 0$ the one-magnon exchange potential between two electrons at distance $r$ is proportional to $1/r^4$, while in the hole case it has a $1/r^2$ dependence. The different locations of electron and hole pockets also affect the phase structure. While spiral phases are possible in the hole-doped case \cite{Shr88,Kan89,Bru06a}, they are absent in electron-doped cuprates \cite{Goo94,Yam99,Kus02,Yam03,Mar03}. The paper is organized as follows. In section 2 the symmetries of charge carrier fields are summarized. Based on this, the electron fields are identified and the hole fields are eliminated. The low-energy effective action for magnons and electrons is then constructed in a systematic manner. Section 3 contains the derivation of the one-magnon exchange potential between two electrons as well as a discussion of the corresponding Schr\"odinger equation. 
In section 4 spiral configurations of the staggered magnetization and in section 5 the reduction of the staggered magnetization upon doping are investigated. Section 6 contains our conclusions. The somewhat subtle transformation of the one-magnon exchange potential from momentum to coordinate space is discussed in an appendix. \section{Symmetries of Magnon and Electron Fields} In this section, based on \cite{Kae05,Bru06}, we summarize the transformation properties of magnon and charge carrier fields. We then identify the electron fields and eliminate the hole fields in order to construct the low-energy effective theory for magnons and electrons. \subsection{Symmetries of Magnon Fields} In an antiferromagnet the spontaneous breaking of the $SU(2)_s$ spin symmetry down to $U(1)_s$ gives rise to two massless magnons. The staggered magnetization is described by a unit-vector field \begin{equation} \vec e(x) = (e_1(x),e_2(x),e_3(x)) = (\sin\theta(x) \cos\varphi(x),\sin\theta(x) \sin\varphi(x),\cos\theta(x)), \end{equation} in the coset space $SU(2)_s/U(1)_s = S^2$, where $x = (x_1,x_2,t)$ is a point in $(2+1)$-d space-time. It is convenient to use a ${\mathbb{C}} P(1)$ representation in terms of $2 \times 2$ Hermitean projection matrices $P(x)$ with \begin{equation} P(x) = \frac{1}{2}({\mathbbm{1}} + \vec e(x) \cdot \vec \sigma), \quad P(x)^\dagger = P(x), \quad \mbox{Tr} P(x) = 1, \quad P(x)^2 = P(x). 
\end{equation} As discussed in detail in \cite{Kae05}, the magnon field transforms as \begin{alignat}{3} SU(2)_s:&\quad &P(x)' &= g P(x) g^\dagger, \nonumber \\ SU(2)_Q:&\quad &^{\vec Q}P(x) &= P(x), \nonumber \\ D_i:&\quad &^{D_i}P(x) &= {\mathbbm{1}} - P(x), \nonumber \\ D'_i:&\quad &^{D'_i}P(x) &= P(x)^*, \nonumber \\ O:&\quad &^OP(x) &= P(Ox), &\quad Ox &= (- x_2,x_1,t), \nonumber \\ R:&\quad &^RP(x) &= P(Rx), &\quad Rx &= (x_1,- x_2,t), \nonumber \\ T:&\quad &^TP(x) &= {\mathbbm{1}} - P(Tx), &\quad Tx &= (x_1,x_2,- t), \nonumber \\ T':&\quad &^{T'}P(x) &= (i \sigma_2) \left[^TP(x)\right] (i \sigma_2)^\dagger = P(Tx)^*. \hspace{-4em} \end{alignat} The various symmetries are the $SU(2)_s$ spin rotations, the non-Abelian $SU(2)_Q$ extension of the $U(1)_Q$ fermion number symmetry (also known as pseudo-spin symmetry) that arises in the Hubbard model at half-filling, the displacement symmetry by one lattice spacing in the $i$-direction $D_i$, the symmetry $D_i$ combined with the spin rotation $i \sigma_2$ resulting in $D_i'$, as well as the 90 degrees rotation $O$, the reflection at the $x_1$-axis $R$, time reversal $T$, and $T$ combined with the spin rotation $i \sigma_2$ resulting in $T'$. The spontaneously broken $SU(2)_s$ symmetry is nonlinearly realized on the charge carrier fields. The global $SU(2)_s$ symmetry then manifests itself as a local $U(1)_s$ symmetry in the unbroken subgroup, and the charge carrier fields couple to the magnon field via composite vector fields. In order to construct these vector fields one first diagonalizes $P(x)$ by a unitary transformation $u(x) \in SU(2)$, i.e. 
\begin{gather} u(x) P(x) u(x)^\dagger = \frac{1}{2}({\mathbbm{1}} + \sigma_3) = \left(\begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array} \right), \qquad u_{11}(x) \geq 0, \nonumber \\[0.7ex] u(x) = \left(\begin{array}{cc} \cos\frac{\theta(x)}{2} & \sin\frac{\theta(x)}{2} \exp(- i \varphi(x)) \\ - \sin\frac{\theta(x)}{2} \exp(i \varphi(x)) & \cos\frac{\theta(x)}{2} \end{array}\right). \end{gather} Under a global $SU(2)_s$ transformation $g$, the diagonalizing field $u(x)$ transforms as \begin{equation} \label{trafou} u(x)' = h(x) u(x) g^\dagger, \qquad u_{11}(x)' \geq 0. \end{equation} This defines the nonlinear symmetry transformation \begin{equation} h(x) = \exp(i \alpha(x) \sigma_3) = \left(\begin{array}{cc} \exp(i \alpha(x)) & 0 \\ 0 & \exp(- i \alpha(x)) \end{array} \right) \in U(1)_s. \end{equation} Under the displacement symmetry $D_i$ the staggered magnetization changes sign, i.e.\ $^{D_i}\vec e(x) = - \vec e(x)$, and one obtains \begin{equation} ^{D_i}u(x) = \tau(x) u(x), \qquad \tau(x) = \left(\begin{array}{cc} 0 & - \exp(- i \varphi(x)) \\ \exp(i \varphi(x)) & 0 \end{array} \right). 
\end{equation} In order to couple magnons and charge carriers, one constructs the traceless anti-Hermitean field \begin{equation} v_\mu(x) = u(x) \partial_\mu u(x)^\dagger, \end{equation} which transforms as \begin{alignat}{3} SU(2)_s:&\quad &v_\mu(x)' &= h(x) [v_\mu(x) + \partial_\mu] h(x)^\dagger, \hspace{-5em} \nonumber \\[-.2ex] SU(2)_Q:&\quad &^{\vec Q}v_\mu(x) &= v_\mu(x), \nonumber \\ D_i:&\quad &^{D_i}v_\mu(x) &= \tau(x)[v_\mu(x) + \partial_\mu] \tau(x)^\dagger, \hspace{-5em} \nonumber \\ D'_i:&\quad &^{D'_i}v_\mu(x) &= v_\mu(x)^*, \nonumber \\ O:&\quad &^Ov_i(x) &= \varepsilon_{ij} v_j(Ox), \quad &^Ov_t(x) &= v_t(Ox), \nonumber \\ R:&\quad &^Rv_1(x) &= v_1(Rx), \quad &^Rv_2(x) &= - v_2(Rx), \quad ^Rv_t(x) = v_t(Rx), \nonumber \\ T:&\quad &^Tv_j(x) &= \ ^{D_i}v_j(Tx), \quad &^Tv_t(x) &= - \ ^{D_i}v_t(Tx), \nonumber \\ T':&\quad &^{T'}v_j(x) &= \ ^{D'_i}v_j(Tx), \quad &^{T'}v_t(x) &= - ^{D'_i}v_t(Tx). \end{alignat} The field $v_\mu(x)$ decomposes into an Abelian ``gauge'' field $v_\mu^3(x)$ and two ``charged'' vector fields $v_\mu^\pm(x)$, i.e. \begin{equation} v_\mu(x) = i v_\mu^a(x) \sigma_a, \qquad v_\mu^\pm(x) = v_\mu^1(x) \mp i v_\mu^2(x). \end{equation} \subsection{Fermion Fields in Momentum Space Pockets} In \cite{Bru06} matrix-valued charge carrier fields \begin{equation} \Psi^k(x) = \left(\begin{array}{cc} \psi^k_+(x) & \psi^{-k'\dagger}_-(x) \\ \psi^k_-(x) & - \psi^{-k' \dagger}_+(x) \end{array} \right), \quad \Psi^{k\dagger}(x) = \left(\begin{array}{cc} \psi^{k\dagger}_+(x) & \psi^{k\dagger}_-(x) \\ \psi^{-k'}_-(x) & - \psi^{-k'}_+(x) \end{array} \right) \end{equation} have been constructed. 
Here $k' = k + \big(\frac{\pi}{a},\frac{\pi}{a}\big)$ and $\psi^k_\pm(x)$ and $\psi^{k\dagger}_\pm(x)$ are independent Grassmann fields which are associated with the following eight lattice momentum values illustrated in figure 2 \begin{equation} k = (k_1,k_2) \in \left\{\big(0,0\big),\; \big(\frac{\pi}{a},\frac{\pi}{a}\big),\; \big(\frac{\pi}{a},0\big),\; \big(0,\frac{\pi}{a}\big),\; \big(\pm \frac{\pi}{2a},\pm \frac{\pi}{2a}\big)\right\}. \end{equation} \begin{figure}[t] \begin{center} \vspace{-0.4cm} \epsfig{file=Bz.eps,width=7cm} \end{center} \caption{\it Eight lattice momenta and their periodic copies. In the cuprates the holes reside in momentum space pockets centered at lattice momenta $\big(\pm \frac{\pi}{2a},\pm \frac{\pi}{2a}\big)$ which are represented by the four crosses, while electrons reside at $\big(\frac{\pi}{a},0\big)$ or $\big(0,\frac{\pi}{a}\big)$ (represented by the circles).} \end{figure} The charge carrier fields transform as {\allowdisplaybreaks \begin{alignat}{2} \label{phitrafo} SU(2)_s:&\quad &\Psi^k(x)' &= h(x) \Psi^k(x), \nonumber \\[-.2ex] SU(2)_Q:&\quad &^{\vec Q}\Psi^k(x) &= \Psi^k(x) \Omega^T, \nonumber \\ D_i:&\quad &^{D_i}\Psi^k(x) &= \exp(i k_i a) \tau(x) \Psi^k(x) \sigma_3, \nonumber \\ D'_i:&\quad &^{D'_i}\Psi^k(x) &= \exp(i k_i a) (i \sigma_2) \Psi^k(x) \sigma_3, \nonumber \\\pagebreak O:&\quad &^O\Psi^k(x) &= \Psi^{Ok}(Ox), \nonumber \\ R:&\quad &^R\Psi^k(x) &= \Psi^{Rk}(Rx), \nonumber \\ T:&\quad &^T\Psi^k(x) &= \tau(Tx) (i \sigma_2) \left[\Psi^{-k\dagger}(Tx)^T\right] \sigma_3, \nonumber \\ &\quad &^T\Psi^{k\dagger}(x) &= - \sigma_3 \left[\Psi^{-k}(Tx)^T\right] (i \sigma_2)^\dagger \tau(Tx)^\dagger, \nonumber \\ T':&\quad &^{T'}\Psi^k(x) &= - \left[\Psi^{-k\dagger}(Tx)^T\right] \sigma_3, \nonumber \\ &\quad &^{T'}\Psi^{k\dagger}(x) &= \sigma_3 \left[\Psi^{-k}(Tx)^T\right]. \end{alignat} } Here $\Omega \in SU(2)_Q$ and $Ok$ and $Rk$ are the momenta obtained by rotating or reflecting the momentum $k$. 
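Before turning to the electron fields, the ${\mathbb{C}} P(1)$ construction of section 2.1 can be checked numerically. The following sketch (with an arbitrary test point $(\theta,\varphi)$ on $S^2$; all numbers are purely illustrative) verifies that the matrix $u(x)$ indeed rotates $P(x)$ into $\frac{1}{2}({\mathbbm{1}} + \sigma_3)$:

```python
import numpy as np

# Pauli matrices
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def P_of(theta, phi):
    """CP(1) projector P = (1 + e . sigma)/2 built from the staggered magnetization."""
    e = (np.sin(theta) * np.cos(phi), np.sin(theta) * np.sin(phi), np.cos(theta))
    return 0.5 * (np.eye(2) + e[0] * s1 + e[1] * s2 + e[2] * s3)

def u_of(theta, phi):
    """Diagonalizing SU(2) matrix with u_11 >= 0."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, s * np.exp(-1j * phi)],
                     [-s * np.exp(1j * phi), c]])

theta, phi = 0.7, 1.9                      # arbitrary test point on S^2
P, u = P_of(theta, phi), u_of(theta, phi)
assert np.allclose(P @ P, P) and np.isclose(np.trace(P).real, 1.0)
assert np.allclose(u @ u.conj().T, np.eye(2))                    # u is unitary
assert np.allclose(u @ P @ u.conj().T, np.diag([1.0, 0.0]))      # u P u^dagger = (1 + sigma_3)/2
```

The check passes for any point on the sphere; $u(x)$ is unique up to the $U(1)_s$ phase fixed by the condition $u_{11}(x) \geq 0$.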
\subsection{Electron Field Identification and Hole Field Elimination} ARPES measurements as well as theoretical investigations \cite{Lee03,Kus03,Sen04,Yua04,Toh04,Guo06,Aic06} (see also figure 1) indicate that electrons doped into an antiferromagnet appear in momentum space pockets centered at \begin{equation} k = \big(\frac{\pi}{a},0\big), \qquad k' = \big(0,\frac{\pi}{a}\big). \end{equation} Hence, only the fermion fields with these two momentum labels will appear in the low-energy effective theory. Using the transformation rules of eq.(\ref{phitrafo}) one can construct the following invariant mass terms \begin{align} \frac{1}{2} \mbox{Tr} \big[ {\cal M} (\Psi^{k\dagger} \sigma_3 & \Psi^{k'} + \Psi^{k'\dagger} \sigma_3 \Psi^k) + m (\Psi^{k\dagger} \Psi^k \sigma_3 + \Psi^{k'\dagger} \Psi^{k'} \sigma_3)\big] \nonumber \\ = & \; \; \; {\cal M} \big(\psi^{k\dagger}_+ \psi^{k'}_+ - \psi^{k\dagger}_- \psi^{k'}_- + \psi^{k'\dagger}_+ \psi^k_+ - \psi^{k'\dagger}_- \psi^k_- \big) \nonumber \\ & + m \big(\psi^{k\dagger}_+ \psi^k_+ + \psi^{k\dagger}_- \psi^k_- + \psi^{k'\dagger}_+ \psi^{k'}_+ + \psi^{k'\dagger}_- \psi^{k'}_- \big) \nonumber \\[0.3ex] = & \; \; \; \big(\psi^{k\dagger}_+, \, \psi^{k'\dagger}_+ \big) \bigg(\begin{array}{cc} m & {\cal M} \\ {\cal M} & m \end{array}\bigg) \bigg(\begin{array}{c} \psi^k_+ \\ \psi^{k'}_+ \end{array}\bigg) \nonumber \\ & + \big(\psi^{k\dagger}_-, \, \psi^{k'\dagger}_- \big) \bigg(\begin{array}{cc} m & - {\cal M} \\ - {\cal M} & m \end{array}\bigg) \bigg(\begin{array}{c} \psi^k_- \\ \psi^{k'}_- \end{array}\bigg). \end{align} The terms proportional to ${\cal M}$ are $SU(2)_Q$-invariant while those proportional to $m$ are only $U(1)_Q$-invariant. By diagonalizing the mass matrices, electron and hole fields can be identified. The resulting eigenvalues are $m \pm {\cal M}$. In the $SU(2)_Q$-symmetric case, i.e.\ for $m = 0$, there is an electron-hole symmetry. 
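The spectrum of the mass matrices is easily checked numerically. The sketch below (with hypothetical values for $m$ and ${\cal M}$) diagonalizes both $2 \times 2$ matrices and confirms the eigenvalues $m \pm {\cal M}$, with the eigenvalue $m + {\cal M}$ belonging to the symmetric and antisymmetric combinations of $\psi^k_\pm$ and $\psi^{k'}_\pm$, respectively:

```python
import numpy as np

m, M = 0.3, 1.0                       # illustrative values: U(1)_Q-breaking mass m, SU(2)_Q-invariant mass M
Hp = np.array([[m,  M], [ M, m]])     # mass matrix in the (psi^k_+, psi^{k'}_+) sector
Hm = np.array([[m, -M], [-M, m]])     # mass matrix in the (psi^k_-, psi^{k'}_-) sector

for H, sign in [(Hp, +1), (Hm, -1)]:
    w = np.linalg.eigvalsh(H)
    assert np.allclose(np.sort(w), [m - M, m + M])
    # eigenvector with eigenvalue m + M: (psi^k +/- psi^{k'}) / sqrt(2)
    e = np.array([1.0, sign]) / np.sqrt(2)
    assert np.isclose(e @ H @ e, m + M)

print("mass eigenvalues:", m - M, m + M)
```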
The electrons correspond to positive energy states with eigenvalue ${\cal M}$ and the holes correspond to negative energy states with eigenvalue $- {\cal M}$. In the presence of $SU(2)_Q$-breaking terms these energies are shifted and electrons now correspond to states with eigenvalue $m + {\cal M}$, while holes correspond to states with eigenvalue $m - {\cal M}$. The electron fields are given by the corresponding eigenvectors \begin{equation} \psi_+(x) = \frac{1}{\sqrt{2}} \big[ \psi^k_+(x) + \psi^{k'}_+(x) \big], \qquad \psi_-(x) = \frac{1}{\sqrt{2}} \big[ \psi^k_-(x) - \psi^{k'}_-(x) \big]. \end{equation} Under the various symmetries they transform as \begin{alignat}{2} \label{symcomp} SU(2)_s:&\quad &\psi_\pm(x)' &= \exp(\pm i \alpha(x)) \psi_\pm(x), \nonumber \\ U(1)_Q:&\quad &^Q\psi_\pm(x) &= \exp(i \omega) \psi_\pm(x), \nonumber \\ D_i:&\quad &^{D_i}\psi_\pm(x) &= \mp \exp(i k_i a) \exp(\mp i \varphi(x)) \psi_\mp(x), \nonumber \\ D'_i:&\quad &^{D'_i}\psi_\pm(x) &= \pm \exp(i k_i a) \psi_\mp(x), \nonumber \\ O:&\quad &^O\psi_\pm(x) &= \pm \psi_\pm(Ox), \nonumber \\ R:&\quad &^R\psi_\pm(x) &= \psi_\pm(Rx), \nonumber \\ T:&\quad &^T\psi_\pm(x) &= \exp(\mp i \varphi(Tx)) \psi^\dagger_\pm(Tx), \nonumber \\ &\quad &^T\psi^\dagger_\pm(x) &= - \exp(\pm i \varphi(Tx)) \psi_\pm(Tx), \nonumber \\ T':&\quad &^{T'}\psi_\pm(x) &= - \psi^\dagger_\pm(Tx), \nonumber \\ &\quad &^{T'}\psi^\dagger_\pm(x) &= \psi_\pm(Tx). \end{alignat} The action of magnons and electrons must be invariant under these symmetries. \subsection{Effective Action for Magnons and Electrons} We decompose the action into terms containing different numbers of fermion fields $n_\psi$ (with $n_\psi$ even) such that \begin{equation} S[\psi^\dagger_\pm,\psi_\pm,P] = \int d^2x \ dt \ \sum_{n_\psi} {\cal L}_{n_\psi}. 
\end{equation} The leading terms in the effective Lagrangian without fermion fields are given by \begin{equation} {\cal L}_0 = \rho_s \mbox{Tr} \big[ \partial_i P \partial_i P + \frac{1}{c^2} \partial_t P \partial_t P \big], \end{equation} with the spin stiffness $\rho_s$ and the spinwave velocity $c$ as low-energy parameters. The terms with two fermion fields (containing at most one temporal or two spatial derivatives) describe the propagation of electrons as well as their couplings to magnons, and are given by \begin{align} \label{action} {\cal L}_2 = & \sum_{s = +,-} \Big[ M \psi^\dagger_s \psi_s + \psi^\dagger_s D_t \psi_s + \frac{1}{2 M'} D_i \psi^\dagger_s D_i \psi_s + N \psi^\dagger_s v^s_i v^{-s}_i \psi_s \nonumber \\ & \hspace{1.8em} + i K \big(D_1 \psi^\dagger_s v^s_1 \psi_{-s} - \psi^\dagger_s v^s_1 D_1 \psi_{-s} - D_2 \psi^\dagger_s v^s_2 \psi_{-s} + \psi^\dagger_s v^s_2 D_2 \psi_{-s}\big)\Big]. \end{align} Here $M$ is the rest mass and $M'$ is the kinetic mass of an electron, $K$ is an electron-one-magnon, and $N$ is an electron-two-magnon coupling, which all take real values. The covariant derivatives are given by \begin{align} D_t \psi_\pm(x) &= \left[\partial_t \pm i v_t^3(x) - \mu \right] \psi_\pm(x), \nonumber \\ D_i \psi_\pm(x) &= \left[\partial_i \pm i v_i^3(x)\right] \psi_\pm(x). \end{align} The chemical potential $\mu$ enters the covariant time-derivative like an imaginary constant vector potential for the fermion number symmetry $U(1)_Q$. 
Next we list the contributions with four fermion fields including up to one temporal or two spatial derivatives \begin{align} \label{Lagrange} {\cal L}_4 = & \sum_{s = +,-} \Big[ \frac{G_1}{2} \psi^\dagger_s \psi_s \psi^\dagger_{-s} \psi_{-s} + G_2 D_i \psi^\dagger_s D_i \psi_s \psi^\dagger_s \psi_s + G_3 D_i \psi^\dagger_s D_i \psi_s \psi^\dagger_{-s} \psi_{-s} \nonumber \\[-1.5ex] &\hspace{1.9em} +G_4 D_i \psi^\dagger_s D_i\psi_{-s} \psi^\dagger_{-s} \psi_s + \frac{G_5}{2} \big( D_i \psi^\dagger_s \psi_s D_i \psi^\dagger_{-s} \psi_{-s} + \psi^\dagger_s D_i \psi_s \psi^\dagger_{-s} D_i \psi_{-s} \big) \nonumber \\[0.2ex] &\hspace{1.9em} +i G_6 \big( D_1 \psi^\dagger_s \psi_s \psi^\dagger_s v^s_1 \psi_{-s} - \psi^\dagger_s D_1 \psi_s \psi^\dagger_{-s} v^{-s}_1 \psi_s \nonumber \\ &\hspace{3.8em} - D_2 \psi^\dagger_s \psi_s \psi^\dagger_s v^s_2 \psi_{-s} +\psi^\dagger_s D_2 \psi_s \psi^\dagger_{-s} v^{-s}_2 \psi_s \big) + \frac{G_7}{2} \psi^\dagger_s\psi_s v^s_i v^{-s}_i \psi^\dagger_{-s}\psi_{-s} \nonumber \\ &\hspace{1.9em} + \frac{G_8}{2} \big( D_t \psi^\dagger_s \psi_s \psi^\dagger_{-s} \psi_{-s} -\psi^\dagger_s D_t \psi_s \psi^\dagger_{-s} \psi_{-s} \big)\Big]. \end{align} Since it contains $D_t$, the term proportional to $G_8$ would imply a deviation from canonical anticommutation relations in a Hamiltonian formulation of the theory. Fortunately, this term can be eliminated by a field redefinition $\psi_s \rightarrow \psi_s + \frac{G_8}{2} \psi_s \psi_{-s}^\dagger \psi_{-s}$. The redefined field obeys the same symmetry transformations as the original one and is constructed such that after the field redefinition $G_8 = 0$. All other terms in the action are reproduced in their present form. 
For completeness, we finally list the only contribution with more than four fermion fields, again including up to one temporal or two spatial derivatives \begin{align} {\cal L}_6 = \sum_{s = +,-} & H D_i \psi^\dagger_s D_i \psi_s \psi^\dagger_s \psi_s \psi^\dagger_{-s} \psi_{-s}. \end{align} The leading fermion contact term is proportional to $G_1$. Due to the large number of low-energy parameters $G_2,...,G_7,H$, the higher-order terms are unlikely to be used in practical applications. We have used the algebraic program FORM \cite{Ver00}, and independently thereof, the GiNaC framework for symbolic computation within the C++ programming language \cite{Bau02}, to verify that the terms listed above form a complete linearly independent set. It should be noted that, unlike in the hole case, the leading terms in the effective action are not invariant under Galilean boosts. This is not unexpected because the underlying microscopic systems also lack this symmetry. The lack of Galilean boost invariance has important physical consequences. In particular, the magnon-mediated forces between two electrons will turn out to depend on the total momentum $\vec P$ of the pair. Thus it is not sufficient to consider the two particles in their rest frame, i.e.\ at $\vec P = 0$. This is due to the underlying crystal lattice, which defines a preferred rest frame (a condensed matter ``ether'').
\subsection{One-Magnon Exchange Potential between Electrons} In order to calculate the one-magnon exchange potential between two electrons, we expand in the magnon fluctuations $m_1(x)$, $m_2(x)$ around the ordered staggered magnetization, i.e. \begin{align} \vec e(x) = \Big( \frac{m_1(x)}{\sqrt{\rho_s}},\,& \frac{m_2(x)}{\sqrt{\rho_s}},1 \Big) + {\cal O}(m^2) \nonumber \\ \Rightarrow \quad v_\mu^\pm(x) &= \frac{1}{2 \sqrt{\rho_s}} \partial_\mu \big[ m_2(x) \pm i m_1(x) \big] + {\cal O}(m^3), \nonumber \\ v_\mu^3(x) &= \frac{1}{4 \rho_s}\big[m_1(x) \partial_\mu m_2(x) - m_2(x) \partial_\mu m_1(x)\big] + {\cal O}(m^4). \end{align} The vertices with $v_\mu^3(x)$ (contained in $D_\mu$) involve at least two magnon fields. Hence, one-magnon exchange results exclusively from vertices with $v_\mu^\pm(x)$. Thus, two electrons can exchange a single magnon only if they have antiparallel spins ($+$ and $-$), which are both flipped in the magnon exchange process. We denote the momenta of the incoming and outgoing electrons by $\vec p_\pm$ and $\vec p_\pm \ \!\!\!\! ' \ $, respectively. Furthermore, $\vec q$ represents the momentum of the exchanged magnon. We also introduce the total momentum $\vec P$ as well as the incoming and outgoing relative momenta $\vec p$ and $\vec p \ '$ \begin{gather} \vec P = \vec p_+ + \vec p_- = \vec p_+ \ \!\!\!\! ' + \vec p_- \ \!\!\!\! '\:, \nonumber \\ \vec p = \frac{1}{2}(\vec p_+ - \vec p_-), \qquad \vec p \ ' = \frac{1}{2}(\vec p_+ \ \!\!\!\! ' - \vec p_- \ \!\!\!\! '). \end{gather} Due to momentum conservation we then have \begin{equation} \vec q = \vec p + \vec p \ '. \end{equation} Figure 3 shows the Feynman diagram describing one-magnon exchange. 
\begin{figure}[tb] \begin{center} \vspace{-0.4cm} \epsfig{file=meptree.eps,width=7cm} \end{center} \caption{\it Feynman diagram for one-magnon exchange between two electrons with antiparallel spins undergoing a spin-flip.} \end{figure} In momentum space the resulting one-magnon exchange potential takes the form \begin{align} \langle \vec p_+ \ \!\!\!\! ' \ \vec p_- \ \!\!\!\! '|V |\vec p_+ \vec p_-\rangle &= \frac{K^2}{2 \rho_s} \frac{1}{q^2} \left[q_1^2 - q_2^2 + 2(q_1 p_{-1} - q_2 p_{-2})\right] \left[q_1^2 - q_2^2 - 2(q_1 p_{+1} - q_2 p_{+2})\right] \nonumber \\ &\;\;\;\; \times \delta(\vec p_+ + \vec p_- - \vec p_+ \ \!\!\!\! ' - \vec p_- \ \!\!\!\! '). \end{align} Transforming the potential to coordinate space is not entirely trivial and is thus discussed in the appendix. In coordinate space the resulting potential is given by \begin{equation} \langle \vec r_+ \ \!\!\!\! ' \vec r_- \ \!\!\!\! '|V |\vec r_+ \vec r_-\rangle = \frac{K^2}{2 \pi \rho_s} \left[12 \frac{\cos(4 \varphi)}{r^4} + \frac{P^2}{2} \frac{\cos(2 (\varphi + \chi))}{r^2} \right] \delta(\vec r_+ - \vec r_- \ \!\!\!\! ') \ \delta(\vec r_- - \vec r_+ \ \!\!\!\! '). \end{equation} Here $\varphi$ is the angle between the distance vector $\vec r = \vec r_+ - \vec r_-$ of the two electrons and the $x_1$-axis. In contrast to the hole case, the potential depends on the magnitude $P$ of the total momentum $\vec P$ as well as on the angle $\chi$ between $\vec P$ and the $x_1$-axis. For $\vec P = 0$ the one-magnon exchange potential between two electrons falls off as $1/r^4$, while in the hole case it is proportional to $1/r^2$. Retardation effects enter at higher orders only and thus the potential is instantaneous. We have omitted short-distance $\delta$-function contributions to the potential which add to the 4-fermion contact interactions. Since we will model the short-distance repulsion by a hard-core radius, the $\delta$-function contributions will not be needed in the following. 
\subsection{Schr\"odinger Equation for Two Electrons} Let us consider two electrons with opposite spins $+$ and $-$. The wave function depends on the relative distance vector $\vec r$, which points from the spin $-$ electron to the spin $+$ electron. Magnon exchange is accompanied by a spin-flip. Hence, the vector $\vec r$ changes its direction in the magnon exchange process. The resulting Schr\"odinger equation then takes the form \begin{equation} - \frac{1}{M'} \Delta \Psi(\vec r) + \frac{K^2}{2 \pi \rho_s} \left[12 \frac{\cos(4 \varphi)}{r^4} + \frac{P^2}{2} \frac{\cos(2 (\varphi + \chi))}{r^2} \right] \Psi(- \vec r) = \bigg[ E - \frac{P^2}{2 M'} \bigg] \Psi(\vec r). \end{equation} For simplicity, instead of explicitly using the 4-fermion contact interactions, we model the short-distance repulsion between the electrons by a hard core of radius $r_0$, i.e.\ we require $\Psi(\vec r) = 0$ for $|\vec r| \leq r_0$. In contrast to the hole case \cite{Bru05,Bru06}, we have not been able to solve the above Schr\"odinger equation analytically. Instead, we have solved it numerically. A typical probability distribution for the ground state is illustrated in figure 4 for $\vec P = 0$. \begin{figure}[tb] \begin{center} \vspace{-0.3cm} \epsfig{file=wave5.eps,width=8cm} \end{center} \caption{\it Probability distribution for the ground state of two electrons with total momentum $\vec P = (0,0)$.} \end{figure} The probability distribution resembles one with $d_{xy}$ symmetry. However, due to the discrete 90 degrees rotation symmetry, the continuum classification scheme of angular momenta is inappropriate. Under the group of discrete rotations and reflections the ground state wave function transforms in the trivial representation. Due to the lack of Galilean boost invariance, the two-electron bound state changes its structure when it is boosted out of its rest frame.
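The numerical solution of the $\vec P = 0$ problem can be sketched with a simple finite-difference discretization: on a grid that is symmetric under $\vec r \rightarrow - \vec r$, the spin-flip factor $\Psi(- \vec r)$ becomes an index permutation, and the hard core is imposed by removing the sites with $|\vec r| \leq r_0$ (a Dirichlet condition). All parameter and grid values below are purely illustrative and not fitted to any microscopic system:

```python
import numpy as np
import scipy.sparse as sparse
from scipy.sparse.linalg import eigsh

# Purely illustrative parameters (not fitted to any microscopic system)
Mp, K, rho_s, r0 = 1.0, 5.0, 1.0, 0.5     # kinetic mass M', coupling K, spin stiffness, hard-core radius
L, N = 3.0, 61                            # half-width of the box and grid points per axis
h = 2 * L / (N - 1)
xs = np.linspace(-L, L, N)                # grid symmetric under r -> -r
X, Y = np.meshgrid(xs, xs, indexing="ij")
R, Phi = np.hypot(X, Y), np.arctan2(Y, X)

# One-magnon exchange potential for P = 0: V(r) = 6 K^2 cos(4 phi) / (pi rho_s r^4)
V = 6 * K**2 / (np.pi * rho_s) * np.cos(4 * Phi) / np.maximum(R, r0)**4
core = (R <= r0).ravel()                  # hard core: Psi = 0 for r <= r0

idx = np.arange(N * N).reshape(N, N)
par = idx[::-1, ::-1].ravel()             # index permutation implementing r -> -r

# Kinetic term -(1/M') Laplacian (5-point stencil, Dirichlet walls)
lap = sparse.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(N, N)) / h**2
T = -(sparse.kron(lap, sparse.identity(N)) + sparse.kron(sparse.identity(N), lap)) / Mp

# The spin flip couples Psi(r) to Psi(-r), so V multiplies the parity permutation
Vflip = sparse.csr_matrix((V.ravel(), (np.arange(N * N), par)), shape=(N * N, N * N))

keep = np.where(~core)[0]                 # drop the hard-core sites (Dirichlet condition)
H = (T + Vflip).tocsr()[keep][:, keep]
H = (H + H.T) / 2                         # symmetrize tiny floating-point asymmetries
E = np.sort(eigsh(H, k=2, which="SA", return_eigenvectors=False))
print("two lowest energies:", E)          # for these parameters the ground state is bound, E[0] < 0
```

With the strongly attractive (hypothetical) coupling chosen here, a trial state supported on a pair of diagonal sites just outside the hard core already has negative energy, so the discretized ground state is bound; the corresponding eigenvector reproduces the $d_{xy}$-like pattern of figure 4.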
Of course, an electron pair with total momentum $\vec P \neq 0$ costs additional kinetic energy $P^2/2 M'$ for the center of mass motion. In addition, the binding energy also depends on $\vec P$. The strongest binding arises when the total momentum $\vec P$ points along a lattice diagonal. The corresponding probability distribution is illustrated in figure 5. \begin{figure}[tb] \begin{center} \vspace{-0.3cm} \epsfig{file=wave7.eps,width=8cm} \end{center} \caption{\it Probability distribution for the ground state of two electrons with total momentum $\vec P = \frac{1}{\sqrt{2}}(P,P)$ along a lattice diagonal.} \end{figure} Since they depend crucially on the precise values of the low-energy parameters, we have not attempted an extensive numerical investigation of the binding energy and other properties of the two-electron bound states. Once the low-energy parameters have been determined for a concrete underlying microscopic system, a precise calculation of the physical properties of the two-electron bound state is straightforward using the numerical method employed above. In order to gain at least some approximate analytic insight into the bound state problem, let us also consider the semi-classical Bohr-Sommerfeld quantization. First, we consider a pair of electrons with total momentum $\vec P = 0$ moving relative to each other along a lattice diagonal. The classical energy of the periodic relative motion is given by \begin{equation} E = M' \left(\frac{dr}{dt}\right)^2 - \frac{6 K^2}{\pi \rho_s r^4}. \end{equation} The Bohr-Sommerfeld quantization condition implies \begin{eqnarray} S + E T&=&\int_0^T dt \ \left[M' \left(\frac{dr}{dt}\right)^2 + \frac{6 K^2}{\pi \rho_s r^4} + E\right] = 2 \int_0^T dt \ M' \left(\frac{dr}{dt}\right)^2 \nonumber \\ &=&4 \int_{r_0}^R dr \ M' \ \frac{dr}{dt} = 4 \int_{r_0}^R dr \ \sqrt{E M' + \frac{6 K^2 M'}{\pi \rho_s r^4}} = 2 \pi n, \end{eqnarray} where $S$ is the action, $T$ is the period of the motion, and $n$ is a positive integer. 
The hard-core radius $r_0$ is a classical turning point and $R$ is the other classical turning point determined by \begin{equation} E = - \frac{6 K^2}{\pi \rho_s R^4}. \end{equation} The above equations lead to a relatively complicated expression for the energy in terms of elliptic integrals. Instead of investigating these expressions, we limit ourselves to estimating the number of bound states. For this purpose, we set $E = 0$ which implies $R =\infty$, and we then obtain \begin{equation} n = \left[ \ \int_{r_0}^\infty dr \ \sqrt{\frac{24 K^2 M'}{\pi^3 \rho_s r^4}} \ \right] = \left[ \ \sqrt{\frac{24 K^2 M'}{\pi^3 \rho_s r_0^2}} \ \right]. \end{equation} The brackets denote the nearest integer smaller than the expression enclosed in the brackets. In particular, Bohr-Sommerfeld quantization suggests that a bound state exists only if \begin{equation} \frac{24 K^2 M'}{\pi^3 \rho_s r_0^2} \geq 1. \end{equation} Of course, one should be aware of the fact that this is at best a semi-quantitative estimate because Bohr-Sommerfeld quantization should not be trusted quantitatively for small quantum numbers. Let us also repeat these considerations for $\vec P \neq 0$. Again, we consider $\vec P = \frac{1}{\sqrt{2}} (P,P)$ such that the diagonal motion of an electron-pair has the energy \begin{equation} E = M' \left(\frac{dr}{dt}\right)^2 - \frac{K^2}{2 \pi \rho_s} \left(\frac{12}{r^4} + \frac{P^2}{2 r^2}\right). \end{equation} In complete analogy to the $\vec P = 0$ case one then obtains \begin{equation} n = \left[ \ \int_{r_0}^\infty dr \ \sqrt{\frac{2 K^2 M'}{\pi^3 \rho_s} \left(\frac{12}{r^4} + \frac{P^2}{2 r^2}\right)} \ \right] \rightarrow \infty, \end{equation} which suggests that infinitely many two-electron bound states exist for $\vec P \neq 0$. This is similar to the two-hole problem which has a $1/r^2$ potential with infinitely many bound states already for $\vec P = 0$ \cite{Bru05,Bru06}. 
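The bound-state counting can be checked numerically. The sketch below (with purely illustrative parameter values) compares a quadrature of the Bohr-Sommerfeld integral at $E = 0$ with the closed-form expression:

```python
import numpy as np
from scipy.integrate import quad

# Purely illustrative parameter values
Mp, K, rho_s, r0 = 1.0, 5.0, 1.0, 0.5     # kinetic mass M', coupling K, spin stiffness, hard-core radius

# Bohr-Sommerfeld integral at E = 0 (so that R -> infinity) versus the closed-form result
integrand = lambda r: np.sqrt(24 * K**2 * Mp / (np.pi**3 * rho_s * r**4))
I, _ = quad(integrand, r0, np.inf)
n_numeric = int(I)                         # nearest integer below the integral
n_closed = int(np.sqrt(24 * K**2 * Mp / (np.pi**3 * rho_s * r0**2)))
assert n_numeric == n_closed
assert 24 * K**2 * Mp / (np.pi**3 * rho_s * r0**2) >= 1   # condition for at least one bound state
print("estimated number of bound states:", n_closed)
```

For the hypothetical values above the criterion is comfortably satisfied, and the semi-classical estimate yields several bound states.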
Two-electron bound states with $\vec P = 0$ have been considered before by Kuchiev and Sushkov \cite{Kuc95} in the context of the $t$-$t'$-$J$ model. In contrast to the hole case \cite{Kuc93} with a $1/r^2$ potential and an infinite number of bound states, in the electron case only a finite number of bound states was found. While some results of that work agree qualitatively with the results of our effective theory, there are also important differences. For example, in \cite{Kuc95} the magnon-electron vertex was considered to be the same as the magnon-hole vertex, while the two vertices are different in the effective theory. \section{Investigation of Spiral Phases} In the following we will investigate phases with constant fermion density. The most general magnon field configuration $\vec e(x)$ which provides a constant background field for the doped electrons is not necessarily constant itself, but may represent a spiral in the staggered magnetization. While a spiral costs magnetic energy proportional to the spin stiffness $\rho_s$, the electrons might lower their energy by propagating in the spiral background. However, we will find that spiral phases are not energetically favorable in electron-doped systems. \subsection{Spirals with Uniform Composite Vector Fields} Since the electrons couple to the composite vector field $v_i(x)$ in a gauge covariant way, in order to provide a constant background field for the electrons, $v_i(x)$ must be constant up to a gauge transformation, i.e. \begin{align} \label{const} {v^3_i}(x)'&=v^3_i(x) - \partial_i \alpha(x) = \partial_i \varphi(x) \sin^2\frac{\theta(x)}{2} - \partial_i \alpha(x) = c^3_i, \nonumber \\[.5ex] {v^\pm_i}(x)'&=v^\pm_i(x) \exp\big(\pm 2 i \alpha(x)\big) \nonumber \\ &=\frac{1}{2} \big[\partial_i \varphi(x) \sin\theta(x) \pm i \partial_i \theta(x)\big] \exp\big(\pm 2 i \alpha(x) \mp i \varphi(x)\big) = c^\pm_i, \end{align} with $c^3_i$ and $c^\pm_i$ being constant.
As shown in \cite{Bru06a}, the most general configuration that leads to a constant $v_i(x)'$ represents a spiral in the staggered magnetization. In addition, by an appropriate gauge transformation one can always put \begin{equation} c_i^+ = c_i^- = c_i \in {\mathbb{R}}. \end{equation} The magnetic energy density of such configurations takes the form \begin{equation} \epsilon_m = \frac{\rho_s}{2} \partial_i \vec e(x) \cdot \partial_i \vec e(x) = 2 \rho_s v_i^+(x) v_i^-(x) = 2 \rho_s c_i c_i. \end{equation} We now consider a concrete family of spiral configurations with \begin{equation} \theta(x) = \theta_0, \qquad \quad \varphi(x) = k_i x_i, \end{equation} which implies \begin{equation} v_t(x) = 0, \qquad v^3_i(x) = k_i \sin^2\frac{\theta_0}{2}, \qquad v_i^\pm(x) = \frac{k_i}{2} \sin\theta_0 \exp(\mp i k_i x_i). \end{equation} Performing the gauge transformation \begin{equation} \alpha(x) = \frac{1}{2} k_i x_i, \end{equation} one arrives at \begin{align} \label{constant} v_t(x)' &= v_t(x) - \partial_t \alpha(x) = 0, \nonumber \\[.2ex] {v^3_i}(x)' &= v^3_i(x) - \partial_i \alpha(x) = k_i \bigg[\sin^2\frac{\theta_0}{2} - \frac{1}{2}\bigg] = c^3_i, \nonumber \\[-.4ex] {v^\pm_i}(x)' &= v^\pm_i(x) \exp\big(\pm 2 i \alpha(x)\big) = \frac{k_i}{2} \sin\theta_0 = c_i, \end{align} such that \begin{equation} c^3_i = - \frac{k_i}{2} \cos\theta_0, \qquad \quad a = \frac{|c_i|}{c^3_i} = - \tan\theta_0. \end{equation} The magnetic energy density then takes the form \begin{equation} \epsilon_m = 2 \rho_s c_i c_i = \frac{\rho_s}{2} \big(k_1^2 + k_2^2\big) \sin^2\theta_0. \end{equation} \subsection{Fermionic Contributions to the Energy} Let us now compute the fermionic contribution to the energy, first keeping the parameters $c^3_i$ and $c_i$ of the spiral fixed, and neglecting the 4-fermion contact interactions. 
The Euclidean action of eq.(\ref{action}) implies the following fermion Hamiltonian \begin{align} H = &\sum_{s = +,-} \Big[ M \Psi^\dagger_s \Psi_s + \frac{1}{2 M'} D_i \Psi^\dagger_s D_i \Psi_s + N \Psi^\dagger_s v^s_i v^{-s}_i \Psi_s \nonumber \\ &\hspace{1.8em} + i K \big(D_1 \Psi^\dagger_s v^s_1 \Psi_{-s} - \Psi^\dagger_s v^s_1 D_1 \Psi_{-s} - D_2 \Psi^\dagger_s v^s_2 \Psi_{-s} + \Psi^\dagger_s v^s_2 D_2 \Psi_{-s}\big)\Big], \end{align} with the covariant derivative \begin{equation} D_i \Psi_\pm(x) = [\partial_i \pm i v^3_i(x)] \Psi_\pm(x). \end{equation} Here $\Psi^\dagger_\pm(x)$ and $\Psi_\pm(x)$ are creation and annihilation operators (not Grassmann numbers) for electrons with spin parallel ($+$) or antiparallel ($-$) to the local staggered magnetization. The Hamiltonian is invariant under time-independent $U(1)_s$ gauge transformations \begin{align} \Psi_\pm(x)' &= \exp\big(\pm i \alpha(x)\big) \Psi_\pm(x), \nonumber \\ {v^3_i}(x)' &= v^3_i(x) - \partial_i \alpha(x), \nonumber \\ {v^\pm_i}(x)' &= v^\pm_i(x) \exp\big(\pm 2 i \alpha(x)\big). \end{align} We now consider electrons propagating in the background of a spiral in the staggered magnetization with \begin{equation} {v^3_i}(x)' = c^3_i, \qquad \quad {v^\pm_i}(x)' = c_i \in {\mathbb{R}}. \end{equation} After an appropriate gauge transformation, the fermions propagate in a constant composite vector field ${v_i}(x)'$. The Hamiltonian is diagonalized by going to momentum space. The Hamiltonian for single electrons with spatial momentum $\vec p = (p_1,p_2)$ is given by \begin{equation} H(\vec p) = \left(\begin{array}{cc} M + \frac{(p_i - c_i^3)^2}{2 M'} + N c_i c_i & 2 K (- p_1 c_1 + p_2 c_2) \\ 2 K (- p_1 c_1 + p_2 c_2) & M + \frac{(p_i + c_i^3)^2}{2 M'} + N c_i c_i \end{array} \right). 
\end{equation} The diagonalization of the Hamiltonian yields \begin{equation} \label{energy} E_\pm(\vec p) = M + \frac{p_i^2 + (c_i^3)^2}{2 M'} + N c_i c_i \pm \sqrt{\bigg(\frac{p_i c_i^3}{M'}\bigg)^2 + 4 K^2(p_1 c_1 - p_2 c_2)^2}. \end{equation} It should be noted that in this case the index $\pm$ does not refer to the spin orientation. In fact, the eigenvectors corresponding to $E_\pm(\vec p)$ are linear combinations of both spins. Since $c_i^3$ does not affect the magnetic contribution to the energy density, it can be fixed by minimizing $E_\pm(\vec p)$ which leads to $c_1^3 = c_2^3 = 0$. According to eq.(\ref{constant}) this implies that $\theta_0 = \frac{\pi}{2}$, i.e.\ the spiral is along a great circle on the sphere $S^2$. For $c_i^3 = 0$ the energies of eq.(\ref{energy}) reduce to \begin{equation} E_\pm(\vec p) = M + \frac{p_i^2}{2 M'} + N c_i c_i \pm 2 K |p_1 c_1 - p_2 c_2|. \end{equation} The lines of constant energy are shown in figure 6. In particular, the lines of constant $E_-(\vec p)$ are circles centered around $\pm 2 K M' (c_1,- c_2)$. \begin{figure}[t] \begin{center} \vspace{-0.4cm} \begin{tabular}{cc} \epsfig{file=peanut.eps,width=7cm} & \epsfig{file=almond.eps,width=7cm} \end{tabular} \end{center} \caption{\it Lines of constant energy for electrons propagating in a spiral configuration. The contours of the lower energy $E_-(\vec p)$ are shown on the left, and the contours of the higher energy $E_+(\vec p)$ are displayed on the right.} \end{figure} For given $c_i \neq 0$ we now fill the lowest energy states with a small number of electrons. The filled electron pockets are circles centered around $\pm 2 K M' (c_1,- c_2)$ with a radius determined by the kinetic energy \begin{equation} T = \frac{1}{2 M'}\left[(p_1 \mp 2 K M' c_1)^2 + (p_2 \pm 2 K M' c_2)^2 \right] \end{equation} of an electron at the Fermi surface. The two occupied circular electron pockets define a region $P$ in momentum space. 
The area of this region determines the fermion density as \begin{equation} n = \frac{1}{(2 \pi)^2} \int_P d^2p = \frac{1}{\pi} M' T. \end{equation} The two circles do not overlap as long as $n < \frac{2}{\pi} M'^2 K^2 c_i c_i$. The kinetic energy density of the filled region $P$ is given by \begin{equation} t = \frac{1}{(2 \pi)^2} \int_P d^2p \ \frac{1}{2 M'}\left[(p_1 \mp 2 K M' c_1)^2 + (p_2 \pm 2 K M' c_2)^2 \right] = \frac{1}{2 \pi} M' T^2 = \frac{\pi n^2}{2 M'}, \end{equation} and the total energy density of electrons then is \begin{equation} \epsilon_e = (M + N c_i c_i - 2 K^2 M' c_i c_i) n + \frac{\pi n^2}{2 M'}. \end{equation} The resulting total energy density that includes the vacuum energy density $\epsilon_0$ as well as the magnetic energy density $\epsilon_m$ is given by \begin{equation} \epsilon = \epsilon_0 + \epsilon_m + \epsilon_e = \epsilon_0 + 2 \rho_s c_i c_i + (M + N c_i c_i - 2 K^2 M' c_i c_i) n + \frac{\pi n^2}{2 M'}. \end{equation} For $\rho_s > (K^2 M' - \frac{1}{2} N) n$ (which is always satisfied for sufficiently small density $n$) the energy is minimized for $c_i = 0$ and the value of the energy density at the minimum is given by \begin{equation} \label{etot} \epsilon = \epsilon_0 + M n + \frac{\pi n^2}{2 M'}. \end{equation} However, one should not forget that, as the $c_i$ become smaller, the two occupied circles eventually touch each other once $\frac{2}{\pi} M'^2 K^2 c_i c_i = n$. Interestingly, at this point the states with energy $E_+(\vec p)$ also become occupied. Indeed, as one can see in figure 6, the almond-shaped region of occupied states with energy $E_+(\vec p)$ and the peanut-shaped region of occupied states with energy $E_-(\vec p)$ combine into two completely overlapping circles. This is also illustrated in figure 7, in which the energies $E_-(\vec p)$ and $E_+(\vec p)$ combine to form two overlapping parabolic dispersion relations.
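The pocket integrals above are easy to verify by brute force. The Python sketch below rasterizes momentum space for a hypothetical (non-overlapping) parameter choice, fills the two circular pockets, and checks $n = \frac{1}{\pi} M' T$ and $t = \frac{1}{2\pi} M' T^2 = \frac{\pi n^2}{2 M'}$:

```python
import numpy as np

# Hypothetical parameters, chosen so that the two pockets do not overlap.
Mp, K = 2.0, 0.3
c = np.array([0.4, 0.25])
T = 0.05                                   # kinetic energy at the Fermi surface

centers = [+2 * K * Mp * np.array([c[0], -c[1]]),
           -2 * K * Mp * np.array([c[0], -c[1]])]
pF = np.sqrt(2 * Mp * T)                   # pocket radius

# Brute-force (midpoint-rule) integration over momentum space.
grid = np.linspace(-2.0, 2.0, 1200)
dp = grid[1] - grid[0]
P1, P2 = np.meshgrid(grid, grid, indexing="ij")

n_num = t_num = 0.0
for c0 in centers:
    d2 = (P1 - c0[0]) ** 2 + (P2 - c0[1]) ** 2
    inside = d2 <= pF ** 2
    n_num += inside.sum() * dp ** 2 / (2 * np.pi) ** 2
    t_num += d2[inside].sum() / (2 * Mp) * dp ** 2 / (2 * np.pi) ** 2

assert abs(n_num - Mp * T / np.pi) < 1e-3             # n = M' T / pi
assert abs(t_num - Mp * T ** 2 / (2 * np.pi)) < 1e-4  # t = pi n^2 / (2 M')
```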
\begin{figure}[t] \begin{center} \vspace{-0.4cm} \epsfig{file=cutpocketsLabel.eps,width=7cm} \end{center} \caption{\it The energies $E_-(\vec p)$ (solid curve) and $E_+(\vec p)$ (dashed curve) along the line $\vec p \propto (c_1,- c_2)$ (the dashed lines in figure 6) define two independent parabolic dispersion relations.} \end{figure} As a result, eq.(\ref{etot}) is still valid even when the occupied circles overlap. Consequently, the energy minimum is indeed at $c_i = 0$ and thus a homogeneous phase arises. This is in contrast to hole-doped cuprates for which a spiral phase is energetically favored for intermediate values of the spin stiffness $\rho_s$ \cite{Bru06a}. The effective theory predicts that spiral phases are absent in electron-doped antiferromagnets. \subsection{Inclusion of 4-Fermion Couplings} Let us also calculate the effect of the 4-fermion contact interactions on the energy density. We perform this calculation to first order in perturbation theory, assuming that the 4-fermion interactions are weak. Depending on the underlying microscopic system such as the Hubbard model, the 4-fermion couplings may or may not be small. We would like to point out that, while the on-site Coulomb repulsion responsible for antiferromagnetism is always large in the microscopic systems, the 4-fermion couplings in the effective theory may still be small. If they are large, the result of the perturbative calculation should not be trusted. The perturbation of the Hamiltonian due to the leading 4-fermion contact term of eq.(\ref{Lagrange}) is given by \begin{equation} \label{DeltaH} \Delta H = \frac{G_1}{2} \int d^2x \sum_{s = +,-} \Psi^\dagger_s \Psi_s \Psi^\dagger_{-s} \Psi_{-s}. \end{equation} It should be noted that $\Psi^\dagger_s(x)$ and $\Psi_s(x)$ again are fermion creation and annihilation operators (and not Grassmann numbers). The terms proportional to $G_2, G_3, \ldots, G_7$ are of higher order and will hence not be taken into account.
The fermion density is equally distributed among the two spin orientations such that \begin{equation} \langle \Psi^\dagger_+ \Psi_+ \rangle = \langle \Psi^\dagger_- \Psi_- \rangle = \frac{n}{2}. \end{equation} The brackets denote expectation values in the unperturbed state determined before. Since the fermions are uncorrelated we have \begin{equation} \langle \Psi^\dagger_s \Psi_s \Psi^\dagger_{-s} \Psi_{-s} \rangle = \langle \Psi^\dagger_s \Psi_s \rangle \langle \Psi^\dagger_{-s} \Psi_{-s}\rangle. \end{equation} Taking the 4-fermion contact terms into account in first order perturbation theory, the total energy density of eq.(\ref{etot}) receives an additional contribution and now reads \begin{equation} \epsilon = \epsilon_0 + M n + \frac{\pi n^2}{2 M'} + \frac{G_1}{4} n^2. \end{equation} \section{Reduction of the Staggered Magnetization upon Doping} The order parameter of the undoped antiferromagnet is the local staggered magnetization $\vec M_s(x) = {\cal M}_s\,\vec e(x)$ with ${\cal M}_s$ being the length of the staggered magnetization vector. In a doped antiferromagnet the staggered magnetization receives additional contributions from the electrons such that \begin{equation} \vec M_s(x) = \Big[{\cal M}_s - m \sum_{s = +,-} \psi^\dagger_s(x) \psi_s(x) \Big] \vec e(x). \end{equation} The low-energy parameter $m$ determines the reduction of the staggered magnetization upon doping. Further contributions to $\vec M_s(x)$ which include derivatives or contain more than two fermion fields are of higher order and have thus been neglected. Using \begin{equation} \sum_{s = +,-} \langle \Psi^\dagger_s \Psi_s \rangle = n, \end{equation} we then obtain \begin{equation} {\cal M}_s(n) = {\cal M}_s - m n, \end{equation} i.e.\ at leading order the staggered magnetization decreases linearly with increasing electron density. The higher-order terms that we have neglected will give rise to sub-leading corrections of ${\cal O}(n^2)$. 
\section{Conclusions} In analogy to the hole-doped case \cite{Kae05,Bru06}, we have constructed a systematic effective field theory for lightly electron-doped antiferromagnets. Interestingly, the different locations of electron- and hole-pockets in the Brillouin zone have important consequences for the dynamics. In the hole-doped case, the pockets are located at $(\pm \frac{\pi}{2 a},\pm \frac{\pi}{2 a})$ which gives rise to a flavor index that determines to which pocket a hole belongs. Due to spontaneous symmetry breaking, holes and magnons are derivatively coupled. The leading magnon-hole coupling contains a single spatial derivative and is responsible for a variety of interesting effects. First, it leads to a $1/r^2$ potential between a pair of holes which gives rise to an infinite number of two-hole bound states \cite{Bru05,Bru06}. Remarkably, in the hole-doped case, in the $c \rightarrow \infty$ limit the symmetries give rise to an accidental Galilean boost invariance. Hence, it is sufficient to consider the bound state in its rest-frame. Second, in the hole-doped systems, the single-derivative magnon-hole coupling gives rise to a spiral phase for intermediate values of $\rho_s$. In the electron-doped case discussed in this paper, the momentum space pockets are located at $(\frac{\pi}{a},0)$ and $(0,\frac{\pi}{a})$. Due to antiferromagnetism these are actually two half-pockets which combine into a single electron-pocket. Hence, in contrast to the hole-doped systems, electrons do not carry an additional flavor index. As in the hole-doped case, electrons and magnons are derivatively coupled. However, due to the different implementation of the symmetries, the leading magnon-electron coupling now contains two spatial derivatives. In other words, at low energies magnons are coupled to holes more strongly than to electrons.
As a consequence, the one-magnon exchange potential between two electrons in their rest-frame decays as $1/r^4$ and is hence weaker at large distances than in the hole-doped case. Still, magnon exchange is capable of binding electrons. As another consequence of symmetry considerations, an accidental Galilean boost invariance is absent in the electron-doped case. Indeed, the one-magnon exchange potential depends on the total momentum $\vec P$ of the electron-pair, and it is hence not sufficient to consider the system in its rest-frame. The momentum-dependent contribution to the potential is proportional to $P^2/r^2$, which gives rise to a non-trivial structure of moving bound states. As a further consequence of the weakness of the magnon-electron coupling, in contrast to the hole-doped case, spiral phases are energetically unfavorable for electron-doped systems. While this is not a new result, we find it remarkable that it follows unambiguously from the very few basic assumptions of the systematic low-energy effective field theory, such as locality, symmetry, and unitarity. We would like to point out that the systematic effective field theory approach is universally applicable to a large class of antiferromagnets. While it remains to be seen if the effective theory can also be applied to high-temperature superconductors, it makes unbiased, quantitative predictions for both lightly hole- and electron-doped cuprates and should be pursued further. \section*{Acknowledgements} C.\ P.\ H. would like to thank the members of the Institute for Theoretical Physics at Bern University for their hospitality. This work was supported in part by funds provided by the Schweizerischer Nationalfonds. The work of C.\ P.\ H.\ is supported by CONACYT grant No.\ 50744-F.
\begin{appendix} \section{Magnon Exchange Potential in Coordinate \\ Space} In this appendix we discuss the transformation of the one-magnon exchange potential between two electrons from momentum space to coordinate space, which is not entirely straightforward. In momentum space the one-magnon exchange potential is given by \begin{equation} \langle \vec p_+ \ \!\!\!\! ' \ \vec p_- \ \!\!\!\! '|V |\vec p_+ \vec p_-\rangle = V(\vec p,\vec p \ ') \delta(\vec p_+ + \vec p_- - \vec p_+ \ \!\!\!\! ' - \vec p_- \ \!\!\!\! '), \end{equation} with \begin{equation} V(\vec p,\vec p \ ') = \frac{K^2}{2 \rho_s} \frac{1}{q^2} \left[q_1^2 - q_2^2 + 2(q_1 p_{-1} - q_2 p_{-2})\right] \left[q_1^2 - q_2^2 - 2(q_1 p_{+1} - q_2 p_{+2})\right]. \end{equation} Using \begin{gather} \vec P = \vec p_+ + \vec p_- = \vec p_+ \ \!\!\!\! ' + \vec p_- \ \!\!\!\! ', \nonumber \\ \vec p = \frac{1}{2}(\vec p_+ - \vec p_-), \qquad \vec p \ ' = \frac{1}{2}(\vec p_+ \ \!\!\!\! ' - \vec p_- \ \!\!\!\! '), \qquad \vec q = \vec p + \vec p \ ', \end{gather} it is easy to show that \begin{equation} V(\vec p,\vec p \ ') = V_0(\vec p,\vec p \ ') + V_{\vec P}(\vec p,\vec p \ '), \end{equation} with the rest-frame potential \begin{equation} V_0(\vec p,\vec p \ ') = \frac{K^2}{2 \rho_s} \frac{\left(p_1^2 - {p_1'}^2 - p_2^2 + {p_2'}^2\right)^2}{(p_1 + p_1')^2 + (p_2 + p_2')^2}, \end{equation} and the momentum-dependent contribution \begin{equation} V_{\vec P}(\vec p,\vec p \ ') = - \frac{K^2}{2 \rho_s} \frac{\left[P_1 (p_1 + p_1') - P_2 (p_2 + p_2')\right]^2} {(p_1 + p_1')^2 + (p_2 + p_2')^2}. \end{equation} The potential in coordinate space is the Fourier transform of the potential in momentum space \begin{equation} V(\vec x,\vec x \ ') = \frac{1}{(2 \pi)^4} \int d^2p \ d^2p' \ V(\vec p,\vec p \ ') \exp(i \vec p \cdot \vec x - i \vec p \ ' \cdot \vec x \ ').
\end{equation} Introducing \begin{equation} \vec k = \frac{1}{2}(\vec p - \vec p \ '), \qquad \vec r = \frac{1}{2}(\vec x - \vec x \ '), \qquad \vec y = \vec x + \vec x \ ', \end{equation} one obtains \begin{equation} \vec p \cdot \vec x - \vec p \ ' \cdot \vec x \ ' = \vec q \cdot \vec r + \vec k \cdot \vec y, \end{equation} such that the momentum-dependent contribution takes the form \begin{equation} V_{\vec P}(\vec x,\vec x \ ') = - \frac{K^2}{2 \rho_s} \frac{1}{(2 \pi)^2} \int d^2q \ \frac{\left(P_1 q_1 - P_2 q_2\right)^2}{q_1^2 + q_2^2}\, \exp(i \vec q \cdot \vec r)\, \delta(\vec y). \end{equation} The $\delta$-function arises from the $k$-integration and implies $\vec x \ ' = - \vec x$ as well as $\vec r = \vec x$, which just means that the potential is local in coordinate space. Using \begin{equation} \frac{1}{(2 \pi)^2} \int d^2q \ \frac{q_1^2 - q_2^2}{q_1^2 + q_2^2} \, \exp(i \vec q \cdot \vec r) = - \frac{1}{\pi} \frac{\cos(2 \varphi)}{r^2}, \qquad \frac{1}{(2 \pi)^2} \int d^2q \ \frac{2 q_1 q_2}{q_1^2 + q_2^2} \, \exp(i \vec q \cdot \vec r) = - \frac{1}{\pi} \frac{\sin(2 \varphi)}{r^2}, \end{equation} with $\vec r = r (\cos\varphi,\sin\varphi)$ the $q$-integration results in \begin{align} V_{\vec P}(\vec x,\vec x \ ')&=\frac{K^2}{2 \pi \rho_s} \left[\frac{1}{2}(P_1^2 - P_2^2) \frac{\cos(2 \varphi)}{r^2} - P_1 P_2 \frac{\sin(2 \varphi)}{r^2}\right] \delta(\vec y) \nonumber \\[.5ex] &=\frac{K^2 P^2}{4 \pi \rho_s}\, \frac{\cos\big(2 (\varphi + \chi)\big)}{r^2}\, \delta(\vec y). \end{align} In the last step we have introduced $\vec P = P (\cos\chi,\sin\chi)$. Similarly, the rest-frame potential takes the form \begin{equation} \label{V_0} V_0(\vec x,\vec x \ ') = \frac{K^2}{2 \rho_s} \frac{1}{(2 \pi)^4} \int d^2q \ d^2k \ \frac{\left(2 q_1 k_1 - 2 q_2 k_2\right)^2}{q_1^2 + q_2^2} \exp(i \vec q \cdot \vec r) \exp(i \vec k \cdot \vec y). \end{equation} The $k$-integration results in the second derivative of a $\delta$-function which again implies $\vec x \ ' = - \vec x$ as well as $\vec r = \vec x$.
Hence, also the rest-frame potential is local and one can write \begin{equation} V_0(\vec x,\vec x \ ') = V_{ij}(\vec r) \partial_{y_i} \partial_{y_j} \delta(\vec y), \end{equation} with $V_{ij}(\vec r)$ implicitly defined through eq.(\ref{V_0}). In order to figure out how $V_0(\vec x,\vec x \ ')$ acts on a wave function we calculate \begin{align} \langle \Phi|V_0|\Psi \rangle&=\int d^2x \ d^2x' \ \langle \Phi|\vec x \rangle V_0(\vec x,\vec x \ ') \langle \vec x \ '|\Psi\rangle \nonumber \\ &=\int d^2x \ d^2x' \ \langle \Phi|\vec x \rangle V_{ij}(\vec r) \partial_{y_i} \partial_{y_j} \delta(\vec y) \langle \vec x \ '|\Psi\rangle \nonumber \\ &=\int d^2r \ d^2y \ \langle \Phi|\frac{\vec y}{2} + \vec r \rangle V_{ij}(\vec r) \partial_{y_i} \partial_{y_j} \delta(\vec y) \langle \frac{\vec y}{2} - \vec r|\Psi\rangle \nonumber \\ &=\frac{1}{4} \int d^2r \ V_{ij}(\vec r) \partial_{r_i} \partial_{r_j} (\langle \Phi|\vec r \rangle \langle - \vec r|\Psi\rangle) \nonumber \\ &=\frac{1}{4} \int d^2r \ \langle \Phi|\vec r \rangle \left[\partial_{r_i} \partial_{r_j} V_{ij}(\vec r)\right] \langle - \vec r|\Psi\rangle. \end{align} It is now straightforward to convince oneself that \begin{equation} \frac{1}{4} \partial_{r_i} \partial_{r_j} V_{ij}(\vec r) = \frac{6 K^2}{\pi \rho_s} \frac{r_1^4 - 6 r_1^2 r_2^2 + r_2^4}{r^8} = \frac{6 K^2}{\pi \rho_s} \frac{\cos(4 \varphi)}{r^4}. \end{equation} Altogether, in coordinate space the resulting potential is hence given by \begin{equation} \langle \vec r_+ \ \!\!\!\! ' \vec r_- \ \!\!\!\! '|V |\vec r_+ \vec r_-\rangle = \frac{K^2}{2 \pi \rho_s} \left[12 \frac{\cos(4 \varphi)}{r^4} + \frac{P^2}{2} \frac{\cos\big(2 (\varphi + \chi)\big)}{r^2} \right] \delta(\vec r_+ - \vec r_- \ \!\!\!\! ') \ \delta(\vec r_- - \vec r_+ \ \!\!\!\! '). \end{equation} \end{appendix}
\section{Introduction} Quantum error correcting codes (QECCs) protect qubits from detrimental noise by redundantly storing quantum states in multiple parties \cite{Lidar2013}. The basic idea is illustrated in Fig.~\ref{fig:drawing}. An environment induces noise in a system $S$, which is modeled as a quantum channel $\mathcal{E}_H$, as in Fig.~\ref{fig:drawing}(a). In order to protect it, the state of the system is encoded into a larger Hilbert space by introducing additional ancillas. Both system and ancillas are now susceptible to the noise process. But by applying appropriate correction measures, one may mitigate this noise at the expense of making the final state of the ancillas more mixed (Fig.~\ref{fig:drawing}(b)). The connection between QECCs and thermodynamics has been discussed for quite some time in relation to Landauer's erasure and Maxwell's Demon \cite{Barbara1998,Vedral2000a}. However, an interesting connection which, to our knowledge, has never been explored, is that with quantum heat engines (QHE) \cite{Kosloff2014,Mitchison2019}. This connection becomes precise in the case of operator error correction \cite{Kribs2005,Kribs2006,Clemens2006,Tomita2011}, where no syndrome measurements are required. The diagram in Fig.~\ref{fig:drawing} is then seen to be entirely analogous to a quantum heat engine undergoing an Otto cycle: The ``working fluid'' is composed of both system and ancillas. The encoding, decoding and correction steps are the unitary strokes, involving the expenditure of work without any heat flow. The noise term represents the action of the hot bath. And finally, the recycling step, where the states of the ancillas are reset, represents the cold bath. In view of this striking similarity, one is naturally led to ask how far this connection can be pushed. Of course, in the end, the goal of a QECC is entirely different from that of a QHE.
Efficiency, for instance, has nothing to do with work extraction, but with the ability of the code to correct the error. Notwithstanding these fundamental differences, an analysis of a QECC from a thermodynamic perspective is still illuminating, as it allows one to address the roles of heat and work in the error correcting process. Particularly interesting is the question of the work cost of encoding and decoding quantum information, as compared to the cost of applying an error correction. For instance, is it possible to successfully apply an error correction and still extract useful work from the machine? Or does the success of the QECC necessarily involve the expenditure of work by an external agent? \begin{figure}[!h] \centering \includegraphics[width=0.45\textwidth]{drawing.pdf} \caption{\label{fig:drawing} Typical error correcting scenario. (a) A state $\rho_S$ is susceptible to error, described by a quantum channel $\mathcal{E}_H(\rho_S)$. (b) In order to protect it, $\rho_S$ is first encoded into ancillas $A$. After both undergo individual errors $\mathcal{E}_H$, the state of the system is decoded from $S+A$ and a set of correction measures are applied, leading to a final state $\rho_S'$ for the system. This procedure makes the ancillas dirty; they must then be recycled if they are to be used again. (c) The code is considered successful (at the ensemble level) whenever it mitigates the role of the noise, which means $D(\rho_S',\rho_S) < D(\mathcal{E}_H(\rho_S), \rho_S)$, where $D$ is any distance measure. This, of course, will be the case only if the effect of $\mathcal{E}_H$ is sufficiently small. } \end{figure} With these motivations in mind, in this paper we put forth a complete thermodynamic characterization of QECCs implemented using the operator error correction scheme \cite{Kribs2005,Kribs2006}.
We begin by considering the general thermodynamic properties, including a reformulation of the first and second laws for the specific QECC scenario. Next we apply these results to two representative examples. The first is a 3-qubit classical error correction, capable of correcting incoherent states. The second is the fully quantum 9-qubit Shor code, which can simultaneously handle both incoherent and coherent states. \section{Formal Framework} In this section we provide a general characterization of the thermodynamic properties of the QECC in Fig.~\ref{fig:drawing}. We begin by describing the basic strokes of the cycle and then move on to characterize it in terms of the first and second laws of thermodynamics. \subsection{Description of the cycle} We assume the main system $S$ is a qubit with computational basis $|0\rangle, |1\rangle$ and Hamiltonian $H_S = \frac{\Omega}{2} (1- \sigma_z^S)$ (so that $|0\rangle$ is the ground-state). The code involves coupling the system to a set of ancillas, which we shall henceforth assume to be identical, with Hamiltonian $H_{A_i} = \frac{\omega}{2} (1- \sigma_z^{A_i})$. The ancillas are always prepared in the ground state $|0\rangle$, so that the global initial state of $N$ ancillas is $\rho_A = |0\rangle\langle 0 |^{\otimes N}$. Below, when convenient, we shall assume for simplicity that $\omega = \Omega$. In this paper we will consider 4-stroke codes; we now explain each stroke in detail. The first stroke is the \emph{encoding stroke}, where the system density matrix $\rho_S$ is encoded in the ancillas by means of a unitary $U_e$, \begin{equation}\label{stroke_1} \rho_{SA}^{(1)} = U_e \big( \rho_S \otimes \rho_A \big) U_e^\dagger. \end{equation} The second stroke is the \emph{error (noise) stroke}, where both $S$ and $A$ are subject to local noise channels.
In order to highlight the connection with thermodynamics, we consider the noise generated by the generalized amplitude damping (GAD) channel, \begin{equation}\label{GAD} \mathcal{E}_H(\rho) = \sum\limits_{k=1}^4 M_k \rho M_k^\dagger, \end{equation} where \begin{IEEEeqnarray}{LL} M_1 = \sqrt{1-f} \begin{pmatrix} 1 & 0 \\ 0 & \sqrt{1-\gamma}\end{pmatrix}, & M_2 = \sqrt{1-f} \begin{pmatrix} 0 & \sqrt{\gamma} \\ 0 & 0\end{pmatrix}, \nonumber \\[0.2cm] M_3 = \sqrt{f} \begin{pmatrix} \sqrt{1-\gamma} & 0 \\ 0 & 1\end{pmatrix}, & M_4 = \sqrt{f} \begin{pmatrix} 0 & 0 \\ \sqrt{\gamma} & 0\end{pmatrix}. \end{IEEEeqnarray} Here $\gamma \in [0,1]$ is the coupling strength and $f$ is the excited-state probability (a Fermi-Dirac distribution). If $f = 0$ the map will target the ground-state $|0\rangle$. Since $S$ and $A$ have different frequencies, we will use the notation $f_x = (e^{\beta_H x} + 1)^{-1}$, with $x = \Omega, \omega$ and $\beta_H$ being the inverse temperature of the hot bath. Error correction is mostly successful when the noise strength $\gamma \ll 1$, which we shall assume throughout this paper. Moreover, following customary treatments of error correction, all results for specific codes will be presented in terms of a power series, only to leading order in $\gamma$. The state after the second stroke will be \begin{equation}\label{stroke_2} \rho_{SA}^{(2)} = \mathcal{E}_H^S \otimes \mathcal{E}_{H}^{\otimes N} (\rho_{SA}^{(1)}). \end{equation} The third stroke is the \emph{decoding/correction} operation. This will again be described by a unitary $U_{dc}$, which in general cannot be split as the product of two unitaries for decoding and correction. The state after the third stroke will be \begin{equation}\label{stroke_3} \rho_{SA}^{(3)} = U_{dc} \rho_{SA}^{(2)} U_{dc}^\dagger. \end{equation} Finally, the fourth stroke is the \emph{recycling} stroke, where the ancillas interact with a cold bath and nothing is done to the system.
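Before turning to the recycling stroke in detail, a quick numerical sanity check on the GAD channel of eq.(\ref{GAD}) may be useful. The Python sketch below (with arbitrarily chosen values of $\gamma$ and $f$) verifies the Kraus completeness relation $\sum_k M_k^\dagger M_k = 1$, i.e.\ trace preservation, and that the thermal state $\mathrm{diag}(1-f, f)$ is a fixed point of the map:

```python
import numpy as np

g, f = 0.3, 0.2   # illustrative noise strength gamma and excitation probability

s = np.sqrt
M_ops = [s(1 - f) * np.array([[1, 0], [0, s(1 - g)]]),
         s(1 - f) * np.array([[0, s(g)], [0, 0]]),
         s(f) * np.array([[s(1 - g), 0], [0, 1]]),
         s(f) * np.array([[0, 0], [s(g), 0]])]

# Completeness relation: sum_k M_k^dagger M_k = identity (trace preservation).
assert np.allclose(sum(M.conj().T @ M for M in M_ops), np.eye(2))

def gad(rho):
    # Apply the GAD channel in its Kraus representation.
    return sum(M @ rho @ M.conj().T for M in M_ops)

# The thermal state diag(1-f, f) is a fixed point of the map; for f = 0 it
# reduces to the ground state |0><0|, as stated in the text.
rho_th = np.diag([1 - f, f])
assert np.allclose(gad(rho_th), rho_th)
```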
This stroke can also be viewed as the action of a GAD~(\ref{GAD}), but with $\gamma = 1$ and $f = 0$. However, this is not necessary since its effect is simply to reset the state of the ancillas. Hence, after the fourth stroke the global state will be \begin{equation}\label{stroke_4} \rho_{SA}^{(4)} = \rho_S^{(3)} \otimes \rho_A, \end{equation} where $\rho_{S}^{(3)} = \tr_A \rho_{SA}^{(3)}$ is the state of $S$ after the third stroke and $\rho_A$ was the initial state of the ancillas. \subsection{Error correcting efficiency} For conciseness, we shall denote by $\rho_S' = \rho_S^{(3)} = \rho_{S}^{(4)}$ the final state of the system after one cycle. Thus, from a global perspective the input state of the engine is $\rho_S \otimes \rho_A$ and the output state is $\rho_S' \otimes \rho_A$. We therefore see that, in general, the engine's operation is not cyclic (i.e., it has not reached a limit cycle). This, actually, is precisely what quantifies the efficiency of the error correcting code, as the goal of the engine is to have $\rho_S'$ as close as possible to $\rho_S$. Motivated by this, one can define the efficiency of the QECC as follows. Let $D(\rho,\sigma)$ denote any proper distance measure between quantum states. To address the success of a QECC, one must compare $\rho_S'$ with the state $\mathcal{E}_H(\rho_S)$ which one would obtain if only the error map $\mathcal{E}_H$ were to be applied to the state. A QECC can be declared successful (at the ensemble level) if \begin{equation} D(\rho_S', \rho_S) < D(\mathcal{E}_H(\rho_S), \rho_S), \end{equation} since this implies that the effect of the noise was at least partially mitigated by the code. Hence, a proper measure of the efficiency of a QECC could be, for instance, \begin{equation}\label{efficiency} \eta_\text{\tiny QECC} = 1 - \frac{D(\rho_S',\rho_S)}{D(\mathcal{E}_H(\rho_S),\rho_S)}.
\end{equation} This quantity is 1 when the correction is perfect, zero when the correction has no effect and negative when the code actually makes things worse. It resembles the thermodynamic efficiency, but is purely information-theoretic. Below we will not need this specific form of the QECC efficiency in order to construct the cycle's thermodynamics. We have presented it here simply to emphasize that the QECC efficiency is, in general, not at all related to any thermodynamic efficiency. \subsection{First law of thermodynamics} The operations described by the four strokes in Eqs.~(\ref{stroke_1})-(\ref{stroke_4}) are essentially implementing an Otto cycle. Strokes 1 and 3 are unitary, involving the possible expenditure of work, but without any exchange of heat. Similarly, strokes 2 and 4 are purely dissipative, involving only the exchange of heat and no work. The expressions for the heat and work in each stroke are thus easily calculated as the corresponding changes in energy, $W_e = \Delta H_{10}$, $Q_H = \Delta H_{21}$, $W_{dc} = \Delta H_{32}$ and $Q_C = \Delta H_{43}$, where $H = H_S + H_A$ is the total Hamiltonian and $\Delta H_{i,i-1} = \tr\big\{ H(\rho_{SA}^{(i)} - \rho_{SA}^{(i-1)})\big\}$ is the total change in energy in stroke $i$. The decoding/correction stroke~(\ref{stroke_3}) is described by a unitary $U_{dc}$ which can be decomposed as a riffling $U_{dc} = U_d^{(1)} U_c^{(1)} U_d^{(2)} U_c^{(2)}\ldots$, where $U_d^{(i)}$ and $U_c^{(i)}$ represent decoding and correcting steps, respectively. Based on this, the work $W_{dc}$ can also be split as $W_{dc} = W_d + W_c$, giving the individual contributions from each part of the code. The ancillas are reset after each cycle, but the system is not. As a consequence, the first law of thermodynamics reads \begin{equation}\label{first_law} \Delta U_S = W_e + Q_H + W_{dc} + Q_C, \end{equation} where $\Delta U_S = \tr\big\{ H_S (\rho_S' - \rho_S)\big\}$ is the change in energy of the system only.
Since the total Hamiltonian is split as $H = H_S + H_A$, the same may also be done for all heat and work contributions. Thus, we may also write the first law as \begin{equation}\label{first_law_2} \Delta U_S = \big( W_e^S + Q_H^S + W_{dc}^S\big) + \big( W_e^A + Q_H^A + W_{dc}^A + Q_C^A\big), \end{equation} where we used the fact that the heat to the cold bath only has an ancilla part. But since the Hamiltonian contains no $SA$ interaction term and since the states of the ancillas are reset, it follows that the last term must be identically zero. Hence, the first law can be written solely as a system-related quantity: \begin{equation} \Delta U_S = W_e^S + Q_H^S + W_{dc}^S. \end{equation} \subsection{Second law for the noise stroke} One can also write down the second law of thermodynamics for the QECC. The heating stroke 2 involves a standard finite-temperature amplitude damping, for which the expression for the entropy production is very well established \cite{Esposito2010a,Reeb2014,Brandao2015,Manzano2017a,Strasberg2016} and reads \begin{equation}\label{Sigma} \Sigma_H = \Delta S_{21} - \beta_H Q_H \geq 0, \end{equation} where $\Delta S_{21} = S(\rho_{SA}^{(2)}) - S(\rho_{SA}^{(1)})$ is the change in von Neumann entropy ($S(\rho) = -\tr(\rho \ln \rho)$) in stroke 2. The positivity of $\Sigma_H$ is a direct consequence of the data processing inequality \cite{Breuer2003}. This expression can be manipulated so as to better highlight the physical origins of the irreversibility associated with the QECC cycle. Since stroke 3 is unitary, it follows that $S(\rho_{SA}^{(2)}) = S(\rho_{SA}^{(3)})$. Moreover, we can write \[ S(\rho_{SA}^{(3)}) = S(\rho_S') + S(\rho_A^{(3)}) - \mathcal{I}^{(3)}(S:A), \] where $\mathcal{I}^{(3)}(S:A)$ is the mutual information between system and ancilla in the state $\rho_{SA}^{(3)}$. Similarly, since the first stroke is unitary, $S(\rho_{SA}^{(1)}) = S(\rho_S) + S(\rho_A) = S(\rho_S)$, as the ancillas are taken to be in a pure state.
Whence, the entropy production~(\ref{Sigma}) can be written as \begin{equation}\label{Sigma_2} \Sigma_H = \Delta S_S + S(\rho_A^{(3)}) - \mathcal{I}^{(3)}(S:A) - \beta_H Q_H \geq 0. \end{equation} This is an important result. The first term is the total change in entropy of the system, $\Delta S_S = S(\rho_S') - S(\rho_S)$. It is precisely one of the goals of the QECC to minimize $\Delta S_S$. The second term in Eq.~(\ref{Sigma_2}) is the entropy increase in the ancillas. As a byproduct of the QECC, the ancillas become dirty, which is precisely quantified by this term. Hence, $S(\rho_A^{(3)})$ will be exactly the amount of entropy that has to be cleaned up in the last recycling stroke. The third term in Eq.~(\ref{Sigma_2}) is the \emph{residual} mutual information that still remains between system and ancilla after the decoding/correction stroke. In the limit of perfect correction, the system would return to $\rho_S$, so that $\mathcal{I}^{(3)}(S\!:\! A) = 0$. Hence, $\mathcal{I}^{(3)}(S\!:\!A)$ represents the shared information that remained in the state $\rho_{SA}^{(3)}$ which the correcting scheme was unable to remove. This mutual information appears with a negative sign, hence contributing to make the process more reversible. The reason for this lies in the fact that before the recycling stroke $\mathcal{I}^{(3)}(S:A)$ is still in principle accessible. As we shall see below, once one includes the recycling stroke, however, these correlations are irretrievably lost. Finally, the last term in Eq.~(\ref{Sigma_2}) is the heat flow to the hot bath. Since $Q_H = Q_H^S + Q_H^A$, we may also write~(\ref{Sigma_2}) more symmetrically as \begin{equation}\label{Sigma_3} \Sigma_H = (\Delta S_S - \beta_H Q_H^S) + (S(\rho_A^{(3)}) - \beta_H Q_H^A) - \mathcal{I}^{(3)}(S\!: \!A), \end{equation} which is clearly split into two local contributions, plus a genuinely non-local term. 
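The positivity of $\Sigma_H$ can be illustrated explicitly in the smallest possible setting: one system qubit, one ancilla, a CNOT standing in for the encoding unitary, and the GAD channel at the hot-bath temperature. The following Python sketch (all parameter values are assumptions chosen for illustration only) evaluates eq.(\ref{Sigma}) directly:

```python
import numpy as np

def vn_entropy(rho):
    # von Neumann entropy in nats.
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log(ev)))

# Illustrative parameters: Omega = omega = 1, hot-bath inverse temperature beta_H.
W, beta_H, g = 1.0, 1.5, 0.25
f = 1.0 / (np.exp(beta_H * W) + 1.0)       # Fermi-Dirac excitation probability

s = np.sqrt
kraus = [s(1 - f) * np.diag([1.0, s(1 - g)]),
         s(1 - f) * np.array([[0, s(g)], [0, 0]]),
         s(f) * np.diag([s(1 - g), 1.0]),
         s(f) * np.array([[0, 0], [s(g), 0]])]

# Total Hamiltonian H = H_S + H_A, with H_i = (W/2)(1 - sigma_z) = diag(0, W).
I2 = np.eye(2)
H = np.kron(np.diag([0.0, W]), I2) + np.kron(I2, np.diag([0.0, W]))

# Stroke 1: encode an arbitrary pure state into the ancilla with a CNOT.
psi = np.array([0.8, 0.6])
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)
rho1 = CNOT @ np.kron(np.outer(psi, psi), np.diag([1.0, 0.0])) @ CNOT.T

# Stroke 2: local GAD noise on both system and ancilla.
rho2 = sum(np.kron(Ma, Mb) @ rho1 @ np.kron(Ma, Mb).conj().T
           for Ma in kraus for Mb in kraus)

Q_H = np.trace(H @ (rho2 - rho1)).real
Sigma_H = vn_entropy(rho2) - vn_entropy(rho1) - beta_H * Q_H
assert Sigma_H > 0.0     # second law for the noise stroke
```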
\subsection{Second law for the recycling stroke} One can similarly write down the second law for the interaction with the cold bath. In this case, however, an equation of the form~(\ref{Sigma}) would give diverging results, as $\beta_C = \infty$. This pathological behavior of the entropy production in the limit of zero temperature is a known issue, which was discussed for instance in Refs~\cite{Santos2018,Santos2017b,Santos2018a}. To circumvent, one must provide additional details on the environment interaction generating the map. We therefore assume that each ancilla $A_i$ is coupled to a corresponding environment $E_i$ (not necessarily qubits) prepared in a pure state $|0\rangle_{E_i}$, while the system $S$ is not coupled to anything. We assume in this stroke that the ancillas are fully reset back to $|0\rangle_{A_i}$, which means that each $A_iE_i$ interaction must have the form of a full SWAP. With this proviso, the recycling stroke may be written as the map composition \begin{equation} \rho_{SA}^{(4)} = \mathcal{E}_C^{(A_1)} \otimes \ldots \otimes \mathcal{E}_C^{(A_N)} (\rho_{SA}^{(3)}), \end{equation} where \begin{equation} \mathcal{E}_C^{(A_i)}(\rho) = \tr_{E_i} \bigg\{ U_{A_i,E_i}^\text{SWAP} \bigg( \rho \otimes |0\rangle\langle 0 |_{E_i} \bigg) (U_{A_i,E_i}^\text{SWAP})^\dagger \bigg\}, \end{equation} is the Stinespring dilation for a map acting only on ancilla $A_i$. With this specific representation for the recycling stroke, we can now propose a formula for the entropy production. Namely, based on Refs.~\cite{Strasberg2016,Esposito2010a,Manzano2017a}, we define the entropy production as being only the mutual information between $SA$ and the cold environment $E$. \begin{equation}\label{Sigma_C} \Sigma_C = \mathcal{I}(SA:E) = S(\rho_{SA}^{(4)}) + S(\rho_E') - S(\rho_{SAE}'), \end{equation} where $\rho_{SAE}'$ denotes the global state of system, ancillas and cold environment after the map, with $\rho_E'$ being the corresponding reduced density matrix. 
Within the context of dilated unitary maps, entropy production is often defined with an additional term, proportional to the relative entropy between the initial and final states of the environment \cite{Esposito2010a}. In fact, quite recently this extra term was shown to be extremely important in a large variety of models \cite{Ptaszynski2019b}. However, in the case of zero temperature, it gives a diverging result since the initial state of the environment is pure. The expression~(\ref{Sigma_C}), which is discussed also in \cite{Strasberg2016,Manzano2017a}, is a choice that does not suffer from this pathology. Since the global $SAE$ dynamics is unitary, it follows that $S(\rho_{SAE}') = S(\rho_{SA}^{(3)}) + S(\rho_E) = S(\rho_{SA}^{(3)})$. Moreover, we are assuming full thermalization so that $S(\rho_{SA}^{(4)}) = S(\rho_{S}^{(3)})$. And, finally, again because of the assumption of full thermalization, $S(\rho_E') = S(\rho_A^{(3)})$. Whence, we conclude that Eq.~(\ref{Sigma_C}) may also be written as \begin{equation}\label{Sigma_C_2} \Sigma_C = S(\rho_S^{(3)}) + S(\rho_{A}^{(3)}) - S(\rho_{SA}^{(3)}) = \mathcal{I}^{(3)} (S\! : \! A). \end{equation} This result shows that the entropy production in the cold stroke is nothing but the residual mutual information that was developed between system and ancillas in the previous strokes, and which is lost due to the action of the cold bath. This is the same residual mutual information appearing in Eq.~(\ref{Sigma_3}). Combining Eqs.~(\ref{Sigma_3}) and (\ref{Sigma_C_2}) then finally leads to a formula for the total entropy production in the QECC engine: \begin{IEEEeqnarray}{rCl} \Sigma &=& \Sigma_H + \Sigma_C \nonumber\\[0.2cm] &=& \Delta S_S - \beta_H Q_H^S + S(\rho_A^{(3)}) - \beta_H Q_H^A. \end{IEEEeqnarray} Whence, the total entropy production is found to contain only \emph{local} contributions, referring to the changes taking place in system and ancilla. 
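The identity $\Sigma_C = \mathcal{I}^{(3)}(S\!:\!A)$ involves only von Neumann entropies of the joint and reduced density matrices. As an illustrative numerical check (a sketch of our own, not part of the paper's derivation; the function names are ours), the mutual information of any two-qubit state can be evaluated directly:

```python
import numpy as np

def von_neumann(rho):
    """Von Neumann entropy S(rho) in bits."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]          # discard numerically zero eigenvalues
    return float(-np.sum(w * np.log2(w)))

def partial_trace(rho, keep):
    """Reduced state of a two-qubit density matrix; keep=0 keeps the first qubit."""
    r = rho.reshape(2, 2, 2, 2)  # indices (s, a, s', a')
    return np.trace(r, axis1=1, axis2=3) if keep == 0 else np.trace(r, axis1=0, axis2=2)

def mutual_info(rho_sa):
    """I(S:A) = S(rho_S) + S(rho_A) - S(rho_SA)."""
    return (von_neumann(partial_trace(rho_sa, 0))
            + von_neumann(partial_trace(rho_sa, 1))
            - von_neumann(rho_sa))

# Two test states: an uncorrelated product state and a Bell state.
rho_prod = np.kron(np.diag([0.7, 0.3]), np.diag([0.5, 0.5]))
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho_bell = np.outer(bell, bell)
```

For the product state the routine returns zero, while for the maximally entangled Bell state it returns 2 bits, the maximum for two qubits.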
\begin{figure} \centering \includegraphics[width=0.45\textwidth]{classical_a.pdf} \caption{\label{fig:classical} The classical error correcting algorithm capable of correcting for diagonal states of the system. } \end{figure} \section{\label{sec:classical}Classical error correcting engine} In the remainder of the paper, we apply our results to two specific QECCs. To start, we consider the particularly illuminating case of \emph{classical} error correction. That is, we first consider the protection of diagonal states (in the computational basis) of the form \begin{equation}\label{classical_state} \rho_S = (1-p) |0\rangle\langle 0 | + p |1\rangle \langle 1 |, \qquad p \in [0,1]. \end{equation} This state can be regarded as classical as far as the amplitude damping channel is concerned since, in the sense of einselection \cite{Zurek1981,Zurek2003b}, the amplitude damping chooses the computational basis as a preferred basis. The effects of the amplitude damping on the state~(\ref{classical_state}) can be corrected by the 3-qubit majority voting scheme shown in Fig.~\ref{fig:classical}(a). The encoding unitary $U_e$ is composed of a double CNOT, \begin{equation} U_e = |0\rangle\langle 0 |_S \otimes I_{A_1} \otimes I_{A_2}+|1\rangle\langle 1 |_S \otimes X_{A_1} \otimes X_{A_2}, \end{equation} where $X=\sigma_x$ is the Pauli operator. Moreover, the decoding/correction unitary $U_{dc}$ in this case is factored into a product of two terms, $U_{dc} = U_d U_c$, with $U_d = U_e$ and $U_c$ being a Toffoli gate. All strokes can be computed using standard symbolic algebra. We begin by considering the fidelity between the final and initial states of the system, with and without the QECC. In this case we assume for simplicity that $\omega = \Omega$, so we can set $f_\Omega = f_\omega \equiv f$. If no QECC is applied we find, to leading order in the noise strength $\gamma$, \begin{equation} F(\mathcal{E}_H(\rho_S), \rho_S) \simeq 1 - \frac{\gamma ^2 }{4 (1-p) p}(f-p)^2. 
\end{equation} As expected, the fidelity is unity if $p = f$, in which case the system already starts with the same population as the environment. Conversely, if the QECC is applied to protect the system one finds that \begin{equation} F(\rho_S', \rho_S) \simeq 1 - \frac{9\gamma^4}{4(1-p)p} \bigg[ p(1-2f)-f^2(1-2p) \bigg]^2. \end{equation} We see that the leading term in the fidelity when the QECC is applied now becomes $\sim\gamma^4$, as compared to $\gamma^2$ without the QECC. This neatly shows how error correction behaves at the ensemble level. Let us now compute the efficiency defined in Eq.~(\ref{efficiency}). As a proper distance measure we use the Bures distance squared, \begin{equation} D^2(\rho, \sigma) = 2\bigg(1- \sqrt{F(\rho,\sigma)}\bigg). \end{equation} The efficiency~(\ref{efficiency}), to leading order in $\gamma$, then becomes \begin{equation} \label{eq:effic_3qbs} \eta_\text{\tiny QECC} \simeq 1 - \frac{ 9 \gamma^2}{(f-p)^2} \bigg[ p(1-2f)-f^2(1-2p) \bigg]^2. \end{equation} We see that error correction becomes problematic when $p \to f$, as in this case the effect of the channel becomes trivial, so that there is no error to correct. Fig.~\ref{fig:effic_3qbs_1} shows the behavior of the efficiency using Eq.~(\ref{eq:effic_3qbs}). \begin{figure}[!ht] \centering \includegraphics[width=0.5\textwidth]{fig1_Color.pdf} \caption{\label{fig:effic_3qbs_1} Efficiency for states with $p=0.01$ (dash-dot), $p=0.99$ (dash), $p=0.5$ (dot) and $p=0.25$ (solid). For all cases $f=0.2$. } \end{figure} Next we present the heat and work in each step, which we divide into contributions from the system and from the ancillas. 
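For the diagonal states considered in this section the Uhlmann fidelity reduces to its classical form, so the distance-based efficiency can be evaluated in a few lines. The sketch below is our own; the helper names are ours, and writing the efficiency as one minus a ratio of squared Bures distances is our reading of Eq.~(\ref{efficiency}), consistent with the leading-order expression~(\ref{eq:effic_3qbs}):

```python
import math

def fidelity_diag(p, q):
    """Uhlmann fidelity of the commuting qubit states diag(p, 1-p) and diag(q, 1-q);
    for diagonal states it reduces to the classical fidelity."""
    return (math.sqrt(p * q) + math.sqrt((1 - p) * (1 - q))) ** 2

def bures_sq(F):
    """Squared Bures distance from the fidelity, D^2 = 2(1 - sqrt(F))."""
    return 2.0 * (1.0 - math.sqrt(F))

def efficiency(p_corrected, p_noisy, p_initial):
    """eta = 1 - D^2(corrected, initial) / D^2(noisy, initial), all states diagonal."""
    d_corr = bures_sq(fidelity_diag(p_corrected, p_initial))
    d_noise = bures_sq(fidelity_diag(p_noisy, p_initial))
    return 1.0 - d_corr / d_noise
```

Perfect correction ($\rho_S' = \rho_S$) gives $\eta = 1$, while a correction stroke that does nothing ($\rho_S' = \mathcal{E}_H(\rho_S)$) gives $\eta = 0$.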
The contributions referring to the system, again, to leading order in $\gamma$, are \begin{IEEEeqnarray}{rCl} \label{classical_We_S} W_e^S &=& 0 , \\[0.2cm] \label{classical_QH_S} Q_H^S &=& \gamma \Omega (f_\Omega - p) , \\[0.2cm] \label{classical_Wd_S} W_d^S &=& 0,\\[0.2cm] \label{classical_WcS} W_c^S &\simeq& - \gamma \Omega(f_\Omega-p), \end{IEEEeqnarray} whereas the contributions from the ancillas are \begin{IEEEeqnarray}{rCl} \label{classical_We_A} W_e^A &=& 2 p \omega, \\[0.2cm] \label{classical_QH_A} Q_H^A &=& 2 \gamma \omega (f_\omega - p), \\[0.2cm] \label{classical_Wd_A} W_d^A &\simeq& - 2 p \omega + 2 \gamma \omega \bigg[ f_\Omega +p (3- 2 f_\omega - 2 f_\Omega)\bigg], \\[0.2cm] \label{classical_Wc_A} W_c^A &=& 0, \\[0.2cm] \label{classical_QC_A} Q_C^A &\simeq& -2 \gamma \omega \bigg[ f_\Omega +p (3- 2 f_\omega - 2 f_\Omega)\bigg] - 2 \gamma \omega(f_\omega - p). \IEEEeqnarraynumspace \end{IEEEeqnarray} The physics behind each term is quite interesting. First, the work $W_e$ of the encoding stroke is only associated with the cost of putting the two ancillas in the excited state with probability $p$. Next, the heat that flows to the hot bath is proportional to the population mismatches $f_\Omega - p$ and $f_\omega - p$. It may thus have any sign depending on the initial value of $p$. Hence, it is very well possible for heat to flow from $SA$ to the hot bath rather than the other way around. Particularly interesting is now the analysis of the decoding and correction strokes, $W_d$ and $W_c$. The decoding work $W_d$ has a zeroth order contribution from the ancillas, which is \emph{minus} the encoding work, $2p\omega$. If there were no noise, then the process would be entirely reversible. However, due to the hot bath a new contribution appears. This new contribution, however, appears only in the ancilla, as $W_d^S = 0$. Moreover, this new term is always non-negative since positive temperatures imply $f \in [0,1/2]$. 
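The bookkeeping above can be checked numerically: to linear order in $\gamma$ all heat and work contributions of the cycle sum to zero. The snippet below is a consistency sketch of our own (the variable names are ours; the expressions are copied verbatim from the system and ancilla lists above):

```python
import random

def cycle_sum(g, W, w, p, fW, fw):
    """Sum of all leading-order heat and work terms of the classical 3-qubit code.
    g = gamma (noise strength), W = Omega, w = omega, fW = f_Omega, fw = f_omega."""
    # system contributions
    We_S = 0.0
    QH_S = g * W * (fW - p)
    Wd_S = 0.0
    Wc_S = -g * W * (fW - p)
    # ancilla contributions
    We_A = 2 * p * w
    QH_A = 2 * g * w * (fw - p)
    Wd_A = -2 * p * w + 2 * g * w * (fW + p * (3 - 2 * fw - 2 * fW))
    Wc_A = 0.0
    QC_A = -2 * g * w * (fW + p * (3 - 2 * fw - 2 * fW)) - 2 * g * w * (fw - p)
    return (We_S + QH_S + Wd_S + Wc_S
            + We_A + QH_A + Wd_A + Wc_A + QC_A)

# one spot check; the sum vanishes for any parameter values at this order
assert abs(cycle_sum(0.05, 1.0, 0.8, 0.3, 0.2, 0.25)) < 1e-12
```

The cancellation is term-by-term: $Q_H^S$ against $W_c^S$, $W_e^A$ against the zeroth-order part of $W_d^A$, and $Q_H^A$ plus the $\gamma$-order part of $W_d^A$ against $Q_C^A$.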
Hence, we see that the total work of the encoding/decoding process satisfies $W_e + W_d >0$. It \emph{costs} work to encode and decode information when this information is scrambled by the GAD. The correction work $W_c$, on the other hand, is seen to be related only to changes in the system, and is precisely minus the heat flow $Q_H^S$ between system and hot bath. As a consequence, the total work performed in one cycle, $W_\text{tot} = W_e + W_d + W_c$, will be \begin{equation} W_\text{tot} = \Omega \gamma (p-f_\Omega) + 2 \gamma \omega \bigg[ p (3-2 f_\omega) + f_\Omega (1-2p)\bigg]. \end{equation} The second term is always non-negative, but the first term may have any sign whatsoever. And the step responsible for this is the correction stroke. Thus, while it always costs work to encode/decode information, correcting the state may lead to either a surplus or a deficit of work. To linear order in $\gamma$, it follows that $W_e + Q_H + W_d + W_c + Q_C \simeq 0$. Referring to the first law in Eq.~(\ref{first_law}), this does not mean that the process is cyclic. Instead, it means that the first non-zero contribution to $\Delta U_S$ is of order $\gamma^2$: \begin{equation} \Delta U_S = \gamma^2 \Omega \bigg\{ f_\omega \bigg( f_\omega + 4 p - 2 p f_\omega\bigg) + 2 f_\Omega \bigg( f_\omega + p - 2 p f_\omega \bigg) - 3 p \bigg\}. \end{equation} Thus, even though heat and work are all of order $\gamma$, their net effect only contributes to the total change in energy with a term of order $\gamma^2$. \section{\label{sec:shor}Shor's 9-qubit code} \begin{figure}[!h] \centering \includegraphics[width=0.45\textwidth]{shor_a.pdf} \caption{\label{fig:shor} Shor's 9-qubit code, capable of correcting both the diagonal as well as the coherent parts of $\rho_S$ against any kind of noise. } \end{figure} The 3-qubit error correcting scheme considered in the previous section is only capable of correcting diagonal states in the computational basis. 
Coherences in this basis are not correctly processed. A code which is capable of correcting both incoherent and coherent contributions is Shor's famous 9-qubit code shown in Fig.~\ref{fig:shor} \cite{Shor1995} (see \cite{FonsecadeOliveira2017} for the implementation without syndrome measurements). This code is quite similar in spirit to the classical code in Fig.~\ref{fig:classical}. The key difference, however, is that the coherent components of $\rho_S$ are also properly encoded due to the application of the Hadamard gates $H = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}$. Moreover, notice that now the decoding and correction strokes get mixed together, which we separate in Fig.~\ref{fig:shor} with different colors. We consider a general quantum state of the system, parametrized in the form \begin{equation}\label{parametrization_quantum_state} \rho_S = \begin{pmatrix} p & z\sqrt{p(1-p)} \\[0.2cm] z^*\sqrt{p(1-p)} & 1-p \end{pmatrix}, \qquad |z| \leq 1. \end{equation} The state is pure when $|z| = 1$. For simplicity, we shall also assume that $\omega = \Omega$, as the calculations become much more complex otherwise. The work and heat in each stroke, for system and ancilla, are \begin{IEEEeqnarray}{rCl} W_e^S &=& \frac{\Omega}{2}(1-2p), \\[0.2cm] Q_H^S &=& - \frac{\gamma\Omega}{2} (1-2f), \\[0.2cm] \label{shor_Wd_S} W_d^S &\simeq& -\frac{\Omega}{2}(1-2p) + \frac{3\gamma \Omega }{4} (1-2p), \\[0.2cm] W_c^S &\simeq& -\frac{\gamma\Omega}{4} (1+4f - 6p), \end{IEEEeqnarray} and \begin{IEEEeqnarray}{rCl} W_e^A &=& 4\Omega, \\[0.2cm] Q_H^A &=& -4 \gamma\Omega(1-2f), \\[0.2cm] \label{shor_Wd_A} W_d^A &\simeq& - 4 \Omega + 6 \gamma \Omega(2-f), \\[0.2cm] W_c^A &\simeq& \gamma\Omega(1-2f), \\[0.2cm] Q_C^A &\simeq& - 9 \gamma \Omega. \end{IEEEeqnarray} Several comments are worth making about these results, particularly when comparing them with the classical results in Eqs.~(\ref{classical_We_S})-(\ref{classical_QC_A}). 
First and foremost, we see that all results are independent of the coherences $z$ in Eq.~(\ref{parametrization_quantum_state}). The reason for this is two-fold. First, the GAD is a thermal operation and therefore processes populations and coherences independently \cite{Cwikli2015, Santos2017b}. Secondly, the Hadamard gates in the encoding and decoding strokes (cf. Fig.~\ref{fig:shor}) act in a way such that $z$ is not present in the reduced density matrices of a single qubit. Hence, since all thermodynamic quantities involve local Hamiltonians, $z$ does not appear at all in the thermodynamic aspects of the code. Starting with the encoding stroke, we now see that it requires work in both system and ancillas to encode information. Moreover, the work cost in the ancillas is entirely independent of the state of the system: for input state $\rho_S$, it will always cost the same amount $W_e^A = 4\Omega$ to encode the data in the ancillas (the work cost in the system still depends on $p$). A similar, but perhaps even more surprising, result is that the heat to the hot bath, for \emph{both} system and ancilla, is entirely independent of the state of the system (this statement holds exactly, not only to leading order in $\gamma$). The heat flow is simply $-\gamma \Omega(1-2f)/2$ per qubit. This is again a consequence of the dramatic influence of the Hadamard gates in Shor's code, which makes it so that after the encoding stroke the reduced density matrices of all qubits are simply the identity. The work cost of decoding is similar to the classical case [compare Eqs.~(\ref{shor_Wd_S}) and (\ref{shor_Wd_A}) with Eqs.~(\ref{classical_Wd_S}) and (\ref{classical_Wd_A})]: there is a zeroth order contribution in $\gamma$ which is simply the reverse of the encoding work (again representing the reversible part of the process). We also see once again that the correction work can have any sign, as in the classical case. 
And, finally, we find that the heat to the cold bath is again entirely independent of the state of the system. On the other hand, the efficiency of the Shor correcting code defined in Eq.~(\ref{efficiency}) does depend on the state, as shown in Fig.~\ref{fig:shor_efic} for pure states ($|{z}|= 1$), represented as points of the Bloch sphere. \begin{figure}[!h] \centering \includegraphics[width=\columnwidth]{fig2_02y03.jpg} \caption{\label{fig:shor_efic} Shor's code efficiency for input states on the Bloch sphere (pure states), for $f = 0.2$, $\gamma = 0.02$ (a) and $\gamma = 0.03$ (b). } \end{figure} \section{\label{sec:disc}Discussions and Conclusions} The framework of operator error correction (Fig.~\ref{fig:drawing}) is formally equivalent to the cyclic operation of a heat engine. In this paper we aimed to explore this connection by putting forth a thermodynamic analysis of 4-stroke codes, which parallel an Otto engine. We emphasize, once again, that QECCs and heat engines have entirely different goals. In particular, for the operation of a QECC the work cost is only a marginal concern, as it is small compared to the energetics of any real experimental setup. That being said, the direction in which energy flows \emph{is} indeed important. Our analysis shows, for instance, that heat may very well flow \emph{from the system to the hot bath}, something which is counterintuitive. Indeed, this is a common misconception: neither the entropy change nor the heat has a well defined sign. What does have one is the \emph{entropy production}, Eq.~(\ref{Sigma}). Another interesting aspect of this thermodynamic analysis is the interplay between the encoding and decoding work costs. The decoding is always the reverse of the encoding operation. But the effect of the noise channel in the middle of the two steps makes the process irreversible, as it scrambles information. As a consequence, there is always a work cost associated with the encoding+decoding steps. 
Finally, we mention an alternative perspective of the problem. In our formulation, the working fluid was taken to be composed of both system and ancillas, which then interacted with a hot and a cold bath. Alternatively, one may interpret the system alone as the working fluid and the ancillas as a finite-sized cold bath. The problem with this formulation is that the system would then interact twice with this cold bath, which leads to questions related to non-Markovianity. The formulation presented here is a better fit for an actual engine. {\it Acknowledgements.--} The authors acknowledge fruitful correspondence with G. Guarnieri, G. Adesso, C. Boraschi, L. Knope and L. C\'eleri. GTL acknowledges the S\~ao Paulo Research Foundation (FAPESP) under grant 2018/12813-0. GTL acknowledges Universidad ORT Uruguay, where part of this work was developed, for both the hospitality and the financial support.
\section{Introduction} Neutron stars (NSs) result from the gravitational collapse of massive stars with $M \gtrsim 8 M_\odot$ at the end point of their evolution. They are among the most compact objects in the universe, with a central density which can reach several times the nuclear saturation density. At least three different regions can be identified in the interior of a NS: (i) the ``outer crust'', at densities above $\sim 10^4$ g~cm$^{-3}$, composed of fully ionized atoms, arranged in a Coulomb lattice of nuclei, neutralized by a degenerate electron gas, (ii) the ``inner crust'', at densities above $\sim 4 \times 10^{11}$~g~cm$^{-3}$, composed of neutron-proton clusters and unbound neutrons, neutralized by a degenerate electron gas, and (iii) the core, at densities above $\sim 10^{14}$ g~cm$^{-3}$. The precise measurement of the mass of pulsar PSR J1614$-$2230 by \cite{demorest2010} has revived the question of the composition of the core. Just below the crust, the matter consists of a mixture of neutrons, protons, electrons and possibly muons. The composition of the central region of a NS is still a matter of debate (see e.g. \cite{haensel2007}). In the present work, we study the impact of a hadron-quark phase transition in dense matter on the maximum mass of cold isolated NSs (see \cite{chamel2013} for a general discussion of the maximum mass of hybrid stars). \section{Hadronic equation of state} The global structure of a NS is determined by the equation of state (EoS), i.e. the relation between the matter pressure $P$ and the mass-energy density $\rho$. Before considering the possibility of a phase transition from hadronic to quark matter in the core of NSs, we will begin with the hadronic EoSs. A good starting point is the family of three EoSs, BSk19, BSk20 and BSk21, which have been developed to provide a unified treatment of all regions of a NS (see \cite{pearson2011, pearson2012}). 
These EoSs are based on nuclear energy-density functionals derived from generalized Skyrme forces (in that they contain additional momentum- and density-dependent terms), which fit essentially all measured masses of atomic nuclei with an rms deviation of 0.58 MeV for all three models. Moreover, these functionals were constrained to reproduce three different neutron matter EoSs, as obtained from microscopic calculations (see \cite{goriely2010}). All three EoSs assume that the core of a NS is made of nucleons and leptons. The BSk19 EoS was found to be too soft to support NSs as massive as PSR J1614$-$2230 (\cite{chamel2011}) and therefore, it will not be considered here. \section{Hadron-quark phase transition} Given the uncertainties in the composition of dense matter in NSs, we will simply suppose that above the average baryon density $n_{\rm N}$, matter undergoes a first-order phase transition to deconfined quark matter subject to the following restrictions: (i) for the transition to occur the energy density of the quark phase must be lower than that of the hadronic phase, (ii) according to perturbative quantum chromodynamics (QCD) calculations (e.g. \cite{kurkela2010}), the speed of sound in quark matter cannot exceed $c/\sqrt{3}$ where $c$ is the speed of light. At densities below $n_{\rm N}$, matter is purely hadronic while a pure quark phase is found at densities above some density $n_{\rm X}$. In the intermediate region ($n_{\rm N}<n<n_{\rm X}$) where the two phases can coexist, the pressure and the chemical potential of the two phases are equal: $P_{\rm quark}(n) = P_{\rm hadron}(n_N)$ and $\mu_{\rm quark}(n) = \mu_{\rm hadron}(n_N)$. The EoS of the quark phase at $n>n_{\rm X}$ is given by: \begin{equation} P_{\rm quark}(n) = \frac{1}{3} (\mathcal{E}_{\rm quark}(n)-\mathcal{E}_{\rm quark}(n_{\rm X})) + P_{\rm hadron}(n_{\rm N}) \, . 
\label{eq:quark} \end{equation} We set the density $n_{\rm N}$ to lie above the highest density found in nuclei as predicted by Hartree-Fock-Bogoliubov calculations, namely $n_{\rm N}=0.2$~fm$^{-3}$ (\cite{bruslib}). The density $n_{\rm X}$ is adjusted to optimize the maximum mass under the conditions mentioned above. Eq.~(\ref{eq:quark}) turns out to be very similar to that obtained within the simple MIT bag model, which has been widely applied to describe quark matter in compact stars (see e.g. \cite{haensel2007}). The effective bag constant $B$ associated with the BSk21 hadronic EoS is $56.7$~MeV~fm$^{-3}$. \section{Maximum mass} Considering the stiffest hadronic EoS (BSk21), we have solved the Tolman-Oppenheimer-Volkoff equations (\cite{tolman1939, oppenheimervolkoff1939}) in order to determine the global structure of a nonrotating NS. The effect of rotation on the maximum mass was found to be very small for stars with spin-periods comparable to that of PSR J1614$-$2230 (\cite{chamel2011}); we therefore neglect it. The gravitational mass versus circumferential radius relation is shown in Fig.~\ref{fig01}. We have considered two cases: a purely hadronic NS described by our BSk21 EoS (dashed line) and a hybrid star with a quark core (solid line). The corresponding maximum masses are 2.28~$M_\odot$ and 2.02~$M_\odot$ respectively. In both cases, the existence of two-solar mass NSs is therefore allowed. \begin{figure}[t] \centering \includegraphics[scale=0.3]{fig01.eps} \caption{Gravitational mass versus circumferential radius with (solid line) and without (dashed line) a quark-matter core. 
See the text for details.} \label{fig01} \end{figure} \section{Conclusions} The presence of a deconfined quark-matter phase in NS cores leads to a maximum mass of about $2M_\odot$, which is still compatible with the mass measurement of PSR J1614$-$2230 by \cite{demorest2010}, but which could be challenged by observations of significantly more massive NSs (see \cite{cla02,fre08,kbk11}) unless the sound speed in quark matter is significantly larger than that predicted by perturbative QCD calculations (\cite{kurkela2010}). \acknowledgments FNRS (Belgium), NSERC (Canada) and CompStar, a Research Networking Programme of the European Science Foundation, are gratefully acknowledged.
\section{Introduction} Binary and higher order multiple stars play a very important role in the dynamical evolution of star clusters, mainly because dynamically active binary and multiple systems can absorb (negative) energy. In the case of globular clusters, such systems are known to prevent the so-called gravo-thermal catastrophe \citep{Heggie,Sugimoto}. As for open clusters, dynamically active pairs can inflate the parent cluster. According to theory, early on, during a star cluster assembly, wide and close binary and multiple systems form before the non-stationary, dynamically young star cluster starts to contract. Then, wide systems (and dynamically active pairs) tend to be disrupted during this contraction phase \citep{P1,P2,P3}. Wide systems form again during the expansion phase \citep{Danilov2020} which naturally follows the contraction. On the observational side, a deficit of dynamically active binaries has been identified in some of the nearest open clusters \citep{DK,Danilov2020}. Therefore, the evolution of wide binaries (their number and location inside the cluster) and multiple stars can serve as a probe of open clusters' dynamical state \citep{Danilov2020}. The binary fraction changes (diminishes) during the evolution of a star cluster. Nevertheless, the distribution of orbital parameters and the distribution of the stellar mass ratio are conserved, keeping a memory of the primordial binaries' properties (see references in \citet{Boro19}). The dynamical evolution of stars in close binary systems in star clusters generates different types of exotic stellar objects, such as blue straggler stars \citep{Arp2,EBSS}, millisecond pulsars \citep{msp}, cataclysmic variables \citep{CV}, X-ray binaries \citep{Xray}, binary black holes \citep{BBH}, and so on. 
The loss of material enriched by elements produced in the CNO cycle during the mass transfer in massive interacting binaries constitutes one of the possible scenarios for the origin of the multiple stellar populations in globular clusters \citep{Renzini+2015}. The presence of unresolved binary and multiple systems can distort the estimates of the cluster mass determined both by the velocity dispersion \citep{4337,raste} and by the cluster luminosity function obtained from star counts (see a discussion in \citet{Boro19}). The present study follows up on the \citet{Boro19} study, where the influence of unresolved binary stars (UBS) on the estimate of open star cluster (OSC) mass derived through its luminosity function (LF) was investigated. In \citet{Boro19} two general parameters for characterizing the binary star population --- the binary fraction $\alpha$ and the stellar mass ratio $q$ distribution --- were used (see the review in \citet{Boro19}). The findings were compared against the results obtained by \citet{KB} for Praesepe, for which they found a mass increment of 1.35 for a binary fraction of 35$\%$. Instead of 1.35, \citet{Boro19} found increment values in the range 1.06-1.19, the exact value depending on the adopted $q$ distribution function. In an attempt to lift this significant discrepancy, we explore in this study a possible improvement: we add systems of higher multiplicity, namely triple and quadruple systems, to the UBS. Our paper is organised as follows. Section 2 presents a survey of recent literature on multiple systems, which our study stems from. Section 3 is devoted to the description of our method. Section 4 describes and discusses our results. Section 5, finally, is dedicated to a summary of our findings. \section{High order stellar systems in star clusters} Not much data are available in the literature on the presence and abundance ratios of multiple systems in star clusters. 
For the multiple systems ratio we take the data from \citet{Mermilliod_Pleiades} for the Pleiades cluster, which should be more appropriate for the case of OSCs than the field stars' data from \citet{Tokovinin}. Nevertheless, \citet{Tokovinin} found for stellar systems with multiplicity of 1:2:3:4:5 (``1'' stands for single stars, ``2'' for binaries, ``3'' for triples, ``4'' for quadruples, and ``5'' for quintuples) the relative abundance ratios of 54:33:8:4:1. These estimates were obtained by compiling data in the solar neighborhood. The population of multiple stars in stellar clusters might be significantly different, though. For instance, \citet{Mermilliod_Pleiades} found the relative abundance ratios of 56:30:2 for stars in the Pleiades cluster (singles, binaries, and triples). A similar work was performed by \citet{Mermilliod_Praesepe} for the Praesepe cluster as well. They found the relative abundance ratios of 47:30:3 for stars of different multiplicity. \citet{Mermilliod} considered different sources of incompleteness in the search of binary and multiple systems and concluded that ``in spite of the large efforts undertaken, the available material is still incomplete at several levels''. The situation has not changed substantially since that time. In addition, the \citet{Mermilliod_Pleiades} data on the Pleiades are based on 88 stars of F5-K0 spectral classes ($(B-V)\in[0.38;1.1]$) from a circle of about 70 arcminutes radius around the star Alcyone. This field lies well inside the cluster core (the Pleiades core radius is 2.62$^\circ$ and the corona radius is 10.9$^\circ$ according to \citet{DS_Pleiades}). Thus, the data of \citet{Mermilliod_Pleiades} are incomplete because they refer to the inner cluster area only. \citet{Bouvier+_Pleiades} observed 144 G and K dwarf members of the Pleiades and found 22 binary systems and 3 triples. \citet{Tokovinin+2006} found that most short-period spectroscopic binaries have a tertiary companion (at least for field stars). Numerous works devoted to the search for spectroscopic binaries or binaries with photometric data in nearby open star clusters can easily miss tertiary companions if these companions are visually separated. On the other hand, more recently, \citet{Danilov2020} considered a sample of 395 stars (probable members of the Pleiades) in an area of 2.5$^\circ$ around the cluster center with $G<15^m$ and errors in the tangential velocities less than 0.177 km s$^{-1}$. 36-37 wide visual pairs of single stars and 62-70 unresolved binary (or multiple) stars were extracted based on their positions in the color-magnitude diagram. The distances between the components in visual pairs were found to be larger than 0.165 pc (approximately 4000 astronomical units). The mean ratio of the component masses in the visual pairs is $q=0.67\pm0.04$; the $q$ distribution is approximately flat for $q\in[0.05;0.8]$ with a local maximum at $q=0.85$. \citet{Danilov2020} marked 9 coincidences of the unresolved multiple stars with the components of the visual pairs, that is, possibly triple or quadruple systems. Moreover, he selected two triple, one quadruple and one sextuple visual systems with the relative velocities in pairs close to circular ones. If we consider all unresolved multiples as binaries, then a grand total of 260-270 singles, 89-98 binaries and 9 triples is found. Nevertheless, for our investigation we take the data of \citet{Mermilliod_Pleiades} as representative for our program clusters (they give the minimum triple content among all multiples) and the data of \citet{Tokovinin} for field stars, under the assumption that for different clusters we should get some intermediate result for the mass increment between these two extreme cases. For the $q$ distribution we limit ourselves to the flat distribution, given the recent findings of \citet{LiLu}. We make use of the same data on luminosity functions for clusters IC 2714, NGC 1912, NGC 2099, NGC 6834, and NGC 7142 as in \citet{Boro19}. 
These LFs were obtained with the use of 2MASS data \citep{2MASS} by the statistical method described in \citet{LF,Pal1,kernel,4337,Rup147}. \section{Description of the method} \noindent To determine the cluster mass increment produced by the additional mass budget stored in unresolved stars, we simulate open clusters by creating stars according to the real luminosity function with the binary fraction $\alpha$ defined as $$\alpha = \frac{N_{binaries}+N_{triples} + N_{quadruples}}{N_{singles} + N_{binaries}+N_{triples}+N_{quadruples}},$$ the triple fraction among multiples $\beta$, $$\beta = \frac{N_{triples} }{N_{binaries}+N_{triples}+ N_{quadruples}},$$ and, likewise, the quadruple fraction $\gamma$, $$\gamma = \frac{N_{quadruples} }{N_{binaries}+N_{triples}+ N_{quadruples}},$$ and the components' mass ratio distribution $f(q)$. In our algorithm the binary fraction $\alpha$ varies between 0.1 and 0.9 in increments of 0.1. The triple fraction among multiples $\beta$ is calculated from either the \citet{Mermilliod_Pleiades} or the \citet{Tokovinin} study and is equal to 2:32 (meaning that 2 of every 32 multiples are triples) or 8:45, respectively. The quadruple fraction among multiples $\gamma$ is equal to zero for the \citet{Mermilliod_Pleiades} case and to 4:45 for the \citet{Tokovinin} case. We neglect quintuple systems, considering their number negligible. Then, the mass ratios $q_i = M_i/M_1$ in our simulations are uniformly distributed between 0 and 1 ($i = 2$ for a binary, $i = 2, 3$ for a triple star, and $i = 2, 3, 4$ for a quadruple star). The magnitude distribution is binned in equal intervals $\Delta J$, and in each of them we count the number of stars $N$ in accordance with the cluster's LF (analogously to \citet{Boro19}). Then, by considering $\alpha$, $\beta$, $\gamma$, we calculate the number of binary, triple, and quadruple stars in each interval. 
\begin{equation} N_{triples} = \alpha * \beta * N \end{equation} \begin{equation} N_{quadruples} = \alpha * \gamma * N \end{equation} \begin{equation} N_{binaries} = \alpha * N - N_{triples} - N_{quadruples} \end{equation} \begin{equation} N_{singles} = N - N_{binaries} - N_{triples} - N_{quadruples} \end{equation} \noindent Since all $N_{quadruples}, N_{triples}, N_{binaries}, N_{singles}$ should be integers, we round the numbers of multiple stars up to integers. For all stars in the bin we use the same mean magnitude. Then we derive the luminosity corresponding to each mean magnitude. For MS stars we calculate it using the \citet{Eker} formula, extracting the mass from isochrone tables. For stars that have already left the MS, we use a younger isochrone with the age of $4\cdot10^7$ years to determine the MS-stage luminosity of a star with the same mass as the evolved star. We then generate the component mass ratios $q_2, q_3, q_4$ for every quadruple star, $q_2, q_3$ for every triple star, and $q_2$ for every binary star using the Neumann method \citep{Boro19}. Therefore, for each multiple we have the following system of equations \citep{Danilov2020}: \begin{equation} \left\{ \begin{aligned} &L= \sum^k_{i=1} L_i\\ &\log{L_i}= -0.705(\log{M_i})^2 + 4.655(\log {M_i}) - 0.025 \\ &q_i=\frac{M_i}{M_1} \\ &i = 1, 2, ...\: k\\ \end{aligned} \right. \label{MAINsystem} \end{equation} \noindent where $q_1 \equiv 1$, $L$ and $L_i$ are the luminosities of the whole system and of each component, respectively, and $k$ is the number of components in the multiple system. $M_1, M_2, M_3, M_4$ are the unknowns we solve for, and $L_1, L_2, L_3, L_4$ are likewise unknown quantities. 
The second equation in this system is the mass-luminosity relation from \citet{Eker}.\\ \noindent We can re-arrange this system into one final equation $f(x) = 0$: \begin{equation} f(x) = dx^2 + bx + c + \log(1 + F(x)) - \log L \, , \end{equation} \begin{equation} F(x) = \sum_{i=2}^k 10^{d (\log q_i)^2 + \log q_i (2dx + b)} \, , \end{equation} \noindent where $x = \log M_1$ and $d = -0.705$, $b = 4.655$, $c = -0.025$.\\ \noindent Solving for $x$, we derive all component masses $M_1 = 10^x$, $M_i = q_i M_1$ ($i = 2$ for a binary, $i = 2, 3$ for a triple star, and $i = 2, 3, 4$ for a quadruple star).\\ As a result, we can add up the masses of the multiples with the single star masses and eventually obtain an estimate of the total mass of the cluster. Because the mass ratios $q_i$ are generated randomly, the resulting mass can vary, so we repeat the whole procedure 30 times and calculate the mean and standard deviation of the derived cluster mass. If we consider all stars as singles, we can calculate a different estimate of the cluster mass $M_{wm}$ using the isochrone table and therefore the mass increment $y=M/M_{wm}$ (\textit{wm} stands here for ``without multiples''). The code is available online at the \href{https://github.com/olgaborodina/Unresolved_stars_in_clusters}{link} (https://github.com/olgaborodina/Unresolved\_stars\_in\_clusters). \section{Results and discussion} \begin{figure} \centering \includegraphics[width=19cm]{fig1.eps} \caption{The dependence of the cluster mass increment on the binary star fraction $\alpha$. Solid line: binary systems only; dotted line: binary and triple systems according to \citet{Mermilliod_Pleiades}; dashed line: binary, triple, and quadruple systems according to \citet{Tokovinin}. (a) IC 2714; (b) NGC 1912; (c) NGC 2099; (d) NGC 6834; (e) NGC 7142.} \label{increments-alpha} \end{figure} Figure 1 shows the dependence of the cluster mass increment on the binary star fraction $\alpha$ for three cases. 
The first case (the lower solid line) corresponds to clusters with unresolved binaries only (this result comes from our previous paper, \citet{Boro19}). The second case (the middle dotted line) corresponds to clusters with binary and triple systems, adopting their ratio from \citet{Mermilliod_Pleiades}. Finally, the third case (the upper dashed line) corresponds to clusters with binary, triple, and quadruple systems, adopting their ratio from \citet{Tokovinin}. In all cases we use a flat distribution for the multiple star component mass ratio.\\ \noindent The case with the multiple system ratio of \citet{Mermilliod_Pleiades} differs only slightly from the case of binary systems only, unlike the case with the multiple system ratio of \citet{Tokovinin}. This is explained by the fact that in the case of \citet{Mermilliod_Pleiades} the ratio of triple systems to all multiples is only $1/16\approx0.06$, while in the case of \citet{Tokovinin} triples and quadruples make up $12/33\approx0.36$ of all multiples. It is readily seen that even when adopting the multiple star ratio for the Galactic field \citep{Tokovinin}, the cluster mass increment does not exceed 1.20 for the specific binary fraction value of 0.35. In the more realistic case of the multiple star ratio for the Pleiades cluster \citep{Mermilliod_Pleiades}, the cluster mass increment does not exceed 1.16. Hence, the \citet{KB} value of 1.35 for the mass increment cannot be explained with the luminosity-limited pairing \citep{Boro19}. We deem that the likely explanation of the \citet{KB} value for the cluster mass increment could be the following. If the binary fraction is 0.35 and the second components of the binary systems are distributed with the same mass function as the primary components, then the addition to the cluster mass should be just 0.35 of the cluster mass in the case of single stars only (this is the ``primary-constrained random pairing'' as described by \citet{Kouwenhoven}). 
Such an approach is quite reasonable, for example, when one sets up an initial cluster model for N-body experiments. However, such arguments contain a mistake when one estimates the cluster mass from photometric data, because the luminosity of a binary star composed in this way would be larger than the observed one. \citet{KB} determined the mass of the Praesepe cluster following this logical path. First, they selected probable cluster members and evaluated their masses using isochrone tables, treating them as single stars. Second, \citet{KB} evaluated the mass of invisible low-mass stars and of the stellar remnants of massive stars. Then, they assumed that 35 percent of the stars were binaries and added the mass of secondary components drawn from the same mass function as the single stars. As a result, the mass of each binary star would naturally increase by 35 percent on average. In turn, however, the luminosity of every binary star would also increase (and its stellar magnitude would decrease). The correct way should probably be to take different (smaller) mass values both for primary and secondary components, using for instance the ``luminosity limited pairing'' as in \citet{Boro19} or in the present paper. 
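In practice, the ``luminosity limited pairing'' amounts to solving the equation $f(x)=0$ derived above for every simulated system. A minimal Python sketch of this root-finding step (our illustration, not the released code; the bisection bracket $-2 \le \log M_1 \le 2$ is an assumption covering the relevant mass range):

```python
import math

D, B, C = -0.705, 4.655, -0.025  # Eker et al. mass-luminosity coefficients


def f(x, log_L, q):
    """f(x) = d x^2 + b x + c + log10(1 + F(x)) - log10(L), with x = log10(M1).

    q is the list of mass ratios (q_2, ..., q_k), each in (0, 1]."""
    F = sum(10.0 ** (D * math.log10(qi) ** 2 + math.log10(qi) * (2.0 * D * x + B))
            for qi in q)
    return D * x * x + B * x + C + math.log10(1.0 + F) - log_L


def component_masses(log_L, q, lo=-2.0, hi=2.0, tol=1e-12):
    """Bisection on x = log10(M1); returns [M1, M2, ...] with M_i = q_i * M1."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo, log_L, q) * f(mid, log_L, q) <= 0.0:
            hi = mid  # sign change in [lo, mid]
        else:
            lo = mid
        if hi - lo < tol:
            break
    m1 = 10.0 ** (0.5 * (lo + hi))
    return [m1] + [qi * m1 for qi in q]
```

For a binary with $M_1 = 1\,M_{\odot}$ and $q_2 = 0.5$, summing the two component luminosities from the mass-luminosity relation and solving recovers the input masses.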
\begin{deluxetable}{lcccccc} \tablecaption{Linear approximation $y=A+B\alpha$ for the cluster mass increment dependence on the multiple fraction \label{tab:approx}} \tablehead{ \colhead{Multiple star ratio} & \colhead{Cluster} & \colhead{A} & \colhead{B} & \colhead{$\chi^2$} & \colhead{Q} & \colhead{$y(0.35\pm0.05)$} } \colnumbers \startdata The ratio of & IC 2714 & 0.997 $\pm$ 0.003 & 0.451 $\pm$ 0.007 & 0.503 & 1.000 & 1.15$\pm$0.02 \\ multiple stars & NGC 1912 & 0.997 $\pm$ 0.002 & 0.441 $\pm$ 0.007 & 0.465 & 1.000 & 1.15$\pm$0.02 \\ for Pleiades & NGC 2099 & 0.998 $\pm$ 0.002 & 0.447 $\pm$ 0.004 & 0.667 & 1.000 & 1.15$\pm$0.02 \\ \citet{Mermilliod_Pleiades} & NGC 6834 & 0.997 $\pm$ 0.002 & 0.449 $\pm$ 0.006 & 0.656 & 1.000 & 1.15$\pm$0.02 \\ & NGC 7142 & 0.996 $\pm$ 0.003 & 0.471 $\pm$ 0.009 & 0.869 & 0.999 & 1.16$\pm$0.02 \\ \hline The ratio of & IC 2714 & 0.993 $\pm$ 0.003 & 0.576 $\pm$ 0.007 & 1.333 & 0.995 & 1.19$\pm$0.03 \\ multiple stars & NGC 1912 & 0.989 $\pm$ 0.002 & 0.574 $\pm$ 0.006 & 2.161 & 0.976 & 1.19$\pm$0.03 \\ for Galactic field & NGC 2099 & 0.999 $\pm$ 0.002 & 0.564 $\pm$ 0.005 & 0.407 & 1.000 & 1.20$\pm$0.03 \\ \citet{Tokovinin} & NGC 6834 & 0.992 $\pm$ 0.003 & 0.574 $\pm$ 0.007 & 3.511 & 0.898 & 1.19$\pm$0.03 \\ & NGC 7142 & 0.990 $\pm$ 0.003 & 0.606 $\pm$ 0.008 & 3.967 & 0.860 & 1.20$\pm$0.03 \\ \hline Binary stars only & IC 2714 & 1.003 $\pm$ 0.003 & 0.424 $\pm$ 0.006 & 0.612 & 1.000 & 1.15$\pm$0.02 \\ \citet{Boro19} & NGC 1912 & 0.999 $\pm$ 0.003 & 0.415 $\pm$ 0.007 & 0.574 & 1.000 & 1.14$\pm$0.02 \\ & NGC 2099 & 1.000 $\pm$ 0.002 & 0.418 $\pm$ 0.004 & 0.314 & 1.000 & 1.15$\pm$0.02 \\ & NGC 6834 & 1.000 $\pm$ 0.002 & 0.419 $\pm$ 0.006 & 0.234 & 1.000 & 1.15$\pm$0.02 \\ & NGC 7142 & 0.999 $\pm$ 0.003 & 0.444 $\pm$ 0.008 & 0.230 & 1.000 & 1.15$\pm$0.02 \\ \enddata \end{deluxetable} Table 1 lists the coefficients of a linear regression for the dependencies shown in Figure 1 and the values of the mass increment for a representative binary 
ratio of 0.35$\pm$0.05 \citep{KB}, for the sake of illustration and comparison with \citet{KB}. The data for the case of binary stars only are from \citet{Boro19} (we referred to it there as the ``flat distribution''). The large values of $\chi^2$ are explained by the small values of the standard deviation for the cluster mass when repeating the random pairing procedure. If we artificially increase the standard deviations for the mass estimates of NGC 1912 (second line in Table 1) by a factor of ten, the $\chi^2$ value becomes 0.367 and the Q value becomes 1.0.\\ \noindent A general, important remark is now in order. When calculating the cluster mass increment due to the presence of unresolved binary and multiple systems, one needs in principle to take into account the spatial resolution of the data employed to construct the LF. In the present work we use the cluster LFs obtained by counting stars extracted from the 2MASS Point Source catalog \citep{2MASS}. The spatial resolution of 2MASS is about $\delta=4$ arcseconds (https://old.ipac.caltech.edu/2mass/overview/about2mass.html). It corresponds to a separation between the binary star components of 4000 astronomical units (AU) at a cluster distance of 1 kpc. Then, even the very wide binaries and hierarchical triples in the clusters of our sample could be unresolved (see the sample cluster distances in Table 1 of \citet{Boro19}). However, if we were to use Gaia data \citep{Gaia}, for example, the situation would change significantly. The spatial resolution of Gaia DR2 for binary components is about $\delta=0.5$--$0.6$ arcseconds \citep{Ziegler+2018} (7--8 times smaller than the 2MASS resolution). This corresponds to separations of about 500--600 AU for a cluster distance of 1 kpc. In that case wide binaries would be resolved and the cluster mass increment would in turn be smaller. {The resolution of Gaia DR3 should be even better. 
The Gaia mission goal for binary resolution in the final data release is $\delta=0.1$ arcseconds (https://www.cosmos.esa.int/web/gaia/science-performance). } We investigated the distribution of the apparent on-sky separations of the binary components. To this aim, we used the distribution of the logarithm of the period $P$ and the distribution of eccentricities for solar-type binaries from \citet{D&M91}. The distribution of the period logarithms is very close to a normal distribution with $\overline{\log P}=4.8$ and $\sigma_{\log P}=2.3$ (i.e., the period distribution is log-normal). We randomly (uniformly) set the values of the orbit plane inclination, the periastron longitude, and the time after periastron passage. The semi-major axes $a$ were determined from the period value and Kepler's third law, supposing the mean mass of the primary component to be 1 $M_{\odot}$ and the mean mass of the secondary component to be 0.5 $M_{\odot}$. This results in a normal distribution of the semi-major axis logarithms with $\overline{\log a}=1.55$ (corresponding to $\overline{a}=35$ AU) and $\sigma_{\log a}=1.53$. Adopting constant values for the component masses is a rough approximation, especially for nearby star clusters with a large interval of stellar masses available from observations. However, for our program cluster sample this interval is not that wide and the approximation seems appropriate. The distribution of the logarithm of the apparent separations turned out to mirror the distribution of the semi-major axis logarithms (we used 10000 random pairs). Therefore, we used a Gaussian function for the apparent separation distribution function with the same parameters as the semi-major axis distribution function. 
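The quoted parameters of the semi-major axis distribution follow from a linear change of variables and can be checked in a few lines of Python (our own sketch; the component masses of 1 and 0.5 $M_{\odot}$ are those adopted above):

```python
import math

M_TOT = 1.0 + 0.5                    # adopted total mass in solar units
MEAN_LOG_P, SIGMA_LOG_P = 4.8, 2.3   # log10 of the period in days
DAYS_PER_YEAR = 365.25


def log_a(log_p_days):
    """Kepler's third law a^3 = M_tot * P^2 (a in AU, P in yr, M in M_sun)."""
    log_p_yr = log_p_days - math.log10(DAYS_PER_YEAR)
    return (math.log10(M_TOT) + 2.0 * log_p_yr) / 3.0


# A Gaussian mapped through a linear function stays Gaussian:
mean_log_a = log_a(MEAN_LOG_P)             # ~1.55, i.e. a ~ 35 AU
sigma_log_a = (2.0 / 3.0) * SIGMA_LOG_P    # ~1.53
```

This reproduces the quoted $\overline{\log a}=1.55$ and $\sigma_{\log a}=1.53$.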
In order to determine the unresolved binary fraction we need to integrate this distribution from $-\infty$ to $\log a_0$, where $a_0$ is the separation corresponding to the resolution of the catalog for the binary components, $a_0(\mathrm{AU})=r(\mathrm{pc})\cdot\delta(\mathrm{arcsec})$. The required unresolved binary fraction (UBF), that is, the ratio of unresolved binaries among all binaries, is: \begin{equation} UBF(\log a_0) = \frac{1}{\sigma_{\log a}\sqrt{2\pi}}\int\limits_{-\infty}^{\log a_0} \exp \left\{{-\frac{(\log a-\overline{\log a})^2}{2\sigma_{\log a}^2}}\right\}d\log a = \frac{1}{2} + \frac{1}{2}\mathrm{erf}\left(\frac{\log a_0-\overline{\log a}}{\sigma_{\log a}\sqrt{2}} \right) \, , \end{equation} \noindent assuming that $\log a_0>\overline{\log a}$. This looks reasonable because even for $r=100$ pc the resolution of the binary components would be 50--60 AU (in the case of Gaia DR2), while $\overline{a}=35$ AU. Table 2 lists the UBF values for the star clusters considered in \citet{Boro19} and in this paper for two cases of spatial resolution for binary components. The first one corresponds to the 2MASS resolution (4 arcseconds) and the second one corresponds to the Gaia DR2 resolution (we take 0.5 arcseconds as the ambiguity limit). The cluster distances were taken to be the same as in \citet{Boro19}. 
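As a quick numerical check (our sketch, not the paper's code; the distribution parameters are those quoted above), evaluating this formula reproduces, e.g., the 2MASS entry for NGC 1912 in Table 2:

```python
import math

MEAN_LOG_A, SIGMA_LOG_A = 1.55, 1.53  # semi-major axis distribution (see text)


def ubf(r_pc, delta_arcsec):
    """Unresolved binary fraction: Gaussian CDF evaluated at log10(a0),
    where a0 (AU) = distance (pc) * resolution (arcsec)."""
    log_a0 = math.log10(r_pc * delta_arcsec)
    z = (log_a0 - MEAN_LOG_A) / (SIGMA_LOG_A * math.sqrt(2.0))
    return 0.5 + 0.5 * math.erf(z)
```

For NGC 1912 ($r = 1140$ pc) the 2MASS resolution $\delta = 4''$ gives $a_0 = 4560$ AU and $UBF \approx 0.92$, in agreement with Table 2; a finer resolution lowers the fraction, as expected.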
\begin{deluxetable}{lccccc} \tablecaption{The unresolved binary fractions for sample clusters \label{tab:UBF}} \tablehead{ \colhead{} & \colhead{} & \multicolumn{2}{c}{\hspace{1cm}2MASS resolution} & \multicolumn{2}{c}{\hspace{1cm}Gaia DR2 resolution} \\ \colhead{Cluster} & \colhead{r} & \colhead{\hspace{1cm}$a_0$} & \colhead{UBF} & \colhead{\hspace{1cm}$a_0$} & \colhead{UBF} \\ \colhead{} & \colhead{pc} & \colhead{\hspace{1cm}AU} & \colhead{} & \colhead{\hspace{1cm}AU} & \colhead{} \\ \colhead{(1)} & \colhead{(2)} & \colhead{\hspace{1cm}(3)} & \colhead{(4)} & \colhead{\hspace{1cm}(5)} & \colhead{(6)} } \startdata IC 2714 & 1250 & \hspace{1cm}5000 & 0.92 & \hspace{1cm}625 & 0.80 \\ NGC 1912 & 1140 & \hspace{1cm}4560 & 0.92 & \hspace{1cm}570 & 0.79 \\ NGC 2099 & 1410 & \hspace{1cm}5640 & 0.93 & \hspace{1cm}705 & 0.80 \\ NGC 6834 & 2080 & \hspace{1cm}8320 & 0.94 & \hspace{1cm}1040 & 0.83 \\ NGC 7142 & 1780 & \hspace{1cm}7120 & 0.94 & \hspace{1cm}890 & 0.82 \\ \enddata \end{deluxetable} One can readily see that even at the Gaia DR2 resolution the fraction of unresolved binaries remains high. The probability of detecting a resolved binary depends on the component mass ratio and the limiting stellar magnitude of the sample. Independently of the visibility of the secondary component, a resolved binary would behave as a single star for the purpose of deriving the cluster mass. When using Gaia data, one would need to find what fraction of binary and multiple systems is unresolved in the cluster according to its distance. After that, the mass increment could be evaluated using the results of this paper. We plan to investigate the population of resolved binaries in the nearest open clusters in the future, especially when the new Gaia data release (DR3) is publicly available. The key point for such an investigation is the accuracy of the astrometric parameters and the availability of accurate radial velocities. 
We also plan to investigate the population of unresolved binaries with photometric data and spectroscopic monitoring of bright stars in the nearest clusters. \section{Conclusions} In this study we investigated the effect of unresolved multiple stars (binaries, triples, and quadruples altogether) on Galactic clusters' mass estimates as obtained from the clusters' LFs built through star counts. We used the same LF data and the ``luminosity limited pairing'' method as described in \citet{Boro19}.\\ \noindent The data on the multiple star ratios were taken from \citet{Mermilliod_Pleiades} for the Pleiades open cluster and from \citet{Tokovinin} for the general Galactic field in the solar vicinity.\\ \noindent The inspection of Figure 1 and Table 1 allows one to conclude that mass estimates obtained by considering all stars as single should be corrected by factors which depend on the ratio of binary and multiple stars. The correction factor, which always implies a mass increment, ranges from 1.18 to 1.27 (for a binary ratio of 0.35, as \citet{KB} determined for the Praesepe cluster).\\ \noindent The correction factor depends on the considered cluster only marginally. On the contrary, it shows quite a significant variation depending on whether field star or Pleiades multiple star percentages are adopted. \noindent As expected, the mass increment turns out to be larger for a larger multiple star ratio. Therefore, the mass correction is larger if one adopts the field star percentages for binary and multiple systems.\\ \noindent Ideally, one should obtain independent, cluster-by-cluster binary and multiple star percentages. In fact, the Pleiades cannot be fully representative of every star cluster, since any individual star cluster has a different mass at birth and undergoes a different dynamical evolutionary history. 
All this affects the number and nature of binary and multiple systems present at any given time.\\ \noindent It is expected that the third Gaia data release (DR3) will be very helpful for obtaining more information on binary percentages. Key also is the cluster distance, which determines the number of binaries we detect as unresolved, given the fixed spatial resolution of Gaia DR3. \acknowledgments The work of A.F.~Seleznev and V.M.~Danilov was supported by the Ministry of Science and Higher Education of the Russian Federation, FEUZ-2020-0030, and by Act No. 211 of the Government of the Russian Federation, agreement no. 02.A03.21.0006. The work of O.I.~Borodina was partly supported by the grant 075-15-2020-780 for major research projects of the Ministry of Education and Science and by RFBR and DFG according to the research project No. 20-52-12009. This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. The input of the anonymous referee has been greatly appreciated.
\section{Introduction} A basic intention of this article is to contribute to the classification of smooth (almost) Fano varieties with torus action. Most studied in this context are the toric Fano varieties; based on their description in terms of lattice polytopes, there are meanwhile classification results up to dimension nine~\cite{Ba:1982,WaWa,Ba:1999,KrNi,Obro,Paff9}. We go one step beyond the toric case and focus on rational varieties with a torus action of complexity one, i.e., the general torus orbit is of dimension one less than the variety; see~\cite{Su} for results on smooth Fano threefolds with an action of a two-dimensional torus. Instead of bounding the dimension, we look here at varieties of small Picard number. Recall that for toric varieties, the projective spaces are the only smooth examples of Picard number one, and we have Kleinschmidt's description~\cite{Kl} of all smooth toric varieties of Picard number two, which in particular allows one to figure out the (almost) Fano ones in this setting. We follow that line and first study arbitrary smooth projective rational varieties with a torus action of complexity one. The case of Picard number one is basically settled by a result of Liendo and S\"u\ss~\cite[Thm.~6.5]{LiSu}: the only non-toric examples are the smooth projective quadrics in dimensions three and four. Picard number two means providing an analogue of Kleinschmidt's description for complexity one. Our approach goes via the Cox ring and we use the methods developed in~\cite{HaSu:2010,HaHe:2013,ArDeHaLa}; the ground field $\KK$ is algebraically closed and of characteristic zero. Recall that the Cox ring is graded by the divisor class group and, together with the choice of an ample class, it fixes our variety up to isomorphism; we refer to~\cite{ArDeHaLa} for the basic background. Here comes the first result. 
\begin{theorem} \label{thm:main1} Every smooth rational projective non-toric variety of Picard number two that admits a torus action of complexity one is isomorphic to precisely one of the following varieties $X$, specified by their Cox ring $\cox(X)$ and an ample class $u \in \Cl(X)$, where we always have $\Cl(X) = \ZZ^2$ and the grading is fixed by the matrix $[w_1, \ldots ,w_r]$ of generator degrees $\deg(T_i), \deg(S_j) \in \Cl(X)$. \medskip {\centering {\small \setlength{\tabcolsep}{4pt} \begin{longtable}{ccccc} No. & \small{$\mathcal{R}(X)$} & \small{$[w_1,\ldots, w_r]$} & \small{$u$} & \small{$\dim(X)$} \\ \toprule 1 & $ \frac {\KK[T_1, \ldots , T_7]} {\langle T_{1}T_{2}T_{3}^2+T_{4}T_{5}+T_6T_7 \rangle} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \begin{array}{c} \left[\!\! \begin{array}{ccccccc} 0 & 0 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 0 & a & 2-a & b & 2-b \end{array} \!\!\right] \\[1em] 1 \le a \le b \end{array} $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{c} 1 \\ 1+b \end{array} \!\!\right] $ } & \small{$4$} \\ \midrule 2 & $ \frac {\KK[T_1, \ldots , T_7]} {\langle T_{1}T_{2}T_{3}+T_{4}T_{5}+T_6T_7 \rangle} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{ccccccc} 0 & 0 & 1 & 1 & 0 & 1 & 0 \\ 1 & 1 & 0 & 1 & 1 & 1 & 1 \end{array} \!\!\right] $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{c} 1 \\ 2 \end{array} \!\!\right] $ } & \small{$4$} \\ \midrule 3 & $ \frac{\KK[T_1, \ldots , T_6]} {\langle T_{1}T_{2}T_{3}^2+T_{4}T_{5}+T_{6}^2 \rangle} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \begin{array}{c} \left[\!\! \begin{array}{cccccc} 0 & 0 & 1 & 1 & 1 & 1 \\ 1 & 1 & 0 & 2-a & a & 1 \end{array} \!\!\right] \\[1em] a \ge 1 \end{array} $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! 
\begin{array}{c} 1 \\ 1+a \end{array} \!\!\right] $ } & \small{$3$} \\ \midrule 4 & $ \begin{array}{c} \frac {\KK[T_1, \ldots , T_6, S_1,\ldots,S_m]} {\langle T_{1}T_{2}^{l_{2}}+T_{3}T_{4}^{l_{4}}+T_5T_{6}^{l_{6}} \rangle} \\ \scriptstyle m \ge 0 \end{array} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \begin{array}{c} \left[\!\! \begin{array}{cccccc|ccc} 0 & 1 & a & 1 & b & 1 & c_1 & \ldots & c_m \\ 1 & 0 & 1 & 0 & 1 & 0 & 1 & \ldots & 1 \end{array} \!\!\right] \\[1em] 0 \le a \le b, \ c_1 \le \ldots \le c_m, \\ l_{2}=a+l_{4}=b+l_{6} \end{array} $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \begin{array}{c} \left[\!\! \begin{array}{c} d+1 \\ 1 \end{array} \!\!\right] \\[1em] d \sei \max(b,c_m) \end{array} $ } & \small{$m+3$} \\ \midrule 5 & $ \begin{array}{c} \frac {\KK[T_1, \ldots , T_6, S_1, \ldots, S_m]} {\langle T_{1}T_{2}+T_{3}^2T_{4}+T_5^2T_{6} \rangle} \\ \scriptstyle m \ge 0 \end{array} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \begin{array}{c} \left[\!\! \begin{array}{cccccc|ccc} 0 & 2a+1 & a & 1 & a & 1 & 1 & \ldots & 1 \\ 1 & 1 & 1 & 0 & 1 & 0 & 0 & \ldots & 0 \end{array} \!\!\right] \\[1em] a \ge 0 \end{array} $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{c} 2a+2 \\ 1 \end{array} \!\!\right] $ } & \small{$m+3$} \\ \midrule 6 & $ \begin{array}{c} \frac {\KK[T_1, \ldots , T_6, S_1,\ldots,S_m]} {\langle T_{1}T_{2}+T_{3}T_{4}+T_5^2T_{6} \rangle} \\ \scriptstyle m \ge 1 \end{array} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \begin{array}{c} \left[\!\! \begin{array}{cccccc|ccc} 0 & 2c+1 & a & b & c & 1 & 1 & \ldots & 1 \\ 1 & 1 & 1 & 1 & 1 & 0 & 0 & \ldots & 0 \end{array} \!\!\right] \\[1em] a, b, c \ge 0, \quad a<b, \\ a+b=2c+1 \end{array} $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! 
\begin{array}{c} 2c+2 \\ 1 \end{array} \!\!\right] $ } & \small{$m+3$} \\ \midrule 7 & $ \begin{array}{c} \frac {\KK[T_1, \ldots , T_6, S_1,\ldots,S_m]} {\langle T_{1}T_{2}+T_{3}T_{4}+T_5T_{6} \rangle} \\ \scriptstyle m \ge 1 \end{array} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{cccccc|ccc} 0 & 0 & 0 & 0 & -1 & 1 & 1 & \ldots & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 & 0 & \ldots & 0 \end{array} \!\!\right] $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{c} 1 \\ 2 \end{array} \!\!\right] $ } & \small{$m+3$} \\ \midrule 8 & $ \begin{array}{c} \frac {\KK[T_1, \ldots , T_6, S_1,\ldots,S_m]} {\langle T_{1}T_{2}+T_{3}T_{4}+T_5T_{6} \rangle} \\ \scriptstyle m \ge 2 \end{array} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \begin{array}{c} \left[\!\! \begin{array}{cccccc|cccc} 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & \ldots & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 & 0 & a_2 & \ldots & a_m \end{array} \!\!\right] \\[1em] 0 \le a_2 \le \ldots \le a_m, ~a_m >0 \end{array} $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{c} 1 \\ a_m+1 \end{array} \!\!\right] $ } & \small{$m+3$} \\ \midrule 9 & $ \begin{array}{c} \frac {\KK[T_1, \ldots , T_6, S_1, \ldots, S_m]} {\langle T_{1}T_{2}+T_{3}T_{4}+T_5T_{6} \rangle} \\ \scriptstyle m \geq 2 \end{array} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \begin{array}{c} \left[\!\! \begin{array}{cccc|ccc} 0 & a_2 & \ldots & a_6 & 1 & \ldots & 1 \\ 1 & 1 & \ldots & 1 & 0 & \ldots & 0 \end{array} \!\!\right] \\[1em] 0 \le a_3 \le a_5 \le a_6 \le a_4 \le a_2,\\ a_2=a_3+a_4=a_5+a_6 \end{array} $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{c} a_2+1 \\ 1 \end{array} \!\!\right] $ } & \small{$m+3$} \\ \midrule 10 & $ \begin{array}{c} \frac {\KK[T_1, \ldots , T_5, S_1,\ldots,S_m]} {\langle T_{1}T_{2}+T_{3}T_{4}+T_{5}^2 \rangle} \\ \scriptstyle m \ge 1 \end{array} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! 
\begin{array}{ccccc|ccc} 1 & 1 & 1 & 1 & 1 & 0 & \ldots & 0 \\ -1 & 1 & 0 & 0 & 0 & 1 & \ldots & 1 \end{array} \!\!\right] $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{c} 2 \\ 1 \end{array} \!\!\right] $ } & \small{$m+2$} \\ \midrule 11 & $ \begin{array}{c} \frac {\KK[T_1, \ldots , T_5, S_1,\ldots,S_m]} {\langle T_{1}T_{2}+T_{3}T_{4}+T_{5}^2 \rangle} \\ \scriptstyle m \geq 2 \end{array} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \begin{array}{c} \left[\!\! \begin{array}{ccccc|cccc} 1 & 1 & 1 & 1 & 1 & 0 & a_2 & \ldots & a_m \\ 0 & 0 & 0 & 0 & 0 & 1 & 1 & \ldots & 1 \end{array} \!\!\right] \\[1em] 0\le a_2 \le \ldots \le a_m, ~ a_m>0 \end{array} $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{c} a_m + 1 \\ 1 \end{array} \!\!\right] $ } & \small{$m+2$} \\ \midrule 12 & $ \begin{array}{c} \frac{\KK[T_1, \ldots , T_5, S_1,\ldots,S_m]} {\langle T_{1}T_{2}+T_{3}T_{4}+T_{5}^2 \rangle} \\ \scriptstyle m \geq 2 \end{array} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \begin{array}{c} \left[\!\! \begin{array}{ccccc|cccc} 1 & 1 & 1 & 1 & 1 & 0 & 0 & \ldots & 0 \\ 0 & 2c & a & b & c & 1 & 1 & \ldots & 1 \end{array} \!\!\right] \\[1em] 0 \le a \le c \le b, \ a+b=2c \end{array} $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{c} 1 \\ 2c+1 \end{array} \!\!\right] $ } & \small{$m+2$} \\ \midrule 13 & $ \begin{array}{c} \frac {\KK[T_1, \ldots , T_8]} { \left\langle \begin{array}{l} \scriptstyle T_{1}T_{2}+T_{3}T_{4}+T_5T_{6}, \\[-3pt] \scriptstyle \lambda T_{3}T_{4}+T_{5}T_{6}+T_{7}T_{8} \end{array} \right\rangle } \\ \scriptstyle \lambda \in \KK^* \setminus \{1\} \end{array} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{cccccccc} 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 \end{array} \!\!\right] $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! 
\begin{array}{c} 1 \\ 1 \end{array} \!\!\right] $ } & \small{$4$} \\ \bottomrule \end{longtable} } } \noindent Moreover, each of the listed data defines a smooth rational non-toric projective variety of Picard number two coming with a torus action of complexity one. \end{theorem} Note that by our approach we obtain the Cox ring of the respective varieties for free which in turn allows an explicit treatment of geometric questions by means of Cox ring based techniques. In particular, the canonical divisor of the varieties listed in Theorem~\ref{thm:main1} admits a simple description in terms of the defining data. This enables us to determine for every dimension the (finitely many) non-toric smooth rational Fano varieties of Picard number two that admit a torus action of complexity one; we refer to Section~\ref{sec:geomfanos} for a geometric description of the listed varieties. \begin{theorem} \label{thm:main2} Every smooth rational non-toric Fano variety of Picard number two that admits a torus action of complexity one is isomorphic to precisely one of the following varieties $X$, specified by their Cox ring $\cox(X)$, where the grading by $\Cl(X) = \ZZ^2$ is given by the matrix $[w_1, \ldots ,w_r]$ of generator degrees $\deg(T_i), \deg(S_j) \in \Cl(X)$ and we list the (ample) anticanonical class $-\mathcal{K}_X$. \medskip {\centering {\small \setlength{\tabcolsep}{4pt} \begin{longtable}{ccccc} No. & \small{$\mathcal{R}(X)$} & \small{$[w_1,\ldots, w_r]$} & \small{$-\mathcal{K}_X$} & \small{$\dim(X)$} \\ \toprule 1 & $ \frac {\KK[T_1, \ldots , T_7]} {\langle T_{1}T_{2}T_{3}^2+T_{4}T_{5}+T_6T_7 \rangle} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{ccccccc} 0 & 0 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 0 & 1 & 1 &1 & 1 \end{array} \!\!\right] $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! 
\begin{array}{c} 3 \\ 4 \end{array} \!\!\right] $ } & \small{$4$} \\ \midrule 2 & $ \frac {\KK[T_1, \ldots , T_7]} {\langle T_{1}T_{2}T_{3}+T_{4}T_{5}+T_6T_7 \rangle} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{ccccccc} 0 & 0 & 1 & 1 & 0 & 1 & 0 \\ 1 & 1 & 0 & 1 & 1 & 1 & 1 \end{array} \!\!\right] $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{c} 2 \\ 4 \end{array} \!\!\right] $ } & \small{$4$} \\ \midrule 3 & $ \frac{\KK[T_1, \ldots , T_6]} {\langle T_{1}T_{2}T_{3}^2+T_{4}T_{5}+T_{6}^2 \rangle} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{cccccc} 0 & 0 & 1 & 1 & 1 & 1 \\ 1 & 1 & 0 & 1 & 1 & 1 \end{array} \!\!\right] $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{c} 2 \\ 3 \end{array} \!\!\right] $ } & \small{$3$} \\ \midrule 4.A & $ \begin{array}{c} \frac {\KK[T_1, \ldots , T_6, S_1,\ldots,S_m]} {\langle T_{1}T_{2}+T_{3}T_{4}+T_5T_{6} \rangle} \\ \scriptstyle m \ge 0 \end{array} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \begin{array}{c} \left[\!\! \begin{array}{cccccc|cccc} 0 & 1 & 0 & 1 & 0 & 1 & c & 0 & \ldots & 0 \\ 1 & 0 & 1 & 0 & 1 & 0 & 1 & 1 & \ldots & 1 \end{array} \!\!\right] \\[1em] c \in \{-1, 0 \}, \\ c:=0 \text{ if } m=0 \end{array} $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{c} 2+c \\ 2+m \end{array} \!\!\right] $ } & \small{$m+3$} \\ \midrule 4.B & $ \begin{array}{c} \frac {\KK[T_1, \ldots , T_6, S_1,\ldots,S_m]} {\langle T_{1}T_{2}^2+T_{3}T_{4}+T_5T_{6} \rangle} \\ \scriptstyle m \ge 0 \end{array} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{cccccc|ccc} 0 & 1 & 1 & 1 & 1 & 1 & 1 & \ldots & 1 \\ 1 & 0 & 1 & 0 & 1 & 0 & 1 & \ldots & 1 \end{array} \!\!\right] $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! 
\begin{array}{c} 3+m \\ 2+m \end{array} \!\!\right] $ } & \small{$m+3$} \\ \midrule 4.C & $ \begin{array}{c} \frac {\KK[T_1, \ldots , T_6, S_1,\ldots,S_m]} {\langle T_{1}T_{2}^2+T_{3}T_{4}^2+T_5T_{6}^2 \rangle} \\ \scriptstyle m \ge 0 \end{array} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{cccccc|ccc} 0 & 1 & 0 & 1 & 0 & 1 & 0 & \ldots & 0 \\ 1 & 0 & 1 & 0 & 1 & 0 & 1 & \ldots & 1 \end{array} \!\!\right] $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{c} 1 \\ 2+m \end{array} \!\!\right] $ } & \small{$m+3$} \\ \midrule 5 & $ \begin{array}{c} \frac {\KK[T_1, \ldots , T_6, S_1, \ldots, S_m]} {\langle T_{1}T_{2}+T_{3}^2T_{4}+T_5^2T_{6} \rangle} \\ \scriptstyle m \ge 1 \end{array} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \begin{array}{c} \left[\!\! \begin{array}{cccccc|ccc} 0 & 2a+1 & a & 1 & a & 1 & 1 & \ldots & 1 \\ 1 & 1 & 1 & 0 & 1 & 0 & 0 & \ldots & 0 \end{array} \!\!\right] \\[1em] 0 \le 2a < m \end{array} $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{c} 2a+m+2 \\ 2 \end{array} \!\!\right] $ } & \small{$m+3$} \\ \midrule 6 & $ \begin{array}{c} \frac {\KK[T_1, \ldots , T_6, S_1,\ldots,S_m]} {\langle T_{1}T_{2}+T_{3}T_{4}+T_5^2T_{6} \rangle} \\ \scriptstyle m \ge 1 \end{array} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \begin{array}{c} \left[\!\! \begin{array}{cccccc|ccc} 0 & 2c+1 & a & b & c & 1 & 1 & \ldots & 1 \\ 1 & 1 & 1 & 1 & 1 & 0 & 0 & \ldots & 0 \end{array} \!\!\right] \\[1em] a, b, c \ge 0, \quad a<b,\\ a+b=2c+1,\\ m > 3c+1 \end{array} $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{c} 3c+2+m \\ 3 \end{array} \!\!\right] $ } & \small{$m+3$} \\ \midrule 7 & $ \begin{array}{c} \frac {\KK[T_1, \ldots , T_6, S_1,\ldots,S_m]} {\langle T_{1}T_{2}+T_{3}T_{4}+T_5T_{6} \rangle} \\ \scriptstyle 1 \le m \le 3 \end{array} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! 
\begin{array}{cccccc|ccc} 0 & 0 & 0 & 0 & -1 & 1 & 1 & \ldots & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 & 0 & \ldots & 0 \end{array} \!\!\right] $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{c} m \\ 4 \end{array} \!\!\right] $ } & \small{$m+3$} \\ \midrule 8 & $ \begin{array}{c} \frac {\KK[T_1, \ldots , T_6, S_1,\ldots,S_m]} {\langle T_{1}T_{2}+T_{3}T_{4}+T_5T_{6} \rangle} \\ \scriptstyle m \ge 2 \end{array} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \begin{array}{c} \left[\!\! \begin{array}{cccccc|cccc} 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & \ldots & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 & 0 & a_2 & \ldots & a_m \end{array} \!\!\right] \\[1em] 0 \le a_2 \le \ldots \le a_m, \\ a_m\in\{1,2,3\}, \\ 4+\sum_{k=2}^m a_k > ma_m \end{array} $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{c} m \\ 4+\sum_{k=2}^m a_k \end{array} \!\!\right] $ } & \small{$m+3$} \\ \midrule 9 & $ \begin{array}{c} \frac {\KK[T_1, \ldots , T_6, S_1, \ldots, S_m]} {\langle T_{1}T_{2}+T_{3}T_{4}+T_5T_{6} \rangle} \\ \scriptstyle m \geq 2 \end{array} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \begin{array}{c} \left[\!\! \begin{array}{cccc|ccc} 0 & a_2 & \ldots & a_6 & 1 & \ldots & 1 \\ 1 & 1 & \ldots & 1 & 0 & \ldots & 0 \end{array} \!\!\right] \\[1em] 0 \le a_3 \le a_5 \le a_6 \le a_4 \le a_2,\\ a_2=a_3+a_4=a_5+a_6, \\ 2a_2 < m \end{array} $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{c} 2a_2+m \\ 4 \end{array} \!\!\right] $ } & \small{$m+3$} \\ \midrule 10 & $ \begin{array}{c} \frac {\KK[T_1, \ldots , T_5, S_1,\ldots,S_m]} {\langle T_{1}T_{2}+T_{3}T_{4}+T_{5}^2 \rangle} \\ \scriptstyle 1 \le m \le 2 \end{array} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{ccccc|ccc} 1 & 1 & 1 & 1 & 1 & 0 & \ldots & 0 \\ -1 & 1 & 0 & 0 & 0 & 1 & \ldots & 1 \end{array} \!\!\right] $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! 
\begin{array}{c} 3 \\ m \end{array} \!\!\right] $ } & \small{$m+2$} \\ \midrule 11 & $ \begin{array}{c} \frac {\KK[T_1, \ldots , T_5, S_1,\ldots,S_m]} {\langle T_{1}T_{2}+T_{3}T_{4}+T_{5}^2 \rangle} \\ \scriptstyle m \geq 2 \end{array} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \begin{array}{c} \left[\!\! \begin{array}{ccccc|cccc} 1 & 1 & 1 & 1 & 1 & 0 & a_2 & \ldots & a_m \\ 0 & 0 & 0 & 0 & 0 & 1 & 1 & \ldots & 1 \end{array} \!\!\right] \\[1em] 0\le a_2 \le \ldots \le a_m, \\ a_m\in\{1,2\}, \\ ~3+\sum_{k=2}^m a_k > ma_m \end{array} $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{c} 3+ \sum_{k=2}^m a_k \\ m \end{array} \!\!\right] $ } & \small{$m+2$} \\ \midrule 12 & $ \begin{array}{c} \frac{\KK[T_1, \ldots , T_5, S_1,\ldots,S_m]} {\langle T_{1}T_{2}+T_{3}T_{4}+T_{5}^2 \rangle} \\ \scriptstyle m \geq 2 \end{array} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \begin{array}{c} \left[\!\! \begin{array}{ccccc|cccc} 1 & 1 & 1 & 1 & 1 & 0 & 0 & \ldots & 0 \\ 0 & 2c & a & b & c & 1 & 1 & \ldots & 1 \end{array} \!\!\right] \\[1em] 0 \le a \le c \le b, \ a+b=2c, \\ 3c<m \end{array} $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{c} 3 \\ 3c+m \end{array} \!\!\right] $ } & \small{$m+2$} \\ \midrule 13 & $ \begin{array}{c} \frac {\KK[T_1, \ldots , T_8]} { \left\langle \begin{array}{l} \scriptstyle T_{1}T_{2}+T_{3}T_{4}+T_5T_{6}, \\[-3pt] \scriptstyle \lambda T_{3}T_{4}+T_{5}T_{6}+T_{7}T_{8} \end{array} \right\rangle } \\ \scriptstyle \lambda \in \KK^* \setminus \{1\} \end{array} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{cccccccc} 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 \end{array} \!\!\right] $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! 
\begin{array}{c} 2 \\ 2 \end{array} \!\!\right] $ } & \small{$4$} \\ \bottomrule \end{longtable} } } \noindent Moreover, each of the listed data defines a smooth rational non-toric Fano variety of Picard number two coming with a torus action of complexity one. \end{theorem} For $\KK = \CC$, the assumption of rationality can be omitted in Theorem~\ref{thm:main2} due to~\cite[Sec.~2.1]{ProkhorovFano} and~\cite[Rem.~4.4.1.5]{ArDeHaLa}. A closer look at the varieties of Theorem~\ref{thm:main2} reveals that they are all obtained from a series of lower dimensional varieties via iterating the following procedure: we take a certain $\PP_1$-bundle over the given variety, apply a natural series of flips and then contract a prime divisor. In terms of Cox rings, this generalized cone construction simply means duplicating a free weight, i.e., given a variable not showing up in the defining relations, one adds a further one of the same degree; see Section~\ref{section:finite}. Proposition~\ref{prop:noisodivs} and Theorem~\ref{thm:duplicate} then yield the following. \goodbreak \begin{corollary} \label{cor:duplicate} Every smooth rational non-toric Fano variety with a torus action of complexity one and Picard number two arises via iterated duplication of a free weight from a smooth rational projective (not necessarily Fano) variety with a torus action of complexity one, Picard number two and dimension at most seven. \end{corollary} Note that we cannot expect such a statement in general: Remark~\ref{rem:toric-not} shows that the smooth toric Fano varieties of Picard number two do not allow a bound~$d$ such that they all arise via iterated duplication of free weights from smooth varieties of dimension at most $d$. Similarly to the Fano varieties, we can single out the almost Fano varieties from Theorem~\ref{thm:main1}, i.e., those with a big and nef anticanonical divisor.
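To make the duplication of a free weight concrete, here is a small instance read off from the table of Theorem~\ref{thm:main2} (family 10); the passage from $m=1$ to $m=2$ below is meant purely as an illustration of the procedure, not as an additional statement.

```latex
% Family 10 with m = 1: the variable S_1 does not occur in the defining
% relation T_1T_2 + T_3T_4 + T_5^2, so deg(S_1) = (0,1) is a free weight.
\mathcal{R}(X) \ = \ \frac{\KK[T_1,\ldots,T_5,S_1]}{\langle T_1T_2+T_3T_4+T_5^2 \rangle},
\qquad
\left[\!\!
\begin{array}{ccccc|c}
 1 & 1 & 1 & 1 & 1 & 0 \\
-1 & 1 & 0 & 0 & 0 & 1
\end{array}
\!\!\right].
% Duplicating the free weight adds a variable S_2 with deg(S_2) = (0,1) and
% leaves the relation untouched; the result is family 10 with m = 2, and the
% dimension grows by one:
\left[\!\!
\begin{array}{ccccc|cc}
 1 & 1 & 1 & 1 & 1 & 0 & 0 \\
-1 & 1 & 0 & 0 & 0 & 1 & 1
\end{array}
\!\!\right].
```

Geometrically, this performs the $\PP_1$-bundle, flip and contraction step described above exactly once.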
In general, i.e., without the assumption of a torus action, the classification of smooth almost Fano varieties of Picard number two is wide open; for the threefold case, we refer to the work of Jahnke, Peternell and Radloff~\cite{JaPeRa1,JaPeRa2}. In the setting of a torus action of complexity one, the following result together with Theorem~\ref{thm:main2} settles the problem in any dimension; by a \emph{truly almost Fano variety} we mean an almost Fano variety which is not Fano. \begin{theorem} \label{thm:main3} Every smooth rational projective non-toric truly almost Fano variety of Picard number two that admits a torus action of complexity one is isomorphic to precisely one of the following varieties $X$, specified by their Cox ring $\cox(X)$ and an ample class $u \in \Cl(X)$, where we always have $\Cl(X) = \ZZ^2$ and the grading is fixed by the matrix $[w_1, \ldots ,w_r]$ of generator degrees $\deg(T_i), \deg(S_j) \in \Cl(X)$. \medskip {\centering {\small \setlength{\tabcolsep}{4pt} \begin{longtable}{ccccc} No. & \small{$\mathcal{R}(X)$} & \small{$[w_1,\ldots, w_r]$} & \small{$u$} & \small{$\dim(X)$} \\ \toprule 4.A & $ \begin{array}{c} \frac {\KK[T_1, \ldots , T_6, S_1,\ldots,S_m]} {\langle T_{1}T_{2}+T_{3}T_{4}+T_5T_{6} \rangle} \\ \scriptstyle m \ge 1 \end{array} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \begin{array}{c} \left[\!\! \begin{array}{cccccc|ccc} 0 & 1 & 0 & 1 & 0 & 1 & c_1 & \ldots & c_m \\ 1 & 0 & 1 & 0 & 1 & 0 & 1 & \ldots & 1 \end{array} \!\!\right] \\[1em] c_1 \le \ldots \le c_m \\ d \sei \max(0,c_m) \\ (2+m)d = 2 + c_1 +\dots +c_m \end{array} $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{c} 1 \\ 1+d \end{array} \!\!\right] $ } & \small{$m+3$} \\ \midrule 4.B & $ \begin{array}{c} \frac {\KK[T_1, \ldots , T_6, S_1,\ldots,S_m]} {\langle T_{1}T_{2}^{2}+T_{3}T_{4}+T_5T_{6} \rangle} \\ \scriptstyle m \ge 1 \end{array} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\!
\begin{array}{cccccc|cccc} 0 & 1 & 1 & 1 & 1 & 1 & 0 & 1 & \ldots & 1 \\ 1 & 0 & 1 & 0 & 1 & 0 & 1 & 1 & \ldots & 1 \end{array} \!\!\right] $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{c} 1 \\ 2 \end{array} \!\!\right] $ } & \small{$m+3$} \\ \midrule 4.C & $ \begin{array}{c} \frac {\KK[T_1, \ldots , T_6, S_1,\ldots,S_m]} {\langle T_{1}T_{2}^{2}+T_{3}T_{4}^{2}+T_5T_{6}^{2} \rangle} \\ \scriptstyle m \ge 1 \end{array} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{cccccc|cccc} 0 & 1 & 0 & 1 & 0 & 1 & -1 & 0 & \ldots & 0 \\ 1 & 0 & 1 & 0 & 1 & 0 & 1 & 1 & \ldots & 1 \end{array} \!\!\right] $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{c} 1 \\ 1 \end{array} \!\!\right] $ } & \small{$m+3$} \\ \midrule 4.D & $ \begin{array}{c} \frac {\KK[T_1, \ldots , T_6, S_1,\ldots,S_m]} {\langle T_{1}T_{2}^{2}+T_{3}T_{4}^{2}+T_5T_{6} \rangle} \\ \scriptstyle m \ge 0 \end{array} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{cccccc|ccc} 0 & 1 & 0 & 1 & 1 & 1 & 1 & \ldots & 1 \\ 1 & 0 & 1 & 0 & 1 & 0 & 1 & \ldots & 1 \end{array} \!\!\right] $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{c} 1 \\ 2 \end{array} \!\!\right] $ } & \small{$m+3$} \\ \midrule 4.E & $ \begin{array}{c} \frac {\KK[T_1, \ldots , T_6, S_1,\ldots,S_m]} {\langle T_{1}T_{2}^{3}+T_{3}T_{4}+T_5T_{6} \rangle} \\ \scriptstyle m \ge 0 \end{array} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{cccccc|ccc} 0 & 1 & 2 & 1 & 2 & 1 & 2 & \ldots & 2 \\ 1 & 0 & 1 & 0 & 1 & 0 & 1 & \ldots & 1 \end{array} \!\!\right] $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{c} 1 \\ 3 \end{array} \!\!\right] $ } & \small{$m+3$} \\ \midrule 4.F & $ \begin{array}{c} \frac {\KK[T_1, \ldots , T_6, S_1,\ldots,S_m]} {\langle T_{1}T_{2}^{3}+T_{3}T_{4}^{2}+T_5T_{6}^{2} \rangle} \\ \scriptstyle m \ge 0 \end{array} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! 
\begin{array}{cccccc|ccc} 0 & 1 & 1 & 1 & 1 & 1 & 1 & \ldots & 1 \\ 1 & 0 & 1 & 0 & 1 & 0 & 1 & \ldots & 1 \end{array} \!\!\right] $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{c} 1 \\ 2 \end{array} \!\!\right] $ } & \small{$m+3$} \\ \midrule 5 & $ \begin{array}{c} \frac {\KK[T_1, \ldots , T_6, S_1, \ldots, S_m]} {\langle T_{1}T_{2}+T_{3}^2T_{4}+T_5^2T_{6} \rangle} \\ \scriptstyle m \ge 0 \end{array} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \begin{array}{c} \left[\!\! \begin{array}{cccccc|ccc} 0 & 2a+1 & a & 1 & a & 1 & 1 & \ldots & 1 \\ 1 & 1 & 1 & 0 & 1 & 0 & 0 & \ldots & 0 \end{array} \!\!\right] \\[1em] m=2a \end{array} $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{c} m+2 \\ 1 \end{array} \!\!\right] $ } & \small{$m+3$} \\ \midrule 6 & $ \begin{array}{c} \frac {\KK[T_1, \ldots , T_6, S_1,\ldots,S_m]} {\langle T_{1}T_{2}+T_{3}T_{4}+T_5^2T_{6} \rangle} \\ \scriptstyle m \ge 1 \end{array} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \begin{array}{c} \left[\!\! \begin{array}{cccccc|ccc} 0 & 2c+1 & a & b & c & 1 & 1 & \ldots & 1 \\ 1 & 1 & 1 & 1 & 1 & 0 & 0 & \ldots & 0 \end{array} \!\!\right] \\[1em] a, b, c \ge 0, \quad a<b, \\ a+b=2c+1, \\ m=3c+1 \end{array} $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{c} 2c+2 \\ 1 \end{array} \!\!\right] $ } & \small{$m+3$} \\ \midrule 7 & $ \begin{array}{c} \frac {\KK[T_1, \ldots , T_6, S_1,\ldots,S_m]} {\langle T_{1}T_{2}+T_{3}T_{4}+T_5T_{6} \rangle} \\ \scriptstyle m = 4 \end{array} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{cccccc|cccc} 0 & 0 & 0 & 0 & -1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \end{array} \!\!\right] $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! 
\begin{array}{c} 1 \\ 2 \end{array} \!\!\right] $ } & \small{$7$} \\ \midrule 8 & $ \begin{array}{c} \frac {\KK[T_1, \ldots , T_6, S_1,\ldots,S_m]} {\langle T_{1}T_{2}+T_{3}T_{4}+T_5T_{6} \rangle} \\ \scriptstyle m \ge 2 \end{array} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \begin{array}{c} \left[\!\! \begin{array}{cccccc|cccc} 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & \ldots & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 & 0 & a_2 & \ldots & a_m \end{array} \!\!\right] \\[1em] 0 \le a_2 \le \ldots \le a_m, ~a_m >0, \\ 4 + a_2 + \ldots + a_m = ma_m \end{array} $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{c} 1 \\ a_m+1 \end{array} \!\!\right] $ } & \small{$m+3$} \\ \midrule 9 & $ \begin{array}{c} \frac {\KK[T_1, \ldots , T_6, S_1, \ldots, S_m]} {\langle T_{1}T_{2}+T_{3}T_{4}+T_5T_{6} \rangle} \\ \scriptstyle m \geq 2 \end{array} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \begin{array}{c} \left[\!\! \begin{array}{cccc|ccc} 0 & a_2 & \ldots & a_6 & 1 & \ldots & 1 \\ 1 & 1 & \ldots & 1 & 0 & \ldots & 0 \end{array} \!\!\right] \\[1em] 0 \le a_3 \le a_5 \le a_6 \le a_4 \le a_2,\\ a_2=a_3+a_4=a_5+a_6, \\ m = 2a_2 \end{array} $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{c} a_2+1 \\ 1 \end{array} \!\!\right] $ } & \small{$m+3$} \\ \midrule 10 & $ \begin{array}{c} \frac {\KK[T_1, \ldots , T_5, S_1,\ldots,S_m]} {\langle T_{1}T_{2}+T_{3}T_{4}+T_{5}^2 \rangle} \\ \scriptstyle m = 3 \end{array} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{ccccc|ccc} 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 \\ -1 & 1 & 0 & 0 & 0 & 1 & 1 & 1 \end{array} \!\!\right] $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{c} 2 \\ 1 \end{array} \!\!\right] $ } & \small{$5$} \\ \midrule 11 & $ \begin{array}{c} \frac {\KK[T_1, \ldots , T_5, S_1,\ldots,S_m]} {\langle T_{1}T_{2}+T_{3}T_{4}+T_{5}^2 \rangle} \\ \scriptstyle m \geq 2 \end{array} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \begin{array}{c} \left[\!\! 
\begin{array}{ccccc|cccc} 1 & 1 & 1 & 1 & 1 & 0 & a_2 & \ldots & a_m \\ 0 & 0 & 0 & 0 & 0 & 1 & 1 & \ldots & 1 \end{array} \!\!\right] \\[1em] 0\le a_2 \le \ldots \le a_m, ~ a_m>0, \\ 3 + a_2 + \ldots + a_m = ma_m \end{array} $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{c} 1 \\ a_m+1 \end{array} \!\!\right] $ } & \small{$m+2$} \\ \midrule 12 & $ \begin{array}{c} \frac{\KK[T_1, \ldots , T_5, S_1,\ldots,S_m]} {\langle T_{1}T_{2}+T_{3}T_{4}+T_{5}^2 \rangle} \\ \scriptstyle m \geq 3 \end{array} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \begin{array}{c} \left[\!\! \begin{array}{ccccc|cccc} 1 & 1 & 1 & 1 & 1 & 0 & 0 & \ldots & 0 \\ 0 & 2c & a & b & c & 1 & 1 & \ldots & 1 \end{array} \!\!\right] \\[1em] 0 \le a \le c \le b, \ a+b=2c, \\ m=3c \end{array} $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{c} 1 \\ 2c+1 \end{array} \!\!\right] $ } & \small{$m+2$} \\ \bottomrule \end{longtable} } } \noindent Moreover, each of the listed data defines a smooth rational non-toric truly almost Fano variety of Picard number two coming with a torus action of complexity one. \end{theorem} The article is organized as follows. In Section~\ref{sec:cpl1}, we briefly present the necessary background on rational varieties $X$ with a torus action of complexity one. In Section~\ref{sec:firstStruct}, we derive first constraints on the defining data for smooth $X$ of Picard number two. Section~\ref{sec:classif} is devoted to proving the main results. In Section~\ref{section:finite}, we introduce and discuss duplication of free weights and show how to obtain the Fano varieties of Theorem~\ref{thm:main2} via this procedure from lower dimensional varieties. Finally, in Section~\ref{sec:geomfanos}, we describe the Fano varieties of Theorem~\ref{thm:main2} in more geometric terms. \goodbreak We would like to thank Ivo Radloff for his interest in the subject and for helpful discussions. 
\tableofcontents \section{Varieties with torus action of complexity one} \label{sec:cpl1} We recall from~\cite{HaSu:2010,HaHe:2013, ArDeHaLa} the Cox ring based approach to normal (projective) rational varieties~$X$ with a torus action of complexity one and thereby fix the notation used throughout the article. The first step is to describe the possible Cox rings~$\mathcal{R}(X)$; they are encoded by a pair $(A,P)$ of matrices of the following shape. \begin{notation} \label{constr:defdata} Fix $r \in \ZZ_{\ge 1}$, a sequence $n_0, \ldots, n_r \in \ZZ_{\ge 1}$, set $n := n_0 + \ldots + n_r$, and fix integers $m \in \ZZ_{\ge 0}$ and $0 < s < n+m-r$. A pair $(A,P)$ of \emph{defining matrices} consists of \begin{itemize} \item a matrix $A := [a_0, \ldots, a_r]$ with pairwise linearly independent column vectors $a_0, \ldots, a_r \in \KK^2$, \item an integral block matrix $P$ of size $(r + s) \times (n + m)$, the columns of which are pairwise different primitive vectors generating $\QQ^{r+s}$ as a cone: \begin{eqnarray*} P & = & \left[ \begin{array}{cc} L & 0 \\ d & d' \end{array} \right], \end{eqnarray*} where $d$ is an $(s \times n)$-matrix, $d'$ an $(s \times m)$-matrix and $L$ an $(r \times n)$-matrix built from tuples $l_i := (l_{i1}, \ldots, l_{in_i}) \in \ZZ_{\ge 1}^{n_i}$ as follows \begin{eqnarray*} L & = & \left[ \begin{array}{cccc} -l_0 & l_1 & \ldots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ -l_0 & 0 &\ldots & l_{r} \end{array} \right]. \end{eqnarray*} \end{itemize} Denote by $v_{ij}$, where $0 \le i \le r$ and $1 \le j \le n_i$, the first $n$ columns of $P$ and by~$v_k$, where $1 \le k \le m$, the last $m$ ones. Moreover, $e_{ij},e_k \in \ZZ^{n+m}$ are the canonical basis vectors indexed accordingly, i.e., $P$ sends $e_{ij}$ to $v_{ij}$ and $e_k$ to $v_k$. \end{notation} \begin{construction} \label{constr:RAPdown} Fix $(A,P)$ as in~\ref{constr:defdata}. 
Consider the polynomial ring $\KK[T_{ij},S_k]$ in the variables $T_{ij}$, where $0 \le i \le r$, $1 \le j \le n_i$, and $S_k$, where $1 \le k \le m$. For every $0 \le i \le r$, define a monomial $$ T_i^{l_i} \ := \ T_{i1}^{l_{i1}} \cdots T_{in_i}^{l_{in_i}} \ \in \ \KK[T_{ij},S_k]. $$ Denote by $\mathfrak{I}$ the set of all triples $I = (i_1,i_2,i_3)$ with $0 \le i_1 < i_2 < i_3 \le r$ and define for any $I \in \mathfrak{I}$ a trinomial $$ g_I \ := \ g_{i_1,i_2,i_3} \ := \ \det \left[ \begin{array}{ccc} T_{i_1}^{l_{i_1}} & T_{i_2}^{l_{i_2}} & T_{i_3}^{l_{i_3}} \\ a_{i_1} & a_{i_2} & a_{i_3} \end{array} \right]. $$ Let $P^*$ denote the transpose of $P$, consider the factor group $K := \ZZ^{n+m}/\text{im}(P^*)$ and the projection $Q \colon \ZZ^{n+m} \to K$. We define a $K$-grading on $\KK[T_{ij},S_k]$ by setting $$ \deg(T_{ij}) \ := \ w_{ij} \ := \ Q(e_{ij}), \qquad \deg(S_{k}) \ := \ w_k \ := \ Q(e_{k}). $$ Then the trinomials $g_I$ just introduced are $K$-homogeneous, all of the same degree. In particular, we obtain a $K$-graded factor ring \begin{eqnarray*} R(A,P) & := & \KK[T_{ij},S_k; \; 0 \le i \le r, \, 1 \le j \le n_i, 1 \le k \le m] \ / \ \bangle{g_I; \; I \in \mathfrak{I}}. \end{eqnarray*} \end{construction} The rings $R(A,P)$ are precisely those which occur as Cox rings of normal rational projective (or, more generally, complete $A_2$-) varieties with a torus action of complexity one; see~\cite[Theorem~1.5]{HaHe:2013}. We recall basic properties. \begin{remark} \label{rem:ci} The $K$-graded ring $R(A,P)$ of Construction~\ref{constr:RAPdown} is a complete intersection: with $g_i := g_{i,i+1,i+2}$ we have $$ \bangle{g_I; \; I \in \mathfrak{I}} \ = \ \bangle{g_0,\ldots,g_{r-2}}, \qquad\quad \dim(R(A,P)) \ = \ n+m-(r-1). 
$$ \end{remark} \begin{remark} \label{remark:admissibleops} The following operations on the columns and rows of the defining matrix $P$ do not change the isomorphy type of the graded ring $R(A,P)$; we call them \emph{admissible operations}: \begin{enumerate} \item swap two columns inside a block $v_{ij_1}, \ldots, v_{ij_{n_i}}$, \item swap two whole column blocks $v_{ij_1}, \ldots, v_{ij_{n_i}}$ and $v_{i'j_1}, \ldots, v_{i'j_{n_{i'}}}$, \item add multiples of the upper $r$ rows to one of the last $s$ rows, \item any elementary row operation among the last $s$ rows, \item swap two columns inside the $d'$ block. \end{enumerate} The operations of type~(iii) and~(iv) do not even change $R(A,P)$, whereas types~(i), (ii), (v) correspond to certain renumberings of the variables of $R(A,P)$ keeping the (graded) isomorphy type. \end{remark} \begin{remark} If we have $n_i=1$ and $l_{i1} = 1$ in a defining matrix~$P$, then we may eliminate the variable $T_{i1}$ in $R(A,P)$ by modifying $P$ appropriately. This can be repeated until $P$ is \emph{irredundant} in the sense that $l_{i1} + \ldots + l_{in_i} \ge 2$ holds for all $i = 0,\ldots, r$. \end{remark} We come to the construction of all normal projective varieties sharing a given $R(A,P)$ as their Cox ring. By $K_\QQ := K \otimes_{\ZZ} \QQ$ we denote the rational vector space associated to an abelian group $K$. We shortly write $w$ for $w \otimes 1 \in K_\QQ$ and, similarly, we keep the symbols when passing from homomorphisms $K \to K'$ to the associated linear maps $K_\QQ \to K'_\QQ$. Moreover, when we speak of a cone $\tau \subseteq K_\QQ$, then we mean a convex, polyhedral cone in $K_\QQ$. The relative interior of $\tau$ is denoted by $\tau^\circ$. \begin{definition} The \emph{moving cone} in $K_\QQ$ of the $K$-graded ring $R(A,P)$ from Construction~\ref{constr:RAPdown} is the $$ \Mov(A,P) \ := \ \bigcap_{i,j} \cone(Q(e_{uv},e_{t}; \; (u,v) \ne (i,j))) \ \cap \ \bigcap_{k} \cone(Q(e_{uv},e_{t}; \; t \ne k)). 
$$ \end{definition} \begin{construction} \label{constr:RAPu} Take $R(A,P)$ as in Construction~\ref{constr:RAPdown} and fix $u \in \Mov(A,P)^\circ$. The $K$-grading on $\KK[T_{ij},S_k]$ defines an action of the quasitorus $H := \Spec \; \KK[K]$ on $\ol{Z} := \KK^{n+m}$ leaving $\ol{X} := V(g_I; \; I \in \mathfrak{I}) \subseteq \ol{Z}$ invariant. Consider $$ \wh{Z} \ := \ \{z \in \ol{Z}; \; f(z) \ne 0 \text{ for some } f \in \KK[T_{ij},S_k]_{ \nu u}, \, \nu \in \ZZ_{>0} \} \ \subseteq \ \ol{Z}, $$ the set of $H$-semistable points with respect to the weight $u$. Then $\wh{X} := \ol{X} \cap \wh{Z}$ is an open $H$-invariant set in $\ol{X}$ and we have a commutative diagram $$ \xymatrix{ {\wh{X}} \ar[r] \ar[d]_{\quot H}^{\pi} & {\wh{Z}} \ar[d]^{\quot H} \\ X(A,P,u) \ar[r] & Z} $$ where $X=X(A,P,u)$ is a variety with torus action of complexity one, $Z := \wh{Z} \quot H$ is a toric variety, the downward maps are characteristic spaces and the lower horizontal arrow is a closed embedding. We have $$ \dim(X) = s+1, \qquad \Cl(X) \ \cong \ K, \qquad \mathcal{R}(X) \ \cong \ R(A,P). $$ Moreover, for an irredundant defining matrix $P$, the variety $X = X(A,P,u)$ is non-toric if and only if $r \ge 2$ holds. \end{construction} See~\cite{HaHe:2013,ArDeHaLa} for the proof that this construction indeed yields all normal rational projective varieties with a torus action of complexity one. We will make intensive use of the machinery developed in~\cite{BeHa:2007,Ha:2008,ArDeHaLa}. Let us briefly summarize the necessary notions and statements in a series of remarks adapted to our needs. \begin{remark} \label{const:rlvu} Fix defining matrices $(A,P)$ and let $\gamma \subseteq \QQ^{n+m}$ be the positive orthant, spanned by the canonical basis vectors $e_{ij},e_k \in \ZZ^{n+m}$.
Every face $\gamma_0 \preceq \gamma$ defines a toric orbit in $\ol{Z} = \KK^{n+m}$: $$ \ol{Z}(\gamma_0) \ := \ \{ z \in \ol{Z}; \; z_{ij} \neq 0 \Leftrightarrow e_{ij} \in \gamma_0 \text{ and } z_{k} \neq 0 \Leftrightarrow e_{k} \in\gamma_0 \} \ \subseteq \ \ol{Z}. $$ We say that $\gamma_0 \preceq \gamma$ is an \emph{$\mathfrak{F}$-face} (for $(A,P)$) if the associated toric orbit meets the total coordinate space $\ol{X} = V(g_I; \; I \in \mathfrak{I}) \subseteq \ol{Z}$, that is, if we have $$ \ol{X}(\gamma_0) \ := \ \ol{X} \cap \ol{Z}(\gamma_0) \ \ne \ \emptyset. $$ In particular, $\ol{X}$ is the disjoint union of the locally closed pieces $\ol{X}(\gamma_0)$ associated to the $\mathfrak{F}$-faces. \end{remark} \begin{remark} \label{rem:rlv2} Fix $u \in \Mov(A,P)^\circ$. Then, for the ambient toric variety $Z$ and $X=X(A,P,u)$ of Construction~\ref{constr:RAPu}, we have the collections of \emph{relevant faces}: \begin{eqnarray*} \rlv(Z) & := & \{ \gamma_0 \preceq \gamma; \; u \in Q(\gamma_0)^\circ\}, \\ \rlv(X) & := & \{ \gamma_0 \in \rlv(Z); \; \gamma_0 \text{ is an } \mathfrak{F}\text{-face} \}. \end{eqnarray*} Let $\gamma_0^* := \gamma_0^\perp \cap \gamma \preceq \gamma$ denote the complementary face of $\gamma_0 \preceq \gamma$. Then there is a bijection between $\rlv(Z)$ and the fan $\Sigma$ of the toric variety $Z$: $$ \rlv(Z) \ \to \ \Sigma, \qquad\qquad \gamma_0 \ \mapsto \ P(\gamma_0^*). $$ The toric orbits of $Z$ correspond to the cones of the fan $\Sigma$ and thus to the cones of~$\rlv(Z)$. Concretely, the toric orbit of $Z$ associated with $\gamma_0\in \rlv(Z)$ is $$ Z(\gamma_0) \ = \ \pi(\ol{Z}(\gamma_0)). $$ The relevant faces $\rlv(X)$ of $X$ define exactly the toric orbits of $Z$ that intersect $X \subseteq Z$ non-trivially and thus give a locally closed decomposition $$ X \ = \ \bigcup_{\gamma_0\in\rlv(X)} X(\gamma_0), \qquad\qquad X(\gamma_0) \ := \ X \cap Z(\gamma_0) \ = \ \pi(\ol{X}(\gamma_0)).
$$ The fan $\Sigma_X$ generated by the cones $\sigma=P(\gamma_0^*)$, where $\gamma_0\in\rlv(X)$, defines the minimal toric open subset $Z_X \subseteq Z$ containing $X$. For the set of rays we have $$ \Sigma_X^{(1)} \ = \ \Sigma^{(1)} \ = \ \{\varrho_{ij}, \varrho_k; \ 0 \le i \le r, \ 1 \le j \le n_i, \ 1 \le k \le m\}, $$ where the $\varrho_{ij} := \cone(v_{ij})$ and $\varrho_k := \cone(v_k)$ are the rays through the columns $v_{ij}$ and~$v_k$ of the defining matrix $P$. \end{remark} \begin{remark} \label{rem:divcones} Let $X=X(A,P,u)$ arise from Construction~\ref{constr:RAPu}. Then the cones of effective, movable, semiample and ample divisor classes are given as $$ \Eff(X) \ = \ Q(\gamma), \qquad \Mov(X) \ = \ \Mov(A,P) \ = \ \bigcap_{\gamma_0 \text{ facet of }\gamma} Q(\gamma_0), $$ $$ \SAmple(X) \ = \ \bigcap_{\gamma_0\in\rlv(X)} Q(\gamma_0), \qquad \Ample(X) \ = \ \bigcap_{\gamma_0\in\rlv(X)} Q(\gamma_0)^\circ. $$ In particular, the GIT-fan of the $H$-action on $\ol{X}$ induces the Mori chamber decomposition, i.e., it subdivides $\Mov(X)$ into the nef cones of the small birational relatives of~$X$. \end{remark} \begin{remark} \label{rem:Qfact} Let $X=X(A,P,u)$ arise from Construction~\ref{constr:RAPu}. Consider $\gamma_0\in\rlv(X)$ and $x\in X(\gamma_0)$. Then the following statements hold: \begin{enumerate} \item $x$ is $\QQ$-factorial if and only if $Q(\gamma_0)$ is full-dimensional, \item $x$ is factorial if and only if $Q$ maps $\lin(\gamma_0)\cap\ZZ^{n+m}$ onto $\Cl(X)$, \item $x$ is smooth if and only if $x$ is factorial and all $z \in \pi^{-1}(x)$ are smooth in~$\ol{X}$. \end{enumerate} \end{remark} \begin{remark} \label{rem:fanoRAP} Let $X=X(A,P,u)$ arise from Construction~\ref{constr:RAPu}. The anticanonical class of $X$ does not depend on $u$ and is given by $$ -\mathcal{K}_X \ = \ \kappa(A,P) \ := \ \sum_{i,j} Q(e_{ij}) \ + \ \sum_k Q(e_k) \ - \ (r-1) \sum_{j=1}^{n_0} l_{0j} Q(e_{0j}) \ \in \ K.
$$ In particular, a $K$-graded ring $R(A,P)$ is the Cox ring of a Fano variety if and only if $\kappa(A,P)$ belongs to the relative interior of $\Mov(A,P)$. \end{remark} \begin{remark} \label{rem:trop} Consider $X \subseteq Z$, where $X=X(A,P,u)$ and $Z$ are as in Construction~\ref{constr:RAPu}. Then, with $\lambda := \{0\} \times \QQ^s \subseteq \QQ^{r+s}$, the canonical basis vectors $e_1,\ldots, e_r\in\ZZ^{r+s}$ and $e_0 := -e_1- \ldots -e_r$, the associated tropical variety is $$ \trop(X) \ = \ \lambda_0 \cup \ldots \cup \lambda_r \ \subseteq \ \QQ^{r+s}, \qquad \text{where} \quad \lambda_i \ := \ \lambda + \cone(e_i). $$ Note that this defines the coarsest possible quasifan structure on $\trop(X)$, and the lineality space of this quasifan is $\lambda$. Moreover, a cone $\sigma \in \Sigma$ corresponds to $\gamma_0 \in \rlv(X)$ if and only if $\sigma^\circ \cap \trop(X) \ne \emptyset$ holds. \end{remark} \begin{definition} Consider $X \subseteq Z$, where $X=X(A,P,u)$ and $Z$ are as in Construction~\ref{constr:RAPu}. A cone $\sigma \in \Sigma_X$ is called \begin{enumerate} \item \emph{big} if $\sigma \cap \lambda_i^\circ \ne \emptyset$ holds for each $i = 0, \ldots, r$. \item \emph{elementary big} if it is big, has no rays inside $\lambda$ and precisely one inside $\lambda_i$ for each $i = 0, \ldots, r$. \item a \emph{leaf cone} if $\sigma \subseteq \lambda_i$ holds for some $i$. \end{enumerate} We say that the variety $X$ is \emph{weakly tropical} if the fan $\Sigma_X$ is supported on the tropical variety $\trop(X)$. \end{definition} \begin{remark} \label{rem:weaklytrop} Let $X=X(A,P,u)$ arise from Construction~\ref{constr:RAPu}. \begin{enumerate} \item The fan $\Sigma_X$ is generated by big cones and leaf cones. \item Every big cone of $\Sigma_X$ is of the form $P(\gamma_0^*)$ with a $\gamma_0 \in \rlv(X)$. \item The tropical variety $\trop(X)$ is contained in the support of $\Sigma_X$. \item $X$ is weakly tropical if and only if $\Sigma_X$ consists of leaf cones.
\item If $X$ is weakly tropical, then $\lambda \subseteq \trop(X)$ is a union of cones of $\Sigma_X$. \end{enumerate} \end{remark} \section{First structural constraints} \label{sec:firstStruct} We derive first constraints on the defining matrices of smooth rational varieties with a torus action of complexity one having Picard number two. We work in the notation of Section~\ref{sec:cpl1}. The aim is to show the following. \begin{proposition} \label{prop:smooth-rho2} Let $X$ be a non-toric smooth rational projective variety with a torus action of complexity one and Picard number $\rho(X) = 2$. Then $X \cong X(A,P,u)$, where $P$ is irredundant and fits into one of the following cases: \begin{enumerate} \item[(I)] We have $r=2$ and one of the following constellations: \begin{enumerate} \item $m \ge 0$ and $n = 4+n_0$, where $n_0 \geq 3$, $n_1 = n_2 = 2$. \item $m = 0$ and $n = 6$, where $n_0 = 3$, $n_1 = 2$, $n_2 = 1$. \item $m = 0$ and $n = 5$, where $n_0 = 3$, $n_1 = 1$, $n_2 = 1$. \item $m\ge 0$ and $n = 6$, where $n_0 = n_1 = n_2 = 2$. \item $m\ge 0$ and $n = 5$, where $n_0 = n_1 = 2$, $n_2 = 1$. \item $m\ge 1$ and $n = 4$, where $n_0 = 2$, $n_1 = n_2 = 1$. \end{enumerate} \item[(II)] We have $r=3$ and one of the following constellations: \begin{enumerate} \item $m = 0$ and $n = 8$, where $n_0 = n_1 = n_2 = n_3 = 2$. \item $m = 0$ and $n = 7$, where $n_0 = n_1 = n_2 = 2$, $n_3 = 1$. \item $m = 0$ and $n = 6$, where $n_0 = n_1 = 2$, $n_2 = n_3 = 1$. \end{enumerate} \end{enumerate} \end{proposition} The statement is an immediate consequence of Propositions~\ref{prop:4-1-m-pos} and~\ref{prop:4-1-m-0}; see end of this section. Throughout the whole section, the defining matrix $P$ is irredundant. In particular, $X(A, P, u)$ is non-toric if and only if $r \ge 2$ holds, i.e., we have a relation in the Cox ring. During our considerations, we will freely use the Remarks~\ref{const:rlvu} to~\ref{rem:weaklytrop}. 
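As a plausibility check on the numerology behind the constellations above (a worked computation using only the dimension formulas from Section~\ref{sec:cpl1}, not an extra hypothesis): since the columns of $P$ generate $\QQ^{r+s}$, the grading group $K = \ZZ^{n+m}/\text{im}(P^*)$ has rank $n+m-(r+s)$, so prescribing $\rho(X) = 2$ pins down $s$ and hence $\dim(X) = s+1$.

```latex
% Constellation I(d): r = 2 and n_0 = n_1 = n_2 = 2, hence n = 6, with
% m >= 0 free variables S_1, ..., S_m.
\operatorname{rank}\, \Cl(X) \ = \ (n+m) - (r+s) \ = \ 2
\quad \Longrightarrow \quad
s \ = \ n+m-r-2 \ = \ m+2,
% and therefore
\dim(X) \ = \ s+1 \ = \ m+3,
% matching the dimension column of the corresponding families in the
% tables of the introduction.
```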
We first study the impact of $X = X(A,P,u)$ being locally factorial on the defining matrix~$P$, where locally factorial means that the local rings of the points $x \in X$ are unique factorization domains. \begin{lemma} \label{lem:ebcni2} Let $X=X(A,P,u)$ be non-toric and locally factorial. If $X$ is weakly tropical, then $n_i \geq 2$ holds for all $i=0, \ldots , r$. \end{lemma} \begin{proof} Assume that $n_i=1$ holds for some $i$. Since $X$ is weakly tropical, there exists a cone $\sigma \in \Sigma_X$ of dimension $s+1$ contained in the leaf $\lambda_i$. Because of $n_i=1$ we have $\sigma = \varrho_{i1}+\tau$ with a face $\tau \preceq \sigma$ such that $\tau \subseteq \lambda$. Now, $\sigma = P(\gamma_0^*)$ holds for some $\gamma_0 \in \rlv(X)$. Since the points of $X(\gamma_0)$ are factorial, $\sigma$ is a regular cone. Thus, also $\tau \subseteq \lambda$ must be regular. This implies $l_{i1}=1$, contradicting irredundancy of $P$. \end{proof} \begin{lemma} \label{lem:weaklytroprho5} Let $X=X(A,P,u)$ be non-toric and locally factorial. If $X$ is weakly tropical, then $\rho(X)\ge r+3$ holds. \end{lemma} \begin{proof} Lemma~\ref{lem:ebcni2} ensures $n_i \geq 2$ for all $i=0, \ldots , r$, hence $n \geq 2 \cdot (r+1)$. The $s$-dimensional lineality space $\lambda = \{0\} \times \QQ^s \subseteq \trop(X)$ is a union of cones of $\Sigma_X$. Thus $P$ must have at least $s+1$ columns $v_k$, which means $m \geq s+1$. Together this yields $$ \rho(X) \ = \ n + m - (r-1) - (s+1) \ \ge \ r + 3. $$ \end{proof} \begin{lemma} \label{lem:notwtrop} Let $X=X(A, P, u)$ be non-toric and not weakly tropical. If $X$ is $\QQ$-factorial, then there is an elementary big cone in $\Sigma_X$. \end{lemma} \begin{proof} Since $X$ is not weakly tropical, there exists a big cone $\sigma \in \Sigma_X$. We have $\sigma = P(\gamma_0^*)$ with $\gamma_0 \in \rlv(X)$. Since the points of $X(\gamma_0)$ are $\QQ$-factorial, the cone $\sigma$ is simplicial.
For every $i = 0, \ldots, r$ choose a ray $\varrho_i \preceq \sigma$ with $\varrho_i \in \lambda_i$. Then $\sigma_0 := \varrho_0 + \ldots + \varrho_r \preceq \sigma$ is as wanted. \end{proof} \begin{corollary} \label{cor:ebc} Let $X=X(A,P,u)$ be non-toric and locally factorial. If $\rho(X) \leq 4$ holds, then there exists an elementary big cone $\sigma \in \Sigma_X$. \end{corollary} Next we investigate the effect of quasismoothness on the defining matrix $P$, where we call $X = X(A,P,u)$ \emph{quasismooth} if $\wh{X}$ is smooth. Thus, quasismoothness means that $X$ has at most quotient singularities by quasitori. The smoothness of $\wh{X}$ will lead to conditions on $P$ via the Jacobian of the defining relations of $\ol{X}$. \begin{remark} Let $(A,P)$ be defining matrices. Then the Jacobian $J_g$ of the defining relations $g_0,\ldots,g_{r-2}$ from Remark~\ref{rem:ci} is of the shape $J_g=(J,0)$ with a zero block of size $(r-1) \times m$ corresponding to the variables $S_1, \dots , S_m$ and a block $$ J \ \sei \ \left[ \begin{array}{cccccccccc} \delta_{10} & \delta_{11} & \delta_{12} & 0 & \\ 0 & \delta_{21} & \delta_{22} & \delta_{23} & 0 \\ & & & & & \vdots & \\ & & & & & & \delta_{r-2,r-3} & \delta_{r-2,r-2} & \delta_{r-2,r-1}&0 \\ & & & & & & 0& \delta_{r-1,r-2} & \delta_{r-1,r-1} & \delta_{r-1,r} \\ \end{array} \right], $$ of size $(r-1) \times n$, where each vector $\delta_{a,i}$ is a nonzero multiple of the gradient of the monomial $T_i^{l_i}$: $$ \delta_{a,i} \ = \ \alpha_{a,i} \left( l_{i1} \frac{T_i^{l_i}}{T_{i1}}, \ \ldots, \ l_{in_i} \frac{T_i^{l_i}}{T_{in_i}} \right), \qquad \alpha_{a,i} \ \in \ \KK^*. $$ For given $1 \le a,b \le r-1$, $0 \le i \le r$ and $z \in \ol{X}$, we have $\delta_{a,i}(z) = 0$ if and only if $\delta_{b,i}(z) = 0$. Moreover, the Jacobian $J_g(z)$ at a point $z\in\ol{X}$ is of full rank if and only if $\delta_{a,i}(z) = 0$ holds for at most two different~$i = 0, \ldots, r$.
\end{remark} \begin{lemma} \label{lem:triple1xy} Assume that $X=X(A,P,u)$ is non-toric and that there is an elementary big cone $\sigma=\varrho_{0j_0}+\ldots+\varrho_{rj_r}\in\Sigma_X$. If $X$ is quasismooth, then $l_{ij_i} \ge 2$ holds for at most two $i=0,\dots,r$. \end{lemma} \begin{proof} We have $\sigma = P(\gamma_0^*)$ with a relevant face $\gamma_0 \in \rlv(X)$. Since $X$ is quasismooth, any $z \in \ol{X}(\gamma_0)$ is a smooth point of~$\ol{X}$. Thus, $J_g(z)$ is of full rank $r-1$. Consequently, $\delta_{a,i}(z) = 0$ holds for at most two different~$i$. This means $l_{ij_i} \ge 2$ for at most two different~$i$. \end{proof} \begin{corollary} \label{cor:qg-bigtower} Let $X = X(A,P,u)$ be non-toric and quasismooth. If there is an elementary big cone in $\Sigma_X$, then $n_i=1$ holds for at most two different $i=0, \dots ,r$. \end{corollary} \begin{lemma} \label{lem:sampleFF} Let $(A,P)$ be defining matrices. Consider the rays $\gamma_{k} := \cone(e_k)$ and $\gamma_{ij} := \cone(e_{ij})$ of the orthant $\gamma \subseteq \QQ^{n+m}$ and the two-dimensional faces $$ \gamma_{k_1,k_2} \ := \gamma_{k_1} + \gamma_{k_2}, \quad \gamma_{ij,k} := \gamma_{ij} + \gamma_{k}, \quad \gamma_{i_1j_1,i_2j_2} := \gamma_{i_1j_1} + \gamma_{i_2j_2}. $$ \begin{enumerate} \item All $\gamma_k$, resp.~$\gamma_{k_1,k_2}$, are $\mathfrak{F}$-faces and each $\ol{X}(\gamma_{k})$, resp.~$\ol{X}(\gamma_{k_1,k_2})$, consists of singular points of $\ol{X}$. \item A given $\gamma_{ij}$, resp.~$\gamma_{ij,k}$, is an $\mathfrak{F}$-face if and only if $n_i \ge 2$ holds. In that case, $\ol{X}(\gamma_{ij})$, resp.~$\ol{X}(\gamma_{ij,k})$, consists of smooth points of $\ol{X}$ if and only if $r=2$, $n_i =2$ and $l_{i,3-j} = 1$ hold. \item A given $\gamma_{ij_1,ij_2}$ with $j_1 \ne j_2$ is an $\mathfrak{F}$-face if and only if $n_i \ge 3$ holds. In that case, $\ol{X}(\gamma_{ij_1,ij_2})$ consists of smooth points of $\ol{X}$ if and only if $r=2$, $n_i=3$ and $l_{ij} = 1$ for the $j \ne j_1,j_2$ hold.
\item A given $\gamma_{i_1j_1,i_2j_2}$ with $i_1\neq i_2$ is an $\ff$-face if and only if we have $n_{i_1}, n_{i_2}\ge 2$ or $n_{i_1}=n_{i_2}=1$ and $r=2$. In the former case, $\ol{X}(\gamma_{i_1j_1,i_2j_2})$ consists of smooth points of $\ol{X}$ if and only if one of the following holds: \begin{itemize} \item $r=2$, $n_{i_t}=2$ and $l_{i_t,3-j_t} = 1$ for a $t\in\{1,2\}$, \item $r=3$, $n_{i_1}=n_{i_2}=2$, $l_{i_1,3-j_1}=l_{i_2,3-j_2} = 1$. \end{itemize} \end{enumerate} \end{lemma} \begin{proof} The statements follow directly from the structure of the defining relations $g_0, \ldots, g_{r-2}$ of $R(A,P)$ and the shape of the Jacobian $J_g$. \end{proof} We now restrict to the case that the rational divisor class group $\Cl(X)_\QQ = K_\QQ$ of $X = X(A,P,u)$ is of dimension two. Set $\tx:=\Ample(X)$. Then the effective cone $\Eff(X)$ is of dimension two and is uniquely decomposed into three convex sets $$ \Eff(X) \ = \ \tp \cup \tx \cup \tm, $$ such that $\tp, \tm$ do not intersect the ample cone $\tx$ and $\tp \cap \tm$ consists of the origin. Recall that $u \in \tx$ holds and that, due to $\tx \subseteq \Mov(X)$, each of $\tp$ and~$\tm$ contains at least two of the weights $w_{ij},w_k$. \begin{center} \begin{tikzpicture}[scale=0.6] \path[fill=gray!60!] (0,0)--(3.5,2.9)--(0.6,3.4)--(0,0); \path[fill, color=black] (1.4,2.4) circle (0.0ex) node[]{$\tx$}; \path[fill, color=black] (1,1.2) circle (0.5ex) node[]{}; \path[fill, color=black] (1,1.5) circle (0.0ex) node[]{\small{$u$}}; \draw (0,0)--(0.6,3.4); \draw (0,0) --(-2,3.4); \path[fill, color=black] (-0.35,2.65) circle (0.0ex) node[]{\small{$\tau^+$}}; \draw (0,0) -- (3.5,2.9); \draw (0,0) -- (3.5,0.5); \path[fill, color=black] (2.6,1.2) circle (0.0ex) node[]{\small{$\tau^-$}}; \path[fill, color=white] (4,1.9) circle (0.0ex); \end{tikzpicture} \end{center} \begin{remark} \label{rem:projFF} Consider $X = X(A,P,u)$ such that $\Cl(X)_\QQ$ is of dimension two. 
Then, for every $\mathfrak{F}$-face $\{0\} \ne \gamma_0 \preceq \gamma$ precisely one of the following inclusions holds $$ Q(\gamma_0) \ \subseteq \ \tp, \qquad \tx \ \subseteq \ Q(\gamma_0)^\circ, \qquad Q(\gamma_0) \ \subseteq \ \tm. $$ The $\mathfrak{F}$-faces $\gamma_0 \preceq \gamma$ satisfying the second inclusion are exactly those with $\gamma_0 \in \rlv(X)$, i.e., the relevant ones. \end{remark} \begin{lemma} \label{lem:tau} Let $X = X(A, P, u)$ be non-toric with $\rk(\Cl(X))=2$. \begin{enumerate} \item Suppose that $X$ is $\QQ$-factorial. Then $w_{k} \notin \tx$ holds for all $1\le k \le m$ and for all $0\le i \le r$ with $n_i\ge2$ we have $w_{ij} \notin \tx$, where $1 \leq j \leq n_i$. \item Suppose that $X$ is quasismooth, $m > 0$ holds and there is $0 \le i_1 \le r$ with $n_{i_1} \ge 3$. Then the $w_{ij}, w_k$ with $n_i \ge 3$, $j=1, \ldots , n_i$ and $k=1, \ldots , m$ lie either all in $\tp$ or all in $\tm$. \item Suppose that $X$ is quasismooth and there is $0 \le i_1 \le r$ with $n_{i_1} \ge 4$. Then the $w_{ij}$ with $n_i \geq 4$ and $j=1, \ldots , n_i$ lie either all in $\tp$ or all in $\tm$. \item Suppose that $X$ is quasismooth and there exist $0 \le i_1 < i_2 \le r$ with $n_{i_1}, n_{i_2} \ge 3$. Then the $w_{ij}$ with $n_i \ge 3$, $j=1, \ldots , n_i$ lie either all in $\tp$ or all in $\tm$. \item Suppose that $X$ is quasismooth. Then $w_1, \ldots, w_m$ lie either all in $\tp$ or all in $\tm$. \end{enumerate} \end{lemma} \begin{proof} We prove~(i). By Lemma~\ref{lem:sampleFF}~(i) and (ii), the rays $\gamma_{k}, \gamma_{ij} \preceq \gamma$ with $n_i \ge 2$ are $\mathfrak{F}$-faces. Since $X$ is $\QQ$-factorial, the ample cone $\tx \subseteq K_\QQ$ of $X$ is of dimension two and thus $\tx \subseteq Q(\gamma_{ij})^\circ$ or $\tx \subseteq Q(\gamma_{k})^\circ$ is not possible. Remark~\ref{rem:projFF} yields the assertion. We turn to~(ii). 
By Lemma~\ref{lem:sampleFF}~(i) and~(ii), all $\gamma_{k}, \gamma_{ij}, \gamma_{ij,k} \preceq \gamma$ in question are $\mathfrak{F}$-faces and the corresponding pieces in $\ol{X}$ consist of singular points. Because $X$ is quasismooth, none of these $\mathfrak{F}$-faces is relevant. Thus, Remark~\ref{rem:projFF} gives $w_{i_11} \in \tp$ or $w_{i_11} \in \tm$; say we have $w_{i_11} \in \tp$. Then, applying again Remark~\ref{rem:projFF}, we obtain $w_k,w_{ij} \in \tp$ for $k = 1,\ldots, m$, all $i$ with $n_i \ge 3$ and $j=1,\ldots,n_i$. Assertion~(iii) is proved analogously: treat first $\gamma_{i_11,i_12}$ with Lemma~\ref{lem:sampleFF}~(iii), then $\gamma_{i_11,ij}$ with Lemma~\ref{lem:sampleFF}~(iii) and~(iv). Similarly, we obtain~(iv) by treating first $\gamma_{i_11,i_21}$ and then all $\gamma_{i_11,ij}$ and $\gamma_{i_21,ij}$ with Lemma~\ref{lem:sampleFF}~(iii) and~(iv). Finally, we obtain~(v) using Lemma~\ref{lem:sampleFF}~(i). \end{proof} \begin{proposition} \label{prop:4-1-m-pos} Let $X = X(A,P,u)$ be non-toric, quasismooth and $\QQ$-factorial with $\rho(X)=2$. Assume that there is an elementary big cone in $\Sigma_X$ and that we have $n_0 \ge \ldots \ge n_r$. If $m > 0$ holds, then there is a $\gamma_{ij,k} \in \rlv(X)$, we have $r=2$ and the constellation of the $n_i$ is $(n_0,2,2)$, $(2,2,1)$ or $(2,1,1)$. \end{proposition} \begin{proof} According to Lemma~\ref{lem:tau}~(v), we may assume $w_1, \ldots, w_m \in \tp$. We claim that there is a $w_{i_1j_1} \in \tm$ with $n_{i_1} \ge 2$. Otherwise, use Corollary~\ref{cor:qg-bigtower} to see that there exist $w_{ij}$ with $n_i \ge 2$ and Lemma~\ref{lem:tau}~(i) to see that they all lie in $\tp$. Since all monomials $T_{i}^{l_i}$ have the same degree in $K$, we obtain in addition $w_{i1} \in \tp$ for all $i$ with $n_i=1$. But then no weights $w_{ij}, w_k$ are left to lie in $\tm$, a contradiction. Having verified the claim, we may take a $w_{i_1j_1} \in \tm$ with $n_{i_1} \ge 2$. 
Then $\gamma_{i_1j_1,1} \in \rlv(X)$ is as desired. Moreover, Lemma~\ref{lem:sampleFF}~(ii) yields $r=2$ and $n_{i_1}=2$. If $n_0 \ge 3$ holds, then Lemma~\ref{lem:tau}~(ii) gives $w_{ij} \in \tp$ for all $i$ with $n_i \ge 3$. Moreover, as all $T_{i}^{l_i}$ share the same $K$-degree, we have $w_{i1} \in \tp$ for all $i$ with $n_i=1$. For the same reason, one of the $w_{i_11}$, $w_{i_12}$ must lie in $\tp$. As $\tm$ contains at least two weights, there is a $w_{i_2j_2} \in \tm$ with $n_{i_2} = 2$ and $i_1 \ne i_2$. Thus, the constellation of $n_0 \ge n_1 \ge n_2$ is as claimed. \end{proof} \begin{proposition} \label{prop:4-1-m-0} Let $X = X(A,P,u)$ be non-toric, quasismooth and $\QQ$-factorial with $\rho(X)=2$. Assume that there is an elementary big cone in $\Sigma_X$ and that we have $n_0 \ge \ldots \ge n_r$. If $m = 0$ holds, then there is a $\gamma_{i_1j_1,i_2j_2} \in \rlv(X)$, we have $r \le 3$ and the constellation of the $n_i$ is one of the following $$ \begin{array}{lcl} r = 2 \colon & & (n_0,2,2), \ (3,2,1), \ (3,1,1), \ (2,2,2), \ (2,2,1), \\ r = 3 \colon & & (2,2,2,2), \ (2,2,2,1), \ (2,2,1,1). \end{array} $$ \end{proposition} \begin{proof} We first show $n_1 \le 2$. Otherwise, we would have $n_1 \ge 3$. Then, according to Lemma~\ref{lem:tau}~(iv), we may assume that all the $w_{ij}$ with $n_i \ge 3$ lie in $\tp$. In particular, $w_{11}$ lies in $\tp$. Because all monomials $T_{i}^{l_i}$ have the same degree in $K$, also $w_{i1} \in \tp$ holds for all $i$ with $n_i=1$. At least two weights $w_{i_1j_1}$ and $w_{i_2j_2}$ must belong to $\tm$. For these, only $n_{i_1} = n_{i_2} = 2$ and $i_1 \ne i_2$ is possible. Applying Lemma~\ref{lem:sampleFF}~(iv) to $\gamma_{11,i_1j_1} \in \rlv(X)$ gives $r = 2$, contradicting $n_0\ge n_1 \ge 3$ and $n_{i_1} = n_{i_2} = 2$. We treat the case $n_0 \ge 4$. By Lemma~\ref{lem:tau}~(iii), we can assume $w_{01}, \ldots, w_{0n_0} \in \tp$.
As before, we obtain $w_{i1} \in \tp$ for all $i$ with $n_i=1$ and we find two weights $w_{i_1j_1}, w_{i_2j_2} \in \tm$ with $n_{i_1} = n_{i_2} = 2$ and $i_1 \ne i_2$. Then $\gamma_{01,i_1j_1} \in \rlv(X)$ is as wanted. Lemma~\ref{lem:sampleFF}~(iv) gives $r = 2$ and we end up with $(n_0,2,2)$. Now let $n_0=3$. Lemma~\ref{lem:tau}~(i) guarantees that no $w_{0j}$ lies in $\tx$. If weights $w_{0j}$ occur in both cones $\tp$ and $\tm$, say $w_{01} \in \tp$ and $w_{02} \in \tm$, then $\gamma_{01,02}$ is as wanted. Lemma~\ref{lem:sampleFF}~(iii) yields $r=2$ and we obtain the constellations $(n_0,2,2)$, $(3,2,1)$ and $(3,1,1)$. So, assume that all weights $w_{0j}$ lie in one of $\tp$ and $\tm$, say in~$\tp$. Then we proceed as in the case $n_0 \ge 4$ to obtain a $\gamma_{01,i_1j_1} \in \rlv(X)$ and $r=2$ with the constellation $(3,2,2)$. Finally, let $n_0 \le 2$. Corollary~\ref{cor:qg-bigtower} yields $n_0 = 2$. According to Lemma~\ref{lem:tau}~(i) no $w_{ij}$ with $n_i=2$ lies in $\tx$. So, we may assume $w_{01} \in \tp$. Moreover, all $w_{ij}$ with $n_i=1$ lie together in one of $\tp$, $\tx$ and $\tm$. Since each of $\tp$ and $\tm$ contains at least two weights, we obtain $n_1=2$ and some $\gamma_{0j_1,1j_2}$ is as wanted. Lemma~\ref{lem:sampleFF}~(iv) shows $r \le 3$. \end{proof} We retrieve a special case of~\cite[Cor. 4.18]{Deb}. \begin{corollary} \label{cor:clfree} Let $X = X(A,P,u)$ be smooth with $\rho(X)=2$. Then the divisor class group $\Cl(X)$ is torsion-free. \end{corollary} \begin{proof} By Corollary~\ref{cor:ebc}, there is an elementary big cone in $\Sigma_X$. Thus, Propositions~\ref{prop:4-1-m-pos} and~\ref{prop:4-1-m-0} deliver a two-dimensional $\gamma_0 \in \rlv(X)$. The corresponding weights generate $K$ as a group. This gives $\Cl(X) \cong K \cong \ZZ^2$. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:smooth-rho2}] The variety $X$ is isomorphic to some $X(A,P,u)$, where after suitable admissible operations we may assume $n_0 \ge \ldots \ge n_r$.
Thus, Propositions~\ref{prop:4-1-m-pos} and~\ref{prop:4-1-m-0} apply. \end{proof} \section{Proof of Theorems~\ref{thm:main1}, \ref{thm:main2} and~\ref{thm:main3}} \label{sec:classif} We prove Theorems~\ref{thm:main1},~\ref{thm:main2} and~\ref{thm:main3} by going through the cases established in Proposition~\ref{prop:smooth-rho2}. The notation is the same as in Sections~\ref{sec:cpl1} and~\ref{sec:firstStruct}. So, we deal with a smooth projective variety $X = X(A,P,u)$ of Picard number $\rho(X) = 2$ coming with an effective torus action of complexity one. From Corollary~\ref{cor:clfree} we know that $\Cl(X) = K = \ZZ^2$ holds. With $w_{ij} = Q(e_{ij})$ and $w_k = Q(e_k)$, the columns of the $2 \times (n+m)$ degree matrix $Q$ will be written as $$ w_{ij} \ = \ (w_{ij}^1,w_{ij}^2) \ \in \ \ZZ^2, \qquad\qquad w_{k} \ = \ (w_{k}^1,w_{k}^2) \ \in \ \ZZ^2. $$ Recall that all relations $g_0, \ldots, g_{r-2}$ of $R(A,P)$ have the same degree in $K = \ZZ^2$; we set for short $$ \mu \ = \ (\mu^1, \mu^2) \ := \ \deg(g_0) \ \in \ \ZZ^2. $$ We will frequently work with the faces of the orthant $\gamma = \QQ_{\ge 0}^{n+m}$ introduced in Lemma~\ref{lem:sampleFF}: $$ \gamma_{ij,k} \ = \ \cone(e_{ij},e_k) \ \preceq \ \gamma, \qquad \gamma_{i_1j_1,i_2j_2} \ = \ \cone(e_{i_1j_1},e_{i_2j_2}) \ \preceq \ \gamma. $$ \begin{remark} \label{rem:Q} Consider a face $\gamma_0 \preceq \gamma$ of type $\gamma_{ij,k}$ or $\gamma_{i_1j_1,i_2j_2}$. Write $e'$, $e''$ for the two generators of $\gamma_0$ and $w' = Q(e')$, $w'' = Q(e'')$ for the corresponding columns of the degree matrix $Q$ such that $(w',w'')$ is positively oriented in $\ZZ^2$. Then Remark~\ref{rem:Qfact} tells us \begin{eqnarray*} \gamma_0 \ \in \ \rlv(X) & \Rightarrow & \det(w',w'') \ = \ 1. \end{eqnarray*} So, if $\gamma_0 \in \rlv(X)$, then we may multiply $Q$ from the left with a unimodular \mbox{$2 \times 2$} matrix transforming $w'$ and $w''$ into $(1,0)$ and $(0,1)$. 
This change of coordinates on $\Cl(X)$ does not affect the defining data $(A,P)$. If $w' = (1,0)$ and $w'' = (0,1)$ hold and $e \in \gamma$ is a canonical basis vector with corresponding column $w = Q(e)$, then we have \begin{align*} \cone(e',e) \in \rlv(X) \quad &\Rightarrow \quad w = (w^1,1), \\ \cone(e'',e) \in \rlv(X) \quad &\Rightarrow \quad w = (1,w^2). \end{align*} \end{remark} We are ready to go through the cases of Proposition~\ref{prop:smooth-rho2}; we keep the numbering introduced there. \begin{case} We have $r=2$, $m \ge 0$ and the list of $n_i$ is $(n_0,2,2)$, where $n_0 \ge 3$. This leads to No.~1 and No.~2 in Theorems~\ref{thm:main1} and~\ref{thm:main2}. \end{case} \begin{proof} In a first step we show that there occur weights $w_{0j}$ in each of $\tp$ and $\tm$. Otherwise, we may assume that all $w_{0j}$ lie in $\tp$, see Lemma~\ref{lem:tau}~(i). Then Lemma~\ref{lem:tau}~(ii) says that also all $w_k$ lie in $\tp$. Moreover, we have $\deg(T_i^{l_i}) \in \tp$ for $i=0,1,2$. Thus, we may assume $w_{11},w_{21} \in \tp$ and obtain $w_{12},w_{22} \in \tm$, as there must be at least two weights in $\tm$. Finally, we may assume that $\cone(w_{01},w_{12})$ contains $w_{02},\ldots,w_{0n_0}$ and $w_{22}$. Applying Remark~\ref{rem:Q} first to $\gamma_{01,12}$, then to all $\gamma_{0j,12}$, $\gamma_{12,k}$ and $\gamma_{01,22}$, $\gamma_{12,21}$ yields $$ { Q \ = \ \left[ \begin{array}{cccc|cc|cc|ccc} 0 & w_{02}^1 & \dots & w_{0n_0}^1 & w_{11}^1 & 1 & w_{21}^1 & 1 & w_1^1 & \dots & w_m^1 \\ 1 & 1 & \dots & 1 & w_{11}^2 & 0 & 1 & w_{22}^2 & 1 & \dots & 1 \end{array} \right], } $$ where $w_{0j}^1 \geq 0$ and $w_{22}^2 \geq 0$. Since $\gamma_{01,12}, \gamma_{01,22} \in \rlv(X)$ holds, Lemma~\ref{lem:sampleFF}~(iv) implies $l_{11}=l_{21}=1$. 
Applying $P \cdot Q^t = 0$ to the first row of $P$ and the second row of $Q$ gives $$ 0 \ < \ 3 \ \le \ n_0 \ \le \ l_{01} + \ldots + l_{0n_0} \ = \ w_{11}^2 \ = \ 1 + w_{22}^2w_{11}^1, $$ where the last equality is due to $\gamma_{11,22} \in \rlv(X)$ and thus $\det(w_{22},w_{11})=1$. We conclude $w^2_{22} > 0$ and $w^1_{11} > 0$. Because of $\gamma_{0j,22} \in \rlv(X)$, we obtain $\det(w_{22},w_{0j})=1$. This implies $w_{0j}^1 = 0$ for all $j=2,\ldots,n_0$. Applying $P \cdot Q^t = 0$ to the first row of~$P$ and the first row of $Q$ gives $w^1_{11} + l_{12} = 0$; a contradiction. Knowing that each of $\tp$ and $\tm$ contains weights $w_{0j}$, we can assume $w_{01}, w_{02} \in \tp$ and $w_{03} \in \tm$. Lemma~\ref{lem:tau}~(ii) and~(iii) show $n_0=3$ and $m=0$. There is at least one other weight in $\tm$, say $w_{11} \in \tm$. Applying Lemma~\ref{lem:sampleFF}~(iii) to $\gamma_{0j,03} \in \rlv(X)$ for $j=1,2$ and~(iv) to suitable $\gamma_{0j_1,i_2j_2} \in \rlv(X)$, we obtain $$ l_{01} = l_{02} = 1, \qquad l_{11} = l_{12} = 1, \qquad l_{21} = l_{22} = 1. $$ Moreover, Remark~\ref{rem:Q} applied to $\gamma_{01,03}$ as well as $\gamma_{02,03}$ and $\gamma_{01,11}$ brings the matrix~$Q$ into the shape $$ { Q \ = \ \left[ \begin{array}{ccc|cc|cc} 0 & w_{02}^1 & 1 & 1 & w_{12}^1 & w_{21}^1 & w_{22}^1 \\ 1 & 1 & 0 & w_{11}^2 & w_{12}^2 & w_{21}^2 & w_{22}^2 \end{array} \right]. } $$ Observe that the second component of the degree of the relation is $\mu^2 = 2$. The possible positions of the weights $w_{2j}$ define three subcases: \vspace{0.3cm} \begin{tikzpicture}[scale=0.6] \path[fill=gray!60!] 
(0,0)--(3.5,2.9)--(1.3,3.4)--(0,0); \path[fill, color=black] (1.5,2) circle (0.0ex) node[]{\small{$\tx$}}; \path[fill, color=black] (-0.4,1.9) circle (0.0ex) node[]{\tiny{$w_{01}$}}; \path[fill, color=black] (-0.15,1.45) circle (0.0ex) node[]{\tiny{$w_{02}$}}; \path[fill, color=black] (-0.25,2.8) circle (0.0ex) node[]{\small{$\tp$}}; \draw (0,0)--(1.3,3.4); \draw (0,0) --(-2,3.4); \path[fill, color=black] (2.6,1.15) circle (0.0ex) node[]{\tiny{$w_{03} \ w_{11}$}}; \path[fill, color=black] (2.1,0.7) circle (0.0ex) node[]{\tiny{$w_{21} \ w_{22}$}}; \path[fill, color=black] (3.7,1.75) circle (0.0ex) node[]{\small{$\tm$}}; \draw (0,0) -- (3.5,2.9); \draw (0,0) -- (4.5,0.7); \path[fill, color=black] (1,-1) circle (0.0ex) node[]{\small{(i)}}; \end{tikzpicture} \ \begin{tikzpicture}[scale=0.6] \path[fill=gray!60!] (0,0)--(3.5,2.9)--(1.3,3.4)--(0,0); \path[fill, color=black] (1.5,2) circle (0.0ex) node[]{\small{$\tx$}}; \path[fill, color=black] (-0.25,2.15) circle (0.0ex) node[]{\tiny{$w_{01} \ w_{02}$}}; \path[fill, color=black] (-0.25,1.8) circle (0.0ex) node[]{\tiny{$w_{22}$}}; \path[fill, color=black] (-0.25,3) circle (0.0ex) node[]{\small{$\tp$}}; \draw (0,0)--(1.3,3.4); \draw (0,0) --(-2,3.4); \path[fill, color=black] (2.1,1.15) circle (0.0ex) node[]{\tiny{$w_{03}$}}; \path[fill, color=black] (2.1,0.7) circle (0.0ex) node[]{\tiny{$w_{11} \ w_{21}$}}; \path[fill, color=black] (3.5,1.7) circle (0.0ex) node[]{\small{$\tm$}}; \draw (0,0) -- (3.5,2.9); \draw (0,0) -- (4.5,0.7); \path[fill, color=black] (1,-1) circle (0.0ex) node[]{\small{(ii)}}; \end{tikzpicture} \ \begin{tikzpicture}[scale=0.6] \path[fill=gray!60!] 
(0,0)--(3.5,2.9)--(1.3,3.4)--(0,0); \path[fill, color=black] (1.5,2) circle (0.0ex) node[]{\small{$\tx$}}; \path[fill, color=black] (-0.25,2.35) circle (0.0ex) node[]{\tiny{$w_{01} \ w_{02}$}}; \path[fill, color=black] (-0.25,2) circle (0.0ex) node[]{\tiny{$w_{21} \ w_{22}$}}; \path[fill, color=black] (-0.25,3.1) circle (0.0ex) node[]{\small{$\tp$}}; \draw (0,0)--(1.3,3.4); \draw (0,0) --(-2,3.4); \path[fill, color=black] (2.2,1.05) circle (0.0ex) node[]{\tiny{$w_{03}$}}; \path[fill, color=black] (1.7,0.7) circle (0.0ex) node[]{\tiny{$w_{11}$}}; \path[fill, color=black] (3.5,1.7) circle (0.0ex) node[]{\small{$\tm$}}; \draw (0,0) -- (3.5,2.9); \draw (0,0) -- (4.5,0.7); \path[fill, color=black] (1,-1) circle (0.0ex) node[]{\small{(iii)}}; \end{tikzpicture} \noindent We will see that cases~(i) and~(ii) give No.~1 and No.~2 of Theorem~\ref{thm:main1} respectively and case~(iii) will not provide any smooth variety. In~(i) we assume $w_{21},w_{22}\in\tm$. Then $\gamma_{01,21}, \gamma_{01,22} \in \rlv(X)$ holds and Remark~\ref{rem:Q} shows $w_{21}^1=w_{22}^1=1$. This implies $\mu^1=2$. Similarly, considering $\gamma_{02,21}, \gamma_{02,22} \in \rlv(X)$, we obtain $w_{02}^1=0$ or $w_{21}^2=w_{22}^2=0$. The latter contradicts $\mu^2=2$ and thus $w_{02}^1=0$ holds. We conclude $l_{03}=\mu^1=2$. Furthermore $w_{12}^1=\mu^1-w_{11}^1=1$. Together, we have $$ g_0 \ = \ T_{01}T_{02}T_{03}^2+T_{11}T_{12}+T_{21}T_{22}, \qquad Q \ = \ \left[ \begin{array}{ccc|cc|cc} 0 & 0 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 0 & a & 2-a & b & 2-b \end{array} \right], $$ where $a, b \in \ZZ$. Observe that $w_{12} \in \tm$ must hold; otherwise, $\gamma_{03,12} \in \rlv(X)$ and Remark~\ref{rem:Q} yields $w_{12}^2=1$, contradicting $w_{12}=(1,1)=w_{11}\in\tm$. The semiample cone is $\SAmple(X) = \cone((0,1), (1,d))$, where $d=\max(a, 2-a, b, 2-b)$. The anticanonical class is $-\mathcal{K}_X=(3,4)$. Hence $X$ is an almost Fano variety if and only if $d=1$, which is equivalent to $a=b=1$. 
In this situation $X$ is already a Fano variety. In~(ii) we assume $w_{21} \in \tm$ and $w_{22} \in \tp$. Remark~\ref{rem:Q}, applied to $\gamma_{01,21}, \gamma_{03,22} \in \rlv(X)$, shows $w_{21}^1 = w_{22}^2 = 1$. The latter implies $w_{21}^2= \mu^2 -w_{22}^2=1$. We claim $w_{11}^2 \ne 0$. Otherwise, we have $w_{12}^2 = \mu^2 = 2$. This gives $\det(w_{03}, w_{12})=2$. We conclude $\gamma_{03,12} \not\in \rlv(X)$ and $w_{12} \in \tm$. Then $\gamma_{01,12} \in \rlv(X)$ implies $w_{12}^1=1$. Thus, $w_{22}=(1,1)$ and $w_{12}=(1,2)$ hold, contradicting $w_{22} \in \tp$ and $w_{12} \in \tm$. Now, $\gamma_{11,22} \in \rlv(X)$ yields $w_{11}^2w_{22}^1=0$ and thus $w_{22}^1=0$. We obtain $\mu^1=1$ and, as a consequence, $l_{03}=1$, $w_{02}^1=0$ and $w_{12}^1=0$. Therefore $w_{12} \in \tp$ holds. Now $\gamma_{03,12}\in\rlv(X)$ implies $w_{12}^2=1$ and $w_{11}^2=\mu^2-w_{12}^2=1$. We arrive at $$ g_0 \ = \ T_{01}T_{02}T_{03}+T_{11}T_{12}+T_{21}T_{22}, \qquad Q \ = \ \left[ \begin{array}{ccc|cc|cc} 0 & 0 & 1 & 1 & 0 & 1 & 0 \\ 1 & 1 & 0 & 1 & 1 & 1 & 1 \end{array} \right]. $$ The anticanonical class is $-\mathcal{K}_X=(2,4)$ and the semiample cone is $\SAmple(X) = \cone((0,1), (1,1))$. In particular, $X$ is Fano. We turn to~(iii), where both $w_{21}$ and $w_{22}$ lie in $\tp$. The homogeneity of $g_0$ yields $w_{12} \in \tp$. Thus, $\gamma_{03,12},\gamma_{03,21},\gamma_{03,22} \in \rlv(X)$ holds and Remark~\ref{rem:Q} implies $w_{12}^2=w_{21}^2=w_{22}^2=1$. We conclude $w_{11}^2 = \mu^2-w_{12}^2=1$. Similarly, $\gamma_{02,11}, \gamma_{11, 21}, \gamma_{11, 22} \in \rlv(X)$ yields $w_{02}^1=w_{21}^1= w_{22}^1=0$. This gives $0 \neq l_{03} = \mu^1=w_{21}^1+ w_{22}^1=0$, which is not possible. \end{proof} \begin{case} We have $r=2$, $m = 0$, $n = 6$ and the list of $n_i$ is $(3,2,1)$. This leads to No. $3$ in Theorems~\ref{thm:main1} and~\ref{thm:main2}.
\end{case} \begin{proof} Since there are at least two weights in $\tp$ and another two in $\tm$, we can assume $w_{01},w_{02}\in\tp$ and $w_{03},w_{12}\in\tm$. By Lemma~\ref{lem:sampleFF}~(iii) and~(iv) we obtain $l_{01}=l_{02}=l_{11}=l_{12}=1$. We may assume that $\cone(w_{01},w_{03})$ contains $w_{02}$. Applying Remark~\ref{rem:Q} firstly to $\gamma_{01,03}$, then to $\gamma_{02,03}$ and $\gamma_{01,12}$, we obtain $$ Q \ = \ \left[ \begin{array}{ccc|cc|c} 0 & w_{02}^1 & 1 & w_{11}^1 & 1 & w_{21}^1 \\ 1 & 1 & 0 & w_{11}^2 & w_{12}^2 & w_{21}^2 \end{array} \right], $$ where $w_{02}^1\ge0$. For the degree $\mu$ of $g_0$, we have $\mu^2 = 2$. We conclude $w_{11}^2 = 2-w_{12}^2$ and $l_{21}w_{21}^2 = 2$, which, since $l_{21} \ge 2$ holds by irredundancy of $P$, implies $l_{21}=2$ and $w_{21}^2=1$. For $\gamma_{02,12} \in \rlv(X)$, Remark~\ref{rem:Q} gives $\det(w_{12}, w_{02}) = 1$ and thus $w_{02}^1=0$ or $w_{12}^2=0$ must hold. We treat the case $w_{02}^1=0$. Then $\mu=(l_{03},2)$ holds. We conclude $w_{11}^1=l_{03}-1$ and $w_{21}^1=l_{03}/2$; in particular, $l_{03}$ is even, as $w_{21}^1$ is an integer. With $c := l_{03}/2 \in\ZZ_{\ge1}$ and $a := w_{12}^2 \in \ZZ$, we obtain the degree matrix $$ Q \ = \ \left[ \begin{array}{ccc|cc|c} 0 & 0 & 1 & 2c-1 & 1 & c \\ 1 & 1 & 0 & 2-a & a & 1 \end{array} \right]. $$ We show $w_{11} \in \tm$. Otherwise, $w_{11}\in\tp$ holds, we have $\gamma_{03,11} \in \rlv(X)$ and Remark~\ref{rem:Q} yields $a=1$. But then $w_{01} = (0,1) \in \tp$ and $w_{11} = (2c-1,1) \in \tp$ imply $w_{12}=(1,1) \in\tp$; a contradiction. So we have $w_{11}\in\tm$. Then $\gamma_{01,11}\in\rlv(X)$ holds. Remark~\ref{rem:Q} gives $\det(w_{11},w_{01}) = 1$, which means $c=1$ and, as a consequence, $l_{03}=2$. Together, we have $$ g_0 \ = \ T_{01}T_{02}T_{03}^2+T_{11}T_{12}+T_{21}^2, \qquad Q \ = \ \left[ \begin{array}{ccc|cc|c} 0 & 0 & 1 & 1 & 1 & 1 \\ 1 & 1 & 0 & 2-a & a & 1 \end{array} \right], $$ where we may assume $a \ge 2-a$, which means $a \in \ZZ_{\ge 1}$.
The semiample cone is $\SAmple(X)=\cone((0,1),(1,a))$, and the anticanonical class is $-\mathcal{K}_X=(2,3)$. In particular, $X$ is an almost Fano variety if and only if $a=1$ holds. In this situation $X$ is already a Fano variety. We turn to the case $w_{12}^2=0$. Here, $w_{11}^2=\mu^2=2$ leads to $\det(w_{03},w_{11})=2$ and thus the $\mathfrak{F}$-face $\gamma_{03,11}$ does not belong to $\rlv(X)$; see Remark~\ref{rem:Q}. Hence $w_{11} \in \tm$ and thus $\gamma_{01,11}\in\rlv(X)$. This gives $w_{11}^1=1$ and thus $w_{11}=(1,2)$. Because of $w_{02}=(w_{02}^1,1)\in\tp$, we must have $w_{02}^1=0$ and the previous consideration applies. \end{proof} \begin{case} We have $r=2$, $m = 0$, $n = 5$ and the list of $n_i$ is $(3,1,1)$. This case does not provide smooth varieties. \end{case} \begin{proof} Each of $\tp$ and $\tm$ contains at least two weights. We may assume $w_{01},w_{02}\in\tp$ and $w_{03},w_{11},w_{21}\in\tm$. Then $\gamma_{01,03},\gamma_{02,03}\in\rlv(X)$ holds and Lemma~\ref{lem:sampleFF}~(iii) yields $l_{01}=l_{02}=1$. By Remark~\ref{rem:Q} we can assume $w_{03}=(1,0)$ and $w_{01}^2=w_{02}^2=1$. This implies $\mu^2=2$ and, since $l_{11}, l_{21} \ge 2$ holds by irredundancy of $P$, we obtain $l_{11}=l_{21}=2$. By~\cite[Thm.~1.1]{HaHe:2013}, we have torsion in $\Cl(X)$; a contradiction to Corollary~\ref{cor:clfree}. \end{proof} \begin{case} \label{case:d} We have $r=2$, $m\ge 0$, $n = 6$ and the list of $n_i$ is $(2,2,2)$. Suitable admissible operations lead to one of the following configurations for the weights $w_{ij}$: $$ \begin{array}{ccc} && \\ \begin{tikzpicture}[scale=0.6] \path[fill=gray!60!]
(0,0)--(3.5,2.9)--(1.3,3.4)--(0,0); \path[fill, color=black] (1.5,2) circle (0.0ex) node[]{\small{$\tx$}}; \path[fill, color=black] (-0.25,2.15) circle (0.0ex) node[]{\tiny{$w_{01} \ w_{11}$}}; \path[fill, color=black] (-0.25,1.8) circle (0.0ex) node[]{\tiny{$w_{21}$}}; \path[fill, color=black] (-0.25,3) circle (0.0ex) node[]{\small{$\tp$}}; \draw (0,0)--(1.3,3.4); \draw (0,0) --(-2,3.4); \path[fill, color=black] (2.1,1.15) circle (0.0ex) node[]{\tiny{$w_{02}$}}; \path[fill, color=black] (2.1,0.7) circle (0.0ex) node[]{\tiny{$w_{12} \ w_{22}$}}; \path[fill, color=black] (3.5,1.7) circle (0.0ex) node[]{\small{$\tm$}}; \draw (0,0) -- (3.5,2.9); \draw (0,0) -- (4.5,0.7); \path[fill, color=black] (1,-.6) circle (0.0ex) node[]{\small{{\rm (i)}}}; \end{tikzpicture} & \qquad \qquad & \begin{tikzpicture}[scale=0.6] \path[fill=gray!60!] (0,0)--(3.5,2.9)--(1.3,3.4)--(0,0); \path[fill, color=black] (1.5,2) circle (0.0ex) node[]{\small{$\tx$}}; \path[fill, color=black] (-0.25,2.35) circle (0.0ex) node[]{\tiny{$w_{01} \ w_{02}$}}; \path[fill, color=black] (-0.25,2) circle (0.0ex) node[]{\tiny{$w_{11} \ w_{21}$}}; \path[fill, color=black] (-0.25,3.1) circle (0.0ex) node[]{\small{$\tp$}}; \draw (0,0)--(1.3,3.4); \draw (0,0) --(-2,3.4); \path[fill, color=black] (2.2,1.05) circle (0.0ex) node[]{\tiny{$w_{12}$}}; \path[fill, color=black] (1.7,0.7) circle (0.0ex) node[]{\tiny{$w_{22}$}}; \path[fill, color=black] (3.5,1.7) circle (0.0ex) node[]{\small{$\tm$}}; \draw (0,0) -- (3.5,2.9); \draw (0,0) -- (4.5,0.7); \path[fill, color=black] (1,-.6) circle (0.0ex) node[]{\small{{\rm (ii)}}}; \end{tikzpicture} \\[1ex] \begin{tikzpicture}[scale=0.6] \path[fill=gray!60!] 
(0,0)--(3.5,2.9)--(1.3,3.4)--(0,0); \path[fill, color=black] (1.5,2) circle (0.0ex) node[]{\small{$\tx$}}; \path[fill, color=black] (-0.25,2.5) circle (0.0ex) node[]{\tiny{$w_{01} \ w_{02}$}}; \path[fill, color=black] (-0.25,2.15) circle (0.0ex) node[]{\tiny{$w_{11} \ w_{12}$}}; \path[fill, color=black] (-0.25,1.8) circle (0.0ex) node[]{\tiny{$w_{21}$}}; \path[fill, color=black] (-0.25,3.25) circle (0.0ex) node[]{\small{$\tp$}}; \draw (0,0)--(1.3,3.4); \draw (0,0) --(-2,3.4); \path[fill, color=black] (1.7,0.7) circle (0.0ex) node[]{\tiny{$w_{22}$}}; \draw (0,0) -- (3.5,2.9); \draw (0,0) -- (4.5,0.7); \path[fill, color=black] (3.5,1.7) circle (0.0ex) node[]{\small{$\tm$}}; \path[fill, color=black] (1,-.6) circle (0.0ex) node[]{\small{{\rm (iii)}}}; \end{tikzpicture} & \qquad \qquad & \begin{tikzpicture}[scale=0.6] \path[fill=gray!60!] (0,0)--(3.5,2.9)--(1.3,3.4)--(0,0); \path[fill, color=black] (1.5,2) circle (0.0ex) node[]{\small{$\tx$}}; \path[fill, color=black] (-0.25,2.7) circle (0.0ex) node[]{\tiny{$w_{01} \ w_{02}$}}; \path[fill, color=black] (-0.25,2.35) circle (0.0ex) node[]{\tiny{$w_{11} \ w_{12}$}}; \path[fill, color=black] (-0.25,2) circle (0.0ex) node[]{\tiny{$w_{21} \ w_{22}$}}; \path[fill, color=black] (-0.25,3.25) circle (0.0ex) node[]{\small{$\tp$}}; \draw (0,0)--(1.3,3.4); \draw (0,0) --(-2,3.4); \path[fill, color=black] (3.5,1.7) circle (0.0ex) node[]{\small{$\tm$}}; \draw (0,0) -- (3.5,2.9); \draw (0,0) -- (4.5,0.7); \path[fill, color=black] (1,-.6) circle (0.0ex) node[]{\small{{\rm (iv)}}}; \end{tikzpicture} \end{array} $$ Configuration~(i) amounts to No.~4 in Theorems~\ref{thm:main1},~\ref{thm:main2} and~\ref{thm:main3}, configuration~(ii) to No.~5, configuration~(iii) to Nos.~6 and~7, and configuration~(iv) to Nos.~8 and~9. \end{case} \begin{proof}[Proof for configuration~(i)] We have $w_{01},w_{11},w_{21}\in\tp$ and $w_{02},w_{12},w_{22}\in\tm$. We may assume $w_k \in \tp$ for all $k = 1, \ldots, m$. 
If $m>0$, we have $\gamma_{i2,1} \in \rlv(X)$ and Lemma~\ref{lem:sampleFF}~(ii) gives $l_{i1} = 1$ for $i=0,1,2$. If $m=0$, we use $\gamma_{i_11,i_22} \in \rlv(X)$ and Lemma~\ref{lem:sampleFF}~(iv) to obtain $l_{i_12}=1$ or $l_{i_21}=1$ for all $i_1\neq i_2$. Thus, for $m=0$, we may assume $l_{01}=l_{11}=1$ and are left with $l_{21}=1$ or $l_{22}=1$. We treat the case $m \ge 0$ and $l_{01}=l_{11}=l_{21}=1$. Here we may assume $w_{11},w_{21},w_{22} \in \cone(w_{01},w_{12})$. Applying Remark~\ref{rem:Q} firstly to $\gamma_{01,12}$ and then to $\gamma_{01,22}$, $\gamma_{12,21}$ and all $\gamma_{12,k}$ gives $$ Q \ = \ \left[ \begin{array}{cc|cc|cc|ccc} 0 & w_{02}^1 & w_{11}^1 & 1 & w_{21}^1 & 1 & w_1^1 & \ldots & w_m^1 \\ 1 & w_{02}^2 & w_{11}^2 & 0 & 1 & w_{22}^2 & 1 & \ldots & 1 \end{array} \right]. $$ Using $w_{11},w_{21},w_{22} \in \cone(w_{01},w_{12})$ and the fact that the determinants of $(w_{02},w_{01})$, $(w_{12},w_{11})$ and $(w_{22},w_{21})$ are positive, we obtain $$ w_{11}^1,\, w_{21}^1,\, w_{22}^2 \ \ge \ 0, \qquad w_{02}^1,\, w_{11}^2 \ > \ 0, \qquad 1 \ > \ w_{22}^2 w_{21}^1. $$ The degree $\mu$ of the relation satisfies $$ 0 \ < \ \mu^1 \ = \ l_{02}w_{02}^1 \ = \ w_{11}^1 + l_{12} \ = \ w_{21}^1+ l_{22}, $$ $$ 0 \ < \ \mu^2 \ = \ 1+l_{02}w_{02}^2 \ = \ w_{11}^2 \ = \ 1+l_{22}w_{22}^2. $$ In particular, $w_{02}^2 \ge 0$ holds and thus all components of the $w_{ij}$ are non-negative. With $\gamma_{02,11},\gamma_{02,21}, \in \rlv(X)$ and Remark~\ref{rem:Q}, we obtain $$ w_{02}^1w_{11}^2 \ = \ 1 + w_{02}^2w_{11}^1, \qquad\qquad w_{02}^1-1 \ = \ w_{02}^2w_{21}^1. $$ We show $w_{22}^2 = 0$. Otherwise, because of $1 > w_{22}^2 w_{21}^1$, we have $w_{21}^1=0$. This implies $w_{02}^1=1$ and thus $$ w_{11}^2 \ = \ 1 + w_{02}^2w_{11}^1 \ = \ 1+l_{02}w_{02}^2. $$ This gives $w_{02}^2=0$ or $w_{11}^1=l_{02}$. The first is impossible because of $l_{02}w_{02}^2 = l_{22}w_{22}^2$ and the second because of $l_{02} = l_{02}w_{02}^1 = w_{11}^1 + l_{12}$. 
Knowing $w_{22}^2 = 0$, we directly conclude $w_{11}^2 = 1$ and $w_{02}^2 = 0$ from $\mu^2 = 1$. This gives $w_{02}^1=1$. With $a := w_{11}^1 \in \ZZ_{\ge 0}$, $b := w_{21}^1 \in \ZZ_{\ge 0}$ and $c_k := w_{k}^1 \in \ZZ$ we are in the situation $$ g_0 \ = \ T_{01}T_{02}^{l_{02}}+T_{11}T_{12}^{l_{12}}+T_{21}T_{22}^{l_{22}}, \qquad Q \ = \ \left[ \begin{array}{cc|cc|cc|ccc} 0 & 1 & a & 1 & b & 1 & c_1 & \dots & c_m \\ 1 & 0 & 1 & 0 & 1 & 0 & 1 & \dots & 1 \end{array} \right], $$ where we may assume $0 \le a \le b$ and $c_1 \le \ldots \le c_m$. Observe $l_{02}=a+l_{12}=b+l_{22}$. The anticanonical class and the semiample cone of $X$ are given by \begin{eqnarray*} -\mathcal{K}_X & = & (3 + b + c_1 + \ldots + c_m - l_{12}, \, 2 + m), \\ \SAmple(X) & = & \cone((1,0), (d,1)), \end{eqnarray*} where $d \sei \max(b, c_m)$. Consequently, $X$ is a Fano variety if and only if the following inequality holds $$ 3 + b + c_1 + \ldots + c_m - l_{12} \ > \ (2+m)d. $$ A necessary condition for this is $0 \le d \le 1$ with $l_{12} = 1$ if $d=1$ and $l_{12} \le 2$ if $d=0$. The tuples $(a,b,d,l_{02},l_{12},l_{22})$ fulfilling that condition are $$ (0,0,0,2,2,2), \qquad (0,0,0,1,1,1), \qquad (1,1,1,2,1,1). $$ Each of these three tuples indeed leads to a Fano variety $X$; the respectively possible choices of the $c_k$ lead to Nos.~4.A, 4.B and~4.C of Theorem~\ref{thm:main2} and are as follows: $$ c_1 = \ldots =c_m=0, \qquad -1 \le c_1 \le 0 = c_2 = \ldots = c_m, \qquad c_1 = \ldots =c_m=1. $$ Moreover, $X$ is a truly almost Fano variety if and only if the following equality holds $$ 3 + b + c_1 + \ldots + c_m - l_{12} \ = \ (2+m)d. $$ This implies $0 \le d \le 2$ and the only possible parameters fulfilling that condition are listed as Nos.~4.A to~4.F in the table of Theorem~\ref{thm:main3}. We turn to the case $m = 0$, $l_{01}=l_{11}=1$ and $l_{21} \ge 2$. Lemma~\ref{lem:sampleFF}~(iv) applied to $\gamma_{01,22}, \gamma_{11,22} \in \rlv(X)$ gives $l_{02}=l_{12}=1$.
If $l_{22}=1$, then suitable admissible operations bring us to the previous case. So, let $l_{22} \ge 2$. We may assume $w_{11} \in \cone(w_{01},w_{12})$. We apply Remark~\ref{rem:Q} firstly to~$\gamma_{01,12}$, then to $\gamma_{01,22}$, $\gamma_{12,21}$ and arrive at $$ g_0 \ = \ T_{01}T_{02}+T_{11}T_{12}+T_{21}^{l_{21}}T_{22}^{l_{22}}, \quad Q \ = \ \left[ \begin{array}{cc|cc|cc} 0 & w_{02}^1 & w_{11}^1 & 1 & w_{21}^1 & 1 \\ 1 & w_{02}^2 & w_{11}^2 & 0 & 1 & w_{22}^2 \end{array} \right], $$ where $w_{11}^1 \ge 0$ and $w_{11}^2 = \det(w_{12},w_{11}) > 0$. We have $\mu = w_{02}+w_{01} = w_{11}+w_{12}$ and thus $w_{02} = w_{11}+w_{12}-w_{01}$. Because of $\gamma_{02,11} \in \rlv(X)$, we obtain $$ 1 \ = \ \det(w_{02},w_{11}) \ = \ \det(w_{12}-w_{01},w_{11}) \ = \ w_{11}^1+w_{11}^2. $$ We conclude $w_{11} = (0,1)$ and $\mu = (1,1)$. Using $\mu = l_{21}w_{21}+l_{22}w_{22}$ and $l_{21},l_{22} \ge 2$ we see $w_{21}^1, w_{22}^2 <0$. On the other hand, $0 < \det(w_{22},w_{21}) = 1 - w_{21}^1w_{22}^2$, a contradiction. Thus $l_{22} \ge 2$ does not occur. \end{proof} \begin{proof}[Proof for configuration~(ii)] We have $w_{01},w_{02},w_{11},w_{21}\in\tp$ and $w_{12},w_{22}\in\tm$. We may assume that $w_{02},w_{12} \in \cone(w_{01},w_{22})$ holds. Applying Remark~\ref{rem:Q} first to $\gamma_{01,22} \in \rlv(X)$ and then to $\gamma_{01,12}, \gamma_{02,22}, \gamma_{11,22} \in \rlv(X)$ we obtain $$ Q \ = \ \left[ \begin{array}{cc|cc|cc|ccc} 0 & w_{02}^1 & w_{11}^1 & 1 & w_{21}^1 & 1 & w_1^1 & \dots & w_m^1 \\ 1 & 1 & 1 & w_{12}^2 & w_{21}^2 & 0 & w_1^2 & \dots & w_m^2 \end{array} \right], $$ where we have $w_{02}^1, w_{12}^2 \ge 0$ due to $w_{02},w_{12} \in \cone(w_{01},w_{22})$. Moreover, $w_{21}^2>0$ holds, as we infer from the conditions $$ 0 \ \le \ \mu^1 \ = \ l_{02}w_{02}^1 \ = \ l_{11}w_{11}^1 + l_{12} \ = \ l_{21}w_{21}^1+ l_{22}, $$ $$ 0 \ < \ \mu^2 \ = \ l_{01}+l_{02} \ = \ l_{11}+l_{12}w_{12}^2 \ = \ l_{21}w_{21}^2. $$ We show $l_{11} \ge 2$. 
Otherwise, the above conditions give $l_{12}w_{12}^2 > 0$ and thus $w_{12}^2 > 0$. For $\gamma_{02,12} \in \rlv(X)$, Remark~\ref{rem:Q} gives $\det(w_{12},w_{02}) = 1$ which means $w_{12}^2w_{02}^1=0$ and thus $w_{02}^1 = 0$. This implies $l_{21}w_{21}^1+ l_{22} = 0$ and thus $w_{21}^1 < 0$; a contradiction to $1= \det(w_{12},w_{21}) = w_{21}^2 - w_{12}^2w_{21}^1$ which in turn holds due to $\gamma_{12,21} \in \rlv(X)$ and Remark~\ref{rem:Q}. Lemma~\ref{lem:sampleFF}~(iv) applied to $\gamma_{02,12}, \gamma_{01,12}, \gamma_{21,12} \in \rlv(X)$ shows $l_{01}=l_{02}=l_{22}=1$. Putting together $\mu^2=2= l_{11}+l_{12}w_{12}^2$ and $l_{11} \neq 1$, we conclude $l_{11}=2$ and $w_{12}^2=0$. With $\gamma_{12,21} \in \rlv(X)$ and Remark~\ref{rem:Q} we obtain $w_{21}^2=1$ and hence $l_{21}=\mu^2=2$. From $$ 0 \ \le \ \mu^1 \ = \ w_{02}^1 \ = \ 2w_{11}^1+1 \ = \ 2w_{21}^1+1 $$ we conclude $w_{11}^1 =w_{21}^1 \geq 0$ and thus $w_{02}^1 > 0$. Lemma~\ref{lem:sampleFF}~(ii) implies that possible weights of type $w_k$ lie in $\tm$. Thus Remark~\ref{rem:Q} and $\gamma_{01,k}$ imply $w_{k}^1=1$ for all $k$. Moreover, since $\gamma_{02,k} \in \rlv(X)$, the latter implies $w_k^2=0$. All in all, we arrive at $$ g_0 \ =\ T_{01}T_{02}+T_{11}^2T_{12}+T_{21}^2T_{22}, \quad Q \ = \ \left[ \begin{array}{cc|cc|cc|ccc} 0 & 2a+1 & a & 1 & a & 1 & 1 & \ldots & 1 \\ 1 & 1 & 1 & 0 & 1 & 0 & 0 & \ldots & 0 \end{array} \right], $$ where $a \in \ZZ_{\ge 0}$. The anticanonical class is $-\mathcal{K}_X=(2a+2+m,2)$ and the semiample cone is $\SAmple(X) =\cone((1,0), (2a+1,1))$. Hence $X$ is an almost Fano variety if and only if $m \ge 2a$ holds and $X$ is a Fano variety if and only if $m > 2a$ holds. \end{proof} \begin{proof}[Proof for configuration~(iii)] We have $w_{01},w_{02},w_{11},w_{12},w_{21}\in\tp$ and $w_{22}\in \tm$. As there must be another weight in $\tm$, we obtain $m > 0$. Lemma~\ref{lem:tau}~(v) yields $w_1, \ldots, w_m \in \tm$. 
We may assume $w_{02}, w_{11}, w_{12}, w_k \in \cone(w_{01},w_1)$, where $k = 2, \ldots, m$. Applying Remark~\ref{rem:Q} firstly to $\gamma_{01,1} \in \rlv(X)$ and then to the remaining faces $\gamma_{01,22}, \gamma_{01,k}, \gamma_{ij,1}$ from $\rlv(X)$ leads to the degree matrix $$ Q \ = \ \left[ \begin{array}{cc|cc|cc|cccc} 0 & w_{02}^1 & w_{11}^1 & w_{12}^1 & w_{21}^1 & 1 & 1 & 1 & \ldots & 1 \\ 1 & 1 & 1 & 1 & 1 & w_{22}^2 & 0 & w_2^2 & \ldots & w_m^2 \end{array} \right] $$ with at most $w_{21}^1,w_{22}^2$ negative. We infer $l_{01}=l_{02}=l_{11}=l_{12}=l_{22}=1$ from Lemma~\ref{lem:sampleFF}~(ii). For $\gamma_{02,22},\gamma_{11,22},\gamma_{12,22} \in \rlv(X)$ Remark~\ref{rem:Q} tells us $$ w_{22}^2 \ = \ 0 \qquad \text{or} \qquad w_{02}^1 \ = \ w_{11}^1 \ = \ w_{12}^1 \ = \ 0. $$ We treat the case $w_{22}^2=0$. Here $l_{21} = \mu^2 =2$ holds. Thus $\mu^1=w_{02}^1 = 2w_{21}^1+1$ holds. Because of $w_{02}^1 \ge 0$, we conclude $w_{02}^1 > 0$ and $w_{21}^1 \ge 0$. Remark \ref{rem:Q} applied to $\gamma_{02,k} \in \rlv(X)$ gives $w_k^2=0$ for all $k=2,\dots,m$. We arrive at $$ g_0 \ = \ T_{01}T_{02}+T_{11}T_{12}+T_{21}^2T_{22}, \quad Q \ = \ \left[ \begin{array}{cc|cc|cc|ccc} 0 & 2c+1 & a & b & c & 1 & 1 & \ldots & 1 \\ 1 & 1 & 1 & 1 & 1 & 0 & 0 & \ldots & 0 \end{array} \right], $$ where $a,b,c \in \ZZ_{\geq 0}$ and $a+b=2c+1$. Furthermore, the anticanonical class is $-\mathcal{K}_X =(3c+2+m,3)$ and we have $\SAmple(X)=\cone((1,0), (2c+1,1))$. In particular, $X$ is an almost Fano variety if and only if $3c+1 \le m$ holds and a Fano variety if and only if the corresponding strict inequality holds. Now we consider the case $w_{02}^1=w_{11}^1=w_{12}^1=0$. We have $\mu^1 =0$, which implies $l_{21}=1$, $w_{21}^1=-1$. Consequently, $\mu^2 =2$ gives $w_{22}^2=1$. Since $\gamma_{21,k}\in \rlv(X)$ for $2 \leq k \leq m$, we conclude~$w_k^2=0$ for all $k$. 
Therefore we obtain $$ g_0 \ = \ T_{01}T_{02}+T_{11}T_{12}+T_{21}T_{22}, \quad Q \ = \ \left[ \begin{array}{cc|cc|cc|ccc} 0 & 0 & 0 & 0 & -1 & 1 & 1 & \ldots & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 & 0 & \ldots & 0 \end{array} \right]. $$ Finally, we have $-\mathcal{K}_X =(m,4)$ and $\SAmple(X)=\cone((1,1), (0,1))$. Thus, $X$ is a Fano variety if and only if $m<4$ holds. Moreover, $X$ is an almost Fano variety if and only if $m\le 4$ holds. \end{proof} \begin{proof}[Proof for configuration~(iv)] All $w_{ij}$ lie in $\tp$. Then we have $m \ge 2$ and one and hence all $w_k$ lie in $\tm$, see Lemma~\ref{lem:tau}~(v). Applying Lemma~\ref{lem:sampleFF}~(ii) to $\gamma_{ij,1} \in \rlv(X)$, we conclude $l_{ij} = 1$ for all $i,j$. Thus we have the relation $$ g_0 \ = \ T_{01}T_{02}+T_{11}T_{12}+T_{21}T_{22}. $$ We may assume that $\cone(w_{01},w_1)$ contains all $w_{ij}, w_k$. Remark~\ref{rem:Q} applied to $\gamma_{01,1} \in \rlv(X)$ leads to $w_1 = (1,0)$ and $w_{01} = (0,1)$. All other weights lie in the positive orthant. For $\gamma_{ij,1}, \gamma_{01,k} \in \rlv(X)$ Remark~\ref{rem:Q} shows $w_{ij}^2=w_k^1=1$ for all $i,j,k$. Consider the case that all $w_k^2$ vanish. Then the degree matrix is of the form $$ Q \ = \ \left[ \begin{array}{cc|cc|cc|ccc} 0 & a_2 & a_3 & a_4 & a_5 & a_6 & 1 & \ldots & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 & 0 & \ldots & 0 \end{array} \right], $$ where $a_i \in \ZZ_{\ge 0}$ and $a_2=a_3+a_4=a_5+a_6$. We have $-\mathcal{K}_X = (2a_2+m,4)$ and $\SAmple(X)= \cone((1,0),(a_2,1))$. Hence~$X$ is a Fano variety if and only if $2a_2 < m$ holds and an almost Fano variety if and only if $2a_2 \le m$ holds. Finally, let $w_k^2>0$ for some $k$. Note that we may assume $0\le w_2^2 \le\ldots\le w_m^2$; in particular $w_m^2>0$. Since $\gamma_{ij,m} \in \rlv(X)$ for all $i,j$, Remark~\ref{rem:Q} yields $w_{ij}^1 = 0$ for all $i,j$.
Thus we obtain the degree matrix $$ Q \ = \ \left[ \begin{array}{cc|cc|cc|cccc} 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & \ldots & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 & 0 & a_2 & \ldots & a_m \end{array} \right], $$ where $0\le a_2 \le\ldots\le a_m$ and $a_m>0$. The anticanonical class and the semiample cone are given as $$ -\mathcal{K}_X \ = \ (m,\, 4+a_2+ \ldots + a_m), \qquad \SAmple(X) \ = \ \cone((0,1),(1,a_m)). $$ In particular, $X$ is a Fano variety if and only if $4+a_2+ \ldots + a_m > ma_m$ holds. Note that for the latter $a_m \le 3$ is necessary. Moreover, $X$ is a truly almost Fano variety if and only if the equality $4+a_2+ \ldots + a_m = ma_m$ holds. \end{proof} \begin{case} We have $r=2$, $m\ge 0$, $n = 5$ and the list of $n_i$ is $(2,2,1)$. This leads to Nos.~10, 11 and~12 in Theorems~\ref{thm:main1},~\ref{thm:main2} and~\ref{thm:main3}. \end{case} \begin{proof} We divide this case into the following three configurations, according to the way some weights lie with respect to $\tx$. \vspace{0.3cm} \begin{tikzpicture}[scale=0.6] \path[fill=gray!60!] (0,0)--(3.5,2.9)--(1.3,3.4)--(0,0); \path[fill, color=black] (1.5,2) circle (0.0ex) node[]{\small{$\tx$}}; \path[fill, color=black] (-0.4,1.9) circle (0.0ex) node[]{\tiny{$w_{02}$}}; \path[fill, color=black] (-0.15,1.45) circle (0.0ex) node[]{\tiny{$w_{12}$}}; \path[fill, color=black] (-0.25,2.8) circle (0.0ex) node[]{\small{$\tp$}}; \draw (0,0)--(1.3,3.4); \draw (0,0) --(-2,3.4); \path[fill, color=black] (2.2,1.05) circle (0.0ex) node[]{\tiny{$w_{01}$}}; \path[fill, color=black] (1.7,0.7) circle (0.0ex) node[]{\tiny{$w_{11}$}}; \path[fill, color=black] (3.5,1.7) circle (0.0ex) node[]{\small{$\tm$}}; \draw (0,0) -- (3.5,2.9); \draw (0,0) -- (4.5,0.7); \path[fill, color=black] (1,-1) circle (0.0ex) node[]{\small{(i)}}; \end{tikzpicture} \ \begin{tikzpicture}[scale=0.6] \path[fill=gray!60!] 
(0,0)--(3.5,2.9)--(1.3,3.4)--(0,0); \path[fill, color=black] (1.5,2) circle (0.0ex) node[]{\small{$\tx$}}; \path[fill, color=black] (-0.4,1.9) circle (0.0ex) node[]{\tiny{$w_{02}$}}; \path[fill, color=black] (-0.15,1.45) circle (0.0ex) node[]{\tiny{$w_{1}$}}; \path[fill, color=black] (-0.25,2.8) circle (0.0ex) node[]{\small{$\tp$}}; \draw (0,0)--(1.3,3.4); \draw (0,0) --(-2,3.4); \path[fill, color=black] (2.1,1.15) circle (0.0ex) node[]{\tiny{$w_{01}$}}; \path[fill, color=black] (2.1,0.7) circle (0.0ex) node[]{\tiny{$w_{11} \ w_{12}$}}; \path[fill, color=black] (3.5,1.7) circle (0.0ex) node[]{\small{$\tm$}}; \draw (0,0) -- (3.5,2.9); \draw (0,0) -- (4.5,0.7); \path[fill, color=black] (1,-1) circle (0.0ex) node[]{\small{(ii)}}; \end{tikzpicture} \ \begin{tikzpicture}[scale=0.6] \path[fill=gray!60!] (0,0)--(3.5,2.9)--(1.3,3.4)--(0,0); \path[fill, color=black] (1.5,2) circle (0.0ex) node[]{\small{$\tx$}}; \path[fill, color=black] (-0.4,1.9) circle (0.0ex) node[]{\tiny{$w_{1}$}}; \path[fill, color=black] (-0.15,1.45) circle (0.0ex) node[]{\tiny{$w_{2}$}}; \path[fill, color=black] (-0.25,2.8) circle (0.0ex) node[]{\small{$\tp$}}; \draw (0,0)--(1.3,3.4); \draw (0,0) --(-2,3.4); \path[fill, color=black] (2.6,1.15) circle (0.0ex) node[]{\tiny{$w_{01} \ w_{02}$}}; \path[fill, color=black] (2.1,0.7) circle (0.0ex) node[]{\tiny{$w_{11} \ w_{12}$}}; \path[fill, color=black] (3.7,1.75) circle (0.0ex) node[]{\small{$\tm$}}; \draw (0,0) -- (3.5,2.9); \draw (0,0) -- (4.5,0.7); \path[fill, color=black] (1,-1) circle (0.0ex) node[]{\small{(iii)}}; \end{tikzpicture} We show that configuration~(i) does not provide any smooth variety, (ii) delivers No.~10 of Theorem~\ref{thm:main1} and~(iii) delivers Nos.~11 and 12. In configuration~(i) we have $w_{01},w_{11}\in\tm$ and $w_{02},w_{12}\in\tp$. We may assume $w_{11} \in \cone(w_{01}, w_{12})$. Remark~\ref{rem:Q} applied to $\gamma_{01,12}\in\rlv(X)$ leads to $w_{01}=(1,0)$ and $w_{12}=(0,1)$. Observe $w_{11}^1, w_{11}^2 \ge 0$. 
Due to $\det(w_{11},w_{12}) > 0$, we even have $w_{11}^1 > 0$ and $\det(w_{01},w_{02}) > 0$ gives $w_{02}^2 > 0$. Since $T_0^{l_0}$ and $T_1^{l_1}$ share the same degree, we have $$ l_{01}w_{01} + l_{02}w_{02} \ = \ l_{11}w_{11} + l_{12}w_{12}. $$ Lemma~\ref{lem:sampleFF}~(iv) says $l_{02}=1$ or $l_{11}=1$, which allows us to solve for $w_{02}$ or for $w_{11}$ in the above equation. Using $\gamma_{02,11}\in\rlv(X)$, we obtain \begin{eqnarray*} l_{02} = 1 & \implies & 1 = \det(w_{11},w_{02}) = \det(w_{11},l_{12}w_{12}-l_{01}w_{01}) = l_{12}w_{11}^1+l_{01}w_{11}^2, \\ l_{11} = 1 & \implies & 1 = \det(w_{11},w_{02}) = \det(l_{01}w_{01}-l_{12}w_{12},w_{02}) = l_{01}w_{02}^2+l_{12}w_{02}^1. \end{eqnarray*} We show $l_{02} > 1$. Otherwise, $l_{02} = 1$ holds. The above consideration shows $w_{11}^2 = 0$ and $l_{12}= w_{11}^1 = 1$. Thus, $l_{21}w_{21}^2 = l_{12} =1$ holds and we obtain $l_{21} = 1$; a contradiction to $P$ being irredundant. Thus, $l_{02} > 1$ and $l_{11} = 1$ must hold. Because of $w_{02}^2 > 0$, we must have $w_{02}^1 \le 0$. With $$ 1 \ = \ \det(w_{11},w_{02}) \ = \ w_{11}^1 w_{02}^2 - w_{11}^2 w_{02}^1 $$ we see $w_{11}^2 w_{02}^1 = 0$ and $w_{11}^1 = w_{02}^2 = 1$. But then we arrive at $1 = l_{11}w_{11}^1 = l_{21}w_{21}^1$. Again this means $l_{21} = 1$; a contradiction to $P$ being irredundant. In configuration~(ii) we have $w_{01},w_{11},w_{12} \in \tm$ and $w_{02},w_{1} \in \tp$. In particular $m\ge1$. Lemma~\ref{lem:tau}~(v) yields $w_2,\ldots,w_m \in \tp$. Applying Remark~\ref{rem:Q} first to $\gamma_{11,1}\in\rlv(X)$ and then to $\gamma_{01,1},\gamma_{12,1},\gamma_{02,11},\gamma_{11,k}\in\rlv(X)$ leads to $$ Q \ = \ \left[ \begin{array}{cc|cc|c|cccc} 1 & w_{02}^1 & 1 & 1 & w_{21}^1 & 0 & w_2^1 & \dots & w_m^1 \\ w_{01}^2 & 1 & 0 & w_{12}^2 & w_{21}^2 & 1 & 1 & \dots & 1 \end{array} \right]. $$ Applying Lemma~\ref{lem:sampleFF}~(ii) to $\gamma_{01,1},\gamma_{12,1},\gamma_{11,1}\in\rlv(X)$ we obtain $l_{02}=l_{11}=l_{12}=1$.
For the degree $\mu$ of the relation $g_0$ we note $$ \mu^1 \ = \ l_{01}+w_{02}^1 \ = \ 2 \ = \ l_{21} w_{21}^1, \qquad \qquad \mu^2 \ = \ l_{01}w_{01}^2+1 \ = \ w_{12}^2 \ = \ l_{21}w_{21}^2. $$ From $\mu^1=2$ we infer $l_{21}=2$ and $w_{21}^1=1$. Consequently, $\mu^2$ is even and both $l_{01}, w_{01}^2$ are odd. Using again $\mu^1=2$ gives $w_{02}^1 \ne 0$. For $\gamma_{02,12}\in\rlv(X)$ Remark~\ref{rem:Q} yields $\det(w_{12},w_{02})=1$ which means $w_{02}^1w_{12}^2=0$. We conclude $w_{12}^2=0=\mu^2$. This implies $w_{21}^2=0$, $w_{01}^2=-1$, $l_{01}=1$ and $w_{02}^1=1$. We obtain $$ g_0 \ = \ T_{01}T_{02}+T_{11}T_{12}+T_{21}^2, \qquad Q \ = \ \left[ \begin{array}{cc|cc|c|ccc} 1 & 1 & 1 & 1 & 1 & 0 & \dots & 0 \\ -1 & 1 & 0 & 0 & 0 & 1 & \dots & 1 \end{array} \right], $$ where $w_2^1= \ldots =w_m^1=0$ follows from Remark~\ref{rem:Q} applied to $\gamma_{01,k}\in\rlv(X)$. The semiample cone is given as $\SAmple(X)= \cone((1,0),(1,1))$ and the anticanonical class as $-\mathcal{K}_X=(3,m)$. Therefore $X$ is a Fano variety if and only if $m<3$, i.e.\ $m=1,2$. Moreover, $X$ is an almost Fano variety if and only if $m\le3$. In configuration~(iii) we have $w_{01},w_{02},w_{11},w_{12}\in\tm$ and $w_{1},w_{2}\in\tp$. In particular $m\ge2$. Lemma~\ref{lem:tau}~(v) ensures $w_3, \ldots, w_m \in \tp$. We can assume that all $w_{ij}, w_k$ lie in $\cone(w_{01},w_{1})$. Applying Remark~\ref{rem:Q}, firstly to $\gamma_{01,1}$ and then to all relevant faces of the types $\gamma_{ij,1}$ and $\gamma_{01,k}$, we achieve $$ w_{01}=(1,0), \quad w_{1}=(0,1), \quad w_{02}^1=w_{11}^1=w_{12}^1=1, \quad w_2^2= \ldots = w_m^2 =1. $$ Lemma~\ref{lem:sampleFF}~(ii) applied to all $\gamma_{ij,1}$ shows $l_{ij}=1$ for all $i,j$. We conclude $\mu^1=2$ which in turn implies $l_{21}=2$ and $w_{21}^1=1$. In particular, we have the relation $$ g_0 \ = \ T_{01}T_{02} + T_{11}T_{12} + T_{21}^2. $$ We treat the case that $w_1^1 = \ldots = w_m^1 =0$ holds.
All columns of the degree matrix lie in $\cone(w_{01},w_{1})$ and thus $Q$ is of the form $$ Q \ = \ \left[ \begin{array}{cc|cc|c|cccc} 1 & 1 & 1 & 1 & 1 & 0 & 0 & \ldots & 0 \\ 0 & 2c & a & b & c & 1 & 1 & \ldots & 1 \end{array} \right], $$ where $a,b,c\in\ZZ_{\ge0}$ and $a+b=2c$. The anticanonical class is $-\mathcal{K}_X=(3,m+3c)$ and we have $\SAmple(X)=\cone((0,1),(1,2c))$. Therefore $X$ is a Fano variety if and only if $m>3c$. Moreover, $X$ is an almost Fano variety if and only if $m\ge3c$. We treat the case that $w_k^1>0$ holds for some $k$. Then we obtain $w_{02}^2=0$ by applying Remark~\ref{rem:Q} to $\gamma_{02,k}$. This yields $\mu^2=0$ and thus $w_{ij}^2=0$ for all $i,j$. Consequently, the degree matrix is given as $$ Q \ = \ \left[ \begin{array}{cc|cc|c|cccc} 1 & 1 & 1 & 1 & 1 & 0 & w_2^1 & \dots & w_m^1 \\ 0 & 0 & 0 & 0 & 0 & 1 & 1 & \dots & 1 \end{array} \right], $$ where we can assume $0 \le w_2^1 \le \ldots \le w_m^1$. The semiample cone and the anticanonical divisor are given as $$ \SAmple(X)=\cone((1,0),(w_m^1,1)), \qquad -\mathcal{K}_X=(3+w_2^1+\ldots+w_m^1,m). $$ We see that $X$ is an almost Fano variety if and only if $mw_m^1 \le 3+w_2^1+\ldots+w_m^1$ and that $X$ is a Fano variety if and only if the corresponding strict inequality holds. \end{proof} \begin{case} We have $r=2$, $m\ge 1$, $n = 4$ and the list of $n_i$ is $(2,1,1)$. This case does not provide any smooth variety. \end{case} \begin{proof} We can assume $w_{01}\in\tm$ and $w_{1}\in\tp$. Lemma~\ref{lem:tau}~(v) ensures $w_2, \ldots, w_m \in \tp$. Applying Remark~\ref{rem:Q} first to $\gamma_{01,1} \in \rlv(X)$ and then to the remaining $\gamma_{01,k} \in \rlv(X)$, we achieve $$ Q \ = \ \left[ \begin{array}{cc|c|c|cccc} 1 & w_{02}^1 & w_{11}^1 & w_{21}^1 & 0 & w_2^1 & \dots & w_m^1 \\ 0 & w_{02}^2 & w_{11}^2 & w_{21}^2 & 1 & 1 & \dots & 1 \end{array} \right]. $$ Moreover $\gamma_{01,1}\in\rlv(X)$ implies $l_{02}=1$ by Lemma~\ref{lem:sampleFF}~(ii).
Recall from Corollary~\ref{cor:clfree} that $\Cl(X)$ is torsion-free. Thus~\cite[Thm.~1.1]{HaHe:2013} implies that $l_{11}$ and $l_{21}$ are coprime. Consider the case $w_{02}\in\tm$. Then $\gamma_{02,1} \in \rlv(X)$ holds, Lemma~\ref{lem:sampleFF}~(ii) yields $l_{01}=1$ and Remark~\ref{rem:Q} shows $w_{02}^1=1$. We conclude $\mu^1 = 2$ and thus obtain $l_{11}=l_{21}=2$; a contradiction. Now let $w_{02}\in\tp$, which implies $\gamma_{01,02,11}\in\rlv(X)$. Since $X$ is locally factorial, Remark~\ref{rem:Qfact}~(ii) shows that $w_{02}^2$ and~$w_{11}^2$ are coprime. Now we look at $$ \mu^2 \ = \ w_{02}^2 \ = \ l_{11}w_{11}^2 \ = \ l_{21}w_{21}^2. $$ We infer that~$l_{21}$ divides $w_{02}^2$ and $w_{11}^2$. This contradicts coprimeness of $w_{02}^2$ and~$w_{11}^2$, because by irredundancy of $P$ we have $l_{21} \ge 2$. \end{proof} \begin{case2} We have $r=3$, $m=0$ and $2 = n_0 = n_1 \ge n_2 \ge n_3 \ge 1$. This leads to No.~13 in Theorems~\ref{thm:main1} and~\ref{thm:main2}. \end{case2} \begin{proof} We treat the constellations~(a), (b) and~(c) at once. First observe that for every $w_{i_1j_1}$ with $n_{i_1}=2$, there is at least one $w_{i_2j_2}$ with~$n_{i_2}=2$ and $i_1\neq i_2$ such that $\tx \subseteq Q(\gamma_{i_1j_1,i_2j_2})^{\circ}$ and thus $\gamma_{i_1j_1,i_2j_2} \in \rlv(X)$. Since $r=3$, we conclude $l_{ij}=1$ for all $i$ with $n_i=2$; see Lemma~\ref{lem:sampleFF}~(iv). We can assume $w_{01},w_{11}\in\tm$ and $w_{02},w_{12}\in\tp$ as well as $w_{11} \in \cone(w_{01},w_{12})$. Applying Remark~\ref{rem:Q} to $\gamma_{01,12}\in\rlv(X)$, we obtain $w_{01}=(1,0)$ and $w_{12}=(0,1)$. Moreover $w_{11}^1,w_{11}^2\ge 0$ holds and, because of $w_{11} \not\in \tp$, we even have $w_{11}^1>0$. For the degree $\mu$ of $g_0$ and $g_1$ we note $$ \mu^1 \ = \ w_{02}^1+1 \ = \ w_{11}^1, \qquad \qquad \mu^2 \ = \ w_{02}^2 \ = \ w_{11}^2+1. $$ Thus, we can express $w_{02}$ in terms of $w_{11}$.
Remark~\ref{rem:Q} applied to $\gamma_{02,11} \in \rlv(X)$ gives $1 = \det(w_{11},w_{02}) = w_{11}^1 + w_{11}^2$. We conclude $w_{11}=(1,0)$ and $w_{02}=(0,1)$. In particular, the degree of the relations $g_0$ and $g_1$ is $\mu=(1,1)$. In constellations~(b) and~(c), we have $n_3=1$ and $\mu=(1,1)$. This implies $l_{31}=1$, a contradiction to $P$ being irredundant. Thus, constellations~(b) and~(c) do not occur. We are left with constellation~(a), which means that we have $n_0= \ldots = n_3=2$. As seen before, $l_{ij} = 1$ for all $i,j$. Thus, the relations are $$ g_0 \ = \ T_{01}T_{02} + T_{11}T_{12} + T_{21}T_{22}, \qquad g_1 \ = \ \lambda T_{11}T_{12} + T_{21}T_{22} + T_{31}T_{32}, $$ where $\lambda\in\KK^*\setminus\{1\}$. In this situation, we may assume $w_{21}, w_{31} \in \tm$. Applying Remark~\ref{rem:Q} to the relevant faces $\gamma_{02,21}, \gamma_{02,31}$, we conclude $w_{21}^1=w_{31}^1=1$. Since $\mu^1=1$ and $l_{ij}=1$, we obtain $w_{22}^1=w_{32}^1=0$. Thus, $w_{22}$ and $w_{32}$ lie in $\tp$. Again Remark~\ref{rem:Q}, this time applied to $\gamma_{01,22}, \gamma_{01,32} \in \rlv(X)$, yields $w_{22}^2=w_{32}^2=1$. Since $\mu^2=1$ and $l_{ij}=1$, we obtain $w_{21}^2=w_{31}^2=0$. Hence we obtain the degree matrix $$ Q \ = \ \left[ \begin{array}{cc|cc|cc|cc} 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 \end{array} \right]. $$ The semiample cone is $\SAmple(X)=(\QQ_{\ge0})^2$ and the anticanonical divisor is $-\mathcal{K}_X=(2,2)$. In particular, $X$ is a Fano variety. \end{proof} \begin{proof}[Proof of Theorems~\ref{thm:main1}, \ref{thm:main2} and~\ref{thm:main3}] The preceding analysis of the cases of Proposition~\ref{prop:smooth-rho2} shows that every smooth rational non-toric projective variety of Picard number two coming with a torus action of complexity one occurs in Theorem~\ref{thm:main1} and, among these, the Fano ones in Theorem~\ref{thm:main2} and the truly almost Fano ones in Theorem~\ref{thm:main3}.
Comparing the defining data, one directly verifies that any two different listed varieties are not isomorphic to each other. Finally, using Remark~\ref{rem:Qfact} one explicitly checks that indeed all varieties listed in Theorem~\ref{thm:main1} are smooth. \end{proof} \section{Duplicating free weights} \label{section:finite} As mentioned in the introduction, there are (up to isomorphy) just two smooth non-toric projective varieties with a torus action of complexity one and Picard number one, namely the smooth projective quadrics in dimensions three and four. In Picard number two we obtained examples in every dimension and this even holds when we restrict to the Fano case. Nevertheless, also in Picard number two we will observe a certain finiteness feature: each Fano variety listed in Theorem~\ref{thm:main2} arises from a smooth, but not necessarily Fano, variety of dimension at most seven via an iterated generalized cone construction. In terms of the Cox ring the generalized cone construction simply means \emph{duplicating a free weight}. For the precise treatment, the setting of bunched rings $(R,\mathfrak{F},\Phi)$ is most appropriate. Recall from~\cite[Sec.~3.2]{ArDeHaLa} that $R$ is a normal factorially $K$-graded $\KK$-algebra, $\mathfrak{F}$ a system of pairwise non-associated $K$-prime generators for $R$ and $\Phi$ a certain collection of polyhedral cones in $K_\QQ$ defining an open set $\wh{X} \subseteq \ol{X} = \Spec \, R$ with a good quotient $X = \wh{X} \quot H$ by the action of the quasitorus $H = \Spec \, \KK[K]$ on $\ol{X}$. Dimension, divisor class group and Cox ring of $X$ are given by $$ \dim(X) \ = \ \dim(R) - \dim(K_\QQ), \qquad \Cl(X) \ = \ K, \qquad \ \mathcal{R}(X) \ = \ R. $$ We call $X=X(R,\mathfrak{F},\Phi)$ the variety associated with the bunched ring $(R,\mathfrak{F},\Phi)$. This construction yields for example all normal complete $A_2$-varieties with a finitely generated Cox ring, e.g.~Mori dream spaces. 
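To fix ideas, here is a standard toy example of the bunched ring formalism (it is toric, hence not among the non-toric varieties classified above): the simplest choice of data recovers projective space.

```latex
% Toy example (not from the classification): projective space as a bunched ring.
% Data: R = K[T_1,...,T_n], grading group K = Z, all generators of degree one,
% F = (T_1,...,T_n) and the bunch Phi consisting of the single cone Q_{>=0}.
\[
H \ = \ \Spec \, \KK[K] \ = \ \KK^*,
\qquad
\ol{X} \ = \ \KK^n,
\qquad
\wh{X} \ = \ \KK^n \setminus \{0\},
\]
\[
X \ = \ \wh{X} \quot H \ = \ \PP_{n-1},
\qquad
\dim(X) \ = \ \dim(R) - \dim(K_\QQ) \ = \ n-1,
\qquad
\Cl(X) \ = \ \ZZ.
\]
```

Here the general formulas above specialize as expected; in particular, the Cox ring of $\PP_{n-1}$ is the full polynomial ring.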
Observe that our Construction~\ref{constr:RAPu} presented earlier is a special case; it yields precisely the rational projective varieties with a torus action of complexity one. The approach via bunched rings allows in particular an algorithmic treatment~\cite{HaKe:2014}. \begin{construction} \label{const:duplicate} Let $R = \KK[T_1,\ldots,T_r] / \langle g_1, \ldots, g_s \rangle$ be a $K$-graded algebra presented by $K$-homogeneous generators $T_i$ and relations $g_j \in \KK[T_1,\ldots,T_{r-1}]$. By \emph{duplicating the free weight $\deg(T_r)$} we mean passing from $R$ to the $K$-graded algebra $$ R' \ := \ \KK[T_1,\ldots,T_r,T_{r+1}] / \langle g_1, \ldots, g_s \rangle, \qquad \deg(T_{r+1}) \ := \ \deg(T_r) \ \in \ K, $$ where $g_j \in \KK[T_1,\ldots,T_{r-1}] \subseteq \KK[T_1,\ldots,T_r,T_{r+1}]$. If in this situation $(R,\mathfrak{F},\Phi)$ is a bunched ring with $\mathfrak{F} = (T_1,\ldots,T_r)$, then $(R',\mathfrak{F}',\Phi)$ is a bunched ring with $\mathfrak{F}' = (T_1,\ldots,T_r,T_{r+1})$. \end{construction} \begin{proof} The $\KK$-algebra $R'$ is normal and, by~\cite[Thm.~1.4]{Be}, factorially $K$-graded. Obviously, the $K$-grading is almost free in the sense of~\cite[Def.~3.2.1.1]{ArDeHaLa}. Moreover, $(R,\mathfrak{F})$ and $(R',\mathfrak{F}')$ have the same sets of generator weights in the common grading group $K$ and the collection of projected $\mathfrak{F}'$-faces equals the collection of projected $\mathfrak{F}$-faces. We conclude that $\Phi$ is a true $\mathfrak{F}'$-bunch in the sense of~\cite[Def.~3.2.1.1]{ArDeHaLa} and thus $(R',\mathfrak{F}',\Phi)$ is a bunched ring. \end{proof} The word ``free'' in Construction~\ref{const:duplicate} indicates that the variable $T_r$ does not occur in the relations $g_j$. In the above setting, we say that $R$ is a complete intersection, for short c.i., if $R$ is of dimension $r-s$. Here are the basic features of the procedure.
\begin{proposition} \label{prop:duplicate} Let $(R',\mathfrak{F}',\Phi)$ arise from the bunched ring $(R,\mathfrak{F},\Phi)$ via Construction~\ref{const:duplicate}. Set $X':=X(R',\mathfrak{F}',\Phi)$ and $X:=X(R,\mathfrak{F},\Phi)$. \begin{enumerate} \item We have $\dim(X') = \dim(X)+1$. \item The cones of semiample divisor classes satisfy $\SAmple(X') = \SAmple(X)$. \item The variety $X'$ is smooth if and only if $X$ is smooth. \item The ring $R'$ is a c.i. if and only if $R$ is a c.i. \item If $R$ is a c.i., $\deg(T_r)$ semiample and $X$ Fano, then $X'$ is Fano. \end{enumerate} \end{proposition} \begin{proof} By construction, $\dim(R') = \dim(R)+1$ holds. Since $R$ and $R'$ have the same grading group~$K$, we obtain~(i). Moreover, $R$ and $R'$ have the same defining relations $g_j$, hence we have~(iv). According to~\cite[Prop.~3.3.2.9]{ArDeHaLa}, the semiample cone is the intersection of all elements of $\Phi$ and thus~(ii) holds. To obtain the third assertion, we show first that $\wh{X}'$ is smooth if and only if $\wh{X}$ is smooth. For every relevant $\mathfrak{F}$-face $\gamma_0 \preceq \QQ_{\ge 0}^r$ consider $$ \gamma_0' \ := \ \gamma_0 + \cone(e_{r+1}), \qquad \gamma_0'' \ := \ \cone(e_{i}; \; 1 \le i < r, \ e_i \in \gamma_0 ) + \cone(e_{r+1}). $$ Then $\gamma_0,\gamma_0',\gamma_0'' \preceq \QQ_{\ge 0}^{r+1}$ are relevant $\mathfrak{F}'$-faces and, in fact, all relevant $\mathfrak{F}'$-faces are of this form. Since the variables $T_r$ and $T_{r+1}$ do not appear in the relations $g_j$, we see that a stratum $\ol{X}(\gamma_0)$ is smooth if and only if the strata $\ol{X}'(\gamma_0)$, $\ol{X}'(\gamma_0')$ and $\ol{X}'(\gamma_0'')$ are smooth. Now~\cite[Cor.~3.3.1.11]{ArDeHaLa} gives~(iii). Finally, we show~(v). As we have complete intersection Cox rings, \cite[Prop.~3.3.3.2]{ArDeHaLa} applies and we obtain $$ -\mathcal{K}_{X'} \ = \ \sum_{i=1}^{r+1} \deg(T_i) - \sum_{j=1}^s \deg(g_j) \ = \ -\mathcal{K}_{X} + \deg(T_{r+1}).
$$ Since $X$ and $X'$ share the same ample cone and $\deg(T_{r+1}) = \deg(T_r)$ is semiample, we conclude that ampleness of $-\mathcal{K}_{X}$ implies ampleness of $-\mathcal{K}_{X'}$. \end{proof} We interpret the duplication of free weights in terms of birational geometry: it turns out to be a composition of a Mori fiber space, a series of flips and a birational divisorial contraction, where both contractions are elementary; see~\cite{Ca} for a detailed study of the latter type of maps in the context of general smooth Fano 4-folds. \begin{proposition} \label{prop:duplicate-geom} Let $(R',\mathfrak{F}',\Phi)$ arise from the bunched ring $(R,\mathfrak{F},\Phi)$ via Construction~\ref{const:duplicate}. Set $X':=X(R',\mathfrak{F}',\Phi)$ and $X:=X(R,\mathfrak{F},\Phi)$. Assume that $X$ is $\QQ$-factorial. Then there is a sequence $$ X \ \longleftarrow \ \widetilde{X}_1 \ \dashrightarrow \ \ldots \ \dashrightarrow \ \widetilde{X}_t \ \longrightarrow \ X', $$ where $\widetilde{X}_1 \to X$ is a Mori fiber space with fibers $\PP_1$, every $\widetilde{X}_i \dashrightarrow \widetilde{X}_{i+1}$ is a flip and $\widetilde{X}_t \to X'$ is the contraction of a prime divisor. If $\deg(T_r) \in K$ is Cartier, then $\widetilde{X}_1 \to X$ is the $\PP_1$-bundle associated with the divisor on $X$ corresponding to $T_r$. \end{proposition} \begin{proof} In order to define $\widetilde{X}_1$, we consider the canonical toric embedding $X \subseteq Z$ in the sense of~\cite[Constr.~3.2.5.3]{ArDeHaLa}. Let $\Sigma$ be the fan of $Z$ and $P = [v_1,\ldots,v_r]$ be the matrix having the primitive generators $v_i \in \ZZ^{n}$ of the rays of $\Sigma$ as its columns. Define a further matrix $$ \widetilde{P} \ := \ \left[ \begin{array}{cccccc} v_1 & \ldots & v_{r-1} & v_r & 0 & 0 \\ 0 & \ldots & 0 & -1 & 1 & -1 \\ \end{array} \right].
$$ We denote the columns of $\widetilde{P}$ by $\widetilde{v}_1, \ldots, \widetilde{v}_r, \widetilde{v}_+,\widetilde{v}_- \in \ZZ^{n+1}$, write $\varrho_+$, $\varrho_-$ for the rays through $\widetilde{v}_+$, $\widetilde{v}_-$ and define a fan $$ \widetilde{\Sigma}_1 \ := \ \{ \widetilde{\sigma} + \varrho_+, \, \widetilde{\sigma} + \varrho_-, \, \widetilde{\sigma}; \; \sigma \in \Sigma \}, \qquad\qquad \widetilde{\sigma} \ := \ \cone(\widetilde{v}_{i}; v_i \in \sigma). $$ The projection $\ZZ^{n+1} \to \ZZ^{n}$ is a map of fans $\widetilde{\Sigma}_1 \to \Sigma$. The associated toric morphism $\widetilde{Z}_1 \to Z$ has fibers $\PP_1$. If the toric divisor $D_r$ corresponding to the ray through $v_r$ is Cartier, then $\widetilde{Z}_1 \to Z$ is the $\PP_1$-bundle associated with $D_r$. We define $\widetilde{X}_1 \subseteq \widetilde{Z}_1$ to be the preimage of $X \subseteq Z$. Then $\widetilde{X}_1 \to X$ has fibers $\PP_1$. If $\deg(T_r)$ is Cartier, then so is $D_r$ and hence $\widetilde{X}_1 \to X$ inherits the $\PP_1$-bundle structure. Now we determine the Cox ring of the variety~$\widetilde{X}_1$. For this, observe that the projection $\ZZ^{r+2} \to \ZZ^{r}$ defines a lift of $\widetilde{Z}_1 \to Z$ to the toric characteristic spaces and thus leads to the commutative diagram $$ \xymatrix{ {\widetilde{\pi}^\sharp(\widetilde{X}_1)} \ar@{}[r]|\subseteq \ar[d]_{\widetilde{\pi}} & {\widetilde{W}_1} \ar[r] \ar[d]_{\widetilde{\pi}} & W \ar[d]^{\pi} & {\pi^\sharp(X)} \ar@{}[l]|\supseteq \ar[d]^{\pi} \\ {\widetilde{X}_1} \ar@{}[r]|\subseteq & {\widetilde{Z}_1} \ar[r] & Z & X \ar@{}[l]|\supseteq } $$ where $\widetilde{\pi}^\sharp(\widetilde{X}_1)$ and $\pi^\sharp(X)$ denote the proper transforms with respect to the downwards toric morphisms. 
Pulling back the defining equations of $\pi^\sharp(X) \subseteq W$, we see that $\widetilde{\pi}^\sharp(\widetilde{X}_1) \subseteq \widetilde{W}_1$ has coordinate algebra $\widetilde{R} :=R[S^+,S^-]$ graded by $\widetilde{K} := K\times \ZZ$ via $$ \deg(T_i) := (w_i,0), \qquad w^+ := \deg(S^+) := (w_r,1), \qquad w^- := \deg(S^-) := (0,1), $$ where $w_i := \deg(T_i)\in K$. The $\KK$-algebra $\widetilde{R}$ is normal and, by~\cite[Thm.~1.4]{Be}, factorially $\widetilde{K}$-graded. Moreover the $\widetilde{K}$-grading is almost free, as the $K$-grading of $R$ has this property and $\widetilde{\mathfrak{F}}=(T_1, \ldots, T_r, S^+, S^-)$ is a system of pairwise non-associated $\widetilde{K}$-prime generators. We conclude that $\widetilde{R}$ is the Cox ring of $\widetilde{X}_1$. Next we look for the defining bunch of cones for $\widetilde{X}_1$. Observe that $K$ sits inside $\widetilde{K}$ as $K \times \{0\}$. With $\theta := \SAmple(X)\times\{0\}$ we obtain a GIT-cone $\theta_1 := \cone(\theta,w^+) \cap \cone(\theta,w^-)$ of the $\widetilde{K}$-graded ring $\widetilde{R}$. The associated bunch $\widetilde{\Phi}_1$ consists of all cones of the form $$ \widetilde{\tau} + \cone(w^+), \qquad \widetilde{\tau} + \cone(w^-), \qquad \widetilde{\tau} + \cone(w^+, w^-), $$ where $\widetilde{\tau} =\tau \times \{0\}$, $\tau \in \Phi$. Since $\Phi$ is a true bunch, so is $\widetilde{\Phi}_1$. Together we obtain a bunched ring $(\widetilde{R}, \widetilde{\mathfrak{F}}, \widetilde{\Phi}_1)$. By construction, the fan corresponding to $\widetilde{\Phi}_1$ via Gale duality is $\widetilde{\Sigma}_1$. We conclude that $\widetilde{X}_1$ is the variety associated with $(\widetilde{R}, \widetilde{\mathfrak{F}}, \widetilde{\Phi}_1)$ and $\widetilde{X}_1 \subseteq \widetilde{Z}_1$ is the canonical toric embedding. Observe that $\widetilde{X}_1 \to X$ corresponds to the passage from the GIT-cone $\theta_1$ to the facet $\theta$. In particular, we see that $\widetilde{X}_1 \to X$ is a Mori fiber space.
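Before turning to the flips, a quick toy illustration of the construction so far (a toric example, not among the varieties classified here, assuming $X = \PP_{n-1}$ with $R = \KK[T_1,\ldots,T_n]$ and all $w_i = 1$): duplicating the free weight $\deg(T_n)$ yields $X' = \PP_n$, and the GIT-fan of $\widetilde{R} = R[S^+,S^-]$ has exactly two full-dimensional cones.

```latex
% Weights of R[S^+,S^-] in the grading group Z x Z:
\[
\deg(T_i) \ = \ (1,0), \qquad w^+ \ = \ (1,1), \qquad w^- \ = \ (0,1),
\]
% giving the two full-dimensional GIT-cones
\[
\theta_1 \ = \ \cone((1,0),(1,1)), \qquad \theta_2 \ = \ \cone((1,1),(0,1)).
\]
% Here t = 1, no flips occur, and the sequence of the proposition reads
\[
\PP_{n-1}
\ \longleftarrow \
\widetilde{X}_1 \ = \ \PP\bigl(\mathcal{O}_{\PP_{n-1}} \oplus \mathcal{O}_{\PP_{n-1}}(1)\bigr)
\ \longrightarrow \
X' \ = \ \PP_n,
\]
% where the second arrow contracts the divisor of S^-, realizing
% the middle variety as the blow-up of P_n at a point.
```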
To obtain the flips and the final divisorial contraction, we consider the full GIT-fan. \begin{center} \begin{tikzpicture}[scale=0.5] \path[fill=gray!20!] (1.8,0)--(6,0)--(6.5,6.5)--(1.8,0); \path[fill=gray!60!] (1.8,0)--(6,0)--(57/16, 39/16)--(1.8,0); \path[fill=gray!60!] (1713/466, 1209/466)--(207/34, 39/34)--(105/17, 39/17)--(372/97, 273/97)--(1371/370, 195/74); \coordinate[] (w1) at (0,0); \node at (-0.5,0.3) {{\tiny{$w_r$}}}; \fill (w1) circle (3pt); \coordinate[] (w2) at (1.8,0); \fill (w2) circle (3pt); \coordinate[] (w3) at (6,0); \fill (w3) circle (3pt); \coordinate[] (w4) at (8,0); \fill (w4) circle (3pt); \coordinate[] (w5) at (16.5,0); \fill (w5) circle (3pt); \coordinate (wp) at (3,3); \node at (2.5,3.3) {\tiny{$w^+$}}; \fill (wp) circle (3pt); \coordinate(wm) at (6.5,6.5); \node at (6.5,7) {\tiny{$w^-$}}; \fill (wm) circle (3pt); \coordinate(w0) at (-3.5,0); \fill (w0) circle (3pt); \draw (w5) -- (wm) -- (wp) -- (w1); \draw (wp) -- (w1); \draw (wp) -- (w2); \draw (wp) -- (w3); \draw (wp) -- (w4); \draw (wp) -- (w5); \draw (wm) -- (w1); \draw (wm) -- (w2); \draw (wm) -- (w3); \draw (wm) -- (w4); \draw (wm) -- (w5); \draw (w0) -- (w5); \draw (w0) -- (wp); \draw (w0) -- (wm); \draw[decorate, ultra thick] (w2) -- (w3); \node at (3.9,-0.45) {\tiny{{$\theta$}}}; \node at (3.9,0.7) {{\tiny{$\theta_1$}}}; \node at (5.7,2) {{\tiny{$\theta_t$}}}; \node at (5.5,3.5) {{\tiny{$\theta_{t+1}$}}}; \path[densely dotted, ->] (3.85,1.05) edge [bend left] (4.85, 1.5); \path[densely dotted, ->] (4.85, 1.5) edge [bend left] (5.4, 2.05); \end{tikzpicture} \end{center} The relevant GIT-cones are those inside $\theta + \cone(w^-)$. There we have the facet $\theta$ and the semiample cone $\theta_1$ of $\widetilde{X}_1$. Proceeding in the direction of $w^-$, we come across other full-dimensional GIT-cones, say $\theta_2,\ldots,\theta_{t+1}$.
This gives a sequence of flips $\widetilde{X}_1\dashrightarrow \ldots\dashrightarrow\widetilde{X}_{t}$, where $\widetilde{X}_i$ is the variety with semiample cone $\theta_i$. Passing from $\theta_t$ to $\theta_{t+1}$ gives a morphism $\widetilde{X}_{t} \to \widetilde{X}_{t+1}$ contracting the prime divisor corresponding to the variable $S^-$ of the Cox ring $\widetilde{R}$ of $\widetilde{X}_{t}$. Note that $\widetilde{X}_{t+1}$ is $\QQ$-factorial, as it is the GIT-quotient associated with a full-dimensional chamber. We show $\widetilde{X}_{t+1}\cong X'$. Recall that $X'$ arises from $X$ by duplicating the weight $\deg(T_r)$. We have $\Cl(X') = K$ and the Cox ring $R'=R[T_{r+1}]$ of $X'$ is $K$-graded via $\deg(T_i)=w_i$ for $i=1,\ldots,r$ and $\deg(T_{r+1})=w_r$. In particular, the fan of the canonical toric ambient variety of $X'$ has as its primitive ray generators the columns of the matrix $$ P' \ = \ \left[ \begin{array}{ccccc} v_1 & \ldots & v_{r-1} & v_r & 0 \\ 0 & \ldots & 0 & -1 & 1 \\ \end{array} \right]. $$ On the other hand, the canonical toric ambient variety $\widetilde{Z}_{t+1}$ of $\widetilde{X}_{t+1}$ is obtained from $\widetilde{Z}_{t}$ by contracting the divisor corresponding to the ray $\varrho_-$. Hence $P'$ is as well the primitive generator matrix for the fan of $\widetilde{Z}_{t+1}$. We conclude $$ \Cl(\widetilde{X}_{t+1}) \ = \ \ZZ^{r+1} /\text{ im}((P')^*) \ = \ \Cl(X') \ = \ K. $$ Similarly, we compare the Cox rings of $\widetilde{X}_{t+1}$ and $X'$. Let $\widetilde{Z}_t$ denote the canonical toric ambient variety of $\widetilde{X}_t$. 
Then the projection $\ZZ^{r+2} \to \ZZ^{r+1}$ defines a lift of $\widetilde{Z}_t \to \widetilde{Z}_{t+1}$ to the toric characteristic spaces and thus leads to the commutative diagram $$ \xymatrix{ {\widetilde{\pi}^\sharp(\widetilde{X}_t)} \ar@{}[r]|\subseteq \ar[d]_{\widetilde{\pi}} & {\widetilde{W}_t} \ar[r] \ar[d]_{\widetilde{\pi}} & {\widetilde{W}_{t+1}} \ar[d]^{\pi} & {\pi^\sharp(\widetilde{X}_{t+1})} \ar@{}[l]|\supseteq \ar[d]^{\pi} \\ {\widetilde{X}_t} \ar@{}[r]|\subseteq & {\widetilde{Z}_t} \ar[r] & {\widetilde{Z}_{t+1}} & {\widetilde{X}_{t+1}} \ar@{}[l]|\supseteq } $$ where the proper transforms $\widetilde{\pi}^\sharp(\widetilde{X}_t)$ and $\pi^\sharp(\widetilde{X}_{t+1})$ are the characteristic spaces of $\widetilde{X}_t$ and $\widetilde{X}_{t+1}$ respectively and the first is mapped onto the second one. We conclude that the Cox ring of $\widetilde{X}_{t+1}$ is $R[S^+]$ graded by $\deg(T_i)=w_i$ for $i = 1, \ldots, r$ and $\deg(S^+)=w_{r}$ and thus is isomorphic to the Cox ring $R'$ of $X'$. The final step is to compare the defining bunches of cones $\widetilde{\Phi}_{t+1}$ of $\widetilde{X}_{t+1}$ and $\Phi'$ of $X'$. For this, observe that the fan of the toric ambient variety $\widetilde{Z}_{t+1}$ contains the cones $\widetilde{\sigma} + \varrho_+$, where $\sigma \in \Sigma$. Thus, every $\tau \in \Phi'$ belongs to $\widetilde{\Phi}_{t+1}$. We conclude $$ \SAmple(\widetilde{X}_{t+1}) \ \subseteq \ \SAmple(X'). $$ Since $\widetilde{X}_{t+1}$ is $\QQ$-factorial, its semiample cone is of full dimension. Both cones belong to the GIT-fan, hence we see that the above inclusion is in fact an equality. Thus $\widetilde{\Phi}_{t+1}$ equals $\Phi'$. \end{proof} We return to the Fano varieties of Theorem~\ref{thm:main2}. We first list the (finitely many) examples which do not allow duplication of a free weight and then present the starting models for constructing the Fano varieties via duplication of weights. 
\begin{proposition} \label{prop:noisodivs} The varieties of Theorem~\ref{thm:main2} containing no divisors with infinite general isotropy are precisely the following ones. \begin{center} {\small \setlength{\tabcolsep}{4pt} \begin{longtable}{ccccc} No. & \small{$\mathcal{R}(X)$} & \small{$[w_1,\ldots, w_r]$} & \small{$-\mathcal{K}_X$} & \small{$\dim(X)$} \\ \toprule 1 & $ \frac {\KK[T_1, \ldots , T_7]} {\langle T_{1}T_{2}T_{3}^2+T_{4}T_{5}+T_6T_7 \rangle} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{ccccccc} 0 & 0 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 0 & 1 & 1 &1 & 1 \end{array} \!\!\right] $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{c} 3 \\ 4 \end{array} \!\!\right] $ } & \small{$4$} \\ \midrule 2 & $ \frac {\KK[T_1, \ldots , T_7]} {\langle T_{1}T_{2}T_{3}+T_{4}T_{5}+T_6T_7 \rangle} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{ccccccc} 0 & 0 & 1 & 1 & 0 & 1 & 0 \\ 1 & 1 & 0 & 1 & 1 & 1 & 1 \end{array} \!\!\right] $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{c} 2 \\ 4 \end{array} \!\!\right] $ } & \small{$4$} \\ \midrule 3 & $ \frac{\KK[T_1, \ldots , T_6]} {\langle T_{1}T_{2}T_{3}^2+T_{4}T_{5}+T_{6}^2 \rangle} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{cccccc} 0 & 0 & 1 & 1 & 1 & 1 \\ 1 & 1 & 0 & 1 & 1 & 1 \end{array} \!\!\right] $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{c} 2 \\ 3 \end{array} \!\!\right] $ } & \small{$3$} \\ \midrule 4.A & $ \frac {\KK[T_1, \ldots , T_6]} {\langle T_{1}T_{2}+T_{3}T_{4}+T_5T_{6} \rangle} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{cccccc} 0 & 1 & 0 & 1 & 0 & 1 \\ 1 & 0 & 1 & 0 & 1 & 0 \end{array} \!\!\right] $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! 
\begin{array}{c} 2 \\ 2 \end{array} \!\!\right] $ } & \small{$3$} \\ \midrule 4.B & $ \frac {\KK[T_1, \ldots , T_6]} {\langle T_{1}T_{2}^2+T_{3}T_{4}+T_5T_{6} \rangle} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{cccccc} 0 & 1 & 1 & 1 & 1 & 1 \\ 1 & 0 & 1 & 0 & 1 & 0 \end{array} \!\!\right] $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{c} 3 \\ 2 \end{array} \!\!\right] $ } & \small{$3$} \\ \midrule 4.C & $ \frac {\KK[T_1, \ldots , T_6]} {\langle T_{1}T_{2}^2+T_{3}T_{4}^2+T_5T_{6}^2 \rangle} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{cccccc} 0 & 1 & 0 & 1 & 0 & 1 \\ 1 & 0 & 1 & 0 & 1 & 0 \end{array} \!\!\right] $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{c} 1 \\ 2 \end{array} \!\!\right] $ } & \small{$3$} \\ \midrule 13 & $ \begin{array}{c} \frac {\KK[T_1, \ldots , T_8]} { \left\langle \begin{array}{l} \scriptstyle T_{1}T_{2}+T_{3}T_{4}+T_5T_{6}, \\[-3pt] \scriptstyle \lambda T_{3}T_{4}+T_{5}T_{6}+T_{7}T_{8} \end{array} \right\rangle } \\ \scriptstyle \lambda \in \KK^* \setminus \{1\} \end{array} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{cccccccc} 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 \end{array} \!\!\right] $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{c} 2 \\ 2 \end{array} \!\!\right] $ } & \small{$4$} \\ \bottomrule \end{longtable} } \end{center} \end{proposition} \begin{proof} For a $T$-variety $X = X(A,P,u)$, the divisors having infinite general $T$-isotropy are precisely the vanishing sets of the variable $S_k$. Thus we just have to pick out the cases with $m=0$ from Theorem~\ref{thm:main2}. \end{proof} \begin{theorem} \label{thm:duplicate} Let $X$ be a smooth rational Fano variety with a torus action of complexity one and Picard number two. 
If there is a prime divisor with infinite general isotropy on $X$, then $X$ arises via iterated duplication of the free weight~$w_r$ from one of the following varieties~$Y$. \begin{center} {\small \setlength{\tabcolsep}{4pt} \begin{longtable}{ccccc} No. & \small{$\mathcal{R}(Y)$} & \small{$[w_1,\ldots, w_r]$} & \small{$u$} & \small{$\dim(Y)$} \\ \toprule 4.A & $ \frac {\KK[T_1, \ldots , T_6, S_1]} {\langle T_{1}T_{2}+T_{3}T_{4}+T_5T_{6} \rangle} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{cccccc|c} 0 & 1 & 0 & 1 & 0 & 1 & 0 \\ 1 & 0 & 1 & 0 & 1 & 0 & 1 \end{array} \!\!\right] $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{c} 1 \\ 1 \end{array} \!\!\right] $ } & \small{$4$} \\ \midrule 4.A & $ \frac {\KK[T_1, \ldots , T_6, S_1,S_2]} {\langle T_{1}T_{2}+T_{3}T_{4}+T_5T_{6} \rangle} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{cccccc|cc} 0 & 1 & 0 & 1 & 0 & 1 & -1 & 0 \\ 1 & 0 & 1 & 0 & 1 & 0 & 1 & 1 \end{array} \!\!\right] $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{c} 1 \\ 1 \end{array} \!\!\right] $ } & \small{$5$} \\ \midrule 4.B & $ \frac {\KK[T_1, \ldots , T_6, S_1]} {\langle T_{1}T_{2}^2+T_{3}T_{4}+T_5T_{6} \rangle} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{cccccc|c} 0 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 0 & 1 & 0 & 1 & 0 & 1 \end{array} \!\!\right] $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{c} 2 \\ 1 \end{array} \!\!\right] $ } & \small{$4$} \\ \midrule 4.C & $ \frac {\KK[T_1, \ldots , T_6, S_1]} {\langle T_{1}T_{2}^2+T_{3}T_{4}^2+T_5T_{6}^2 \rangle} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{cccccc|c} 0 & 1 & 0 & 1 & 0 & 1 & 0 \\ 1 & 0 & 1 & 0 & 1 & 0 & 1 \end{array} \!\!\right] $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! 
\begin{array}{c} 1 \\ 1 \end{array} \!\!\right] $ } & \small{$4$} \\ \midrule 5 & $ \frac {\KK[T_1, \ldots , T_6, S_1]} {\langle T_{1}T_{2}+T_{3}^2T_{4}+T_5^2T_{6} \rangle} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \begin{array}{c} \left[\!\! \begin{array}{cccccc|c} 0 & 2a+1 & a & 1 & a & 1 & 1 \\ 1 & 1 & 1 & 0 & 1 & 0 & 0 \end{array} \!\!\right] \\[1em] a \ge 0 \end{array} $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{c} 2a+2 \\ 1 \end{array} \!\!\right] $ } & \small{$4$} \\ \midrule 6 & $ \frac {\KK[T_1, \ldots , T_6, S_1]} {\langle T_{1}T_{2}+T_{3}T_{4}+T_5^2T_{6} \rangle} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \begin{array}{c} \left[\!\! \begin{array}{cccccc|c} 0 & 2c+1 & a & b & c & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 & 0 & 0 \end{array} \!\!\right] \\[1em] a, b, c \ge 0, \quad a<b,\\ a+b=2c+1 \end{array} $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{c} 2c+2 \\ 1 \end{array} \!\!\right] $ } & \small{$4$} \\ \midrule 7 & $ \frac {\KK[T_1, \ldots , T_6, S_1]} {\langle T_{1}T_{2}+T_{3}T_{4}+T_5T_{6} \rangle} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{cccccc|c} 0 & 0 & 0 & 0 & -1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 & 0 \end{array} \!\!\right] $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{c} 1 \\ 2 \end{array} \!\!\right] $ } & \small{$4$} \\ \midrule 8 & $ \frac {\KK[T_1, \ldots , T_6, S_1,S_2]} {\langle T_{1}T_{2}+T_{3}T_{4}+T_5T_{6} \rangle} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \begin{array}{c} \left[\!\! \begin{array}{cccccc|cc} 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 & 0 & a \end{array} \!\!\right] \\[1em] a\in\{1,2,3\} \end{array} $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{c} 1 \\ a+1 \end{array} \!\!\right] $ } & \small{$5$} \\ \midrule 8 & $ \frac {\KK[T_1, \ldots , T_6, S_1,S_2,S_3]} {\langle T_{1}T_{2}+T_{3}T_{4}+T_5T_{6} \rangle} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \begin{array}{c} \left[\!\! 
\begin{array}{cccccc|ccc} 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 & 0 & a-1 & a \end{array} \!\!\right] \\[1em] a\in\{1,2\} \end{array} $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{c} 1 \\ a+1 \end{array} \!\!\right] $ } & \small{$6$} \\ \midrule 8 & $ \frac {\KK[T_1, \ldots , T_6, S_1,\ldots,S_4]} {\langle T_{1}T_{2}+T_{3}T_{4}+T_5T_{6} \rangle} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{cccccc|cccc} 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 1 \end{array} \!\!\right] $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{c} 1 \\ 2 \end{array} \!\!\right] $ } & \small{$7$} \\ \midrule 9 & $ \frac {\KK[T_1, \ldots , T_6, S_1, S_2]} {\langle T_{1}T_{2}+T_{3}T_{4}+T_5T_{6} \rangle} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \begin{array}{c} \left[\!\! \begin{array}{cccc|cc} 0 & a_2 & \ldots & a_6 & 1 & 1 \\ 1 & 1 & \ldots & 1 & 0 & 0 \end{array} \!\!\right] \\[1em] 0 \le a_3 \le a_5 \le a_6 \le a_4 \le a_2,\\ a_2=a_3+a_4=a_5+a_6 \end{array} $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{c} a_2+1 \\ 1 \end{array} \!\!\right] $ } & \small{$5$} \\ \midrule 10 & $ \frac {\KK[T_1, \ldots , T_5, S_1]} {\langle T_{1}T_{2}+T_{3}T_{4}+T_{5}^2 \rangle} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{ccccc|c} 1 & 1 & 1 & 1 & 1 & 0 \\ -1 & 1 & 0 & 0 & 0 & 1 \end{array} \!\!\right] $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{c} 2 \\ 1 \end{array} \!\!\right] $ } & \small{$3$} \\ \midrule 11 & $ \frac {\KK[T_1, \ldots , T_5, S_1,S_2]} {\langle T_{1}T_{2}+T_{3}T_{4}+T_{5}^2 \rangle} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \begin{array}{c} \left[\!\! \begin{array}{ccccc|cc} 1 & 1 & 1 & 1 & 1 & 0 & a \\ 0 & 0 & 0 & 0 & 0 & 1 & 1 \end{array} \!\!\right] \\[1em] a\in\{1,2\} \end{array} $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! 
\begin{array}{c} a+1 \\ 1 \end{array} \!\!\right] $ } & \small{$4$} \\ \midrule 11 & $ \frac {\KK[T_1, \ldots , T_5, S_1,S_2,S_3]} {\langle T_{1}T_{2}+T_{3}T_{4}+T_{5}^2 \rangle} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{ccccc|ccc} 1 & 1 & 1 & 1 & 1 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 \end{array} \!\!\right] $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{c} 2 \\ 1 \end{array} \!\!\right] $ } & \small{$5$} \\ \midrule 12 & $ \frac{\KK[T_1, \ldots , T_5, S_1,S_2]} {\langle T_{1}T_{2}+T_{3}T_{4}+T_{5}^2 \rangle} $ & \tiny{ \setlength{\arraycolsep}{2pt} $ \begin{array}{c} \left[\!\! \begin{array}{ccccc|cc} 1 & 1 & 1 & 1 & 1 & 0 & 0 \\ 0 & 2c & a & b & c & 1 & 1 \end{array} \!\!\right] \\[1em] 0 \le a \le c \le b, \ a+b=2c \end{array} $ } & \tiny{ \setlength{\arraycolsep}{2pt} $ \left[\!\! \begin{array}{c} 1 \\ 2c+1 \end{array} \!\!\right] $ } & \small{$4$} \\ \bottomrule \end{longtable} } \end{center} For Nos.~4, 8 and~11, the variety $Y$ is Fano and any iterated duplication of $w_r$ produces a Fano variety $X$. For the remaining cases, the following table tells which~$Y$ are Fano and gives the characterizing condition when an iterated duplication of $w_r$ produces a Fano variety $X$: \begin{longtable}{cccccccc} No. & 5 & 6 & 7 & 9 & 10 & 12 \\ \toprule $Y$ Fano & $a=0$ & $c=0$ & $\checkmark$ & $a_2=0$ & $\checkmark$ & $c=0$ \\ \midrule $X$ Fano & $m > 2a$ & $m > 3c+1$ & $m \le 3$ & $m > 2a_2$ & $m \le 2$ & $m > 3c$ \\ \bottomrule \end{longtable} \end{theorem} \begin{proof} A $T$-variety $X = X(A,P,u)$ has a divisor with infinite general $T$-isotropy if and only if $m\ge1$ holds. In the cases 4.A, 4.B, 4.C, 5, 6, 7, 9, 10 and 12 we directly infer from Theorem~\ref{thm:main2} that the examples with higher $m$ arise from those listed in the table above via iterated duplication of $w_r$. We still have to consider Nos.~8 and 11. 
If $X$ is a variety of type~8, then the condition for $X$ to be a Fano variety is $$ 4 + a_2 + \ldots + a_m > ma_m, $$ where $a_m=1,2,3$ and $0\le a_2 \le \ldots \le a_m$. This is satisfied if and only if one of the following conditions holds: \begin{enumerate} \item $a_2 = \ldots = a_m \in \{1,2,3\}$. \item $a_2+1 = a_3 = \ldots = a_m \in \{1,2\}$, with $m\ge3$. \item $a_2=a_3=0$ and $a_4 = \ldots = a_m = 1$, with $m\ge4$. \end{enumerate} Similarly, for No.~11 the Fano condition in the table of Theorem~\ref{thm:main2} is equivalent to the fulfillment of one of the following: \begin{enumerate} \item $a_2 = \ldots = a_m \in \{1,2\}$. \item $a_2=0$ and $a_3 = \ldots = a_m = 1$, with $m\ge3$. \end{enumerate} In both cases this explicit characterization makes clear that we are in the setting of the duplication of a free weight. \end{proof} \begin{remark} Consider iterated duplication of $w_r$ for a variety $X=X(A,P,u)$ as in Theorem~\ref{thm:duplicate}. Recall that the effective cone of $X$ is decomposed as $\tp \cup \tx \cup \tm$, where $\tx =\Ample(X)$. Lemma \ref{lem:tau}~(i) says $w_r \not\in \tx$ and thus we have a unique $\kappa \in \{ \tp, \tm\}$ with $w_r \notin \kappa$. Then the number of flips per duplication step equals $$ |\{\cone(w_{ij}), \cone(w_k); \ w_{ij},w_k \in \kappa\}|-1. $$ In particular, for Nos.~4.A, 4.B, 4.C, 8, 11, 9 with $a_i =0$, 12 with $b=0$ the duplication steps require no flips. \end{remark} \begin{remark} \label{rem:toric-not} For toric Fano varieties, there is no statement like Corollary~\ref{cor:duplicate}. Recall from~\cite{BeHa:2004} that all smooth projective toric varieties $Z$ with $\Cl(Z)=\ZZ^2$ admit a description via the following data: \begin{itemize} \item weight vectors $w_1 := (1,0)$ and $w_i := (b_i,1)$ with $0=b_n < b_{n-1} < \ldots < b_2$, \item multiplicities $\mu_i := \mu(w_i)\ge1$, where $\mu_1\ge2$ and $\mu_2+\ldots+\mu_n\ge2$. \end{itemize} \begin{center} \begin{tikzpicture}[scale=0.6] \path[fill=gray!60!]
(0,0)--(7,0)--(7,1+1/6)--(0,0); \coordinate[] (w1) at (1,0); \fill (w1) circle (3pt); \node[below] at (w1) {\tiny{$(\mu_1)$}}; \coordinate[] (w2) at (6,1); \fill (w2) circle (3pt); \node[above] at (w2) {\tiny{$(\mu_2)$}}; \coordinate[] (w3) at (4,1); \fill (w3) circle (3pt); \node[above] at (w3) {\tiny{$(\mu_3)$}}; \coordinate[] (w4) at (3,1); \fill (w4) circle (3pt); \node[above] at (w4) {\tiny{$(\mu_4)$}}; \coordinate[] (w5) at (0,1); \node[left] at (w5) {\tiny{$(\mu_n)$}}; \fill (w5) circle (3pt); \draw (-1,0) -- (7,0); \draw (0,-1) -- (0,2); \draw (0,0) -- (7,1+1/6); \fill (1.1, 1) circle (1.25pt); \fill (1.5, 1) circle (1.25pt); \fill (1.9, 1) circle (1.25pt); \end{tikzpicture} \end{center} The variety $Z$ arises from the bunched polynomial ring $(R,\mathfrak{F},\Phi)$, where $R$ equals $\KK[S_{ij} ;\, 1 \le i \le n, 1 \le j \le \mu_i ]$ with the system of generators $\mathfrak{F}=(S_{11},\ldots,S_{n\mu_n})$ and the bunch $\Phi = \{ \cone(w_{1},w_{i}); i=2,\ldots,n \}.$ In this setting $Z$ is Fano if and only if $$ b_2(\mu_3+\ldots+\mu_n) < \mu_1 + \mu_3b_3 + \ldots + \mu_{n-1}b_{n-1}. $$ For any $n\in\ZZ_{\ge4}$ and $i=2,\ldots,n$ set $\mu_i := 1$ and $w_i := (n-i,1)$. Then, with $\mu_1 := 2$ we obtain a smooth (non-Fano) toric variety $Z_n'$ of Picard number two and dimension $n-1$. Moreover, for $\mu_1 := 1+(n-2)(n-1)/2$ we obtain a smooth toric Fano variety $Z_n$ of Picard number two that is obtained from $Z_n'$ via iterated duplication of $w_1$ but cannot be constructed from any lower-dimensional smooth variety this way. \end{remark} \section{Geometry of the Fano varieties} \label{sec:geomfanos} We take a closer look at the Fano varieties~$X$ listed in Theorem~\ref{thm:main2} and describe their possible divisorial contractions in detail, i.e., the morphisms $X \to Y$ arising from semiample divisors which either birationally contract a prime divisor or are Mori fiber spaces. The approach is via a suitable ambient toric variety.
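The numerical claims about $Z_n'$ and $Z_n$ at the end of Remark~\ref{rem:toric-not} follow from a direct evaluation of the Fano criterion stated there: with $\mu_2 = \ldots = \mu_n = 1$ and $b_i = n-i$ we have
\[
b_2(\mu_3+\ldots+\mu_n) \ = \ (n-2)^2,
\qquad
\mu_1 + \mu_3b_3 + \ldots + \mu_{n-1}b_{n-1} \ = \ \mu_1 + \frac{(n-3)(n-2)}{2},
\]
so that the variety is Fano if and only if
\[
\mu_1 \ > \ (n-2)^2 - \frac{(n-3)(n-2)}{2} \ = \ \frac{(n-1)(n-2)}{2}.
\]
For $n \ge 4$ the right hand side is at least $3$; hence $\mu_1 = 2$ yields the non-Fano varieties $Z_n'$, whereas $\mu_1 = 1+(n-2)(n-1)/2$ is the smallest multiplicity satisfying the criterion and yields the Fano varieties $Z_n$.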
The following Remark can be found, at least partially, for example in~\cite[Section~7.3]{CLS}. \begin{remark} \label{rem:torgeo} Let $Z$ be a smooth projective toric variety of Picard number~2, given by weight vectors $w_1 := (1,0)$ and $w_i := (b_i,1)$ with $0=b_n < b_{n-1} < \ldots < b_2$, and multiplicities $\mu_i := \mu(w_i)\ge1$, where $\mu_1\ge2$ and $\mu_2+\ldots+\mu_n\ge2$ as in Remark~\ref{rem:toric-not}. Then the toric variety $Z$ is a projectivized split vector bundle of rank~$r$ over a projective space $Y_Z := \PP_s$, where $s := \mu_1-1$ and $r := \mu_2 + \ldots + \mu_n-1$. More precisely, we have $$ Z \ \cong \ \PP \left( \bigoplus_{i=1}^{\mu_n} \mathcal{O}_{\PP_s} \oplus \bigoplus_{i=1}^{\mu_{n-1}} \mathcal{O}_{\PP_s}(b_{n-1}) \oplus \ldots \oplus \bigoplus_{i=1}^{\mu_{2}} \mathcal{O}_{\PP_s}(b_{2}) \right). $$ The bundle projection $Z \to Y_Z$ is the divisorial contraction associated to the divisor class $w_1 \in \ZZ^2 = \Cl(Z)$. If $n=2$ holds, then we have $Z\cong\PP_s \times \PP_{r}$. If $n=3$ and $\mu_3 = 1$ hold, then the class $w_3 \in \ZZ^2 = \Cl(Z)$ gives rise to a birational divisorial contraction onto a weighted projective space: $$ Z \ \to \ Z' \ := \ \PP( \underbrace{1,\ldots,1}_{\mu_1}, \underbrace{b_2,\ldots,b_2}_{\mu_2} ). $$ The exceptional divisor $E_Z \subseteq Z$ is isomorphic to $\PP_s \times \PP_{\mu_2 -1}$ and the center $C(Z') \subseteq Z'$ of the contraction is isomorphic to $\PP_{\mu_2 -1}$. In particular, for $\mu_2=1$, we have $E_Z \cong \PP_s$ and $C(Z')$ is a point. \end{remark} From the explicit description of the Cox ring of our Fano variety $X$, we obtain via Construction~\ref{constr:RAPu} a closed embedding $X \to Z$ into a toric variety~$Z$. As a byproduct of our classification, it turns out that, whenever~$X$ admits a divisorial contraction, then $X$ inherits all its divisorial contractions from~$Z$. 
Remark~\ref{rem:torgeo} together with the explicit equations for $X$ in $Z$ will then allow us to study the situation in detail. We now present the results. The cases are numbered according to the table of Theorem~\ref{thm:main2}. Moreover, we denote by $Q_3\subseteq\PP_4$ and $Q_4\subseteq\PP_5$ the three and four-dimensional smooth projective quadrics and we write $\PP(a_1^{\mu_1},\ldots,a_r^{\mu_r})$ for the weighted projective space, where the superscript $\mu_i$ indicates that the weight $a_i$ occurs $\mu_i$ times. \begin{vartype}{1} \label{rem::no1} The variety $X$ is of dimension four and admits two divisorial contractions, $Q_4 \leftarrow X \to \PP_1$. The morphism $X \to Q_4$ is birational with exceptional divisor isomorphic to $\PP_1\times\PP_1\times\PP_1$ and center isomorphic to $\PP_1\times\PP_1$. The morphism $X \to \PP_1$ is a Mori fiber space with general fiber isomorphic to $Q_3$ and singular fibers over $[0,1]$ and $[1,0]$ each isomorphic to the singular quadric $V(T_2T_3+T_4T_5) \subseteq \PP_4$. \end{vartype} \begin{vartype}{2} \label{rem::no2} The variety $X$ is of dimension four and admits two divisorial contractions, $Q_4 \leftarrow X \to \PP_3$. The morphism $X \to Q_4$ is birational with exceptional divisor isomorphic to a hypersurface of bidegree $(1,1)$ in $\PP_1\times\PP_3$ and center isomorphic to~$\PP_1$. The morphism $X \to \PP_3$ is a Mori fiber space with fibers isomorphic to $\PP_1$. \end{vartype} \begin{vartype}{3} \label{rem::no3} The variety $X$ is of dimension three and occurs as No.~2.29 in the Mori-Mukai classification~\cite{MoMu}. Moreover, $X$ admits two divisorial contractions, $Q_3 \leftarrow X \to \PP_1$. The morphism $X \to Q_3$ is birational with exceptional divisor isomorphic to $\PP_1\times \PP_1$ and center isomorphic to $\PP_1$. 
The morphism $X \to \PP_1$ is a Mori fiber space with general fiber isomorphic to $\PP_1 \times \PP_1$ and singular fibers over $[0,1]$ and $[1,0]$ each isomorphic to $V(T_1T_2+T_3^2) \subseteq \PP_3$. \end{vartype} \begin{vartype}{4A} \label{rem::no4A} \emph{Case~1:} we have $c= -1$. Then $X$ admits two divisorial contractions $Y \leftarrow X \to \PP_2$, where $Y := V(T_1T_2+T_3T_4+T_5T_6) \subseteq \PP_{m+4}$ is a terminal factorial Fano variety which is smooth if and only if $m=1$ holds. The morphism $X \to Y$ is birational with exceptional divisor isomorphic to a hypersurface of bidegree $(1,1)$ in $\PP_2\times\PP_{m+1}$ and center isomorphic to $\PP_{m+1}$. The morphism $X \to \PP_2$ is a Mori fiber space with fibers isomorphic to $\PP_{m+1}$. \medskip \noindent \emph{Case~2:} we have $c = 0$. Then $X$ is a hypersurface of bidegree $(1,1)$ in $\PP_2\times\PP_{m+2}$. Moreover, $X$ admits two Mori fiber spaces $\PP_{m+2} \leftarrow X \to \PP_2$. The Mori fiber space $X \to \PP_2$ has fibers isomorphic to $\PP_{m+1}$, whereas the Mori fiber space $X \to \PP_{m+2}$ has general fiber isomorphic to $\PP_{1}$ and special fibers over $V(T_1,T_2,T_3) \subseteq \PP_{m+2}$ isomorphic to $\PP_2$. For $m=0$, we have $\dim(X)=3$ and $X$ is the variety No.~2.32 in~\cite{MoMu}. \end{vartype} \begin{vartype}{4B} \label{rem::no4B} The variety $X$ admits two divisorial contractions $Y \leftarrow X \to \PP_2$, where $Y := V(T_1^2+T_2T_3+T_4T_5) \subseteq \PP_{m+4}$ is a terminal factorial Fano variety. The variety~$Y$ is smooth if and only if $m=0$ holds and in this case $X$ occurs as No.~2.31 in~\cite{MoMu}. The morphism $X \to Y$ is birational with exceptional divisor isomorphic to a hypersurface of bidegree $(1,1)$ in $\PP_2\times\PP_{m+1}$ and center isomorphic to $\PP_{m+1}$. The morphism $X \to \PP_2$ is a Mori fiber space with fibers isomorphic to $\PP_{m+1}$.
\end{vartype} \begin{vartype}{4C} \label{rem::no4C} The variety $X$ is a hypersurface of bidegree $(2,1)$ in $\PP_2\times\PP_{m+2}$; for $m=0$ we have $\dim(X)=3$ and $X$ is No.~2.24 in~\cite{MoMu}. Moreover, $X$ admits two Mori fiber spaces $\PP_{m+2} \leftarrow X \to \PP_2$. The morphism $X \to \PP_2$ has fibers isomorphic to $\PP_{m+1}$. To describe the fibers of $\varphi \colon X \to \PP_{m+2}$, set $Y_{i} := V_{\PP_{m+2}}(T_i)$, $Y_{ij} := V_{\PP_{m+2}}(T_i, T_j)$ and $Y_{123} := V_{\PP_{m+2}}(T_1, T_2, T_3)$. Then we have \begin{equation*} \varphi^{-1}(z) \ \cong \ \begin{cases} \PP_2 & \text{if } z\in Y_{123}, \\ \PP_1 & \text{if } z\in (Y_{12} \cup Y_{13} \cup Y_{23}) \setminus Y_{123}, \\ V_{\PP_2}(T_1T_2) & \text{if } z\in (Y_{1} \cup Y_{2} \cup Y_{3}) \setminus (Y_{12} \cup Y_{13} \cup Y_{23}), \\ \PP_1 & \text{otherwise}. \end{cases} \end{equation*} \end{vartype} \begin{vartype}{5} \label{rem::no5} The variety $X$ admits a Mori fiber space $\varphi \colon X \to \PP_{m+1}$, whose general fiber is isomorphic to $\PP_1\times\PP_1$. More precisely, with $Y_1 := V_{\PP_{m+1}}(T_1)$ and $Y_2 := V_{\PP_{m+1}}(T_2)$, we have \begin{equation*} \varphi^{-1}(z) \ \cong \ \begin{cases} V_{\PP_3}(T_1T_2) & \text{if } z\in Y_1\cap Y_2, \\ V_{\PP_3}(T_1T_2+T_3^2) & \text{if } z\in Y_1\setminus Y_2 \text{ or } z\in Y_2\setminus Y_1, \\ \PP_1\times\PP_1 & \text{otherwise}. \end{cases} \end{equation*} \end{vartype} \begin{vartype}{6} \label{rem::no6} The variety $X$ admits a Mori fiber space $X \to \PP_{m}$, with general fiber isomorphic to $Q_3$ and singular fibers over $V(T_1) \subseteq \PP_{m}$ each isomorphic to $V(T_1T_2+T_3T_4) \subseteq \PP_4$. 
\end{vartype} \begin{vartype}{7} \label{rem::no7} The variety $X$ admits a birational divisorial contraction $X \to \PP_{m+3}$ with exceptional divisor isomorphic to the projectivized split bundle $$ \PP \ \biggl( \ \bigoplus_{i=1}^{m} \mathcal{O}_{\PP_1\times\PP_1} \oplus \mathcal{O}_{\PP_1\times\PP_1}(1,1) \ \biggr) $$ and center isomorphic to $\PP_1\times\PP_1$. Moreover, if $m=1$ holds, $X$ admits a further birational divisorial contraction $X \to Q_4$ with exceptional divisor isomorphic to $\PP_3$ and center a point. \end{vartype} \begin{vartype}{8} \label{rem::no8} Here we have $X = \PP(\mathcal{O}_{Q_4} \oplus \mathcal{O}_{Q_4} (a_2) \oplus \ldots \oplus \mathcal{O}_{Q_4} (a_m) )$. Thus, there is a Mori fiber space $X \to Q_4$ with fibers isomorphic to $\PP_{m-1}$. If $a_2= \ldots = a_m > 0$ holds, then $X$ admits in addition a birational divisorial contraction $X \to Y$, where $Y := V(T_1T_2+T_3T_4+T_5T_6) \subseteq \PP(1^6, a_2^{m-1})$. The exceptional divisor is isomorphic to $Q_4 \times \PP_{m-2}$ and the center to $\PP_{m-2}$. \end{vartype} \begin{vartype}{9} \label{rem::no9} The variety $X$ is a bundle over $\PP_{m-1}$ with fibers isomorphic to $Q_4$. In particular, if $a_i=0$ holds for all $2\le i \le 6$, then $X\cong Q_4\times\PP_{m-1}$. \end{vartype} \begin{vartype}{10} \label{rem::no10} The variety $X$ admits a birational divisorial contraction $X \to \PP_{m+2}$ with exceptional divisor isomorphic to the projectivized split bundle $$ \PP \ \biggl( \ \bigoplus_{i=1}^{m} \mathcal{O}_{\PP_1} \oplus \mathcal{O}_{\PP_1}(1) \ \biggr) $$ and center isomorphic to $\PP_1$. For $m=1$, we have $\dim(X)=3$ and $X$ is No.~2.30 from~\cite{MoMu}; in this case it admits a further birational divisorial contraction $X \to Q_3$ with exceptional divisor isomorphic to $\PP_2$ and center a point. \end{vartype} \begin{vartype}{11} \label{rem::no11} Here $X = \PP(\mathcal{O}_{Q_3} \oplus \mathcal{O}_{Q_3} (a_2) \oplus \ldots \oplus \mathcal{O}_{Q_3} (a_m) ) $ holds.
Thus, there is a Mori fiber space $X \to Q_3$ with fibers isomorphic to $\PP_{m-1}$. If $a_2= \ldots = a_m > 0$ holds, then $X$ admits a birational divisorial contraction $X \to Y$, where the variety~$Y$ equals $V(T_1T_2+T_3T_4+T_5^2) \subseteq \PP(1^5, a_2^{m-1})$. The exceptional divisor is isomorphic to $Q_3 \times \PP_{m-2}$ and the center to $\PP_{m-2}$. \end{vartype} \begin{vartype}{12} \label{rem::no12} The variety $X$ is a bundle over $\PP_{m-1}$ with fibers isomorphic to $Q_3$. In particular, if $a=b=c=0$ holds, then $X\cong Q_3\times\PP_{m-1}$. \end{vartype} \begin{vartype}{13} \label{rem::no13} This case presents a one-parameter family of varieties $X_\lambda$, with parameter $\lambda\in\KK^*\!\setminus\!\{1\}$. They are generally non-isomorphic to each other, except for the pairs $X_\lambda \cong X_{\lambda^{-1}}$ for all $\lambda$. The variety $X_\lambda$ is the intersection of two hypersurfaces $$ D_1 \ = \ V(T_1S_1+T_2S_2+T_3S_3), \qquad D_2 \ = \ V(\lambda T_2S_2+T_3S_3+T_4S_4), $$ both of bidegree (1,1) in $\PP_3\times\PP_3$, where the $T_i$ are the coordinates of the first $\PP_3$ and the $S_j$ those of the second. Note that each $D_i$ has an isolated singularity, which is not contained in the other hypersurface. Both $D_1,D_2$ are terminal and factorial. Moreover, $X$ admits two Mori fiber spaces $\PP_3 \leftarrow X \to \PP_3$, both with typical fiber $\PP_1$ and having four special fibers, all isomorphic to $\PP_2$ and lying over the points $[1,0,0,0]$, $ [0,1,0,0]$, $[0,0,1,0]$ and $[0,0,0,1]$. \end{vartype} \begin{remark} In contrast to the toric case, a smooth projective variety of Picard number~$2$ with torus action of complexity one need not admit a non-trivial Mori fiber space. For example, in Theorem~\ref{thm:main2}, this happens in precisely two cases, namely No.~7 and No.~10, both with $m=1$. 
\end{remark} \begin{remark} In the list of Theorem~\ref{thm:main2} there are several examples where the effective cone coincides with the cone of movable divisor classes: No.~4A with $c=0$, No.~4C, No.~5 with $a=0$, No.~6 with $a=0$, No.~8 with $a_2=0$, No.~9 with $a_3=0$, No.~11 with $a_2=0$, No.~12 with $a=0$ and No.~13. Thus, these varieties admit no birational divisorial contraction. \end{remark} \begin{remark} In Theorem~\ref{thm:main1} it is possible that non-isomorphic varieties share the same Cox ring and thus differ from each other by a small quasimodification, i.e.\ only by the choice of the ample class. This happens exactly in the following cases: \begin{enumerate} \item No.~4 with $l_2=l_4=2$, $l_6=1$, $a=0$, $b=1$, $c_i=0$ for all $i=1,\ldots,m$ has the same Cox ring as No.~5 with $a=0$. Note that for $m=0$ both varieties are truly almost Fano, whereas for $m \ge 1$ No.~5 is Fano. \item For $m\ge1$, No.~4 with $l_2=2$, $l_4=l_6=1$, $a=b=1$, $c_i=0$ for all $i=1,\ldots,m$ has the same Cox ring as No.~6 with $a=c=0$ and $b=1$. Note that for $m=1$ both varieties are truly almost Fano, whereas for $m\ge2$ No.~6 is Fano. \item For $m\ge2$, No.~7 has the same Cox ring as No.~9 with $a_2=2$ and $a_3=\ldots=a_6=1$. Note that for $m=2,3$ No.~7 is Fano, for $m=4$ both varieties are truly almost Fano, whereas for $m\ge5$ No.~9 is Fano. \item For $m\ge2$, No.~10 has the same Cox ring as No.~12 with $a=b=c=1$. Note that for $m=2$ No.~10 is Fano, for $m=3$ both varieties are truly almost Fano, whereas for $m\ge4$ No.~12 is Fano. \end{enumerate} \end{remark}
\section{Divergences and divergence statistics} \PARstart{M}{any} of the divergence measures used in statistics are of the $f$-divergence type introduced independently by I. Csisz\'{a}r \cite{Csiszar1963}, T. Morimoto \cite{morimoto1963}, and Ali and Silvey \cite{Ali1966}. Such divergence measures have been studied in great detail in \cite{Liese1987}. Often one is interested in inequalities for one $f$-divergence in terms of another $f$-divergence. Such inequalities are for instance needed in order to calculate the relative efficiency of two $f$-divergences when used for testing goodness of fit, but there are many other applications. In this paper we shall study the more general problem of determining the joint range of any pair of $f$-divergences. The results are useful in determining general conditions under which information divergence is a more efficient statistic for testing goodness of fit than another $f$-divergence, but this application will not be discussed in this short paper. Let $f:\left( 0,\infty\right) \rightarrow\mathbb{R}$ denote a convex function satisfying $f\left( 1\right) =0.$ We define $f\left( 0\right) $ as the limit $\lim_{t\rightarrow0}f\left( t\right) $. We define $f^{\ast}\left( t\right) =tf\left( t^{-1}\right) .$ Then $f^{\ast}$ is a convex function and $f^{\ast}\left( 0\right) $ is defined as $\lim_{t\rightarrow0}tf\left( t^{-1}\right) =\lim_{t\rightarrow\infty}\frac{f\left( t\right) }{t}.$ Assume that $P$ and $Q$ are absolutely continuous with respect to a measure $\mu,$ and that $p=\frac{dP}{d\mu}$ and $q=\frac{dQ}{d\mu}.$ For arbitrary distributions $P$ and $Q$ the $f$-divergence $D_{f}(P,Q)\geq0$ is defined by the formula
\begin{equation}
D_{f}(P,Q)=\int_{\left\{ q>0\right\} }f\left( \frac{p}{q}\right) ~dQ+f^{\ast}\left( 0\right) P\left( q=0\right) \label{4}
\end{equation}
(for details about the definition (\ref{4}) and properties of the $f$-divergences, see \cite{Liese2006}, \cite{Liese1987} or \cite{Read1988}).
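As a numerical illustration of formula (\ref{4}) in the finite case, the following sketch computes $D_{f}$ for discrete distributions, including the $f^{\ast}(0)\,P(q=0)$ term. The function and variable names are our own, not from the paper; the snippet is only an illustration of the definition.

```python
# Illustrative only: D_f for finite distributions per the defining formula,
# including the f*(0) * P(q = 0) correction term.

def f_divergence(P, Q, f, f_star_0):
    """D_f(P, Q) = sum_{q_j > 0} q_j * f(p_j / q_j) + f*(0) * P(q = 0)."""
    total = 0.0
    mass_on_q_zero = 0.0
    for p, q in zip(P, Q):
        if q > 0:
            total += q * f(p / q)
        else:
            mass_on_q_zero += p
    if mass_on_q_zero > 0:
        total += f_star_0 * mass_on_q_zero
    return total

# f(t) = |t - 1| gives the L1-distance; here f*(0) = lim_{t->inf} |t-1|/t = 1.
P = [0.2, 0.3, 0.5]
Q = [0.4, 0.4, 0.2]
V = f_divergence(P, Q, lambda t: abs(t - 1.0), f_star_0=1.0)
# V equals sum_j |p_j - q_j| = 0.6 for these vectors
```

For mutually singular distributions such as $P=(1,0)$, $Q=(0,1)$ the $f^{\ast}(0)$ term is what makes the value come out as the full variational distance $2$.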
With this definition
\[
D_{f}\left( P,Q\right) =D_{f^{\ast}}\left( Q,P\right) .
\]
\begin{example} The function $f(t)=\left\vert t-1\right\vert $ defines the $L^{1}$-distance
\begin{equation}
\left\Vert P-Q\right\Vert =\sum_{j=1}^{k}q_{j}\,\left\vert \frac{p_{j}}{q_{j}}-1\right\vert =\sum_{j=1}^{k}\,\left\vert p_{j}-q_{j}\right\vert \text{ \ \ (cf. (\ref{4}))} \label{V}
\end{equation}
which plays an important role in information theory and mathematical statistics \cite{Barron1992, Fedotov2003}. \input{pinsker.TpX} \end{example}
In (\ref{4}) one often takes $f$ to be one of the power functions $\phi_{\alpha}$ of order $\alpha\in\mathbb{R}$, given in the domain $t>0$ by the formula
\begin{equation}
\phi_{\alpha}(t)={\frac{t^{\alpha}-\alpha(t-1)-1}{\alpha(\alpha-1)}}\text{ \ \ \ when \ }\alpha(\alpha-1)\neq0 \label{4a}
\end{equation}
and by the corresponding limits
\begin{equation}
\phi_{0}(t)=-\ln t+t-1\text{ \ \ and \ \ }\phi_{1}(t)=t\ln t-t+1. \label{4b}
\end{equation}
The $\phi$-divergences
\begin{equation}
D_{\alpha}(P,Q)\overset{def}{=}D_{\phi_{\alpha}}(P,Q),\text{ \ \ }\alpha\in\mathbb{R} \label{4c}
\end{equation}
based on (\ref{4a}) and (\ref{4b}) are usually referred to as power divergences of order $\alpha.$ For details about the properties of power divergences, see \cite{Liese2006} or \cite{Read1988}. Next we mention the best known members of the family (\ref{4c}), with a reference to the skew symmetry $D_{\alpha}(P,Q)=D_{1-\alpha}(Q,P)$ of the power divergences (\ref{4c}).$\medskip$
\begin{example} The $\chi^{2}$-divergence (or quadratic divergence or Pearson divergence)
\begin{equation}
D_{2}(P,Q)=D_{-1}(Q,P)={\frac{1}{2}}\sum_{j=1}^{k}{\frac{(p_{j}-q_{j})^{2}}{q_{j}}} \label{chi}
\end{equation}
leads to the well known Pearson and Neyman statistics.
The information divergence
\begin{equation}
D_{1}(P,Q)=D_{0}(Q,P)=\sum_{j=1}^{k}p_{j}\ln{\frac{p_{j}}{q_{j}}} \label{7}
\end{equation}
leads to the log-likelihood ratio and reversed log-likelihood ratio statistics. The symmetric Hellinger divergence
\[
D_{1/2}(P,Q)=D_{1/2}(Q,P)=H(P,Q)
\]
leads to the Freeman--Tukey statistic. \end{example}
\begin{example} The Hellinger divergence and the total variation are symmetric in the arguments $P$ and $Q.$ Non-symmetric divergences may be symmetrized. For instance the LeCam divergence is nothing but the symmetrized $\chi^{2}$-divergence given by
\[
D_{LeCam}\left( P,Q\right) =\frac{1}{2}D_{2}\left( P,\frac{P+Q}{2}\right) +\frac{1}{2}D_{2}\left( Q,\frac{P+Q}{2}\right) .
\]
Another symmetrized divergence is the Jensen Shannon divergence defined by
\[
JD_{1}\left( P,Q\right) =\frac{1}{2}D\left( P\left\Vert \frac{P+Q}{2}\right. \right) +\frac{1}{2}D\left( Q\left\Vert \frac{P+Q}{2}\right. \right) .
\]
The joint range of total variation with Jensen Shannon divergence was studied by Bri\"{e}t and Harremo\"{e}s \cite{Briet2009} and is illustrated in Figure \ref{vsjd}. \input{vsjd.TpX} \end{example}
In this paper we shall prove that the joint range of any pair of $f$-divergences is essentially determined by the range over distributions on a two-element set. In special cases the significance of determining the range over a two-element set has been pointed out explicitly in \cite{Topsoe2001a}. Here we shall prove that a reduction to two-element sets can always be made. \section{\label{sec1}Joint range of $f$-divergences} In this section we are interested in the range of the map $\left( P,Q\right) \rightarrow\left( D_{f}\left( P,Q\right) ,D_{g}\left( P,Q\right) \right) $ where $P$ and $Q$ are probability distributions on the same set.
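The skew symmetry $D_{\alpha}(P,Q)=D_{1-\alpha}(Q,P)$ mentioned above can be checked numerically. Below is a minimal sketch for strictly positive finite distributions; the helper names are ours and the snippet is only an illustration, not part of the paper's argument.

```python
import math

def phi(alpha):
    """Power function phi_alpha of (4a)-(4b)."""
    if alpha == 0:
        return lambda t: -math.log(t) + t - 1.0
    if alpha == 1:
        return lambda t: t * math.log(t) - t + 1.0
    return lambda t: (t**alpha - alpha * (t - 1.0) - 1.0) / (alpha * (alpha - 1.0))

def power_divergence(alpha, P, Q):
    """D_alpha(P, Q) for strictly positive finite distributions."""
    f = phi(alpha)
    return sum(q * f(p / q) for p, q in zip(P, Q))

P = [0.2, 0.3, 0.5]
Q = [0.4, 0.4, 0.2]
# skew symmetry: D_alpha(P, Q) == D_{1-alpha}(Q, P) for every alpha
gaps = [abs(power_divergence(a, P, Q) - power_divergence(1.0 - a, Q, P))
        for a in (-1.0, 0.0, 0.5, 1.0, 2.0, 3.0)]
```

The symmetry is exact because $\phi_{\alpha}^{\ast}=\phi_{1-\alpha}$, so the gaps above are zero up to floating-point rounding.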
\begin{definition} A point $\left( x,y\right) \in\mathbb{R}^{2}$ is $(f,g)$\emph{-achievable} if there exist probability measures $P$ and $Q$ on a $\sigma$-algebra such that $\left( x,y\right) =\left( D_{f}\left( P,Q\right) ,D_{g}\left( P,Q\right) \right) .$ An $(f,g)$-divergence pair $\left( x,y\right) $ is $d$-\emph{achievable} if there exist probability vectors $P,Q\in\mathbb{R}^{d}$ such that
\[
\left( x,y\right) =\left( D_{f}\left( P,Q\right) ,D_{g}\left( P,Q\right) \right) .
\]
\end{definition}
\begin{lemma} Assume that
\[
P_{0}\left( A\right) =Q_{0}\left( A\right) =1
\]
and
\[
P_{1}\left( B\right) =Q_{1}\left( B\right) =1
\]
and that $A\cap B=\varnothing.$ If $P_{\alpha}=\left( 1-\alpha\right) P_{0}+\alpha P_{1}$ and $Q_{\alpha}=\left( 1-\alpha\right) Q_{0}+\alpha Q_{1}$ then
\[
D_{f}\left( P_{\alpha},Q_{\alpha}\right) =\left( 1-\alpha\right) D_{f}\left( P_{0},Q_{0}\right) +\alpha D_{f}\left( P_{1},Q_{1}\right) .
\]
\end{lemma}
\begin{theorem} \label{TheoremConvex}The set of $(f,g)$-achievable points is convex. \end{theorem}
\begin{proof} Assume that $\left( P,Q\right) $ and $\left( \tilde{P},\tilde{Q}\right) $ are two pairs of probability distributions on a space $\left( \mathcal{X},\mathcal{F}\right) .$ Introduce a two-element set $B=\left\{ 0,1\right\} $ and the product space $\mathcal{X\times}B$ as a measurable space.
Let $\phi$ denote the projection on $B.$ Now we define a pair $\left( P_{\alpha},Q_{\alpha}\right) $ of joint distributions on $\mathcal{X\times}B.$ The marginal distribution of both $P_{\alpha}$ and $Q_{\alpha}$ on $B$ is $\left( 1-\alpha,\alpha\right) .$ The conditional distributions are given by $P_{\alpha}\left( \cdot\mid\phi=i\right) =P_{i}$ and $Q_{\alpha}\left( \cdot\mid\phi=i\right) =Q_{i}$ where $P_{0}=P,$ $P_{1}=\tilde{P},$ $Q_{0}=Q$ and $Q_{1}=\tilde{Q}.$ Then
\begin{multline*}
\left(
\begin{array}[c]{c}
D_{f}\left( P_{\alpha},Q_{\alpha}\right) \\
D_{g}\left( P_{\alpha},Q_{\alpha}\right)
\end{array}
\right) =\\
\left(
\begin{array}[c]{c}
\left( 1-\alpha\right) D_{f}\left( P_{0},Q_{0}\right) +\alpha D_{f}\left( P_{1},Q_{1}\right) \\
\left( 1-\alpha\right) D_{g}\left( P_{0},Q_{0}\right) +\alpha D_{g}\left( P_{1},Q_{1}\right)
\end{array}
\right) \\
=\left( 1-\alpha\right) \left(
\begin{array}[c]{c}
D_{f}\left( P_{0},Q_{0}\right) \\
D_{g}\left( P_{0},Q_{0}\right)
\end{array}
\right) +\alpha\left(
\begin{array}[c]{c}
D_{f}\left( P_{1},Q_{1}\right) \\
D_{g}\left( P_{1},Q_{1}\right)
\end{array}
\right) \\
=\left( 1-\alpha\right) \left(
\begin{array}[c]{c}
D_{f}\left( P,Q\right) \\
D_{g}\left( P,Q\right)
\end{array}
\right) +\alpha\left(
\begin{array}[c]{c}
D_{f}\left( \tilde{P},\tilde{Q}\right) \\
D_{g}\left( \tilde{P},\tilde{Q}\right)
\end{array}
\right) .
\end{multline*}
\end{proof}
\begin{example} For the joint range of total variation and Jensen Shannon divergence illustrated in Figure \ref{vsjd} the set of 2-achievable points is not convex but the set of 3-achievable points is convex and equals the set of all $(f,g)$-achievable points. \end{example}
\begin{theorem} Any $(f,g)$-achievable point is a convex combination of two $2$-achievable points. Consequently, any $(f,g)$-achievable point is $4$-achievable. \end{theorem}
\begin{proof} Let $P$ and $Q$ denote probability measures on a Borel space.
Define the set $A=\left\{ q>0\right\} $ and the function $X=p/q$ on $A.$ Then $Q$ satisfies
\begin{align}
Q\left( A\right) & =1,\label{norm}\\
\int_{A}X~dQ & \leq1.\nonumber
\end{align}
Now we fix $X$ and $A.$ The formulas for the divergences become
\begin{align*}
D_{f}\left( P,Q\right) & =\int_{A}f\left( X\right) ~dQ+f^{\ast}\left( 0\right) P\left( \complement A\right) \\
& =\int_{A}f\left( X\right) ~dQ+f^{\ast}\left( 0\right) \left( 1-\int_{A}X~dQ\right) \\
& =\int_{A}\left( f\left( X\right) +f^{\ast}\left( 0\right) \left( 1-X\right) \right) ~dQ\\
& =\mathrm{E}\left[ f\left( X\right) +f^{\ast}\left( 0\right) \left( 1-X\right) \right]
\end{align*}
and similarly
\[
D_{g}\left( P,Q\right) =\mathrm{E}\left[ g\left( X\right) +g^{\ast}\left( 0\right) \left( 1-X\right) \right] .
\]
Hence, the divergences only depend on the distribution of $X.$ Therefore we may without loss of generality assume that $Q$ is a probability measure on $\left[ 0,\infty\right) $. Define $C$ as the set of probability measures on $\left[ 0,\infty\right) $ satisfying $\mathrm{E}\left[ X\right] \leq1.$ Let $C^{+}$ be the set of additive measures $\mu$ on $\left[ 0,\infty\right) $ satisfying $\mu\left( A\right) \leq1$ and $\int_{A}X~d\mu\leq1.$ Then $C^{+}$ is convex and compact under setwise convergence. According to the Choquet--Bishop--de Leeuw theorem \cite[Sec. 4]{Phelps2001} any point in $C^{+}$ is the barycenter of a probability measure over the extreme points of $C^{+}.$ In particular an element $Q\in C$ is the barycenter of a probability measure $P_{bary}$ over extreme points of $C^{+}$ and these extreme points must in addition be probability measures with $P_{bary}$-probability 1. Hence $Q\in C$ is a barycenter of a probability measure over extreme points in $C.$ Let $Q$ be an element in $C.$ Let $A_{i},i=1,2,3$ be a disjoint cover of $\left[ 0,\infty\right) $ and assume that $Q\left( A_{i}\right) >0.$ Then
\[
Q=\sum_{i=1}^{3}Q\left( A_{i}\right) Q\left( \cdot\mid A_{i}\right) .
\]
For a probability vector $\lambda=\left( \lambda_{1},\lambda_{2},\lambda_{3}\right) $ let $Q_{\lambda}$ denote the distribution
\[
Q_{\lambda}=\sum_{i=1}^{3}\lambda_{i}Q\left( \cdot\mid A_{i}\right) .
\]
Then $Q_{\lambda}$ is an element in $C$ if and only if
\begin{equation}
\sum_{i=1}^{3}\lambda_{i}\int_{A}X~dQ\left( \cdot\mid A_{i}\right) \leq1. \label{reduceret}
\end{equation}
An extreme probability vector $\lambda$ that satisfies (\ref{reduceret}) has one or two of its weights equal to 0. Hence, if $Q$ is extreme in $C$ and $A_{i},i=1,2,3$ is a disjoint cover of $A,$ then at least one of the three sets satisfies $Q\left( A_{i}\right) =0.$ Therefore an extreme point $Q\in C$ is of one of the following two types: \begin{enumerate} \item $Q$ is concentrated in one point. \item $Q$ has support on two points. In this case the inequality $\int_{A}X~dQ\leq1$ holds with equality and $P\left( A\right) =1$ so that $P$ is absolutely continuous with respect to $Q$ and therefore supported by the same two-element set. \end{enumerate} The formulas for the divergences are linear in $Q.$ Hence any $(f,g)$-divergence pair is the barycenter of a probability measure $P_{bary}$ over points generated by extreme distributions $Q\in C.$ The extreme distributions of type $2$ generate 2-achievable points. For extreme points $Q$ concentrated in a single point we can reverse the argument and make a barycentric decomposition with respect to $P$. If an extreme $P$ has a two-point support then $Q$ is absolutely continuous with respect to $P$ and generates an $(f,g)$-achievable point that is $2$-achievable. If $P$ is concentrated in a point then this point may either be identical with the support of $Q,$ in which case the two probability measures are identical, or the support points are different and $P$ and $Q$ are singular but still $\left( P,Q\right) $ is supported on two points. Therefore any $(f,g)$-achievable point has a barycentric decomposition into 2-achievable points.
\input{trekant.TpX} Let $\mathbf{y}=\left( y,z\right) $ be an $(f,g)$-achievable point. As we have seen $\mathbf{y}$ is a barycenter of $(f,g)$-achievable points that are 2-achievable. According to Carath\'{e}odory's theorem \cite{Boltyanski2001} any barycentric decomposition in two dimensions may be obtained as a convex combination of at most three points $\mathbf{y}_{i},~i=1,2,3,$ as illustrated in Figure \ref{trekant}. Assume that all three points have positive weight. Let $\ell_{i}$ be the line through $\mathbf{y}$ and $\mathbf{y}_{i}.$ The point $\mathbf{y}$ divides the line $\ell_{i}$ in two half-lines $\ell_{i}^{+}$ and $\ell_{i}^{-},$ where $\ell_{i}^{-}$ denotes the half-line that contains $\mathbf{y}_{i}.$ The lines $\ell_{i}^{+},i=1,2,3$ divide $\mathbb{R}^{2}$ into three sectors, each of them containing one of the points $\mathbf{y}_{i},i=1,2,3.$ The set of $(f,g)$-divergence pairs that are $3$-achievable is curve-connected so there exists a continuous curve of $(f,g)$-divergence pairs that are 2-achievable from $\mathbf{y}_{1}$ to $\mathbf{y}_{2}$ that must intersect $\ell_{1}^{+}\cup\ell_{3}^{+}$ in a point $\mathbf{z}.$ If $\mathbf{z}$ lies on $\ell_{i}^{+}$ then $\mathbf{y}$ is a convex combination of the two points $\mathbf{y}_{i}$ and $\mathbf{z}.$ Hence, any $(f,g)$-divergence pair is a convex combination of two points that are $2$-achievable. From the construction in the proof of Theorem \ref{TheoremConvex} we see that any $(f,g)$-divergence pair is 4-achievable. An $f$-divergence on an arbitrary $\sigma$-algebra can be approximated by the $f$-divergence on its finite sub-algebras. Any finite $\sigma$-algebra is a Borel $\sigma$-algebra for a discrete space so for probability measures $P,Q$ on a $\sigma$-algebra the point $\left( D_{f}\left( P,Q\right) ,D_{g}\left( P,Q\right) \right) $ is in the closure of the 4-achievable points. For any function pair $(f,g)$ the intersection of the set of 2-achievable points and the first quadrant is closed.
4-achievable points are convex combinations of 2-achievable points so the intersection of the 4-achievable points and the first quadrant is closed and contains $\left( D_{f}\left( P,Q\right) ,D_{g}\left( P,Q\right) \right) $ even if $P,Q$ are measures on a non-atomic $\sigma$-algebra. \end{proof} The set of $(f,g)$-achievable points that are 2-achievable can be parametrized as $P=\left( 1-p,p\right) $ and $Q=\left( 1-q,q\right) .$ If we define $\overline{\left( 1-p,p\right) }=\left( p,1-p\right) $ then $D_{f}\left( P,Q\right) =D_{f}\left( \overline{P},\overline{Q}\right) .$ Hence we may without loss of generality assume that $p\leq q$ and just have to determine the image of the simplex $\Delta=\left\{ \left( p,q\right) \mid0\leq p\leq q\leq1\right\} .$ This result makes it very easy to make a numerical plot of the set of $(f,g)$-achievable points that are 2-achievable; the joint range is just the convex hull. \input{simplex.TpX} \section{Image of the triangle} In order to determine the image of the triangle $\Delta$ we have to check what happens at inner points and what happens at or near the boundary. Most inner points are mapped into inner points of the range. On subsets of $\Delta$ where the derivative matrix is non-singular the mapping $\left( P,Q\right) \rightarrow\left( D_{f},D_{g}\right) $ is open according to the open mapping theorem from calculus. Hence, all inner points that are not mapped into interior points of the range must satisfy
\[
\left\vert
\begin{array}[c]{cc}
\frac{\partial D_{f}}{\partial p} & \frac{\partial D_{g}}{\partial p}\\
\frac{\partial D_{f}}{\partial q} & \frac{\partial D_{g}}{\partial q}
\end{array}
\right\vert =0.
\]
Depending on the functions $f$ and $g$ this equation may be easy or difficult to solve, but in most cases the solutions will lie on a 1-dimensional manifold that will cut the triangle $\Delta$ into pieces, such that each piece is mapped isomorphically into subsets of the range of $\left( P,Q\right) \rightarrow\left( D_{f},D_{g}\right) .$ Each pair of functions $(f,g)$ will require its own analysis. The diagonal $p=q$ in $\Delta$ is easy to analyze. It is mapped into $\left( D_{f},D_{g}\right) =\left( 0,0\right) .$
\begin{lemma} \label{uendelig}If $f\left( 0\right) =\infty,$ and $\lim_{t\rightarrow0}\inf\frac{g\left( t\right) }{f\left( t\right) }=\beta_{0},$ then the supremum of
\[
\beta\cdot D_{f}\left( P,Q\right) -D_{g}\left( P,Q\right)
\]
over all distributions $P,Q$ is $\infty$ if $\beta>\beta_{0}.$ If $f^{\ast}\left( 0\right) =\infty,$ and $\lim_{t\rightarrow\infty}\inf\frac{g\left( t\right) }{f\left( t\right) }=\beta_{0},$ then the supremum of
\[
\beta\cdot D_{f}\left( P,Q\right) -D_{g}\left( P,Q\right)
\]
over all distributions $P,Q$ is $\infty$ if $\beta>\beta_{0}.$ If $g\left( 0\right) =\infty,$ and $\lim_{t\rightarrow0}\sup\frac{g\left( t\right) }{f\left( t\right) }=\gamma_{0},$ then the supremum of
\[
D_{g}\left( P,Q\right) -\gamma D_{f}\left( P,Q\right)
\]
over all distributions $P,Q$ is $\infty$ if $\gamma<\gamma_{0}.$ If $g^{\ast}\left( 0\right) =\infty,$ and $\lim_{t\rightarrow\infty}\sup\frac{g\left( t\right) }{f\left( t\right) }=\gamma_{0},$ then the supremum of
\[
D_{g}\left( Q,P\right) -\gamma D_{f}\left( Q,P\right)
\]
over all distributions $P,Q$ is $\infty$ if $\gamma<\gamma_{0}.$ \end{lemma}
\begin{proof} Assume that
\[
f\left( 0\right) =\infty\text{ \ and \ }\lim_{t\rightarrow0}\inf\frac{g\left( t\right) }{f\left( t\right) }=\beta_{0}.
\]
The first condition implies
\[
D_{f}\left( \left( 1,0\right) ,\left( 1/2,1/2\right) \right) =\infty
\]
and the second condition implies that $g\left( 0\right) =\infty$ and
\[
D_{g}\left( \left( 1,0\right) ,\left( 1/2,1/2\right) \right) =\infty.
\]
We have
\begin{multline*}
\frac{D_{g}\left( \left( p,1-p\right) ,\left( 1/2,1/2\right) \right) }{D_{f}\left( \left( p,1-p\right) ,\left( 1/2,1/2\right) \right) }\\
=\frac{g\left( 2p\right) /2+g\left( 2\left( 1-p\right) \right) /2}{f\left( 2p\right) /2+f\left( 2\left( 1-p\right) \right) /2}\\
=\frac{g\left( 2p\right) +g\left( 2\left( 1-p\right) \right) }{f\left( 2p\right) +f\left( 2\left( 1-p\right) \right) }.
\end{multline*}
Let $\left( t_{n}\right) _{n}$ be a sequence with $t_{n}\rightarrow0$ such that $\frac{g\left( t_{n}\right) }{f\left( t_{n}\right) }\rightarrow\beta$ for $n\rightarrow\infty.$ Then
\[
\frac{D_{g}\left( \left( \frac{t_{n}}{2},1-\frac{t_{n}}{2}\right) ,\left( 1/2,1/2\right) \right) }{D_{f}\left( \left( \frac{t_{n}}{2},1-\frac{t_{n}}{2}\right) ,\left( 1/2,1/2\right) \right) }\rightarrow\beta
\]
and the first result follows. The other three cases follow by interchanging $f$ and $g,$ and/or replacing $f$ by $f^{\ast}$ and $g$ by $g^{\ast}.$ We have used that
\[
\lim_{t\rightarrow0}\inf\frac{g^{\ast}\left( t\right) }{f^{\ast}\left( t\right) }=\lim_{t\rightarrow0}\inf\frac{tg\left( t^{-1}\right) }{tf\left( t^{-1}\right) }=\lim_{t\rightarrow\infty}\inf\frac{g\left( t\right) }{f\left( t\right) }.
\]
\end{proof}
\begin{proposition} Assume that $f$ and $g$ are $C^{2}$ and that $f^{\prime\prime}\left( 1\right) >0$ and $g^{\prime\prime}\left( 1\right) >0.$ Assume that $\lim_{t\rightarrow0}\inf\frac{g\left( t\right) }{f\left( t\right) }>0,$ and that $\lim_{t\rightarrow\infty}\inf\frac{g\left( t\right) }{f\left( t\right) }>0.$ Then there exists $\beta>0$ such that
\begin{equation}
D_{g}\left( P,Q\right) \geq\beta\cdot D_{f}\left( P,Q\right) \label{nederen}
\end{equation}
for all distributions $P,Q.$ \end{proposition}
\begin{proof} The inequality $\lim_{t\rightarrow0}\inf\frac{g\left( t\right) }{f\left( t\right) }>0$ implies that there exist $\beta_{0},t_{0}>0$ such that $g\left( t\right) \geq\beta_{0}f\left( t\right) $ for $t<t_{0}.$ The inequality $\lim_{t\rightarrow\infty}\inf\frac{g\left( t\right) }{f\left( t\right) }>0$ implies that there exist $\beta_{\infty}>0$ and $t_{\infty}>0$ such that $g\left( t\right) \geq\beta_{\infty}f\left( t\right) $ for $t>t_{\infty}.$ According to Taylor's formula we have
\begin{align*}
f\left( t\right) & =\frac{f^{\prime\prime}\left( \theta\right) }{2}\left( t-1\right) ^{2},\\
g\left( t\right) & =\frac{g^{\prime\prime}\left( \eta\right) }{2}\left( t-1\right) ^{2}
\end{align*}
for some $\theta$ and $\eta$ between $1$ and $t.$ Hence
\[
\frac{g\left( t\right) }{f\left( t\right) }=\frac{g^{\prime\prime}\left( \eta\right) }{f^{\prime\prime}\left( \theta\right) }\rightarrow\frac{g^{\prime\prime}\left( 1\right) }{f^{\prime\prime}\left( 1\right) }\text{ for }t\rightarrow1.
\]
Therefore there exists $\beta_{1}>0$ and an interval $\left] t_{-},t_{+}\right[ $ around $1$ such that $\frac{g\left( t\right) }{f\left( t\right) }\geq\beta_{1}$ for $t\in\left] t_{-},t_{+}\right[ .$ The function $t\rightarrow\frac{g\left( t\right) }{f\left( t\right) }$ is continuous on the compact set $\left[ t_{0},t_{-}\right] \cup\left[ t_{+},t_{\infty}\right] $ so it has a minimum $\tilde{\beta}>0$ on this set.
Inequality (\ref{nederen}) holds for $\beta=\min\left\{ \beta_{0},\beta_{1},\beta_{\infty},\tilde{\beta}\right\} .$ \end{proof}
\section{Examples} In this section we shall see a number of examples of how the method developed in this paper can be applied to determine the joint range for some pairs of $f$-divergences. Some of these results are known and others are new. We will not spell out all the details but shall restrict ourselves to the main flow of the argument that leads to the joint range.
\subsection{Power divergences of orders 2 and 3} We have
\begin{align*}
f\left( t\right) & =\phi_{2}(t),\\
g\left( t\right) & =\phi_{3}(t).
\end{align*}
In this case we have
\begin{gather*}
D_{f}\left( \left( p,1-p\right) ,\left( q,1-q\right) \right) =\frac{1}{2}\left( \frac{\left( p-q\right) ^{2}}{q}+\frac{\left( p-q\right) ^{2}}{1-q}\right) ,\\
D_{g}\left( \left( p,1-p\right) ,\left( q,1-q\right) \right) =\frac{1}{6}\left( \left( \frac{p}{q}\right) ^{3}q+\left( \frac{1-p}{1-q}\right) ^{3}\left( 1-q\right) -1\right) .
\end{gather*}
First we determine the image of the triangle. The derivatives are
\begin{align*}
\frac{\partial D_{f}}{\partial p} & =\frac{2}{2}\cdot\frac{\left( p-q\right) }{\left( 1-q\right) q}~,\\
\frac{\partial D_{f}}{\partial q} & =\frac{1}{2}\cdot\frac{\left( 2pq-q-p\right) \left( p-q\right) }{\left( 1-q\right) ^{2}q^{2}}~,\\
\frac{\partial D_{g}}{\partial p} & =\frac{-3}{6}\cdot\frac{\left( 2pq-q-p\right) \left( p-q\right) }{\left( 1-q\right) ^{2}q^{2}}~,\\
\frac{\partial D_{g}}{\partial q} & =\frac{2}{6}\cdot\frac{\left(
\begin{array}[c]{c}
pq+p^{2}+q^{2}-\\
3pq^{2}-3p^{2}q+3p^{2}q^{2}
\end{array}
\right) \allowbreak\left( p-q\right) }{\left( q-1\right) ^{3}q^{3}}~.
\end{align*}
The determinant of derivatives is
\begin{multline*}
\left\vert
\begin{array}[c]{cc}
\frac{\partial D_{f}}{\partial p} & \frac{\partial D_{g}}{\partial p}\\
\frac{\partial D_{f}}{\partial q} & \frac{\partial D_{g}}{\partial q}
\end{array}
\right\vert =\\
\frac{\left( p-q\right) ^{2}}{12q^{4}\left( 1-q\right) ^{4}}\left\vert
\begin{array}[c]{cc}
2 & 3p+3q-6pq\allowbreak\\
2pq-q-p & \left(
\begin{array}[c]{c}
6pq^{2}-2p^{2}-2q^{2}\\
-2pq+6p^{2}q-6p^{2}q^{2}\allowbreak
\end{array}
\right)
\end{array}
\right\vert \\
=-\frac{1}{12}\left( \frac{p-q}{q\left( 1-q\right) }\right) ^{4}.
\end{multline*}
We see that the determinant of derivatives is different from zero for $p\neq q$ so the interior of $\Delta$ is mapped one-to-one to the image. Hence we just have to determine the image of points on the boundary of $\Delta$ (or near the boundary if undefined on the boundary). For $P=\left( 1,0\right) $ and $Q=\left( 1-q,q\right) $ we get
\begin{align*}
D_{f}\left( P,Q\right) & =\frac{1}{2}\left( q+\frac{q^{2}}{1-q}\right) =\frac{1}{2}\left( \frac{1}{1-q}-1\right) ,\\
D_{g}\left( P,Q\right) & =\frac{1}{6}\left( \frac{1}{\left( 1-q\right) ^{2}}-1\right) =\frac{1}{6}\frac{\left( 2-q\right) q}{\left( 1-q\right) ^{2}}.
\end{align*}
The first equation leads to
\[
q=1-\frac{1}{2D_{f}+1}
\]
and hence
\[
D_{g}=\frac{2}{3}D_{f}\left( D_{f}+1\right) .
\]
We have
\[
\frac{g\left( t\right) }{f\left( t\right) }=\frac{{\frac{t^{3}-3(t-1)-1}{6}}}{{\frac{t^{2}-2(t-1)-1}{2}}}\rightarrow\infty\text{ for }t\rightarrow\infty.
\]
Hence all points $\left( 0,s\right) ,s\in\left[ 0,\infty\right) $ are in the closure of the range of $\left( P,Q\right) \rightarrow\left( D_{f},D_{g}\right) .$ By combining these two results we see that the range consists of the point $\left( 0,0\right) ,$ all points on the curve $\left( x,\frac{2}{3}x\left( x+1\right) \right) ,x\in\left( 0,\infty\right) $, and all points above this curve.
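The boundary curve just derived is easy to check numerically. The sketch below (helper names are ours) evaluates $D_{2}$ and $D_{3}$ at $P=(1,0)$, $Q=(1-q,q)$ and verifies $D_{g}=\frac{2}{3}D_{f}(D_{f}+1)$ for a few values of $q$:

```python
# Illustrative check of the boundary curve D_g = (2/3) D_f (D_f + 1)
# for f = phi_2, g = phi_3, with P = (1, 0) and Q = (1 - q, q).

def D2(P, Q):
    """Power divergence of order 2 (chi^2-type): (1/2) sum (p - q)^2 / q."""
    return 0.5 * sum((p - q) ** 2 / q for p, q in zip(P, Q))

def D3(P, Q):
    """Power divergence of order 3: (1/6)(sum p_j^3 / q_j^2 - 1)."""
    return (sum(p ** 3 / q ** 2 for p, q in zip(P, Q)) - 1.0) / 6.0

residuals = []
for q in (0.1, 0.25, 0.5, 0.75, 0.9):
    P, Q = (1.0, 0.0), (1.0 - q, q)
    x, y = D2(P, Q), D3(P, Q)
    residuals.append(abs(y - (2.0 / 3.0) * x * (x + 1.0)))
```

For example $q=0.9$ gives $D_{f}=4.5$ and $D_{g}=16.5=\frac{2}{3}\cdot4.5\cdot5.5$, so the residuals vanish up to rounding.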
Similar results hold for any pair of power divergences, but for other pairs than $\left( D_{2},D_{3}\right) $ the computations become much more involved. Note that the R\'{e}nyi divergences are monotone functions of the power divergences so our results easily translate into results on R\'{e}nyi divergences. More details on R\'{e}nyi divergences can be found in \cite{Erven2010}.
\subsection{Total variation and $\chi^{2}$-divergence} In this case we have
\begin{align*}
f\left( x\right) & =\left\vert x-1\right\vert ,\\
g\left( x\right) & =\frac{1}{2}\left( x-1\right) ^{2}.
\end{align*}
The function $f$ is not differentiable but on the triangle $\Delta$ we have $p\leq q$ and
\begin{align*}
D_{f}\left( P,Q\right) & =q\left\vert \frac{p}{q}-1\right\vert +\left( 1-q\right) \left\vert \frac{1-p}{1-q}-1\right\vert \\
& =2\left( q-p\right) .
\end{align*}
Hence $D_{f}\left( P,Q\right) $ is $C^{\infty}$ on $\Delta$ although $f$ is not differentiable. We get
\begin{align*}
\frac{\partial D_{f}}{\partial p} & =-2~,\\
\frac{\partial D_{f}}{\partial q} & =2~,\\
\frac{\partial D_{g}}{\partial p} & =\frac{\left( p-q\right) }{\left( 1-q\right) q}~,\\
\frac{\partial D_{g}}{\partial q} & =\frac{\left( 2pq-q-p\right) \left( p-q\right) }{2\left( 1-q\right) ^{2}q^{2}}~.
\end{align*}
Hence
\begin{align*}
\left\vert
\begin{array}[c]{cc}
\frac{\partial D_{f}}{\partial p} & \frac{\partial D_{g}}{\partial p}\\
\frac{\partial D_{f}}{\partial q} & \frac{\partial D_{g}}{\partial q}
\end{array}
\right\vert & =\left\vert
\begin{array}[c]{cc}
-2 & 2\\
\frac{\left( p-q\right) }{\left( 1-q\right) q} & \frac{\left( 2pq-q-p\right) \left( p-q\right) }{2\left( 1-q\right) ^{2}q^{2}}
\end{array}
\right\vert \\
& =-2\frac{\left( q-p\right) ^{2}\left( q-1/2\right) }{\left( 1-q\right) ^{2}q^{2}}.
\end{align*}
The mapping from $\Delta$ to the range of $\left( D_{f},D_{g}\right) $ is singular for $q=1/2.$ The line $p\rightarrow\left( p,1/2\right) $ is mapped into the curve
\begin{align*}
p & \rightarrow\left( D_{f}\left( P,Q\right) ,D_{g}\left( P,Q\right) \right) \\
& =\left( 2\left( \tfrac{1}{2}-p\right) ,2\left( p-\tfrac{1}{2}\right) ^{2}\right) .
\end{align*}
If the total variation is denoted $V$ this curve satisfies $\chi^{2}=\frac{1}{2}V^{2}$ and points satisfying $\chi^{2}\geq\frac{1}{2}V^{2}$ are 2-achievable. The inequality $\chi^{2}\geq\frac{1}{2}V^{2}$ has been proved previously by a different method \cite{Gibbs2002}.
\subsection{Total variation and LeCam divergence} On the triangle $\Delta$ we have
\begin{align*}
D_{f}\left( P,Q\right) & =2\left( q-p\right) ,\\
D_{g}\left( P,Q\right) & =\frac{1}{4}\left( \frac{\left( p-q\right) ^{2}}{p+q}+\frac{\left( p-q\right) ^{2}}{2-p-q}\right) .
\end{align*}
The derivatives of the LeCam divergence are
\begin{align*}
\frac{\partial}{\partial p}D_{g}\left( P,Q\right) & =\frac{\left( p-q\right) \left( p+3q-2pq-2q^{2}\right) }{\left( p+q\right) ^{2}\left( 2-p-q\right) ^{2}},\\
\frac{\partial}{\partial q}D_{g}\left( P,Q\right) & =\frac{\left( 2pq-q-3p+2p^{2}\right) \left( p-q\right) }{\left( p+q\right) ^{2}\left( p+q-2\right) ^{2}}.
\end{align*}
Hence
\begin{multline*}
\left\vert
\begin{array}[c]{cc}
\frac{\partial D_{f}}{\partial p} & \frac{\partial D_{g}}{\partial p}\\
\frac{\partial D_{f}}{\partial q} & \frac{\partial D_{g}}{\partial q}
\end{array}
\right\vert \\
=\left\vert
\begin{array}[c]{cc}
-2 & 2\\
\frac{\left( p-q\right) \left( p+3q-2pq-2q^{2}\right) }{\left( p+q\right) ^{2}\left( 2-p-q\right) ^{2}} & \frac{\left( 2pq-q-3p+2p^{2}\right) \left( p-q\right) }{\left( p+q\right) ^{2}\left( p+q-2\right) ^{2}}
\end{array}
\right\vert \\
=\frac{4\left( 1-p-q\right) \left( q-p\right) ^{2}}{\left( p+q\right) ^{2}\left( p+q-2\right) ^{2}}.
\end{multline*}
The mapping is singular for $q=1-p.$ We get the curve
\begin{align*}
p & \rightarrow\left( 2\left( \left( 1-p\right) -p\right) ,\frac{1}{4}\left( \frac{\left( p-\left( 1-p\right) \right) ^{2}}{p+\left( 1-p\right) }+\frac{\left( p-\left( 1-p\right) \right) ^{2}}{2-p-\left( 1-p\right) }\right) \right) \\
& =\left( 4\left( \tfrac{1}{2}-p\right) ,2\left( p-\tfrac{1}{2}\right) ^{2}\right) .
\end{align*}
If the total variation is denoted $V$ then the curve is $D_{g}=\frac{1}{8}V^{2}$ and any point above this curve is achievable.
\subsection{Information divergence and reversed information divergence} In this case we have
\begin{align*}
f\left( t\right) & =\phi_{1}(t)=t\ln t-t+1,\\
g\left( t\right) & =\phi_{0}(t)=-\ln t+t-1.
\end{align*}
We see that $g\left( 0\right) =\infty$ and that $\frac{g\left( t\right) }{f\left( t\right) }\rightarrow\infty$ for $t\rightarrow0.$ Lemma \ref{uendelig} implies that the supremum of
\[
D_{g}\left( P,Q\right) -\gamma D_{f}\left( P,Q\right) =D\left( Q\Vert P\right) -\gamma D\left( P\Vert Q\right)
\]
over all distributions $P,Q$ is $\infty$ for any $\gamma<\infty.$ Similarly the supremum of
\[
D\left( P\Vert Q\right) -\gamma D\left( Q\Vert P\right)
\]
over all distributions $P,Q$ is $\infty$ for any $\gamma<\infty.$ Since $\left( 0,0\right) $ is in the range and the range is convex, the range consists of all interior points of the first quadrant and the point $\left( 0,0\right) .$
\section{Acknowledgement} The authors thank Job Bri\"{e}t and Tim van Erven for comments on a draft of this paper. This work was supported by the European Network of Excellence and the GA\v{C}R grants 102/07/1131 and 202/10/0618. \setlength{\itemsep}{5pt} \bibliographystyle{ieeetr}
\section{Introduction} Nature must love neutrinos, because she makes so many of them: neutrinos are more abundant than photons (about $10^3$/cm$^3$; 10$^{17}$/sec pass through your body). In addition to the enormous density of big-bang relic neutrinos, effectively undetectable due to their tiny energy, neutrinos are produced copiously at solar (few MeV) and astrophysical (GeV--EeV) energy scales by a variety of processes. Since neutrinos are uncharged, (probably) massless leptons, they interact with matter only via the weak force. Thus, while they share some features with photons as a probe of the distant universe (straight-line propagation from sources at the speed of light), they offer the advantage of being able to penetrate regions with moderate mass density such as the center of our Galaxy. Neutrinos therefore let us observe regions of the universe as yet unseen. High energy photons and neutrinos are produced by similar processes, for example by the decay of mesons produced in hadronic interactions of charged particles near a cosmic ray source. Compact binary systems, in which a neutron star orbits a giant companion, are excellent candidates for copious photon and neutrino production, as protons are accelerated in the pulsar's intense, rapidly changing magnetic fields, and interact in the periphery of the companion star. One therefore expects to see neutrinos from sources that produce high energy gamma rays. While current experiments have seen clear gamma ray signals from only a few identifiable point sources\cite{haines 1994}, this is almost certainly due to experimental limitations. We know that cosmic ray hadrons (protons and/or nuclei) are produced in the EeV (10$^{18}$ eV) region, beyond the reasonable limit for supernova shock acceleration (thought to account for most of the cosmic ray flux below 10$^{15}$ eV), and if protons are accelerated there must be interactions near the sources yielding photons and neutrinos.
Other source mechanisms are unique to neutrinos, such as the widely accepted models for abundant UHE neutrino production in Active Galactic Nuclei (AGNs)\cite{stecker 1991}. Here the power source is thought to be a black hole about 10$^{6-9}$ times as massive as the Sun, protons are accelerated by shocks in jets or flow in the accretion disk, and neutrinos are produced by interactions with the high density of UV or optical photons near the nucleus. Model calculations show that we should expect a neutrino spectrum much harder than the normal cosmic ray spectrum, leading to a previously unexpected wealth of neutrinos in the PeV (10$^{15}$ eV) range (Fig.~\ref{agnmuvse}), making practical secondary studies such as tomography of the Earth's core. Neutrino observations in the GeV--PeV range thus complement photon observations at all energies, and provide useful discrimination between some models\cite{HENA review}\cite{stenger 1992}. \begin{figure} \epsfxsize=5.75truein \epsffile{agnmuvse.eps} \caption{Expected rates (events per year) in DUMAND-II from AGN neutrinos from several leading models.} \label{agnmuvse} \end{figure} There are basic physics questions to be answered: why do neutrinos come in three flavors? Do they have mass? Are they the solution to the dark-matter puzzle? As an example, recent results from the Kamiokande-III\cite{kamiokande 1994} and IMB\cite{IMB 1993} underground neutrino detectors suggest a substantial deviation from expectation in the observed ratio of muon to electron neutrinos produced in the atmosphere; it is possible to interpret the data in terms of neutrino oscillations, consistent with an island of allowed values in the mixing-angle/mass-difference parameter space. Neutrino astrophysics experiments like these provide a way to address such questions with costs an order of magnitude below those of contemporary accelerator experiments ({\it i.e.}, on the order of US\$ 10 million). 
There is no question that in the future we will have to find ways to do particle physics that make much smaller demands on the world economy. But for many of us, one of the most attractive features of neutrino astrophysics is the virginity of the field: the unexpected is always a possibility, and historically science has made great advances whenever a new mode of viewing the universe has been tested. Perhaps the first large-scale neutrino detectors will eventually have the significance of Galileo's spyglass. The basic concept of a water or ice \v{C}erenkov detector is illustrated in Fig.~\ref{watercer}, which depicts a neutrino interaction producing a muon. Seawater (or ice) serves a triple purpose, acting as a low-cost massive target, supplying a track-sensitive, transparent medium for production and propagation of Cerenkov radiation by charged particles, and also providing a thick, uniform overburden (in contrast to underground experiments, with nonuniform material and an irregular surface profile) to filter out downward-moving background particles. The water volume is instrumented with an array of sensitive photomultiplier tubes (PMTs). The attenuation length for light in water at the DUMAND site in the appropriate wavelength range is about 40 m, which defines the scale of the transverse spacing of detector ``strings", and the vertical separation of PMTs is set at 10 m to provide adequate photocathode coverage; similar parameters apply to ice. Upward-moving neutrinos, having passed through the earth (and thus being accompanied by essentially no background, as shown in Fig.~\ref{muangdist}), interact in the contained volume of water or in the nearby seabed, producing muons: charged particles moving near the speed of light {\it in vacuo}, which therefore generate Cerenkov radiation in the water (n=1.35 in seawater). 
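The basic kinematics above can be put into numbers with a short sketch (illustrative only, not DUMAND analysis code; the muon mass and the two-OM geometry are standard values chosen for the example, not taken from the text):

```python
import math

C = 0.299792458   # m/ns, vacuum speed of light
N_WATER = 1.35    # refractive index of seawater, as quoted above

def cherenkov_angle_deg(n, beta=1.0):
    """Cherenkov emission angle from cos(theta) = 1/(n*beta)."""
    return math.degrees(math.acos(1.0 / (n * beta)))

def muon_threshold_mev(n, m_mu=105.66):
    """Minimum muon total energy for Cherenkov emission (beta > 1/n)."""
    return m_mu * n / math.sqrt(n * n - 1.0)

def arrival_ns(d, z_pmt, n=N_WATER):
    """Photon arrival time at a PMT at perpendicular distance d and height
    z_pmt, for a muon going up the z-axis at ~c and crossing z=0 at t=0."""
    theta = math.radians(cherenkov_angle_deg(n))
    z_emit = z_pmt - d / math.tan(theta)   # point on the track that illuminates this PMT
    return z_emit / C + n * (d / math.sin(theta)) / C

print(cherenkov_angle_deg(N_WATER))                    # ~42 degrees
print(muon_threshold_mev(N_WATER))                     # ~160 MeV
print(arrival_ns(40.0, 10.0) - arrival_ns(40.0, 0.0))  # ~33 ns between OMs 10 m apart
```

For a track parallel to a string, successive hits climb the string at the muon's speed; it is this rigid space-time pattern that makes track reconstruction from PMT times possible.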
The Cerenkov light is produced in a characteristic cone-shaped pattern, and thus information on the arrival time and pulse intensity recorded at each of the photomultiplier tubes can be used to reconstruct the muon track direction. For energetic muons, collected photoelectron statistics can be sufficient to provide a muon energy estimate. In the case of ``contained events", where the event vertex is within the sensitive volume, the hadron-electromagnetic cascade can be observed and a more accurate energy estimate made. \begin{figure} \epsfxsize=5.75truein \epsffile{watercer.eps} \caption{Water (or ice) \v{C}erenkov detector concept.} \label{watercer} \end{figure} \begin{figure} \epsfxsize=5.75truein \epsffile{muangdist.eps} \caption{Muon angular distribution: background muons from atmospheric cosmic ray interactions are cut off by looking only at upward-moving tracks.} \label{muangdist} \end{figure} The idea of detecting high energy astrophysical neutrinos is an old one, and calls for development of a practical detector date from at least the early 60s\cite{markov 1960}. Apparent anomalies in the underground muon flux\cite{keuffel 1971} stimulated interest in underwater muon detectors offering uniform overburden, and indirectly fostered development of the current generation of large-scale neutrino detectors\cite{learned 1967}. The DUMAND concept in more or less its present form has been discussed, and construction projects of various degrees of practicality have been proposed, since the mid-70s\cite{DUMAND 1976}. The water Cerenkov technique was further refined in the early 80s by the successful construction and operation of large-scale proton-decay detectors (later used as low-energy neutrino observatories) by the IMB\cite{IMB 1992} and Kamiokande\cite{kamiokande 1992} Collaborations. These projects made it possible for the DUMAND proposal to be accepted for construction funding by the US Department of Energy in 1990. 
The cost and risks involved in deep-ocean engineering operations were still a matter of concern. At about the same time the AMANDA group proposed an alternative approach, in which the Antarctic ice cap replaces the ocean as overburden, target and detecting medium. Deployment operations take place from the stable platform of the South Pole Station. AMANDA has its substantial logistical requirements covered by the US National Science Foundation's Office of Polar Programs, which supports all scientific research operations in Antarctica. The remainder of this article will compare and contrast AMANDA and DUMAND, ending with a look at initiatives presently being undertaken for the next step in detector sensitivity, a second-generation observatory of scale 1 km$^3$. As a participant in DUMAND, I hope to avoid any inadvertent bias in this review. Two parallel efforts in Europe, the NESTOR project in Greece and the Baikal project in Russia, will not be discussed here simply due to lack of space. Both projects are making significant progress and will have important effects on the development of this rapidly-growing field. \section{DUMAND} Taking our subjects in order of age, the DUMAND project has been discussed in one form or another for nearly 30 years\cite{roberts 1992}. The detector presently being constructed in Hawaii is called DUMAND-II\cite{dumandcollab}. DUMAND-I refers to a ship-suspended single prototype string which was successfully operated in 1987\cite{babson 1990}. The funding plan provides for deployment of the full 9-string array (Fig.~\ref{dumarray9}) in two phases: first 3 strings (the triad) as a demonstration, and the remaining 6 strings (complete octagon, plus center string) after about 1 year of testing and operation. Details of the detector design and physics capabilities have been published elsewhere\cite{icrcdumand 1993}. 
\begin{figure} \epsfxsize=5.75truein \epsffile{dumarray9.eps} \caption{DUMAND-II underwater neutrino detector array.} \label{dumarray9} \end{figure} The Island of Hawaii was selected for a variety of compelling reasons: exceptional water clarity, proximity of an abyssal plain with appropriate seabed characteristics to a suitable shore site (30 km away), presence of an active particle physics group at the nearby University of Hawaii in Honolulu, and pre-existing laboratory infrastructure at the shore site, due to an ocean thermal energy research project. The latter feature even provided a cost-free conduit for the DUMAND shore cable to pass beneath the surf zone, since the thermal energy project involves slant drilling of tunnels into the ocean. When completed, DUMAND-II will be an array of 216 Optical Modules (OMs: photomultiplier tubes plus front-end electronics, encased in a standard glass oceanographic pressure sphere) deployed on nine vertical strings, which are moored in an octagonal pattern with 40 m sides and one string in the center (Fig.~\ref{dumarray9}). The instrumented portion of each string begins 100 m above the ocean floor to avoid boundary-layer effects. In addition to OMs, the strings include sets of hydrophones and other acoustical equipment, and calibration modules, in which a constant output laser light source is used to excite a scintillator ball viewed by the PMTs. The array is being deployed on the ocean floor at depth 4800 m, 30 km due west from the Kona Coast of the Island of Hawaii (Fig.~\ref{sitemap}), and is connected to the shore laboratory at Keahole Point by a cable combining electrical and fiber optic elements, terminating in an underwater junction box. The shore cable contains 12 fibers (including spares) and a copper layer which supplies 5 kW of electrical power at 350 VDC, using a seawater return system. Fig.~\ref{dumblock} shows an overall block diagram for the DUMAND detector system. 
The underwater site places no inherent limitation on possibilities for future expansion of the detector. With all 9 strings in place, DUMAND will have an effective detection area of 20,000 m$^2$, instrumenting a column of water which has the height of the Eiffel tower and its width at the base. \begin{figure} \epsfxsize=5.75truein \epsffile{sitemap.eps} \caption{DUMAND site off the Big Island of Hawaii.} \label{sitemap} \end{figure} \begin{figure} \epsfxsize=5.75truein \epsffile{dumblock.eps} \caption{Block diagram of the DUMAND detector system} \label{dumblock} \end{figure} Signals from the PMTs are pre-processed within the optical modules (Fig.~\ref{omdraw}), providing standard pulses which encode time of arrival (to $\sim 1$ ns accuracy), pulse area, and time-over-threshold (TOT), a measure of pulse duration. Data from the 24 OMs on each string are digitized and serialized in the string controller module by a custom 27-channel (including spares and housekeeping) monolithic GaAs TDC/buffer/multiplexer chip which operates with 1.25 nsec timing precision and 2-level internal buffers. The data stream is sent to shore via optical fibers (one per string) at 0.5 GHz. A separate optical fiber carries environmental and acoustical ranging information which is used to measure the geometry of the array. \begin{figure} \epsfxsize=5.75truein \epsffile{omdraw.eps} \caption{DUMAND Optical Module.} \label{omdraw} \end{figure} The data system has been designed to cope with the background rate from radioactivity in the water (primarily from natural $^{40}K$) and bioluminescence and still generate minimal deadtime for recording neutrino events. Results from the 1993 deployment confirmed observations made in the 1987 DUMAND-I experiment\cite{bradner 1987}. As Fig.~\ref{omrates} shows, the dark counting rate for a single OM was found to be on the order of 60 kHz, primarily due to trace $^{40}K$ in the huge volume of seawater each tube views. 
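A back-of-envelope Poisson estimate shows why a trigger must exploit space-time patterns rather than raw hit multiplicity. The 100 ns window below is an assumed illustrative value, not the actual DUMAND trigger window:

```python
import math

def poisson_ge(k, mu):
    """P(X >= k) for a Poisson-distributed count with mean mu."""
    return 1.0 - sum(math.exp(-mu) * mu**i / math.factorial(i) for i in range(k))

n_om, rate, window = 216, 60e3, 100e-9   # OMs, dark rate per OM (Hz), window (s)
mu = n_om * rate * window                # mean accidental hits per window, array-wide
print(mu)                                # ~1.3 noise photoelectrons in any 100 ns
print(poisson_ge(6, mu) / window)        # rough rate (Hz) of >=6 accidental hits
```

Even a six-fold multiplicity requirement alone would fire at many kHz on noise; demanding hits consistent in space, time and pulse height with a Cerenkov cone moving at the speed of light is what reduces the accidentals to a manageable level.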
Noise due to bioluminescence is episodic and likely to be unimportant after the array has been stationary on the ocean bottom for some time, since the light-emitting microscopic creatures are stimulated by motion. $^{40}K$ and bioluminescence contribute mainly 1 photoelectron hits distributed randomly in time over the entire array. The raw information is sent to the shore station 30 km away for processing. The trigger system looks for patterns in time, space and pulse height in the OM signals consistent with the passage of charged particles through the array. Events satisfying the trigger criteria are recorded for further off-line analysis. Since 1992, DUMAND teams have been preparing the site and testing underwater assembly operations. DUMAND-II requires a reasonably flat site with appropriate soil bearing properties. The selected site has been marked with acoustical transponders which have been accurately surveyed in geophysical (GPS) coordinates (Fig.~\ref{site}), and its suitability was verified remotely by acoustical imaging, film camera and video recordings; in addition, DUMAND personnel have cruised the area in a manned submarine, the US Navy's {\it DSV Sea Cliff}, to verify that the site is flat and free of any undesirable features. These preliminary operations also confirmed the exceptional clarity of the water, with attenuation length about 40m in the appropriate wavelength band. \begin{figure} \epsfxsize=5.75truein \epsffile{omrates.eps} \caption{Singles rates in a typical DUMAND optical module. The histogram shows the mean counting rate over a series of 0.2 sec recording intervals. 
The quiescent rate is about 60 kHz, with occasional intervals showing spikes above 100 kHz due to bioluminescence.} \label{omrates} \end{figure} \begin{figure} \epsfxsize=5.75truein \epsffile{site.eps} \caption{Contour map of the DUMAND site (depths in meters below sea level), showing placement of acoustical transponders, junction box, and cable as surveyed during the 12/93 deployment operation.} \label{site} \end{figure} We need to point reconstructed muon tracks onto the celestial sphere with an accuracy better than 1$^o$ (the median angle between primary $\nu$ and secondary $\mu$ at 1 TeV). This means that relative OM locations must be known to the order of a few cm, and the overall geographical orientation of the array must be known to much better than 1 degree. The Global Positioning Satellite (GPS) system plus conventional oceanographic acoustical survey techniques allow us to measure the geographical coordinates of underwater fiducials (acoustical transponders) to within a few meters, satisfying the geographical orientation requirement. We were unable to find a commercial system able to reliably provide the OM positioning accuracies required, so we designed our own sonar system, which measures acoustical signal transit times with 10 $\mu$sec precision using frequency modulated chirps and matched filtering via DSPs\cite{berns 1993}. Other components of the environmental monitoring system measure oceanographic parameters such as water currents, temperature and salinity (needed to calculate the local speed of sound). In December of 1993, the DUMAND scientific team and the crew of the University of Washington oceanographic ship {\it R/V Thomas G. Thompson} successfully deployed the first major components of DUMAND, including the junction box, the environmental module, and the shore cable, with one complete OM string attached to the junction box. Other DUMAND personnel prepared the shore station for operation. 
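The acoustical positioning described above reduces to converting chirp transit times into ranges (at roughly 1500 m/s, 10 $\mu$s corresponds to 1.5 cm) and solving for positions. A minimal two-dimensional sketch with invented beacon coordinates follows; the real survey works in three dimensions with many transponders:

```python
import math

C_SOUND = 1500.0   # m/s, nominal deep-ocean sound speed

def trilaterate_2d(beacons, r):
    """Solve (x, y) from three beacon ranges by subtracting squared-range
    equations, which leaves a 2x2 linear system (exact for consistent data)."""
    (x1, y1), (x2, y2), (x3, y3) = beacons
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r[0]**2 - r[1]**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r[0]**2 - r[2]**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

beacons = [(0.0, 0.0), (200.0, 0.0), (0.0, 200.0)]   # invented transponder layout (m)
true_pos = (55.0, 80.0)
transit = [math.dist(true_pos, b) / C_SOUND for b in beacons]  # chirp transit times
ranges = [C_SOUND * t for t in transit]
print(trilaterate_2d(beacons, ranges))   # recovers (55.0, 80.0)
```

A 10 $\mu$s timing error maps to a 1.5 cm range error, comfortably inside the few-cm requirement on relative OM positions; knowing the local sound speed (hence the salinity and temperature measurements) is what keeps the conversion honest.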
The procedures for the lowering and cable laying operations had been worked out in practice runs. Cable laying equipment was leased and mounted on the ship. Environmental monitoring equipment and the site-defining navigational sonar array were also laid out and used in the deployment operation. The basic infrastructure for DUMAND, comprising the underwater junction box, the 30 km optical fiber/copper cable to shore, and the shore station facility, is now in place. The deployed string was used to record backgrounds and muon events. Unfortunately, an undetected flaw in one of over 100 electrical penetrators (connectors) used for the electronics pressure vessels produced a small water leak. Seawater eventually shorted out the string controller electronics, disabling further observations after about 10 hours of operation. In January, 1994, the disabled string was remotely released by an acoustical signal, recovered at sea, and returned to Honolulu for diagnosis and repair. The fault has been analyzed and quality assurance procedures to avoid future recurrences have been put in place. In addition to the refurbished first string, two further strings are currently undergoing final assembly and testing. We plan to make extensive deep water tests of these three strings before mooring them at the DUMAND site. Surface ship and underwater vehicle resources needed to carry out deployment and interconnection operations will be available in 1995. After redeployment of the first string of OMs, each successive string will be moored at the vertices of an octagon at a radius of 40 m. Acceptable placement error is about 5 m; this tolerance can be readily achieved using available ships with dynamic positioning capability (basically, GPS navigation coupled to the ship's thrusters), according to simulation studies performed by a marine operations consulting firm. Strings will be connected to the junction box by an umbilical cable and wet-mateable electrical/fiber-optic connector. 
Using a mockup junction box and string mooring, the US Navy's Advanced Tethered Vehicle (ATV) carried out successful tests of the connecting operation in 1992, proving that tethered remotely operated vehicles (ROVs, which are cheaper and more readily available than manned submersibles) are also an option for DUMAND underwater maintenance activities. Although the success of the DUMAND deployment was marred by the failure of a single penetrator, enough was learned from the limited period of live operation to be confident that it will be possible to complete and operate the whole DUMAND array. The failure provided an undesired but nonetheless useful opportunity to test procedures for recovering faulty equipment from the sea, an essential task for long term operation. The overall plan is to install and operate three strings as a full-up demonstration, and then proceed to deployment of the remaining six strings after about a year of test operation. Further information on DUMAND is available via the DUMAND Home Page on the World Wide Web. The URL address is \noindent \begin{verbatim} http://web.phys.washington.edu/dumand.html \end{verbatim} \section{AMANDA} The Antarctic Muon and Neutrino Detector (AMANDA) uses the same fundamental detector concept as DUMAND, but substitutes polar ice for abyssal seawater\cite{amandacollab}. Photomultiplier tubes are placed in vertical shafts melted into the icecap at the South Pole, and data acquisition is handled in a counting house established at the surface\cite{amandaicrc 1993}. The detector layout is depicted in Fig.~\ref{amanda}. \begin{figure}[p] \epsfxsize=5.75truein \epsffile{amanda.eps} \caption{AMANDA array: upper portion was deployed in 1/94, lower portion is to be deployed in 12/95.} \label{amanda} \end{figure} This approach exploits two significant advantages of ice as a medium: it is a stable solid, and it is biologically and radiologically sterile. 
The ice forms a rigid, adaptive support for the OM strings, and thus the need for measuring OM positions is reduced from a continuous monitoring process to a one-time survey procedure during deployment. Backgrounds due to bioluminescence and natural radioactivity such as $^{40}K$ are effectively absent, reducing the background noise rate substantially, and allowing lower true event rates per sky pixel to be detected as a significant excess\cite{amanda 1994}. Only the Antarctic plateau provides a layer of ice of sufficient depth, about 3 km total (although deployment depths are for practical purposes limited to about 2 km). While real logistical costs are very high, the US National Science Foundation operates a vigorous, well-supported research program in Antarctica. One significant result is ample support for the operational aspects of AMANDA, from a source independent of conventional particle physics funding. The US South Pole Station is well equipped, and staffed year-round. Access is by air only, and field operations can take place only during the Austral summer season, roughly October through February. A small staff of technicians and scientists volunteers to remain icebound through the 6-month winter season, maintaining experiments and forwarding limited amounts of data to the continental USA via satellite links and land lines. While data rates for communications will be improving over the next few years (plans exist to provide the South Pole Station with 56 kB/sec Internet access) AMANDA presently must depend to some extent on suitcases full of tape cassettes for data transfer. 
Since the data acquisition system is only a short distance away from the OMs, at the surface of the ice, AMANDA does not require front-end electronics to be built into the optical modules or a local string controller; the OMs, as shown in Fig.~\ref{amandaOM}, are thus just PMTs in a glass pressure sphere (the same type used in DUMAND), connected to the outside world by coaxial cable (which also carries in the high voltage power supply). Signal degradation produces some limitations on cable length, but for the relatively shallow depths used thus far, and planned for the next stage of deployment, there should be no significant loss of timing information. The advantage of having foolproof, simple, dumb OMs is very tangible. The remote location, with highly limited access and long supply lines, causes fewer difficulties than might be imagined, although careful planning is essential (and enforced by Antarctic Program management, who have long experience in these matters). One is about 5,000 km from the nearest electronics parts store, and half the useful season can be lost waiting for a forgotten item, so the supply of spares and equipment must be thought through very carefully and stringent predeployment testing is required. \begin{figure}[hp] \epsfxsize=5.75truein \epsffile{amandaOM.eps} \caption{AMANDA Optical Module.} \label{amandaOM} \end{figure} An additional problem is the need for fuel to melt holes over one km deep and about 60 cm in diameter for string deployment. The initial deployments took advantage of a cache of surplus aircraft fuel at the South Pole, stored too long to be certifiable for aircraft use but perfectly suitable for ice-melting. This supply has been consumed, and future deployments will require every liter of fuel to be flown in (along with all other supplies). 
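A rough energy budget shows why fuel dominates the drilling logistics. The numbers below (ice warmed from an assumed $-50\,^\circ$C, fuel at an assumed 35 MJ per liter) are illustrative round values, not AMANDA engineering figures:

```python
import math

DIAMETER, DEPTH = 0.60, 1000.0     # hole size (m), as quoted in the text
RHO_ICE = 917.0                    # kg/m^3, density of ice
C_ICE, DELTA_T = 2.05e3, 50.0      # J/(kg K); warming from an assumed -50 C to 0 C
L_FUSION = 3.34e5                  # J/kg, latent heat of melting
E_FUEL = 35e6                      # J per liter of fuel (assumed)

mass = RHO_ICE * math.pi * (DIAMETER / 2)**2 * DEPTH
liters_ideal = mass * (C_ICE * DELTA_T + L_FUSION) / E_FUEL
print(round(liters_ideal))         # ~3200 L at perfect heat delivery
```

Set against the roughly 12,000 liters actually consumed per shaft, this implies a heat-delivery efficiency of order 25--30\%, which is why a more efficient drill design pays off directly in airlifted fuel.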
Since the existing shafts (approximately 1 km deep) consumed about 12,000 liters of fuel each, and deeper shafts require disproportionately larger amounts of fuel, this is a serious concern. However experience from the initial operations led to a more efficient drill design, now under construction, and it is expected that the deeper holes now required can be made without substantially increasing the fuel requirements. A test string of four 20 cm diameter OMs was successfully deployed and operated at 800 m depth in 1992. The PMTs used were available from a previous experiment, and OM size was limited by drilling capabilities. Data on the flux of Cerenkov light from down-going muons were interpreted to mean that the ice at $\sim 1$ km was essentially bubble-free, and results from this test were considered sufficiently promising to proceed to a first-stage deployment of four full strings, each containing 20 OMs, in 1994. In this operation, the drilling system performed very well, operating nearly continuously for about 45 days and drilling holes at the rate of 90 hr/km. The OM signal characteristics from the 1994 deployment were about as expected: timing resolution about 5 nsec, stable operation with gain $10^8$, dark noise rate about 2 kHz. Of the 80 OMs deployed, 73 were operating well 5 months later, a reasonable survival rate. In addition to coaxial cables carrying power down and signals up, the strings included optical fibers to distribute calibration signals from a laser source on the surface to each OM. Each optical fiber terminates in a nylon diffusing sphere located 30 cm from its OM. Unfortunately, laser calibration signals were found to have transit times between diffuser balls and OMs that were much longer than expected for unobstructed straight-line paths. 
Fig.~\ref{amandatransit} shows two examples of transit time distributions, with the geometrical distance between source and OM corresponding to arrival time delays of 91 and 142 nsec respectively\cite{amanda 1994}. As can be seen from the figure, the mean arrival time is more than 5 times longer, and even the earliest arrivals take nearly twice as long as expected to reach the OMs. These data have been carefully analyzed by the AMANDA group, and the conclusion is that a) the absorption length of 475 nm photons in polar ice is about 60 meters, but b) the ice contains a significant density of bubbles which produces an effective scattering length of only 20 cm. Fig.~\ref{amandascatt} shows that the arrival time data provide a good fit to these hypotheses. \begin{figure}[hp] \epsfxsize=5.75truein \epsffile{amandatransit.eps} \caption{Optical pulse transit time distributions from AMANDA calibration data, for distances of a) 21 and b) 32 meters. The expected arrival times for direct paths would be approximately 92 and 140 nsec respectively. Solid lines show fits to a diffusion model with appropriate effective scattering length (see Fig.~\ref{amandascatt}).} \label{amandatransit} \end{figure} \begin{figure}[hp] \epsfxsize=5.75truein \epsffile{amandascatt.eps} \caption{Inverse effective scattering length for light at the AMANDA site as a function of depth in ice.} \label{amandascatt} \end{figure} The depth dependence of the scattering length is consistent with results from microscopic examination of ice cores from Greenland and Vostok (a Russian Antarctic base). At Vostok, where the altitude and snow accumulation rate differ from the South Pole, but the ice temperature profile is similar, core samples show fewer than 0.5 bubbles/cm$^3$ below 1280 meters depth. This gives hope that putting the AMANDA strings only a few hundred meters deeper will eliminate the scattering problem. 
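The qualitative effect of such a short scattering length can be reproduced with a toy isotropic random walk. This is only a sketch (a 2 m source--OM distance, much shorter than the real calibration baselines, is used so it runs quickly, and absorption is ignored); it is not the diffusion model actually fitted by the AMANDA group:

```python
import math, random

random.seed(1)

def path_to_radius(d, lam, max_path=500.0):
    """Random-walk a photon with exponential step lengths (mean lam) and
    isotropic scattering; return the total path when it first leaves radius d."""
    x = y = z = path = 0.0
    while path < max_path:
        step = random.expovariate(1.0 / lam)
        u = random.uniform(-1.0, 1.0)              # isotropic direction
        phi = random.uniform(0.0, 2.0 * math.pi)
        s = math.sqrt(1.0 - u * u)
        x += step * s * math.cos(phi)
        y += step * s * math.sin(phi)
        z += step * u
        path += step
        if x * x + y * y + z * z >= d * d:
            return path
    return None

d, lam = 2.0, 0.2    # 2 m to the OM, 20 cm scattering length
paths = [p for p in (path_to_radius(d, lam) for _ in range(2000)) if p is not None]
print(sum(paths) / len(paths) / d)   # mean path is several times the direct distance
```

Photons arrive late and smeared out in time, the behavior seen in Fig.~\ref{amandatransit}; since the stretch grows rapidly with distance, the tens-of-meters baselines of a real array lose far more timing information, which is why getting below the bubbly ice matters so much.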
The strategy will therefore be to deploy the next set of strings in 1995-96, taking advantage of the verified 60 m absorption length to increase OM spacing, and placing the strings below 1500 m to avoid bubbles. With an increase to 15 m vertical OM spacing, a considerably larger volume can be instrumented. Six strings of 13 OMs each will be deployed in a circular pattern with 60 m radius. The new drilling system may also make it possible to go to larger diameter phototubes, although current plans call for using the same PMTs used in previous deployments\cite{barwick 1994}. As with DUMAND, the results of the 1994 AMANDA deployment did not include detection of astrophysical neutrinos, but did demonstrate important aspects of the technique. Despite the short scattering length, which in effect reduces track reconstruction accuracy to $\pm 10^o$ on the sky, it was possible to perform a number of tests which verified the general viability of the AMANDA concept using the 1994 array. AMANDA has much less overburden than DUMAND, and therefore a much higher background rate due to downward-going muons. However, the absence of bioluminescence and natural radioactivity makes the OM singles noise rate much lower: about 2 kHz as compared to 60 kHz. The mean OM dark noise rates observed (1.8 kHz) are about half what had been anticipated. Finally, it was possible to operate the strings in coincidence with the South Pole Air Shower Experiment (SPASE), which is located about 800 m away from the AMANDA site. Extensive air showers arriving with zenith angles between 37 and 46 degrees and with appropriate azimuth should be seen by both experiments, and this mode of operation has been successfully demonstrated by using SPASE triggers to log AMANDA data\cite{halzen 1994}. Further information on AMANDA is available via the AMANDA Home Page on the World Wide Web. 
The URL address is \noindent \begin{verbatim} http://spice2.physics.wisc.edu/amanda2.html \end{verbatim} \section{Comparison of AMANDA and DUMAND} The following table compares salient features of the two detectors. In addition to common features, both projects have a set of unique advantages and disadvantages, often in the form of a tradeoff. For example, AMANDA has rigidly fixed OM positions and the ability to locate front-end electronics very near the detector elements, on the surface just above the array. On the other hand, DUMAND strings can be readily released and recovered for repair or repositioning, and the use of fiber optic data transmission makes cable length irrelevant. DUMAND's thick seawater overburden greatly reduces event backgrounds due to down-going muons, at the expense of heavier singles rates due to radioactivity and bioluminescence, while AMANDA's ice overburden is less than half as thick but makes no contribution to dark noise. The real costs of deployment are probably about equal, but AMANDA's logistical costs are part of a very large Antarctic research enterprise in which AMANDA is (at present) a small perturbation, while DUMAND's costs are a very visible portion of its budget (although in fact ship and submarine time should eventually be available by interagency cooperation). The two groups have had similar outcomes from their first major deployment attempts this year: partial proof of concept, but not the definitive proof offered by unambiguous neutrino detection. 
\begin{table}[hp] \caption{\bf COMPARISON BETWEEN DUMAND AND AMANDA} \label{dumanda} \medskip \begin{tabular}{|ll|} \hline \bf DUMAND & \bf AMANDA\\ &\\ \bf Seawater -- high noise & \bf Ice -- low noise\\ \hspace{.25in}$\bullet$ $^{40}K$ background & \hspace{.25in}$\bullet$ No $^{40}K$ background\\ \hspace{.25in}$\bullet$ Bioluminescence & \hspace{.25in}$\bullet$ No bioluminescence\\ & \\ \bf Deep: 5000 m & \bf Shallow: 1000 m\\ \hspace{.25in}$\bullet$ Low event background & \hspace{.25in}$\bullet$ High event background\\ \hspace{.25in}$\bullet$ Smart OMs & \hspace{.25in}$\bullet$ Simple OMs\\ \hspace{.25in}$\bullet$ Digital fiber-optic data transfer & \hspace{.25in}$\bullet$ Analog signals to surface - coax cable\\ \hspace{.25in}$\bullet$ Complex underwater electronics & \hspace{.25in}$\bullet$ Simple OMs - processing on surface\\ & \\ \bf Underwater & \bf Under ice\\ \hspace{.25in}$\bullet$ Track visibility proven & \hspace{.25in}$\bullet$ Bubbles remain at 1000m\\ \hspace{.25in}$\bullet$ Well-developed commercial& \hspace{.25in}$\bullet$ Environment less well known\\ \hspace{.35in} technologies & \\ \hspace{.25in}$\bullet$ DSV/ROV required & \hspace{.25in}$\bullet$ Direct access from surface\\ \hspace{.25in}$\bullet$ Recoverable after deployment & \hspace{.25in}$\bullet$ Not recoverable once deployed\\ & \\ \bf Hawaii & \bf Antarctica \\ \hspace{.25in}$\bullet$ Easy access year-round & \hspace{.25in}$\bullet$ Restricted access to site\\ \hspace{.25in}$\bullet$ Local high-tech facilities & \hspace{.25in}$\bullet$ Limited facilities at site\\ \hspace{.25in}$\bullet$ Local university group & \hspace{.25in}$\bullet$ No permanent residents \\ \hspace{.35in}(resident staff planned) & \hspace{.35in}(but continuous staffing) \\ \hspace{.25in}$\bullet$ Near-equatorial site: daily & \hspace{.25in}$\bullet$ Polar site: fixed view of \\ \hspace{.35in} scan of celestial mid-latitudes & \hspace{.35in} celestial northern hemisphere\\ & \\ \hline \multicolumn{2}{c}{\bf Common 
Features:}\\ \multicolumn{2}{c}{Same basic techniques used}\\ \multicolumn{2}{c}{Overall costs $\sim$ same}\\ \multicolumn{2}{c}{Site permits expansion to next-generation size (1 km$^3$)}\\ \end{tabular} \end{table} While both DUMAND and AMANDA are pursuing the Cerenkov light technique, earlier investigations have suggested that a very large volume detector of high energy neutrinos can be constructed at very low cost using acoustical detection. The deposition of energy in the water by produced particles generates a low level characteristic bipolar sound pulse with an effective frequency spectrum peaked in the range 30 to 60 kHz. The hydrophone array built into DUMAND for its positioning system is very efficient in this range, and should be capable of detecting particle cascades of about 1 PeV at a range of 40 m\cite{learned 1993}. Simulation studies suggest that by using noise cancellation and signal coherence techniques ({\it i.e.}, treating our set of hydrophones as a phased array), it will be possible to systematically enhance noise rejection and detect high energy particles. The DUMAND array will be equipped to observe coincidences of OM and acoustical signals and this will provide the first direct practical test of acoustical detection. DUMAND will also supply acoustical equipment to AMANDA for tests of acoustical detection in the ice. Throughout the process of detector construction and deployment, the two groups have engaged in mutual assistance and cooperation despite the inevitable sense of competition. It is quite likely that at some point in the future we will be working together directly to focus resources and expand capabilities. The present DUMAND and AMANDA arrays, even after all currently planned deployments are completed, will serve primarily as test beds and prototypes for a much larger detector. 
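The phased-array idea amounts to delay-and-sum beamforming: shift each hydrophone trace by its geometric delay for a trial arrival direction, then average, so a coherent signal adds while uncorrelated noise averages down roughly as $1/\sqrt{N}$. A toy sketch with invented geometry and noise levels (not DUMAND parameters):

```python
import math, random

random.seed(0)

C_SOUND = 1500.0    # m/s, speed of sound in seawater
FS = 500e3          # sample rate (Hz), well above the 30-60 kHz signal band

def bipolar(t, f0=40e3):
    """Toy one-cycle bipolar pulse in the middle of the 30-60 kHz band."""
    return math.sin(2 * math.pi * f0 * t) if 0.0 <= t < 1.0 / f0 else 0.0

xs = [0.0, 0.3, 0.6, 0.9]                       # hydrophone positions on a line (m)
theta = math.radians(30.0)                      # plane-wave arrival angle
delays = [x * math.sin(theta) / C_SOUND for x in xs]

n, t0 = 400, 200 / FS                           # samples per trace, pulse start time
traces = [[bipolar(i / FS - t0 - d) + random.gauss(0.0, 0.5) for i in range(n)]
          for d in delays]

shifts = [round(d * FS) for d in delays]        # geometric delays in whole samples
beam = [sum(tr[i + s] for tr, s in zip(traces, shifts)) / len(traces)
        for i in range(n - max(shifts))]

rms = lambda v: math.sqrt(sum(x * x for x in v) / len(v))
print(rms(traces[0][:100]), rms(beam[:100]))    # noise rms drops by roughly half
```

With four phones the pre-pulse noise is halved while the aligned pulse survives at full amplitude; scanning the assumed arrival direction turns the same summation into a crude acoustic "telescope."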
\section{The Next Step: km$^3$} Both the DUMAND and AMANDA groups acknowledge that detectors with effective areas on the order of $10^4$ m$^2$ provide marginal capability for detecting neutrino sources given present theoretical estimates as well as data on gamma rays. The aim of the present generation of detectors, including Baikal and Nestor, is to demonstrate the value of neutrino astronomy by providing the first look at the neutrino sky. Definitive results are likely to come from the next generation of neutrino detectors, which must have sensitive volumes on the order of a cubic kilometer. Given the history of DUMAND, with a delay of nearly 30 years between the first discussions of the detector concept and its materialization in hardware, everyone with an interest in neutrino astronomy is concerned about reducing the lead time for the next step. In part, during the early years DUMAND was a concept waiting for the development of appropriate technology ({\it e.g.}, wet-mateable fiber-optic connectors, which became available in the late 80s), but it is certainly not too early to begin design and organizational activities on the second generation now. It seems clear that both the deep-sea and polar-ice approaches have valuable features as well as problems that are not yet resolved, at least to the satisfaction of the community at large. At present it is still possible that AMANDA will find no end to its bubble problem at practical depths. Similarly, although the basic feasibility and technological issues are resolved, it is essential for DUMAND to definitively demonstrate its ability to overcome component reliability problems and operate a complex detector system deep underwater on a long-term basis.
If either group fails to achieve these goals, the direction for future work will be clear; in the happy circumstance that both detectors work as planned, a decision about whether the km$^3$ detector should be underwater or in the ice will be based on assessment of results from initial runs. Several significant initiatives took place in early 1994: a workshop held at the Jet Propulsion Laboratory led to the formation of a US-based coalition to pursue a cubic kilometer detector, and later the European Community's Megascience Workshop resulted in a similar European coalition. At the 1994 Snowmass Summer Study (entitled ``Particle and Nuclear Astrophysics and Cosmology in the Next Millennium'') an interest group combining both coalitions was organized. The existing BAND groups (Baikal, AMANDA, Nestor, DUMAND) are working with the JPL group and others to organize workshops aimed at preparing a conceptual proposal before the end of 1995, so that funding initiatives can begin promptly. Already, JPL workers have begun development of new OM designs which have extremely low power consumption and use optical fibers for power as well as data transmission. Interested individuals should join the group to keep apprised of progress; consult the World Wide Web for further details: \begin{verbatim} http://web.phys.washington.edu/km3.html \end{verbatim} \section{Acknowledgments} Special thanks are due to Steve Barwick, Francis Halzen, John Learned, and Robert Morse for help in assembling the information needed for this review. However, blame for mistakes and invalid opinions expressed lies with me alone. Thanks are also due to the SLAC Summer School organizers and staff.
\section{Introduction} Forecasting from time series data necessarily involves an attempt to understand uncertainty; volatility or the standard deviation is a key measure of this uncertainty and is found to be time-varying in most financial time series. The seminal work of Engle \cite{Engle}, which first treated volatility as a process rather than just a number to estimate, led to tremendous efforts in devising dynamical volatility models in the last two decades. These are of great importance in a variety of financial transactions including option pricing, portfolio and risk management. Excess volatility (well beyond what can be described by a simple Gaussian process) and the associated phenomenon of clustering \cite{benoit,Fama} are believed to be the key factors underlying many empirical statistical properties of asset prices, characterized by a few key ``stylized facts'' \cite{intro1,intro2,intro3,intro4} described later. A good measure of volatility clustering (roughly speaking, large and small changes in asset prices are often followed by large and small changes respectively) is thus important for understanding financial time series and for constructing and validating a good volatility model. The most popular characterization of volatility clustering is the correlation function of the instantaneous volatilities evaluated at two different times, which shows persistence up to a time scale of more than a month. It has also been established that there is a link between asset price volatility clustering and persistence in trading activity (for an extended empirical study on this, see Ref.~\cite{Plerou}). However, the underlying market mechanism for volatility clustering is not clear. The aim of our paper is not to elucidate the mechanism for volatility clustering, but to introduce a more direct measure of it.
Specifically, we propose that the {\em conditional} probability distribution of asset returns $r$ over a period $T$ (given the return, $r_p$, in the previous time period) can be fruitfully used to characterize clustering. This is a direct measure based on the return over a time lag instead of the instantaneous volatility and, we believe, is more relevant to volatility forecasting. We analyze stock market data using this measure, and we have found that the conditional probability can be well described by a scaling relation: $P(r|r_p) = \frac{1}{w(r_p)} f(r/w(r_p))$. This scaling relation characterizes both fat tails and volatility clustering exhibited by financial time series. The fat tails are described by a universal scaling function $f(z)$. The functional form of the scaling factor $w(r_p)$, on the other hand, contains the essential information about volatility clustering on the time scale under consideration. The scaling factors we obtain from the stock market data allow us to identify regimes of high and low volatility clustering. We also present a simple phenomenological model which captures some of the key empirical features. The key ``stylized facts'' about asset returns include the following: The unconditional distribution of returns shows a scaling form (fat tail). The distribution of returns $r$ in a given time interval (defined as the change in the logarithm of the price normalized by a time-averaged volatility) is found to be a power law $P(|r| > x) \sim x^{-\eta_r}$ with the exponent $\eta_r \sim 3$ for U.S. stock markets\cite{gopikrishnan,gabaix}, well outside the L\'{e}vy stable range of 0 to 2. This functional form holds for a range of intervals from minutes to several days, while for larger times the distribution of the returns is consistent with a slow crossover to a normal distribution.
Another key fact is the existence of volatility clustering in financial time series that is by now well established \cite{benoit,Fama,Engle,Bollerslev,intro4}; it can be seen, for example, in the absolute value of the return $|r|$, which shows positive serial correlation over long lags (the Taylor effect \cite{Taylor}). This long memory in the autocorrelation of absolute returns, on a time scale of more than a month, stands in contrast to the short-time correlations of asset returns themselves. Fat tails have been the subject of intense theoretical investigation, from Mandelbrot's pioneering early work\cite{benoit} using stable distributions to the agent-based models of Bak {\em et al.}\cite{bak} and Lux\cite{lux0,lux} (see Ref.~\cite{LeBaron} for a survey of research on agent-based models used in finance). The key problem is to elucidate the nature of the underlying stochastic process that gives rise to both volatility clustering and the power-law (fat) tails in the distribution of asset returns. \section{Conditional Probability and Scaling Form} In an effort to seek a direct quantitative characterization of clustering we consider $P(r|r_p)$, the probability of the return $r$ in a time interval of duration $T$, conditional on the absolute value of the return $r_p$ in the previous interval of the same duration. (We emphasize that the probability is not conditioned on the value of the return at an instant.) By varying $T$, we can check volatility clustering on different time scales. There is a growing literature on conditional measures of distribution for analyzing financial time series (for a review, see Ref.~\cite{Malevergne} and references therein). For example, the conditional probability of return intervals has been used recently to study scaling and memory effects in stock and currency data \cite{Yamasaki}.
We have analyzed both the high frequency data and daily closing data of stock indices and individual stock prices using the conditional probability as a probe. Here we only present results of our analysis of high frequency data of QQQ (a stock which tracks the NASDAQ 100 index) from 1999 to 2004 and daily closing data of the Dow Jones Industrial Average from 1900 to 2004. We emphasize that the properties of the financial time series we present are rather general: we have checked that the same properties are also exhibited in other stock indices and futures data (for example, the Hang Seng Index, Russell 2000 Index, and German government bond futures) as well as individual stocks. We have checked, as was found in the previous studies \cite{gopikrishnan}, that the probability distribution of the returns in the time intervals $T = 1, 2, 4, 8, 16, 32$ days for DJIA exhibits a fat power-law tail with an exponent close to $-4$; this appears to be true for most stock indices and individual stock data. \begin{figure} \includegraphics*[width=8.5cm]{fig1.eps} \caption{(a) Conditional probability of return for $ T = 5$ minutes in QQQ. Different curves correspond to 10 different absolute values of the return $r_p$ in the previous interval, which are groups of bins centered at values ranging from $8.4 \times 10^{-4}$ to $0.011$. The larger the value of $r_p$, the larger the width of the distribution. (b) The conditional probability distribution of return of QQQ (shown in (a)), when scaled by a scale factor $w(r_p)$, collapses to a universal curve. $r_p$ is the absolute value of the return in the previous interval. The tail of the probability distribution can be described by a power law with the exponent approximately equal to $-4$.} \end{figure} We calculate $P(r| r_p)$ by grouping the data into different bins according to the value of $r_p$. In Figure 1(a) we display $P(r,T|r_p, T)$ for $T=5$ minutes for different values of $r_p$.
It is clear from the figure that there is a positive correlation between the width of $P(r| r_p)$ and $r_p$. What is more interesting is that, when $r$ is scaled by the width of the distribution (the standard deviation of the conditional return), $w(r_p)$, the different curves of conditional probability collapse to a universal curve: $P(r| r_p) = w(r_p)^{-1} \bar{P}(r/w(r_p))$. Evidence for this is displayed in Fig.~1(b). Note that on the time scales we have analyzed, the probability distribution is symmetric with respect to $r$. Consequently, in Fig.~1(b) we have only displayed the absolute value of the return. The data collapse is good for a wide range of $T$, and the curves display a power-law tail with a well-defined exponent of approximately $-4$. We examine next the dependence of the scale factor $w(r_p)$ on $r_p$. Fig.~2 shows a plot of the scale factor $w(r_p)$ vs. $r_p$ for different values of $T$. It can be seen from the figure that there is a crossover value $r_c(T)$: for $r_p < r_c(T)$, $w$ is almost constant, while for $r_p > r_c(T)$, $w$ increases with $r_p$. The degree of the dependence of $w$ on $r_p$ can be taken as an indication of the strength of volatility clustering. If there is no volatility clustering $w$ will not depend on $r_p$. Note that there is strong clustering at small $T$. As $T$ increases, the strength of clustering gradually decreases, indicating a crossover to the non-clustering regime. As $T$ increases beyond the time scale of volatility clustering, the clustering disappears. This crossover cannot be seen in the QQQ data as the time scales involved are small. Our analysis of DJIA data shows an indication of such a crossover at the time scale of a few months. In this paper, we do not separate the cases of positive and negative returns in the previous time interval. Thus we do not show explicitly the well-known leverage effect, first expounded by Black \cite{Black}.
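The binning procedure described above is straightforward to sketch in code. The following is an illustrative sketch only (Python with NumPy; the function name and the synthetic demo series are our own, not part of the paper's analysis): each return is binned by the absolute return in the previous interval, $w(r_p)$ is estimated as the conditional standard deviation in each bin, and on a toy series with built-in clustering the resulting $w$ grows with $r_p$.

```python
import numpy as np

def conditional_scale_factors(returns, n_bins=10):
    """Bin each return by the absolute return in the previous interval and
    estimate w(r_p) as the conditional standard deviation in each bin."""
    r = np.asarray(returns, dtype=float)
    r_prev = np.abs(r[:-1])            # |r_p|: absolute return in the previous interval
    r_next = r[1:]                     # r: return in the current interval
    # quantile bin edges so that every bin holds the same number of samples
    edges = np.quantile(r_prev, np.linspace(0.0, 1.0, n_bins + 1))
    idx = np.clip(np.searchsorted(edges, r_prev, side="right") - 1, 0, n_bins - 1)
    centers = np.array([r_prev[idx == b].mean() for b in range(n_bins)])
    w = np.array([r_next[idx == b].std() for b in range(n_bins)])
    return centers, w

# Demo on a synthetic series with built-in clustering: the conditional
# volatility is 0.5*|r_p| + 0.01, so w(r_p) should grow with r_p.
rng = np.random.default_rng(0)
r = np.empty(20000)
r[0] = 0.01
for t in range(1, len(r)):
    r[t] = (0.5 * abs(r[t - 1]) + 0.01) * rng.standard_normal()
centers, w = conditional_scale_factors(r, n_bins=8)
```

Dividing each binned sample of returns by its $w(r_p)$ and overlaying the histograms is then the data-collapse test of Fig.~1(b).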
We have checked that the scaling and data collapse we obtained are equally valid when we separate out the cases of positive and negative returns in the previous interval. The leverage effect is reflected in the scaling factor $w(r_p)$, which shows $w(-r_p) > w(r_p)$ for $r_p > 0$ in the real data. \begin{figure} \includegraphics*[width=8.5cm]{fig2.eps} \caption{ The scale factor $w(r_p,T)$ vs $r_p$ (the absolute value of the return in the previous interval) for different values of $T$, arising from analysis of QQQ data. The dependence is seen to be almost linear for sufficiently large $r_p$.} \end{figure} Figure~3 shows that the same scaling form is also exhibited in DJIA data. We have checked that the data collapse extends also to data for different values of $T$ in addition to different values of $r_p$ displayed here. \begin{figure} \includegraphics*[width=8.5cm]{fig3.eps} \caption{ Conditional probability of return for $ T = 2$ and $T=20$ days in DJIA data. The data corresponding to $T=20$ have been shifted to the right for easy viewing. Different curves correspond to 8 different absolute values of the return $r_p$ in the previous interval. The inset shows the dependence of the width $w(r_p)$ on $r_p$. The tail of the probability distribution can be described by a power law with the exponent approximately equal to $-4$.} \end{figure} The data collapse we have displayed for different $r_p$ and different $T$, the power-law behavior including the value of the exponent, and the behavior of the scale factor which encapsulates features of volatility clustering are the same across data from several other stock indices listed earlier and individual stocks. This empirical universality can be stated as \begin{equation} P(r|r_p) = \frac{1}{w(r_p, T)}f(r/w(r_p, T)). \end{equation} Here $f(z)$ is a universal function describing the fat tail in the distribution.
$f(z)$ satisfies $f(z)\rightarrow$ constant as $z\rightarrow 0$, $f(z) \rightarrow 1/z^4$ as $z \gg 1$, and $\int^{\infty}_{0}f(z)dz = 1$. The dependence of $w(r_p, T)$ on $r_p$, on the other hand, describes the volatility clustering at the time scale $T$. If $w(r_p)$ is a constant (independent of $r_p$), then $P(r|r_p)$ does not depend on $r_p$, and there is no volatility correlation or clustering. The conditional probability distribution contains information about the conditional average of the moments $<r^q>_{r_p}$ of the distribution as well as various volatility correlation functions such as $<r^2r_p^2>$. Given the scaling form we can evaluate these averages and correlation functions in terms of $w(r_p)$, which is itself given by $w(r_p) = \sqrt{<r^2>_{r_p}}$. In particular, we have the moments of the conditional probability distribution given by $M_q(r_p) = <r^q>_{r_p} =C_q w^q(r_p)$ ($C_q$ is a universal constant) and $<r_p^2r^2> = \int dr_pQ(r_p)w^2(r_p)r^2_p$, where $Q(r)$ is the unconditional probability distribution of the return. We believe that this scaling form provides a new and rather complete measure of volatility clustering. \section{Model and Discussion} In the following we will provide the outline of a model that captures the key features exhibited in the conditional probability distribution of stock market data. In a stochastic volatility model, the one-step asset return at time $t$ is written as $\Delta_t = \delta_t z_t$, where $z_t$ is a Gaussian random variable with zero mean and unit variance and $\delta_t$ is the magnitude of the price change. For the relatively short time scales we are interested in we have set the intrinsic growth rate to zero. The distribution of $r$ depends on the dynamics of $\delta_t$: Slow changes in $\delta_t$ lead to volatility clustering. There exist a few classes of volatility models that have been used to describe the dynamics of $\delta_t$.
These include the widely used models based on GARCH-like processes \cite{Engle}, and more recently, the models based on a multifractal random walk (MRW) \cite{MRW} that will be discussed later. In our model, the dynamics of $\delta(t)$ is specified via the random variable $n(t)$, with $\delta(t) = \delta_0 \gamma^{n(t)}$. In order to describe both the behavior of probability distributions and temporal correlations we have devised the following model for the evolution of $n(t)$. The time evolution of the variable $n$ is assumed to be independent of the change in $S(t)$ and $n(t)$ executes a random walk with reflecting boundaries: We enforce the condition $n(t) \ge 0$; thus $\delta_0$ is the minimum value of $\delta(t)$. An upper bound in $n$, $n_{max}$, can also be incorporated without affecting the scaling behavior of the model. We typically choose $\gamma^{n_{max}} \sim 30$. The change in $n(t)$, $ \delta n(t)$ is given by \begin{eqnarray} \label{dn} \delta n(t)\,&=&\,\eta_t\,+\,\alpha\{\sum_{i=1}^{N_c} [K(i+1)-K(i)]\eta_{t-i} +K(1)\eta_t \nonumber \\ & & -K(N_c+1)\eta_{t-N_c} \}\,-\,\beta\overline{\eta}\,. \end{eqnarray} In the preceding $\{\eta_j\}$ are independent random variables that assume the value $+1$ with probability $p$ and $-1$ with probability $1-p$. This asymmetry builds in the tendency to decrease the volatility. The mean value of $\eta_i$, $2p-1<0$, is denoted by $\overline{\eta}$. We comment on the implications of the different terms next. We focus on the limit $\alpha=0$ and $\beta=0$ first since it is amenable to analytic investigation; this model is related to a model discussed in Ref.~\cite{chen}. Note that this limit already builds in volatility clustering as it takes many steps to change $n(t)$ significantly. It is easy to show that the steady-state probability distribution of $n$ is given by $P(n) = (1-e^{-\lambda})e^{-\lambda n} \sim \lambda e^{-\lambda n}$, where $\lambda = \ln((1-p)/p)$.
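As a check on this limiting case, the following sketch (our own illustration, not the authors' code) simulates the reflected biased walk for $n(t)$ and compares the occupation ratio $P(n{=}1)/P(n{=}0)$ with the detailed-balance prediction $e^{-\lambda} = p/(1-p)$:

```python
import numpy as np

def simulate_n(p, steps, seed=0):
    """alpha = beta = 0 limit: n(t) steps +1 with probability p and -1 with
    probability 1-p, with a reflecting boundary at n = 0."""
    rng = np.random.default_rng(seed)
    eta = np.where(rng.random(steps) < p, 1, -1)
    n = np.empty(steps, dtype=int)
    cur = 0
    for t in range(steps):
        cur = max(cur + eta[t], 0)   # reflecting boundary n >= 0
        n[t] = cur
    return n

p = 0.45                              # p < 1/2 biases the walk toward small n
n = simulate_n(p, 400_000)
# Detailed balance gives P(n+1)/P(n) = p/(1-p) = e^{-lambda}, i.e. the
# exponential steady state P(n) ~ e^{-lambda*n} quoted in the text.
ratio = np.mean(n == 1) / np.mean(n == 0)
```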
The distribution of $\delta(t)$ is then given by a power-law, $P(\delta) \sim \delta^{-\lambda/\ln \gamma -1}$. This mechanism for generating a power-law distribution was first noted by Herbert Simon \cite{simon} in 1955. We have studied this limiting case of the model numerically and find that many features of the conditional probability distribution exhibited by the real data, including the power law and scaling behaviors, are reproduced. We can show analytically that the conditional probability distribution exhibits scaling collapse, and that scale-invariant behavior with a power law tail (with the exponent $-4$ if we choose $p=1/(1+\gamma^2)$) exists for $r >\sigma_c$, where $\sigma_c = \sigma_0 \gamma^{(T/\delta t) (1-2p)}$. The numerical data in fact show a somewhat larger range of power-law behavior. From our analysis, the re-scaling factor required for data collapse is simply proportional to $r_p$ when $r_p$ is not too small, as observed both in the real data and in numerical simulations of the model. This simple limit captures important features of volatility clustering reflected in conditional probability distributions. The second term in Eq.~(\ref{dn}) is based on the multifractal random walk model that builds in long-time correlations via a logarithmic decay of the log volatility correlation $\langle \log\vert r(t+\tau)\vert \log \vert r(t)\vert\rangle$. This term allows us to reproduce the more subtle temporal autocorrelation behavior observed in the data and follows the implementation in Ref.~\cite{Sornette}. The long-term memory effects are incorporated by making the change in $n(t)$ depend on the steps $\eta_{t-i}$ at earlier times with a kernel given by $K(i)\,=\,1/\sqrt{i}$ (this corresponds to the MRW part of the model given by $n(t) = \alpha\sum_{i=1}^{N_c}K(i)\eta_{t-i}$) and allowing memory up to $N_c$ time steps, chosen to be $1000$ in our simulations. The final term allows us to control the rate of drift to lower values of $n$.
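A literal transcription of Eq.~(\ref{dn}) into code might look as follows. This is a sketch under our own conventions (function and variable names are hypothetical, and the reflecting boundaries are applied to a continuous $n$), with the parameter choices quoted in the text:

```python
import numpy as np

def simulate_model(T_steps, gamma=1.05, alpha0=0.1, beta=1.3,
                   N_c=1000, delta0=1e-4, seed=1):
    """One-step returns Delta_t = delta_t z_t with delta_t = delta0*gamma**n(t),
    where n(t) is updated by Eq. (2) with kernel K(i) = 1/sqrt(i)."""
    rng = np.random.default_rng(seed)
    p = 1.0 / (1.0 + gamma**2)              # choice quoted for the -4 tail exponent
    n_max = np.log(30.0) / np.log(gamma)    # gamma**n_max ~ 30
    alpha = alpha0 / np.log(gamma)
    eta_bar = 2.0 * p - 1.0                 # mean of eta (negative)
    K = 1.0 / np.sqrt(np.arange(1, N_c + 2))  # K[i-1] = K(i) for i = 1..N_c+1
    dK = K[1:] - K[:-1]                     # K(i+1) - K(i) for i = 1..N_c
    # eta[t + N_c] plays the role of eta_t, so eta_{t-i} = eta[t + N_c - i]
    eta = np.where(rng.random(T_steps + N_c) < p, 1.0, -1.0)
    n = 0.0
    n_traj = np.empty(T_steps)
    returns = np.empty(T_steps)
    for t in range(T_steps):
        # sum over eta_{t-1} .. eta_{t-N_c}, weighted by K(i+1) - K(i)
        memory = np.dot(dK, eta[t:t + N_c][::-1])
        dn = (eta[t + N_c]
              + alpha * (memory + K[0] * eta[t + N_c] - K[N_c] * eta[t])
              - beta * eta_bar)
        n = min(max(n + dn, 0.0), n_max)    # reflecting boundaries at 0 and n_max
        n_traj[t] = n
        returns[t] = delta0 * gamma**n * rng.standard_normal()
    return returns, n_traj

returns, n_traj = simulate_model(20000)
```

Binning these synthetic returns with the same conditional-probability machinery used on the market data is then the natural consistency check.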
We have simulated this model with $\alpha_0=0.1$ ($\alpha=\alpha_0/\ln(\gamma)$) and $\beta\approx 1.3$ for $\gamma=1.05$ and displayed the results for $P(r\vert r_p)$ in Figure 4. The model with the stated parameters reproduces the fat tail in the unconditional probability distribution for $r$ observed in the data. The non-universal scale factor $w(r_p)$ is similar to those found from our empirical analysis. We have also checked that this model retains the same temporal behavior in the log-volatility correlation exhibited by the pure MRW model. Thus the model we have investigated is capable of reproducing both probability distributions (conditional and unconditional) and temporal autocorrelations. We note in passing that the model as it stands cannot be used to study the leverage effect; however, it can be modified to do so. \begin{figure} \includegraphics*[width=8.5cm]{fig4.eps} \caption{ The scaled conditional probability distributions of return for the mixed model given by Eq.~(2) with $\gamma=1.05$, $\alpha_0=0.1$, and $\beta=1.3$. The time lag is $T=10$. The curves, corresponding to different absolute values of the return $r_p$ in the previous interval, collapse onto a universal curve when scaled by a scale factor $w(r_p)$. The tail of the probability distribution is again described by a power law with the exponent equal to $-4$. The inset shows the dependence of $w(r_p)$ on $r_p$. } \end{figure} In summary, we have proposed a direct measure of volatility clustering in financial time series based on the {\em conditional} probability distribution of asset returns over a time period given the return over the previous time period. We discovered that the conditional probability of stock market data can be well described by a scaling relation, which reflects both fat tails and volatility clustering of the financial time series. In particular, the strength of volatility clustering is reflected in the functional form of the scaling factor $w(r_p)$.
By extracting $w(r_p)$ from market data, we are able to estimate the future volatility over a time period, given the return in the previous period. This may be useful in modelling financial transactions including option pricing, portfolio and risk management; all these depend crucially on volatility estimation. The clustering of activities and fat tails in the associated distribution are very common in the dynamics of many social \cite{Barabasi} and natural phenomena (e.g. earthquake clustering \cite{Kagan}). The conditional probability measure we have presented in this paper may serve as a useful tool for characterizing other clustering phenomena. This work was supported by the National University of Singapore research grant R-151-000-032-112.
\section{Introduction} Interest in strongly interacting Fermion systems has recently been invigorated with the discovery of high temperature superconductors. Another strongly interacting Fermion system is the atomic nucleus. The stability of nuclei and description of their low-energy excitations have been understood for many years. But the higher energy excited states of the nucleus are complex and can only be described statistically. Wigner \cite{WIGNER} suggested that the Hamiltonian of this system should be similar to a random matrix and that the distribution of spacings of nuclear energy levels should reflect this \cite{PORTER,BOHIGAS}. In particular the Gaussian Orthogonal Ensemble (GOE) of N$\times$N real symmetric matrices, invariant under orthogonal transformations, with random matrix elements that are Gaussian distributed (zero-mean, variance $v^{2}$ [diagonal elements have variance $2v^{2}$]) has the following properties: a) The ensemble-averaged density of states has the elliptical form \cite{WIGNERB} $\rho(x) = \sqrt{4 - x^{2}}/2\pi$, where $x = E/\sqrt{Nv^2}$, $|x|<2$ and zero otherwise. This is referred to as Wigner's semicircle law. b) The probability that the eigenvalues be $\lambda_{1},\cdots,\lambda_{N}$ is \cite{WISHART}, \begin{equation} P(\lambda_{1},...,\lambda_{N}) = { { 2^{N(N-1)/4} } \over { N!(2v)^{N(N+1)/2} \prod_{g=1}^{N}\Gamma({g\over 2}) } } e^{ -\sum_{i}\lambda_{i}^{2}/4v^{2} } \prod_{i<j}|\lambda_{i}-\lambda_{j}| \quad . \end{equation} The last term in the above equation gives rise to energy level repulsion. The distribution of spacings between pairs of energy levels has been found empirically to be quite accurately described by the `Wigner surmise' \cite{WIGNER} based on two-dimensional matrices. This `surmise' is that the probability that the spacing between two adjacent levels be $s$ is $P(s) = (s\pi/2)\exp{(-s^{2}\pi/4)}$, where the probability has been normalized so that $\langle s\rangle = 1$.
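For readers who want to reproduce the GOE statistics numerically, a minimal sketch (our own, using a crude restriction to the bulk of the semicircle in place of a proper unfolding) samples real symmetric Gaussian matrices and normalizes nearest-neighbour spacings to unit mean:

```python
import numpy as np

def goe_spacings(N, n_matrices, seed=0):
    """Sample GOE matrices and collect nearest-neighbour level spacings
    from the bulk of the spectrum, normalized so that <s> = 1."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_matrices):
        A = rng.standard_normal((N, N))
        # Symmetrizing gives off-diagonal variance 1/2 and diagonal variance 1,
        # i.e. the v^2 / 2v^2 convention of the text with v^2 = 1/2.
        H = (A + A.T) / 2.0
        ev = np.linalg.eigvalsh(H)        # sorted eigenvalues
        bulk = ev[N // 4: 3 * N // 4]     # avoid the edges of the semicircle
        s = np.diff(bulk)
        out.extend(s / s.mean())          # normalize to unit mean per matrix
    return np.asarray(out)

def wigner_surmise(s):
    """P(s) = (pi*s/2) * exp(-pi*s^2/4), normalized with <s> = 1."""
    return (np.pi * s / 2.0) * np.exp(-np.pi * s**2 / 4.0)

spacings = goe_spacings(100, 50)
```

A histogram of `spacings` against `wigner_surmise` then exhibits the level repulsion $P(s)\to 0$ as $s\to 0$ that distinguishes the GOE from the Poisson case.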
In contrast integrable systems, which have as many constants of the motion as degrees of freedom, and for which each energy level can be labelled by that many quantum numbers, have generically a Poisson distribution, $P(s) = e^{-s}$, for the energy level spacing \cite{BERRY}. The Hamiltonians of these systems can be thought of being representable by random {\it diagonal} matrices. The interpolation between Poisson and GOE distributions has been modelled by so-called `band random matrix ensembles' (BRME) \cite{CASATI}. In these ensembles only the off-diagonal elements in some sense `close' to the diagonal are non-zero and random. This is meant to interpolate between the random diagonal matrix and the GOE in which all off-diagonal elements are non-zero and random. A BRME might be relevant for local tight-binding models since in a natural basis of states (e.g. the one that is diagonal in $S^{z}_{i}$, for all sites i) only a few entries will have a non-zero value. On the other hand the non-zero matrix elements will be scattered about and not all close to the diagonal. Moreover expressing the Hamiltonian in a basis where all the obvious symmetries are also diagonal will leave us with a block-diagonal matrix where all the blocks will have only non-zero entries (particularly when we diagonalize with respect to total spin). Therefore we do not see how to justify using the BRME to interpret our results. The theoretical motivation for the studies undertaken here and by Montambaux {\it et al.} \cite{MONT} is the search for a microscopic theory of high temperature superconductors. In the `normal' state of these materials they are not Landau-Fermi liquids \cite{PWA}. One of the challenges in the field is to prove or disprove the existence of a Fermi liquid in two-dimensional interacting electron models. In a Landau-Fermi liquid the momenta and spin $\{k_{i},\sigma_{i}\}$ of excited quasiparticles form a set of good quantum numbers. 
Those who have modelled high temperature superconductors by a strongly interacting model such as the large-U Hubbard model or t-J model and tried to construct elementary single-particle excitations have not had success in constructing elementary quasiparticle excitations which are weakly interacting. In one dimension, Fermi liquid behaviour and the set of momenta, charge and spin of Landau quasiparticles may be replaced by the integer parameters of a Bethe Ansatz solution of an interacting model (if one exists). If this does not happen in two dimensions, how then might Landau-Fermi liquid behaviour disappear? It has been proposed \cite{RAMMAL} that perhaps there {\it do not exist} weakly interacting quasiparticles whose momenta, charge and spin would be a `good' set of quantum numbers. In that case, the absence of `good' quantum numbers might be signalled by level statistics resembling those of random matrices. If, on the other hand, an interacting Fermion system retained Landau-Fermi liquid behaviour one might expect to see the level statistics of an integrable system, especially for low energy excitations. The first numerical study along these lines was performed by Montambaux {\it et al.} \cite{MONT} who showed that a special case of the doped t-J model has a level distribution agreeing quite well with that of the GOE. In order to investigate the transition between integrability and non-integrability we have studied the energy level spacing in two integrable quantum spin systems and related, but non-integrable, models which may be obtained from an integrable one by tuning a single parameter. The primary integrable model we worked with was the S=1/2 antiferromagnetic chain. This model is well studied and enables us to compare the behaviour of the level separation distribution with the known properties of this system as it is perturbed. \section{Numerical procedure} For simplicity and clarity we chose to work with isotropic spin systems.
In this case the total spin and total $S^{z}$ are good quantum numbers. It is only necessary to consider the subspace $S^{z}=0$ which contains all of the eigenenergies. Open boundary conditions were chosen. The eigenstates are representations of the trivial spatial symmetries, namely, reflection and rotation in spin space. Thus they can be grouped according to their respective quantum numbers, parity P and total spin S. The perturbations which carry the system from an integrable to a non-integrable one will always respect the trivial spatial symmetries. States in different (P,S) sectors will never be coupled and their energy levels never correlated. Thus we calculate the energy level separation distribution within each (P,S) sector separately. In order to obtain good statistics one requires a large number of states in each (P,S) sector. It was for this reason that periodic boundary conditions for the spin chain were not used. In that case the parity under reflection P (which takes the values $\pm 1$), would be replaced by momentum which takes on L values where L is the length of the chain. The Hamiltonian was diagonalized numerically using the Jacobi method. States were sorted by energy, S and P. In order to correct for gross variations of the density of states as a function of energy the level spacing was normalized by the smoothed local density of states. This process is commonly referred to as unfolding the spectrum to remove a fluctuating local level density. This was performed for each set of quantum numbers separately. In general this did not affect the level spacing distribution very much at all as the density of states was generally constant with a fall-off at the `band' edges. The states with energies near the band edges were discarded by dividing the states of each (P,S) sector into ten bins ordered by energy and of equal width. The states of the first and last bins were then discarded. 
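The unfolding step described above can be sketched as follows. This is an illustrative implementation, not the authors' code: we fit a low-order polynomial to the spectral staircase function as a stand-in for whatever smoothing was actually used, and the demo levels are synthetic.

```python
import numpy as np

def unfold(levels, poly_deg=7):
    """Unfold a spectrum: fit a smooth polynomial to the staircase counting
    function N(E) and map the levels through it, so the local mean spacing
    becomes unity everywhere."""
    E = np.sort(np.asarray(levels, dtype=float))
    z = (E - E.mean()) / E.std()          # rescale for a numerically stable fit
    stair = np.arange(1, len(E) + 1)      # staircase counting function N(E)
    coeffs = np.polyfit(z, stair, poly_deg)
    x = np.polyval(coeffs, z)             # unfolded levels
    s = np.diff(x)
    return s / s.mean()                   # normalized spacings, <s> = 1

# Demo: levels with exponential (Poisson) spacings but a slowly varying
# local density; after unfolding the spacings should again look Poissonian.
rng = np.random.default_rng(2)
raw = np.cumsum(rng.exponential(size=4000))
levels = raw + 0.0001 * raw**2            # smooth distortion of the density
s = unfold(levels)
```

For Poisson statistics the fraction of unfolded spacings below the mean should be close to $1 - e^{-1} \approx 0.632$, which the demo recovers.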
In order to compare with the statistical distributions, the energy level separations were normalized to have a mean of unity. After discarding results from (P,S) sectors with fewer than 50 states (generally the high spin sectors) the results of the remaining sectors were combined at this point in order to improve the statistics. We checked that all of the sectors had roughly the same behaviour. The probability function $P(s)$ was plotted by binning the data and again normalizing the number of states in each bin so that $\int P(s) ds = 1$. In some cases $I(y) = \int_{0}^{y}P(s)ds$ was calculated so this last binning step could be skipped. It is useful to know, given the typical size of the Hamiltonian matrix (after sorting by the straightforward symmetries), how well the level spacings of the eigenvalues of a random matrix of that size follow the Wigner surmise. The linear matrix dimensions encountered in this work are in the range 100 -- 500. Although this sort of calculation has been published in the past \cite{PORTER}, we have re-done such a calculation and in Fig. \ref{RMT} we show the $P(s)$ obtained (by the method described above) with the matrices of the same size as encountered in the (P,S) sectors of an open L=14 site chain. Fig. \ref{RMT} shows the level of `noise' expected even if the random matrix hypothesis is satisfied. Also shown is the `noise' level expected upon comparing $P(s)$ of random diagonal matrices of these sizes to a Poisson distribution. \section{Bethe Chain} We begin by examining the familiar S=1/2 antiferromagnetic chain on L sites with nearest-neighbor coupling, \begin{equation} H = \sum_{i=1}^{L-1} J{\bf S}_{i}\cdot{\bf S}_{i+1} \quad . \end{equation} This model remains integrable with our choice of open boundary conditions. In Fig. \ref{BETHE} we show P(s) for chains of length L=12 and L=14 (L=13 is similar but for clarity it is not shown). They are compared with the Poisson distribution and Wigner's surmise for the GOE.
Consistent with the integrability of this system, the agreement with the Poisson distribution is good, especially in the tail. There is a small deviation from the Poisson distribution at intermediate $s$ which is more than that of random diagonal matrices of similar dimensions. The comparison of L=12 and L=14 cases gives an idea of finite size effects which may explain this deviation. \section{Two coupled chains} In this section we consider two open chains. They are coupled by a simple nearest-neighbor interaction, \begin{equation} H = \sum_{i=1}^{L-1} \sum_{j=1}^{2} J{\bf S}_{i,j}\cdot{\bf S}_{i+1,j} + \sum_{i=1}^{L} J_{\perp}{\bf S}_{i,1}\cdot{\bf S}_{i,2} \end{equation} where $j=1,2$ is the chain index. An additional symmetry, reflection between chains, appears and so the eigenstates were also sorted by parity under this reflection. For zero coupling the system is integrable and when the coupling is turned on the system is not integrable (in fact our calculation erases any doubt that this system might have been integrable). Such a coupling between two chains is believed to be a relevant perturbation for the ground state \cite{SAKAI}. We studied this system with the hope that perhaps the relevance or irrelevance of the inter-chain coupling might be apparent in the level spacing distribution. That is, if the coupling were irrelevant, the spectrum would look like that of two integrable chains. If the coupling were relevant, then the spectrum would look like that of a non-integrable system. The distribution $P(s)$ for two chains of length L=7 is plotted in Fig. \ref{JPERP} for various values of $J_{\perp}/J$. There is an evolution from a Poisson distribution to the Wigner surmise as $J_{\perp}/J$ is turned on. 
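For small systems the two-chain Hamiltonian above can be assembled by brute force with dense matrices. The following is a minimal sketch for checking conventions; the L=7 ladder of the text, a $2^{14}$-dimensional problem, requires the symmetry-resolved treatment described earlier.

```python
import numpy as np

# spin-1/2 operators
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2

def site_op(op, i, n):
    """Embed a single-site operator at site i of an n-site system."""
    out = np.array([[1.0 + 0j]])
    for k in range(n):
        out = np.kron(out, op if k == i else np.eye(2))
    return out

def bond(i, j, n):
    """Heisenberg exchange S_i . S_j on an n-site system."""
    return sum(site_op(o, i, n) @ site_op(o, j, n) for o in (sx, sy, sz))

def ladder(L, J=1.0, Jperp=0.5):
    """Two open L-site chains (sites 0..L-1 and L..2L-1) plus L rungs."""
    n = 2 * L
    H = np.zeros((2 ** n, 2 ** n), dtype=complex)
    for c in range(2):                        # legs of the two chains
        for i in range(L - 1):
            H += J * bond(c * L + i, c * L + i + 1, n)
    for i in range(L):                        # rungs
        H += Jperp * bond(i, L + i, n)
    return H
```

One can verify directly that $H$ commutes with total $S^{z}$, so the $S^{z}=0$ block can be diagonalized separately, as in the text.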
In order to quantify the evolution between these two distributions we shall describe it by the single parameter \cite{SHKLOVSKII} $I = \int_{0}^{\eta} P(s)ds$, where $\eta\approx 2.002$ is the greater of the two values of $s$ where the Poisson and GOE distributions cross. At the crossing point $I$ is most sensitive to the difference between Poisson and GOE distributions. For the Poisson and GOE distributions $I$ has the values 0.8649 and 0.9571, respectively. In Fig. \ref{LOGJPERP}, $I$ is plotted as a function of ${\rm ln}(J_{\perp}/Jd)$ separately for the values S=0, 2, 4, 6, 8 of total spin. The parameter $Jd$ is the average spacing between energy levels for a given value of S. It was extracted from our numerical results. We subtracted the (empirical) small $J_{\perp}$ limit of $I$ before plotting because $I$ did not converge to the ideal Poisson distribution value of 0.8649 (probably due to finite size effects). Fig. \ref{LOGJPERP} shows that the transition from Poisson to GOE is roughly the same in each of the spin sectors that were averaged to arrive at Fig. \ref{JPERP}. Our results are consistent with the idea that, in general, level repulsion will be fully developed when the typical energy shift due to a perturbation is of the order of the typical spacing between unperturbed energy levels. For two chains of length L=7, the average level spacing of the large sectors ranges from 0.03 -- 0.07 J. The expectation value of the $J_{\perp}$ perturbation is difficult to estimate, but there are seven links between the two chains and the rough order of magnitude of $\langle{\bf S}_{i,1}\cdot{\bf S}_{i,2}\rangle$ should be $1/4$. Thus before ${\rm ln}(J_{\perp}/Jd)$ reaches $-1$ or so, level repulsion should have set in. This is observed, and so our results are consistent with a transition to non-integrability for arbitrarily small $J_{\perp}/J$ in the thermodynamic limit. 
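The quoted numbers are easy to verify directly: $\eta$ is the larger root of $e^{-s} = (\pi s/2)\,e^{-\pi s^{2}/4}$, and both integrals have closed forms. The following quick numerical check is ours, not part of the original analysis.

```python
import math

def poisson_integral(eta):
    """I = int_0^eta exp(-s) ds for the Poisson distribution."""
    return 1.0 - math.exp(-eta)

def goe_integral(eta):
    """I = int_0^eta (pi s / 2) exp(-pi s^2 / 4) ds (Wigner surmise)."""
    return 1.0 - math.exp(-math.pi * eta ** 2 / 4)

def crossing_point(lo=1.0, hi=3.0, tol=1e-12):
    """Larger s at which the Poisson and GOE densities cross (bisection;
    f changes sign exactly once on [1, 3])."""
    def f(s):
        return math.exp(-s) - (math.pi * s / 2) * math.exp(-math.pi * s ** 2 / 4)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

This reproduces $\eta \approx 2.002$ together with $I \approx 0.8649$ (Poisson) and $I \approx 0.9571$ (GOE).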
Another fact supporting the idea of comparing mean energy spacings with the size of the perturbation is that the level spacing distributions are roughly similar if one changes the sign of $J_{\perp}$. \section{Next-nearest-neighbor-coupled chain} We now consider a chain with next-nearest-neighbor (NNN) coupling, \begin{equation} H = \sum_{i=1}^{L-1} J{\bf S}_{i}\cdot{\bf S}_{i+1} + \sum_{i=1}^{L-2} J_{2}{\bf S}_{i}\cdot{\bf S}_{i+2} \quad . \label{NNN} \end{equation} For $J_{2}/J = 0$ this system is of course integrable. Near $J_{2}/J=0.24$ it is believed that the ground state of this system undergoes a transition from a liquid-like to a dimer-like ground state \cite{JULLIEN}. At $J_{2}/J=0.5$ the ground state is known \cite{MAJUMDAR}, and is simply the (doubly degenerate) dimer solid. It would be interesting to see if this qualitative behaviour of the ground state is at all reflected in the level spacing distribution $P(s)$. One factor which may be significant is the proximity of the integrable `$1/r^{2}$' model \cite{HALDANE}, which is discussed in the following section. In Fig. \ref{J2} we plot P(s) for a number of values of $J_{2}/J$ with L=13. There is no special behaviour near the point $J_{2}/J = 0.24$ except that level repulsion settles in continuously but more slowly (as a function of $J_{2}$ or $J_{\perp}$) than for the coupled chain problem. To illustrate this explicitly we plot the parameter $I$ versus ${\rm ln}(2J_{2}/Jd)$ in Fig. \ref{LOGJ2}. Subtracted from $I$ is its empirical value when $J_{2}=0$. $Jd$ is again the observed average level spacing which is different for sectors of different total spin. The extra factor of two appears because NNN coupling introduces one coupling per site whereas in the inter-chain coupling problem there is one extra coupling for every two sites. Comparing Fig. \ref{LOGJPERP} to Fig. 
\ref{LOGJ2} one sees that the parameter $I$ starts to deviate from its value for the integrable case at a larger value of ${\rm ln}(2J_{2}/Jd)$ than ${\rm ln}(J_{\perp}/Jd)$ for coupled chains. So in terms of affecting integrability it would seem that interchain coupling is a somewhat stronger perturbation than NNN coupling. This behaviour is also evident upon examining the whole integrated probability distribution curves $I(y) = \int_{0}^{y}P(s)ds$ for these models. Another way of understanding the above observation is that the resistance to level repulsion might be due to the proximity of the integrable $1/r^{2}$ model Eq. \ref{SR2}. When $J_{2}/J = 0.25$ the Hamiltonian \ref{NNN} contains the first two terms of Eq. \ref{SR2}. In order to test this idea we evaluated the level spacing distribution for a {\it ferromagnetic} ($J_{2}/J < 0$) coupling. We found that the level spacing distribution as a function of $|J_{2}/J|$ behaved essentially the same as for antiferromagnetic NNN coupling. So the proximity of the $1/r^{2}$ model is perhaps not responsible. Another, less likely, possibility is that there is a hitherto unknown integrable model nearby with ferromagnetic NNN coupling! \section{$1/r^{2}$ chain} The spin 1/2 periodic chain with Hamiltonian \begin{equation} H = \sum_{i,n} {J\over 2}{\rm sin}^{-2}(n\pi/L) {\bf S}_{i}\cdot{\bf S}_{i+n} \label{SR2} \end{equation} was studied by Haldane and Shastry \cite{HALDANE} and shown to be integrable. We have studied an open chain version, \begin{equation} H = \sum_{i,j=1;i\ne j}^{L} {J\over 2}|i-j|^{-2} {\bf S}_{i}\cdot{\bf S}_{j} \end{equation} in order to avoid the appearance of L conserved momenta which would reduce the statistical significance of the level spacing distribution. The results are summarized in Fig. \ref{USR2}. The level spacing distribution is strikingly unusual in that the probability of closely spaced levels is {\it larger} than for a Poisson distribution. 
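The open-chain $1/r^{2}$ Hamiltonian can likewise be assembled for small L (a dense-matrix sketch along the same lines as before; note that the ordered double sum with its factor $J/2$ counts each pair twice, so each pair $i<j$ carries coupling $J|i-j|^{-2}$):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2

def site_op(op, i, n):
    """Embed a single-site operator at site i of an n-site chain."""
    out = np.array([[1.0 + 0j]])
    for k in range(n):
        out = np.kron(out, op if k == i else np.eye(2))
    return out

def inverse_square_chain(L, J=1.0):
    """Open chain H = sum_{i<j} J |i-j|^{-2} S_i . S_j."""
    H = np.zeros((2 ** L, 2 ** L), dtype=complex)
    for i in range(L):
        for j in range(i + 1, L):
            H += (J / (j - i) ** 2) * sum(
                site_op(o, i, L) @ site_op(o, j, L) for o in (sx, sy, sz))
    return H
```

Since the exchange is isotropic, $H$ commutes with the total spin, consistent with the (P,S) sorting used throughout.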
One way that this might arise is through the Landau levels of an external magnetic field but no such field is present here. It would be interesting to know if recent advances in understanding this model \cite{HALDANEA} could explain this behaviour. \section{Conclusions} In this work we have studied the level spacing distribution for interacting quantum many-body systems represented by antiferromagnetic spin 1/2 chains. We have confirmed that the level spacing distribution for the integrable Bethe chain is Poissonian and that certain perturbations lead to level repulsion. A system was found, the $1/r^2$ model, which displays level attraction. We were able to track the transition from integrability to non-integrability. We conclude, by a small system diagonalization, that certain systems such as two coupled chains or the NNN coupling model (irrespective of the sign of coupling) are definitely {\it not} integrable. A possible problem which did not appear was that of a long chain `almost' having translation invariance and hence an `almost' good momentum quantum number. That would introduce additional degeneracies in the non-integrable models, but none were seen. Level repulsion seems to set in, as one might guess, when the perturbation is of the same size as the typical spacing between energy levels. In the thermodynamic limit the extension of this idea would require some care. One would need to scale both the energy level spacing and perturbation with system size. Additional complications would arise were one to also consider a low energy limit where the density of states is changing rapidly. We found evidence that the introduction of a second space dimension has a slightly stronger effect on integrability than the introduction of the NNN coupling. In the NNN coupling study we saw no special behaviour near $J_{2}/J = 0.24$ other than a resistance to level repulsion. 
It is perhaps not surprising that a qualitative change in the ground state does not affect the level statistics of the bulk of the states. On the other hand at non-negligible temperatures these higher energy states would be important. Indeed the characteristic linear in temperature resistivity of the normal state of high temperature superconductors persists up to $T > 500{\rm K}$. But we are a long way from formulating transport theory in terms of the random matrix approach. One must go far beyond simple level statistics in order to consider the response functions of an interacting Fermion system. \acknowledgements The authors wish to thank J. Bellisard, T. Dombre, B. Dou\c{c}ot, L. Levy, D. Poilblanc, S. Shastry, and C. Sire.
\section{Introduction} A differential graded (dg) operad $\mathcal{O}$ is {\it formal} if there exists a sequence of quasi-isomorphisms (of dg operads) $$ \mathcal{O} \,\stackrel{\sim}{\leftarrow}\, \bullet \,\stackrel{\sim}{\rightarrow}\, \bullet \,\stackrel{\sim}{\leftarrow}\, \bullet ~ \dots ~ \bullet \,\stackrel{\sim}{\rightarrow}\, H^{{\bullet}}(\mathcal{O}) $$ connecting $\mathcal{O}$ to its cohomology $H^{{\bullet}}(\mathcal{O})$. Formality for dg operads (and other algebraic structures) is a subtle phenomenon. Currently, there are no effective tools for determining whether a given dg operad is formal or not. Moreover, in various interesting examples (including the braces operad $\mathsf{Br}$ \cite{Br}, \cite{K-Soi}, \cite{M-Smith}, its ``framed'' version $\mathsf{CBr}$ \cite{Campos}, \cite{Ward} and the Kontsevich-Soibelman operad $\mathsf{KS}$ \cite{K-Soi1}, \cite{Thomas-KS}) all known proofs of formality require transcendental tools \cite{K-mot}, \cite{LV-formality}, \cite{Dima-disc}, \cite{Thomas-KS}. In this paper we consider a dg operad $\mathcal{O}$ defined over the field ${\mathbb Q}$ of rationals and assume that $\mathcal{O} \otimes_{{\mathbb Q}} {\mathbb K}$ is formal for some field extension\footnote{In concrete examples, ${\mathbb K} = {\mathbb R}$ or ${\mathbb C}$.} ${\mathbb K}$ of ${\mathbb Q}$. We consider a cobar resolution ${\mathrm{Cobar} }(\mathcal{C}) \stackrel{\sim}{\to} H^{{\bullet}}(\mathcal{O})$ of $H^{{\bullet}}(\mathcal{O})$ and show that, under some mild conditions on $\mathcal{O}$ and on the resolution ${\mathrm{Cobar} }(\mathcal{C})$, there is an explicit algorithm which allows us to produce a formality quasi-isomorphism\footnote{Recall that $\mathcal{O}$ is formal if and only if there exists a quasi-isomorphism of dg operads \eqref{desired}.} \begin{equation} \label{desired} {\mathrm{Cobar} }(\mathcal{C}) ~\stackrel{\sim}{\longrightarrow} ~ \mathcal{O} \end{equation} over ${\mathbb Q}$ recursively. 
The proof that this algorithm works is based on the existence of a sequence of quasi-isomorphisms connecting $\mathcal{O} \otimes_{{\mathbb Q}} {\mathbb K}$ to its cohomology. However, no explicit knowledge about this sequence of quasi-isomorphisms is required at any step of this algorithm. We would like to mention that the existence of a formality quasi-isomorphism \eqref{desired} over ${\mathbb Q}$ (from the existence of a formality quasi-isomorphism over an extension of ${\mathbb Q}$) was proved in the paper \cite{Roig-plus} by F. Guill\'en Santos, V. Navarro, P. Pascual, and A. Roig. More precisely, see Theorem 6.2.1 in {\it loc. cit.} Our paper is organized as follows. In Section \ref{sec:prelim}, we recall some basic concepts and fix the notational conventions. In Section \ref{sec:the-constr}, we introduce the concept of an MC-sprout, which can be viewed as an approximation to a formality quasi-isomorphism \eqref{desired}. Using this concept, we formulate the main theorem of this paper (see Theorem \ref{thm:main}) and deduce it from a technical lemma (see Lemma \ref{lem:betaalter}). Section \ref{sec:betaalter} is devoted to the proof of this lemma and Appendix \ref{app:lift} contains the proof of a useful lifting property for cobar resolutions. Finally, Appendix \ref{app:Tam-Arity4} displays a third MC-sprout in ${\mathrm{Conv}}(\mathsf{Ger}^{\vee}, \mathsf{Br})$ which can be extended to a genuine MC element in ${\mathrm{Conv}}(\mathsf{Ger}^{\vee}, \mathsf{Br})$. This MC-sprout was found using the software \cite{Software} developed by the authors. We should mention that our construction is inspired by Proposition 5.8 from the classical paper\footnote{See also Theorem 4 and Corollary 4.1 in D. Bar-Natan's beautiful paper \cite{BN-GT}.} \cite{Drinfeld} by V. Drinfeld. \vspace{0.28cm} \noindent \textbf{Acknowledgements:} The authors were partially supported by the NSF grant DMS-1501001. The authors are thankful to Sergey Plyasunov and Justin Y. 
Shi for showing them how to use the Python module {\it pickle.} This module was used in the package \cite{Software} related to this paper. \subsection{Preliminaries} \label{sec:prelim} In this paper, ${\mathbb K}$ is any field extension of the field ${\mathbb Q}$ of rational numbers and $\otimes := \otimes_{\mathbb Q}$. For a cochain complex $V$, the notation ${\mathcal{Z}}(V)$ is reserved for the subspace of cocycles. The degree of a vector $v$ in a graded vector space (or a cochain complex) $V$ is denoted by $|v|$. The notation ${\mathbf{s}}\,$ (resp. ${\mathbf{s}^{-1}\,}$) is reserved for the operator which shifts the degree up by $1$ (resp. down by $1$), i.e. $$ ({\mathbf{s}}\, V)^{{\bullet}} = V^{{\bullet} -1}\,, \qquad ({\mathbf{s}^{-1}\,} V)^{{\bullet}} = V^{{\bullet}+1}\,. $$ The notation $S_n$ is reserved for the symmetric group on $n$ letters. The abbreviation ``dg'' always means ``differential graded''. For a dg Lie algebra ${\mathcal{L}}$, $\mathrm{Curv}$ is the map $\mathrm{Curv} : {\mathcal{L}}^1 \to {\mathcal{L}}^2$ defined by the formula \begin{equation} \label{Curv-dfn} \mathrm{Curv}({\alpha}) : = {\partial} {\alpha} + \frac{1}{2} [{\alpha}, {\alpha}]. \end{equation} For example, Maurer-Cartan (MC) elements of ${\mathcal{L}}$ are precisely elements of the zero locus of $\mathrm{Curv}$. Let us recall \cite{GMtheorem}, \cite{Getzler}, \cite{Hinich} that for every filtered dg Lie algebra ${\mathcal{L}}$ (in the sense of \cite[Section 1]{GMtheorem}), the set of MC elements of ${\mathcal{L}}$ can be upgraded to a groupoid\footnote{This groupoid is actually a truncation of an $\infty$-groupoid (i.e. a fibrant simplicial set). However, for our purposes, we will not need cells of dimension $\ge 2$.} with MC elements being objects. 
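As an elementary consistency check on \eqref{Curv-dfn} (a standard computation, recorded here for the reader's convenience), any degree $1$ element ${\alpha}$ satisfies the Bianchi-type identity

```latex
{\partial}\, \mathrm{Curv}({\alpha})
  \,=\, \tfrac{1}{2}\, {\partial} [{\alpha}, {\alpha}]
  \,=\, [{\partial} {\alpha}, {\alpha}]
  \,=\, [\mathrm{Curv}({\alpha}), {\alpha}]
  \,=\, -\, [{\alpha}, \mathrm{Curv}({\alpha})]\,,
```

where we used ${\partial}^2 = 0$, the graded Leibniz rule (which gives ${\partial}[{\alpha},{\alpha}] = 2 [{\partial}{\alpha}, {\alpha}]$ since $|{\alpha}| = 1$), and $[[{\alpha},{\alpha}],{\alpha}] = 0$ (the graded Jacobi identity applied to the odd element ${\alpha}$).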
Recall that two MC elements ${\alpha}, \ti{{\alpha}}$ of a filtered dg Lie algebra ${\mathcal{L}}$ are isomorphic (in this groupoid) if there exists a degree $0$ element $\xi \in {\mathcal{L}}$ such that \begin{equation} \label{al-isom-ti-al} \ti{{\alpha}} ~ = ~ \exp([\xi, ~]) {\alpha} ~ - ~ \frac{\exp([\xi, ~]) - 1}{[\xi, ~]} \, {\partial} \xi, \end{equation} where the expressions $\exp([\xi, ~])$ and $$ \frac{\exp([\xi, ~]) - 1}{[\xi, ~]} $$ are defined via the corresponding Taylor series\footnote{These series are well defined because ${\mathcal{L}} = {\mathcal{F}}_1 {\mathcal{L}}$ and ${\mathcal{L}}$ is complete with respect to the filtration.}. In this paper, we will freely use the language of (colored) operads \cite{notes}, \cite{Fresse-book}, \cite{LV-book}. For a coaugmented cooperad $\mathcal{C}$, the notation $\mathcal{C}_{\circ}$ is reserved for the cokernel of the coaugmentation. For a dg pseudo-cooperad $P$, we denote by $P^{\diamondsuit}$ the dg cooperad which is obtained from $P$ by formally adjoining the counit. Clearly, for every coaugmented cooperad $\mathcal{C}$, the cooperad $\mathcal{C}_{\circ}^{\diamondsuit}$ is canonically identified with $\mathcal{C}$. The notation $\Xi$ is reserved for the ordinal of colors. A ($\Xi$-colored) {\it collection} $V$ is a family of cochain complexes $\{ V({\mathbf{q}}) \}_{{\mathbf{q}}}$ indexed by all $\Xi$-colored corollas ${\mathbf{q}}$ (with the standard labeling). For every $\Xi$-colored corolla ${\mathbf{q}}$, $V({\mathbf{q}})$ is equipped with the left action of the group $$ S_{k_1({\mathbf{q}})} \times S_{k_2({\mathbf{q}})} \times \dots \times S_{k_m({\mathbf{q}})}, $$ where $m$ is the total number of colors of the incoming edges and $k_i({\mathbf{q}})$ is the number of incoming edges of the $i$-th color. For example, if the ordinal of colors $\Xi$ is the singleton, then a collection is simply a family of cochain complexes $\{ V(n) \}_{n \ge 0}$, where each $V(n)$ is equipped with a left action of $S_n$. 
The notation $\mathsf{Coll}$ is reserved for the category of $\Xi$-colored collections of graded vector spaces. For objects $Q_1, Q_2$ of $\mathsf{Coll}$ the notation $$ \mathrm{Hom}_{\mathsf{Coll}}(Q_1, Q_2) $$ is reserved for the vector space of homomorphisms (of all degrees) from the collection $Q_1$ to the collection $Q_2$. For example, if the ordinal of colors is the singleton, then \begin{equation} \label{Hom-Coll} \mathrm{Hom}_{\mathsf{Coll}}(Q_1, Q_2) : = \prod_{n \ge 0} \mathrm{Hom}_{S_n} \big(Q_1(n), Q_2(n)\big), \end{equation} where $$ \mathrm{Hom}_{S_n} \big(Q_1(n), Q_2(n)\big) = \Big( \mathrm{Hom} \big(Q_1(n), Q_2(n)\big) \Big)^{S_n} $$ and $\mathrm{Hom} \big(Q_1(n), Q_2(n)\big)$ is the inner hom in the category of graded vector spaces. For a dg pseudo-cooperad $P$ and a dg operad $\mathcal{O}$, the notation ${\mathrm{Conv}}(P, \mathcal{O})$ is reserved for the convolution Lie algebra \cite[Section 2.3]{stable}, \cite[Section 4]{notes}. The underlying graded vector space of ${\mathrm{Conv}}(P, \mathcal{O})$ is $\mathrm{Hom}_{\mathsf{Coll}}(P, \mathcal{O})$ and the Lie bracket is given by the formula $$ [f,g] : = f \bullet g - (-1)^{|f| |g|} g \bullet f, $$ where $f \bullet g$ is the pre-Lie multiplication\footnote{See eq. (2.41) in \cite{stable}.} of $f$ and $g$ defined in terms of comultiplication on $P$ and multiplications on $\mathcal{O}$. Let us recall \cite[Proposition 5.2]{notes} that MC elements of ${\mathrm{Conv}}(P, \mathcal{O})$ are in bijection with operad morphisms $F : {\mathrm{Cobar} }(P^{\diamondsuit}) \to \mathcal{O}$. In particular, the operad morphism corresponding to a MC element ${\alpha} \in {\mathrm{Conv}}(P, \mathcal{O})$ will be denoted by $F_{{\alpha}}$. 
In this paper, we assume that \begin{cond} \label{cond:P-filtered} Every dg pseudo-cooperad $P$ carries an ascending filtration \begin{equation} \label{P-circ-filtr} {\mathbf{0} } = {\mathcal{F}}^0 P \subset {\mathcal{F}}^1 P \subset {\mathcal{F}}^2 P \subset {\mathcal{F}}^3 P \subset \dots \end{equation} which is compatible with the differential and the comultiplications in the following sense: \begin{equation} \label{diff-P-filtr} {\partial}_{P} \big( {\mathcal{F}}^m P \big) \subset {\mathcal{F}}^{m-1} P, \end{equation} \begin{equation} \label{D-bt-filtr} {\Delta}_{{\mathbf{t}}} \big( {\mathcal{F}}^m P \big) \subset \bigoplus_{m_1 + \dots + m_k = m} {\mathcal{F}}^{m_1} P \otimes {\mathcal{F}}^{m_2} P \otimes \dots \otimes {\mathcal{F}}^{m_k} P \,, \end{equation} where ${\mathbf{t}}$ is a ($\Xi$-colored) planar tree with the set of leaves $\{1,2,\dots, n \}$ and $k$ nodal vertices. Moreover, $P$ is cocomplete with respect to filtration \eqref{P-circ-filtr}, i.e. \begin{equation} \label{cocomplete} P = \bigcup_{m} {\mathcal{F}}^m P. \end{equation} \end{cond} \begin{remark} \label{rem:Sul-alg} Cobar resolutions ${\mathrm{Cobar} }(P^{\diamondsuit})$ for which $P$ satisfies Condition \ref{cond:P-filtered} may be thought of as analogs of Sullivan algebras from rational homotopy theory. Let us also mention that, due to \cite[Proposition 38]{MV11}, such dg operads ${\mathrm{Cobar} }(P^{\diamondsuit})$ are cofibrant. \end{remark} For example, if the ordinal of colors $\Xi$ is the singleton, $P(0) = P(1) = {\mathbf{0} }$, and the differential ${\partial}_P$ vanishes (so that \eqref{diff-P-filtr} holds trivially), then the filtration ``by arity'' \begin{equation} \label{filtr-arity} {\mathcal{F}}^m P (n) : = \begin{cases} P(n) \qquad {\rm if} ~~ n \le m+1 \\ {\mathbf{0} } \qquad {\rm otherwise}. \end{cases} \end{equation} satisfies Condition \ref{cond:P-filtered}. 
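The compatibility \eqref{D-bt-filtr} for the filtration \eqref{filtr-arity} rests on an elementary leaf count (not spelled out in the text): if the tree ${\mathbf{t}}$ has $k$ nodal vertices with $n_1, \dots, n_k$ incoming edges and $n$ leaves, then every edge other than the root edge is incoming for exactly one nodal vertex, so

```latex
\sum_{i=1}^{k} n_i \,=\, n + (k-1)\,,
\qquad \text{i.e.} \qquad
\sum_{i=1}^{k} (n_i - 1) \,=\, n - 1\,.
```

Hence, setting $m_i := n_i - 1$, the map ${\Delta}_{{\mathbf{t}}}$ sends an element of ${\mathcal{F}}^m P(n)$ (so $n \le m+1$) into tensor factors lying in ${\mathcal{F}}^{m_i} P(n_i)$ with $m_1 + \dots + m_k = n - 1 \le m$, as required.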
Condition \ref{cond:P-filtered} guarantees that, for every dg operad $\mathcal{O}$, the dg Lie algebra \begin{equation} \label{Conv-P-cO} {\mathrm{Conv}}(P, \mathcal{O}) \end{equation} is equipped with the complete descending filtration: $$ {\mathrm{Conv}}(P, \mathcal{O}) = {\mathcal{F}}_1 {\mathrm{Conv}}(P, \mathcal{O}) \supset {\mathcal{F}}_2 {\mathrm{Conv}}(P, \mathcal{O}) \supset \dots $$ \begin{equation} \label{filtr-Conv} {\mathcal{F}}_m {\mathrm{Conv}}(P, \mathcal{O}) : = \big\{ f\in {\mathrm{Conv}}(P, \mathcal{O}) ~\big|~ f \big|_{{\mathcal{F}}^{m-1} P} = 0 \big\}. \end{equation} In other words, ${\mathrm{Conv}}(P, \mathcal{O})$ is a filtered dg Lie algebra in the sense of \cite[Section 1]{GMtheorem}. \section{The recursive construction of formality quasi-isomorphisms} \label{sec:the-constr} Let $\mathcal{O}$ be a dg operad and ${\mathcal{H}}$ be the cohomology operad for $\mathcal{O}$: $$ {\mathcal{H}} : = H^{{\bullet}}(\mathcal{O}). $$ We assume that ${\mathcal{H}}$ admits a cobar resolution ${\mathrm{Cobar} }(P^{\diamondsuit})$ where $P^{\diamondsuit}$ is a dg pseudo-cooperad satisfying Condition \ref{cond:P-filtered}. Due to Corollary \ref{cor:zig-zag-shorter} from Appendix \ref{app:lift}, the problem of constructing a zig-zag of quasi-isomorphisms (of dg operads) connecting $\mathcal{O}$ to ${\mathcal{H}}$ is equivalent to the problem of constructing a single quasi-isomorphism (of dg operads) $$ F : {\mathrm{Cobar} }(P^{\diamondsuit}) \to \mathcal{O}. $$ The latter problem is, in turn, equivalent to the problem of constructing a MC element $$ {\alpha} \in {\mathrm{Conv}}(P, \mathcal{O}) $$ whose corresponding morphism $F_{{\alpha}} : {\mathrm{Cobar} }(P^{\diamondsuit}) \to \mathcal{O}$ is a quasi-isomorphism of dg collections. 
In this paper, we consider a dg operad $\mathcal{O}$ and a cobar resolution \begin{equation} \label{rho-cH} \rho \,: \, {\mathrm{Cobar} }(P^{\diamondsuit}) ~\stackrel{\sim}{\longrightarrow}~ {\mathcal{H}} : = H^{{\bullet}}(\mathcal{O}). \end{equation} We assume that the pair $(P, \mathcal{O})$ satisfies the following conditions: \begin{enumerate} \item[\textbf{C1}] The dg pseudo-cooperad $P$ is equipped with an {\it additional} grading \begin{equation} P = \bigoplus_{k \geq 1} \mathcal{G}^k P\,, \qquad \mathcal{G}^{\,\le 0} P = {\mathbf{0} } \end{equation} which is compatible with the differential ${\partial}_P$ and the comultiplications ${\Delta}_{{\mathbf{t}}}$ in the following sense: \begin{equation} {\partial}_P (\mathcal{G}^k P) \, \subset \, \mathcal{G}^{k-1} P, \end{equation} \begin{equation} {\Delta}_{\mathbf{t}}(\mathcal{G}^m P) ~\subset~ \bigoplus_{r_1+ \dots + r_q = m} \mathcal{G}^{r_1}P \otimes \mathcal{G}^{r_2}P \otimes \dots \otimes \mathcal{G}^{r_q}P, \end{equation} where ${\mathbf{t}}$ is a ($\Xi$-colored) tree with $q$ nodal vertices. \item[\textbf{C2}] $\mathcal{G}^k P$ is finite dimensional for every $k$ and the graded components of $\mathcal{O}({\mathbf{q}})$ are finite dimensional for every $\Xi$-colored corolla ${\mathbf{q}}$. \item[\textbf{C3}] The operad ${\mathcal{H}}$ is generated by $\rho ({\mathbf{s}}\, \mathcal{G}^1 P)$ and \begin{equation} \label{rho-for-k-ge2} \rho \big|_{{\mathbf{s}}\, \mathcal{G}^k P} ~ = ~0 \quad \forall ~~ k \ge 2. \end{equation} \end{enumerate} \begin{example} \label{ex:aritygrading} Suppose that the ordinal of colors $\Xi$ is the singleton, $P(0) = P(1) = 0$, and the differential ${\partial}_P = 0$. Then the grading by arity \begin{equation} \label{cG-arity} \mathcal{G}^k P (n) : = \begin{cases} P(n) \qquad {\rm if} ~~ n = k +1, \\ {\mathbf{0} } \qquad {\rm otherwise}. \end{cases} \end{equation} satisfies Condition \textbf{C1}. 
Moreover, if $P(n)$ is finite dimensional for all $n$ and each graded component of $\mathcal{O}(n)$ is finite dimensional, then Condition \textbf{C2} is also satisfied. In particular, for $P = \mathsf{Ger}^\vee_{\circ}$, the Koszul dual of the Gerstenhaber operad, and $\mathcal{O} = \mathsf{Br}$, the braces operad \cite{Br}, \cite{DeligneTW}, all these assumptions are met. \end{example} \begin{remark} \label{rem:CBr-KS} Conditions \textbf{C1}, \textbf{C2}, and \textbf{C3} are also satisfied for the pairs $(P_{\mathsf{BV}}, \mathsf{CBr})$ and $(\mathsf{calc}^{\vee}, \mathsf{KS})$, where $P_{\mathsf{BV}}$ is the dg pseudo-cooperad used for the cobar resolution \cite{BV} of the operad $\mathsf{BV}$ governing $BV$-algebras and $\mathsf{calc}^{\vee}$ shows up in the cobar resolution for the operad governing calculus algebras \cite{calc}, \cite[Definition 3]{HoCalc}. \end{remark} \begin{remark} \label{rem:cG-cF} Clearly, every dg pseudo-cooperad $P$ with a grading $\mathcal{G}^{{\bullet}} P$ satisfying the above conditions has the ascending filtration \begin{equation} \label{cF-P} {\mathcal{F}}^{m} P : = \bigoplus_{k \le m} \mathcal{G}^k P \end{equation} and this filtration satisfies Condition \ref{cond:P-filtered}. \end{remark} \begin{remark} \label{rem:cG-k-P} If we forget about the differential ${\partial}_P$ on $P$, every $\mathcal{G}^k P$ can be viewed as a collection of graded vector spaces. So we will tacitly identify elements in $\mathrm{Hom}_{\mathsf{Coll}}(\mathcal{G}^k P , \mathcal{O})$ with elements $f \in {\mathrm{Conv}}(P, \mathcal{O})$ which satisfy the condition $f \big|_{\mathcal{G}^m P} \equiv 0$ for all $m \neq k$. It is clear that $\mathrm{Hom}_{\mathsf{Coll}}(\mathcal{G}^k P , \mathcal{O})$ is closed with respect to the differential ${\partial}_{\mathcal{O}}$ on $\mathcal{O}$ for every $k$. 
However, for the map $f \mapsto f \circ {\partial}_{P}$, we have $$ f \in \mathrm{Hom}_{\mathsf{Coll}}(\mathcal{G}^k P , \mathcal{O}) ~ \mapsto~ f \circ {\partial}_P \in \mathrm{Hom}_{\mathsf{Coll}}(\mathcal{G}^{k+1} P , \mathcal{O}). $$ \end{remark} \begin{remark} \label{rem:syzygy} In many examples, the gradation on the (dg) pseudo-operad $P$ from Condition \textbf{C1} is precisely the syzygy gradation \cite[Appendix A]{BV}, \cite[Sections 3.3, 7.3]{LV-book}. \end{remark} \hspace{0.5cm} Let $F$ be an arbitrary morphism of dg operads $$ F : {\mathrm{Cobar} }(P^{\diamondsuit}) \otimes {\mathbb K} \to \mathcal{O} \otimes {\mathbb K} $$ and $\pi_{{\mathcal{H}}}$ be the canonical projection $$ \pi_{{\mathcal{H}}} : {\mathcal{Z}}(\mathcal{O}) \to {\mathcal{H}} $$ from the sub-operad ${\mathcal{Z}}(\mathcal{O}) : = \mathcal{O} \cap \ker({\partial})$ to ${\mathcal{H}}$. Since every vector in ${\mathbf{s}}\, \mathcal{G}^1 P$ is a cocycle in ${\mathrm{Cobar} }(P^{\diamondsuit})$ the restriction $F \big|_{{\mathbf{s}}\, \mathcal{G}^1 P}$ gives us a map of dg collections $$ F \big|_{{\mathbf{s}}\, \mathcal{G}^1 P} : {\mathbf{s}}\, \mathcal{G}^1 P \to {\mathcal{Z}}(\mathcal{O}). $$ We claim that \begin{prop} \label{prop:only-cG1-for-q-iso} If the image of $$ \pi_{{\mathcal{H}}} \circ F \big|_{{\mathbf{s}}\, \mathcal{G}^1 P}~ : ~ {\mathbf{s}}\, \mathcal{G}^1 P ~\to~ {\mathcal{H}} $$ generates the operad ${\mathcal{H}}$ then $F$ is a quasi-isomorphism of dg operads. The same statement holds if the base field ${\mathbb Q}$ is replaced by its extension ${\mathbb K}$. \end{prop} \begin{proofOld} Since all vectors in ${\mathbf{s}}\, \mathcal{G}^1 P$ are cocycles in ${\mathrm{Cobar} }(P^{\diamondsuit})$ and the sub-collection $\pi_{{\mathcal{H}}} \circ F ({\mathbf{s}}\, \mathcal{G}^1 P)$ generates the operad ${\mathcal{H}}$, the map $$ H^{{\bullet}}(F) : H^{{\bullet}}\big({\mathrm{Cobar} }(P^{\diamondsuit})\big) \to {\mathcal{H}} $$ is surjective. 
Since each graded component of $\mathcal{O}({\mathbf{q}})$ is finite dimensional for every corolla ${\mathbf{q}}$ (see Condition \textbf{C2}), we know that each graded component of ${\mathcal{H}}({\mathbf{q}})$ is finite dimensional for every ${\mathbf{q}}$. On the other hand, $H^{{\bullet}}\big({\mathrm{Cobar} }(P^{\diamondsuit})\big)$ is isomorphic to ${\mathcal{H}}$. Thus the proposition follows from the fact that a surjective map between isomorphic finite dimensional vector spaces is an isomorphism. Since this proof works for any base field (of characteristic zero), the last assertion in the proposition is obvious. \end{proofOld} \subsection{MC-sprouts in ${\mathrm{Conv}}(P, \mathcal{O})$} \label{sec:MC-sprout} \begin{defi} \label{dfn:n-MC-sprout} Let ${\mathcal{F}}_{{\bullet}}{\mathrm{Conv}}(P, \mathcal{O})$ be the descending filtration on ${\mathrm{Conv}}(P, \mathcal{O})$ coming from the ascending filtration \eqref{cF-P} on $P$ and $n$ be an integer $\ge 1$. An \emph{$n$-th MC-sprout} in ${\mathrm{Conv}}(P, \mathcal{O})$ is a degree $1$ element ${\alpha} \in {\mathrm{Conv}}(P, \mathcal{O})$ such that $$ \mathrm{Curv}({\alpha}) \in {\mathcal{F}}_{n+1} {\mathrm{Conv}}(P, \mathcal{O}) $$ or equivalently \begin{equation} \label{sprout-cG} \mathrm{Curv}({\alpha}) (X) = 0, \qquad \forall ~~ X \in \mathcal{G}^{\,\le n} P. \end{equation} \end{defi} \begin{remark} \label{rem:finite-sum} Since $P$ is graded, every element ${\alpha} \in {\mathrm{Conv}}(P, \mathcal{O})$ can be uniquely written as $$ {\alpha} = \sum_{k=1}^{\infty} {\alpha}_k\,, \qquad {\alpha}_k \in \mathrm{Hom}_{\mathsf{Coll}}(\mathcal{G}^k P, \mathcal{O}). $$ Moreover, since ${\alpha}_{k}$ for $k > n$ do not contribute to the left hand side of \eqref{sprout-cG}, we may only consider $n$-th MC-sprouts of the form $$ {\alpha} = \sum_{k=1}^{n} {\alpha}_k\,, \qquad {\alpha}_k \in \mathrm{Hom}_{\mathsf{Coll}}(\mathcal{G}^k P, \mathcal{O}). 
$$ Due to our conditions on $\mathcal{O}$ and $P$, any such MC-sprout is determined by a finite number of coefficients. \end{remark} \begin{example} \label{ex:truncation} Let ${\alpha}$ be a genuine MC element of ${\mathrm{Conv}}(P, \mathcal{O})$. \emph{The $n$-th truncation} of ${\alpha}$ is the degree $1$ element ${\alpha}^{[n]}$ of ${\mathrm{Conv}}(P, \mathcal{O})$ defined by the formula \begin{equation} \label{trunc-dfn} {\alpha}^{[n]} (X) = \begin{cases} {\alpha}(X) \qquad {\rm if} ~~ X \in \mathcal{G}^{\,\le n} P \,, \\ 0 \qquad {\rm otherwise}\,. \end{cases} \end{equation} Clearly, the $n$-th truncation of any MC element ${\alpha}$ of ${\mathrm{Conv}}(P, \mathcal{O})$ is an $n$-th MC-sprout in ${\mathrm{Conv}}(P, \mathcal{O})$. It is also easy to see that the same formula \eqref{trunc-dfn} defines an $n$-th MC-sprout in ${\mathrm{Conv}}(P, \mathcal{O})$ provided ${\alpha}$ is an $m$-th MC-sprout and $m \ge n$. We also call ${\alpha}^{[n]}$ \emph{the $n$-th truncation} of ${\alpha}$ even if ${\alpha}$ is not a genuine MC element of ${\mathrm{Conv}}(P, \mathcal{O})$ but merely an $m$-th MC-sprout for $m \ge n$. \end{example} \begin{example} \label{ex:Ger-Br-2nd-sprout} Let $\mathsf{Br}$ be the braces operad and $T_{12}$, $T_{\cup}$, $T_{1,23}$, $T^{\cup}_{123}$ and $T_{1, {\bullet}, 23}$ be the brace trees shown in figures \ref{fig:Br-tree} and \ref{fig:Br-tree1}. 
Let ${\alpha}'$ be the following vector in $\mathsf{Br}(2) \otimes {\Lambda}^{-2}\mathsf{Ger}(2) \oplus \mathsf{Br}(3) \otimes {\Lambda}^{-2}\mathsf{Ger}(3)$: \begin{equation} \label{al-pr} {\alpha}' : = T_{12} \otimes b_1 b_2 + \frac{1}{2}\, T_{\cup} \otimes \{b_1, b_2 \} + \frac{1}{2} \, T_{1,23} \otimes b_1 \{b_2, b_3\} - \frac{1}{3}\, T^{\cup}_{123} \otimes \{ b_1, \{ b_2, b_3\}\} \end{equation} $$ - \frac{1}{6} \, T^{\cup}_{123} \otimes \{ b_2, \{ b_1, b_3\}\} - \frac{1}{6}\, T_{1, {\bullet}, 23} \otimes \{ b_2, \{ b_1, b_3\}\} -\frac{1}{12} \, T_{1, {\bullet}, 23} \otimes \{ b_1, \{ b_2, b_3\}\}. $$ A direct computation shows that $\mathrm{Av}({\alpha}')$ is a second MC-sprout in ${\mathrm{Conv}}(\mathsf{Ger}^{\vee}, \mathsf{Br})$. Here $\mathrm{Av}$ is the operator $$ \bigoplus_{n \ge 2} \mathsf{Br}(n) \otimes {\Lambda}^{-2}\mathsf{Ger}(n) ~~\to~~ \bigoplus_{n \ge 2} \mathrm{Hom}_{S_n} \big(\mathsf{Ger}^{\vee}(n), \mathsf{Br}(n)\big) $$ defined in eq. (C.3) in \cite[Appendix C]{DeligneTW} and, for ${\alpha}'$, we use the notation for vectors in ${\Lambda}^{-2}\mathsf{Ger}(n)$ from \cite[Section 4.3]{DeligneTW}. 
\end{example} \begin{figure}[htp] \begin{minipage}[t]{0.3\linewidth} \centering \begin{tikzpicture}[scale=0.6] \tikzstyle{lab}=[circle, draw, minimum size=5, inner sep=1] \tikzstyle{n}=[circle, draw, fill, minimum size=3] \tikzstyle{root}=[circle, draw, fill, minimum size=0, inner sep=1] \node[root] (rr) at (0, 0) {}; \node [lab] (v1) at (0,1) {$1$}; \node [lab] (v2) at (0,2.2) {$2$}; \draw (rr) edge (v1); \draw (v1) edge (v2); \end{tikzpicture} \end{minipage} ~ \begin{minipage}[t]{0.3\linewidth} \centering \begin{tikzpicture}[scale=0.6] \tikzstyle{lab}=[circle, draw, minimum size=5, inner sep=1] \tikzstyle{n}=[circle, draw, fill, minimum size=3] \tikzstyle{root}=[circle, draw, fill, minimum size=0, inner sep=1] \node[root] (rr) at (0, 0) {}; \node [n] (n) at (0,1) {}; \node [lab] (v1) at (-0.8,1.8) {$1$}; \node [lab] (v2) at (0.8,1.8) {$2$}; \draw (n) edge (rr) edge (v1) edge (v2); \end{tikzpicture} \end{minipage} ~ \begin{minipage}[t]{0.3\linewidth} \centering \begin{tikzpicture}[scale=0.6] \tikzstyle{lab}=[circle, draw, minimum size=5, inner sep=1] \tikzstyle{n}=[circle, draw, fill, minimum size=3] \tikzstyle{root}=[circle, draw, fill, minimum size=0, inner sep=1] \node[root] (rr) at (0, 0) {}; \node [lab] (v1) at (0,1) {$1$}; \node [lab] (v2) at (-0.8,1.8) {$2$}; \node [lab] (v3) at (0.8,1.8) {$3$}; \draw (v1) edge (rr) edge (v2) edge (v3); \end{tikzpicture} \end{minipage} \caption{The brace trees $T_{12}, T_{\cup}$, and $T_{1,23}$, respectively} \label{fig:Br-tree} \vspace{0.5cm} \begin{minipage}[t]{0.45\linewidth} \centering \begin{tikzpicture}[scale=0.6] \tikzstyle{lab}=[circle, draw, minimum size=5, inner sep=1] \tikzstyle{n}=[circle, draw, fill, minimum size=3] \tikzstyle{root}=[circle, draw, fill, minimum size=0, inner sep=1] \node[root] (rr) at (0, 0) {}; \node [n] (n) at (0,0.8) {}; \node [lab] (v1) at (-0.8,1.8) {$1$}; \node [lab] (v2) at (0,1.8) {$2$}; \node [lab] (v3) at (0.8,1.8) {$3$}; \draw (n) edge (rr) edge (v1) edge (v2) edge (v3); 
\end{tikzpicture} \end{minipage} ~ \begin{minipage}[t]{0.45\linewidth} \centering \begin{tikzpicture}[scale=0.6] \tikzstyle{lab}=[circle, draw, minimum size=5, inner sep=1] \tikzstyle{n}=[circle, draw, fill, minimum size=3] \tikzstyle{root}=[circle, draw, fill, minimum size=0, inner sep=1] \node[root] (rr) at (0, -0.3) {}; \node [lab] (v1) at (0,0.5) {$1$}; \node [n] (n) at (0,1.5) {}; \node [lab] (v2) at (-0.8,2.5) {$2$}; \node [lab] (v3) at (0.8,2.5) {$3$}; \draw (v1) edge (rr) (n) edge (v1) edge (v2) edge (v3); \end{tikzpicture} \end{minipage} \caption{The brace trees $T^{\cup}_{123}$, and $T_{1, {\bullet}, 23}$, respectively} \label{fig:Br-tree1} \end{figure} Since all vectors in ${\mathbf{s}}\, \mathcal{G}^1 P$ are cocycles in ${\mathrm{Cobar} }(P^{\diamondsuit})$, $$ {\alpha} (\mathcal{G}^1 P) \subset {\mathcal{Z}}(\mathcal{O}) $$ for every MC-sprout ${\alpha} \in {\mathrm{Conv}}(P,\mathcal{O})$. Let us observe that \begin{prop} \label{prop:2nd-exists} If $H^{{\bullet}}(\mathcal{O}) \cong {\mathcal{H}}$ and ${\alpha}_{{\mathcal{H}}}$ is the MC element in ${\mathrm{Conv}}(P, {\mathcal{H}})$ corresponding to \eqref{rho-cH}, then there exists a second MC-sprout ${\alpha} \in {\mathrm{Conv}}(P,\mathcal{O})$ such that the diagram \begin{equation} \label{diag-al-rho} \begin{tikzpicture} \matrix (m) [matrix of math nodes, row sep=2.5em, column sep=2.5em] {~~~ & {\mathcal{Z}}(\mathcal{O}) ~ \\ \mathcal{G}^1 P & {\mathcal{H}}\\ }; \path[->, font=\scriptsize] (m-1-2) edge node[right] {$\pi_{{\mathcal{H}}}$} (m-2-2) (m-2-1) edge node[auto] {${\alpha}_{{\mathcal{H}}}$} (m-2-2) (m-2-1) edge node[auto] {${\alpha} $} (m-1-2); \end{tikzpicture} \end{equation} commutes. 
\end{prop} \begin{proofOld} Since we work with vector spaces, there exist splittings \begin{equation} \label{split} \eta_{{\mathbf{q}}} : {\mathcal{H}}({\mathbf{q}}) \to {\mathcal{Z}}(\mathcal{O}({\mathbf{q}})) \end{equation} of the projections $\pi_{{\mathcal{H}}} : {\mathcal{Z}}(\mathcal{O}({\mathbf{q}})) \to {\mathcal{H}}({\mathbf{q}})$ for every $\Xi$-colored corolla ${\mathbf{q}}$. Moreover, since our base field has characteristic zero, we can use the standard averaging operators (for products of symmetric groups) and turn the splittings \eqref{split} into a map of collections \begin{equation} \label{ms} {\mathfrak{s}} : {\mathcal{H}} \to {\mathcal{Z}}(\mathcal{O}) \end{equation} for which \begin{equation} \label{pi-ms} \pi_{{\mathcal{H}}} \circ {\mathfrak{s}} = \mathrm{id}_{{\mathcal{H}}}\,. \end{equation} A similar argument implies that there exists a map of collections \begin{equation} \label{ti-ms} \ti{{\mathfrak{s}}} : \ker \big( {\mathcal{Z}}(\mathcal{O}) \to {\mathcal{H}} \big) \to \mathcal{O} \end{equation} which splits ${\partial}_{\mathcal{O}}: \mathcal{O} \to \ker \big( {\mathcal{Z}}(\mathcal{O}) \to {\mathcal{H}} \big)$. By setting\footnote{Note that, due to \eqref{rho-for-k-ge2}, ${\alpha}^{(1)}(X) = 0$ for every $X \in \mathcal{G}^{\ge 2} P$.} \begin{equation} \label{al-first} {\alpha}^{(1)} : = {\mathfrak{s}} \circ {\alpha}_{{\mathcal{H}}} \end{equation} we get a first MC-sprout in ${\mathrm{Conv}}(P, \mathcal{O})$ for which \begin{equation} \label{pi-al-first} \pi_{{\mathcal{H}}} \circ {\alpha}^{(1)} = {\alpha}_{{\mathcal{H}}}\,. \end{equation} Let us observe that, since ${\alpha}^{(1)}$ lands in ${\mathcal{Z}}(\mathcal{O})$, the assignment $$ X \in \mathcal{G}^2 P ~\mapsto~ {\alpha}^{(1)} {\partial}_{P} (X) + {\alpha}^{(1)} \bullet {\alpha}^{(1)} (X) $$ gives us a map of collections: \begin{equation} \label{cG2-P-cO} \mathcal{G}^2 P \to {\mathcal{Z}}(\mathcal{O}). 
\end{equation} Since $\pi_{{\mathcal{H}}}$ is compatible with the operadic multiplications, the composition of \eqref{cG2-P-cO} with $\pi_{{\mathcal{H}}}$ sends $X \in \mathcal{G}^2 P$ to \begin{equation} \label{X-to-zero} {\alpha}_{{\mathcal{H}}}({\partial}_{P} X) + {\alpha}_{{\mathcal{H}}} \bullet {\alpha}_{{\mathcal{H}}} (X) ~\in ~ {\mathcal{H}}. \end{equation} On the other hand, the vector \eqref{X-to-zero} is zero since ${\alpha}_{{\mathcal{H}}}$ satisfies the MC equation and ${\mathcal{H}}$ has the zero differential. Since the composition of \eqref{cG2-P-cO} with $\pi_{{\mathcal{H}}}$ is zero, the map \eqref{cG2-P-cO} lands in $\ker \big( {\mathcal{Z}}(\mathcal{O}) \to {\mathcal{H}} \big)$ and hence \eqref{cG2-P-cO} can be composed with the splitting \eqref{ti-ms}. Setting \begin{equation} \label{al-second} {\alpha} (X) = \begin{cases} \quad {\alpha}^{(1)}(X) \qquad {\rm if} ~~ X \in \mathcal{G}^1 P \\[0.3cm] - \ti{{\mathfrak{s}}} \big( {\alpha}^{(1)} {\partial}_{P} (X) + {\alpha}^{(1)} \bullet {\alpha}^{(1)} (X) \big) \qquad {\rm if} ~~ X \in \mathcal{G}^2 P \\[0.3cm] \quad 0 \qquad {\rm otherwise} \end{cases} \end{equation} we get a degree $1$ element ${\alpha}$ which satisfies $$ {\partial}_{\mathcal{O}} {\alpha}(X) + {\alpha} ({\partial}_P X) + {\alpha} \bullet {\alpha} (X) = 0 \qquad \forall ~~ X \in \mathcal{G}^1 P \oplus \mathcal{G}^2 P. $$ In other words, ${\alpha}$ is a second MC-sprout in ${\mathrm{Conv}}(P, \mathcal{O})$. Equations \eqref{pi-al-first} and \eqref{al-second} imply that the diagram \eqref{diag-al-rho} commutes. \end{proofOld} \begin{remark} \label{rem:then-q-iso} Let $n$ be an integer $\ge 2$ and $$ {\alpha} = {\alpha}_1 + {\alpha}_2 + \dots + {\alpha}_n, \qquad {\alpha}_k \in \mathrm{Hom}_{\mathsf{Coll}}(\mathcal{G}^k P, \mathcal{O}) $$ be an $n$-th MC-sprout in ${\mathrm{Conv}}(P, \mathcal{O})$. 
Proposition \ref{prop:only-cG1-for-q-iso} implies that, if ${\alpha}$ is a truncation of a genuine MC element $\beta \in {\mathrm{Conv}}(P, \mathcal{O})$ and the diagram \eqref{diag-al-rho} commutes, then the corresponding map of dg operads $$ F_{\beta} : {\mathrm{Cobar} }(P^{\diamondsuit}) \to \mathcal{O} $$ is a quasi-isomorphism. Thus, for our purposes, it makes sense to consider only MC-sprouts in ${\mathrm{Conv}}(P, \mathcal{O})$ for which the diagram \eqref{diag-al-rho} commutes. \end{remark} \begin{remark} \label{rem:non-formal} Due to Proposition \ref{prop:2nd-exists}, a second MC-sprout ${\alpha}$ exists even if $\mathcal{O}$ is non-formal. Of course, if $\mathcal{O}$ is non-formal, such an ${\alpha}$ is not a truncation of any MC element in ${\mathrm{Conv}}(P, \mathcal{O})$. \end{remark} \subsection{The main theorem} \label{sec:main-thm} Let, as above, $\mathcal{O}$ be a dg operad defined over ${\mathbb Q}$, ${\mathcal{H}} : = H^{{\bullet}}(\mathcal{O})$, and $$ \rho \,: \, {\mathrm{Cobar} }(P^{\diamondsuit}) ~\stackrel{\sim}{\longrightarrow}~ {\mathcal{H}} $$ be a cobar resolution for ${\mathcal{H}}$, where $P$ is a dg pseudo-cooperad. The main result of this paper is the following theorem. \begin{thm} \label{thm:main} Let us assume that the pair $(\mathcal{O}, P)$ satisfies Conditions {\bf C1}, {\bf C2}, {\bf C3}, and $\mathcal{O} \otimes {\mathbb K}$ is formal for some field extension ${\mathbb K}$ of ${\mathbb Q}$. Let, furthermore, $n$ be an integer $\ge 2$ and \begin{equation} \label{al-n-th} {\alpha} = \alpha_1 + \dots + \alpha_n \in {\mathrm{Conv}}(P, \mathcal{O}), \qquad {\alpha}_k \in \mathrm{Hom}_{\mathsf{Coll}}(\mathcal{G}^k P, \mathcal{O}) \end{equation} be an $n$-th MC-sprout in ${\mathrm{Conv}}(P, \mathcal{O})$ for which the diagram \eqref{diag-al-rho} commutes. Then there exists an $(n+1)$-th MC-sprout $\ti{{\alpha}}$ such that $$ \ti{{\alpha}}_k = {\alpha}_k \qquad \forall~~ k < n. 
$$ Moreover, the unknown vectors $\ti{{\alpha}}_{n}$ and $\ti{{\alpha}}_{n+1}$ can be found by solving a finite dimensional linear system. \end{thm} Theorem \ref{thm:main} has the following immediate corollaries: \begin{cor} \label{cor:main} Under the above conditions on the pair $(\mathcal{O}, P)$, a quasi-isomorphism of operads \begin{equation} \label{q-iso} {\mathrm{Cobar} }(P^{\diamondsuit}) \, \stackrel{\sim}{\longrightarrow} \, \mathcal{O} \end{equation} can be constructed recursively. Moreover, the algorithm for constructing \eqref{q-iso} requires no explicit knowledge of a sequence of quasi-isomorphisms (of operads) connecting $\mathcal{O} \otimes {\mathbb K}$ to ${\mathcal{H}} \otimes {\mathbb K}$. \qed \end{cor} \begin{cor} \label{cor:every} If the assumptions of Theorem \ref{thm:main} hold and $$ {\alpha} = \alpha_1 + \dots + \alpha_n \in {\mathrm{Conv}}(P, \mathcal{O}), \qquad {\alpha}_k \in \mathrm{Hom}_{\mathsf{Coll}}(\mathcal{G}^k P, \mathcal{O}) $$ is an $n$-th MC-sprout in ${\mathrm{Conv}}(P, \mathcal{O})$ for which the diagram \eqref{diag-al-rho} commutes, then there exists a genuine MC element ${\alpha}_{MC} \in {\mathrm{Conv}}(P, \mathcal{O})$ whose $(n-1)$-th truncation ${\alpha}_{MC}^{[n-1]}$ coincides with $$ {\alpha}_1 + \dots + {\alpha}_{n-1}\,. $$ \end{cor} The proof of Theorem \ref{thm:main} is based on the following technical statement: \begin{lem} \label{lem:betaalter} Let $n$ be an integer $\ge 2$ and $$ {\alpha} = {\alpha}_1 + {\alpha}_2 + \dots + {\alpha}_n\,, \qquad {\alpha}_k \in \mathrm{Hom}_{\mathsf{Coll}}(\mathcal{G}^kP, \mathcal{O}) $$ be an $n$-th MC-sprout in ${\mathrm{Conv}}(P, \mathcal{O})$ for which the diagram \eqref{diag-al-rho} commutes. Then there exists a genuine MC element $\beta \in {\mathrm{Conv}}(P, \mathcal{O} \otimes {\mathbb K})$ such that $$ {\alpha}_1 + {\alpha}_2 + \dots + {\alpha}_{n-1} = \beta^{[n-1]}\,, $$ where $\beta^{[n-1]}$ is the $(n-1)$-th truncation of $\beta$. 
\end{lem} \subsection{Theorem \ref{thm:main} follows from Lemma \ref{lem:betaalter}} \label{sec:thm-proof} Lemma \ref{lem:betaalter} is proved in Section \ref{sec:betaalter} below. Here we show that Theorem \ref{thm:main} is a consequence of Lemma \ref{lem:betaalter}. Our goal is to find $$ \tilde{\alpha} := \tilde{\alpha}_1 + \tilde{\alpha}_2 + \dots + \tilde{\alpha}_{n+1}\,, \qquad \ti{{\alpha}}_k \in \mathrm{Hom}_{\mathsf{Coll}}(\mathcal{G}^k P, \mathcal{O}) $$ satisfying $$ \ti{{\alpha}}_k = {\alpha}_k\,, \qquad \forall~~ k \le n - 1 $$ and \begin{equation} \label{Curv-zero} \mathrm{Curv}(\tilde{\alpha})(X) = 0 \qquad \forall ~~ X \in \mathcal{G}^{\,\le n+1} P. \end{equation} So we set \begin{equation} \label{al-k-le-1n} \ti{{\alpha}}_k : = {\alpha}_k\,, \qquad \forall~~ k \le n - 1 \end{equation} and observe that the unknown terms $\ti{{\alpha}}_{n}$ and $\ti{{\alpha}}_{n+1}$ show up only in the equations \begin{equation} \label{MC0} {\partial}_{\mathcal{O}}\tilde{\alpha}_n(X) + \alpha_{n-1}({\partial}_{P}X) + \frac{1}{2}\sum_{\substack{i+j=n, \\[0.1cm] i,j \ge 1}}[\alpha_i,\alpha_j](X) = 0 \quad X \in \mathcal{G}^{n} P, \end{equation} \begin{equation} \label{MC1A} {\partial}_{\mathcal{O}}\tilde{\alpha}_{n+1}(Y) + \tilde{\alpha}_n({\partial}_{P} Y) + [{\alpha}_1, \ti{{\alpha}}_n](Y) + \frac{1}{2}\sum_{\substack{i+j=n+1 \\ i,j < n}}[{\alpha}_i,{\alpha}_j](Y) = 0 \quad Y \in \mathcal{G}^{n+1} P. \end{equation} Moreover, the unknown terms enter these equations linearly. Due to the finite dimensionality condition (see {\bf C2}), equations \eqref{MC0} and \eqref{MC1A} can be viewed as a finite dimensional inhomogeneous linear system for the unknown vectors $\ti{{\alpha}}_{n}$ and $\ti{{\alpha}}_{n+1}$. 
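To make the shape of this linear system explicit, one can choose bases and rewrite \eqref{MC0} and \eqref{MC1A} in matrix form. The following display is only a schematic illustration: the block decomposition depends on an arbitrary choice of bases for the finitely many relevant graded components of $\mathrm{Hom}_{\mathsf{Coll}}(\mathcal{G}^n P, \mathcal{O})$ and $\mathrm{Hom}_{\mathsf{Coll}}(\mathcal{G}^{n+1} P, \mathcal{O})$, and the matrix names below are introduced here solely for this sketch.

```latex
% Let x_n and x_{n+1} denote the coordinate vectors of the unknowns
% \ti{\alpha}_n and \ti{\alpha}_{n+1} in the chosen bases. Equations
% \eqref{MC0} and \eqref{MC1A} then take the block lower triangular form
\begin{equation*}
\begin{pmatrix}
D_n & 0 \\[0.2cm]
B_n & D_{n+1}
\end{pmatrix}
\begin{pmatrix} x_n \\[0.2cm] x_{n+1} \end{pmatrix}
\;=\;
\begin{pmatrix} c_n \\[0.2cm] c_{n+1} \end{pmatrix},
\end{equation*}
% where D_n and D_{n+1} are the matrices of the linear maps
%   \ti{\alpha}_n \mapsto \partial_{\mathcal{O}} \circ \ti{\alpha}_n  and
%   \ti{\alpha}_{n+1} \mapsto \partial_{\mathcal{O}} \circ \ti{\alpha}_{n+1},
% B_n is the matrix of
%   \ti{\alpha}_n \mapsto \ti{\alpha}_n \circ \partial_P + [\alpha_1, \ti{\alpha}_n],
% and c_n, c_{n+1} collect the known terms built from \alpha_1, ..., \alpha_{n-1}.
```

Since all the maps involved are defined over ${\mathbb Q}$, so are the entries of this system; this is precisely the fact used in the last step of the argument below.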
Thanks to Lemma \ref{lem:betaalter}, there exists a genuine MC element in ${\mathrm{Conv}}(P, \mathcal{O} \otimes {\mathbb K})$ $$ \beta = \sum_{k=1}^{\infty} \beta_k \qquad \beta_k \in \mathrm{Hom}_{\mathsf{Coll}}(\mathcal{G}^kP, \mathcal{O} \otimes {\mathbb K}) $$ such that $$ \beta_k = {\alpha}_k, \qquad \forall~~ k \le n-1. $$ Therefore, the linear system corresponding to equations \eqref{MC0} and \eqref{MC1A} has a solution over the field ${\mathbb K}$. Thus, since both the coefficient matrix and the right hand side of this linear system are defined over ${\mathbb Q}$, we have a solution over ${\mathbb Q}$. Finally, equation $\mathrm{Curv}(\tilde{\alpha})(X) = 0$ is satisfied for every $X \in \mathcal{G}^{\, \le n-1} P$, since $\ti{{\alpha}}_k : = {\alpha}_k $ for $k \le n-1$ and the original ${\alpha}$ is an $n$-th MC-sprout. \qed \section{The proof of Lemma \ref{lem:betaalter}} \label{sec:betaalter} Let us first prove the following statement. \begin{prop} \label{prop:beta-needed} Let $n$ be an integer $\ge 2$ and ${\alpha}$ be an $n$-th MC-sprout in ${\mathrm{Conv}}(P, \mathcal{O})$ for which the diagram \eqref{diag-al-rho} commutes. Then there exists an MC element $$ \beta = \sum_{k=1}^{\infty} \beta_k, \qquad \beta_k \in \mathrm{Hom}_{\mathsf{Coll}}(\mathcal{G}^k P, \mathcal{O} \otimes {\mathbb K} ) $$ in ${\mathrm{Conv}}(P, \mathcal{O} \otimes {\mathbb K})$ such that \begin{equation} \label{cG-1-all-good} \beta \big|_{\mathcal{G}^1 P} ~ = ~ {\alpha} \big|_{\mathcal{G}^1 P}. \end{equation} \end{prop} \begin{remark} \label{rem:F-beta-q-iso} Proposition \ref{prop:only-cG1-for-q-iso} and Condition {\bf C3} imply that the operad morphism $$ F_{\beta} : {\mathrm{Cobar} }(P^{\diamondsuit}) \otimes {\mathbb K} \to \mathcal{O} \otimes {\mathbb K} $$ corresponding to the above MC element $\beta$ is a quasi-isomorphism. 
\end{remark} \begin{proof}[Proof of Proposition \ref{prop:beta-needed}] Since $\mathcal{O} \otimes {\mathbb K}$ is formal, there exists a quasi-isomorphism of dg operads \begin{equation} \label{F} F : {\mathrm{Cobar} }(P^{\diamondsuit}) \otimes {\mathbb K} \to \mathcal{O} \otimes {\mathbb K}. \end{equation} Both $F$ and $\rho$ \eqref{rho-cH} induce isomorphisms of operads \begin{equation} \label{H-F} H^{{\bullet}}(F) : H^{{\bullet}} \big( {\mathrm{Cobar} }(P^{\diamondsuit}) \otimes {\mathbb K} \big) \to {\mathcal{H}} \otimes {\mathbb K} \end{equation} and \begin{equation} \label{H-rho} H^{{\bullet}}(\rho) : H^{{\bullet}} \big( {\mathrm{Cobar} }(P^{\diamondsuit}) \otimes {\mathbb K} \big) \to {\mathcal{H}} \otimes {\mathbb K}. \end{equation} Hence there exists (a unique) operad automorphism $$ T : {\mathcal{H}} \otimes {\mathbb K} \to {\mathcal{H}} \otimes {\mathbb K} $$ such that \begin{equation} \label{H-F-rho-T} T \circ H^{{\bullet}}(F) = H^{{\bullet}}(\rho). \end{equation} Due to Corollary \ref{cor:lift} from Appendix \ref{app:lift}, there exists a map of operads $$ \ti{T} ~:~ {\mathrm{Cobar} }(P^{\diamondsuit}) \otimes {\mathbb K} ~\to~ {\mathrm{Cobar} }(P^{\diamondsuit}) \otimes {\mathbb K} $$ such that the diagram \begin{equation} \label{diag-ti-T} \begin{tikzpicture} \matrix (m) [matrix of math nodes, row sep=2em, column sep=2em] { {\mathrm{Cobar} }(P^{\diamondsuit}) \otimes {\mathbb K} & {\mathrm{Cobar} }(P^{\diamondsuit}) \otimes {\mathbb K} ~ \\ {\mathcal{H}} \otimes {\mathbb K} & {\mathcal{H}} \otimes {\mathbb K} \\ }; \path[->, font=\scriptsize] (m-1-1) edge node[above] {$\ti{T}$} (m-1-2) edge node[left] {$\rho$} (m-2-1) (m-1-2) edge node[right] {$\rho$} (m-2-2) (m-2-1) edge node[above] {$T$} (m-2-2) ; \end{tikzpicture} \end{equation} commutes up to homotopy. Since $\rho \circ \ti{T}$ is homotopic to $T\circ \rho$, $\rho$ is a quasi-isomorphism, and $T$ is an automorphism of operads, $\ti{T}$ is a quasi-isomorphism of dg operads. 
Hence so is the composition \begin{equation} \label{ti-F} \ti{F} : = F \circ \ti{T} ~:~ {\mathrm{Cobar} }(P^{\diamondsuit}) \otimes {\mathbb K} ~\to~ \mathcal{O} \otimes {\mathbb K}. \end{equation} Again, since diagram \eqref{diag-ti-T} commutes up to homotopy, we have \begin{equation} \label{H-ti-T} H^{{\bullet}} (\ti{T}) = H^{{\bullet}}(\rho)^{-1} \circ T \circ H^{{\bullet}}(\rho). \end{equation} Combining \eqref{H-F-rho-T} with \eqref{H-ti-T}, we deduce that $$ H^{{\bullet}}(\ti{F}) = H^{{\bullet}} (F) \circ H^{{\bullet}}(\ti{T}) = T^{-1} \circ H^{{\bullet}}(\rho) \circ H^{{\bullet}}(\rho)^{-1} \circ T \circ H^{{\bullet}}(\rho) = H^{{\bullet}}(\rho). $$ In other words, both $\ti{F}$ and $\rho$ induce the same map at the level of cohomology. Let us denote by $\ti{\beta}$ the MC element in ${\mathrm{Conv}}(P, \mathcal{O} \otimes {\mathbb K})$ corresponding to the map $\ti{F}$. Since the diagram \eqref{diag-al-rho} for ${\alpha}$ commutes and the maps $\ti{F}$, $\rho$ induce the same map at the level of cohomology, we have $$ \pi_{{\mathcal{H}}} \circ (\ti{\beta} - {\alpha}) \big|_{\mathcal{G}^1 P} ~ = ~ 0, $$ where $\pi_{{\mathcal{H}}}$ is the canonical projection ${\mathcal{Z}}(\mathcal{O}) \to {\mathcal{H}}$. Hence, composing $(\ti{\beta} - {\alpha}) \big|_{\mathcal{G}^1 P}$ with a splitting \eqref{ti-ms}, we get a degree $0$ map of collections \begin{equation} \label{h-cG1} h : = \ti{{\mathfrak{s}}} \circ (\ti{\beta} - {\alpha})~:~ \mathcal{G}^1 P \otimes {\mathbb K} \to \mathcal{O} \otimes {\mathbb K} \end{equation} such that $$ \ti{\beta} (X) - {\alpha} (X) = {\partial}_{\mathcal{O}} \circ h (X) \qquad \forall~~ X \in \mathcal{G}^1 P $$ or equivalently\footnote{Recall that ${\partial}_P \big|_{\mathcal{G}^1 P} = 0$.} \begin{equation} \label{ti-beta-al} \ti{\beta}(X) - {\alpha} (X) = {\partial}_{\mathcal{O}} \circ h (X) + h \circ {\partial}_P (X) \qquad \forall~~ X \in \mathcal{G}^1 P. 
\end{equation} Let us extend $h$ to a degree zero element in ${\mathrm{Conv}}(P, \mathcal{O} \otimes {\mathbb K})$ by setting $$ h \big|_{\mathcal{G}^{\,> 1} P } = 0, $$ and form the new MC element of ${\mathrm{Conv}}(P, \mathcal{O} \otimes {\mathbb K})$ \begin{equation} \label{beta} \beta : = \exp([h, ~]) \ti{\beta} ~ - ~ \frac{\exp([h, ~]) - 1}{[h, ~]} \, {\partial} h, \end{equation} where ${\partial}$ is the differential on ${\mathrm{Conv}}(P, \mathcal{O} \otimes {\mathbb K})$. Equation \eqref{ti-beta-al} and Condition {\bf C1} imply that equation \eqref{cG-1-all-good} holds. So the desired statement is proved. \end{proof} Note that Proposition \ref{prop:beta-needed} already implies the statement of Lemma \ref{lem:betaalter} for $n=2$. So we can now assume that $n \ge 3$. For this case, Lemma \ref{lem:betaalter} is a consequence of the following statement. \begin{prop} \label{prop:the-step} Let $n > m \ge 2$ be integers and $$ {\alpha} = \sum_{k=1}^{n} {\alpha}_k\,, \qquad {\alpha}_k \in \mathrm{Hom}_{\mathsf{Coll}}(\mathcal{G}^k P, \mathcal{O}) $$ be an $n$-th MC-sprout in ${\mathrm{Conv}}(P, \mathcal{O})$ for which the diagram \eqref{diag-al-rho} commutes. Furthermore, let $$ \beta = \sum_{k=1}^{\infty} \beta_k \qquad \beta_k \in \mathrm{Hom}_{\mathsf{Coll}}(\mathcal{G}^k P, \mathcal{O} \otimes {\mathbb K}) $$ be a genuine MC element in ${\mathrm{Conv}}(P, \mathcal{O} \otimes {\mathbb K})$ such that \begin{equation} \label{k-less-than-m} \beta_k = {\alpha}_k \qquad \forall~~1 \le k \le m-1. \end{equation} Then there exists an MC element $$ \ti{\beta} = \sum_{k=1}^{\infty} \ti{\beta}_k \qquad \ti{\beta}_k \in \mathrm{Hom}_{\mathsf{Coll}}(\mathcal{G}^k P, \mathcal{O} \otimes {\mathbb K}) $$ of ${\mathrm{Conv}}(P, \mathcal{O} \otimes {\mathbb K})$ such that $\ti{\beta}_k = {\alpha}_k$ for every $k \le m$. 
\end{prop} \subsection{The sub-spaces $\mathrm{Der}^{(t)} \subset \mathrm{Der} \big({\mathrm{Cobar} }(P^{\diamondsuit}) \big)$} \label{sec:Der-prime} Let us recall that, as an operad in the category of graded vector spaces\footnote{In this subsection, we assume that the base field is any field of characteristic zero.}, ${\mathrm{Cobar} }(P^{\diamondsuit})$ is the free operad generated by the collection ${\mathbf{s}}\, P$. So, using the grading on the dg pseudo-cooperad $P$, we introduce the following grading on ${\mathrm{Cobar} }(P^{\diamondsuit})$: \begin{equation} \label{weights-Cobar} {\mathrm{Cobar} }(P^{\diamondsuit}) = \bigoplus_{q \ge 0} {\mathrm{Cobar} }(P^{\diamondsuit})^{(q)}, \end{equation} where ${\mathrm{Cobar} }(P^{\diamondsuit})^{(q)}$ is spanned by operadic monomials in ${\bf s} X_1 \in {\mathbf{s}}\, \mathcal{G}^{k_1} P, ~ {\bf s} X_2 \in {\mathbf{s}}\, \mathcal{G}^{k_2} P, ~ \dots $ such that $$ \sum_{i \ge 1} (k_i -1) = q. $$ For example, ${\mathrm{Cobar} }(P^{\diamondsuit})^{(0)}$ is precisely ${\mathbb{OP}} ({\mathbf{s}}\, \mathcal{G}^1 P)$ and ${\mathrm{Cobar} }(P^{\diamondsuit})^{(1)}$ is spanned by operadic monomials in ${\mathbb{OP}} ({\mathbf{s}}\, \mathcal{G}^1 P \oplus {\mathbf{s}}\, \mathcal{G}^2 P)$ in which a vector from ${\mathbf{s}}\, \mathcal{G}^2 P$ appears exactly once. This grading is clearly compatible with the operadic multiplications on ${\mathrm{Cobar} }(P^{\diamondsuit})$. 
In addition, Conditions \textbf{C1} and \textbf{C3} imply that \begin{equation} \label{diff-weight} {\partial} \big( {\mathrm{Cobar} }(P^{\diamondsuit})^{(q)} \big) ~\subset~ {\mathrm{Cobar} }(P^{\diamondsuit})^{(q-1)} \qquad \forall ~~ q \ge 0, \end{equation} \begin{equation} \label{rho-weight} \rho\big|_{ {\mathrm{Cobar} }(P^{\diamondsuit})^{(q)} } = 0 \qquad \forall~~ q \ge 1, \end{equation} and the map \begin{equation} \label{rho-surj} \rho\big|_{ {\mathrm{Cobar} }(P^{\diamondsuit})^{(0)} } ~:~ {\mathrm{Cobar} }(P^{\diamondsuit})^{(0)} ~\to~ {\mathcal{H}} \end{equation} is onto. We claim that \begin{claim} \label{cl:h-q-exist} There exist maps of collections (for $q \ge 1$) of degree $-1$ \begin{equation} \label{h-q} h_q : {\mathcal{Z}}\big( {\mathrm{Cobar} }(P^{\diamondsuit})^{(q)} \big) \to {\mathrm{Cobar} }(P^{\diamondsuit})^{(q+1)} \end{equation} and a degree $-1$ map of collections \begin{equation} \label{h-0} h_0 : \ker \big( {\mathrm{Cobar} }(P^{\diamondsuit})^{(0)} ~\stackrel{\rho}{\longrightarrow} ~{\mathcal{H}} \big) \to {\mathrm{Cobar} }(P^{\diamondsuit})^{(1)} \end{equation} such that $$ {\partial} \circ h_q (Y) = Y \qquad \forall~~ Y \in {\mathcal{Z}}\big( {\mathrm{Cobar} }(P^{\diamondsuit})^{(q)} \big), ~~ q \ge 1, $$ $$ {\partial} \circ h_0 (Y) = Y \qquad \forall~~ Y \in \ker \big( {\mathrm{Cobar} }(P^{\diamondsuit})^{(0)} ~\stackrel{\rho}{\longrightarrow} ~{\mathcal{H}} \big). $$ \end{claim} \begin{proofOld} Since $\rho$ is a quasi-isomorphism, the existence of the desired maps follows from \eqref{diff-weight}, \eqref{rho-weight}, \eqref{rho-surj} and the fact that we work with collections of vector spaces over a field of characteristic zero. 
\end{proofOld} \vspace{0.18cm} Let us denote by ${\alpha}_{\mathrm{id}}$ the MC element of ${\mathrm{Conv}}(P, {\mathrm{Cobar} }(P^{\diamondsuit}))$ corresponding to $$ \mathrm{id} : {\mathrm{Cobar} }(P^{\diamondsuit}) \to {\mathrm{Cobar} }(P^{\diamondsuit}) $$ and consider \begin{equation} \label{Conv-P-Cobar} {\mathrm{Conv}}(P, {\mathrm{Cobar} }(P^{\diamondsuit})) \end{equation} as the cochain complex with the differential ${\partial} + {\partial}_P + [{\alpha}_{\mathrm{id}}, ~]$, where ${\partial}$ (resp. ${\partial}_P$) is the differential coming from the one on ${\mathrm{Cobar} }(P^{\diamondsuit})$ (resp. $P$). Since ${\mathrm{Cobar} }(P^{\diamondsuit})$ is freely generated by ${\mathbf{s}}\, P$, the assignment $$ \mathcal{D} \mapsto \mathcal{D} \big|_{{\mathbf{s}}\, P} ~\circ~ {\mathbf{s}}\, $$ gives us an isomorphism of graded vector spaces \begin{equation} \label{Der-Conv} \mathrm{Der} \big({\mathrm{Cobar} }(P^{\diamondsuit}) \big) ~\cong~ {\mathrm{Conv}}(P, {\mathrm{Cobar} }(P^{\diamondsuit})) \end{equation} with the obvious shift: every degree $d$ derivation $\mathcal{D}$ corresponds to a degree $d+1$ vector in ${\mathrm{Conv}}(P, {\mathrm{Cobar} }(P^{\diamondsuit}))$. Using the grading \eqref{weights-Cobar} on ${\mathrm{Cobar} }(P^{\diamondsuit})$, we introduce the following subspaces of \eqref{Conv-P-Cobar} for $t \in {\mathbb Z}$ \begin{equation} \label{Conv-P-Cobar-grading} {\mathcal{L}}^{(t)} : = \big\{ f \in {\mathrm{Conv}}(P, {\mathrm{Cobar} }(P^{\diamondsuit})) ~\big|~ f(\mathcal{G}^q P) \subset {\mathrm{Cobar} }(P^{\diamondsuit})^{(q-1) + t}, ~~~ \forall ~ q \ge 1 \big\}. \end{equation} Let us denote by $\{ \mathrm{Der}^{(t)} \}_{t \in {\mathbb Z}}$ the corresponding subspaces in $\mathrm{Der} \big({\mathrm{Cobar} }(P^{\diamondsuit}) \big)$, i.e. 
\begin{equation} \label{Der-t} \mathrm{Der}^{(t)} : = \big\{ \mathcal{D} \in \mathrm{Der} \big({\mathrm{Cobar} }(P^{\diamondsuit}) \big) ~\big|~ \mathcal{D} \big|_{{\mathbf{s}}\, P} \circ {\mathbf{s}}\, \in {\mathcal{L}}^{(t)} \big\}. \end{equation} It is clear that the commutator $[~, ~]$ on $\mathrm{Der} \big({\mathrm{Cobar} }(P^{\diamondsuit}) \big)$ satisfies \begin{equation} \label{brack-with-wghts} [~,~] \, : \, \mathrm{Der}^{(t_1)} \otimes \mathrm{Der}^{(t_2)} \to \mathrm{Der}^{(t_1 + t_2)} \qquad \forall~~ t_1, t_2 \in {\mathbb Z}. \end{equation} Moreover, due to \eqref{diff-weight}, \begin{equation} \label{diff-weight-Der} [{\partial}, ~] : \mathrm{Der}^{(t)} \to \mathrm{Der}^{(t-1)} \qquad \forall ~~ t \in {\mathbb Z}, \end{equation} where ${\partial}$ is the full differential on ${\mathrm{Cobar} }(P^{\diamondsuit})$. Let us prove the following statement: \begin{claim} \label{cl:loc-nilpot} Let $t$ be a negative integer and $\mathcal{D}$ be a degree $0$ derivation in $ \mathrm{Der}^{(t)}$. Then $\mathcal{D}$ acts locally nilpotently on ${\mathrm{Cobar} }(P^{\diamondsuit})$, i.e. for every $X \in {\mathrm{Cobar} }(P^{\diamondsuit})$, there exists an integer $m$ such that $$ \mathcal{D}^{m}(X) = 0. $$ \end{claim} \begin{proofOld} Since every vector in ${\mathrm{Cobar} }(P^{\diamondsuit})$ is a finite linear combination of operadic monomials in ${\bf s} P$, it suffices to prove that for every $X \in {\bf s} P$, there exists $m$ such that $$ \mathcal{D}^m (X) = 0. $$ Again, since every $X \in {\bf s} P$ is a linear combination of vectors in ${\mathbf{s}}\, \mathcal{G}^{k} P$ for various $k$'s, we may assume, without loss of generality, that $X \in {\mathbf{s}}\, \mathcal{G}^k P$ for some $k \ge 1$. By definition of $\mathrm{Der}^{(t)}$, we have $$ \mathcal{D}^m (X) \in {\mathrm{Cobar} }(P^{\diamondsuit})^{((k -1) + m t )}. 
$$ So the desired statement follows from the fact that $$ {\mathrm{Cobar} }(P^{\diamondsuit})^{(r )} = {\mathbf{0} } \qquad \forall ~~ r < 0. $$ \end{proofOld} Claim \ref{cl:loc-nilpot} implies the following: \begin{claim} \label{cl:exp-OK} For every negative integer $t$, every ${\partial}$-closed degree $0$ derivation $$ \mathcal{D} \in \mathrm{Der}^{(t)} \subset \mathrm{Der} \big({\mathrm{Cobar} }(P^{\diamondsuit}) \big) $$ gives us an automorphism of the dg operad ${\mathrm{Cobar} }(P^{\diamondsuit})$ $$ \exp(\mathcal{D}) : {\mathrm{Cobar} }(P^{\diamondsuit}) \stackrel{\cong}{\longrightarrow} {\mathrm{Cobar} }(P^{\diamondsuit}). $$ \end{claim} \begin{proofOld} Claim \ref{cl:loc-nilpot} implies that the formal Taylor series $$ \exp(\mathcal{D}) : = \mathrm{id} + \sum_{k \ge 1} \frac{1}{k!} \mathcal{D}^k $$ is a well-defined automorphism of the graded operad ${\mathbb{OP}}({\bf s} P)$. Since $\mathcal{D}$ is ${\partial}$-closed, this automorphism is also compatible with the differential on ${\mathrm{Cobar} }(P^{\diamondsuit})$. \end{proofOld} Let us now prove the following technical statement: \begin{prop} \label{prop:psi-lift} Let $m$ be an integer $\ge 2$ and $$ \psi \in \mathrm{Hom}_{\mathsf{Coll}} (\mathcal{G}^m P, {\mathcal{H}}) \subset {\mathrm{Conv}}(P, {\mathcal{H}}) $$ be a degree $1$ vector satisfying \begin{equation} \label{psi-closed} \psi \circ {\partial}_{P} + [{\alpha}_{{\mathcal{H}}}, \psi] =0. \end{equation} Then there exists a degree $0$ ${\partial}$-closed derivation $\mathcal{D} \in \mathrm{Der}^{(1-m)} \subset \mathrm{Der} \big({\mathrm{Cobar} }(P^{\diamondsuit}) \big)$ such that \begin{equation} \label{cD-psi} \rho \circ \mathcal{D} \circ {\mathbf{s}}\,\, \big|_{P} = \psi \end{equation} and \begin{equation} \label{cD-less-m} \mathcal{D} ({\mathbf{s}}\, X) = 0 \qquad \forall ~~ X \in \mathcal{G}^{< m} P. 
\end{equation} \end{prop} \begin{proofOld} Since $$ \rho \big|_{{\mathbb{OP}}({\mathbf{s}}\, \mathcal{G}^1 P)} : {\mathbb{OP}}({\mathbf{s}}\, \mathcal{G}^1 P) \to {\mathcal{H}} $$ is onto (and we work with fields of characteristic zero), there exists a degree $1$ vector $$ \Psi_m \in \mathrm{Hom}_{\mathsf{Coll}}\big(\mathcal{G}^m P, {\mathbb{OP}}({\mathbf{s}}\, \mathcal{G}^1 P) \big) \subset {\mathrm{Conv}} \big(P, {\mathrm{Cobar} }(P^{\diamondsuit}) \big) $$ such that $\rho \circ \Psi_m(X) = \psi(X)$ for all $X \in \mathcal{G}^m P$. Clearly, $\Psi_m \in {\mathcal{L}}^{(1-m)}$ and $\Psi_m$ satisfies the equation $$ {\partial} \Psi_m (X) + \Psi_m ({\partial}_P X) + [{\alpha}_{\mathrm{id}}, \Psi_m] (X) = 0 \qquad \forall ~~ X \in \mathcal{G}^{\le m} P. $$ Due to \eqref{psi-closed}, the map $$ \big( \Psi_m \circ {\partial}_P + [{\alpha}_{\mathrm{id}}, \Psi_m] \big) \big|_{ \mathcal{G}^{m+1} P } : \mathcal{G}^{m+1} P \to {\mathrm{Cobar} }(P^{\diamondsuit})^{(0)} $$ lands in the kernel of $\rho$. Hence, by Claim \ref{cl:h-q-exist}, the map $$ \Psi_{m+1} (Y) : = - h_0 \big( \Psi_m ({\partial}_P Y) + [{\alpha}_{\mathrm{id}}, \Psi_m] (Y)\big) ~:~ \mathcal{G}^{m+1} P \to {\mathrm{Cobar} }(P^{\diamondsuit})^{(1)} $$ satisfies $$ {\partial} \Psi_{m+1} (Y) + \Psi_m ({\partial}_P Y) + [{\alpha}_{\mathrm{id}}, \Psi_m] (Y) = 0 \qquad \forall ~~Y \in \mathcal{G}^{(m+1)} P. $$ Therefore, the sum $\Psi^{(m+1)} = \Psi_m + \Psi_{m+1} $ satisfies the equation $$ {\partial} \Psi^{(m+1)} (Y) + \Psi^{(m+1)} ({\partial}_P Y) + [{\alpha}_{\mathrm{id}}, \Psi^{(m+1)} ] (Y) = 0 \qquad \forall ~~Y \in \mathcal{G}^{\le (m+1)} P. $$ Moreover, $\Psi^{(m+1)}$ belongs to ${\mathcal{L}}^{(1-m)}$ by construction. 
Let us assume that we extended $\Psi^{(m+1)}$ to a vector (for some $k \ge 1$) $$ \Psi^{(m+k)} = \Psi_m + \Psi_{m+1} + \dots + \Psi_{m+k}\,, \qquad \Psi_j \in \mathrm{Hom}_{\mathsf{Coll}}\big(\mathcal{G}^j P, {\mathrm{Cobar} }(P^{\diamondsuit})^{(j-m)} \big) $$ such that \begin{equation} \label{Psi-mk-closed} {\partial} \Psi^{(m+k)} (Y) + \Psi^{(m+k)} ({\partial}_P Y) + [{\alpha}_{\mathrm{id}}, \Psi^{(m+k)} ] (Y) = 0 \qquad \forall ~~Y \in \mathcal{G}^{\le (m+k)} P. \end{equation} Let $X \in \mathcal{G}^{m+k+1} P$. Using \eqref{Psi-mk-closed} and the MC equation $$ {\partial} \circ {\alpha}_{\mathrm{id}} + {\alpha}_{\mathrm{id}}\circ {\partial}_P + \frac{1}{2} [{\alpha}_{\mathrm{id}}, {\alpha}_{\mathrm{id}}] = 0 $$ for ${\alpha}_{\mathrm{id}}$, we deduce that $$ {\partial} \big( \Psi^{(m+k)} ({\partial}_P X) + [{\alpha}_{\mathrm{id}}, \Psi^{(m+k)} ] (X) \big) = - [{\alpha}_{\mathrm{id}}, \Psi^{(m+k)} ] ({\partial}_P X) + {\partial} \big( [{\alpha}_{\mathrm{id}}, \Psi^{(m+k)} ] (X) \big) $$ $$ = [{\partial} \circ {\alpha}_{\mathrm{id}} + {\alpha}_{\mathrm{id}} \circ {\partial}_P\,, \, \Psi^{(m+k)} ] (X) - [{\alpha}_{\mathrm{id}}\,,\, {\partial} \circ \Psi^{(m+k)} + \Psi^{(m+k)} \circ {\partial}_P ] (X) $$ $$ = \big( [{\alpha}_{\mathrm{id}}, [{\alpha}_{\mathrm{id}}, \Psi^{(m+k)} ]] - \frac{1}{2} [[{\alpha}_{\mathrm{id}}, {\alpha}_{\mathrm{id}}], \Psi^{(m+k)}] \big)(X) = 0. $$ In other words, the map $$ \big( \Psi^{(m+k)} \circ {\partial}_P + [{\alpha}_{\mathrm{id}}, \Psi^{(m+k)} ] \big) \big|_{ \mathcal{G}^{m+k+1} P} ~:~ \mathcal{G}^{m+k+1} P ~\to~ {\mathrm{Cobar} }(P^{\diamondsuit})^{(k)} $$ lands in ${\mathcal{Z}}({\mathrm{Cobar} }(P^{\diamondsuit})^{(k)})$. 
Hence, by Claim \ref{cl:h-q-exist}, the map $$ \Psi_{m+k+1} (X) : = - h_k \big( \Psi^{(m+k)} ({\partial}_P X) + [{\alpha}_{\mathrm{id}}, \Psi^{(m+k)}] (X)\big) ~:~ \mathcal{G}^{m+k+1} P \to {\mathrm{Cobar} }(P^{\diamondsuit})^{(k+1)} $$ satisfies the equation \begin{equation} \label{next} {\partial} \Psi_{m+k+1} (X) + \Psi^{(m+k)} ({\partial}_P X) + [{\alpha}_{\mathrm{id}}, \Psi^{(m+k)} ] (X) = 0 \qquad \forall ~~X \in \mathcal{G}^{m+k+1} P. \end{equation} Therefore the vector $$ \Psi^{(m+k+1)} : = \Psi^{(m+k)} + \Psi_{m+k+1} = \Psi_{m} + \Psi_{m+1} + \dots + \Psi_{m+k+1} $$ satisfies the equation \begin{equation} \label{Psi-mk1-closed} {\partial} \Psi^{(m+k+1)} (X) + \Psi^{(m+k+1)} ({\partial}_P X) + [{\alpha}_{\mathrm{id}}, \Psi^{(m+k+1)} ] (X) = 0 \qquad \forall ~~X \in \mathcal{G}^{\le (m+k+1)} P. \end{equation} Moreover, since $ \Psi_{m+k+1} \in {\mathcal{L}}^{(1-m)}$, the vector $\Psi^{(m+k+1)}$ also belongs to ${\mathcal{L}}^{(1-m)}$. This inductive argument shows that there exists a degree $1$ vector $$ \Psi = \sum_{j=m}^{\infty} \Psi_j\,, \qquad \Psi_j \in \mathrm{Hom}_{\mathsf{Coll}}\big(\mathcal{G}^j P, {\mathrm{Cobar} }(P^{\diamondsuit})^{(j-m)} \big) $$ such that \begin{equation} \label{Psi-closed} {\partial} \circ \Psi + \Psi \circ {\partial}_P + [{\alpha}_{\mathrm{id}}, \Psi] = 0 \end{equation} and \begin{equation} \label{rho-Psi-m} \rho \circ \Psi_m = \psi. \end{equation} Since $\rho(Z) = 0$ for every $Z \in {\mathrm{Cobar} }(P^{\diamondsuit})^{(t)} $ if $t \ge 1$, equation \eqref{rho-Psi-m} implies that \begin{equation} \label{rho-Psi} \rho \circ \Psi = \psi. \end{equation} Equation \eqref{Psi-closed} implies that the (degree $0$) derivation $$ \mathcal{D} \in \mathrm{Der}^{(1-m)} \subset \mathrm{Der}\big( {\mathrm{Cobar} }(P^{\diamondsuit}) \big) $$ corresponding to $\Psi$ is ${\partial}$-closed. Furthermore, equation \eqref{rho-Psi} implies \eqref{cD-psi}. 
$$ Finally, equation \eqref{cD-less-m} is a consequence of $$ \Psi \big|_{\mathcal{G}^{< m } P} ~ = ~ 0. $$ \end{proofOld} \subsection{The proof of Proposition \ref{prop:the-step}} \label{sec:proof} We will now use Proposition \ref{prop:psi-lift} to prove Proposition \ref{prop:the-step}. Since ${\alpha}$ is an $n$-th MC-sprout and $\beta$ is a genuine MC element of ${\mathrm{Conv}}(P, \mathcal{O} \otimes {\mathbb K})$, we have \begin{equation} \label{curv-beta} {\partial}_{\mathcal{O}} \circ \beta_m + \beta_{m-1} \circ {\partial}_P + \sum_{k=1}^{m-1} \beta_k \bullet \beta_{m-k} = 0, \end{equation} \begin{equation} \label{curv-al} {\partial}_{\mathcal{O}} \circ {\alpha}_m + {\alpha}_{m-1} \circ {\partial}_P + \sum_{k=1}^{m-1} {\alpha}_k \bullet {\alpha}_{m-k} = 0, \end{equation} \begin{equation} \label{curv-beta-next} {\partial}_{\mathcal{O}} \circ \beta_{m+1} + \beta_{m} \circ {\partial}_P + [\beta_1, \beta_m] + \sum_{k=2}^{m-1} \beta_k \bullet \beta_{m+1-k} = 0, \end{equation} and \begin{equation} \label{curv-al-next} {\partial}_{\mathcal{O}} \circ {\alpha}_{m+1} + {\alpha}_{m} \circ {\partial}_P + [{\alpha}_1, {\alpha}_m] + \sum_{k=2}^{m-1} {\alpha}_k \bullet {\alpha}_{m+1-k} = 0. \end{equation} Subtracting \eqref{curv-al} from \eqref{curv-beta} and using \eqref{k-less-than-m}, we get $$ {\partial}_{\mathcal{O}} \circ (\beta_m - {\alpha}_m) = 0. $$ In other words, $\beta_m - {\alpha}_m$ is a map from $\mathcal{G}^m P $ to ${\mathcal{Z}}(\mathcal{O} \otimes {\mathbb K})$. Let \begin{equation} \label{psi-m} \psi_m : = \pi_{{\mathcal{H}}} \circ (\beta_m - {\alpha}_m) \in {\mathrm{Conv}}(P, {\mathcal{H}} \otimes {\mathbb K}). \end{equation} Subtracting \eqref{curv-al-next} from \eqref{curv-beta-next} and using \eqref{k-less-than-m} again, we get \begin{equation} \label{m-plus-1} (\beta_{m} - {\alpha}_m) \circ {\partial}_P + [{\alpha}_1, \beta_m- {\alpha}_m] = -{\partial}_{\mathcal{O}} \circ (\beta_{m+1} - {\alpha}_{m+1}).
\end{equation} Next, we observe that both sides of \eqref{m-plus-1} are maps which land in ${\mathcal{Z}}(\mathcal{O} \otimes {\mathbb K})$. So applying $\pi_{{\mathcal{H}}}$ to both sides of \eqref{m-plus-1} and using $\pi_{{\mathcal{H}}} \circ {\alpha}_1 = {\alpha}^{{\mathcal{H}}}$, we deduce that $$ \psi_m \circ {\partial}_P + [{\alpha}_{{\mathcal{H}}}, \psi_m] = 0. $$ In other words, $\psi_m$ is a cocycle in the cochain complex $$ {\mathrm{Conv}}(P, {\mathcal{H}} \otimes {\mathbb K}) $$ with the differential ${\partial}_P + [{\alpha}_{{\mathcal{H}}}, ~]$. Due to Proposition \ref{prop:psi-lift}, there exists a ${\partial}$-closed degree zero derivation $$ \mathcal{D} \in \mathrm{Der}^{(1-m)} \subset \mathrm{Der} \big( {\mathrm{Cobar} }(P^{\diamondsuit}) \otimes {\mathbb K} \big) $$ such that \begin{equation} \label{cD-psi-m} \rho \circ \mathcal{D} \circ {\mathbf{s}}\,\, \big|_{P} = \psi_m \end{equation} and \begin{equation} \label{cD-less-m-here} \mathcal{D} ({\mathbf{s}}\, X) = 0 \qquad \forall ~~ X \in \mathcal{G}^{< m} P. \end{equation} Thanks to Claim \ref{cl:exp-OK}, $-\mathcal{D}$ can be exponentiated to the automorphism $\exp(-\mathcal{D})$ of the dg operad ${\mathrm{Cobar} }(P^{\diamondsuit}) \otimes {\mathbb K}$. Let $F_{\beta}$ be the quasi-isomorphism of dg operads $ {\mathrm{Cobar} }(P^{\diamondsuit}) \otimes {\mathbb K} \to \mathcal{O} \otimes {\mathbb K}$ corresponding to the MC element $\beta$. Due to \eqref{cD-less-m-here}, the quasi-isomorphism $$ F : = F_{\beta} \circ \exp(-\mathcal{D}) ~:~ {\mathrm{Cobar} }(P^{\diamondsuit}) \otimes {\mathbb K} \to \mathcal{O} \otimes {\mathbb K} $$ satisfies $$ F_{\beta} \circ \exp(-\mathcal{D})({\mathbf{s}}\, X) = F_{\beta}({\mathbf{s}}\, X) \qquad \forall ~~ X \in \mathcal{G}^{< m} P. $$ Furthermore, $$ F_{\beta} \circ \exp(-\mathcal{D})({\mathbf{s}}\, X) - F_{\beta}({\mathbf{s}}\, X) \in {\mathcal{Z}}(\mathcal{O} \otimes {\mathbb K}) \qquad \forall~~ X \in \mathcal{G}^{m} P. 
$$ Using equations $\pi_{{\mathcal{H}}} \circ \beta_1 = {\alpha}^{{\mathcal{H}}}$ and \eqref{cD-psi-m}, we deduce that $$ \pi_{{\mathcal{H}}} \big( F_{\beta} \circ \exp(-\mathcal{D})({\mathbf{s}}\, X) - F_{\beta}({\mathbf{s}}\, X) \big) = - \psi_m (X). $$ Thus the MC element $$ \beta^{\diamond} = \sum_{k=1}^{\infty} \beta^{\diamond}_k\,, \qquad \beta^{\diamond}_k \in \mathrm{Hom}_{\mathsf{Coll}}(\mathcal{G}^kP, \mathcal{O} \otimes {\mathbb K}) $$ corresponding to $F$ has the following properties: $$ \beta^{\diamond}_k = \beta_k (= {\alpha}_k) \qquad \forall~~ k < m, $$ $$ (\beta^{\diamond}_m - \beta_m) (X) \in {\mathcal{Z}}(\mathcal{O}) \qquad \forall~~ X \in \mathcal{G}^m P $$ and $$ \pi_{{\mathcal{H}}} \circ (\beta^{\diamond}_m - \beta_m) (X) = - \psi_m (X) \qquad \forall~~ X \in \mathcal{G}^m P $$ or equivalently \begin{equation} \label{beta-dia-al} \pi_{{\mathcal{H}}} \circ (\beta^{\diamond}_m - {\alpha}_m) (X) = 0 \qquad \forall~~ X \in \mathcal{G}^m P. \end{equation} Hence, using the splitting \eqref{ti-ms}, we define the following degree $0$ vector $$ \xi \in \mathrm{Hom}_{\mathsf{Coll}}(\mathcal{G}^m P, \mathcal{O} \otimes {\mathbb K}) $$ \begin{equation} \label{xi-dfn} \xi(X) : = \ti{{\mathfrak{s}}} \circ (\beta^{\diamond}_m - {\alpha}_m) (X) \qquad X \in \mathcal{G}^m P, \end{equation} which satisfies \begin{equation} \label{xi-beta-al} \beta^{\diamond}_m(X) = {\alpha}_m (X) + {\partial}_{\mathcal{O}} \circ \xi (X). \end{equation} The desired MC element $\ti{\beta}$ is defined by the formula $$ \ti{\beta} = \exp([\xi, ~]) \beta^{\diamond} ~ - ~ \frac{\exp([\xi, ~]) - 1}{[\xi, ~]} \, {\partial} \xi. $$ Indeed, since $\xi(X) = 0$ for all $X \in \mathcal{G}^{< m} P$, $$ \ti{\beta}_k = \beta_k = {\alpha}_k \qquad \forall~~ k < m. $$ Moreover, equation \eqref{xi-beta-al} implies that $$ \ti{\beta}_m = {\alpha}_m\,. $$ Thus Proposition \ref{prop:the-step} is proved. $\qed$
\section{Introduction} States that can be interpreted as the first quanta of collective vibrations are a general property of quantum mesoscopic systems which can be found in various fields of physics. In nuclear physics, such vibrational states of the nucleus have been known for many years~\cite{bm}. These one-phonon states are present both in the low-lying excitation spectra of nuclei and at higher energies. The latter are the Giant Resonances (GR). The existence of two-phonon states, i.e. states which can be described as double excitations of elementary modes, has also been predicted since the early days of the collective model~\cite{bm}. Such states were observed long ago in the low-lying spectra. More recently, two-phonon states built with giant resonances have been populated in heavy ion inelastic scattering~\cite{f88}, in double charge exchange ($\pi^{\pm},\pi^{\mp}$) reactions~\cite{mor88} and in Coulomb excitation at high energy~\cite{schm,ri93,ST95}. For a review, see ref.~\cite{cho95}. In the harmonic approximation, these states are predicted as degenerate multiplets located at an excitation energy equal to the sum of the individual phonon energies. When the residual interaction is taken into account, the degeneracy is broken by the coupling between phonons. In the present article we consider the residual interaction of two-phonon states among themselves and with one-phonon states. Therefore, the eigenstates are linear combinations of one- and two-phonon components while the energies are shifted and split with respect to the harmonic limit. We will call such states mixed states. Evidence of such anharmonic behaviour can be found, for example, in a $(\gamma,\gamma^\prime)$ experiment~\cite{plb94} where the observation of large dipole strength in the low-lying spectrum of some ${\rm Sn}$ isotopes is reported.
Such strength is interpreted by the authors as due partially to the population of the $1^-$ member of the quintuplet of states based on the $| 2^+\otimes 3^->$ two-phonon state, and partially to the admixture of the (one-phonon) GDR in the wavefunction of the state observed around 3.5 MeV excitation energy. As shown in~\cite{cat89}, the inclusion of the residual interaction among two-phonon states leads to small, but sizeable, anharmonicities also in the high-lying spectrum, namely for those states that in the harmonic limit are described as double excitations of GR. The mixing between one- and two-phonon states further increases the anharmonicities. When an external field acts on a nucleus, it excites the eigenstates of the internal hamiltonian, which, in our approach, are superpositions of one- and two-phonon states. The microscopic theory suited for the description of collective vibrational states is the Random Phase Approximation (RPA). Two-phonon states and their mixing among themselves and with one-phonon states can be generated by using boson mapping techniques and by taking into account terms of the residual interaction which do not enter at the RPA level~\cite{cat89,bea92}. In this way one has an RPA-based approach to treat anharmonicities. In a nucleus-nucleus collision, the mutual excitation of the two partners is described as due to the action of the mean field of each nucleus on the other one, i.e. by a one body operator. Assuming that it induces small deformations of the density, only the particle-hole (ph) terms of the external mean field are usually taken into account. This amounts to considering as elementary processes only those corresponding to the creation or annihilation of one phonon. In this approximation, the external field is linear in the creation and annihilation operators of phonons.
When the particle-particle (pp) and hole-hole (hh) terms of the external field are also included, the direct excitation from the ground state to two-phonon states as well as the transition between one-phonon states become possible. These terms can be expressed as quadratic in the creation and annihilation operators of phonons and so correspond to non-linear terms in the excitation operator. In the ``standard'' approach, based on the independent multiphonon picture, the effects coming from both anharmonicities and non-linearities are neglected (see for instance ref.~\cite{be96}). Recent experimental data on Coulomb excitation at relativistic energies have raised some questions on the adequacy of that picture. Indeed, in the excitation of $^{136}$Xe on $^{208}$Pb, the experimental cross section to the double GDR (DGDR) has been found to be 2 to 4 times larger than the theoretical one~\cite{schm}. Recently, new experimental results~\cite{ST95} on the excitation of several nuclei have shown that the disagreement ranges from about 10\% to 60\%, being about 30\% in the case of $^{208}$Pb. In a previous paper~\cite{vol95}, by using a one-dimensional oscillator model to mimic nuclear states, we have shown that the effects of anharmonicities and non-linearities can lead to an important enhancement of the cross section in the energy range around twice that of the GDR. In this model neither spin nor parity were taken into account. Besides, only one type of phonon was considered. In the present paper we present more realistic calculations, where the collective states of the target nucleus and the action on it of the Coulomb field of the projectile are described starting from RPA. Anharmonicities and non-linearities are included by means of boson mapping techniques. Both low-lying collective states and giant resonances are considered as elementary phonons.
We have done calculations for the $^{208}$Pb$ + ^{208}$Pb system at 641 and 1000 MeV per nucleon for which experimental data exist~\cite{ST95}. We have also studied the Coulomb excitation of $^{40}$Ca in the reaction $^{208}$Pb$ + ^{40}$Ca at 1000 MeV/A although there are no experimental data for this case. In both cases we consider as elementary modes all natural-parity RPA phonons whose multipolarity is lower than 4 and whose contribution to the associated energy weighted sum rule (EWSR) is larger than 5\%. Then, we have built the residual interaction in the one- and two-phonon space and we have diagonalized the hamiltonian in this subspace in order to define the mixed states $|\Phi_\alpha>$. By solving the time dependent Schr\"odinger equation in this subspace we get the probability amplitudes for each of the $|\Phi_\alpha>$ states from which we calculate the cross section. We will describe in detail the results for the $^{208}$Pb$ + ^{208}$Pb system at 641 MeV/A, the results at 1000 MeV/A being essentially the same except for the absolute values of the cross section which are higher in the latter case. We will mostly discuss the two regions around the energy of the states built with two low-lying states or with two giant resonances. We will see that non-linearities and anharmonicities may strongly change the cross sections associated with some specific states. Their influence is found to be of the order of 10\% for $^{208}$Pb and 20\% for $^{40}$Ca in the DGDR energy region, bringing the theoretical results closer to the experimental ones. In the next section we detail the model employed and in section 3 we describe the semiclassical electromagnetic field used to excite the nuclei. Section 4 is devoted to the description of the results on $^{208}$Pb where we discuss in a detailed way the effects of both anharmonicities and non-linear terms on the excitation cross section.
The results for $^{40}$Ca are reported in section 5 and finally we draw our conclusions in section 6. \section{The multiphonon picture} Heavy ion collisions at high incident energies can be described within a semiclassical approach, where the relative motion is treated classically while quantum mechanics is used for the internal degrees of freedom of the colliding nuclei. For grazing and large impact parameter collisions the densities of the two nuclei have a small overlap. Therefore, the total hamiltonian can be written as \begin{equation} H = H_A + H_B \end{equation} where $H_A (H_B)$ denotes the hamiltonian of nucleus A(B) and \begin{equation} H_A = H_A^0 + \sum_{\alpha \alpha^\prime} <\alpha|U_B ({\bf R}(t))|\alpha^\prime> a^\dagger_\alpha a_{\alpha^\prime} = H_A^0 + W_A (t) \end{equation} $H^0_A$ being the internal hamiltonian of A. $W_A$ describes the excitation of A by the mean field $U_B$ of nucleus B, whose matrix elements depend on time through the relative coordinate {\bf R}(t). The sums over the single particle states, denoted by $\alpha$ and $\alpha^\prime$, run over both particle and hole states. \subsection{Harmonic approximation} Within RPA, the excited states $|\Psi_\nu>$ of each nucleus are described as superpositions of $p h$ and $h p$ configurations with respect to the ground state $|\Psi_0>$ \begin{equation} |\Psi_\nu> = q^\dagger_\nu |\Psi_0>=\sum_{ph } [X^\nu_{ph} a^\dagger_p a_h - Y^\nu_{ph} a^\dagger_h a_p] |\Psi_0> \end{equation} where the amplitudes $X$ and $Y$ are solutions of the RPA secular equation, with eigenvalues $E_\nu$. The ground state is defined as the vacuum of the $q_\nu$ operators \begin{equation} q_\nu |\Psi_0>=0 \end{equation} In order to avoid unnecessarily complicated expressions, we do not introduce explicitly the coupling to total angular momentum and isotopic spin. 
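The structure of the RPA secular problem can be illustrated numerically. The sketch below is ours, not part of the paper: a schematic model with $N$ degenerate ph pairs and a separable residual interaction (all names and numbers hypothetical), for which the collective solution reduces to a $2\times 2$ block; it checks the collective energy against the known closed form and the RPA normalization $X^2 - Y^2 = 1$ of the amplitudes of eq. (3).

```python
import math

def schematic_rpa(eps, v, N):
    """RPA for N degenerate ph pairs at unperturbed energy eps with a
    separable residual interaction of strength v (schematic model).
    In the collective (symmetric) subspace the RPA equations reduce to
    the 2x2 block [[a, b], [-b, -a]] with a = eps + N*v, b = N*v."""
    a, b = eps + N * v, N * v
    E = math.sqrt(a * a - b * b)          # positive RPA eigenvalue
    X = math.sqrt((a + E) / (2.0 * E))    # forward amplitude
    Y = math.sqrt((a - E) / (2.0 * E))    # backward amplitude
    return E, X, Y

eps, v, N = 5.0, 0.4, 10                  # hypothetical MeV-scale numbers
E, X, Y = schematic_rpa(eps, v, N)
# a repulsive interaction pushes the collective state above eps ...
assert E > eps
# ... to the schematic-model energy sqrt(eps*(eps + 2*N*v))
assert abs(E - math.sqrt(eps * (eps + 2 * N * v))) < 1e-12
# normalization X^2 - Y^2 = 1 of the collective phonon
assert abs(X * X - Y * Y - 1.0) < 1e-12
```

In a realistic calculation the full non-Hermitian RPA matrix is diagonalized instead, but the collective-block reduction above captures how the residual interaction shifts the phonon energy away from the unperturbed ph energy.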
When the RPA phonons are mapped onto bosons~\cite{rs}, the internal hamiltonian of the nucleus can be written as \begin{equation} H^0 = \sum_{\nu} E_\nu Q^\dagger_\nu Q_\nu \end{equation} which shows that the excitation spectrum is harmonic. The boson operators $Q^\dagger_\nu$ and $Q_\nu$ in the above equation are given by \begin{equation} Q^\dagger_\nu =\sum_{ph } [X^\nu_{ph} B^\dagger_{ph} - Y^\nu_{ph} B_{ph}] \end{equation} with the same X and Y amplitudes as in eq.(3) but in terms of the boson images ($B^\dagger_{ph}$ and $B_{ph}$) of the ph operators \begin{equation} a^\dagger_p a_h \rightarrow B^\dagger_{ph} + \dots \end{equation} In the above equation we have indicated only the first term of the boson mapping. Assuming that the external field induces only small deformations of the density, only $p h$ and $h p$ terms contribute to $W_A$ and one gets \begin{equation} W_A(t) = \sum_{ph} <p|U_B({\bf R}(t))|h> a^\dagger_p a_h + h.c. \end{equation} By introducing the boson mapping of eq.(7), it can be rewritten as \begin{equation} W_A(t) = \sum_{\nu} W_\nu^{10}(t) Q^\dagger_\nu + h. c. \end{equation} with \begin{equation} W_\nu^{10} = <\Psi_\nu| W(t) |\Psi_0> \end{equation} The independent multiphonon picture is based on eqs. (5) and (9). Within this picture, the Schr\"odinger equation can be solved exactly and the state of each nucleus at time t is found to be the coherent state \begin{equation} |\Phi(t)> = \prod_{\nu} e^{-{1 \over 2} |I_\nu (t)|^2} \sum_{n_\nu} {[I_\nu (t)]^{n_\nu} \over n_\nu !} e^{-i n_\nu E_\nu t} (Q^\dagger_\nu)^{n_\nu} |0> \end{equation} with \begin{equation} I_\nu (t) = -i \int_{-\infty}^t W_\nu^{10} (t^\prime) e^{i E_\nu t^\prime} \, dt^\prime \end{equation} where the integral is performed along the relative motion trajectory corresponding to a definite impact parameter. The probability amplitude to excite one- or two-phonon states is calculated by projecting eq.
(11) on the corresponding states \begin{equation} |\nu> = Q^\dagger_\nu |0> \end{equation} \begin{equation} |\nu\nu'> = (1 + \delta_{\nu\nu'})^{-1/2} Q^\dagger_\nu Q^\dagger_{\nu'} |0> \end{equation} where $|0>$ is the vacuum of the $Q^\dagger_\nu$ operators. Finally, the cross sections are obtained by integrating the relevant probabilities over the impact parameter. \subsection{Non-linear excitation} In this section we present an approach to go beyond the independent multiphonon picture by eliminating its two main limitations, the first in the external field and the second in the internal hamiltonian. Let us first consider the $p p$ and $h h$ contributions to the sums in eq.(2), which are neglected in the linear approximation for the external field. Assuming the same boson mapping, truncated at the lowest order, it is easily shown~\cite{cat89} that the mappings \begin{equation} a^\dagger_p a_{p^\prime} \rightarrow \sum_h B^\dagger_{ph} B_{p^\prime h} \end{equation} \begin{equation} a_h a^\dagger_{h^\prime} \rightarrow \sum_p B^\dagger_{ph} B_{p h^\prime } \end{equation} are exact, in the sense that they preserve the commutation relations between fermion-pair operators. Using these relations, the inclusion of the $p p$ and $h h$ terms in eq.(2) gives a W quadratic in the boson operators $B^\dagger_{p h}$ and $B_{p h}$. By expressing the latter in terms of the collective bosons $Q^\dagger_\nu$ and $Q_\nu$ one gets \begin{equation} W = W^{00} + \sum_\nu W^{10}_\nu Q^\dagger_\nu + h.c. + \sum_{\nu\nu^\prime} W^{11}_{\nu\nu^\prime} Q^\dagger_\nu Q_{\nu^\prime} + \sum_{\nu\nu^\prime} W^{20}_{\nu\nu^\prime} Q^\dagger_\nu Q^\dagger_{\nu^\prime} + h.c. 
\end{equation} where \begin{equation} \label{e:W10} W^{10}_\nu = \sum_{ph} (W_{ph} X_{ph}^{\nu^*} + W_{hp} Y_{ph}^{\nu^*}) \ \ \end{equation} is the standard linear response expression, whereas \begin{equation} \label{e:W11} W^{11}_{\nu \nu'} = \sum_{php'h'} \left( W_{pp'} \delta_{hh'} - W_{hh'} \delta_{pp'} \right) \left( X_{ph}^{\nu^*} X_{p'h'}^{\nu'}+ Y_{ph}^{\nu^*}Y_{p'h'}^{\nu'} \right) \ \ \end{equation} \begin{equation} \label{e:W20} W^{20}_{\nu \nu'} = \sum_{php'h'} \left( W_{pp'} \delta_{hh'} - W_{h'h} \delta_{pp'} \right) X_{ph}^{\nu^*} Y_{p'h'}^{\nu'^*} \ \ \end{equation} provide new excitation routes. The matrix elements of $U_B$ depend on the considered excitation mechanism. Since the general discussion we present here is independent of their form, we postpone to the next section the derivation of their expressions in the case of Coulomb excitation at relativistic energy. The hamiltonian $H_A$, with the inclusion of the terms $W^{11}$ and $W^{20}$, is a quadratic form in the $Q^\dagger_\nu$ and $Q_\nu$ operators. Therefore, a coherent state solution to the Schr\"odin\-ger equation still exists. We do not exploit this property because it does not hold any more when the anharmonicities are included, as we are going to do in the next subsection. The effects of introducing non-linear terms in the external field can be important, for example, whenever some selection rule disfavours one of the two steps necessary to make the transition from the ground state to a two-phonon state through the action of $W^{10}$ alone. The term $W^{11}_{\nu \nu^\prime}$ describes the transition from the one-phonon state $|\Psi_\nu>$ to $|\Psi_{\nu^\prime}>$ or from a two-phonon state to another one. In refs.~\cite{ca86,ca87,ca88} it was shown that these non-linear terms can lead to an increase of the population of two-phonon states. The term $W^{20}$ induces a direct transition from the ground state to a two-phonon state that can be very important.
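As a concrete illustration of eqs. (18)-(20), the following sketch (ours, not the authors' code) evaluates $W^{10}$, $W^{11}$ and $W^{20}$ for a single phonon in a toy model with real amplitudes, a real symmetric one-body field ($W_{hp}=W_{ph}$) and no angular-momentum coupling; the general four-index sums are checked, for a pp/hh-diagonal field, against the closed forms they must reduce to.

```python
def couplings(Wph, Wpp, Whh, X, Y):
    """W^{10}, W^{11} and W^{20} of eqs. (18)-(20) for a single phonon
    with real amplitudes X[p][h], Y[p][h].  Wph, Wpp and Whh are the
    ph, pp and hh blocks of the external one-body field; p, q label
    particles and h, k holes (toy model, no angular momentum)."""
    P, H = range(len(X)), range(len(X[0]))
    W10 = sum(Wph[p][h] * (X[p][h] + Y[p][h]) for p in P for h in H)
    W11 = sum((Wpp[p][q] * (h == k) - Whh[h][k] * (p == q))
              * (X[p][h] * X[q][k] + Y[p][h] * Y[q][k])
              for p in P for q in P for h in H for k in H)
    W20 = sum((Wpp[p][q] * (h == k) - Whh[k][h] * (p == q))
              * X[p][h] * Y[q][k]
              for p in P for q in P for h in H for k in H)
    return W10, W11, W20

# for a diagonal pp/hh field the sums collapse to
# W11 = sum_(ph) (wp - wh)(X^2 + Y^2) and W20 = sum_(ph) (wp - wh) X Y
wp, wh = [1.0, 2.0], [0.5, 0.25]           # hypothetical numbers
X = [[0.90, 0.10], [0.20, 0.80]]
Y = [[0.10, 0.05], [0.02, 0.03]]
Wpp = [[wp[0], 0.0], [0.0, wp[1]]]
Whh = [[wh[0], 0.0], [0.0, wh[1]]]
Wph = [[0.3, 0.1], [0.2, 0.4]]
W10, W11, W20 = couplings(Wph, Wpp, Whh, X, Y)
ref11 = sum((wp[p] - wh[h]) * (X[p][h]**2 + Y[p][h]**2)
            for p in (0, 1) for h in (0, 1))
ref20 = sum((wp[p] - wh[h]) * X[p][h] * Y[p][h]
            for p in (0, 1) for h in (0, 1))
assert abs(W11 - ref11) < 1e-12 and abs(W20 - ref20) < 1e-12
```

The sketch makes the bookkeeping explicit: $W^{10}$ creates one phonon, $W^{11}$ scatters an existing phonon, and $W^{20}$ creates two phonons at once from the ground state.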
This effect has been already reported in~\cite{plb94} where the direct matrix element between the ground state and the dipole member of the low-lying $| 2^+\otimes 3^->$ quintuplet of states in some Sn isotopes was found to be very large. Similar results, but involving double GR states, have been obtained in~\cite{ccg92}. \subsection{Anharmonic spectrum} Let us now turn our attention to the other limitation of the independent multiphonon picture we have stressed above, namely the assumption that the internal hamiltonian has the harmonic form of eq.(5). The simplest way to go beyond this approximation starts from the observation that in RPA only the $phph$ and $pphh$ terms of the residual interaction are taken into account. The $pppp$ and $hhhh$ terms, when expressed by the same boson mapping used before, introduce a coupling between two-phonon states~\cite{cat89} while the remaining, $ppph$ and $hhhp$, terms mix one- and two-phonon states. Finally, when considering two-phonon states one should also take care of the possible violation of the Pauli principle. In the boson mapping method the exclusion principle is introduced through high order terms in the boson expansion ~\cite{cat89} built to conserve the fermion-pair commutation algebra. In such a way an additional residual interaction between two-phonon states coming from the particle-hole matrix elements is generated \cite{bea92}. As a result of these different couplings, the eigenstates of the internal hamiltonian of each nucleus are \begin{equation} |\Phi_\alpha> = \sum_\nu c^\alpha_\nu |\nu> + \sum_{\nu_1 \nu_2} d^\alpha_{\nu_1 \nu_2} |\nu_1 \nu_2 > \end{equation} Therefore, the states excited by the external field will be such mixed states and one cannot speak of pure one- or two-phonon excitations any more. However, when a $|\Phi_\alpha>$ state has a strong overlap with a one-phonon state we may discuss the associated cross-section as part of the one-phonon cross-section. 
Conversely, if the $|\Phi_\alpha>$ state is dominated by its two-phonon components we may speak about two-phonon strength. For example, let us consider a state $|\Phi_\alpha>$ which strongly overlaps with a two-phonon state, i.e. whose largest component is $|\nu_1 \nu_2>$. In addition to the possible excitation of $|\Phi_\alpha>$ via this two-phonon component, this state can also be excited by $W^{10}$ through its one-phonon components. However, the energy $E_\alpha$ of $|\Phi_\alpha>$ will not be far from $E_{\nu_1} + E_{\nu_2}$. Therefore, it will contribute to the cross section at that energy. In this sense, because of its structure and of its energy, one may say that it contributes to the two-phonon cross section. This fact was disregarded in ref.~\cite{bro}, where the mixing of a huge number of one- and two-phonon states was considered. A good description of the width of the GDR was thus achieved. However, all the $|\Phi_\alpha>$ states were considered to be one-phonon states when they were excited through their one-phonon components while the two-phonon excitations were calculated in~\cite{bro} as the transition to states of the form $|\Phi_\alpha \otimes \Phi_{\alpha^\prime}>$. Therefore, that calculation is somewhat equivalent to considering a harmonic spectrum with the states $|\Phi_\alpha>$ as the elementary quanta. \subsection{Time-dependent excitation process} The cross section is calculated, non-perturbatively, by solving the Schr\"odinger equation in the space of the ground state and the $|\Phi_\alpha>$ states. Then the time dependent state, $|\Psi(t)>$, of the nucleus can be expressed as \begin{equation} |\Psi(t)> = \sum_\alpha A_\alpha (t) e^{-i E_\alpha t} |\Phi_\alpha> \end{equation} where the ground state is also included in the sum as the term $\alpha=0$.
The amplitudes $A_\alpha (t)$ are solutions of the set of linear differential equations \begin{equation} \label{adot} \dot A_\alpha (t) = -i \sum_{\alpha^\prime} e^{i (E_\alpha - E_{\alpha^\prime}) t} <\Phi_\alpha|W(t)|\Phi_{\alpha^\prime}> A_{\alpha^\prime} (t) \end{equation} and the probability of exciting the internal state $|\Phi_\alpha>$ is given by \begin{equation} P_\alpha = |A_\alpha (t = + \infty)|^2 \end{equation} for each impact parameter. Finally, by integrating $P_\alpha$ over the impact parameter we obtain the cross section \begin{equation} \sigma_\alpha = 2\pi \int_{0}^{+\infty} P_\alpha (b) T(b) b db, \end{equation} where the transmission coefficient $T(b)$ has been taken equal to a sharp cutoff function $\theta (b-b_{min})$. The parameter $b_{min}$ is usually chosen such that the contribution from the nuclear part can be neglected. \section{Relativistic Coulomb excitation} Let us now look in detail at the multipole expansion of the external field of eq.(2). Alder and Winther \cite{AW79} have worked out an analytic expression for the Fourier transform of the semiclassical electromagnetic field in relativistic nucleus-nucleus collisions, with the assumptions that the projectile follows a straight-line trajectory and that the charge densities of both nuclei do not overlap. Therefore, to get the time dependence of the electromagnetic coupling potential the inverse Fourier transform of the expressions derived in \cite{AW79} can be taken. This procedure has the advantage that the multipole expansion of the time dependent coupling potential is readily known as well as its electric and magnetic components.
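The time-dependent scheme of eqs. (23)-(25) can be sketched numerically. The code below is a minimal illustration, not the production calculation: it keeps only the ground state and one excited state, replaces the Coulomb field by a hypothetical Gaussian pulse falling off with impact parameter, integrates eq. (23) with a Runge-Kutta step, and performs the sharp-cutoff impact-parameter integral of eq. (25) by the trapezoidal rule.

```python
import cmath, math

def excitation_probability(E1, W, b, dt=0.01, T=8.0):
    """Integrate eq. (23) for the ground state plus one excited state at
    energy E1 (hbar = 1) under a real coupling W(t, b); returns the
    probability P(b) = |A_1(t = +infinity)|^2 of eq. (24)."""
    A = [1.0 + 0j, 0.0 + 0j]
    def f(a, t):
        w = W(t, b)
        return [-1j * w * cmath.exp(-1j * E1 * t) * a[1],
                -1j * w * cmath.exp(+1j * E1 * t) * a[0]]
    for n in range(int(2 * T / dt)):       # classical RK4 stepping
        t = -T + n * dt
        k1 = f(A, t)
        k2 = f([A[i] + 0.5 * dt * k1[i] for i in range(2)], t + 0.5 * dt)
        k3 = f([A[i] + 0.5 * dt * k2[i] for i in range(2)], t + 0.5 * dt)
        k4 = f([A[i] + dt * k3[i] for i in range(2)], t + dt)
        A = [A[i] + dt / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
             for i in range(2)]
    assert abs(abs(A[0])**2 + abs(A[1])**2 - 1.0) < 1e-6  # unitarity
    return abs(A[1])**2

def cross_section(E1, W, b_min, b_max, nb=16):
    """Sharp-cutoff version of eq. (25): sigma = 2 pi int P(b) b db,
    trapezoidal rule between b_min and a large b_max."""
    db = (b_max - b_min) / nb
    bs = [b_min + i * db for i in range(nb + 1)]
    Ps = [excitation_probability(E1, W, b) for b in bs]
    return 2 * math.pi * db * sum(
        (0.5 if i in (0, nb) else 1.0) * Ps[i] * bs[i] for i in range(nb + 1))

# hypothetical pulse mimicking a Coulomb kick falling off with b
pulse = lambda t, b: 0.5 / b**2 * math.exp(-t * t)
P10 = excitation_probability(1.0, pulse, 10.0)
# weak-coupling check: P(b) ~ |int W(t,b) exp(i E1 t) dt|^2 to first order
P_pert = (0.5 / 100.0 * math.sqrt(math.pi) * math.exp(-0.25))**2
assert abs(P10 / P_pert - 1.0) < 0.01
sigma = cross_section(1.0, pulse, 8.0, 30.0)
assert sigma > 0.0
```

With the full set of mixed states the same scheme applies, except that eq. (23) couples all $|\Phi_\alpha>$ amplitudes through the matrix elements of $W(t)$, including the non-linear $W^{11}$ and $W^{20}$ routes.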
Let us introduce the Fourier components of the time dependent coupling potential \begin{equation} \label{e1} W(t) = {1 \over {2\pi}} \int_{-\infty}^{+\infty} e^{-i\omega t} W(\omega) d\omega \end{equation} \noindent Expanding the external field in multipoles $W^{\lambda\mu}$, we can write \begin{equation} \label{e2} W(t) ={1\over {2\pi}} \sum_{\lambda,\mu} \int_0^{+\infty} \big( e^{i\omega t} (-1)^{\lambda +\mu}+ e^{-i\omega t} \big) W^{\lambda\mu}(\omega ) d\omega \end{equation} where we have taken into account the behaviour of the multipoles $W^{\lambda\mu}$ for negative $\omega $. It is shown in ref. \cite{AW79} that the contribution to $W(\omega)$ of the $(\lambda,\mu )$ multipole can be expressed in terms of electric ($\pi=E$) and magnetic ($\pi=M$) one-body operators \begin{equation} \label{a1} W^{\lambda \mu} (\vert \omega \vert) = {{Z_p e} \over { v\gamma}} \sum_\pi G_{\pi \lambda \mu}\big({c\over v}\big) (-1)^\mu K_\mu (\beta \omega ) {{\sqrt{2\lambda+1}}} \big({\omega \over c}\big)^\lambda {\cal M}_t(\pi\omega \lambda -\mu) \end{equation} \noindent where $\beta\omega$ is the adiabaticity parameter related to the impact parameter $b$ and to the Lorentz contraction factor, $\gamma$, and where $K_\mu$ are the modified Bessel functions. The expressions of the functions $G_{\pi \lambda \mu}$ can be found in ref. \cite{AW79}. The 2$^\lambda $-pole electric transition operator is given by \begin{equation} \label{m1} {\cal M} (E\omega \lambda \mu)= {{(2\lambda+1)!! c^\lambda} \over {\omega^{\lambda+1} (\lambda+1)}} \int {\bf J}({\bf r})\cdot {\bf \nabla}\wedge {\bf L} (j_\lambda({{\omega r} \over c})Y_{\lambda\mu}(\hat{r})) d^3{ r} \end{equation} \noindent where ${\bf J}$ is the current density operator while $j_\lambda$ is a spherical Bessel function. This operator can also be written down \cite{rs} as \begin{eqnarray} \label{m2} {\cal M} &(&E\omega \lambda \mu)= {{(2\lambda+1)!!
c^\lambda} \over {\omega^\lambda (\lambda+1)}} \nonumber \\ &\times&\int \left\{ {\bf \rho} ({\bf r}) Y_{\lambda\mu}(\hat{r}) {\partial \over {\partial r}} (rj_\lambda({{\omega r} \over c})) + i {\omega \over c^2} {\bf J}({\bf r})\cdot {\bf r} Y_{\lambda\mu}(\hat{r})j_\lambda({{\omega r} \over c}) \right\} d^3{ r} \end{eqnarray} \noindent where the charge density operator ${\bf \rho}({\bf r}) $ has been introduced. The second term will be neglected since, relative to the first, it is of the order of $\hbar\omega /2 m_p c^2$. To get the time dependent coupling potential as the Fourier transform (\ref{e2}-\ref{a1}), we need the transition operators at any value of $\omega$. Since the argument of the spherical Bessel function in the operator is $\omega r/c$, the dependence on $\omega$ and r of the multipole $W^{\lambda \mu}$ will not factorize. Therefore there would be no factorization of the time and r dependence in the coupling potential. However, in the limit of long wavelengths, the first term in expression (\ref{m2}) reduces to the well-known static electric multipole operator, \begin{equation} \label{m3} {\cal M} (E\omega \lambda \mu) \simeq \hat{Q}_{\lambda\mu} = \int {\bf \rho}({\bf r}) r^\lambda Y_{\lambda\mu}(\hat{r}) d{\bf r} \end{equation} \noindent which does not depend on $\omega$. In a similar way, the general magnetic operator \begin{equation} \label{mm1} {\cal M} (M\omega \lambda \mu)= -i{{(2\lambda+1)!! 
c^{\lambda-1}} \over {\omega^{\lambda} (\lambda+1)}} \int {\bf J}({\bf r})\cdot {\bf L} (j_\lambda({{\omega r} \over c})Y_{\lambda\mu}(\hat{r})) d{\bf r} \end{equation} \noindent in the limit of long wavelengths, becomes \begin{equation} \label{mm3} {\cal M} (M\omega \lambda \mu) \simeq \hat{M}_{\lambda\mu} = {1 \over {c(\lambda +1)}} \int ({\bf r} \wedge {\bf J}({\bf r}))\cdot ({\bf \nabla} r^\lambda Y_{\lambda\mu}(\hat{r})) d{\bf r} \end{equation} This means that, in the limit of long wavelengths, neither the electric nor the magnetic transition operators depend on $\omega$ (we will therefore omit $\omega$ in the arguments of ${\cal M}$) and they can be taken out of the integral in~(\ref{e2}). We just need to know \begin{equation} H_{\lambda \mu}(\beta,t)=\int_0^{+\infty} \big( e^{i\omega t} (-1)^{\lambda +\mu} + e^{-i\omega t} \big) \omega^\lambda K_\mu (\beta\omega) d\omega \end{equation} \noindent with $\mu \geq 0$. $H_{\lambda \mu}(\beta,t)$ is an analytic function whose explicit expression can be found in the appendix. Therefore, in the long wavelength limit, the following analytic expression of the time dependent coupling potential \begin{eqnarray}\label{multi} W(t) &=&{{Z_p e} \over {2\pi v\gamma}} \sum_{\pi \lambda \mu} G_{\pi \lambda \mu}\big({c\over v}\big) (-1)^\mu {{\scriptstyle{\sqrt{2\lambda+1}}} \over {c^\lambda}} H_{\lambda \mu}(\beta,t) \ {\cal M}(\pi \lambda -\mu) \nonumber \\ &=&\sum_{\pi \lambda \mu} g_{\pi \lambda \mu}(\beta,t) (-1)^\mu {\cal M}(\pi \lambda -\mu)/e \end{eqnarray} \noindent can be explicitly derived. The use of the static multipole operators implies that each term in the sum factorizes into two elements: the first depends on the collision properties, the second acts on the nucleus being excited. If one considers only the electric components, which are the most important, only natural parity states can be excited in a first order calculation.
However, in a coupled channel calculation, such as the present one, non natural parity states have to be included since they can be reached, for example, through a two step process. Nevertheless, we will see in the following that these contributions remain small. We want to assess the limits that the use of the static electric operator imposes on the interpretation of our results. If a first-order harmonic and linear calculation were to be done, we could just compare the complete matrix elements $<I_f\vert \vert{\cal M}(E \omega_{if}\lambda)\vert\vert I_i>$ and the approximate one $<I_f\vert \vert \hat{Q}_\lambda \vert \vert I_i>$. The matrix elements connecting the ground state with one phonon states are consistent within a maximum of a few per thousand for low-lying states, and differ by a few percent when the GDR or the ISGQR are considered. \begin{figure} [h] \begin{center} \mbox{{\epsfxsize=13truecm \epsfysize=9truecm \epsfbox{fg1.ps}}} \end{center} \caption {Matrix element of the coupling potential between the ground state of $^{208}$Pb and its GDR with magnetic quantum number zero, as a function of time. There are two groups of lines corresponding to two different impact parameters. The solid lines have been obtained using the general expression (see eq. 30) of the electric dipole operator, while the dashed lines have been obtained with the static expression (eq. 31).} \label{fg1} \end{figure} \begin{figure} \begin{center} \mbox{{\epsfxsize=13truecm \epsfysize=9truecm \epsfbox{fg2.ps}}} \end{center} \caption {As figure \ref{fg1}, but for magnetic quantum number one.} \label{fg2} \end{figure} In a coupled-channel calculation not just a fixed $\omega_{if}$, but the full range of $\omega$ values will contribute to the Fourier transform (\ref{e1}- \ref{a1}). As an illustration, let us consider the colliding system $^{208}$Pb+$^{208}$Pb at E$_{lab}$= 641 MeV per nucleon to compare the exact multipole expansion with the long wavelength limit.
The associated time-dependent transition matrices from the ground state to the giant dipole resonance, $<GDR,\mu\vert W(t) \vert 0>$, are presented in figure \ref{fg1} for the magnetic quantum number $\mu=0$ and in figure \ref{fg2} for $\mu=1$, for two different values of the impact parameter. The solid lines correspond to calculations in which the general expression of the electric multipole operator has been taken into account, and the inverse Fourier transform of the corresponding amplitude has been carried out numerically. The dashed lines correspond to the use of the static electric operator and the analytic expression (\ref{multi}). We can see that qualitatively the time dependence is well reproduced, while the quantitative agreement gets better as the impact parameter increases. That is essentially due to the adiabatic cutoff introduced by the modified Bessel function $K_\mu(\beta \omega)$. This function decays exponentially when its argument becomes larger than 2~\cite{Ab}. Therefore, as the impact parameter increases, the relevant range of $\omega$ in the integral is reduced and we get closer to the long wavelength limit expression for the interacting potential. This effect can be seen in figure \ref{fg3}, where the impact parameter dependence of $<GDR,\mu=1\vert W(t=0) \vert 0>$ is presented. The time t=0 is the one at which the difference between both approaches is maximum when $\lambda+\mu$ is even. A similar behaviour is found when the matrix elements $W^{11}$ or $W^{20}$ are considered. \begin{figure} \begin{center} \mbox{{\epsfxsize=13truecm \epsfysize=9truecm \epsfbox{fg3.ps}}} \end{center} \caption {Impact parameter dependence of the matrix element of the coupling potential at time t=0 between the ground state of $^{208}$Pb and its GDR with magnetic quantum number one.} \label{fg3} \end{figure} The conclusion of this study is that, in the excitation energy region we will consider, it is reasonable to use the static electric operator.
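The closed forms of $H_{\lambda\mu}$ given in the appendix can be spot-checked numerically. For the simplest case $\lambda=\mu=0$ the definition reduces to $2\int_0^{\infty}\cos(\omega t)K_0(\beta\omega)\,d\omega$, whose value is the standard tabulated integral $\pi/\sqrt{\beta^2+t^2}$. The sketch below (an illustrative cross-check, not part of the original calculation) verifies this case by direct quadrature; the truncation of the integral exploits the exponential adiabatic cutoff of $K_0$ discussed above.

```python
# Spot-check of the Fourier integral H_{lambda,mu}(beta, t) for
# lambda = mu = 0, where
#   H_00(beta, t) = 2 * Int_0^inf cos(w t) K_0(beta w) dw
#                 = pi / sqrt(beta^2 + t^2)   (tabulated result).
# Truncating at w = 60/beta is safe because K_0(beta w) decays
# exponentially -- the same adiabatic cutoff discussed in the text.
import numpy as np
from scipy.integrate import quad
from scipy.special import k0

def H00_numeric(beta, t):
    """H_00(beta, t) by direct quadrature."""
    val, _ = quad(lambda w: 2.0 * np.cos(w * t) * k0(beta * w),
                  0.0, 60.0 / beta, limit=400)
    return val

def H00_closed(beta, t):
    """Closed form of the same integral."""
    return np.pi / np.sqrt(beta**2 + t**2)

for beta, t in [(1.0, 0.0), (1.0, 0.5), (2.5, 1.3)]:
    print(f"beta={beta}, t={t}: numeric={H00_numeric(beta, t):.6f}, "
          f"closed form={H00_closed(beta, t):.6f}")
```

The quadrature and the closed form agree to high accuracy, consistent with the use of the analytic expressions for the static-operator coupling.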
The use of the static operators amounts to a considerable saving of computation since, for each multipole, the time and the $r$ dependence factorize. \section{Results about the excitation of $^{208}$Pb} Let us now apply the above formalism to a specific nucleus, namely $^{208}$Pb excited in a collision with a Pb nucleus at 641 MeV per nucleon. We will first discuss the effect of the anharmonicities on the RPA spectrum. Secondly, we will look at the effect of non-linear terms in the external field. Finally, we will consider the influence of both these terms on the excitation probability and the cross sections. \subsection{Energy spectrum} The one-phonon basis is calculated in the self-consistent RPA with the SGII Skyrme interaction \cite{SGII}. Although we are using an explicit neutron-proton representation, isospin turns out to be a rather good quantum number as far as collective states are concerned. We have selected all the states which exhaust at least $5\%$ of the appropriate EWSR and, for a particular spin and parity (and isospin), we have grouped together those which are close in energy according to the method described in ref.~\cite{cat89}. We have considered the one-phonon states reported in table \ref{fpb}, i.e. the various components of the isoscalar monopole resonance (GMR), the components of the isovector dipole resonance $(GDR)$, the low-lying $2^+$ state and the quadrupole resonances, both isoscalar (ISGQR) and isovector (IVGQR), and finally the collective low-lying $(3^-)$ and high-lying $(HEOR)$ isoscalar octupole states. \begin {table} \caption { One-phonon basis for the nucleus $^{208}$Pb.
For each state its spin and parity, isospin, energy and percentage of the EWSR are reported.} \label{fpb} \begin{tabular}{||r||cc|r|r||} \hline Phonons &$J^\pi$&$ T $&$ E (MeV) $&$ \% EWSR$\\ \hline \hline $ GMR_1 $&$ 0^+ $&$ 0 $&$ 13.610 $&$ 61 $\\ $ GMR_2 $&$ 0^+ $&$ 0 $&$ 15.022 $&$ 28 $\\ \hline $ GDR_1 $&$ 1^- $&$ 1 $&$ 12.435 $&$ 63 $\\ $ GDR_2 $&$ 1^- $&$ 1 $&$ 16.662 $&$ 17 $\\ \hline $ 2^+ $&$ 2^+ $&$ 0 $&$ 5.545 $&$ 15 $\\ $ ISGQR $&$ 2^+ $&$ 0 $&$ 11.599 $&$ 76 $\\ $ IVGQR $&$ 2^+ $&$ 1 $&$ 21.815 $&$ 45 $\\ \hline $ 3^- $&$ 3^- $&$ 0 $&$ 3.464 $&$ 21 $\\ $ HEOR $&$ 3^- $&$ 0 $&$ 21.302 $&$ 37 $\\ \hline \end{tabular} \end{table} We have then constructed the residual interaction between the one- and two-phonon states and also among the two-phonon states. The two-phonon states are coupled to a total angular momentum and parity. In the case of the $1^-$ states, while the coupling between one- and two-phonon states ranges from about 0.5 MeV up to 1 MeV, the coupling between two-phonon states is, on average, about one order of magnitude smaller. \begin {table} \caption { Characteristics of the $|\Phi_\alpha>$ dipole 1$^-$ states resulting from the diagonalization of the internal Hamiltonian. In the first column we indicate the dominant component. The second column recalls the energy associated with this component in the harmonic approach. The shift in the energy produced by the anharmonicities is indicated by $\Delta E$ (in keV). We can compare these values with the diagonal matrix elements of the residual interaction, $\Delta E_0$ (in keV). In the last columns we report the amplitudes with which the GDRs appear in such mixed states.} \label{dcpb} {\small \begin{tabular}{||rcl|r||rr|r|r||} \hline Dipole && States &$ E_0 $(MeV)&$ \Delta E $&$ (\Delta E_0) $ &$ c_{_{GDR_1}}$ &$ c_{_{GDR_2}}$\\ \hline \hline $GDR_1\!\!$&$ $&$ $&$ 12.435 $&$ -132. $ &$ (0.) $&$ 0.993 $&$-0.006 $\\ $GDR_2\!\!$&$ $&$ $&$ 16.662 $&$ -56. $ &$ (0.)
$&$ 0.002 $&$ 0.994 $\\ \hline $3^-\!\!$&$\!\otimes\!$&$\!\!2^+ $&$ 9.009 $&$ 195. $ &$ (200.) $&$ 0.023 $&$ 0.000 $\\ $3^-\!\!$&$\!\otimes\!$&$\!\!ISGQR $&$ 15.062 $&$ 75. $ &$ (67.) $&$ 0.045 $&$ 0.000 $\\ $GDR_1\!\!$&$\!\otimes\!$&$\!\!2^+ $&$ 17.981 $&$ -207. $ &$ (-220.) $&$ 0.043 $&$ 0.082 $\\ $GDR_2\!\!$&$\!\otimes\!$&$\!\!2^+ $&$ 22.207 $&$ -23. $ &$ (-36.) $&$ 0.007 $&$ 0.048 $\\ $GDR_1\!\!$&$\!\otimes\!$&$\!\!ISGQR $&$ 24.034 $&$ 33. $ &$ (-10.) $&$ 0.057 $&$-0.004 $\\ $3^-\!\!$&$\!\otimes\!$&$\!\!IVGQR $&$ 25.278 $&$ 6. $ &$ (4.) $&$-0.014 $&$ 0.000 $\\ $GDR_1\!\!$&$\!\otimes\!$&$\!\!GMR_1 $&$ 26.046 $&$ 18. $ &$ (-27.) $&$ 0.057 $&$ 0.000 $\\ $HEOR\!\!$&$\!\otimes\!$&$\!\!2^+ $&$ 26.847 $&$ 25. $ &$ (24.) $&$-0.004 $&$ 0.000 $\\ $GDR_1\!\!$&$\!\otimes\!$&$\!\!GMR_2 $&$ 27.458 $&$ -10. $ &$ (-35.) $&$ 0.039 $&$ 0.000 $\\ $GDR_2\!\!$&$\!\otimes\!$&$\!\!ISGQR $&$ 28.261 $&$ -88. $ &$ (-51.) $&$ 0.007 $&$ 0.054 $\\ $HEOR\!\!$&$\!\otimes\!$&$\!\!ISGQR $&$ 32.901 $&$ -31. $ &$ (-30.) $&$-0.004 $&$ 0.000 $\\ $GDR_1\!\!$&$\!\otimes\!$&$\!\!IVGQR $&$ 34.250 $&$ -44. $ &$ (-47.) $&$-0.007 $&$ 0.000 $\\ $GDR_2\!\!$&$\!\otimes\!$&$\!\!IVGQR $&$ 38.477 $&$ -174. $ &$ (-174.) $&$ 0.006 $&$ 0.000 $\\ $HEOR\!\!$&$\!\otimes\!$&$\!\!IVGQR $&$ 43.117 $&$ -49. $ &$ (-53.) $&$-0.011 $&$ 0.000 $\\ \hline \hline \end{tabular} } \end{table} Then for each spin and parity the total matrix has been diagonalised in order to get the states $|\Phi_\alpha>$. Since these states are always dominated by one component, we have decided to label them by the name of this dominant component. Table \ref{dcpb} gives for all $1^-$ states the total shift $\Delta E$ (in keV) from the unperturbed energy $E_0$ and their components on the GDRs. Tables \ref{lec} and \ref{hec} contain some information on the results of the diagonalization for the low lying and the high lying two-phonon states, respectively (see caption).
We have restricted these tables to natural parity states since the non natural parity states are essentially unmixed and weakly excited. Moreover, states with angular momentum greater than 3 have not been included since they do not play an important role in Coulomb excitation processes. From these tables one can see that the anharmonicities predicted by our microscopic calculations are small, the typical shifts in energy ($\Delta E$) being a few hundred keV. Each multiplet appears to be split, with a characteristic spreading equal to the global shift. The mixing coefficients are also small on average, around 0.05, and at most around 0.2. \begin {table} \caption { As table \ref{dcpb}, but for the low lying states with natural parity. In the first column we give the dominant component while in the last one we report the second most important component and its coefficient.} \label{lec} {\small \begin{tabular}{||rcl|r||c|rr|rrcl||} \hline States&&&$ E_0 $(MeV)&$ J^\pi $&$ \Delta E $&$ (\Delta E_0) $ &$c_{conf}$&Config.&& \\ \hline \hline $ 3^-$&&&$ 3.464 $&$ 3^- $&$ -256. $&$ (0.) $&$ 0.093$ &$ 3^-\!\!$&$\otimes$&$\!\!2^+ $\\ \hline $ 2^+$&&&$ 5.545 $&$ 2^+ $&$-364. $&$ (0.) $&$ -0.201$ &$ 3^-\!\!$&$\otimes$&$\!\!3^- $\\ \hline \hline $3^-\!\!$&$\!\otimes\!$&$\!\!3^- $&$ 6.927 $&$ 0^+ $&$ 958. $ &$ (1137.)$&$-0.163 $&$ GMR_1\!\!$&$ $&$ $\\ $ $&$ $&$ $&$ $&$ 2^+ $&$ 381. $ &$ (400.) $&$ 0.195 $&$ 2^+\!\! $&$ $&$ $\\ \hline $3^-\!\!$&$\!\otimes\!$&$ 2^+\!\! $&$ 9.009 $&$ 1^- $&$ 195. $ &$ (200.) $&$-0.024 $&$ GDR_1\! $&$ $&$ $\\ $ $&$ $&$ $&$ $&$ 3^- $&$ 161. $ &$ (112.) $&$-0.091 $&$ 3^-\!\! $&$ $&$ $\\ \hline $2^+\!\!$&$\!\otimes\!$&$ 2^+\!\! $&$ 11.090 $&$ 0^+ $&$ 136. $ &$ (145.) $&$-0.055 $&$ 3^-\!\! $&$\otimes$&$\!\!3^- $\\ $ $&$ $&$ $&$ $&$ 2^+ $&$ 178. $ &$ (30.) $&$-0.158 $&$ 2^+\!\! $&$ $&$ $\\ \hline $GDR_1\!\!$&$\!\otimes\!$&$2^+\!\!$&$ 17.981 $&$ 1^- $&$-207. $ &$ (-220.)$&$-0.083 $&$ GDR_2\! $&$ $&$ $\\ $ $&$ $&$ $&$ $&$ 3^- $&$ -4. $ &$ (-4.)
$&$-0.010 $&$ GMR_1\!\!$&$\otimes $&$ 3^- $\\ \hline \hline \hline \end{tabular} } \end{table} \begin {table} \caption { Same as table \ref{lec}, but for mixed states with natural parity and with energies between 22 and 29 MeV. } \label{hec} {\footnotesize \begin{tabular}{||rcl|c||c|rr|rrcl||} \hline States &$ $&$ $&$ E_0 $(MeV)&$ J^\pi $&$ \Delta E $ &$ (\Delta E_0) $& $c_{conf}$&Config.&&\\ \hline \hline $ GDR_2\!\!$&$\otimes$&$\!\!2^+ $&$ 22.207 $&$ 1^- $&$ -23.$ &$ (-36.) $&$ -0.046 $&$ GDR_2\!\!$&$ $&$ $\\ $ $&$ $&$ $&$ $&$ 3^- $&$ -66.$ &$ (-64.) $&$ -0.018 $&$ GDR_2\!\!$&$\otimes $&$ISGQR $\\ \hline $ ISGQR\!\!$&$\otimes$&$\!\!ISGQR$&$ 23.198 $&$ 0^+ $&$ 4.$ &$ (3.) $&$ 0.014 $&$ GDR_1\!\!$&$\otimes$&$\!\!GDR_1 $\\ $ $&$ $&$ $&$ $&$ 2^+ $&$ 35.$ &$ (-15.) $&$ -0.061 $&$ ISGQR\!\!$&$ $&$ $\\ \hline $ GDR_1\!\!$&$\otimes$&$\!\!ISGQR$&$ 24.034 $&$ 1^- $&$ 33.$ &$ (-10.) $&$ -0.057 $&$ GDR_1\!\!$&$ $&$ $ \\ $ $&$ $&$ $&$ $&$ 3^- $&$ -2.$ &$ (-2.) $&$ 0.010 $&$ 3^-\!\!$&$\otimes$&$\!\!IVGQR $\\ \hline $ 3^-\!\!$&$\otimes$&$\!\!HEOR $&$ 24.766 $&$ 0^+ $&$ 34.$ &$ (14.) $&$ 0.133 $&$ GDR_1\!\!$&$\otimes$&$\!\!GDR_1 $\\ $ $&$ $&$ $&$ $&$ 2^+ $&$ 22.$ &$ (-2.) $&$ 0.225 $&$ GDR_1\!\!$&$\otimes$&$\!\!GDR_1 $\\ \hline $ GDR_1\!\!$&$\otimes$&$\!\!GDR_1$&$ 24.871 $&$ 0^+ $&$ 41.$ &$ (33.) $&$ -0.132 $&$ 3^-\!\!$&$\otimes$&$\!\!HEOR $\\ $ $&$ $&$ $&$ $&$ 2^+ $&$-189.$ &$ (-192.)$&$ -0.223 $&$ 3^-\!\!$&$\otimes$&$\!\!HEOR $\\ \hline $ GMR_1\!\!$&$\otimes$&$\!\!ISGQR$&$ 25.210 $&$ 2^+ $&$ 42.$ &$ (11.) $&$ -0.048 $&$ ISGQR\!\!$&$ $&$ $\\ \hline $ IVGQR\!\!$&$\otimes$&$\!\!3^- $&$ 25.278 $&$ 1^- $&$ 6.$ &$ (4.) $&$ 0.015 $&$ GDR_1\!\!$&$ $&$ $\\ $ $&$ $&$ $&$ $&$ 3^- $&$ -24.$ &$ (-25.) $&$ -0.018 $&$ HEOR\!\!$&$ $&$ $\\ \hline $ GMR_1\!\!$&$\otimes$&$\!\!GDR_1$&$ 26.046 $&$ 1^- $&$ 18.$ &$ (-27.) $&$ -0.057 $&$ GDR_1\!\!$&$ $&$ $\\ \hline $ GMR_2\!\!$&$\otimes$&$\!\!ISGQR$&$ 26.621 $&$ 2^+ $&$ 25.$ &$ (8.) 
$&$ -0.033 $&$ ISGQR\!\!$&$ $&$ $\\ \hline $ 2^+\!\!$&$\otimes$&$\!\!HEOR $&$ 26.847 $&$ 1^- $&$ 25.$ &$ (24.) $&$ 0.007 $&$ GMR_2\!\!$&$\otimes$&$\!\!GDR_1 $\\ $ $&$ $&$ $&$ $&$ 3^- $&$ -39.$ &$ (-44.) $&$ -0.018 $&$ HEOR\!\!$&$ $&$ $\\ \hline $ GMR_1\!\!$&$\otimes$&$\!\!GMR_1$&$ 27.221 $&$ 0^+ $&$ 297.$ &$ (56.) $&$ -0.128 $&$ GMR_1\!\!$&$ $&$ $\\ \hline $ 2^+\!\!$&$\otimes$&$\!\!IVGQR $&$ 27.360 $&$ 0^+ $&$-196.$ &$ (-195.)$&$ -0.016 $&$ ISGQR\!\!$&$\otimes$&$\!\!IVGQR $\\ $ $&$ $&$ $&$ $&$ 2^+ $&$ -84.$ &$ (-90.) $&$ -0.032 $&$ IVGQR\!\!$&$ $&$ $\\ \hline $ GMR_2\!\!$&$\otimes$&$\!\!GDR_1$&$ 27.458 $&$ 1^- $&$ -10.$ &$ (-35.) $&$ -0.042 $&$ GDR_1\!\!$&$ $&$ $\\ \hline $ GDR_2\!\!$&$\otimes$&$\!\!ISGQR$&$ 28.261 $&$ 1^- $&$ 88.$ &$ (51.) $&$ -0.055 $&$ GDR_2\!\!$&$ $&$ $ \\ $ $&$ $&$ $&$ $&$ 3^- $&$ -60.$ &$ (-62.) $&$ 0.010 $&$ 2^+\!\!$&$\otimes$&$\!\!GDR_2 $\\ \hline $ GMR_1\!\!$&$\otimes$&$\!\!GMR_2$&$ 28.633 $&$ 0^+ $&$ 254.$ &$ (74.) $&$ -0.100 $&$ GMR_2\!\!$&$\otimes$&$\!\!GMR_2 $\\ \hline $ GDR_1\!\!$&$\otimes$&$\!\!GDR_2$&$ 29.097 $&$ 0^+ $&$-178.$ &$ (-182.)$&$ -0.014 $&$ 2^+\!\!$&$\otimes$&$\!\!ISGQR$\\ $ $&$ $&$ $&$ $&$ 2^+ $&$ -64.$ &$ (-65.) $&$ -0.009 $&$ GDR_1\!\!$&$\otimes$&$\!\!GDR_1 $\\ \hline \end{tabular} } \end{table} \subsection{ Excitation Processes } Let us now study the characteristics of the excitation strength. We have seen that the excitation operator contains three parts. The first one is the linear response, which is usually taken into account in standard calculations, i.e. in the harmonic and linear picture. The strength associated with the operator $W^{10}$ is, in this picture, concentrated in the one-phonon states. The introduction of a mixing between states with different numbers of phonons spreads the strength over more states. For instance, in the case of the GDR the strength will be distributed among the dipole states of table \ref{dcpb} in proportion to the $c$ coefficients.
Analogously, the strength of $W^{20}$, initially located in the two-phonon states, will be distributed over many states after the diagonalisation. Moreover, the various states now have two paths to be excited in one step: either through the $W^{10}$ excitation of their one-phonon component, or via the $W^{20}$ interaction exciting directly their two-phonon part. Depending on the respective signs of the mixing coefficients, these two contributions may interfere constructively or destructively. In addition to these direct transitions from the ground state, the term $W^{11}$ of the external field may induce transitions between excited states. These new excitation routes may modify the distribution of the excitation probabilities associated with the different states. In the next subsection we will give a few examples showing the importance of the $W^{11}$ and $W^{20}$ terms and of the anharmonicities. \subsection{Excitation cross-sections} Let us now put all these ingredients together in order to compute the excitation probabilities and cross sections. All the natural parity states with angular momentum less than or equal to 3 have been included in the calculations, while for the non natural parity states we have included only the $1^+$ and the $2^-$ ones. By solving the coupled equations (\ref{adot}) we get the probability amplitude for each $|\Phi_\alpha>$ state, from which we calculate the cross section by integrating over the impact parameters associated with Coulomb inelastic excitations. The value of $b_{min}$ has been chosen according to the systematics of ref.~\cite{Be89}. We will describe in detail the results for the $^{208}$Pb$ + ^{208}$Pb system at 641 MeV/A, and we will first focus our discussion on the excitation of dipole states.
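The final step described above, turning excitation probabilities into a cross section, amounts to the impact parameter integral $\sigma = 2\pi\int_{b_{min}}^{\infty} P(b)\, b\, db$. The following minimal sketch illustrates this step; the probability profile, its normalisation and the values of $b_{min}$ and $d$ are purely illustrative assumptions, not the coupled-channel result.

```python
# Sketch of the impact-parameter integration:
#     sigma = 2*pi * Int_{b_min}^inf P(b) b db.
# The exponential profile and the numbers b_min, d below are hypothetical
# placeholders for the probability P(b) obtained from the coupled equations.
import numpy as np
from scipy.integrate import quad

def cross_section(P, b_min, b_max=500.0):
    """2*pi * integral of P(b)*b from b_min to b_max (fm^2; 1 fm^2 = 10 mb)."""
    val, _ = quad(lambda b: 2.0 * np.pi * P(b) * b, b_min, b_max, limit=200)
    return val

b_min, d = 15.0, 5.0                          # fm, illustrative values only
P = lambda b: 0.1 * np.exp(-(b - b_min) / d)  # toy excitation probability

sigma = cross_section(P, b_min)
sigma_exact = 2.0 * np.pi * 0.1 * d * (b_min + d)  # analytic for this toy P
print(f"sigma = {sigma:.3f} fm^2 (analytic: {sigma_exact:.3f} fm^2)")
```

For this toy profile the integral has a closed form, which provides a check of the quadrature; in the actual calculation $P(b)$ comes from solving the coupled equations at each impact parameter.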
\begin{figure} \begin{center} \mbox{{\epsfxsize=13truecm \epsfysize=19truecm \epsfbox{bar-l1.ps}}} \end{center} \caption {Relativistic Coulomb target dipole excitation cross section for the $^{208}$Pb$ + ^{208}$Pb system at 641 MeV/A. Each bar corresponds to the cross section of a single state. } \label{l1} \end{figure} In fig. \ref{l1} we present the dipole excitation cross-section as predicted using various approximations in order to disentangle the effects of the anharmonicities and non-linearities coming from $W^{11}$ and $W^{20}$. We have run several calculations corresponding to the various possible cases, switching on and off the different terms of the external field. From the figure it is clear that the spectrum is dominated by the dipole resonance. However, one can observe important modifications of the dipole strength for the different calculations compared with the harmonic and linear prediction. \begin{figure} \begin{center} \mbox{{\epsfxsize=11truecm \epsfysize=9truecm \epsfbox{gs-23.ps}}} \end{center} \caption {Schematic representation of the Coulomb excitation of the $|2^+ \otimes 3^->$ state. } \label{gs-23} \end{figure} In particular, states which were not excited in the harmonic and linear picture can reach a sizeable cross-section when all the different corrections are taken into account. For instance, this is the case for the state around 9 MeV, which is mostly built out of the 1$^-$ component of the states resulting from the coupling of the low-lying 3$^-$ and 2$^+$. In the first line of table \ref{spb}, the Coulomb inelastic cross-sections for this state at several degrees of approximation are given. One can see that this two-phonon state is hardly excited in the harmonic and linear picture. Indeed, at this level of approximation, the most direct way to excite this state requires one E$3$ and one E$2$ transition (see figure \ref{gs-23}.a), which are not favourable.
In this case the $W^{11}$ term does not help much because either we reach the state by one E$1$ plus two E$2$ transitions, as in figure \ref{gs-23}.b, or by one E$3$ plus two E$1$ if in the first step we excite the $3^-$ state. In any case, at least one of the involved transitions is of high multipolarity. Conversely, the direct transitions due to the $W^{20}$ terms (see fig. \ref{gs-23}.c) increase the cross section by a huge factor, larger than 500. Indeed, this term corresponds to a dipole transition, which is strongly favoured. The importance of $W^{20}$ will decrease as the excitation energy of the state increases. For instance, the enhancement factor of 500 reduces to about 50 for the dipole states $|2^{+} \otimes HEOR>$ or $|ISGQR \otimes HEOR>$ whose energies are around 30 MeV. \begin {table} \caption { Coulomb inelastic target excitation cross sections (in mb) for the $^{208}$Pb$ + ^{208}$Pb system at 641 MeV/A and for the mixed states which are identified by their dominant component (first column) and their angular momentum and parity (second column). The third column shows the reference result corresponding to a harmonic and linear calculation. In the fourth column the additional inclusion of only the $W^{11}$ non-linear term is allowed. Similarly, in the fifth column the only difference with the reference calculation is due to the addition of only the $W^{20}$ non-linear term. In the sixth column the results of an anharmonic and linear calculation are presented. The last column corresponds to the results of the anharmonic and non-linear approach.} \label{spb} {\small \begin{tabular}{||rcl|c||r||r|r|r||r||} \hline \hline States && &$J^\pi$& harm. &$ W^{11} $&$ W^{20} $ & anharm.&anharm. \\ &&&&\& lin. &&&&\& non-lin.
\\ \hline \hline $ 2^+ $&$ \otimes$&$ 3^- $&$ 1^- $&$ 0.03 $&$ 0.04 $ &$ 16.21 $&$ 2.60 $&$ 29.53 $\\ \hline $ ISGQR $&$ \otimes$&$ 3^- $&$ 1^- $&$ 0.05 $&$ 0.07 $ &$ 17.22 $&$ 3.63 $&$ 5.18 $\\ \hline $ 22 < $&$ E $&$ < 28 $ (MeV)&$ 1^- $&$ 3.55 $&$ 5.95 $ &$ 5.07 $&$ 6.42 $&$ 12.18 $\\ \hline $ 2^+ $&$ \otimes$&$ GDR_1 $&$ 1^- $&$ 1.24 $&$ 2.07 $ &$ 0.99 $&$ 7.64 $&$ 9.83 $\\ \hline \hline \hline $ ISGQR $&$ $&$ $&$ 2^+ $&$298.91 $&$332.56 $ &$300.09 $&$278.35 $&$ 314.18 $\\ \hline \hline \end{tabular} } \end{table} When the mixing of one- and two-phonon states is taken into account, this state can also be populated by $W^{10}$ through its small GDR component (see table \ref{dcpb}). In fact, although the $c$ coefficient of the GDR component is small, this component gives a considerable contribution because it corresponds to a one step dipole excitation. Moreover, the energy of the state (about 9 MeV) is lower than that of the GDR. Altogether, the anharmonicities increase the inelastic cross section by a factor of about 100 with respect to our reference calculation. Finally, when all these different contributions are taken into account, this dipole two-phonon state built from the low-lying 3$^-$ and 2$^+$ receives a cross section of 30 mb, while in the harmonic and linear limit it was just 0.03 mb. In this case the effects of non-linearities and anharmonicities interfere constructively. That is not a general property. An example in which these effects interfere destructively is shown in table \ref{spb}, where the excitation cross section to the dipole state $|ISGQR \otimes 3^{-}>$ is given. In order to clarify this mechanism we have performed a parametric calculation in which only three single phonon states were considered, namely the 3$^-$, the 2$^+$ and the GDR.
Then we have mixed the single phonon $|GDR>$ with the two phonon state $|2^{+} \otimes 3^{-}>$, coupled to a total spin 1, in the following way \begin{eqnarray} |\Phi_1> &=& \cos \beta |2^{+} \otimes 3^{-}> + \sin \beta |GDR> \nonumber\\ |\Phi_2> &=& -\sin \beta |2^{+} \otimes 3^{-}> + \cos \beta |GDR> \label {para} \end{eqnarray} \noindent By increasing the parameter $|\beta|$ we can go from the pure harmonic case ($\beta=0$) to a very strong anharmonicity. By changing the sign of $\beta$, the relative phases of the $|\Phi_{\alpha}>$ components are changed. The energies of the states were kept fixed and equal to the energy, in the harmonic limit, of the main component; i.e. $E_1= E_{2^+} + E_{3^-}$ and $E_2= E_{GDR}$. \begin {table} \caption { Same as table \ref{spb}, but for the parametric state $|\Phi_1>$ of eq. (\ref{para}). The last column gives the values of the parameter $\beta$ used.} \label{tbeta} \begin{tabular}{||c|c|c|c|c|r||} \hline \hline harm. \& lin. &$ W^{11} $&$ W^{20} $& anharm. &anharm. \& non-lin. & $\sin \beta$ \\ \hline \hline $ $&$ $&$ $&$ 1.96 $&$ 29.71 $&$ -0.02$\\ $ 0.26 $&$ 0.27 $&$ 16.58$&$ $&$ $&\\ $ $&$ $&$ $&$ 1.92 $&$ 7.20 $&$ 0.02$\\ \hline \hline \end{tabular} \end{table} The cross sections corresponding to the $|\Phi_1>$ state are shown in table \ref{tbeta} for two opposite values of $\beta$. From the table, we can see that the behaviour of the cross section is very similar to the one obtained in the complete calculation (see table \ref{spb}). Indeed the results for $\beta=-0.02$ are similar to the ones obtained for the $|2^{+} \otimes 3^{-}>$ dipole state, where anharmonicities reinforce the effects of non-linearities on the cross section. Conversely, for $\beta=0.02$ the final result is much lower than the one given by the $W^{20}$ term alone. This result is very similar to the one obtained for the $|ISGQR \otimes 3^{-}>$ state shown in table \ref{spb}.
The reason for this different behaviour can be easily understood in a first order calculation if we take into account the following relations \begin{eqnarray} <\nu \lambda \mu| W(t)|0> &=& g_{E \lambda \mu}(\beta,t) <\nu \lambda|V^{10}(E\lambda)|0> \nonumber\\ <\left[\nu_1 \nu_2 \right] \lambda \mu| W(t)|0> &=& {1 \over {\sqrt{1+\delta_{\nu_1,\nu_2}}}} g_{E \lambda \mu}(\beta,t) <\nu_1 \lambda_1 \nu_2 \lambda_2|V^{20}(E\lambda)|0> \end{eqnarray} \noindent where $g_{E \lambda \mu}$ was defined in equation (\ref{multi}). Let us call $\sigma_1$ the cross section corresponding to the state $|\Phi_1>$. In a first order calculation we get \begin{equation}\label{38} {\sigma_1^{{\rm anharm} \& {\rm non-lin}} \over \sigma_1^{{\rm harm} \& {\rm non-lin}}} = \left( \cos \beta + {{\sin \beta} \over x} \right)^2 \end{equation} \noindent where $x$ is given by the following ratio of matrix elements \begin{equation} x= {{<2^{+} \otimes 3^{-}|V^{20}(E1)|0>} \over {<GDR|V^{10}(E1)|0>}} \end{equation} \noindent Since $|x|$ is usually smaller than 1 the second term in equation (\ref{38}) can be important even for small anharmonicities. The values of $\beta$ and $x$ as well as their signs are important. In the same way we can calculate the cross section $\sigma_2$ corresponding to the state $|\Phi_2>$ and get \begin{equation} {\sigma_2^{{\rm anharm} \& {\rm non-lin}} \over \sigma_2^{{\rm harm} \& {\rm non-lin}}} = \left( \cos \beta - x \enskip{\sin \beta} \right)^2 \end{equation} \noindent Since $|x|$ is usually small, the previous ratio will not differ very much from one. Note also that, in first order, $\sigma_2^{{\rm harm} \& {\rm non-lin}}$ and $\sigma_2^{{\rm harm} \& {\rm lin}}$ coincide, while $\sigma_1^{{\rm harm} \& {\rm non-lin}}$ differs from $\sigma_1^{{\rm harm} \& {\rm lin}}=0$. Now, it happens that the $x$ ratio for the $|2^{+} \otimes 3^{-}>$ and $|ISGQR \otimes 3^{-}>$ dipole states has the same sign and similar values: $-$0.058 and $-$0.091, respectively. 
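Plugging numbers into eq. (\ref{38}) makes the size of this interference effect concrete. The short sketch below evaluates the first-order ratio with $x=-0.058$ (the value quoted above for the $|2^{+} \otimes 3^{-}>$ dipole state) and $\sin\beta=\pm 0.02$ as in the parametric calculation; it is an illustration of the formula, not a new calculation.

```python
# First-order interference ratio of eq. (38):
#   sigma_anh / sigma_harm = (cos(beta) + sin(beta)/x)^2,
# with x = -0.058 (ratio of matrix elements quoted in the text for the
# |2+ x 3-> dipole state) and sin(beta) = +/- 0.02 as in the parametric
# calculation.  The two signs give constructive/destructive interference.
import math

def first_order_ratio(sin_beta, x):
    cos_beta = math.sqrt(1.0 - sin_beta**2)
    return (cos_beta + sin_beta / x) ** 2

x = -0.058
for sin_beta in (-0.02, +0.02):
    r = first_order_ratio(sin_beta, x)
    print(f"sin(beta) = {sin_beta:+.2f}: ratio = {r:.3f}")
```

For $\sin\beta=-0.02$ the ratio is about 1.81 and for $\sin\beta=+0.02$ about 0.43, reproducing the trend of the full parametric cross sections of table \ref{tbeta} ($29.71/16.58 \approx 1.79$ and $7.20/16.58 \approx 0.43$).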
But the coefficients of their GDR components have opposite signs (see table \ref{dcpb}) and their values are such that the dependence of the ratio in equation (\ref{38}) on $\beta$ is nearly linear. Thus in one case anharmonicities and non-linearities interfere constructively, and in the other case destructively. By increasing $|\beta |$ this property is lost and we could have a reinforcement of $\sigma_1$ even if $\beta$ and $x$ have opposite signs. That is confirmed by the parametric calculation to all orders, as can be seen in figure \ref{beta}, where we have increased the $|\beta|$ parameter up to about 0.2. This suggests that for nuclei with stronger anharmonicities we should get a larger increase in the cross section with respect to the linear and harmonic case. \begin{figure} \begin{center} \mbox{{\epsfxsize=9truecm \epsfysize=9truecm \epsfbox{beta-d23.ps}}} \end{center} \caption { Relativistic Coulomb target excitation cross section for the parametric calculation of eq. (\ref{para}) as a function of the absolute value of the mixing coefficient $\sin \beta$. } \label{beta} \end{figure} Of course, in this example we have assumed that the mixing has just two components, which is a great simplification, especially for the state whose main component is the GDR. Let us go back to the complete calculations, where we have a mixing of all the states and their proper energies are taken into account. Similar effects can also be seen on other states in the dipole response of the Pb nucleus (see table \ref{hec}). The two-phonon states located around 25 MeV excitation energy are of particular interest. These states are mainly built by coupling the giant dipole with the monopole and quadrupole states. As in the previous case, the direct transition $W^{20}$ and the mixing are important. In addition, the transitions between one-phonon states also contribute to increase the cross section.
Table \ref{spb} shows that in such a case the increase of the cross section of these two-phonon states is more than 300\%. This recalls the findings of ref.~\cite{vol95}, where, in a very schematic model, we showed that non-linearities and anharmonicities might strongly modify the excitation cross-section. An example where the anharmonicities play an important role is given by the excitation of the $|GDR_1 \otimes 2^+>$ state, whose cross section is reported in table \ref{spb}. The large increase of the anharmonic and non-linear cross section can be entirely ascribed to its large $GDR_2$ component (see table \ref{lec}). This increase of the cross section is seen not only in the dipole channel, which gets large contributions from the GDR itself, but also for other multipolarities. Let us for instance consider the isoscalar giant quadrupole resonance (ISGQR) (see table \ref{spb}). Looking at its excitation we see that the inclusion of $W^{11}$ raises the value of $\sigma$ from 299 mb to 333 mb (in the harmonic case). In this case, besides the direct transition to the GQR due to the action of $W^{10}$, we are considering the second order one which proceeds first through the excitation of the GDR by $W^{10}(1)$ and then to the GQR by means of $W^{11}$ (see fig. \ref{gs-gqr}). This second order process is able to give almost a 12\% increase because the excitation probability of the first transition is very high and because the effect of $W^{11}$ is enhanced by the fact that the energies of the GDR and the GQR are close to each other. We close this detailed analysis with a comment on the effect of the non natural parity states $1^+$ and $2^-$ we have introduced in the calculations. We found that the contribution of the $1^+$ states is, in this respect, negligible, while that of the $2^-$ states, in the region of the DGDR, amounts to about 1 mb. At lower energy, around 16 MeV, their contribution is 2 mb.
Similar conclusions have been recently reached in ref.\cite{be96}. \begin{figure} \begin{center} \mbox{{\epsfxsize=11truecm \epsfysize=9truecm \epsfbox{gs-gqr.ps}}} \end{center} \caption {Schematic representation of the Coulomb excitation of the $|GQR>$ state. } \label{gs-gqr} \end{figure} \begin{figure} \begin{center} \mbox{{\epsfxsize=13truecm \epsfysize=9truecm \epsfbox{dsde1.ps}}} \end{center} \caption {Relativistic Coulomb target excitation cross section for the $^{208}$Pb$ + ^{208}$Pb system at 641 MeV/A as a function of the excitation energy. The three parts correspond to different energy regions. The cross section for each $|\Phi_\alpha>$ state has been smoothed by a lorentzian with a 3 MeV width. For the low energy region we used a 1 MeV width.} \label{dsde1} \end{figure} So far, we have discussed the influence on some particular states. In order to get a global view of the effects of both non-linearities and anharmonicities we must compute the complete inelastic cross section. Therefore, we have summed up all the contributions coming from the various states after a smoothing of each individual line shape by a lorentzian. The results are presented in fig. \ref{dsde1}. For the low energy region (fig. \ref{dsde1}.a) the width of the lorentzian has been chosen equal to 1 MeV, while for the energy region around the GDR (fig. \ref{dsde1}.b) and the one around the DGDR (fig. \ref{dsde1}.c) it has been fixed equal to 3 MeV. In this figure we can see that the single GDR region is not much affected by the anharmonicities and non-linearities while the cross-section in the DGDR region is increased by 10\% when the anharmonicities and non-linearities are taken into account. We would like to point out that this increase is mainly due to the excitation of two-phonon states whose energies are in the DGDR region and whose population is possible only because of the presence of the anharmonicities and the non-linear terms $W^{11}$ and $W^{20}$ in the external field.
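The smoothing procedure used above amounts to spreading each discrete line over a normalized lorentzian. A minimal sketch (with invented line positions and strengths, not the computed spectrum) is:

```python
import math

def smooth(lines, e_grid, width):
    """Spread each discrete line (E_alpha, sigma_alpha) over a normalized
    lorentzian of full width `width`; each lorentzian integrates to
    sigma_alpha over all energies, so the total strength is preserved
    up to the tails cut off by the finite grid."""
    half = width / 2.0
    curve = []
    for e in e_grid:
        s = 0.0
        for e_a, sig_a in lines:
            s += sig_a * (half / math.pi) / ((e - e_a) ** 2 + half ** 2)
        curve.append(s)
    return curve

# purely illustrative line positions (MeV) and strengths (mb)
lines = [(13.5, 3000.0), (26.6, 150.0)]
de = 0.05
grid = [de * i for i in range(1201)]        # 0 .. 60 MeV
curve = smooth(lines, grid, 3.0)            # mb / MeV
recovered = sum(curve) * de                 # close to the summed strength
```

Integrating the smoothed curve over the grid approximately recovers the summed line strengths, which is why the procedure redistributes but does not create cross section.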
The low lying part of the spectrum is also affected and in particular, as we discussed before, a new dipole strength is visible in the 9 MeV region. \begin {table} \caption { Comparison between our theoretical results and the experimental cross sections (in barn) reported in ref. [6] for the Pb + Pb reaction at 641 MeV per nucleon. The theoretical results (first line) correspond to the sum of all GDR (first column) and all DGDR (second column) cross sections. The third column contains the cross section associated with all the states above the IVGQR (E $>$ 22 MeV). The theoretical cross sections are obtained from the non-linear and anharmonic calculation while the numbers in parenthesis refer to the linear and harmonic limit. The experimental results are reported in the second line. The first number corresponds to the extracted GDR cross section while the second number comes from a gaussian fit of the high energy cross section after subtraction of the GDR and GQR single-phonon strength. } \label{tsigma} \begin{tabular}{||c||c||c|c||} \hline \hline & GDR & DGDR & DGDR energy region \\ \hline \hline $\sigma_{\rm th}$&$3.13$ $(3.14)$&$0.21$ $(0.22)$&$0.31$ $(0.28)$ \\ \hline $\sigma_{\rm exp}$ & $3.28 \pm 0.05$ & \multicolumn{2}{c||} {$0.38\pm 0.04$} \\ \hline \hline \end{tabular} \end{table} In table \ref{tsigma} we show a comparison between our theoretical results and the experimental cross sections for the GDR and the DGDR energy regions. The agreement for the GDR seems satisfactory. The theoretical yield associated with the DGDR states explains about 60\% of the experimental cross section. However, this disagreement between the experimental cross section in the DGDR region and our theoretical estimate is reduced to 18\% $\pm$ 10\% by the inclusion of all the different multiphonon states considered in our calculation and lying above the IVGQR.
In conclusion, both the introduction of different two-phonon states and the inclusion of anharmonicities and non-linearities are bringing the theoretical prediction rather close to the experimental observation for the Coulomb excitation of Pb nuclei in the DGDR region. \section{Results about the excitation of $^{40}$Ca} We have also done calculations for the excitation of $^{40}$Ca by a $^{208}$Pb projectile with $E_{\rm lab}=1000$ MeV/A. The one-phonon basis for $^{40}$Ca is shown in table \ref{fca}. We do not have any collective low lying $2^+$ state because the RPA does not generate such a state for the $^{40}$Ca nucleus. The properties of the dipole $1^-$ states are reported in table \ref{dcca}, which is the analogue of table \ref{dcpb} for $^{208}$Pb. We note that we have larger anharmonicities than in the Pb case. \begin {table} \caption { Same as table \ref{fpb}, but for $^{40}$Ca.} \label{fca} \begin{tabular}{||r||lr|r|r||} \hline Phonons &$J^\pi$&T &$E (MeV) $ & \% EWSR \\ \hline \hline $GMR_1$& $0^+$&0 & 18.25 & 30 \\ $GMR_2$& $0^+$&0 & 22.47 & 54 \\ \hline $GDR_1$& $1^-$&1 & 17.78 & 56 \\ $GDR_2$& $1^-$&1 & 22.03 & 10 \\ \hline $ISGQR$& $2^+$&0 & 16.91 & 85 \\ $IVGQR$& $2^+$&1 & 29.53 & 26 \\ \hline $3^- $& $3^-$&0 & 4.94 & 14 \\ $LEOR $& $3^-$&0 & 9.71 & 5 \\ $HEOR $& $3^-$&0 & 31.33 & 25 \\ \hline \end{tabular} \end{table} \begin {table} \caption { Same as table \ref{dcpb}, but for $^{40}$Ca.} \label{dcca} {\small \begin{tabular}{||rcl|r||rr|r|r||} \hline Dipole &&States &$ E_0 $(MeV)&$ \Delta E $&$ (\Delta E_0)$ &$ c_{GDR_1}$&$ c_{GDR_2}$ \\ \hline \hline $ GDR_1 $&$ $&$ $&$ 17.780 $&$ -432. $&$ 0. $ &$ 0.989$&$ -0.006$ \\ $ GDR_2 $&$ $&$ $&$ 22.034 $&$ -391. $&$ 0. $ &$ 0.004$&$ 0.990$ \\ \hline $ ISGQR\!\!$&$\otimes$&$\!\!3^- $&$ 21.851 $&$ 708. $&$ 713. $ &$ 0.024$&$ 0.011$ \\ $ ISGQR\!\!$&$\otimes$&$\!\!LEOR $&$ 26.616 $&$ 231. $&$ 224. $ &$ 0.011$&$-0.011$ \\ $ IVGQR\!\!$&$\otimes$&$\!\!3^- $&$ 34.541 $&$-125. $&$-128. 
$ &$ 0.001$&$-0.020$ \\ $ GDR_1\!\!$&$\otimes$&$\!\!ISGQR $&$ 34.690 $&$ 139. $&$ 35. $ &$-0.063$&$-0.044$ \\ $ GMR_1\!\!$&$\otimes$&$\!\!GDR_1 $&$ 36.026 $&$-110. $&$-214. $ &$-0.075$&$-0.004$ \\ $ GDR_2\!\!$&$\otimes$&$\!\!ISGQR $&$ 38.943 $&$ -21. $&$ -74. $ &$-0.034$&$ 0.034$ \\ $ IVGQR\!\!$&$\otimes$&$\!\!LEOR $&$ 39.305 $&$-245. $&$-245. $ &$ 0.000$&$ 0.003$ \\ $ GMR_1\!\!$&$\otimes$&$\!\!GDR_2 $&$ 40.280 $&$-175. $&$-292. $ &$ 0.011$&$-0.079$ \\ $ GMR_2\!\!$&$\otimes$&$\!\!GDR_1 $&$ 40.249 $&$ 9. $&$-202. $ &$-0.098$&$-0.005$ \\ $ GMR_2\!\!$&$\otimes$&$\!\!GDR_2 $&$ 44.502 $&$ 20. $&$-194. $ &$ 0.000$&$-0.098$ \\ $ GDR_1\!\!$&$\otimes$&$\!\!IVGQR $&$ 47.379 $&$-315. $&$-308. $ &$-0.011$&$-0.003$ \\ $ ISGQR\!\!$&$\otimes$&$\!\!HEOR $&$ 48.240 $&$ -13. $&$ -27. $ &$ 0.000$&$ 0.005$ \\ $ GDR_2\!\!$&$\otimes$&$\!\!IVGQR $&$ 51.633 $&$-270. $&$-271. $ &$ 0.001$&$ 0.001$ \\ $ IVGQR\!\!$&$\otimes$&$\!\!HEOR $&$ 60.929 $&$-271. $&$-275. $ &$-0.009$&$ 0.005$ \\ \hline \hline \end{tabular} } \end{table} The coupled channel equations (\ref{adot}) were solved only for the natural parity states which, as we have seen in the case of Pb, provide the largest contribution to the cross-section. The resulting cross section, after a smoothing by a lorentzian with a 3 MeV width, is shown in fig. \ref{ca}. The peak at around 18 MeV is due to the $GDR_1$ with the contribution of the ISGQR state. The shoulder at about 22 MeV is given by the $GDR_2$ and the two-phonon dipole state $|ISGQR \otimes 3^->$ which gives, in the anharmonic and non-linear case, a 10\% increase. The latter state is excited in the same fashion as the $|ISGQR \otimes 3^->$ state of $^{208}$Pb, see table \ref{sca}, with the difference that now the enhancement factor is 1000 while in the $^{208}$Pb case it was only 100. Finally, we note two interesting energy regions where there is a difference between the harmonic and linear and the anharmonic and non-linear case, namely the regions around 35 and 40 MeV.
The summed cross sections for the $1^-$ states belonging to these two regions are reported in table \ref{sca} for the different kinds of calculations we can make within our approach. From the table we see that the increase is essentially due to the dipole $1^-$ states and this is an almost pure anharmonic effect. The global increase in the DGDR energy region amounts to 20\%, which is twice what we obtained for Pb. \begin {table} \caption { Same as table \ref{spb}, but for $^{40}$Ca.} \label{sca} {\small \begin{tabular}{||rcl|c||r||r|r|r||r||} \hline \hline States && &$J^\pi$& harm. &$ W^{11} $&$ W^{20} $ & anharm.&anharm. \\ &&&&\& lin. &&&&\& non-lin. \\ \hline \hline $ ISGQR $&$ \otimes$&$ 3^- $&$ 1^-$&$ 0.004 $&$ 0.006$ &$ 6.660$&$ 0.284 $&$ 3.955 $\\ \hline $ 34 < $&$ E $&$ < 36 $ (MeV)&$ 1^-$&$ 0.110 $&$ 0.287$ &$ 0.295$&$ 1.723 $&$ 2.221 $\\ \hline $ 38 < $&$ E $&$ < 45 $ (MeV)&$ 1^-$&$ 0.008 $&$ 0.020$ &$ 0.022$&$ 1.468 $&$ 1.698 $\\ \hline \hline \hline \end{tabular} } \end{table} \begin{figure} \begin{center} \mbox{{\epsfxsize=13truecm \epsfysize=9truecm \epsfbox{dsde-ca.ps}}} \end{center} \caption {Relativistic Coulomb target excitation cross section for the $^{208}$Pb$ + ^{40}$Ca system at 1000 MeV/A as a function of the excitation energy. The cross section for each $|\Phi_\alpha>$ state has been smoothed by a lorentzian with a 3 MeV width.} \label{ca} \end{figure} \section{Discussion and conclusion} We have employed an RPA-based approach to compute the anharmonicities: We have diagonalized the residual interaction between RPA phonons in the space of one- and two-phonon states. We have also taken into account the particle-particle and hole-hole terms in the external field, making possible the direct excitation of two-phonon states as well as the transition between one-phonon states. These non-linear terms and the anharmonicities are not taken into account in the ``standard'' approach of the multiphonon picture.
Within this framework we have calculated the Coulomb excitation of $^{208}$Pb and $^{40}$Ca nuclei due to the impinging $^{208}$Pb nucleus at 641 and 1000 MeV/A. In this paper we have shown that the inclusion of both anharmonicities and non-linear terms in the external field reduces the disagreement between the experimental cross section in the DGDR region and the theoretical one calculated within the ``standard'' approach. Moreover, for the $^{208}$Pb case, we have found a large effect also at low energy where the $|2^+ \otimes 3^->$ state would have never been excited without the presence of both the anharmonicities and non-linearities. Since these low lying two-phonon states are strongly mixed and since their energy is low, we believe that they could also be strongly excited by the nuclear part of the mean field at an incident energy lower than the one considered here. Theoretical and experimental work in this direction is called for. In view of our calculations it is clear that non-linearities and anharmonicities have an influence on the Coulomb excitation. On some particular states this influence can be very strong, while averaging over all the states we have found an increase of the cross section by about 10\% for Pb and 20\% for Ca in the region of the two-phonon states, while the energies were modified only by a few percent. However, this might not be the final answer, for several reasons. First, we are working in a truncated subspace in order to keep only one- and two-phonon states. However, we know that a large part of the increase observed in ref. \cite{vol95} is due to the increase of transition matrix elements coming from components of the wave function containing large phonon numbers. These components are not taken into account in the present calculation and this reduces the influence of anharmonicities.
In fact we have tested this point on the simple model reported in reference \cite{vol95} and we have observed that a truncation of the multiphonon space at the two-phonon level reduces the increase of the cross section by almost a factor 4. Unfortunately, this point is not easy to improve because the computation time would become too long if we were forced to include more multiphonon states. We are now trying to develop an alternative approach based on time-dependent mean field theory in the boson representation. The anharmonicities we are computing are mainly due to the residual interaction in channels which are different from the usual particle-hole interaction. One may argue that, as far as effective interactions are concerned, their parameters are only fitted close to the ground state. Therefore, except for the particle-hole channel, the other parts of the interaction are not really constrained by the theory. However, in some cases the residual interaction has been tested far from the ground state. In this respect the relative success of the time-dependent mean field theory (and other treatments such as adiabatic TDHF or the generator coordinate method) might be an indication that the same Skyrme parametrisation also holds for large amplitude motion. However, this point certainly calls for more theoretical developments in order to better define the effective interaction in channels different from the standard particle-hole ones. From the experimental point of view it seems that the $^{208}$Pb nucleus behaves as a rather good vibrator. In fact the discrepancy in the cross-section between theory and experiment is apparently much smaller in the case of Pb than in the case of Xe \cite{ST95}. Moreover, as far as the shift in energy of the two-phonon state with respect to the harmonic limit is concerned, a shift of less than 1\% was found experimentally for Pb while for Xe it was of the order of 10\% \cite{schm,ST95}.
In our calculation Pb also appears as a rather good vibrator and the predicted effects on the energy shifts are consistent with the experiment. This is probably related to the fact that $^{208}$Pb is a double-magic nucleus. It would be very important to study non-magic, open-shell or deformed nuclei which are expected to be poorer vibrators than double-magic nuclei. In particular, it is known that the energy of the GDR is strongly affected by the deformation, indicating a possible strong coupling between dipole and quadrupole degrees of freedom. This may induce a modification of the cross-section stronger than the predicted 10\% to 20\% for spherical-magic nuclei. In this respect, extensions of the presented results to open shell and deformed nuclei are called for. In conclusion, we would like to stress that in addition to the DGDR excitation several states contribute to the cross-section in the DGDR energy region. When the non-linearities and the anharmonicities are taken into account the total theoretical cross-section above the IVGQR comes rather close to the experimental result for the Coulomb excitation of Pb. \ack{ This work has been partially supported by the Spanish DGICyT under contract PB92-0663, by the Spanish-Italian agreement between the DGICyT and the INFN and by the Spanish-French agreement between the DGICyT and the IN2P3. } \section{ Appendix } We just need to know \begin{equation} \label{e3} H_{\lambda \mu}(\beta,t)=\int_0^{+\infty} \big( e^{i\omega t} (-1)^{\lambda +\mu} + e^{-i\omega t} \big) \omega^\lambda K_\mu (\beta\omega) d\omega \end{equation} \noindent with $\mu \geq 0$, since $H_{\lambda \mu}(\beta,t)=H_{\lambda \vert\mu\vert}(\beta,t)$. Considering the cases $\lambda+\mu$ even or odd, together with $t$ positive, negative or null, the integral (\ref{e3}) will be proportional to integrals in \cite{Gr}.
Combining all cases we get \begin{eqnarray} \label{e4} & &H_{\lambda \mu}(\beta,t) = \nonumber \\ & &( 1+(-1)^{\lambda+\mu}) {2^{\lambda-1} \over \beta^{\lambda+1}} \Gamma({\scriptstyle{{1+\lambda+\mu} \over 2}}) \Gamma({\scriptstyle{{1+\lambda-\mu} \over 2}}) F\big({\scriptstyle{ {{1+\lambda+\mu} \over 2},{{1+\lambda-\mu} \over 2}; {1\over 2};-{t^2\over \beta^ 2} }}) - \nonumber \\ &i&( 1-(-1)^{\lambda+\mu}) {{2^\lambda t} \over \beta^{\lambda+2}} \Gamma({\scriptstyle{{2+\lambda+\mu} \over 2}}) \Gamma({\scriptstyle{{2+\lambda-\mu} \over 2}}) F\big({\scriptstyle{ {{2+\lambda+\mu} \over 2},{{2+\lambda-\mu} \over 2}; {3\over 2};-{t^2\over \beta^ 2} }}) \end{eqnarray} \noindent These hypergeometric functions can be transformed following \cite{Ab} as \begin{eqnarray} \label{e5} F(n+{\scriptstyle{1\over 2}},n+{\scriptstyle{1\over 2}}-\mu; m+{\scriptstyle{1\over 2}}; -x^2) &=& {1\over {(1+x^2)^{2n-\mu-m+1/2}}} \nonumber \\ &\times &F(m-n,m-n+\mu;m+{\scriptstyle{1\over 2}};-x^2) \end{eqnarray} If $\lambda+\mu$ is even, then $m=0$ and $n=\frac{\lambda+\mu}{2}$. Whereas if $\lambda+\mu$ is odd, then $m=1$ and $n=\frac{\lambda+\mu+1}{2}$. Therefore, in the cases in which we are interested $m-n$ is an integer $\leq 0$, and the latter hypergeometric function reduces to the following polynomial \begin{equation} \label{e6} F(-\ell,b;c;z)= \sum_{k=0}^\ell {{(-\ell)_k(b)_k} \over {(c)_k}} {{z^k}\over {k!}} \> \>. \end{equation}
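The transformation (\ref{e5}) together with the terminating series (\ref{e6}) can be checked numerically. The following pure-Python sketch (our own check, with the sample values $\lambda=2$, $\mu=0$, i.e. $m=0$, $n=1$) compares the directly summed Gauss hypergeometric series with the polynomial form:

```python
import math

def hyp_series(a, b, c, z, terms=200):
    # Gauss hypergeometric F(a,b;c;z) summed term by term (needs |z| < 1)
    s, term = 1.0, 1.0
    for k in range(terms):
        term *= (a + k) * (b + k) / ((c + k) * (k + 1)) * z
        s += term
    return s

def hyp_terminating(l, b, c, z):
    # F(-l, b; c; z) for integer l >= 0: the polynomial of eq. (e6)
    s, term = 1.0, 1.0
    for k in range(l):
        term *= (-l + k) * (b + k) / ((c + k) * (k + 1)) * z
        s += term
    return s

# lambda = 2, mu = 0 is an even case, so m = 0 and n = (lambda + mu)/2 = 1
lam, mu = 2, 0
m, n = 0, (lam + mu) // 2
x = 0.3
z = -x * x
lhs = hyp_series(n + 0.5, n + 0.5 - mu, m + 0.5, z)
rhs = (1.0 + x * x) ** (-(2 * n - mu - m + 0.5)) * \
      hyp_terminating(n - m, m - n + mu, m + 0.5, z)
```

For this case the identity reads $F(\frac{3}{2},\frac{3}{2};\frac{1}{2};-x^2) = (1-2x^2)/(1+x^2)^{5/2}$, and the two evaluations agree to machine precision.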
\section{Introduction} The famous Heisenberg uncertainty relations \cite{H,H2} play a significant role in the understanding of the quantum world and in explanations of its properties. There is a mathematically rigorous derivation of the position--momentum uncertainty relation and of the uncertainty relation for any pair of non--commuting observables, say $A$ and $B$, but the same cannot be said about the time--energy uncertainty relation. Nonetheless the time--energy uncertainty relation is considered by many authors as having the same status as the position--momentum uncertainty relation, and it is often used as the basis for drawing far--reaching conclusions regarding the prediction of the behavior of some physical systems in certain situations in various areas of physics and astrophysics; from time to time such conclusions were considered crucial. Thus the time--energy uncertainty relation still requires analysis and a check of whether it is correct and well motivated by the postulates of quantum mechanics. Here we present an analysis of the general uncertainty relation derived by Robertson \cite{Robertson} and Schr\"{o}dinger \cite{Schrod-1930} and also of the Heisenberg and Mandelstam--Tamm (MT) time--energy uncertainty relations, made within the framework of the standard formalism of Schr\"{o}dinger and von Neumann quantum mechanics, and we show that the validity of these time--energy uncertainty relations is limited. In Section 2 the reader finds a general theory and calculations. The MT time--energy uncertainty relation is analyzed in Sec. 3. Discussion is presented in Sec. 4. Sec. 5 contains conclusions. \section{Uncertainty principles } The uncertainty principle is one of the most characteristic and important consequences of quantum mechanics.
The best known form of this principle is the Heisenberg uncertainty principle \cite{H} for the position and momentum, which can be written as follows \begin{equation} \Delta_{\phi} x\ \cdot \Delta_{\phi} p_{x}\,\geq \,\frac{\hbar}{2}, \label{H1} \end{equation} where, according to Heisenberg, $\Delta_{\phi} x$ and $\Delta_{\phi} p_{x}$ are {\em ``precisions''} with which the values $x$ and $p$ are known \cite{H}. The current interpretation of $\Delta_{\phi} x$ and $\Delta_{\phi} p_{x}$ follows from the derivation of the uncertainty relation made by Robertson \cite{Robertson} and Schr\"{o}dinger \cite{Schrod-1930}, (see also \cite{M}). According to them $\Delta_{\phi} x$ and $\Delta_{\phi} p_{x}$ denote the standard (root--mean--square) deviations: In a general case for an observable $F$ the standard deviation is defined as follows \begin{equation} \Delta_{\phi} F = \| \delta F|\phi\rangle\|, \label{dF} \end{equation} where $\delta F = (F - \langle F\rangle_{\phi}\,\mathbb{I} )$, and $\langle F\rangle_{\phi} \stackrel{\rm def}{=} \langle \phi|F|\phi\rangle$ is the average (or expected) value of an observable $F$ in a system whose state is represented by the normalized vector $|\phi\rangle \in {\cal H}$, provided that $|\langle\phi|F|\phi \rangle |< \infty$. Equivalently: $\Delta_{\phi} F \equiv \sqrt{\langle F^{2}\rangle_{\phi} - \langle F\rangle_{\phi}^{2}}$. (In Eq. (\ref{H1}) $F$ stands for the position and momentum operators $x$ and $p_{x}$ as well as for their squares). The observable $F$ is represented by a hermitian operator $F$ acting in a Hilbert space ${\cal H}$ of states $|\phi\rangle$. In general, the relation (\ref{H1}) results from basic assumptions of the quantum theory and from the geometry of Hilbert space \cite{Teschl}.
Analogous relations hold for any two observables, say $A$ and $B$, represented by non--commuting hermitian operators $A$ and $B$ acting in the Hilbert space of states (see \cite{Robertson} and also \cite{Schrod-1930}), such that $[A,B]$ exists and $|\phi\rangle \in {\cal D}(AB) \bigcap {\cal D}(BA)$, (${\cal D}({\cal O})$ denotes the domain of an operator $\cal O$ or of a product of operators): \begin{equation} \Delta_{\phi} A \cdot \Delta_{\phi} B\;\geq\;\frac{1}{2} \left|\langle [A,B] \rangle_{\phi} \right|,\label{R1} \end{equation} where the equality takes place if $\left(B - \langle B\rangle_{\phi}\right)|\phi \rangle = i \lambda\left(A - \langle A\rangle_{\phi}\right)|\phi \rangle$, (here, $\lambda = \lambda^{\ast}$), or if $|\phi \rangle$ is an eigenvector for the operators $A$ or $B$, (see, e.g., \cite{Teschl}). The derivation of the inequality (\ref{R1}) is rigorous. Various derivations of the Heisenberg inequality can be found in the literature and in textbooks. One of the methods of deriving the uncertainty relation (\ref{R1}), which can be found in the literature, is the following: The first step is to use the obvious relation resulting from the Schwarz inequality, \begin{equation} \left\| \delta A |\phi\rangle \right\|^{2}\;\left\| \delta B|\phi\rangle \right\|^{2}\,\geq \,\left|\langle\phi| \delta A\;\delta B|\phi \rangle \right|^{2}, \label{dAdB} \end{equation} which holds for all $|\phi\rangle \in {\cal D}(AB) \bigcap {\cal D}(BA)$, (where $ (\Delta_{\phi} A)^{2} \equiv \left\|\delta A|\phi\rangle \right\|^{2}$ and $ (\Delta_{\phi} B)^{2} \equiv \left\|\delta B|\phi\rangle \right\|^{2}$). The next step consists in a suitable transformation of the right hand side of Eq.
(\ref{dAdB}), \begin{eqnarray} \left|\langle\phi| \delta A\;\delta B|\phi \rangle \right|^{2} & = & \left[ \Re\,(\langle \phi|\delta A\;\delta B|\phi \rangle) \right]^{2} + \left[ \Im\,(\langle \phi|\delta A\;\delta B|\phi \rangle) \right]^{2}, \nonumber \\ &=& \frac{1}{4} \left(\langle\phi|( \delta A\;\delta B\,+\, \delta B\;\delta A)|\phi \rangle \right)^{2} \nonumber \\ & &\;\;\;\; \;\;\;+\; \frac{1}{4} \left|\langle\phi|( \delta A\;\delta B\,-\, \delta B\;\delta A)|\phi \rangle \right|^{2} \nonumber \\ &\equiv & \frac{1}{4} \left(\langle\phi|( \delta A\;\delta B\,+\, \delta B\;\delta A)|\phi \rangle \right)^{2} \,+\, \frac{1}{4} \left|\langle\phi|[A,B]|\phi \rangle \right|^{2} \label{Sch-1}\\ & \geq & \frac{1}{4} \left|\langle\phi|[A,B]|\phi \rangle \right|^{2}, \label{R1+1} \end{eqnarray} where $\Re\,(z)$ denotes the real part of the complex number $z$ and $\Im\,(z)$ is the imaginary part of $z$. The property $[\delta A, \delta B] = [A,B]$ taking place for all $|\phi\rangle \in {\cal D}(AB) \bigcap {\cal D}(BA)$ was used in (\ref{Sch-1}). Now if one replaces the right hand side of Eq. (\ref{dAdB}) by (\ref{Sch-1}) then one obtains the uncertainty relation of the type derived by Schr\"{o}dinger \cite{Schrod-1930}: \begin{equation} (\Delta_{\phi} A )^{2}\, \cdot\,( \Delta_{\phi} B)^{2}\,\geq \frac{1}{4} \left(\langle\phi|( \delta A\;\delta B\,+\, \delta B\;\delta A)|\phi \rangle \right)^{2} \,+\, \frac{1}{4} \left|\langle\phi|[A,B]|\phi \rangle \right|^{2}, \label{Sch-2} \end{equation} or, equivalently, in more familiar form, \begin{equation} (\Delta_{\phi} A )^{2}\, \cdot\,( \Delta_{\phi} B)^{2}\,\geq \left(\frac{ \langle(AB +BA)\rangle_{\phi}}{2} - \langle A\rangle_{\phi}\,\langle B \rangle_{\phi}\right)^{2} \,+\, \left|\frac{\langle [A,B] \rangle_{\phi}}{2} \right|^{2}. \label{Sch-3} \end{equation} Note that relations (\ref{Sch-2}), (\ref{Sch-3}) seem to be more general than the relation (\ref{R1}). 
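The inequality (\ref{Sch-3}) can be checked numerically. The following sketch (our own illustration, not part of the cited derivations) evaluates both sides for a pair of non--commuting $2\times 2$ hermitian matrices and an arbitrary normalized state:

```python
import cmath
import math

def matmul(M, N):
    # product of two 2x2 complex matrices
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def expval(M, v):
    # <v|M|v> for a normalized two-component state v
    return sum(v[i].conjugate() * M[i][j] * v[j] for i in range(2) for j in range(2))

A = [[0, 1], [1, 0]]            # sigma_x
B = [[1, -1j], [1j, -1]]        # sigma_y + sigma_z (hermitian, [A,B] != 0)
theta = 0.7
phi = [math.cos(theta) + 0j, math.sin(theta) * cmath.exp(0.4j)]

mA = expval(A, phi).real
mB = expval(B, phi).real
varA = expval(matmul(A, A), phi).real - mA ** 2      # (Delta A)^2
varB = expval(matmul(B, B), phi).real - mB ** 2      # (Delta B)^2
ab = expval(matmul(A, B), phi)
ba = expval(matmul(B, A), phi)
lhs = varA * varB
rhs = ((ab + ba).real / 2 - mA * mB) ** 2 + (abs(ab - ba) / 2) ** 2
# lhs >= rhs, as (Sch-3) requires; for a two-dimensional state space the bound
# is in fact saturated, since delta A|phi> and delta B|phi> are both orthogonal
# to |phi> and hence proportional to each other
```

The saturation in two dimensions is an incidental feature of this small example; in higher-dimensional state spaces the inequality is generically strict.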
On the other hand, if one replaces the right hand side in Eq. (\ref{dAdB}) by (\ref{R1+1}) then one obtains the uncertainty relation (\ref{R1}) as a result. Let us now analyze the cases of vanishing expectation value of the commutator $[A,B]$, i.e., the cases when $\langle\phi|[A,B]|\phi \rangle = 0$. Note that it is not necessary for $A$ and $B$ to commute, $[A,B] =0$, in order that $\langle\phi|[A,B]|\phi \rangle = 0$ for some $|\phi\rangle \in {\cal H}$. It may simply happen that for some $|\phi \rangle \in {\cal H}$ and for some non-commuting observables $A$ and $B$ there is $\langle\phi|[A,B]|\phi \rangle = 0$. First, let us assume that there exist some non--commuting observables $A$ and $B$ and some vectors $|\phi\rangle \in {\cal H}$, which are not eigenvectors for $A$ and $B$, such that $\langle\phi|[A,B]|\phi \rangle = 0$ for these $|\phi\rangle$. Then \begin{equation} \langle\phi|[A,B]|\phi \rangle = 0\;\;\Rightarrow\;\; \langle\phi|AB|\phi \rangle = \langle\phi |BA|\phi \rangle = (\langle\phi|AB|\phi \rangle)^{\ast}, \label{[ab]=0} \end{equation} where generally $\langle \phi |AB|\phi \rangle \neq 0$. The relation (\ref{[ab]=0}) implies that also $\langle\phi|\delta A \,\delta B|\phi \rangle$ $ = \langle\phi |\delta B \, \delta A|\phi \rangle = (\langle\phi|\delta A \,\delta B|\phi \rangle)^{\ast}$ and in such a case the uncertainty relations (\ref{Sch-2}), (\ref{Sch-3}) reduce to the original inequality (\ref{dAdB}).
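As a simple illustration of this first case (our own example, not taken from the cited papers), one can take $A = x$, $B = p^{2}$ and any real, normalizable wave function $\phi(x)$, for which $\langle p\rangle_{\phi} = -i\hbar\int_{-\infty}^{+\infty}\phi\,\phi'\,dx = -\frac{i\hbar}{2}\left[\phi^{2}\right]_{-\infty}^{+\infty} = 0$:

```latex
\[
[x,p^{2}] = [x,p]\,p + p\,[x,p] = 2i\hbar\,p
\;\;\Rightarrow\;\;
\langle\phi|[x,p^{2}]|\phi\rangle = 2i\hbar\,\langle p\rangle_{\phi} = 0,
\]
```

although $[x,p^{2}] \neq 0$ and such a $|\phi\rangle$ (e.g., a real Gaussian wave packet) is an eigenvector of neither $x$ nor $p^{2}$, so both $\Delta_{\phi}x$ and $\Delta_{\phi}p^{2}$ remain nonzero.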
The situation is different in the case of the inequality (\ref{R1}): If it happens that $\langle\phi|[A,B]|\phi \rangle = 0$ for some $|\phi\rangle \in {\cal H}$ then for these $|\phi\rangle$ the inequality (\ref{R1}) takes the following form \begin{equation} \Delta_{\phi} A \cdot \Delta_{\phi} B\;\geq\;0, \label{dAdB=0} \end{equation} which means that in such a case, contrary to the inequalities (\ref{Sch-2}), (\ref{Sch-3}), the inequality (\ref{dAdB=0}) does not impose any restrictions on the values of $\Delta_{\phi} A$ and $ \Delta_{\phi} B$. Now let us consider the second case: The case when $[A,B] \neq 0$ and $|\phi \rangle$ is a normalized eigenvector of $A$ or $B$. So let $|\phi \rangle = |\phi_{\alpha}\rangle \in \Sigma_{A} \bigcap {\cal D}(B)$, where $\Sigma_{A} \subset {\cal H}$ is the set of all eigenvectors for $A$ (or $|\phi \rangle = |\phi_{\beta}\rangle \in \Sigma_{B} \bigcap {\cal D}(A)$, where $\Sigma_{B} \subset {\cal H}$ denotes the set of all eigenvectors for $B$). We assume that the sets $\Sigma_{A}, \Sigma_{B}$ are not empty. Typical observables are represented by self--adjoint operators, whose eigenvectors usually form a linearly dense (complete, or total) set in the Hilbert (state) space ${\cal H}$. Our analysis refers to just such cases and does not apply to observables with a purely continuous spectrum. There is $A|\phi_{\alpha}\rangle = a_{\alpha}|\phi_{\alpha}\rangle$ for all $|\phi_{\alpha}\rangle \in \Sigma_{A}$ and therefore $\langle\phi_{\alpha}|[A,B]|\phi_{\alpha} \rangle = 0$ and $\delta A |\phi_{\alpha}\rangle = 0$ for all $|\phi_{\alpha}\rangle \in \Sigma_{A} \bigcap {\cal D}(B)$. This means that the right hand sides of the inequalities (\ref{R1}), (\ref{dAdB}) and (\ref{Sch-2}) take the value zero and all these inequalities take the form of the inequality (\ref{dAdB=0}). (In fact the property $\delta A |\phi_{\alpha}\rangle = 0$ means that one obtains the result (\ref{dAdB=0}) directly from (\ref{dAdB})\,).
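A concrete finite-dimensional illustration of this second case (again our own example) is given by the spin operators $A=\sigma_{z}$, $B=\sigma_{x}$ and the eigenvector $|\phi\rangle = |{\uparrow}\rangle$ of $\sigma_{z}$:

```latex
\[
\delta A\,|{\uparrow}\rangle = (\sigma_{z} - \mathbb{I})\,|{\uparrow}\rangle = 0
\;\;\Rightarrow\;\; \Delta_{\phi}A = 0,
\qquad
\langle{\uparrow}|[\sigma_{z},\sigma_{x}]|{\uparrow}\rangle
 = 2i\,\langle{\uparrow}|\sigma_{y}|{\uparrow}\rangle = 0,
\]
```

so the right hand sides of (\ref{R1}) and (\ref{Sch-3}) vanish, while $\Delta_{\phi}B = \Delta_{\phi}\sigma_{x} = 1$ remains finite and completely unconstrained by the inequality.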
What is more, $\Delta_{\phi_{\alpha}}A = \|\delta A|\phi_{\alpha}\rangle\| = 0$ in the considered case, which together with the relation (\ref{dAdB=0}) implies that in such a case the inequality (\ref{dAdB=0}) does not impose any restrictions on the standard (root--mean--square) deviation $\Delta_{\phi_{\alpha}}B$ besides the condition that there should be $0 \leq \Delta_{\phi_{\alpha}}B < \infty$. One meets analogous situations when $|\phi\rangle = |\phi_{\beta}\rangle \in \Sigma_{B} \bigcap {\cal D}(A)$. So, the considered case shows that there are situations (that is, there are some state vectors $|\phi\rangle \in {\cal H}$) such that the uncertainty relations (\ref{R1}), (\ref{Sch-2}), (\ref{Sch-3}) lead to no restrictions on the standard deviations of non--commuting observables $A$ and $B$. The set of vectors for which this property takes place need not be small: on the contrary, the sets $\Sigma_{A}, \Sigma_{B}$, as the sets of eigenvectors of self--adjoint operators, are usually linearly dense (complete) sets in the state space ${\cal H}$ and can be used as a basis in ${\cal H}$. \section{Time--energy uncertainty relations } Now we can apply the results of the previous Section to the analysis of the time--energy uncertainty relations. The validity of a relation analogous to (\ref{H1}) for time and energy was postulated by Heisenberg in \cite{H} (see also \cite{J}). This time--energy uncertainty relation was a result of Heisenberg's heuristic considerations and it is usually written as follows \begin{equation} \Delta_{\phi} t \cdot \Delta_{\phi} E \geq \frac{ \hbar}{2}.\label{H2} \end{equation} According to Heisenberg, this relation {\em ``shows how a precise determination of the energy can only be obtained at the cost of a corresponding uncertainty in the time''} \cite{H}. A more rigorous derivation of an inequality of the form (\ref{H2}) was proposed by Mandelstam and Tamm \cite{M-T} and it is now known as the Mandelstam--Tamm time--energy uncertainty relation.
Their derivation is reproduced in \cite{M}. The starting point of this derivation is the general uncertainty relation (\ref{R1}). In (\ref{R1}) the operator $B$ is replaced by the self--adjoint time--independent Hamiltonian $H$ of the system considered and $\Delta_{\phi} B$ is replaced by $\Delta_{\phi} H $; then, identifying the standard deviation $\Delta_{\phi} H $ with $\Delta_{\phi} E$, one finds that \begin{equation} \Delta_{\phi} A \cdot \Delta_{\phi} E\;\geq\;\frac{1}{2} \left| \langle [A,H] \rangle_{\phi} \right|,\label{M1} \end{equation} where it is assumed that $A$ does not depend upon the time $t$ explicitly, $|\phi\rangle \in {\cal D}(HA) \bigcap {\cal D}(AH)$, and $[A,H]$ exists. The next step is to use the Heisenberg representation and the corresponding equation of motion, which allows one to replace the average value of the commutator standing in the right--hand side of the inequality (\ref{M1}) by the derivative with respect to time $t$ of the expected value of $A$, \begin{equation} \langle [A,H] \rangle_{\phi} \equiv i\hbar \frac{d}{dt} \langle A \rangle_{\phi}. \label{M2} \end{equation} Eq. (\ref{M2}) means that the inequality (\ref{M1}) takes the following equivalent form, \begin{equation} \Delta_{\phi} A \cdot \Delta_{\phi} E\;\geq\;\frac{\hbar }{2} \left| \frac{d}{dt} \langle A \rangle_{\phi} \right|.\label{M3} \end{equation} (Relations (\ref{M1})--(\ref{M3}) are rigorous).
Next, the authors of \cite{M,M-T} and many others divide both sides of the inequality (\ref{M3}) by the term $\left| \frac{d}{dt} \langle A \rangle_{\phi} \right|$, which leads to the following relation \begin{equation} \frac{\Delta_{\phi} A \cdot \Delta_{\phi} E }{\left| \frac{d}{dt} \langle A \rangle_{\phi} \right|} \; \geq \; \frac{\hbar}{2}, \label{M4} \end{equation} usually written as \begin{equation} \frac{\Delta_{\phi} A }{\left| \frac{d}{dt} \langle A \rangle_{\phi} \right|} \cdot \Delta_{\phi} E\; \geq \; \frac{\hbar}{2}, \label{M4a} \end{equation} or, using \begin{equation} \tau_{A} \stackrel{\rm def}{=} \frac{\Delta_{\phi} A}{\left| \frac{d}{dt} \langle A \rangle_{\phi} \right|},\label{tau} \end{equation} to the final result known as the MT time--energy uncertainty relation, \begin{equation} \tau_{A} \cdot \Delta_{\phi} E \geq \frac{\hbar}{2}, \label{M5} \end{equation} where $\tau_{A}$ is usually considered as a time characteristic of the evolution of the statistical distribution of $A$ \cite{M}. The time--energy uncertainty relation (\ref{M5}) and the above described derivation of this relation are accepted by many authors analyzing this problem or applying this relation (see, e.g., \cite{JB,Bauer,Gislason,skr} and many other papers). On the other hand there are some formal controversies regarding the role and importance of the parameter $\tau_{A}$ in (\ref{M5}) or $\Delta t$ in (\ref{H2}). These controversies are caused by the fact that in quantum mechanics the time $t$ is a parameter. It simply cannot be described by a self--adjoint operator, say $T$, acting in the Hilbert space of states (that is, time cannot be an observable) such that $[T,H] = i\hbar \mathbb{I}$ if the Hamiltonian $H$ is bounded from below. This observation was formulated by Pauli \cite{Pauli} and is known as ``Pauli's Theorem'' (see, e.g., \cite{JB,Busch}).
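To see how relation (\ref{M5}) works in practice, consider a minimal two-level illustration (our own, with $\hbar=1$): for $H = \mathrm{diag}(0,\hbar\omega)$, $A=\sigma_{x}$ and the state $|\phi(t)\rangle = (|0\rangle + e^{-i\omega t}|1\rangle)/\sqrt{2}$, the bound is actually saturated, $\tau_{A}\,\Delta_{\phi}E = \hbar/2$, at every instant where $\frac{d}{dt}\langle A\rangle_{\phi}\neq 0$:

```python
import math

hbar, omega = 1.0, 2.0   # two-level system: H = diag(0, hbar*omega)

def mean_sx(t):
    # |phi(t)> = (|0> + exp(-i omega t)|1>)/sqrt(2)  =>  <sigma_x> = cos(omega t)
    return math.cos(omega * t)

t, h = 0.4, 1e-6
delta_A = math.sqrt(1.0 - mean_sx(t) ** 2)           # <sigma_x^2> = 1
dAdt = (mean_sx(t + h) - mean_sx(t - h)) / (2 * h)   # numerical d<A>/dt
delta_E = hbar * omega / 2                           # sqrt(<H^2> - <H>^2)
tau_A = delta_A / abs(dAdt)
product = tau_A * delta_E                            # equals hbar/2 here
```

Here $\tau_{A} = |\sin\omega t|/(\omega|\sin\omega t|) = 1/\omega$ and $\Delta_{\phi}E = \hbar\omega/2$, so the product is $\hbar/2$ independently of $t$.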
Therefore the status of the relation (\ref{H2}) and that of the relations (\ref{H1}), (\ref{R1}) is not the same regarding the basic principles of the quantum theory (see also the discussion, e.g., in \cite{Vor,Hi1,Hi2,Br}). At this point it should be mentioned that there were many attempts to derive the time--energy uncertainty relation using a "time" operator $T$. For example, the relation $[T,H] = i\hbar \mathbb{I}$, (where the operator $T$ is self--adjoint), was used to derive rigorously the time--energy uncertainty relation from a quantum theory of events, but one should remember that the {\em "events" theory} is not a "particle" theory like standard quantum theory and $H$ is not a "normal" quantum--mechanical Hamiltonian (see \cite{Edwards}). Another attempt to derive the inequality (\ref{H2}) is related to the use of the so--called "{\em tempus}" operator $\cal T$ (see \cite{Kobe2,Kobe3}). Using in classical mechanics a canonical transformation with a suitable generating function $S(q,E,t)$ one can transform the "old" canonical variables $(q,p)$, (where $q$ is the position and $p$ is the momentum), to a set of new variables $(q\,',p\,')$, such that the new canonical momentum equals the energy, $p\,'=E$, and the new generalized coordinate $q\,'$ conjugate to $p\,'=E$ has the dimension of time; it is denoted by $\cal T$ and called {\em tempus}, $q\,'= {\cal T}$, (see \cite{Kobe1}). In general, tempus ${\cal T}= {\cal T}(q,E,t)$ is a function of the old generalized coordinate $q$, the new canonical momentum $p\,'=E$ and the time of evolution $t$. It is not unique, because an arbitrary function of the energy can be added to tempus \cite{Kobe2,Kobe3}. The next step performed in \cite{Kobe2,Kobe3} is to use the property that the energy and the tempus are canonically conjugate variables and must satisfy the same Poisson bracket as the "old" canonical variables $q$ and $p$: $\{E,{\cal T} \}\equiv \{q,p\} = 1$.
This property is the basis in \cite{Kobe2,Kobe3} for the conclusion that, replacing the tempus $\cal T$ and the energy $E$ by hermitian operators $\hat{{\cal T}}$ and $\hat{ E}$, one can replace the Poisson bracket $\{E,{\cal T} \}$ by a commutator $[{\hat{E}},\hat{{\cal T}}]$ to obtain for $\hat{E}, \hat{\cal T}$ the same commutation relation $[\hat{E},\hat{ {\cal T} }]= i \hbar \mathbb{I}$ as for the pair of position and momentum operators $q$ and $p$:\;$[q,p]= i\hbar \mathbb{I}$. The commutation relation $[\hat{E},\hat{ {\cal T} }]= i \hbar \mathbb{I}$ obtained in this way is used in \cite{Kobe2} to derive the inequality $\Delta E \cdot \Delta {\cal T} \geq \frac{\hbar}{2}$. At first glance everything looks good, but a more detailed analysis shows a lot of inconsistencies in the approach used to derive this "uncertainty" relation. The main problems are the following: The first unclear problem is that $\cal T$ is not unique, therefore the tempus operator $\hat{{\cal T}}$ is also not unique, and as a result the standard deviation $\Delta {\cal T}$ need not be unique. The second obscureness is connected with the dependence of the tempus $\cal T$ (and thus of the operator $\hat{{\cal T}}$ too) on $q,E$ and the time $t$. Therefore $\Delta {\cal T}$ should also depend on these parameters: $\Delta {\cal T} = \Delta {\cal T}(q,E,t)$, which makes the interpretation of the inequality $\Delta E \cdot \Delta {\cal T} \geq \frac{\hbar}{2}$ unclear. The biggest problem is the obscureness associated with the postulated commutation relation $[\hat{E},\hat{ {\cal T} }]= i \hbar \mathbb{I}$: There is no proof in the papers cited in \cite{Kobe2,Kobe3}, nor in the references one can find therein, that the quantum theory resulting from this commutation relation is unitarily equivalent to the quantum theory resulting from the postulate that $[q,p]= i\hbar \mathbb{I}$.
In order to prove that these quantum theories are equivalent one should prove that there exists such a unitary operator, say $R$, that $RpR^{-1} = p\,'\equiv \hat{E}$ and $RqR^{-1} = q\,'\equiv \hat{{\cal T}}(q,E,t)$. (In fact it is sufficient that $R$ is invertible). Without the mentioned proof the interpretation of the relation $\Delta E \cdot \Delta {\cal T} \geq \frac{\hbar}{2}$ is unclear. In addition to the above reservations, one should remember the conclusion resulting from Pauli's theorem: if $\hat{E}$ is bounded from below, then $\hat{{\cal T}}$ cannot be self--adjoint. The MT uncertainty relation (\ref{M5}) is also not free of controversies. Researchers applying and using the above described derivation of (\ref{M5}) in their discussions of the time--energy uncertainty relation made use (consciously or not) of the presumption that the right hand sides of Eqs. (\ref{M1}), (\ref{M3}) are non--zero, that is that there does not exist any vector $|\phi\rangle \in {\cal H}$ such that $\langle[A,H]\rangle_{\phi} = 0$, or $d/dt\langle A\rangle_{\phi} =0$. Although in the original paper of Mandelstam and Tamm \cite{M-T} there is a reservation that for the validity of a formula of the type (\ref{M5}) it is necessary that $\Delta_{\phi} H \neq 0$ (see also, e.g., \cite{Gray,Aharonov}), there are no analogous reservations in \cite{M} and in many other papers. Basic principles of mathematics require that, before dividing both sides of Eq. (\ref{M3}) by $\left| \frac{d}{dt} \langle A \rangle_{\phi} \right|$, one should check whether $ \frac{d}{dt} \langle A \rangle_{\phi} $ is different from zero or not. Let us do this now: Let $ \Sigma_{H} \subset {\cal H}$ be a set of eigenvectors $ |\phi_{\beta}\rangle $ of $H$ for the eigenvalues $E_{\beta}$.
Then, as has been shown in the general case in the previous Section, there is $H|\phi_{\beta}\rangle = E_{\beta}|\phi_{\beta}\rangle$ for all $|\phi_{\beta}\rangle \in \Sigma_{H}$ and therefore for all $|\phi_{\beta}\rangle \in \Sigma_{H} \bigcap {\cal D}(A)$ (see (\ref{M2})), \begin{equation} \langle [A,H] \rangle_{\phi_{\beta}} = i\hbar \frac{d}{dt} \langle A \rangle_{\phi_{\beta}} \equiv 0. \label{U1} \end{equation} Similarly, \begin{equation} \Delta_{\phi_{\beta}} H = \sqrt{\langle H^{2}\rangle_{\phi_{\beta}} - (\langle H\rangle_{\phi_{\beta}})^{2}} \stackrel{\rm def}{=} \Delta_{\phi_{\beta}} E \equiv 0, \label{U2} \end{equation} for all $|\phi_{\beta}\rangle \in \Sigma_{H}$. This means that in all such cases the non--strict inequality (\ref{M3}) takes the form of the following equality \begin{equation} \Delta_{\phi} A \cdot 0 \;= \;\frac{\hbar }{2} \cdot 0, \label{U3} \end{equation} which is a particular case of the general result (\ref{dAdB=0}) obtained in the previous Section. In other words, one cannot divide both sides of the inequality (\ref{M3}) by $ \left|\frac{d}{dt} \langle A \rangle_{\phi_{\beta}}\right| \equiv 0 $ for all $|\phi_{\beta}\rangle \in \Sigma_{H}$, because in all such cases the result is an undefined number and such mathematical operations are unacceptable. It should be noted that although the authors of the publications \cite{M,Gray} knew that the property (\ref{U1}) occurs for the vectors from the set $\Sigma_{H}$, it did not prevent them from dividing both sides of the inequality (\ref{M3}) by $\left| \frac{d}{dt} \langle A \rangle_{\phi} \right|$, that is by $\left| \frac{d}{dt} \langle A \rangle_{\phi} \right| \equiv 0$, without taking into account (\ref{U2}) and without any explanations. What is more, this shows that there is no reason to think of $\tau_{A}$ as infinity in this case, as it was done, e.g., in \cite{M,Gray}. For example, in \cite{M} one can read at the end of \S 13, Chap.
VIII: {\em "If, in particular, the system is in a stationary state, $\frac{d}{dt} \langle A \rangle_{\phi} = 0$, no matter what $ A$, and consequently $\tau_{A}$ is infinite; however, $\Delta E_{\phi} = 0$, in conformity with relation (\ref{M5})"}. Note that this is exactly the case described by (\ref{U3}). A similar point of view can be met, e.g., in \cite{Gray}. Our definition (\ref{tau}) of $\tau_{A}$ corresponds to formula (11) in \cite{Gray} (our $\phi$ is replaced by $\psi$ in \cite{Gray}) and directly after this formula one reads: {\em "If $\psi$ is an eigenvector of $H$, then the denominator in (11) is always zero, thus no observable varies in time. Thinking of $\tau_{A}$ as infinity in this case makes sense."} Again, this is exactly the case described by our relation (\ref{U3}). Statements analogous to those cited can be found in many other papers and books. Summing up, the interpretation of the case $ \frac{d}{dt} \langle A \rangle_{\phi_{\beta}} \equiv 0$ as $\tau_{A} = \infty$ for $|\phi_{\beta}\rangle \in \Sigma_{H}$ (or $|\phi_{\alpha}\rangle \in \Sigma_{A}$) is not a mathematical consequence of the relation (\ref{M3}) and of the derivation of the inequalities (\ref{M4}) and (\ref{M4a}). It is because if $ i\hbar \frac{d}{dt} \langle A \rangle_{\phi_{\beta}} \equiv \langle[A,H]\rangle_{\phi_{\beta}} = 0$ then always $\Delta_{\phi_{\beta}} H = \Delta_{\phi_{\beta}} E =0$, which means that in this case the left hand sides of (\ref{M4}) and (\ref{M4a}) become indefinite numbers. So, the statement that $\tau_{A} = \infty$ when $ \frac{d}{dt} \langle A \rangle_{\phi_{\beta}} = 0$ seems to be rather a heuristic statement arbitrarily introduced by hand. It should be noted that none of the authors using this interpretation of $\tau_{A}$ evaluated the number of vectors for which the condition (\ref{U1}) occurs or the size of the set of such vectors.
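The zero--by--zero structure described by Eqs. (\ref{U1})--(\ref{U3}) can be illustrated with a small numerical sketch (a toy model only: random finite--dimensional Hermitian matrices stand in for $A$ and $H$, and $\hbar = 1$). For an eigenvector of $H$ both $\Delta_{\phi_{\beta}} H$ and $\langle [A,H]\rangle_{\phi_{\beta}}$ vanish, while $\Delta_{\phi_{\beta}} A$ does not:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_hermitian(n):
    m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (m + m.conj().T) / 2

n = 5
A, H = random_hermitian(n), random_hermitian(n)

# |phi_beta>: an eigenvector of H, i.e. a stationary state
_, vecs = np.linalg.eigh(H)
phi = vecs[:, 0]

mean_H = np.vdot(phi, H @ phi)
delta_H = np.linalg.norm(H @ phi - mean_H * phi)   # Delta_{phi_beta} H
comm_avg = np.vdot(phi, (A @ H - H @ A) @ phi)     # <[A,H]>_{phi_beta}
mean_A = np.vdot(phi, A @ phi)
delta_A = np.linalg.norm(A @ phi - mean_A * phi)   # Delta_{phi_beta} A

assert delta_H < 1e-10        # Eq. (U2): Delta E = 0
assert abs(comm_avg) < 1e-10  # Eq. (U1): d<A>/dt = 0
assert delta_A > 1e-3         # yet Delta A != 0, so (M3) reads Delta A * 0 >= 0
```

Both sides of (\ref{M3}) vanish identically here, so forming the quotient (\ref{tau}) is indeed an operation of the form $0/0$.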
In general, the problem is that usually the set $\Sigma_{H}$ of the eigenvectors of the Hamiltonian $H$ is a linearly dense (complete) set in the state space ${\cal H}$. Hence the conclusion that such relations as (\ref{M4a}) and then (\ref{M5}) are correct only for some specific states $|\phi\rangle$ and observables $A$, and for others need not be correct, seems to be valid and justified. The following analysis confirms this conclusion. As has been shown, $\tau_{A}$ cannot be defined correctly for eigenvectors $|\phi_{\beta}\rangle $ of $H$, although $[A, H] \neq 0$. Let us consider now vectors close to the eigenvectors $|\phi_{\beta}\rangle$. Defining \begin{equation} |\psi_{\eta} \rangle = N_{\eta}\left(|\phi_{\beta}\rangle + \eta |\psi\rangle \right), \label{psi-eta} \end{equation} (where $\eta$ is a real number, $N_{\eta}$ is the normalization constant: $\langle \psi_{\eta}|\psi_{\eta}\rangle = 1$, $|\psi\rangle \neq |\phi_{\beta}\rangle$, $\langle \phi_{\beta}|\phi_{\beta}\rangle = \langle \psi|\psi\rangle = 1$, and $|\psi\rangle$ is not an eigenvector for $H$ and for $A$), one can see that the distance $ d(\phi_{\beta},\psi_{\eta}) =\| |\phi_{\beta}\rangle - |\psi_{\eta}\rangle \| $ tends to zero when $\eta \to 0$. It is because $N_{\eta} \to 1$ when $\eta \to 0$. So, for $\eta \to 0$ the vector $|\psi_{\eta}\rangle$ gets closer and closer to the vector $|\phi_{\beta}\rangle$. One has $H|\phi_{\beta}\rangle = E_{\beta}|\phi_{\beta}\rangle$, $\Delta_{\psi_{\eta}}A = \|\delta A |\psi_{\eta}\rangle \| \neq 0$, $\Delta_{\psi_{\eta}}E \equiv \Delta_{\psi_{\eta}} H = \|\delta H| \psi_{\eta}\rangle \| \neq 0$ and $\langle [A,H] \rangle_{\psi_{\eta}} = i\hbar \frac{d}{dt} \langle A \rangle_{\psi_{\eta}} \neq 0$ for $\eta \neq 0$, where $\delta A |\psi_{\eta}\rangle = (A - \langle A \rangle_{\psi_{\eta}})|\psi_{\eta}\rangle$ ($\delta H|\psi_{\eta}\rangle $ is calculated analogously).
This means that the left hand sides of the inequalities (\ref{M4}), (\ref{M4a}) are well defined, and thus $\tau_{A}^{\eta}$ calculated for $|\psi_{\eta}\rangle$ using the definition (\ref{tau}) is well defined too and finite, which means that in this case the relation (\ref{M5}) is correct. Then we can observe that $\lim_{\eta \to 0} \Delta_{\psi_{\eta}}A \neq 0$, but $\lim_{\eta \to 0} \Delta_{\psi_{\eta}}H = 0$ and $\lim_{\eta \to 0} \langle [A,H] \rangle_{\psi_{\eta}} = i\hbar \, \lim_{\eta \to 0} \frac{d}{dt} \langle A \rangle_{\psi_{\eta}} = 0$, which shows that in the limiting case $\eta \to 0$ the reservations leading to Eq. (\ref{U3}) seem not to be removed. However, a more detailed analysis of Eqs. (\ref{M4a}), (\ref{tau}) shows that \begin{equation} \tau_{A} = \lim_{\eta \to 0} \tau_{A}^{\eta} = \lim_{\eta \to 0}\, \frac{\Delta_{\psi_{\eta}} A }{\left| \frac{d}{dt} \langle A \rangle_{\psi_{\eta}} \right|} = \lim_{\eta \to 0}\, \hbar \frac{\Delta_{\psi_{\eta}} A }{\left| \langle [A,H] \rangle_{\psi_{\eta}} \right|} = \infty. \label{eta1} \end{equation} Let us consider now the case of $|\phi\rangle = |\phi_{\alpha}\rangle$ being an eigenvector for $A$. (This case was also noticed in \cite{Gray}). Then also for any $|\phi_{\alpha}\rangle \in \Sigma_{A}\bigcap{\cal D}(H)$, (where by $\Sigma_{A}$ we denote the set of eigenvectors $|\phi_{\alpha}\rangle$ for $A$), $\left| \frac{d}{dt} \langle A \rangle_{\phi_{\alpha}} \right| \equiv 0$ and $\Delta_{\phi_{\alpha}}A \equiv 0$. Thus, instead of (\ref{U3}), one once more has $\;\;0\cdot \Delta_{\phi}H = \frac{\hbar}{2}\cdot 0$, and once again dividing both sides of this equality by zero has no mathematical sense. Now note that the relations (\ref{H1}), (\ref{R1}) are always satisfied for all $|\phi\rangle \in {\cal H}$ fulfilling the conditions specified before Eq.
(\ref{R1}), and in contrast to this property, we have proved that the MT relation (\ref{M4a}) may not be true not only on the set $\Sigma_{H} \subset {\cal H}$, whose span is usually dense in ${\cal H}$, but also on the set $\Sigma_{A} \subset {\cal H}$. So the conclusion that for eigenvectors $|\phi_{\alpha}\rangle \in \Sigma_{A} \subset {\cal H}$ the uncertainty relation (\ref{M4a}) may not be true seems to be justified. In this context the following question arises: What happens if one considers vectors close to these eigenvectors? A possible answer to this question can be found by performing an analysis similar to that leading to the result (\ref{eta1}). To this end one can use vectors similar to those defined in Eq. (\ref{psi-eta}): \begin{equation} |\psi_{\lambda} \rangle = N_{\lambda}\left(|\phi_{\alpha}\rangle + \lambda |\psi\rangle \right), \label{psi-ka} \end{equation} (where $\lambda$ is a real number, $N_{\lambda}$ is the normalization constant, $|\psi\rangle \neq |\phi_{\alpha}\rangle$, and $|\psi\rangle$ is not an eigenvector for $A$ and for $H$, $A |\phi_{\alpha}\rangle = a_{\alpha}|\phi_{\alpha}\rangle$). The distance $ d(\phi_{\alpha},\psi_{\lambda}) =\| |\phi_{\alpha}\rangle - |\psi_{\lambda}\rangle \| $ tends to zero when $\lambda \to 0$. This means that for $\lambda \to 0$ the vector $|\psi_{\lambda}\rangle$ gets closer and closer to the vector $|\phi_{\alpha}\rangle$. Here the situation is similar to the one leading to the result (\ref{eta1}): We have $\Delta_{\psi_{\lambda}}A = \|\delta A |\psi_{\lambda}\rangle \| \neq 0$, $\Delta_{\psi_{\lambda}}H = \|\delta H| \psi_{\lambda}\rangle \| \neq 0$ and $\langle [A,H] \rangle_{\psi_{\lambda}} = i\hbar \frac{d}{dt} \langle A \rangle_{\psi_{\lambda}} \neq 0$ for $\lambda \neq 0$, where $\delta A |\psi_{\lambda}\rangle = (A - \langle A \rangle_{\psi_{\lambda}})|\psi_{\lambda}\rangle$ and so on.
So also in this case $\tau_{A} \equiv \tau_{A}^{\lambda}$ calculated for $|\psi_{\lambda}\rangle$ using the definition (\ref{tau}) is well defined and finite, and therefore the relation (\ref{M5}) is correct. In the limiting case $ \lambda \to 0$ we have: $\lim_{\lambda \to 0} \Delta_{\psi_{\lambda}}A = 0$, $\lim_{\lambda \to 0} \Delta_{\psi_{\lambda}}H \neq 0$ and $\lim_{\lambda \to 0} \langle [A,H] \rangle_{\psi_{\lambda}} = i\hbar \lim_{\lambda \to 0} \frac{d}{dt} \langle A \rangle_{\psi_{\lambda}} = 0$, which shows that in this case doubts concerning the inequality (\ref{M5}) still hold. What is more, using $\tau_{A}^{\lambda}$, \begin{equation} \tau_{A}^{\lambda} \stackrel{\rm def}{=} \frac{\Delta_{\psi_{\lambda}} A} {\left| \frac{d}{dt} \langle A \rangle_{\psi_{\lambda}} \right| } \equiv \hbar \frac{\Delta_{\psi_{\lambda}} A }{\left| \langle [A,H] \rangle_{\psi_{\lambda}} \right|}, \label{ka1} \end{equation} one finds that, contrary to the case of eigenvectors $|\phi_{\beta}\rangle$ for $H$ considered earlier, the limit $\tau_{A} = \lim_{\lambda \to 0} \tau_{A}^{\lambda}$ is not unique and it depends on the choice of $|\psi\rangle$ in (\ref{psi-ka}). This means that in the case of eigenvectors $|\phi_{\alpha}\rangle$ for $A$ all doubts concerning the definition and interpretation of $\tau_{A}$ remain and the MT relation (\ref{M4a}) does not apply in this case. Let us come back for a moment to the case of eigenvectors for $H$ and the result (\ref{eta1}). Mathematical correctness requires that, looking for the limit $\eta \to 0$ of the left hand side of the inequality (\ref{M4}) calculated for $|\psi_{\eta}\rangle$, one cannot consider only the fraction used in Eq. (\ref{eta1}) but one should calculate the limit $\eta \to 0$ of the full fraction $\left(\|\delta A|\psi_{\eta}\rangle\|\cdot\|\delta H|\psi_{\eta}\rangle\| \right)/\left| \frac{d}{dt} \langle A \rangle_{\psi_{\eta}} \right| $.
It is so because the inequality (\ref{M4}) is the result of dividing both sides of the inequality (\ref{M3}) by $\left|\frac{d}{dt} \langle A \rangle_{\psi_{\eta}} \right|$ and, contrary to the relation (\ref{M4a}), is mathematically correct. (There are no mathematical reasons to write (\ref{M4}) in the form (\ref{M4a}): It is an arbitrary choice of many authors studying this problem). The result of this limit is similar to that obtained for the limit of $\tau_{A}^{\lambda}$ and it is non--unique: \begin{equation} \lim_{\eta \to 0}\, \frac{\|\delta A|\psi_{\eta}\rangle\|^{2}\;\|\delta H|\psi_{\eta}\rangle\|^{2}}{\left| \frac{d}{dt} \langle A \rangle_{\psi_{\eta}} \right|^{2}} = \lim_{\eta \to 0}\, \hbar^{2} \frac{\|\delta A|\psi_{\eta}\rangle\|^{2}\;\|\delta H|\psi_{\eta}\rangle\|^{2} }{\left| \langle [A,H] \rangle_{\psi_{\eta}} \right|^{2}} = c_{\psi}^{2}, \label{c-psi} \end{equation} where the value of $c_{\psi}$ depends on the choice of $|\psi\rangle$ in (\ref{psi-eta}). Taking this into account one should be careful when interpreting the result (\ref{eta1}). Namely, the result (\ref{eta1}) suggests that if one considers $\tau_{A}$ defined by Eq. (\ref{tau}) as a separate, independent quantity, then in the limiting case $|\psi_{\eta} \rangle \underset{\eta \to 0}{\rightarrow} |\phi_{\beta}\rangle$ the fraction (\ref{tau}) defining $\tau_{A}$ leads to the acceptable result $\tau_{A} = \lim_{\eta \to 0} \tau_{A}^{\eta}= \infty$, and thus the inequality (\ref{M5}) with $\tau_{A}$ given by Eq. (\ref{eta1}) and $\lim_{\eta \to 0} \Delta_{\psi_{\eta}}H = 0 \equiv \Delta E_{\phi_{\beta}} = 0$ gives the impression of being correct. On the other hand, there is no unique limit of the left hand side of the inequality (\ref{M4}) when it is calculated for vectors of the type (\ref{psi-eta}). So, although the result (\ref{eta1}) seems to be acceptable, the result (\ref{c-psi}) suggests that the assumption of its correctness may be wrong.
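The two limiting behaviors discussed above can be illustrated numerically (a sketch only, with random finite--dimensional Hermitian matrices standing in for $A$ and $H$, and $\hbar = 1$): for perturbed eigenvectors of $H$ the quotient (\ref{tau}) grows without bound as $\eta \to 0$, while for perturbed eigenvectors of $A$ the small--$\lambda$ value of $\tau_{A}^{\lambda}$ depends on the chosen perturbation $|\psi\rangle$:

```python
import numpy as np

rng = np.random.default_rng(2)

def random_hermitian(n):
    m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (m + m.conj().T) / 2

def rand_state(n):
    v = rng.normal(size=n) + 1j * rng.normal(size=n)
    return v / np.linalg.norm(v)

n = 5
A, H = random_hermitian(n), random_hermitian(n)
C = A @ H - H @ A

def tau(state):
    # tau_A = Delta A / |d<A>/dt| = hbar * Delta A / |<[A,H]>|, hbar = 1
    state = state / np.linalg.norm(state)
    mean_A = np.vdot(state, A @ state)
    dA = np.linalg.norm(A @ state - mean_A * state)
    return dA / abs(np.vdot(state, C @ state))

# Case 1: |psi_eta> built on an eigenvector |phi_beta> of H, cf. Eq. (psi-eta)
phi_beta = np.linalg.eigh(H)[1][:, 0]
psi = rand_state(n)
taus = [tau(phi_beta + eta * psi) for eta in (1e-1, 1e-2, 1e-3)]
assert taus[0] < taus[1] < taus[2]        # tau_A^eta -> infinity as eta -> 0

# Case 2: |psi_lambda> built on an eigenvector |phi_alpha> of A, cf. Eq. (psi-ka)
phi_alpha = np.linalg.eigh(A)[1][:, 0]
psi1, psi2 = rand_state(n), rand_state(n)
lam = 1e-4
t1, t2 = tau(phi_alpha + lam * psi1), tau(phi_alpha + lam * psi2)
assert abs(t1 - t2) > 1e-2 * max(t1, t2)  # the lambda -> 0 limit depends on |psi>
```

The sketch only probes the generic situation; it does not replace the analytic limits (\ref{eta1}) and (\ref{ka1}).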
From this analysis one can conclude that even if for eigenvectors $|\phi_{\beta}\rangle $ of $H$ doubts concerning the inequality (\ref{M5}) and the interpretation of $\tau_{A}$ remain, then for vectors infinitely close to the eigenvectors $|\phi_{\beta}\rangle$ of $H$ everything is correct. The problem is that in the literature and in applications no one uses the inequality (\ref{M5}) with $\tau_{A}$ calculated for vectors close (or infinitely close) to eigenvectors $|\phi_{\beta}\rangle$ of $H$, and no one considers the inequality (\ref{M5}) and the case $\tau_{A} = \infty$ as the limiting case of the type described above. Instead, $\tau_{A}$ is considered as a quantity defined separately and independently of Eqs. (\ref{M3}), (\ref{M4}). In such a situation the natural and acceptable conclusion is that for eigenvectors $|\phi_{\beta}\rangle$ of $H$ one has $\tau_{A} = \infty$. Unfortunately, from the mathematical point of view such thinking is wrong. The definition (\ref{tau}) of $\tau_{A}$ is strictly connected with the relations (\ref{M3}), (\ref{M4}): The inequality (\ref{M4}) is the result of dividing both sides of (\ref{M3}) by $\left|\frac{d}{dt} \langle A \rangle_{\phi} \right|$, which equals zero for eigenvectors of $A$ or $H$. In a general case, when the relation (\ref{M5}) is used, one usually calculates the standard deviations $\Delta_{\psi}A, \Delta_{\psi} H = \Delta_{\psi}E$ and $\tau_{A}$ for a given vector $|\psi\rangle$ but not for vectors close to it. The same was done in \cite{M,M-T}. In fact, problems with the MT uncertainty relation (\ref{M5}) for eigenvectors $|\phi_{\alpha}\rangle \in \Sigma_{A}$ and $|\phi_{\beta}\rangle \in \Sigma_{H}$ are consequences of the result (\ref{dAdB=0}): In such a case the inequalities (\ref{M1}), (\ref{M3}) take the form of (\ref{dAdB=0}), where there is no room for defining such quantities as $\tau_{A}$.
So, the intuitive thinking of $\tau_{A}$ as infinity in the case of stationary states is not based on conclusions resulting from the rigorous derivation of the relations (\ref{M4}), (\ref{M5}). Such an interpretation of $\tau_{A}$ makes sense only when one considers $\tau_{A}$ as an independently defined quantity being a limit of sequences of vectors approaching stationary states. At this point it should be emphasized that in this paper we discuss results obtained within the approach based on the ideas presented in \cite{Robertson,Schrod-1930,M,Teschl} and others, where the standard deviations are calculated for pure quantum states, that is for given state vectors, but not for sets or sequences of vectors, and therefore the interpretation of $\tau_{A}$ in (\ref{M5}) as the limiting case of the type (\ref{eta1}) does not fall within this approach. \section{Discussion} As was mentioned, uncertainty relations calculated for a pure state corresponding to an eigenvalue from the discrete spectra of the considered non--commuting observables were analyzed in the previous Sections. (Mixed states were not considered). With such assumptions there is no room to consider sets (or sequences) of vectors tending to the given state. It seems that the use of mixed states and the density matrix $\rho$ describing a state of the system, together with an appropriate uncertainty relation written for the density matrix (see, e.g., \cite{M-T,Dodonov}), can help in finding a solution to the problem of the limits of sequences of states tending to a given eigenvector, or at least to work around this problem. Referring to the density matrix and the uncertainty relation written using the density matrix, it should be noted however that the problem of dividing "zero" by "zero" was ignored in many papers where the time--energy uncertainty relation for mixed states was considered. One typical situation of this type can be met, e.g.
in \cite{Beretta}: For example, analyzing formula (41) in \cite{Beretta} we can see that, assuming that $\rho$ is formed from the eigenvectors of the considered operator $F$, we encounter the problem of dividing zero by zero described in Sec. 3. It is easy to see if one chooses $\rho = |\phi_{F}\rangle \langle \phi_{F}|$, where $|\phi_{F}\rangle$ is a normalized eigenvector for $F$. Nevertheless, acting analogously to what is described in the previous Section, we can use the matrix $\rho$ built from the normalized vectors of the type (\ref{psi-ka}): $|\psi_{F,\,\epsilon}\rangle = N_{\epsilon}(|\phi_{F}\rangle + \epsilon |\psi\rangle)$, and then we can see that the operator defined by Eq. (41) has a unit norm for $\epsilon \neq 0$ and in the limit $\epsilon \to 0$ this norm will also be a unit norm in the topology used in \cite{Beretta}. Note also that the use of the Schr\"{o}dinger uncertainty relation (\ref{Sch-2}), (\ref{Sch-3}) instead of the Robertson--Messiah relation (\ref{R1}) does not remove problems with the interpretation or derivation of the relation (\ref{M5}) and of the characteristic time $\tau_{A} = \infty$ for eigenvectors of $A$ and $H$. It is because for these eigenvectors the relations (\ref{Sch-2}), (\ref{Sch-3}) also take the form of the inequality (\ref{dAdB=0}). A detailed analysis of the relation (\ref{H2}) suggests that it may be in conflict with one of the basic postulates of Quantum Mechanics: Namely, with the projection (reduction) postulate. It is because the projection postulate leads to the Quantum Zeno Effect \cite{misra} (QZE), (see also, e.g., \cite{bh,ku,bh1,kk,pf}), that is, it makes it possible to force the system to stay in a given state as a result of continuous or quasi--continuous observations verifying whether the system is in this given state.
It is possible if the successive measurements (observations) are separated by suitably short time intervals $\Delta t$ such that $\Delta t \to 0$ when the number of observations increases \cite{bh,ku,kk,pf}. In general the duration of each of these measurements must be shorter than the time interval separating them, and in turn, the uncertainty of the time $t$ cannot be larger than the duration of these measurements. Therefore the conclusion that the relation (\ref{H2}) should make it impossible to observe the QZE seems to be legitimate. Contrary to such a conclusion, there are experimental tests verifying and confirming this effect \cite{WMI}. The state of the system is characterized by a set of quantum numbers and one of these numbers is the energy of the system in the state considered. Therefore if the quantum system is forced to stay in the given state by continuously or quasi--continuously checking whether it is in this state, then the quantum numbers characterizing this state (including the energy) also remain unchanged. This means that there is $\Delta E =0$ and $\Delta t \to 0$ in such a case and thus there is a conflict with the relation (\ref{H2}). The above conclusion can be made more reliable by analyzing the conditions guaranteeing the occurrence of the QZE. So, let us assume that $|\psi\rangle$ is the state of the system at the initial instant of time $t_{0}$ and let us analyze the probability ${\cal P}_{\psi}(t_{n}, \ldots ,t_{1},t_{0})$ of finding the system in the given initial state $|\psi\rangle$ in each of the measurements performed at instants $ t_{1} < t_{2}< \ldots < t_{n}$, ($t_{1} > t_{0}$), which was derived using the projection postulate (see, e.g., \cite{ku,bh1}), \begin{equation} {\cal P}_{\psi}(t_{n}, \ldots ,t_{1},t_{0}) = \prod_{k=1}^{n}\,|a_{\psi}(t_{k} - t_{k-1})|^{2}, \label{P-n} \end{equation} where \begin{equation} a_{\psi}(t) = \langle \psi|e^{\textstyle{-i \frac{t}{\hbar}H}}|\psi\rangle.
\label{a-psi} \end{equation} Now, if one assumes for simplicity that $\Delta t = t_{k} - t_{k-1}$ for all $k=1,2, \ldots, n$, (i.e., that all measurements are separated by equal time intervals), then Eq. (\ref{P-n}) takes the following form: \begin{equation} {\cal P}_{\psi}(t_{n}, \ldots ,t_{1},t_{0})= |a_{\psi}(\Delta t)|^{2n}. \label{P-n-Delta} \end{equation} The QZE begins to occur when $\Delta t$ is sufficiently small, that is when $\Delta t \to 0$. So, we need the form of $|a_{\psi}(\Delta t)|^{2}$ for $\Delta t \to 0$. The analysis of Eq. (\ref{a-psi}) shows that (see \cite{ku}) \begin{equation} |a_{\psi}(\Delta t)|^{2} \simeq 1 - \left(\frac{\Delta t}{\hbar}\right)^{2} ( \Delta_{\psi}H)^{2} + \ldots,\;\;{\rm for}\;\; \Delta t \to 0. \label{a-psi-0} \end{equation} This approximate expression correctly describes the short time properties of the square of the modulus of the amplitude $a_{\psi}(\Delta t)$ only if \begin{equation} \left(\frac{\Delta t}{\hbar}\right)^{2} ( \Delta_{\psi}H)^{2} \;\ll \;1. \label{zeno} \end{equation} Inserting the result (\ref{a-psi-0}) into Eq. (\ref{P-n-Delta}) and taking $\Delta t = \frac{t}{n}$, where $t \equiv t_{n}$, one easily finds that \begin{equation} {\cal P}_{\psi}(t, t_{n-1}, \ldots ,t_{1},t_{0})= \left| a_{\psi}(t/n) \right|^{2n} = \left[1 - \left(\frac{t}{n \hbar}\right)^{2} ( \Delta_{\psi}H)^{2}\right]^{n}\;\; \underset{n \to \, \infty}{\longrightarrow} \;\; 1, \label{zeno1} \end{equation} which is the Quantum Zeno Effect. Note that the result (\ref{zeno1}) takes place only if the condition (\ref{zeno}) holds. The experimental confirmation of the QZE \cite{WMI} is the proof that the analysis leading to the result (\ref{zeno1}) is correct. What is more, it proves that the assumptions (including (\ref{zeno})) guaranteeing the occurrence of this effect are correct, and thus in order to observe the QZE the condition (\ref{zeno}) must be fulfilled.
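The limit (\ref{zeno1}) can be checked directly by evaluating the exact survival probability (\ref{P-n-Delta}) for a small model system (a toy sketch only: a random $4\times 4$ Hermitian matrix stands in for $H$, $\hbar = 1$):

```python
import numpy as np

rng = np.random.default_rng(3)
m = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (m + m.conj().T) / 2
evals, U = np.linalg.eigh(H)

psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)
c = U.conj().T @ psi              # expansion of |psi> in the eigenbasis of H

def survival(t, n):
    # P = |<psi| exp(-i (t/n) H) |psi>|^(2n), i.e. Eq. (P-n-Delta) with hbar = 1
    amp = np.sum(np.abs(c) ** 2 * np.exp(-1j * (t / n) * evals))
    return np.abs(amp) ** (2 * n)

t = 1.0
probs = [survival(t, n) for n in (10, 100, 1000)]
assert probs[0] < probs[1] < probs[2] < 1.0   # more frequent checks freeze the state
assert probs[2] > 0.95
```

As the number $n$ of equally spaced measurements grows, $\Delta t = t/n$ enters the regime (\ref{zeno}) and the survival probability approaches unity, in agreement with (\ref{zeno1}).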
As one can see, the condition (\ref{zeno}) is in direct contradiction to the Heisenberg time--energy uncertainty relation (\ref{H2}): The condition $1 > (\Delta t/\hbar)^{2}\,(\Delta_{\psi}H)^{2} \geq\frac{1}{4}$ is not sufficient for the QZE to occur. Therefore the conclusion formulated earlier in this Section, that the experimental confirmation of the QZE can be considered as the proof that the uncertainty relation (\ref{H2}) can be in conflict with the projection postulate, seems to be justified. As was mentioned earlier, there is a reservation in \cite{M-T} that the derivation of (\ref{M5}) does not hold for eigenvectors of $H$ (then $\Delta H = 0$). In fact it can only be applied to eigenvectors corresponding to the continuous part of the spectrum of $H$. As an example of possible applications of the relation (\ref{M5}), unstable states modeled by wave--packets of such eigenvectors of $H$ are considered in \cite{M-T}, where using (\ref{M5}) the relation connecting the half--life $\tau_{1/2}$ of the unstable state, say $|\varphi\rangle$, with the uncertainty $\Delta_{\varphi} H$ was found: $\tau_{1/2}\,\cdot\, \Delta_{\varphi} H \geq \frac{\pi}{4}\,h$. In general, when one considers unstable states, such a relation and similar ones appear naturally \cite{fock,kb,boy,grab}, but this is quite another situation than that described by the relations (\ref{H1}), (\ref{R1}). Another example is the relation between the lifetime $\tau_{\varphi}$ of the system in the unstable state $|\varphi \rangle$ and the decay width $\Gamma_{\varphi}$: In such cases we have $\tau_{\varphi} \cdot \Gamma_{\varphi} = \hbar$, but there are no uncertainties of the type $\Delta E$ and $\Delta t$ in this relation (see, e.g., \cite{fock}). Note that in all such cases the vector $|\varphi\rangle$ representing the unstable state cannot be an eigenvector of the Hamiltonian $H$.
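The relation $\tau_{\varphi} \cdot \Gamma_{\varphi} = \hbar$ can be illustrated numerically: for a pure Breit--Wigner (Lorentzian) energy density with no lower threshold, the survival probability is exactly exponential, $|a(t)|^{2} = e^{-{\it\Gamma}_{0} t/\hbar}$, so the lifetime equals $\hbar/{\it\Gamma}_{0}$. A sketch with illustrative values $E_{0}=1$, ${\it\Gamma}_{0}=0.1$ and $\hbar = 1$ (the parameters are assumptions, not taken from the cited works):

```python
import numpy as np

E0, Gamma0 = 1.0, 0.1
# wide, finely sampled energy grid around the resonance (no threshold)
E = np.linspace(E0 - 400 * Gamma0, E0 + 400 * Gamma0, 400_001)
omega = (Gamma0 / (2 * np.pi)) / ((E - E0) ** 2 + (Gamma0 / 2) ** 2)
dE = E[1] - E[0]

def survival(t):
    # |a(t)|^2 with a(t) = Integral omega(E) exp(-i E t) dE  (hbar = 1)
    return abs(np.sum(omega * np.exp(-1j * E * t)) * dE) ** 2

tau = 1.0 / Gamma0                     # expected lifetime hbar / Gamma0
assert abs(survival(tau) - np.exp(-1.0)) < 1e-2
assert abs(survival(2 * tau) - np.exp(-2.0)) < 1e-2
```

With a lower energy threshold $E_{min}$ restored, the decay law is no longer exactly exponential; the sketch only illustrates the idealized lifetime--width relation quoted above.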
It should be noted here that even in the case of unstable states one should be very careful when using the relation (\ref{M5}): For example, in the case of unstable states $|\phi\rangle$ modeled by the Breit--Wigner energy density distribution $\omega_{BW}(E) = \frac{N}{2\pi}\, {\it\Theta} (E- E_{min}) \, \frac{{\it\Gamma}_{0}}{(E -E_{0})^{2} + (\frac{{\it\Gamma}_{0}}{2})^{2}}$, where ${\it\Theta}(E)$ is the unit step function and $N$ is the normalization constant, the average values $\langle H\rangle_{\phi} = \int_{E_{min}}^{\infty}\,E\,\omega_{BW}(E)\,dE$ and $\langle H^{2}\rangle_{\phi} = \int_{E_{min}}^{\infty}\,E^{2}\,\omega_{BW}(E)\,dE$ do not have definite values, and hence $\Delta_{\phi}H$ is undefined, which means that the relation (\ref{M5}) does not work in this case. \section{Conclusions} In recent years there has been a growing interest in uncertainty principles, and in particular in the time--energy uncertainty principle, due to their importance in quantum optics and quantum thermodynamics. So the results presented in this paper seem to be important not only for the studies of the foundations of the quantum theory but also for looking for solutions of some problems in quantum optics and quantum thermodynamics. Results presented in Sec. 2 allow one to draw the conclusion that there can exist such pairs of non--commuting observables $A$ and $B$ and such vectors that the lower bound of the product of the standard deviations $\Delta A$ and $\Delta B$ calculated for these vectors is zero: $\Delta A\,\cdot\,\Delta B \geq 0$. The other conclusion resulting from the analysis presented in Sec. 2 is that there can also exist such pairs of non--commuting observables $A, B$ and such complete sets of vectors that the only bound for $\Delta A$ and $\Delta B$, (with $0 \leq \Delta A < \infty$, $0 \leq \Delta B < \infty$), calculated for these vectors is $0$. This means that in such cases restrictions resulting from the uncertainty principle (\ref{R1}) can be bypassed.
So, the Schr\"{o}dinger uncertainty relation (\ref{Sch-2}), (\ref{Sch-3}) and the Robertson--Messiah relation (\ref{R1}) derived for non--commuting pairs of observables $A$ and $B$ are not as universally valid as it is usually thought. The discussion of relations (\ref{H2}) and (\ref{M5}) presented in previous Sections and the detailed analysis of the derivation of the relation (\ref{M5}) suggests that these time--energy uncertainty relations are not well founded and using them one cannot consider them as universally valid. Therefore when using these relations as the basis for predictions of the properties and of a behavior of some systems in physics or astrophysics (including cosmology --- see, e.g., \cite{skr,cos}) one should be very careful interpreting and applying results obtained. In general in some problems the use of the relation (\ref{M5}) may be reasonable (see, e.g. the case of unstable states) but then $\tau_{A}$ used in (\ref{M5}) should not be considered analogously to standard deviations appearing in inequalities (\ref{H1}), (\ref{R1}). \section*{Acknowledgments} The author would like to thank Neelima Kelkar, Marek Nowakowski and anonymous Referees for their valuable comments and discussions. This work was supported by the program of the Polish Ministry of Science and Higher Education under the name "Regional Initiative of Excellence" in 2019 --- 2022, Project No. 003/RID/2018/19.\\ \hfill\\ {\bf The author contribution statement:} The author declares that there are no conflicts of interest regarding the publication of this article and that all results presented in this article are the author's own results.